Ambra Demontis
Towards poisoning of deep learning algorithms with back-gradient optimization
L Muñoz-González, B Biggio, A Demontis, A Paudice, V Wongrassamee, ...
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security …, 2017
Yes, machine learning can be more secure! A case study on Android malware detection
A Demontis, M Melis, B Biggio, D Maiorca, D Arp, K Rieck, I Corona, ...
IEEE Transactions on Dependable and Secure Computing, 2017
Adversarial malware binaries: Evading deep learning for malware detection in executables
B Kolosnjaji, A Demontis, B Biggio, D Maiorca, G Giacinto, C Eckert, ...
2018 26th European Signal Processing Conference (EUSIPCO), 533-537, 2018
Secure kernel machines against evasion attacks
P Russu, A Demontis, B Biggio, G Fumera, F Roli
Proceedings of the 2016 ACM Workshop on Artificial Intelligence and Security …, 2016
Is deep learning safe for robot vision? Adversarial examples against the iCub humanoid
M Melis, A Demontis, B Biggio, G Brown, G Fumera, F Roli
Proceedings of the IEEE International Conference on Computer Vision …, 2017
Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks
A Demontis, M Melis, M Pintor, M Jagielski, B Biggio, A Oprea, ...
28th USENIX Security Symposium (USENIX Security 19), 321-338, 2019
On security and sparsity of linear classifiers for adversarial settings
A Demontis, P Russu, B Biggio, G Fumera, F Roli
Joint IAPR International Workshops on Statistical Techniques in Pattern …, 2016
secml: A Python Library for Secure and Explainable Machine Learning
M Melis, A Demontis, M Pintor, A Sotgiu, B Biggio
arXiv preprint arXiv:1912.10013, 2019
Infinity-Norm Support Vector Machines Against Adversarial Label Contamination
A Demontis, B Biggio, G Fumera, G Giacinto, F Roli
ITASEC, 106-115, 2017
Super-sparse regression for fast age estimation from faces at test time
A Demontis, B Biggio, G Fumera, F Roli
International Conference on Image Analysis and Processing, 551-562, 2015
Super-sparse learning in similarity spaces
A Demontis, M Melis, B Biggio, G Fumera, F Roli
IEEE Computational Intelligence Magazine 11 (4), 36-45, 2016
Deep neural rejection against adversarial examples
A Sotgiu, A Demontis, M Melis, B Biggio, G Fumera, X Feng, F Roli
EURASIP Journal on Information Security 2020, 1-10, 2020
Do Gradient-based Explanations Tell Anything About Adversarial Robustness to Android Malware?
M Melis, M Scalas, A Demontis, D Maiorca, B Biggio, G Giacinto, F Roli
arXiv preprint arXiv:2005.01452, 2020
Securing Machine Learning against Adversarial Attacks
A Demontis, F Roli, B Biggio
2016 Index IEEE Computational Intelligence Magazine Vol. 11
H Abbass, N Agell, M Alamgir, J Alcala-Fdez, C Alippi, J Alonso, Y Altun, ...
Notes 1101, 46-58, 2016
Attacking machine learning for fun and profit (with the authors of SecML) (Ep. 80)
M Melis, A Demontis, M Pintor, B Biggio