Ambra Demontis
Assistant Professor at University of Cagliari
Title · Cited by · Year
Towards poisoning of deep learning algorithms with back-gradient optimization
L Muñoz-González, B Biggio, A Demontis, A Paudice, V Wongrassamee, ...
Proceedings of the 10th ACM workshop on artificial intelligence and security …, 2017
Cited by 740 · 2017
Why do adversarial attacks transfer? Explaining transferability of evasion and poisoning attacks
A Demontis, M Melis, M Pintor, M Jagielski, B Biggio, A Oprea, ...
28th USENIX security symposium (USENIX security 19), 321-338, 2019
Cited by 487 · 2019
Adversarial malware binaries: Evading deep learning for malware detection in executables
B Kolosnjaji, A Demontis, B Biggio, D Maiorca, G Giacinto, C Eckert, ...
2018 26th European signal processing conference (EUSIPCO), 533-537, 2018
Cited by 447 · 2018
Yes, machine learning can be more secure! A case study on Android malware detection
A Demontis, M Melis, B Biggio, D Maiorca, D Arp, K Rieck, I Corona, ...
IEEE Transactions on Dependable and Secure Computing, 2017
Cited by 363 · 2017
Is deep learning safe for robot vision? Adversarial examples against the iCub humanoid
M Melis, A Demontis, B Biggio, G Brown, G Fumera, F Roli
Proceedings of the IEEE international conference on computer vision …, 2017
Cited by 127 · 2017
Wild patterns reloaded: A survey of machine learning security against training data poisoning
AE Cinà, K Grosse, A Demontis, S Vascon, W Zellinger, BA Moser, ...
ACM Computing Surveys 55 (13s), 1-39, 2023
Cited by 114 · 2023
The threat of offensive AI to organizations
Y Mirsky, A Demontis, J Kotak, R Shankar, D Gelei, L Yang, X Zhang, ...
Computers & Security 124, 103006, 2023
Cited by 93 · 2023
Secure kernel machines against evasion attacks
P Russu, A Demontis, B Biggio, G Fumera, F Roli
Proceedings of the 2016 ACM workshop on artificial intelligence and security …, 2016
Cited by 79 · 2016
Deep neural rejection against adversarial examples
A Sotgiu, A Demontis, M Melis, B Biggio, G Fumera, X Feng, F Roli
EURASIP Journal on Information Security 2020, 1-10, 2020
Cited by 71 · 2020
secml: Secure and explainable machine learning in Python
M Pintor, L Demetrio, A Sotgiu, M Melis, A Demontis, B Biggio
SoftwareX 18, 101095, 2022
Cited by 63* · 2022
ImageNet-Patch: A dataset for benchmarking machine learning robustness against adversarial patches
M Pintor, D Angioni, A Sotgiu, L Demetrio, A Demontis, B Biggio, F Roli
Pattern Recognition 134, 109064, 2023
Cited by 50 · 2023
On security and sparsity of linear classifiers for adversarial settings
A Demontis, P Russu, B Biggio, G Fumera, F Roli
Structural, Syntactic, and Statistical Pattern Recognition: Joint IAPR …, 2016
Cited by 46 · 2016
Machine learning security against data poisoning: Are we there yet?
AE Cinà, K Grosse, A Demontis, B Biggio, F Roli, M Pelillo
Computer 57 (3), 26-34, 2024
Cited by 39 · 2024
Indicators of attack failure: Debugging and improving optimization of adversarial examples
M Pintor, L Demetrio, A Sotgiu, A Demontis, N Carlini, B Biggio, F Roli
Advances in Neural Information Processing Systems 35, 23063-23076, 2022
Cited by 38 · 2022
Do gradient-based explanations tell anything about adversarial robustness to Android malware?
M Melis, M Scalas, A Demontis, D Maiorca, B Biggio, G Giacinto, F Roli
International journal of machine learning and cybernetics, 1-16, 2022
Cited by 37 · 2022
Adversarial detection of flash malware: Limitations and open issues
D Maiorca, A Demontis, B Biggio, F Roli, G Giacinto
Computers & Security 96, 101901, 2020
Cited by 34 · 2020
Domain knowledge alleviates adversarial attacks in multi-label classifiers
S Melacci, G Ciravegna, A Sotgiu, A Demontis, B Biggio, M Gori, F Roli
IEEE Transactions on Pattern Analysis and Machine Intelligence 44 (12), 9944 …, 2021
Cited by 33* · 2021
Energy-latency attacks via sponge poisoning
AE Cinà, A Demontis, B Biggio, F Roli, M Pelillo
arXiv preprint arXiv:2203.08147, 2022
Cited by 22 · 2022
Why adversarial reprogramming works, when it fails, and how to tell the difference
Y Zheng, X Feng, Z Xia, X Jiang, A Demontis, M Pintor, B Biggio, F Roli
Information Sciences 632, 130-143, 2023
Cited by 21 · 2023
The hammer and the nut: Is bilevel optimization really needed to poison linear classifiers?
AE Cinà, S Vascon, A Demontis, B Biggio, F Roli, M Pelillo
2021 International Joint Conference on Neural Networks (IJCNN), 1-8, 2021
Cited by 18 · 2021
Articles 1–20