Nicholas Carlini
Google Brain
Verified email at google.com - Homepage
Title
Cited by
Year
Towards evaluating the robustness of neural networks
N Carlini, D Wagner
2017 IEEE Symposium on Security and Privacy (SP), 39-57, 2017
2038 · 2017
Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples
A Athalye, N Carlini, D Wagner
ICML 2018, 2018
823 · 2018
Adversarial examples are not easily detected: Bypassing ten detection methods
N Carlini, D Wagner
Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security…, 2017
622 · 2017
Audio adversarial examples: Targeted attacks on speech-to-text
N Carlini, D Wagner
2018 IEEE Security and Privacy Workshops (SPW), 1-7, 2018
329 · 2018
cleverhans v2.0.0: an adversarial machine learning library
N Papernot, N Carlini, I Goodfellow, R Feinman, F Faghri, A Matyasko, ...
arXiv preprint arXiv:1610.00768, 2016
308* · 2016
Hidden Voice Commands.
N Carlini, P Mishra, T Vaidya, Y Zhang, M Sherr, C Shields, D Wagner, ...
USENIX Security Symposium, 513-530, 2016
306 · 2016
ROP is Still Dangerous: Breaking Modern Defenses
N Carlini, D Wagner
23rd USENIX Security Symposium (USENIX Security 14), 385-399, 2014
305 · 2014
Control-flow bending: On the effectiveness of control-flow integrity
N Carlini, A Barresi, M Payer, D Wagner, TR Gross
24th USENIX Security Symposium (USENIX Security 15), 161-176, 2015
300 · 2015
Provably minimally-distorted adversarial examples
N Carlini, G Katz, C Barrett, DL Dill
arXiv preprint arXiv:1709.10207, 2017
176* · 2017
Adversarial example defense: Ensembles of weak defenses are not strong
W He, J Wei, X Chen, N Carlini, D Song
11th USENIX Workshop on Offensive Technologies (WOOT 17), 2017
161 · 2017
Defensive distillation is not robust to adversarial examples
N Carlini, D Wagner
arXiv preprint arXiv:1607.04311, 2016
154 · 2016
The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks
N Carlini, C Liu, J Kos, Ú Erlingsson, D Song
141* · 2019
Mixmatch: A holistic approach to semi-supervised learning
D Berthelot, N Carlini, I Goodfellow, N Papernot, A Oliver, CA Raffel
Advances in Neural Information Processing Systems, 5050-5060, 2019
133 · 2019
On Evaluating Adversarial Robustness
N Carlini, A Athalye, N Papernot, W Brendel, J Rauber, D Tsipras, ...
arXiv preprint arXiv:1902.06705, 2019
107 · 2019
An Evaluation of the Google Chrome Extension Security Architecture.
N Carlini, AP Felt, D Wagner
USENIX Security Symposium, 97-111, 2012
106 · 2012
Magnet and "efficient defenses against adversarial attacks" are not robust to adversarial examples
N Carlini, D Wagner
arXiv preprint arXiv:1711.08478, 2017
101 · 2017
On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses
A Athalye, N Carlini
arXiv preprint arXiv:1804.03286, 2018
73 · 2018
Adversarial Examples Are a Natural Consequence of Test Error in Noise
N Ford, J Gilmer, N Carlini, D Cubuk
arXiv preprint arXiv:1901.10513, 2019
54 · 2019
Imperceptible, Robust, and Targeted Adversarial Examples for Automatic Speech Recognition
Y Qin, N Carlini, I Goodfellow, G Cottrell, C Raffel
arXiv preprint arXiv:1903.10346, 2019
40 · 2019
Unrestricted Adversarial Examples
TB Brown, N Carlini, C Zhang, C Olsson, P Christiano, I Goodfellow
arXiv preprint arXiv:1809.08352, 2018
33 · 2018
Articles 1–20