Marina Marie-Claire Höhne (née Vidovic)
Full Professor at the University of Potsdam, Head of the Data Science department at ATB Potsdam
Verified email at uni-potsdam.de
Title · Cited by · Year
Improving the robustness of myoelectric pattern recognition for upper limb prostheses by covariate shift adaptation
MMC Vidovic, HJ Hwang, S Amsüss, JM Hahne, D Farina, KR Müller
IEEE Transactions on Neural Systems and Rehabilitation Engineering 24 (9 …, 2015
178 · 2015
Quantus: An explainable AI toolkit for responsible evaluation of neural network explanations and beyond
A Hedström, L Weber, D Krakowczyk, D Bareeva, F Motzkus, W Samek, ...
Journal of Machine Learning Research 24 (34), 1-11, 2023
108 · 2023
Feature importance measure for non-linear learning algorithms
MMC Vidovic, N Görnitz, KR Müller, M Kloft
arXiv preprint arXiv:1611.07567, 2016
39 · 2016
This looks more like that: Enhancing self-explaining models by prototypical relevance propagation
S Gautam, MMC Höhne, S Hansen, R Jenssen, M Kampffmeyer
Pattern Recognition 136, 109172, 2023
32 · 2023
Using transfer learning from prior reference knowledge to improve the clustering of single-cell RNA-Seq data
B Mieth, JRF Hockley, N Görnitz, MMC Vidovic, KR Müller, A Gutteridge, ...
Scientific reports 9 (1), 20353, 2019
32 · 2019
DeepCOMBI: explainable artificial intelligence for the analysis and discovery in genome-wide association studies
B Mieth, A Rozier, JA Rodriguez, MMC Höhne, N Görnitz, KR Müller
NAR genomics and bioinformatics 3 (3), lqab065, 2021
29 · 2021
NoiseGrad: Enhancing explanations by introducing stochasticity to model weights
K Bykov, A Hedström, S Nakajima, MMC Höhne
Proceedings of the AAAI Conference on Artificial Intelligence 36 (6), 6132-6140, 2022
26 · 2022
How Much Can I Trust You? Quantifying Uncertainties in Explaining Neural Networks
K Bykov, MMC Höhne, KR Müller, S Nakajima, M Kloft
arXiv preprint arXiv:2006.09000, 2020
26 · 2020
Explaining Bayesian neural networks
K Bykov, MMC Höhne, A Creosteanu, KR Müller, F Klauschen, ...
arXiv preprint arXiv:2108.10346, 2021
23 · 2021
ProtoVAE: A trustworthy self-explainable prototypical variational model
S Gautam, A Boubekki, S Hansen, S Salahuddin, R Jenssen, M Höhne, ...
Advances in Neural Information Processing Systems 35, 17940-17952, 2022
20 · 2022
Covariate shift adaptation in EMG pattern recognition for prosthetic device control
MMC Vidovic, LP Paredes, HJ Hwang, S Amsüss, J Pahl, JM Hahne, ...
2014 36th annual international conference of the IEEE engineering in …, 2014
19 · 2014
Opening the black box: Revealing interpretable sequence motifs in kernel-based learning algorithms
MMC Vidovic, N Görnitz, KR Müller, G Rätsch, M Kloft
Machine Learning and Knowledge Discovery in Databases: European Conference …, 2015
17 · 2015
Finding the right XAI method: A Guide for the Evaluation and Ranking of Explainable AI Methods in Climate Science
P Bommer, M Kretschmer, A Hedström, D Bareeva, MMC Höhne
arXiv preprint arXiv:2303.00652, 2023
12 · 2023
DORA: exploring outlier representations in deep neural networks
K Bykov, M Deb, D Grinwald, KR Müller, MMC Höhne
arXiv preprint arXiv:2206.04530, 2022
11 · 2022
SVM2Motif—reconstructing overlapping DNA sequence motifs by mimicking an SVM predictor
MMC Vidovic, N Görnitz, KR Müller, G Rätsch, M Kloft
PloS one 10 (12), e0144782, 2015
11 · 2015
The meta-evaluation problem in explainable AI: identifying reliable estimators with MetaQuantus
A Hedström, P Bommer, KK Wickstrøm, W Samek, S Lapuschkin, ...
arXiv preprint arXiv:2302.07265, 2023
10 · 2023
ML2Motif—Reliable extraction of discriminative sequence motifs from learning machines
MMC Vidovic, M Kloft, KR Mueller, N Goernitz
PloS one 12 (3), e0174392, 2017
9 · 2017
Demonstrating the risk of imbalanced datasets in chest x-ray image-based diagnostics by prototypical relevance propagation
S Gautam, MMC Höhne, S Hansen, R Jenssen, M Kampffmeyer
2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), 1-5, 2022
7 · 2022
Self-supervised learning for 3D medical image analysis using 3D SimCLR and Monte Carlo dropout
Y Ali, A Taleb, MMC Höhne, C Lippert
arXiv preprint arXiv:2109.14288, 2021
6 · 2021