Ákos Kádár
Tilburg Center for Cognition and Communication, Tilburg University
Verified email at uvt.nl · Homepage
Title · Cited by · Year
Representation of linguistic form and function in recurrent neural networks
A Kádár, G Chrupała, A Alishahi
Computational Linguistics 43 (4), 761-780, 2017
Cited by 79 · 2017
Imagination improves multimodal translation
D Elliott, A Kádár
arXiv preprint arXiv:1705.04350, 2017
Cited by 42 · 2017
TextWorld: A learning environment for text-based games
MA Côté, Á Kádár, X Yuan, B Kybartas, T Barnes, E Fine, J Moore, ...
Workshop on Computer Games, 41-75, 2018
Cited by 34* · 2018
FigureQA: An annotated figure dataset for visual reasoning
SE Kahou, V Michalski, A Atkinson, Á Kádár, A Trischler, Y Bengio
arXiv preprint arXiv:1710.07300, 2017
Cited by 33 · 2017
Learning language through pictures
G Chrupała, A Kádár, A Alishahi
arXiv preprint arXiv:1506.03694, 2015
Cited by 32 · 2015
Lessons learned in multilingual grounded language learning
Á Kádár, D Elliott, MA Côté, G Chrupała, A Alishahi
arXiv preprint arXiv:1809.07615, 2018
Cited by 6 · 2018
NeuralREG: An end-to-end approach to referring expression generation
TC Ferreira, D Moussallem, Á Kádár, S Wubben, E Krahmer
arXiv preprint arXiv:1805.08093, 2018
Cited by 6 · 2018
Improving lemmatization of non-standard languages with joint learning
E Manjavacas, A Kádár, M Kestemont
arXiv preprint arXiv:1903.06939, 2019
Cited by 5 · 2019
DIDEC: The Dutch image description and eye-tracking corpus
E van Miltenburg, A Kádár, R Koolen, E Krahmer
Proceedings of the 27th International Conference on Computational …, 2018
Cited by 5 · 2018
Revisiting the hierarchical multiscale LSTM
A Kádár, MA Côté, G Chrupała, A Alishahi
arXiv preprint arXiv:1807.03595, 2018
Cited by 5 · 2018
Learning word meanings from images of natural scenes
Á Kádár, A Alishahi, G Chrupała
Traitement Automatique des Langues. In press, preprint available at http …, 2015
Cited by 5 · 2015
Towards replication in computational cognitive modeling: A machine learning perspective
C Emmery, Á Kádár, TJ Wiltshire, AT Hendrickson
Computational Brain & Behavior 2 (3-4), 242-246, 2019
Cited by 1 · 2019
Linguistic analysis of multi-modal recurrent neural networks
A Kádár, G Chrupała, A Alishahi
Proceedings of the Fourth Workshop on Vision and Language, 8-9, 2015
Cited by 1 · 2015
Bootstrapping Disjoint Datasets for Multilingual Multimodal Representation Learning
Á Kádár, G Chrupała, A Alishahi, D Elliott
arXiv preprint arXiv:1911.03678, 2019
2019
Learning Visually Grounded and Multilingual Representations
A Kádár
[sn], 2019
2019
On the difficulty of a distributional semantics of spoken language
G Chrupała, L Gelderloos, Á Kádár, A Alishahi
arXiv preprint arXiv:1803.08869, 2018
2018
FigureQA: An Annotated Figure Dataset for Visual Reasoning
S Ebrahimi Kahou, V Michalski, A Atkinson, A Kadar, A Trischler, ...
arXiv preprint arXiv:1710.07300, 2017
2017
Towards learning domain-general representations for language from multi-modal data
A Kadar, G Chrupala, A Alishahi
The 26th Meeting of Computational Linguistics in the Netherlands (CLIN26), 2015
2015
Grounded Learning for Source Code Component Retrieval
Á Kádár
Tilburg University, 2014
2014
In collaboration with
A Alishahi, M Barking, L Gelderloos