News: a Special Issue on Learning Semantics is underway in the journal Machine Learning.
A key ambition of AI is to make computers able to evolve in and interact with the real world. This is possible only if the machine can produce a correct interpretation of its available modalities (image, audio, text, …), upon which it can then reason and take appropriate actions. Computational linguists use the term “semantics” to refer to the possible interpretations (concepts) of natural language expressions, and have shown interest in “learning semantics”, that is, discovering these interpretations automatically. However, semantics is not restricted to natural language: it is equally pertinent to the speech and vision modalities. Knowing visual concepts and the common relationships between them would bring a leap forward in scene analysis and image parsing, akin to the improvement that interpreting language phrases would bring to data mining, information extraction, or automatic translation, to name a few.
Progress in learning semantics has been slow, mainly because it involves sophisticated models that are hard to train, especially since they seem to require large quantities of precisely annotated training data. However, recent advances in learning with weak and limited supervision have led to a new body of research in semantics based on multi-task and transfer learning, on learning with semi-supervised or ambiguous supervision, or even with no supervision at all. The goal of this workshop is to explore these new directions and, in particular, to investigate the following questions:
- How should meaning representations be structured to be easily interpretable by a computer and still express rich and complex knowledge?
- What is a realistic supervision setting for learning semantics? How can we learn sophisticated representations with limited supervision?
- How can we jointly infer semantics from several modalities?
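To make the first question concrete, here is a toy sketch (all names and data invented for illustration) of one common style of meaning representation: an executable logical form evaluated against a small knowledge base, in the spirit of dependency-based compositional semantics (Liang et al., 2011, listed below).

```python
# Toy illustration (all names and facts invented): a meaning representation
# as an executable logical form over a tiny knowledge base of
# (subject, relation, object) triples.
KB = {
    ("Paris", "capital_of", "France"),
    ("Berlin", "capital_of", "Germany"),
    ("France", "located_in", "Europe"),
    ("Germany", "located_in", "Europe"),
}

def query(relation, obj):
    """Denotation of the logical form "lambda x. relation(x, obj)":
    all entities x standing in `relation` to `obj` according to the KB."""
    return {s for (s, r, o) in KB if r == relation and o == obj}

# The sentence "What is the capital of France?" could be interpreted as the
# logical form capital_of(x, France), whose denotation the machine computes:
print(query("capital_of", "France"))  # → {'Paris'}
```

A representation of this kind is trivially machine-interpretable (it can be executed), while richer knowledge is expressed by composing such forms; the research questions above ask how to learn these representations with little or no direct supervision.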
Invited speakers:
- Chris Burges – Microsoft
- Derek Hoiem – UIUC (PASCAL2 invited speaker)
- Percy Liang – Stanford
- Raymond Mooney – UT Austin
- Richard Socher – Stanford University
- Josh Tenenbaum – MIT
Important dates:
- Submission deadline: 23:59 EST, Monday, October 3, 2011 (passed)
- Acceptance notification: October 21, 2011 (passed)
- Workshop date: Saturday, December 17, 2011 (passed)
We solicit submissions of abstracts to the workshop. Abstracts should be at most 2 pages long in the NIPS format (excluding references). Submissions should not be anonymized and should include a title, the author names, and electronic and physical addresses. They can be structured as extended abstracts or as 2-page NIPS papers.
Selected abstracts will be presented as posters during morning and afternoon sessions. Submissions should be sent by email to antoine [dot] bordes [at] hds [dot] utc [dot] fr.
Abstracts should be sent no later than 23:59 EST, Monday, October 3, 2011.
Organizers:
- Antoine Bordes – CNRS – Université de Technologie de Compiègne
- Jason Weston – Google
- Ronan Collobert – IDIAP
- Léon Bottou – Microsoft
- Y-Lan Boureau – NYU/INRIA
- Nicolas Le Roux – INRIA
- Marc’Aurelio Ranzato – Google
- Ilya Sutskever – University of Toronto
- Graham Taylor – NYU
- Nicolas Usunier – UPMC
References:
- Parsing Natural Scenes and Natural Language with Recursive Neural Networks. R. Socher, C. Lin, A. Y. Ng, and C. D. Manning. Proceedings of the 28th International Conference on Machine Learning (ICML), 2011.
- Learning Dependency-Based Compositional Semantics. P. Liang, M. I. Jordan, and D. Klein. Proceedings of the Association for Computational Linguistics (ACL), 2011.
- From Machine Learning to Machine Reasoning. L. Bottou. arXiv:1102.1808, February 2011.
- Panning for Gold: Finding Relevant Semantic Content for Grounded Language Learning. D. L. Chen and R. J. Mooney. Proceedings of MLSLP, 2011.
- Blocks World Revisited: Image Understanding Using Qualitative Geometry and Mechanics. A. Gupta, A. A. Efros, and M. Hebert. Proceedings of ECCV, 2010.
- Unsupervised Ontology Induction from Text. H. Poon and P. Domingos. Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics (ACL), 2010.
- Towards Understanding Situated Natural Language. A. Bordes, N. Usunier, R. Collobert, and J. Weston. Proceedings of the 13th AISTATS, 2010.
- Object Detection Grammars. P. Felzenszwalb and D. McAllester. University of Chicago, Computer Science TR-2010-02, 2010.
- Describing Objects by Their Attributes. A. Farhadi, I. Endres, D. Hoiem, and D. A. Forsyth. Proceedings of CVPR, 2009.
- Grammar-Based Representations in Visual Scene Parsing. V. Savova, F. Jaekel, and J. Tenenbaum. Proceedings of CogSci, 2009.
- Learning Language Semantics from Ambiguous Supervision. R. J. Kate and R. J. Mooney. Proceedings of AAAI, 2007.