MilaNLP 2021 in Review Part III: Reasoning, Meaning, and Language

Reviewing MilaNLP's 2021 research papers on reasoning, meaning, and language

Federico Bianchi
Oct 8, 2021 · 5 min read

In this blog post series, we review what MilaNLP has been doing during 2021, analyzing the main themes of our research and the output the team has produced.

MilaNLP is the NLP Lab in Milan (Italy), led by Prof. Dirk Hovy at Bocconi University.

The first blog post covered what we did in the area of Bias and Ethics.

Our second blog post covered the area of text analytics.

Now we move to a different topic: this blog post covers different aspects of language, such as meaning, reasoning, and pragmatics.

Our MilaNLP logo. The left part represents the Duomo of Milan.

Part I: Reasoning, Language and Meaning

Some of our work has looked into the more theoretical dimensions of reasoning and language. From the pragmatic aspects of language to the use of embeddings to better understand how language evolves, we have contributed in several directions.

This blog post was compiled by several authors: our awesome team!

While a few of the papers we present are still preprints, most of our work has been peer-reviewed and presented at the leading conferences in the field.

1) The Importance of Modeling Social Factors of Language: Theory and Practice

by Dirk Hovy and Diyi Yang

NAACL 2021

We show that language is about more than just information: it includes a social dimension that spans a range of factors, from speaker and listener characteristics to the culture in which the conversation takes place.

NLP is only starting to incorporate these aspects into its models. What limited research exists is very encouraging, but there are many open questions and exciting research avenues waiting.

Taxonomy of social factors.

2) Words with Consistent Diachronic Usage Patterns are Learned Earlier: A Computational Analysis Using Temporally Aligned Word Embeddings

by Giovanni Cassani, Federico Bianchi and Marco Marelli

Cognitive Science

Our results show a unique relation between language change and age of acquisition (AoA), which is stronger when considering neighborhood-level measures of language change: words with more coherent diachronic usage patterns tend to be acquired earlier.

We use temporally aligned word embeddings and a large diachronic corpus of English to quantify language change in a data-driven, scalable way, which is grounded in language use. We show a unique and reliable relation between measures of language change and AoA while controlling for frequency, contextual diversity, concreteness, length, dominant part of speech, orthographic neighborhood density, and diachronic frequency variation. We analyze measures of language change tackling both the change in lexical representations and the change in the relation between lexical representations and the words with the most similar usage patterns, showing that they capture different aspects of language change.

Words like “finger” and “thunder” have a low age of acquisition, and their 2D word representations across the years stay close in space; this does not hold for words like “pregnant” and “recorder”, which have a higher age of acquisition.
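The measures in the paper are richer than this (they also include neighborhood-level change and control for several covariates), but as a minimal sketch of the word-level intuition: a word's diachronic coherence can be scored as the average cosine similarity between its vectors in consecutive decades. The `embeddings_by_decade` mapping below (decade → word → aligned vector) is a hypothetical input format, not the paper's actual data structure.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def diachronic_coherence(word, embeddings_by_decade):
    """Average cosine similarity between a word's (already temporally
    aligned) vectors in consecutive decades; higher values indicate a
    more stable, coherent usage pattern over time."""
    decades = sorted(embeddings_by_decade)
    sims = [
        cosine(embeddings_by_decade[d1][word], embeddings_by_decade[d2][word])
        for d1, d2 in zip(decades, decades[1:])
        if word in embeddings_by_decade[d1] and word in embeddings_by_decade[d2]
    ]
    return float(np.mean(sims)) if sims else float("nan")
```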

3) Towards bridging the neuro-symbolic gap: deep deductive reasoners

by Monireh Ebrahimi, Aaron Eberhart, Federico Bianchi and Pascal Hitzler

Applied Intelligence

This paper provides a brief summary of the authors’ recent efforts to bridge the neural and symbolic divide in the context of deep deductive reasoners. Throughout the paper we discuss the strengths and limitations of models in terms of accuracy, scalability, transferability, generalizability, speed, and interpretability, and finally discuss possible modifications to enhance desirable capabilities.

In terms of architectures, we look at memory-augmented networks, Logic Tensor Networks, and compositions of LSTM models to explore their capabilities and limitations in conducting deductive reasoning. We apply these models to the Resource Description Framework (RDF), first-order logic, and the description logic EL+, respectively.
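As a rough illustration (our own simplification, not code from the paper) of one idea such reasoners rely on: knowledge-base-specific names can be replaced with generic placeholder identifiers, while the reserved RDFS vocabulary keeps its logical meaning, so that a model must reason over structure rather than memorized entity names.

```python
# Reserved schema vocabulary that keeps its logical meaning; everything
# else is knowledge-base-specific and gets normalized away.
RESERVED = {"rdf:type", "rdfs:subClassOf", "rdfs:subPropertyOf",
            "rdfs:domain", "rdfs:range"}

def normalize_triples(triples):
    """Replace concrete IRIs with arbitrary placeholder IDs so a model
    cannot lean on memorized entity names and may transfer across KBs."""
    mapping = {}
    def norm(term):
        if term in RESERVED:
            return term
        if term not in mapping:
            mapping[term] = f"e{len(mapping)}"
        return mapping[term]
    return [(norm(s), norm(p), norm(o)) for s, p, o in triples]

print(normalize_triples([("ex:Milan", "rdf:type", "ex:City"),
                         ("ex:City", "rdfs:subClassOf", "ex:Place")]))
# [('e0', 'rdf:type', 'e1'), ('e1', 'rdfs:subClassOf', 'e2')]
```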

4) Contrastive Language-Image Pre-training for the Italian Language

by Federico Bianchi, Giuseppe Attanasio, Raphael Pisoni, Silvia Terragni, Gabriele Sarti and Sri Lakshmi

Preprint

We present the first CLIP model for the Italian Language (CLIP-Italian), trained on more than 1.4 million image-text pairs. Results show that CLIP-Italian outperforms the multilingual CLIP model on the tasks of image retrieval and zero-shot classification.

CLIP (Contrastive Language-Image Pre-training) is a very recent multi-modal model that jointly learns representations of images and texts. The model is trained on a massive amount of English data and shows impressive performance on zero-shot classification tasks. Training the same model on a different language is not trivial, since data in other languages might not be sufficient and the model needs high-quality translations of the texts to guarantee good performance.
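To make the training objective concrete, here is a minimal PyTorch sketch of the symmetric contrastive (InfoNCE) loss that CLIP-style models optimize. The encoders, projection heads, and batch construction are omitted, and the temperature is an illustrative default, not necessarily the value used for CLIP-Italian.

```python
import torch
import torch.nn.functional as F

def clip_loss(image_embs, text_embs, temperature=0.07):
    """Symmetric contrastive loss over a batch of paired image/text
    embeddings: matching pairs sit on the diagonal of the similarity
    matrix and act as the positives; all other pairs are negatives."""
    image_embs = F.normalize(image_embs, dim=-1)
    text_embs = F.normalize(text_embs, dim=-1)
    logits = image_embs @ text_embs.t() / temperature  # (batch, batch)
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i = F.cross_entropy(logits, targets)      # image -> text
    loss_t = F.cross_entropy(logits.t(), targets)  # text -> image
    return (loss_i + loss_t) / 2
```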

5) Language in a (Search) Box: Grounding Language Learning in Real-World Human-Machine Interaction

by Federico Bianchi, Ciro Greco and Jacopo Tagliabue

NAACL 2021

We investigate grounded language learning through real-world data, modelling teacher-learner dynamics through the natural interactions between users and search engines.

We explore the emergence of semantic generalization from unsupervised dense representations outside of synthetic environments. A grounding domain, a denotation function and a composition function are learned from user data only. We show how the resulting semantics for noun phrases exhibits compositional properties while being fully learnable without any explicit labelling. We benchmark our grounded semantics on compositionality and zero-shot inference tasks, and we show that it provides better results and better generalizations than SOTA non-grounded models, such as word2vec and BERT.
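The actual model learns dense representations and a trainable composition function from user data; purely as a toy illustration of the underlying set-theoretic intuition (not the paper's method), one could read a denotation straight off a click log and compose word meanings by intersection. The click-log format below is a hypothetical simplification.

```python
from collections import defaultdict

def learn_denotations(click_log):
    """Map each query term to the set of products users clicked after
    issuing it: a data-driven stand-in for a denotation function."""
    denotation = defaultdict(set)
    for query, product in click_log:
        denotation[query].add(product)
    return denotation

def compose(denotation, *words):
    """Compose word meanings by intersecting their denotation sets."""
    sets = [denotation[w] for w in words]
    return set.intersection(*sets) if sets else set()

log = [("nike", "p1"), ("nike", "p2"), ("shoes", "p2"), ("shoes", "p3")]
print(compose(learn_denotations(log), "nike", "shoes"))  # {'p2'}
```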

So Long

Thank you for reading our work! Feel free to contact us if you have any questions!

If you find errors, you can send me a message on Twitter.
