Word embeddings allow us to model the semantics of words computationally. They are therefore widely used in natural language processing and find applications in a variety of language-related tasks. This workshop introduces word embeddings to researchers working across the computational social sciences.
The workshop will take place on September 2, 2019 at the 2019 European Symposium on Societal Challenges in Computational Social Science in Zurich, Switzerland.
We invite researchers from across the computational social sciences who work with text data to participate in this workshop. Participants should have a (very) basic understanding of mathematical statistics and probability theory, as well as basic knowledge of the R or Python programming language.
The workshop will be of particular interest to researchers working on cross-disciplinary problems that seek to incorporate recent advancements in natural language processing.
Questions in advance? Send an email to firstname.lastname@example.org or email@example.com.
The availability of large digital corpora (collections of texts), together with the computational resources to analyse them efficiently, has led to groundbreaking advances in natural language processing (NLP). Novel computational methods for modelling the semantics of text use vector representations to encode the meaning of words mathematically. The resulting representations are widely known as word embeddings. These embedding vectors capture the semantic relatedness between words co-occurring in a predefined context and can be used to quantify the degree of similarity between different textual representations. For example, “man” and “woman” have vector representations that lie very close to each other, even though no semantic relationships are explicitly imposed during the model building phase. Moreover, word embeddings support arithmetic operations with semantics: for example, the vector representation of “king” minus that of “man” plus that of “woman” is closest in cosine similarity to the vector of “queen” (see [2]). Word embeddings thus offer a means to harness vast amounts of data to automatically capture semantic relationships between words and incorporate context into language models, and they have found applications across the broad spectrum of NLP.
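The analogy arithmetic described above can be sketched in a few lines of plain Python. The tiny 3-dimensional vectors below are purely illustrative assumptions (real embeddings are learned from large corpora and typically have hundreds of dimensions), but they show the mechanics of subtracting and adding vectors and ranking words by cosine similarity:

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Hypothetical toy embeddings, hand-crafted for illustration only.
emb = {
    "king":  [0.8, 0.7, 0.1],
    "queen": [0.8, 0.1, 0.7],
    "man":   [0.2, 0.9, 0.1],
    "woman": [0.2, 0.1, 0.9],
}

# Compute the vector for "king" - "man" + "woman".
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]

# Find the vocabulary word whose embedding is closest to the result.
nearest = max(emb, key=lambda word: cosine(emb[word], target))
print(nearest)  # prints: queen
```

With learned embeddings, the same nearest-neighbour search over the full vocabulary (usually excluding the three query words) recovers the analogical answer reported in [2].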
In this workshop, we provide a theoretical and mathematical introduction to vector space models and word embeddings, as well as an overview of their potential applications in the social sciences. We highlight the suitability of word embeddings for interdisciplinary tasks dealing with text data and also illustrate the limitations of this heavily data-reliant framework. In doing so, we aim to equip researchers with a critical understanding of these approaches and the practical knowledge to implement them, opening up new avenues of research in their specific areas of expertise.
Furthermore, we are happy to announce that Laura Burdick from the University of Michigan’s Artificial Intelligence Lab will be giving a guest talk on her research on word embeddings during our workshop.
09:00 - 10:00  Introduction to word embeddings and vector space models (word2vec [2, 3] and GloVe [4])
10:00 - 11:15  Applications and limitations of word embedding models in the computational social sciences; guest talk by Laura Burdick
11:15 - 11:30  Coffee break
11:30 - 12:00  Blind question round
12:00 - 12:30  Practical session: developing vector space models in Python and R
This workshop will help participants understand the fundamental theory of vector space models and introduce potential applications of these methods to interdisciplinary tasks. Furthermore, participants will learn how to implement word embedding models in a straightforward way using the R and Python programming languages.
Maximilian Mozes, PhD student (University College London)
Bennett Kleinberg, Assistant Professor in Data Science (University College London)
[1] D. Jurafsky and J. H. Martin, “Speech and Language Processing, 3rd ed. draft.” [Online]. Available at: https://web.stanford.edu/~jurafsky/slp3/. [Accessed: 09-Mar-2019].
[2] T. Mikolov, W. Yih, and G. Zweig, “Linguistic regularities in continuous space word representations,” in NAACL-HLT, 2013, vol. 13, pp. 746–751.
[3] T. Mikolov, I. Sutskever, K. Chen, G. S. Corrado, and J. Dean, “Distributed representations of words and phrases and their compositionality,” in Advances in Neural Information Processing Systems, 2013, pp. 3111–3119.
[4] J. Pennington, R. Socher, and C. Manning, “GloVe: Global vectors for word representation,” in Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), Doha, Qatar, 2014, pp. 1532–1543.