Theoretical explanation and a practical example.

Word embedding is a technique for representing the words (i.e. tokens) in a vocabulary. It is considered one of the most useful and important concepts in natural language processing (NLP).

In this post, I will cover the idea of word embedding and how it is useful in NLP. Then, we will go over a practical example to build intuition using the **Embedding Projector** of TensorFlow.

Word embedding means representing a word with a vector in an n-dimensional vector space. Consider a vocabulary that contains 10000 words. With traditional integer encoding, words are represented with numbers from 1 to 10000. The downside of this approach is that the numbers cannot capture any information about the meaning of words, because they are assigned without any consideration of meaning.

If we use word embedding with a dimension of 16, each word is represented by a 16-dimensional vector. The main advantage of word embedding is that words sharing a similar context can be placed close to each other in the vector space. Thus, the vectors carry a sense of the semantics of the words. Let's assume we are trying to do sentiment analysis of customer reviews. If we use word embeddings to represent the words in the reviews, words associated with positive meaning tend to point in a particular direction. Similarly, words with negative meaning are likely to point in a different direction.

A famous analogy that illustrates the idea of word embeddings is the king-queen example. It is based on the vector representations of the words "king", "queen", "man" and "woman". If we subtract man from king and then add woman, we end up with a vector very close to queen:
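The arithmetic can be sketched with toy vectors. The 3-dimensional values below are invented purely for illustration; real embeddings such as Word2Vec learn these vectors from large corpora:

```python
import numpy as np

# Toy 3-dimensional embeddings, hand-picked so that the "royalty" and
# "gender" information sit in separate components. Not real embeddings.
king  = np.array([0.9, 0.8, 0.1])
man   = np.array([0.5, 0.1, 0.1])
woman = np.array([0.5, 0.1, 0.9])
queen = np.array([0.9, 0.8, 0.9])

# king - man removes the "male" component; adding woman puts the
# "female" component back, landing on (or near) queen.
result = king - man + woman
print(result)  # [0.9 0.8 0.9] -- identical to queen in this toy setup
```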

There are different methods to measure the similarity of vectors. One of the most common is **cosine similarity**, which is the cosine of the angle between two vectors. Unlike Euclidean distance, cosine similarity does not take the magnitude of the vectors into account when measuring similarity. Thus, cosine similarity focuses on the orientation of the vectors, not their length.
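A short sketch makes the difference between the two measures visible. Scaling a vector changes its Euclidean distance to another vector but leaves the cosine similarity untouched:

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: dot product divided by
    # the product of the vector magnitudes.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 2.0])
b = np.array([3.0, 6.0])  # same direction as a, three times the length

# Euclidean distance grows with magnitude; cosine similarity does not.
print(np.linalg.norm(a - b))     # ~4.47
print(cosine_similarity(a, b))   # 1.0 (identical orientation)
```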

Consider the words “exciting”, “boring”, “thrilling”, and “dull”. In a 2-dimensional vector space, the vectors for these words might look like:

As the angle between two vectors decreases, the cosine of the angle increases and thus the cosine similarity increases. If two vectors lie in the same direction (the angle between them is zero), the cosine similarity is 1. On the other hand, if two vectors point in opposite directions (the angle between them is 180 degrees), the cosine similarity is -1.
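We can check this with the four example words above. The 2-dimensional vectors are hypothetical, chosen so that the positive words point one way and the negative words the other:

```python
import numpy as np

def cos_sim(a, b):
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical 2-d vectors: positive and negative sentiment words
# point in roughly opposite directions.
vectors = {
    "exciting":  np.array([ 0.9,  0.8]),
    "thrilling": np.array([ 0.8,  0.9]),
    "boring":    np.array([-0.8, -0.9]),
    "dull":      np.array([-0.9, -0.8]),
}

# Small angle between synonyms -> similarity close to 1.
print(cos_sim(vectors["exciting"], vectors["thrilling"]))  # ~0.99
# Near-opposite directions for antonyms -> similarity close to -1.
print(cos_sim(vectors["exciting"], vectors["boring"]))     # ~-0.99
```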

When we use word embeddings, the model learns that thrilling and exciting are more likely to share the same context than thrilling and boring. If we represented the words with integers, the model would have no notion of the context of these words.

There are different methods to create word embeddings, such as Word2Vec, GloVe, or an embedding layer of a neural network. Another advantage of word embeddings is that we can use pre-trained embeddings in our models. For instance, Word2Vec and GloVe embeddings are publicly available and can be used in natural language processing tasks. We can also choose to train our own embeddings using an embedding layer in a neural network. For example, we can add an **Embedding** layer to a sequential model in **Keras**. Please note that training an embedding layer to high performance requires a lot of data.
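As a rough sketch of what that looks like, the toy model below uses the vocabulary size (10000) and embedding dimension (16) from earlier in the post; the layers after the embedding are illustrative, not part of the original article:

```python
import numpy as np
import tensorflow as tf

# A toy sequential model for binary sentiment classification.
# The Embedding layer maps integer word ids (vocabulary of 10000)
# to learned 16-dimensional vectors.
model = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=16),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])

# A batch of 2 "sentences", each a sequence of 5 integer word ids.
batch = np.array([[1, 5, 42, 7, 0], [3, 3, 9, 100, 2]])

# The embedding layer turns each word id into a 16-d vector.
vectors = model.layers[0](batch)
print(vectors.shape)  # (2, 5, 16): one 16-d vector per word
```

The embedding weights start out random and are updated by backpropagation along with the rest of the model, which is why a lot of labeled data is needed before the vectors become meaningful.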

The example we had with 4 words is a very simple case, but it captures the idea and motivation behind word embeddings. To visualize and inspect more complicated examples, we can use the Embedding Projector of TensorFlow.
