You know what you want to call your product, but not why?
This app generates an explanation for your acronym from the information you provide.
By entering helper words under your acronym, you give us the means to generate a (hopefully) sensible meaning for it.
We take the helper words and ask a Word2vec model (in which each word gets a vector representation) for their nearest neighbours. This model is a shallow two-layer neural network trained on corpora from Wikipedia and Gigaword 5. It has the useful property that words appearing together in linguistic contexts are also close to one another in the 50-dimensional space of the embedding.
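The nearest-neighbour lookup boils down to ranking words by cosine similarity in the embedding space. Here is a minimal sketch with hand-made toy vectors standing in for the real 50-dimensional Word2vec embedding; all words and numbers below are invented for illustration:

```python
import numpy as np

# Toy 4-dimensional stand-ins for the real 50-dimensional word vectors.
vectors = {
    "fast":   np.array([0.9, 0.1, 0.0, 0.2]),
    "quick":  np.array([0.8, 0.2, 0.1, 0.1]),
    "rapid":  np.array([0.85, 0.15, 0.05, 0.2]),
    "banana": np.array([0.0, 0.9, 0.8, 0.1]),
}

def nearest_neighbours(word, vectors, k=2):
    """Rank all other words by cosine similarity to `word`."""
    v = vectors[word]

    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    scores = [(w, cos(v, u)) for w, u in vectors.items() if w != word]
    scores.sort(key=lambda pair: pair[1], reverse=True)
    return scores[:k]

# "rapid" and "quick" come out on top; "banana" points elsewhere.
print(nearest_neighbours("fast", vectors))
```

In the real app the same query would go against the full pretrained embedding (for example via gensim's `KeyedVectors.most_similar`), rather than a hand-built dictionary.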
If you enter more than four words, a single query of the Word2vec model is no longer meaningful. We therefore first cluster the words in this 50-dimensional space and compute nearest neighbours for each cluster.
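The clustering step can be sketched as plain k-means over the word vectors. The vectors and words below are fabricated (two artificial groups in 50 dimensions) purely to show the mechanics:

```python
import numpy as np
from sklearn.cluster import KMeans

# Fake 50-dimensional vectors for six helper words: two clearly
# separated groups standing in for real Word2vec vectors.
rng = np.random.default_rng(0)
group_a = rng.normal(loc=0.0, scale=0.1, size=(3, 50))
group_b = rng.normal(loc=5.0, scale=0.1, size=(3, 50))
word_vectors = np.vstack([group_a, group_b])
words = ["alpha", "beta", "gamma", "delta", "epsilon", "zeta"]

# Cluster in the full 50-dimensional space; nearest neighbours can then
# be queried once per cluster instead of once per word.
kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(word_vectors)
for word, label in zip(words, kmeans.labels_):
    print(word, "-> cluster", label)
```

The number of clusters here is fixed at two for the toy data; the app would pick it based on how many helper words you entered.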
Finally, we plot these clusters for you. Since a 50-dimensional space is hard to visualise, we first embed it into two dimensions using a Locally Linear Embedding. This algorithm projects into a lower-dimensional space while preserving the geometry of local neighbourhoods. Such an embedding is inherently lossy, so do not be surprised if clusters that are well separated in the 50-dimensional space overlap in 2D.
Note that words that are not in the Word2vec vocabulary are removed. Do not worry, the vocabulary is very comprehensive; just try typing in "blah". We also remove suggestions that are too similar to one another, such as experimental and experiments.
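Both filters can be expressed in a few lines: drop out-of-vocabulary words, then drop any word whose cosine similarity to an already-kept word exceeds a threshold. The vocabulary, vectors, and threshold below are invented for illustration:

```python
import numpy as np

# Toy vocabulary with made-up 3-dimensional vectors; "experiments"
# is deliberately almost parallel to "experiment".
vocab = {
    "experiment":  np.array([0.9, 0.1, 0.1]),
    "experiments": np.array([0.88, 0.12, 0.1]),
    "laboratory":  np.array([0.1, 0.9, 0.2]),
}

def filter_and_dedup(words, vocab, threshold=0.98):
    """Keep in-vocabulary words that are not near-duplicates of kept ones."""
    kept, kept_vecs = [], []
    for w in words:
        if w not in vocab:                 # out-of-vocabulary: drop
            continue
        v = vocab[w] / np.linalg.norm(vocab[w])
        if any(float(v @ u) > threshold for u in kept_vecs):
            continue                       # too similar to a kept word: drop
        kept.append(w)
        kept_vecs.append(v)
    return kept

# "xyzzy" is out of vocabulary; "experiments" duplicates "experiment".
print(filter_and_dedup(["experiment", "experiments", "laboratory", "xyzzy"], vocab))
```

The similarity threshold is an assumed tuning knob; the app's actual cutoff for "too similar" may differ.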