Word Embedding Explained and Visualized - word2vec and wevi
Xin Rong

Published on Mar 21, 2016

This is a talk I gave at the Ann Arbor Deep Learning Event (a2-dlearn), hosted by Daniel Pressel et al. I gave an introduction to the working mechanism of the word2vec model and demonstrated wevi, a visual tool (or, more accurately, a toy for now) that I created to support interactive exploration of the word embedding training process. I am sharing this video because I think it might help people better understand the model and how to use the visual interface.

The audience was a mix of academic and industry people interested in general neural network and deep learning techniques. My talk was one of six talks in total. Thank you, Daniel, for organizing this amazing event! It was truly rewarding to learn so much from other researchers in a single afternoon.

I apologize for not speaking as clearly as I could. I did not realize I was talking so fast... I had only two hours of sleep the night before, and apparently that created multiple short circuits in the neural networks in my brain... Please turn on the subtitles for the best intelligibility.

Links:
slides: http://bit.ly/wevi-slides
wevi demo: http://bit.ly/wevi-online
wevi git repository: https://github.com/ronxin/wevi
my homepage: http://bit.ly/xinrong
a2-dlearn event: http://midas.umich.edu/event/a2-dlear...
word2vec website: https://code.google.com/p/word2vec/
word2vec parameter learning explained: http://arxiv.org/abs/1411.2738
