Posts

Word Embeddings - Vectors that represent words

Ever since I joined the R & D group in our company, I have been working with neural networks that generate representations of words and documents. Representing words and documents as vectors allows us to carry out natural language processing tasks mathematically. For example, we can measure how similar two documents are (using cosine similarity), solve quick analogies (via vector operations), and rank documents. But how do you produce the vectors that represent the words? Well, there are many ways. Traditional NLP approaches such as matrix factorization (LDA, GloVe) can still work very well, and there are newer methods, many of which use neural networks (Word2Vec). I have been producing document vectors using gensim's doc2vec (which builds on Word2Vec), specifically the hierarchical softmax skip-gram model (Word2Vec.py neural network). When I was reading this part of the code, I thought that if this is a neural network (shallow, not a deep learning model)…
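To make the setup concrete, here is a minimal sketch of training a hierarchical softmax skip-gram model with gensim and using the resulting word vectors for similarity and analogies. The toy corpus and parameter values are illustrative only, and the parameter names assume gensim 4.x (older versions used size instead of vector_size).

    from gensim.models import Word2Vec

    # Toy corpus for illustration; a real model needs far more text
    sentences = [["the", "quick", "brown", "fox"],
                 ["the", "lazy", "brown", "dog"]]

    # sg=1 selects skip-gram, hs=1 hierarchical softmax (negative=0 turns
    # off negative sampling, so hierarchical softmax is the sole objective)
    model = Word2Vec(sentences, vector_size=50, window=2,
                     min_count=1, sg=1, hs=1, negative=0)

    # Cosine similarity between two word vectors
    print(model.wv.similarity("quick", "lazy"))

    # Analogy via vector arithmetic (needs a corpus containing these words):
    # model.wv.most_similar(positive=["king", "woman"], negative=["man"])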

RE WORK - Boston Deep Learning Summit

I recently attended the Deep Learning Summit in Boston. The event was organized by RE WORK. RE WORK was founded in London, and the team is all women. The mission of the RE WORK team is to encourage conversations around entrepreneurship, technology, and science to shape the future. This is a quick recount of the event from my perspective. First of all, I had never been to Boston. The public transportation that I took from the airport to the conference venue was really easy to navigate (in short, I did not get lost). This is probably a result of the effort put in by the local government to make Boston a premier conference destination. Traffic congestion is another story. Schedule of talks: the conference schedule was packed. The speakers were researchers from some of the top tech companies; Facebook, Google, Amazon, eBay, and Spotify were all represented. I was excited about two topics in the schedule. Here are some of the papers presented. The papers I chose below are some…

Gensim Doc2Vec on Spark - a quest to get the right Vector

Ever since I joined the R & D group we have been doing a lot of cool things, like trying IBM Watson (see previous blog entry). Now we are doing a lot of natural language processing. We wanted to compare the similarity of two documents. There is this excellent project Gensim (doc2vec) that lets you easily translate large blocks of text into fixed-length feature vectors for making comparisons. Here is the link to the original paper from Google researchers that explains the approach. In essence, they wanted to find a representation that would overcome the weaknesses of the bag-of-words model. The doc2vec approach proves to be a reliable way to compare the similarity of documents because it takes into consideration the semantics and the order of words in context. So with that, we wanted to use it for a corpus of 26 million documents. Calculating doc2vec vectors for 26 million documents is not a small task, so we needed to process them in Spark. The problem is that there…
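For reference, here is a minimal sketch of the single-machine gensim Doc2Vec workflow this post builds on: train on tagged documents, infer a fixed-length vector for new text, and rank by cosine similarity. The documents, tags, and parameter values are illustrative, and model.dv assumes gensim 4.x (older versions exposed model.docvecs).

    from gensim.models.doc2vec import Doc2Vec, TaggedDocument

    # Tiny tagged corpus for illustration
    docs = [TaggedDocument(words=["compare", "two", "documents"], tags=["doc_0"]),
            TaggedDocument(words=["bag", "of", "words", "model"], tags=["doc_1"])]

    model = Doc2Vec(docs, vector_size=100, min_count=1, epochs=20)

    # Infer a fixed-length feature vector for an unseen document
    vec = model.infer_vector(["similarity", "of", "documents"])

    # Rank training documents by cosine similarity to the inferred vector
    print(model.dv.most_similar([vec]))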

Spark DataFrame - Array[ByteBuffer] - IllegalArgumentException

I was processing several million documents (~20 million) from which we needed to extract NLP features using NLP4J, OpenNLP, and WordNet. The combination of the three sets of NLP features blows up each record to 11 times its original size. We are using all three because we do not yet know which feature sets will be helpful to us. The original dataset is in Parquet files in HDFS (16 partitions). I thought it would be convenient to just use withColumn and pass a UDF (user-defined function) over the column that needs those features; withColumn adds the calculated column back to the DataFrame. So I created the Spark job (I am on Spark 1.5.2-cdh5.5.2) for the above, and things started to get nasty. I am blowing up the ByteBuffer array in the in-memory columnar storage. This is the exception I am getting; there is no reference to my code in this stack trace. java.lang.IllegalArgumentException at java.nio.ByteB…
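For context, the withColumn-plus-UDF pattern looks roughly like this. This is a hedged sketch, not the actual job: the path, column names, and the extract_features body are hypothetical placeholders, and it is written against the modern SparkSession API (Spark 1.5 used SQLContext instead).

    from pyspark.sql import SparkSession
    from pyspark.sql.functions import udf
    from pyspark.sql.types import StringType

    spark = SparkSession.builder.appName("nlp-features").getOrCreate()

    # Hypothetical input path; the real dataset was 16 Parquet partitions in HDFS
    df = spark.read.parquet("hdfs:///data/documents")

    # Placeholder for the real NLP4J/OpenNLP/WordNet feature extraction,
    # which blew each record up to ~11x its original size
    def extract_features(text):
        return text.upper()

    extract_udf = udf(extract_features, StringType())

    # withColumn adds the computed column back to the DataFrame; with very
    # wide rows, materializing this in the in-memory columnar format is
    # where the ByteBuffer limits can be exceeded
    df = df.withColumn("nlp_features", extract_udf(df["text"]))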

Watson - The mystery after Jeopardy!

We have been deep diving into cognitive computing. One of the best platforms that a business can leverage to hit the ground running with cognitive computing is IBM Watson. Watson has a lot of capabilities, especially with the acquisition of AlchemyAPI (Alchemy Acquisition - IBM). You get a language translator, a language classifier, retrieve and rank, text to speech, a tone analyzer, and a lot more. It is just a matter of how these capabilities can be integrated into your business use cases. As part of "the answer" company, we have tremendous and diverse use cases for search, and giving you answers in a way that is sensible and relevant, and that helps a user decide better, is at the heart of what makes us "the answer" company. I was a part of the team given the freedom to explore IBM Watson (no matter the cost). We tried the different APIs over the span of a few weeks. Of course, we had to take a look at Watson's Retrieve and Rank (IBM Watson's Retrieve…

2016 - Movies Data Analysis - Linear Regression Modelling

Java 1.8 Migration - Performance and Garbage Collection

Java 7 to Java 8 - that is easy! I have been working on migrating our web application from Java 1.7 to Java 1.8. Migrating our web app is quite a challenge. What makes it more challenging is that our web application has a really unique process footprint (well, that can be said of every web application). You have to know your application like the back of your hand, especially if you want to tune garbage collection for it. When I accepted the challenge of moving our web application from Java 1.7 to Java 1.8, I thought it was going to be a breeze, considering that 1.7 to 1.8 is not that big a version jump. It turned out that I was totally wrong. Here are some of the major challenges that I encountered: 1. Permanent generation turned into Metaspace. Before Java 1.8, class metadata was located in the permanent generation of the Java heap, which could be sized using the -XX:PermSize option. This was removed in Java 1.8 (Remove Permanent Generation). The reason it was removed…
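As a quick illustration of what this change means in practice, here are the relevant JVM options side by side. The sizes shown are hypothetical examples, not tuned recommendations for any particular application.

    # Java 1.7: size the permanent generation (these flags are ignored in 1.8)
    -XX:PermSize=256m -XX:MaxPermSize=512m

    # Java 1.8: Metaspace replaces PermGen and lives in native memory;
    # it is effectively unbounded by default, so cap it explicitly
    -XX:MetaspaceSize=256m -XX:MaxMetaspaceSize=512m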