
Natural Language Processing (NLP): A Key to Human Advancement

Abstract

In this paper, we explore how NLP (Natural Language Processing) techniques contribute to human advancement. We revisit how the interpretation of text, learning from writing, and interaction with NLP-based technologies affect our lives, decisions, and behaviors. We also look at how NLP affects our community and social structure.

1 Introduction

NLP techniques are key to our human advancement. They are crucial now for interpreting a corpus of text, and they will become even more critical as more sources of text are digitized. This is also why we need to advocate for the adoption of digitization. NLP techniques are how we are going to sustain the information needs of our society. NLP is also a key technology in our learning, and it is becoming a necessity for our human development as we shift our social interactions online. As we produce more text that can be analyzed, we can look into a person's profile and information without that person explicitly revealing it. Profiling can be used to target and influence

Are you tired of waiting? - AI can help!

Are you tired of waiting? Part of almost every day of your life will be spent waiting. You will be waiting in traffic, waiting to buy a product or acquire a service, waiting on the internet for your movie, music, or social media update. For the longest time, the human race has been inventing things to save us time. So why are we still waiting?

We live on only one Earth, bound within finite space and resources, and more human beings keep being introduced to this home of ours. There are only so many resources we can share, so the future will bring longer wait times, because a lot more of us (humans) will be vying for the same resources. Roadways, food, water, air, the internet, and services (e.g., healthcare, legal, social) will need to be used and scheduled efficiently. We have systems in place to access these resources. Our current policies a

AI Speeds Up Our Rendezvous With Complexity

Intelligent systems are augmenting our ability to make decisions. That ability rests on machine learning or deep learning models, typically non-parametric models that fit non-linear functions to large amounts of data. These models are hard to comprehend and explain, yet we are plugging them into business processes in every industry. Soon it will be hard to monitor and justify the root of a decision made by these models. Take, for example, a step in a process where a human decides. We can ask the human for the reason behind the decision, and the human exercises discretion based on their own ethics. The decision might not be as consistent or efficient at maximizing benefit, but at least somebody can explain its details, and we can understand the reason behind it. Now let's take the data behind past human choices and create a model using AI in an intelligent system. We opt
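To make the idea concrete, here is a minimal sketch of the kind of non-parametric, non-linear model described above, using scikit-learn (a library choice of mine, not named in the post). A single prediction is aggregated over hundreds of trees, which is exactly what makes "why did it decide that?" hard to answer.

```python
# A minimal sketch of a non-parametric, non-linear model fit to "past decisions."
# The synthetic data and model choice are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic "past decisions" data: 1,000 cases, 20 features each.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

model = RandomForestClassifier(n_estimators=300, random_state=0)
model.fit(X, y)

# A single decision: easy to obtain, hard to trace back to a human-readable reason.
case = X[:1]
print(model.predict(case), model.predict_proba(case))
print(f"Decision aggregated over {len(model.estimators_)} trees")
```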

AI's Part In Our Evolution

For the longest time, our ancestors have transferred knowledge through social learning. Social learning in a family or community setting builds the foundation for adaptability as an adult; an extended childhood with plenty of opportunities for social learning is one of our adaptations (https://www.americanscientist.org/article/the-benefits-of-a-long-childhood). All humans learn both socially and individually, and we use whatever we learn to guide our behavior and decisions toward what we think will benefit us. The context of social learning is an ever-changing landscape. The setting of our social learning and the people who influence us are now shifting online. With just a few clicks of a button, you can learn from professors at Ivy League universities, consult with a doctor, or even cook dinner with a world-class chef. Gone are the days when you learned your trade from your father or immediate family. You no longer are bound by the tradition of your ancestry o

Ethics and Fairness in Artificial Intelligence

We are at a time when leveraging Artificial Intelligence has become commonplace. Every business organization is now looking into its data and building intelligent agents that predict, recommend, and even decide on organizational transactions. Automated intelligent systems are under scrutiny because of biases recently uncovered in systems rolled out by some of today's tech giants. It is being proposed that such intelligent agents be regulated and be developed and implemented ethically, with equality, non-discrimination, accountability, and safety suggested as guiding principles. Regulating automated intelligent systems will have a huge impact on every industry.

Need To Be Accountable

Most intelligent systems are an ensemble of machine learning algorithms or deep learning neural networks, which fundamentally fit a function over past data. Some are built from a mathematical formula developed through simulation or from past data. Ulti
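As one concrete way to act on the non-discrimination principle, here is a minimal sketch that checks a trained model's decisions for demographic parity across two groups. The synthetic data, the group attribute, and the reported gap are illustrative assumptions, not something prescribed in the post.

```python
# Minimal sketch: checking a model's decisions for demographic parity.
# The data, the 'group' attribute, and what counts as a "large" gap are
# illustrative assumptions, not a prescribed standard.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
group = np.random.RandomState(0).randint(0, 2, size=len(y))  # protected attribute

model = LogisticRegression(max_iter=1000).fit(X, y)
decisions = model.predict(X)

# Positive-decision rate per group; a large gap flags potential discrimination.
rate_a = decisions[group == 0].mean()
rate_b = decisions[group == 1].mean()
print(f"group A rate={rate_a:.3f}, group B rate={rate_b:.3f}, "
      f"gap={abs(rate_a - rate_b):.3f}")
```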

Predicting Helpful Posts

Original paper: https://www.aclweb.org/anthology/N19-1318

Here is a quick summary: The research purpose is to identify helpful posts in forum discussion threads, especially long-running discussions. The approach is to model the relevance of each post with respect to the original post, and the novelty of a post (content not already presented in earlier posts of the thread) based on a windowed context. To model the 'relevance,' the original post and the target post are each encoded using an RNN (GRU), and the encoded representations are then element-wise multiplied. To model the 'novelty,' the target post and the past K posts are also encoded using the same RNN text encoder, where K is the number of past posts taken into context (a K between 7 and 11 worked best for the Reddit dataset used in the experiments; performance stops improving after a certain number of posts are taken into context). Once the K posts are text encoded, they are then fed through another R
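Here is a minimal sketch of the 'relevance' component as described above, assuming a PyTorch implementation; the names, dimensions, and toy vocabulary are my own illustrative choices and may differ from the authors' code.

```python
# Minimal sketch of the 'relevance' component: encode the original post and the
# target post with a shared GRU, then element-wise multiply the encodings.
# Dimensions and the toy vocabulary are assumptions for illustration.
import torch
import torch.nn as nn

class RelevanceEncoder(nn.Module):
    def __init__(self, vocab_size=1000, embed_dim=64, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def encode(self, token_ids):
        # Encode a post into a single vector: the GRU's last hidden state.
        _, h_n = self.gru(self.embed(token_ids))
        return h_n.squeeze(0)                      # (batch, hidden_dim)

    def forward(self, original_post, target_post):
        # Element-wise product of the two encodings models 'relevance'.
        return self.encode(original_post) * self.encode(target_post)

# Toy usage with random token ids standing in for tokenized posts.
model = RelevanceEncoder()
original = torch.randint(0, 1000, (1, 20))         # original post, 20 tokens
target = torch.randint(0, 1000, (1, 15))           # candidate (target) post
relevance_features = model(original, target)
print(relevance_features.shape)                    # torch.Size([1, 128])
```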

Abusive Language Detection

Original paper: https://www.aclweb.org/anthology/N19-1221

Here is a quick summary: In a paper submitted by Facebook AI, London to the recent NAACL (North American Chapter of the Association for Computational Linguistics) conference held in Minneapolis, the authors presented a novel approach using Graph Convolutional Networks that outperforms some of the best existing ways to detect abusive language on the internet. The approach makes use of a heterogeneous graph that contains an author's community network and their tweets. The graph is then used both to predict the class of a tweet and to generate an embedding. In the paper's experiments, the researchers used embeddings from node2vec (sample implementation here: https://snap.stanford.edu/node2vec/) and a 2-layer Graph Convolutional Network. The Graph Convolutional Network, which represents the authors' profiles and tweets, was used to classify an author's tweets into three classes using a softmax layer as the output layer of the network. To extract the embedding f
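Here is a minimal sketch of a 2-layer Graph Convolutional Network over a small dense adjacency matrix that classifies nodes into three classes with a softmax output, written in PyTorch; the toy graph, feature size, and layer widths are illustrative assumptions and not the setup from the paper.

```python
# Minimal sketch of a 2-layer GCN with a softmax output over three classes.
# The toy heterogeneous graph (author and tweet nodes), feature size, and
# layer widths are assumptions for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TwoLayerGCN(nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes=3):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hidden_dim)
        self.w2 = nn.Linear(hidden_dim, num_classes)

    def forward(self, adj_norm, x):
        # Each layer: propagate neighbor features, then transform.
        h = F.relu(self.w1(adj_norm @ x))
        # Softmax output layer over the three classes, as described above.
        return F.softmax(self.w2(adj_norm @ h), dim=1)

# Toy graph: 6 nodes (authors and tweets), 16-dimensional node features.
num_nodes, in_dim = 6, 16
adj = torch.eye(num_nodes)                        # self-loops
adj[0, 1] = adj[1, 0] = 1.0                       # e.g., author 0 wrote tweet 1
deg = adj.sum(dim=1, keepdim=True)
adj_norm = adj / deg                              # simple row normalization

x = torch.randn(num_nodes, in_dim)
model = TwoLayerGCN(in_dim, hidden_dim=32)
class_probs = model(adj_norm, x)                  # (6, 3) class probabilities
print(class_probs.shape)
```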