Sentiment analysis: Frequency-based models

We give our clients insights into their online reputation based on their online reviews and ratings. One of the things we do is pull apart the text of reviews to understand what they deal with, so we can tell our clients what their customers are talking about and how happy those customers are with key aspects of the business.

For example, we might identify 100 reviews for a client that mention price and, leveraging the star ratings of those reviews, discern that 80% of them are positive with an average rating of 4.0 stars. This method can be improved, though: a positive review that mentions price is not necessarily positive about price. For example:

The food was awesome, and the service absolutely excellent. The price was very high for a coffee-shop style restaurant.

This 5-star review is obviously negative about the price of the restaurant. We need a model that gives us the local sentiment of a sentence or subsentence, so that we can understand which elements drive the rating of the review. I’ll explain some of the techniques we studied, implemented and benchmarked to build our Sentiment Mining Tool.

Naive Bayes Classifier

Naive Bayes is the first method we tried, and the easiest way to classify sentiment in a text. It is based on Bayes’ formula for conditional probabilities:

$$P(A \mid B) = \frac{P(B \mid A)\,P(A)}{P(B)}$$
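Under the Bag of Words representation described next, the naive independence assumption turns this into a product over word features. As a sketch of the standard naive Bayes derivation (with notation chosen here, not taken from the original):

$$P(\text{positive} \mid w_1, \dots, w_n) \;\propto\; P(\text{positive}) \prod_{i=1}^{n} P(w_i \mid \text{positive})$$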

We’ll represent a text by a Bag of Words: a set of features of the form “the word w appears f times”, one for each word w in the sentence, where f is the frequency of w in the sentence. Under the naive Bayes assumption that these features are independent, the formula lets us compute the probability that the sentence is positive (A) given that each word w appears f times (B). From a large enough dataset, we can estimate the probability that a sentence is positive (A), the probability of each feature and, by independence, the probability of their intersection (B). Training the model on a set of 10,000 annotated sentences, we get the features most informative for predicting whether a sentence is positive or negative. Here are the 10 most informative features we get:

[Figure: the Naive Bayes classifier’s most informative features]
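A table like this is the kind of output NLTK produces; here is a minimal sketch of such a classifier using NLTK’s NaiveBayesClassifier (the library choice and the tiny training list are our assumptions for illustration, not necessarily what runs in production):

from collections import Counter

import nltk

def bag_of_words(sentence):
    # Feature set: "the word w appears f times" for each word w
    return dict(Counter(sentence.lower().split()))

# Illustrative stand-in for the ~10,000 annotated sentences
train = [
    ("the food was awesome and the service excellent", "pos"),
    ("the price was very high and the room was dirty", "neg"),
    ("friendly staff and delicious coffee", "pos"),
    ("horrible experience, rude waiter", "neg"),
]

train_set = [(bag_of_words(text), label) for text, label in train]
classifier = nltk.NaiveBayesClassifier.train(train_set)

print(classifier.classify(bag_of_words("the staff was friendly")))  # likely 'pos'
classifier.show_most_informative_features(10)  # prints a table like the one above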


This method is the easiest to implement, and its big advantage is that it is completely transparent: when we classify a sentence, we know that the classifier found a set of strongly positive or strongly negative words, and that this is why the sentence was classified that way.

How to improve it

However, this method has several drawbacks.

First, it fails to identify the neutral class. Words can carry a positive or a negative meaning (“good”, “awesome”, “horrible”, …), but no word carries a neutral one. Neutrality usually comes from the absence of such emotionally charged words, or from a sentence structure that reflects the absence of strong emotion, and the Bag of Words representation captures neither.

It also fails to understand intensity and negation. Comparing “good” and “quite good”, for instance, the first is more likely to appear in a positive sentence than the second, yet a Bag of Words cannot tell them apart. We tried two fixes, adding a list of meaningful bigrams (so that “quite good” is read as a single token, as sketched below) and training the model on bigrams instead of single words, but neither improved the model much. We also fail to identify negations most of the time, because this model ignores word order.
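The bigram variant can be sketched like this (the bigram list here is invented for illustration; ours was derived from the training data):

from collections import Counter

MEANINGFUL_BIGRAMS = {("quite", "good"), ("not", "tasty")}  # illustrative list

def features_with_bigrams(tokens):
    # Merge known bigrams into single tokens, then count as usual
    merged, i = [], 0
    while i < len(tokens):
        if i + 1 < len(tokens) and (tokens[i], tokens[i + 1]) in MEANINGFUL_BIGRAMS:
            merged.append(tokens[i] + "_" + tokens[i + 1])
            i += 2
        else:
            merged.append(tokens[i])
            i += 1
    return dict(Counter(merged))

print(features_with_bigrams("the food was quite good".split()))
# {'the': 1, 'food': 1, 'was': 1, 'quite_good': 1}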

Most of all, the Naive Bayes model doesn’t perform well on the local sentiment analysis problem. In a long text, a high frequency of positive words (“sensational”, “tasty”, …) makes it very likely that the author is expressing positive sentiment. But since our goal is to determine local sentiment, we want to run the tool on short sentences and subsentences. (We already have a star rating that tells us the author’s overall sentiment.) With so few words in a sentence to aggregate, we need to understand the semantic structure very precisely.

The Bag of Words representation is a very bad fit for this. In the sentence “The food could have been more tasty.”, for instance, we detect the word “tasty”, which is related to a positive feeling, but we miss that “could have been more” is a kind of negation or nuance. Many short sentences are like that: evaluating on a dataset of short sentences alone dropped our accuracy from around 77% to less than 65%.

Rule-based sentiment models

To improve on the Naive Bayes method and make it fit the short-sentence sentiment analysis challenge, we added rules to take into account negations, intensity markers (“more”, “extremely”, “absolutely”, “the most”, …), nuance, and other semantic structures that often appear near sentiment-bearing phrases and change their meaning. For instance, in “The food wasn’t very tasty”, we want to understand that “not very tasty” is less negative than “not tasty” or “not tasty at all”.

We leveraged the results of the Naive Bayes training to build a large vocabulary of positive and negative words. When we process a sentence, we assign every word a positive and a negative score, and compute the overall scores through a precise analysis of the semantic structure, based on the open-source library spaCy’s pipelines for part-of-speech tagging and dependency parsing. This yields positive, negative and neutral scores, the neutral score being defined as the proportion of words in the sentence that are neither positive nor negative. We then used a deep-learning technique to learn, from our training set, the relation between these scores and the sentiment. Here are the graphs we obtained for negative, neutral and positive sentences:

[Figure: sentiment score distributions for negative, neutral, and positive sentences]
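To make the scoring step concrete, here is a minimal sketch in the spirit of this approach, using spaCy’s dependency parse (the lexicons, intensifier weights, and rules below are illustrative assumptions, far simpler than the production rule set):

import spacy

nlp = spacy.load("en_core_web_sm")

# Illustrative lexicons; ours are built from the Naive Bayes training results.
POSITIVE = {"tasty": 1.0, "excellent": 1.5, "awesome": 1.5, "good": 1.0}
NEGATIVE = {"dirty": 1.0, "horrible": 1.5, "rude": 1.0}
INTENSIFIERS = {"very": 1.5, "extremely": 2.0, "absolutely": 2.0, "quite": 1.2}

def sentiment_scores(sentence):
    doc = nlp(sentence)
    pos_score = neg_score = 0.0
    neutral_words = 0
    for token in doc:
        word = token.lower_
        if word not in POSITIVE and word not in NEGATIVE:
            neutral_words += 1
            continue
        score = POSITIVE.get(word) or NEGATIVE.get(word)
        # Intensity markers attach to the sentiment word in the dependency tree
        for child in token.children:
            if child.lower_ in INTENSIFIERS:
                score *= INTENSIFIERS[child.lower_]
        # A "neg" dependency on the word or its head flips the polarity
        negated = any(
            child.dep_ == "neg"
            for child in list(token.children) + list(token.head.children)
        )
        positive = (word in POSITIVE) != negated  # XOR with negation
        if positive:
            pos_score += score
        else:
            neg_score += score
    return pos_score, neg_score, neutral_words / len(doc)

print(sentiment_scores("The food wasn't very tasty."))  # high negative score

In the real pipeline, the learned model then maps these three scores to a sentiment label.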

The model decides very well whether an expressive sentence is positive or negative (we get around 75% accuracy), but it struggles to find a criterion for neutrality or absence of sentiment (on our test set, it misclassifies neutral sentences 80% of the time). That is much better than Naive Bayes, but 75% is still below the state of the art for positive/negative decisions.


Finding Optimizations in Python With Program Profiling

This blog post discusses profiling methods, specifically for the Python programming language.

Within the data science team, one of the things we are working to build is a processing model for large amounts of textual and review data using natural language processing.

Because we are processing data at such a large scale, it is important that our model is properly optimized to reduce any unnecessary overhead. As such, it is important to identify which areas in our code are taking up the most time. This is where profiling comes in.

Program profiling is a form of analysis that measures things such as memory usage, time usage, the usage of particular instructions, or the frequency and duration of function calls. It is a way to understand where the most resources are being spent, so that optimizations can be targeted at those areas.

Use Case

Our use case was to find optimizations in a series of Python files used in our model. To find which parts of the program were stalling execution, we used profiling. Python has many native and third-party profiling tools that allow a range of analyses of runtime and memory usage, as well as visualization. Some of the tools we looked at were cProfile, line_profiler, memory_profiler and QCacheGrind. For our use case, we were most interested in profiling methods that show which parts of the program use the most time, and whether there are any blocking resources.

Profiling Using the Standard Python Library

Profiling Python can be done with the standard Python library, as well as third party modules and programs.

The standard Python library provides three different implementations of the same profiling interface: cProfile, profile and hotshot. The most popular of the three is cProfile.

cProfile

cProfile can be run in the terminal, as well as imported as a module in Python.

For each function, it reports ncalls (number of calls), tottime (total time spent in the function itself, excluding calls to other functions), percall (tottime divided by ncalls), cumtime (cumulative time, including calls to other functions), and a second percall (cumtime divided by the number of primitive calls).

Example:

import cProfile
import re
cProfile.run('re.compile("foo|bar")')

Output:

    197 function calls (192 primitive calls) in 0.002 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    1    0.000    0.000    0.001    0.001 <string>:1(<module>)
    1    0.000    0.000    0.001    0.001 re.py:212(compile)
    1    0.000    0.000    0.001    0.001 re.py:268(_compile)
    1    0.000    0.000    0.000    0.000 sre_compile.py:172(_compile_charset)
    1    0.000    0.000    0.000    0.000 sre_compile.py:201(_optimize_charset)
    4    0.000    0.000    0.000    0.000 sre_compile.py:25(_identityfunction)
  3/1    0.000    0.000    0.000    0.000 sre_compile.py:33(_compile)

Although we are able to see timing information on a function basis, we aren’t able to see which lines specifically are taking up the most time.

Here is an example that runs cProfile in the terminal and reduces the output to the top 33 lines by cumulative time:

python -m cProfile -s cumulative program.py > temp_file && head -n 33 temp_file && rm temp_file
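The same top-N view can be produced without a temp file using the standard pstats module. A minimal sketch (the program module and its main() entry point are hypothetical):

import cProfile
import pstats

import program  # hypothetical module under test

cProfile.run("program.main()", "program.prof")   # save stats to a file
stats = pstats.Stats("program.prof")
stats.sort_stats("cumulative").print_stats(33)   # top 33 entries by cumulative time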

Third Party Profiling Modules

Third-party modules include line_profiler and memory_profiler for line-by-line profiling, and QCacheGrind for program visualization.

line_profiler – Line-by-line Timing

line_profiler is a third-party module that does line-by-line profiling of a Python program, showing the time spent on each individual line. After installing line_profiler, you use it by decorating the functions you want to profile with @profile, then running kernprof on your Python file to produce a .lprof results file that line_profiler can display (a sketch of such a file follows the install command below).

Install:

pip install line_profiler
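For instance, a file ready for kernprof might look like this (a hypothetical myProgram.py; kernprof -l injects the @profile decorator at runtime, so no import is needed):

# myProgram.py
@profile                        # injected into builtins by `kernprof -l`
def slow_sum(n):
    total = 0
    for i in range(n):          # per-line hits and timings appear here
        total += i * i
    return total

if __name__ == "__main__":
    slow_sum(10_000_000)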

Example:

kernprof -l myProgram.py

python -m line_profiler myProgram.py.lprof

Timer unit: 1e-06 s

File: primes.py
Function: Proc2 at line 149

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
  149                                           @profile
  150                                           def Proc2(IntParIO):
  151     50000        82003      1.6     13.5      IntLoc = IntParIO + 10
  152     50000        63162      1.3     10.4      while 1:
  153     50000        69065      1.4     11.4          if Char1Glob == 'A':
  154     50000        66354      1.3     10.9              IntLoc = IntLoc - 1
  155     50000        67263      1.3     11.1              IntParIO = IntLoc - IntGlob
  156     50000        65494      1.3     10.8              EnumLoc = Ident1
  157     50000        68001      1.4     11.2          if EnumLoc == Ident1:
  158     50000        63739      1.3     10.5              break
  159     50000        61575      1.2     10.1      return IntParIO

In this output we can see, line by line, the number of times each line was executed, the time per execution, the total execution time, and the percentage of time used. This helps you zero in on the lines actually causing slowdowns in your program.

memory_profiler – Line-by-line memory usage

memory_profiler is another third-party package that is similar to line_profiler. It does line-by-line profiling of a Python program, measuring memory instead of time.
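For reference, the example.py profiled below can be read off the Line Contents column of the output; as a runnable file it looks like this:

# example.py -- running via `python -m memory_profiler example.py`
# injects the @profile decorator, so no import is required.
@profile
def my_func():
    a = [1] * (10 ** 6)
    b = [2] * (2 * 10 ** 7)
    del b
    return a

if __name__ == "__main__":
    my_func()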

pip install -U memory_profiler

python -m memory_profiler example.py

Line #    Mem usage  Increment   Line Contents
==============================================
    3                           @profile
    4      5.97 MB    0.00 MB   def my_func():
    5     13.61 MB    7.64 MB       a = [1] * (10 ** 6)
    6    166.20 MB  152.59 MB       b = [2] * (2 * 10 ** 7)
    7     13.61 MB -152.59 MB       del b
    8     13.61 MB    0.00 MB       return a

The output is similar to line_profiler’s. In the output above, memory usage jumps when the large list b is allocated and drops again when it is deleted. This is helpful when a program performs operations that require a lot of memory.

Visualization with QCacheGrind

QCacheGrind is a visual profiling tool: it can be used to view the call stack of a program and see the cumulative time usage of each function in it. You can visually trace through the call stack, and even view line-by-line time usage in the source file.

Installation:

pip install pyprof2calltree

brew install graphviz

brew install qcachegrind --with-graphviz

Use:

python -m cProfile -o myscript.cprof myProgram.py

pyprof2calltree -k -i myscript.cprof

Result and Comparison

Profiling helped us zero in on an iteration loop that was taking a large percentage of the run time. It turned out that repeated index references into a large DataFrame object were the culprit: while the pandas DataFrame is a powerful data structure for applying vectorized operations and aggregations across large amounts of data, it is inherently slower than a simple dictionary when it comes to repeatedly accessing indexed rows or iterating through rows. After identifying this via profiling, we optimized the program by converting the DataFrame to a list of dictionaries, as sketched below.
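A sketch of that optimization (the column names and the process step are hypothetical):

import pandas as pd

def process(review, rating):
    pass  # stand-in for our per-row NLP processing

df = pd.DataFrame({"review": ["Great food", "Too pricey"], "rating": [5, 2]})

# Before: repeated indexed access re-enters pandas machinery on every call
for i in range(len(df)):
    row = df.iloc[i]
    process(row["review"], row["rating"])

# After: convert once to plain dictionaries, then iterate cheaply
for row in df.to_dict("records"):
    process(row["review"], row["rating"])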

We found line_profiler to be the most useful for finding which areas of code to optimize: it shows the percentage of time used, line by line, in each function we are interested in. Tools such as cProfile and QCacheGrind give a broad view of which functions take the most time, but do not show which lines of a function are the trouble spots. memory_profiler is good for programs that use large amounts of memory, but in our use case memory was not the constraint.