Topic Modeling to Understand Online Reviews

With so many online reviews across many social media websites, it is hard for companies to keep track of their online reputation. Businesses can benefit immensely if they can understand general trends of what their customers are talking about online. A common method to quickly understand trends in topics being discussed in a corpus of text is Latent Dirichlet Allocation.

Latent Dirichlet Allocation assumes each document consists of a combination of topics, and each topic consists of a combination of words. It then approximates probability distributions of topics in a given document and of words in a given topic.

Goal

We will perform topic modeling via Latent Dirichlet Allocation (LDA) on online reviews of a beauty retailer from various social media sources.

Methodology

Preprocessing:

  1. Cleaning Text Data: Before we model the reviews data with LDA, we clean the review text by lemmatizing, removing punctuation, removing stop words, and filtering for only English reviews.
  2. Identifying Bigrams and Trigrams: We want to identify bigrams and trigrams so we can concatenate them and treat them as single tokens. Bigrams are two-word phrases, e.g. ‘social media’, whose words are more likely to co-occur than to appear separately; likewise, trigrams are three-word phrases that tend to co-occur, e.g. ‘Procter and Gamble’. We use the Pointwise Mutual Information (PMI) score to identify significant bigrams and trigrams to concatenate. We also keep only bigrams and trigrams matching the part-of-speech patterns (noun/adj, noun) and (noun/adj, any, noun/adj), since these are common structures for noun-type n-grams. This helps the LDA model cluster topics better.
  3. Filtering Nouns: Nouns are the most likely indicators of a topic. For example, in the sentence ‘The store is nice’, we know the sentence is talking about ‘store’; the other words provide context and explanation about the topic (‘store’) itself. Therefore, filtering for nouns keeps the words that are most interpretable in the topic model. A code sketch of these preprocessing steps follows this list.
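Below is a minimal, illustrative sketch of these preprocessing steps, assuming `reviews` is a list of raw English review strings; it uses spaCy for lemmatization and part-of-speech tags and gensim’s Phrases, whose NPMI scoring is a normalized variant of PMI (thresholds and counts here are placeholders, not tuned values):

import spacy
from gensim.models.phrases import Phrases, Phraser

nlp = spacy.load("en_core_web_sm")

lemmas, noun_lemmas = [], []
for review in reviews:
    doc = nlp(review.lower())
    kept = [t for t in doc if not t.is_punct and not t.is_stop and not t.is_space]
    lemmas.append([t.lemma_ for t in kept])
    noun_lemmas.append({t.lemma_ for t in kept if t.pos_ in ("NOUN", "PROPN")})

# Concatenate frequent collocations into single tokens, e.g. "social media" -> "social_media".
bigram = Phraser(Phrases(lemmas, min_count=20, threshold=0.5, scoring="npmi"))
trigram = Phraser(Phrases(bigram[lemmas], min_count=20, threshold=0.5, scoring="npmi"))

docs_for_lda = []
for tokens, nouns in zip(lemmas, noun_lemmas):
    grams = trigram[bigram[tokens]]
    # Keep detected n-grams plus standalone nouns for a more interpretable topic model.
    docs_for_lda.append([tok for tok in grams if "_" in tok or tok in nouns])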

Modeling

  1. Optimizing the number of topics: LDA requires that we specify the number of topics that exist in a corpus of text. There are several common measures that can be optimized, such as predictive likelihood, perplexity, and coherence. Much of the literature indicates that maximizing coherence, particularly the measure named C_v, leads to better human interpretability. This measure assesses the interpretability of topics given the set of words in the generated topics. Therefore, we will optimize this measure.

    The number of topics that yields maximum coherence is around 3-4. We will examine both, because 4 topics may still be coherent while providing more information.
  2. Using gensim’s LDA package to perform topic modeling: With the optimal number of topics, we use gensim’s LDA package to model the data. After comparing 3 topics (left) and 4 topics (right), we concluded that grouping into 4 topics yielded more coherent and insightful topics:

    These 4 main topics can be summarized as: hair salon service, product selection and pricing, brow bar and makeup service, and customer service.
  3. Further enhance interpretability via relevancy score: Sometimes, words ranked as top words for a given topic are ranked high simply because they are globally frequent across the corpus. The relevancy score helps to prioritize terms that belong more exclusively to a given topic, which can increase interpretability even more. The relevance of term w to topic k, given a weight parameter \lambda, is defined as:

    r(w, k \mid \lambda) = \lambda \log p(w \mid k) + (1 - \lambda) \log \frac{p(w \mid k)}{p(w)}

    The first term measures the probability of term w occurring in topic k, and the second term measures the lift of the term’s probability within the topic over its marginal probability across the corpus. A lower lambda value gives more weight to the second term, i.e. to topic exclusivity. We can use Python’s pyLDAvis for this. For example, when lowering lambda, we can see that topic 0 ranks terms that are even more relevant to the topic of hair salon service the highest.

    The pyLDAvis tool also gives two other important pieces of information. The circles represent the topics, and the distance between the circles visualizes how related topics are to each other; the above plot shows that our topics are quite distinct. Additionally, the size of a circle represents how prevalent that topic is across the corpus of reviews. Circle number 1 represents the topic about customer service, and the fact that it is the biggest circle means that the reviews mention customer service the most. Circle number 2 represents the topic about hair, and the visualization indicates that this topic makes up 22.3% of all tokens. A code sketch of this modeling workflow follows this list.
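A rough sketch of this workflow with gensim and pyLDAvis, assuming `docs_for_lda` is the list of preprocessed token lists from the previous section (hyperparameters are illustrative):

from gensim.corpora import Dictionary
from gensim.models import LdaModel, CoherenceModel
import pyLDAvis.gensim  # pyLDAvis.gensim_models in newer pyLDAvis releases

dictionary = Dictionary(docs_for_lda)
dictionary.filter_extremes(no_below=10, no_above=0.5)
corpus = [dictionary.doc2bow(doc) for doc in docs_for_lda]

# Try a range of topic counts and keep the one with the highest C_v coherence.
coherences = {}
for k in range(2, 11):
    lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=k, passes=10, random_state=42)
    cm = CoherenceModel(model=lda, texts=docs_for_lda, dictionary=dictionary, coherence="c_v")
    coherences[k] = cm.get_coherence()
best_k = max(coherences, key=coherences.get)  # peaked around 3-4 topics for our reviews

# Fit the final model (we kept 4 topics) and explore it interactively:
# the relevance slider (lambda) and inter-topic distance map come from pyLDAvis.
lda = LdaModel(corpus=corpus, id2word=dictionary, num_topics=4, passes=10, random_state=42)
vis = pyLDAvis.gensim.prepare(lda, corpus, dictionary)
pyLDAvis.save_html(vis, "lda_topics.html")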

Results

After applying the above steps, here are the 4 topics and top words for each:

The model has enabled us to understand the 4 most common topics talked about in online reviews of the beauty retailer: customer service, hair salon service, product selection, and eyebrow/makeup service.


Online reviews: keyword clusters

Introduction

When analyzing online reviews, we often focus on keywords. For various purposes such as review classification or keyword suggestion, we may need to group those words by closeness of meaning. Given a set of keywords, we may want to split it into clusters of words. One way we can do so is by using Word2Vec to map each word to a vector value, and then apply hierarchical clustering to those vectors.

First, let’s look at a simple example. We have the following input: [“banana”, “apple”, “lemon”, “nice”, “car”, “RV”, “truck”, “desk”]

From a clustering algorithm, we would expect this output:

  • cluster 1: [“banana”, “apple”, “lemon”]
  • cluster 2: [“car”,”rv”,”truck”]
  • out of the clusters: [“nice”, “desk”]

1. Word2Vec

In this post we focus more on the grouping algorithms than on the details of the implementation of the Word2Vec model.

In short, Word2Vec provides word embeddings: it associates a vector with each word. Word2Vec trains a neural network to predict the words surrounding each word in a corpus. Once training is done, a word's vector is read from the network's projection (hidden) layer, just before the output layer that predicts the context. Words with close meanings should be represented by vectors with close values.

We trained our Word2Vec model on 10^6 online reviews with the gensim Word2Vec implementation, using a window of 10 words and generating 300-dimensional vectors.
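A minimal sketch of that training setup with gensim, assuming `tokenized_reviews` is a list of token lists built from the reviews (the queried word is just an illustration; parameter names follow gensim 3.x, where gensim 4.x renames `size` to `vector_size`):

from gensim.models import Word2Vec

w2v = Word2Vec(sentences=tokenized_reviews, size=300, window=10, min_count=5, workers=4)

vector = w2v.wv["staff"]                        # the 300-dimensional vector for a word
similar = w2v.wv.most_similar("staff", topn=5)  # nearby words by cosine similarity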

Our use cases involve data from various industries, so when we want to work with keywords on online reviews for a given industry, we train Word2Vec on reviews from this industry, since this context may be useful in specifying the meaning of a word. For example, considering the word “limb”, the meaning may be different in restaurant reviews than in hospital reviews.

2. Hierarchical clustering

To generate clusters, we apply hierarchical clustering. This is an iterative process that creates a new cluster at each step by aggregating two existing clusters, choosing at each step the merge that minimizes the increase of a given criterion. For a set of n vectors, n-1 steps take you from n singleton clusters to one cluster containing everything. The process can also be represented graphically, showing every level of granularity within a single diagram (as below).

example of hierarchical clusters

3. Distances used for clustering

Of course, the key to generating these clusters is the distance used to determine which keywords or clusters are “close” together. To measure the dispersion of a set of vectors we can use different metrics. A classic one is the empirical variance, based on the Euclidean distances between the vectors and their mean.

3.1 Empirical variance

Let S = (x_i)_{i \in [|1,n|]} be a set of n points, and G its center of inertia (its mean). Then the empirical variance is:

V = \sum_{i = 1}^{n}{||x_i - G||^2}

But we can define two other variances: the inter-class variance V_{inter} and the intra-class variance V_{intra}.

Let (C_k)_{k \in [|1, K|]} be K subsets of S forming a partition of S (the clusters), let (G_k)_{k \in [|1, K|]} be the centers (means) of the clusters, and let |C_k| be the number of elements in cluster C_k.

V_{inter} = \sum_{k \in [|1, K|]}{|C_k|\times|| G_k - G ||^2}
V_{intra} = \sum_{k \in [|1, K|]}{\sum_{i \in C_k}{||x_i - G_k||^2}}

We see that V_{intra} measures the dispersion inside each subset, while V_{inter} measures the distance between the clusters.

Huygens’ theorem shows that:

V = V_{inter} + V_{intra}

Therefore, when we cluster, since V is a constant, minimizing the intra-class variance is the same as maximizing the inter-class variance. This is exactly what we want: clusters that are far from one another (high V_{inter}), with every cluster close to its own center (low V_{intra}).
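A quick numerical check of the decomposition V = V_{inter} + V_{intra} on random data (illustrative only; any partition of the points works):

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))           # 20 points in 3 dimensions
labels = rng.integers(0, 3, size=20)   # an arbitrary partition into 3 clusters

G = X.mean(axis=0)
V = np.sum(np.linalg.norm(X - G, axis=1) ** 2)
V_inter = sum(np.sum(labels == k) * np.linalg.norm(X[labels == k].mean(axis=0) - G) ** 2
              for k in np.unique(labels))
V_intra = sum(np.sum(np.linalg.norm(X[labels == k] - X[labels == k].mean(axis=0), axis=1) ** 2)
              for k in np.unique(labels))
assert np.isclose(V, V_inter + V_intra)  # Huygens' theorem holds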

We will now see that, at each aggregation of two clusters, the Ward distance gives us a way to perform the aggregation with the lowest possible increase of V_{intra}.

3.2 Ward distance

Ward distance is a distance between two clusters that we can use in hierarchical clustering: at each step of the algorithm, we aggregate the two clusters with the smallest Ward distance between them. This distance is chosen because of the following result:

Aggregating the two clusters with the minimum Ward distance is equivalent to making the aggregation with the smallest increase of V_{intra}.

Below is an example of hierarchical clustering using Ward distance on a set of keywords from hospital online reviews.

dendrogram_with_ward_distance
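A minimal sketch of this step with SciPy, assuming `keywords` is a list of words present in the Word2Vec model `w2v` trained above:

import numpy as np
import matplotlib.pyplot as plt
from scipy.cluster.hierarchy import linkage, dendrogram

vectors = np.array([w2v.wv[w] for w in keywords])
Z = linkage(vectors, method="ward")   # n-1 merges, each minimizing the increase of V_intra

dendrogram(Z, labels=keywords)
plt.show()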

3.3 Thresholds on cluster distance

Now that we have an efficient method to perform hierarchical clustering that keeps the intra-class variance as low as possible, we have to determine where to stop the aggregation process, i.e., which level of granularity to use as our clusters in a particular application.

There are different ways to choose the right number of clusters (when to stop the aggregations). A classic way in hierarchical clustering is to use the maximum of the second derivative of the Ward distance, which effectively means looking for a big gap in the Ward distance and stopping before this gap.

But we are looking for clusters with high word similarity, so we don’t want to use the second-derivative method, which is relative. We also want a criterion that is independent of the cluster size. Therefore, we will use a threshold on a distance, and that distance should be normalized.

Even though we use the Ward distance as an aggregation distance, we do not necessarily have to use it to delimit the clusters. Instead we can use a threshold on a “dispersion” metric for each cluster C_k. Let’s consider the following three metrics, in addition to the Ward distance:

  • Mean variance in the cluster: \frac{\sum_{i \in C_k}{||x_i - G_k||^2}}{|C_k|}
  • Maximum distance between two elements in the cluster: \max\limits_{i,j \in C_k}{||x_i - x_j||^2}
  • Maximum distance to the mean of the cluster: \max\limits_{i \in C_k}{||x_i - G_k||^2}

dendrograms_with_different_metrics

On this and other examples, the maximum distance between two elements seems to perform worse than the other metrics. The maximum distance to the mean is a good metric, but it doesn’t consider all the words in the cluster. We will therefore use a threshold on the mean variance to delimit the clusters. In the next dendrogram we plot this threshold: each cluster must have a mean variance below 0.6. In this example we obtain 4 clusters.

threshold_dendrogram

But we need to be careful when using metrics other than the Ward distance to select clusters: unlike the Ward distance, the metric may not increase at each iteration. Here is an example of this:

variance_not_increasing

This gives an advantage to the maximum distance between two vectors in each cluster, because that distance cannot decrease when we add words to a cluster. For our purposes, we keep the mean variance, but we stop the aggregation the first time the mean variance exceeds the threshold, in order to keep the method consistent.
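One way to implement this stopping rule on top of the SciPy linkage, assuming `vectors` and `Z` come from the sketch above and t1 = 0.6 is the threshold used in our example:

import numpy as np
from scipy.cluster.hierarchy import fcluster

def mean_variance(points):
    center = points.mean(axis=0)
    return np.mean(np.sum((points - center) ** 2, axis=1))

def clusters_under_threshold(vectors, Z, t1=0.6):
    labels = np.arange(len(vectors))                   # start from singleton clusters
    for n_clusters in range(len(vectors) - 1, 0, -1):  # follow the merges, fine to coarse
        candidate = fcluster(Z, t=n_clusters, criterion="maxclust")
        if any(mean_variance(vectors[candidate == c]) > t1 for c in np.unique(candidate)):
            break                                      # first exceedance: keep the previous cut
        labels = candidate
    return labels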

Now that we have a way to select the clusters, let’s explore some other issues and possible ways to address them.

4. Improve the quality of the clusters

4.1 Outliers

We select clusters based on the mean variance. But a cluster can have a low mean variance because it contains many vectors close to each other and a single vector far away from them. We want to remove vectors that are too far from the center of the cluster, as in the left part of the following diagram:

cluster_outlier

In this diagram, the mean variance is lower in the red cluster than in the blue one, but the point in the bottom-left corner of the red cluster needs to be removed from it.

4.2 “Dimension” effect

We used the mean variance to select the clusters because it’s normalized. We did that because we don’t want to preselect the size of the clusters.

But we notice that, for the same mean variance and a different number of elements in the clusters, lexical proximity seems to increase with the size of the cluster. For example, in the dendrogram we used to illustrate the thresholds, [“arrogant”, ”personality”, ”bedside”] has the same mean variance as [“encouraging”, “understanding”, “caring”, “passionate”, ”listens”]. This may be caused by the high dimension (300) of the vector space.

We can use a better metric to keep the selection of clusters independent of their size. We have good results with: \frac{||x_i - G_k ||}{\sqrt{|C_k|}}

But it seems difficult to use the mean of this metric directly with a threshold after the Ward aggregation, because this mean does not increase monotonically with the aggregations of the clusters.

4.3 New filter against those two problems

Even though the mean of the last metric \frac{||x_i - G_k ||}{\sqrt{|C_k|}} is not useful for selecting the clusters, we may have another use for it. Instead of using it at the cluster level, we can use it at the word level: for each vector, if it is too far from its cluster’s center, it is removed from the cluster. This is very useful for two purposes: getting rid of outliers, and getting rid of small clusters of words with very distant meanings.

Ultimately, we have a three-step process. Let’s return to our first example: [“banana”, “apple”, “lemon”, “nice”, “car”, “RV”, “truck”, “desk”]

  1. Hierarchical aggregation with Ward distance:
    clustering_example
  2. Select clusters with a threshold t1 = 0.6 on their mean variance:
    clustering_example_threshold
    We have three clusters:
    – [“banana”, “apple”, “lemon”]
    – [“rv”, “car”, “truck”]
    – [“nice”, ”desk”]
  3. Remove outlier x_i from cluster C_k when \frac{||x_i - G_k ||}{\sqrt{|C_k|}} is above a threshold t2 = 0.4; this also limits the “dimension” effect (a code sketch of this step follows the list).
    Both “nice” and “desk” fail this test, so we get the expected result:
    – cluster 1: [“banana”, “apple”, “lemon”]
    – cluster 2: [“rv”, “car”, “truck”]
    – out of the clusters: [“desk”, ”nice”]
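A sketch of step 3, assuming `vectors` is the array of word vectors and `labels` the cluster assignment from step 2 (t2 is the threshold from the example above):

import numpy as np

def remove_outliers(vectors, labels, t2=0.4):
    kept = {}
    for c in np.unique(labels):
        idx = np.where(labels == c)[0]
        center = vectors[idx].mean(axis=0)
        # Normalized distance to the cluster center, as defined above.
        scores = np.linalg.norm(vectors[idx] - center, axis=1) / np.sqrt(len(idx))
        kept[c] = idx[scores <= t2]   # words above t2 are dropped from the cluster
    return kept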

5. Further extensions – Use of clusters to extract lexical information

Now that we can select groups of words with very similar meaning, we have some ideas for further research.

An advantage of the Word2Vec word embedding is that it may have geometrical interpretations. We have the example from the Word2Vec creators, who found the word “smallest” by looking at the word representation with the closest cosine distance to vector(“biggest”) – vector(“big”) + vector(“small”).

We easily understand the meaning of the vector (vector(“biggest”) – vector(“big”)). But what about (vector(“big”) – vector(“small”))? Along the segment from vector(“small”) to vector(“big”), we hope to find adjectives related to size, ordered from “small” to “big”.

We tried to detect lexical information with tools such as Principal Component Analysis (PCA) on a huge group of words but without success, so we hope that this clustering will help us to achieve this.

It may be interesting to use tools such as PCA on each cluster to detect geometrical structures with lexical interpretations.

This is an idea of the kind of results that we may expect:

PCA_on_clusters

6. Conclusion

Vectors generated by Word2Vec can be used to find clusters of words. With hierarchical clustering, we find clusters without constraining their number or their size, and we can manually choose threshold values depending on how scattered we want the clusters to be.

These clustering techniques are useful for finding groups of words with the same meaning. They can be used to find keywords more precisely: say we have 100 keywords related to a theme and we want to use Word2Vec to detect new keywords related to this theme. Instead of looking for vectors close to each word’s representation or close to the entire 100-vector set, we can look for vectors close to each cluster that we generated.

It may be very interesting to analyze geometrical repartitions of the vectors inside the clusters with tools such as PCA, and to find if they correspond to a lexical structure.

Sources

Efficient Estimation of Word Representations in Vector Space, Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, 2013

SciPy Hierarchical Clustering and Dendrogram Tutorial, Jörn Hees, 2015


ROI analysis: how Google My Business pages traffic is correlated to online reputation

As an online reputation management company we care about ROI because it reveals the true value of our service and product.

We have clearly seen that working with us results in a valuable increase in web traffic for our clients. However, as a Data Science team we want to go deeper in order to identify and quantify what really drives better online traffic.

Towards this goal, I analyzed web traffic data (in particular Google My Business Insights data) from the GMB pages of our customers and specifically focused on the volumes of actions (calls, website clicks and driving direction requests) and views (appearances of the listing on Google Search and Google Maps) that these listings had over time. Throughout my exploration, I tried to tie this data to data we have internally, such as rating and review histories for these locations and Google Search results for these locations, which give us information about the locations and their competitors.

The difficulty of this analysis lies in the fact that this data has a lot of variance due to various factors, only some of which are tied directly to Online Reputation and Reputation Management. For instance, traffic fluctuations might be due to seasonality, changes in search engine ranking algorithms, improved customer reaction toward a GMB listing that has more/better reviews, a new advertisement campaign or a sale the company could have launched, or just underlying variance due to the unpredictability of hundreds of millions of consumers.

Despite these challenges, we have been able to come up with some interesting insights when looking at views and actions across different locations in particular enterprises or industries. I’ll discuss two of these here. 

Distributions of views, actions, and reviews

Unsurprisingly, a location that has a high number of views also has a high number of actions. Actions (click throughs) can only come from people that have viewed your page.

What’s interesting is how different these volumes of GMB views can be. The distribution of the number of views by location for locations within an enterprise tends to follow a log-normal distribution. (It is interesting to note that we would have exactly the same type of distribution histogram by considering actions instead of views). This skew can make analysis or modeling very challenging.

Furthermore, it turns out that the distribution of the number of reviews by location has a similar, very right-skewed shape and is highly correlated with the view volume. Not surprisingly, the more traffic a location gets, the more reviews it usually has.

As mentioned earlier, one of the key challenges of this kind of analysis is working with data that spans different orders of magnitude. Besides the smoothing effect that a logarithmic transformation of these values (views, actions, number of reviews) provides, the transformation also makes sense because fluctuations of these values over time are mostly relative, which gives the log values a natural interpretation. Indeed, a location that has 100 reviews might get 10 more in the coming month about as easily as a location that has 10 reviews could get 1 more, and the same holds for web views.

However, this transformation has some caveats. One is that locations with smaller values tend to fluctuate by a relatively higher percentage than bigger and more stable locations. This is a simple manifestation of regression to the mean (higher values tend to decrease, smaller values tend to increase), but it is challenging to account for in models. Another is that values, and especially aggregated values, lose their primary quantitative meaning. A model that minimizes a metric (MSE for a regression, for instance) over log values and then switches back to real values will not do well if you ultimately want to optimize MSE over the real values. Defining which metric to use is not obvious but fundamental, and is, of course, part of the art of Data Science.
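A toy illustration of that metric caveat, with made-up numbers: the constant that minimizes MSE in log space, once transformed back, is not the constant that minimizes MSE on the raw values.

import numpy as np

views = np.array([120, 950, 14000, 380])     # hypothetical monthly view counts
log_views = np.log1p(views)                  # log(1 + x) keeps zero-view locations defined

pred_from_log = np.expm1(log_views.mean())   # best constant in log space, mapped back
pred_raw = views.mean()                      # best constant in raw space

mse_from_log = np.mean((pred_from_log - views) ** 2)
mse_raw = np.mean((pred_raw - views) ** 2)
print(mse_from_log > mse_raw)                # True: the log-space fit is worse in raw MSE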

More on the relationship between reviews, views, and actions

Admittedly it is not terribly newsworthy that locations with more views have more reviews on average and vice versa. We obviously want to dig deeper and study questions such as: “Does having or generating more online feedback (i.e. reviews) for a location generate more online traffic or actions?”

We have been looking at this relationship from a number of directions, and one of the most interesting insights has come from looking at the conversion rate (the ratio of actions to views) for a location as a function of how many reviews that location has.

As we discussed, there is an obvious correlation between views and actions, but this conversion rate is not strictly constant. In fact it is generally distributed in a range between 0 and 0.3 and provides us interesting information.

Furthermore, we have found a strong relationship between this rate and the volume of reviews per location. In the graph below, we plotted this conversion rate for different groups based upon their relative number of reviews compared to their top competitors. We define this relative number of reviews as the ratio of our location’s number of reviews to the average number of reviews of the top three locations that show up in a Google categorical search for this location (e.g. “tire store near me”). We defined this variable because we found that, even more than the absolute number of reviews, having more reviews than your competitors plays a critical role in influencing customer behavior.

As I mentioned above, we are performing other analyses to understand how not just having more reviews, but generating more reviews, drives more online views and conversions, and we will continue to surface findings over the coming months.


Doc2Vec and Online Reviews

At reputation.com, we process large amounts of text data for our customers, with the goal of figuring out what people are talking about in a set of reviews and what that can tell us about customer sentiment for our clients. There are a lot of open source tools that we leverage to help us extract information from the text, and one of those tools is Doc2Vec, an algorithm developed by Tomas Mikolov and colleagues at Google. This article is an introduction to some ways we can leverage Doc2Vec to gain insight into a set of online reviews for our clients.

Doc2Vec is a shallow, three-layer neural network that simultaneously learns a vector representation of each word and each sentence of a corpus, in a vector space with a fixed number of dimensions (e.g. 300).

Doc2Vec and Classification

Sentence classification

To start with, let’s look at how we could use Doc2Vec to help us categorize sentences in reviews. We start by training 10 Doc2Vec models on 100,000 online reviews related to the dental industry. Before feeding the text to the algorithm, we clean it by lemmatizing and removing stop words to reduce the initial dimensionality.
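A minimal sketch of that training step with gensim, assuming `sentences` is a list of cleaned token lists (parameters are illustrative; in gensim 4.x `model.docvecs` becomes `model.dv`):

from gensim.models.doc2vec import Doc2Vec, TaggedDocument

tagged = [TaggedDocument(words=tokens, tags=[i]) for i, tokens in enumerate(sentences)]
model = Doc2Vec(tagged, vector_size=300, window=5, min_count=2, epochs=20, workers=4)

# Infer a vector for an unseen sentence and look up the closest training sentences.
vec = model.infer_vector("the parking is always full".split())
closest = model.docvecs.most_similar([vec], topn=3)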

A good model is a model in which two sentences with very different meanings are represented by vectors distant in the vector space. For instance, given the three sentences:

  • “I love this dentist, he has such great bedside manners”,
  • “Dr. Doe truly cares about his patients, and makes them feel comfortable”,
  • “The parking is always full”

the distance between the vectors representing the first two sentences should be shorter than between sentences 1 and 3.

Methodology

We have a database of sentences about the dental industry that have been manually tagged as dealing with certain aspects of the experience of going to the dentist. Those sentences were extracted from online reviews. They can deal with multiple aspects but as they are relatively short in practice they are often about only one topic. To evaluate our model, we are going to use sentences from the categories “Parking and Facilities” and “Bedside Manner”.  Let’s focus on a pool of a hundred sentences that have been manually tagged as being about “Parking and Facilities” and a thousand sentences about “Bedside Manner”. We use the following process:

  • We pick two sentences from “Parking and Facilities” and compute the cosine distance between their representative vectors in our Doc2Vec models.
  • We go through all the sentences in the “Bedside Manner” pool one by one and determine whether our two sentences from “Parking and Facilities” are closer to each other or to the “Bedside Manner” sentence. If the distances from the two “Parking and Facilities” sentences to the “Bedside Manner” sentence are both greater than the distance between the two “Parking and Facilities” sentences, we count it as a success.
  • We repeat this for every pair of sentences in the “Parking and Facilities” pool (a code sketch of this evaluation follows the list).
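A sketch of this evaluation, assuming `parking_vecs` and `bedside_vecs` are the lists of inferred Doc2Vec vectors for the two pools:

from itertools import combinations
from scipy.spatial.distance import cosine

successes = trials = 0
for v1, v2 in combinations(parking_vecs, 2):
    within = cosine(v1, v2)
    for v3 in bedside_vecs:
        trials += 1
        # Success if the two "Parking and Facilities" sentences are closer to each other
        # than either is to the "Bedside Manner" sentence.
        if within < cosine(v1, v3) and within < cosine(v2, v3):
            successes += 1

success_rate = successes / trials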
Results

Across all those comparisons, the success rate is 74%, meaning that 74% of the time the two sentences about “Parking and Facilities” were closer to each other than either was to a given sentence from the “Bedside Manner” category. As a first pass, this is good but not great. Our model clearly captures some of the nuance of the language, but not enough to serve as a standalone classification algorithm. In practice, we use it as just one component of the machine learning tagging model we have built to tag reviews without manual inspection.

Doc2Vec and Topic Modeling

Text clustering

Now, let’s see how the model can be used to spot recurring topics. To do that, we cluster a sample of data from the dental industry by running a KMeans algorithm on the set of vectors representing the sentences. We want each cluster to represent a semantic entity, meaning that vectors from the same cluster should be close in meaning. The more clusters you build, the smaller they are and the more similar the sentences within each cluster are. But having too many small clusters does not provide relevant information: we end up with very specific clusters, and with different clusters about the same topic. Therefore, we are interested in finding K, the number of clusters that gives us the smallest number of coherent clusters. To do that, we plot the clusters’ average inertia (sum of squares within clusters divided by the number of points) and look for the “elbow” of the curve: the point where inertia starts decreasing more slowly with the number of clusters.

Clustering loss curve for Doc2Vec representations of dental industry reviews.
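A minimal sketch of this elbow search with scikit-learn, assuming `sentence_vecs` is the (n_sentences, 300) array of Doc2Vec vectors:

import matplotlib.pyplot as plt
from sklearn.cluster import KMeans

ks = range(2, 40)
avg_inertia = []
for k in ks:
    km = KMeans(n_clusters=k, random_state=0).fit(sentence_vecs)
    avg_inertia.append(km.inertia_ / len(sentence_vecs))  # within-cluster sum of squares per point

plt.plot(list(ks), avg_inertia)   # look for the "elbow"; around 14 clusters in our case
plt.xlabel("number of clusters")
plt.ylabel("average inertia")
plt.show()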

Results

We notice a break around 14 clusters. Let’s now try to figure out what each cluster represents: in each cluster, we read a few sentences and words close to the center. Here is a sample of the results:

Cluster 1
  • Great Service and all of the staff were friendly and professional.
  • The care was excellent and the medical staff was at the cutting edge.
Cluster 2
  • I highly recommend Dr. xxx.
  • I highly recommend Dr yyy.
  • Worst experience ever.
Cluster 3
  • They had me waiting 5 hours in the waiting room.
  • 2 hours and still waiting.
  • I filled out my paper work and couldn’t have been waiting more than 5 minutes before being called back.
Cluster 4
  • Love this place good doctors and nurses
  • Wonderful staff, and Dr. Bruckel was warm, personable, and reassuring.
  • Fast friendly and helpful but still personable.
Cluster 5
  • Waste of time and money.
  • Do not waste your time here.
Cluster 7
  • Great service in the emergency room.
  • They took great care of me
Cluster 8
  • Went to the e.r. with a kidney stone.
  • I went to the ER for some chest pain, I got xray, bloodwork, etc.
Cluster 9
  • Would definitely recommend this hospital for labor and delivery.
  • I would definitely recommend this hospital to anyone.
Cluster 10
  • The doctor had a great bedside manner.
  • He has the best bedside manner of any doctor.
  • The staff was courteous and very professional.
Cluster 13
  • This is an urgent care.
  • I highly recommend this urgent care.
Cluster 14
  • They saved my life.
  • Always treated well and with respect.
  • This place saved my life, I’m Very thankful with the doctors and nurses.
Analysis

For most clusters, a dominant theme emerges, e.g. recommendation or wait time. Some of these themes span multiple clusters. Some of the clusters, however, seem to mix multiple unrelated topics. Looking at the inertia of each cluster helps us identify some of the better clusters (e.g. 1, 3, and 8): the lower the inertia, the more coherent the cluster, and the more likely the sentences are to have similar meanings. We can also look at the distance between clusters to find good candidates for regrouping (e.g. 2 and 9).

Ultimately, as we saw with classification, we can see that Doc2Vec is a useful tool in identifying key topics, but not a standalone tool, at least in the version implemented here. Nonetheless, in these and other applications we have already found and are continuing to find valuable ways Doc2Vec can help us extract actionable insights for our clients.

 


Contextualizing the Word Cloud

Word Clouds can provide a powerful visual representation of text data. When it comes to customer review data, word clouds can quickly showcase the most prominent keywords people are using and their associated sentiment. The basic word cloud works by selecting the most frequent keywords in a set of data (in our case customer reviews or surveys) for a given client or industry. The font size of text represents the frequency with which a keyword has been mentioned in these reviews. The color of text (ranging from dark green for positive to dark red for negative) represents the average sentiment of these mentions. This way users can gain immediate insight into what customers are talking about the most and their relative sentiment with respect to those topics without having to read through each customer review one by one. Our goal in the data science labs was to experiment with different ways of depicting a Word Cloud to maximize the insight this tool could provide.

ZingChart

I began this exploration by selecting ZingChart to build initial word clouds on top of our data. It is a fairly easy-to-use JavaScript library with dozens of built-in responsive chart types. The library integrates easily with our frontend stack, Vue.js, and the syntax is straightforward. I was able to build the cloud simply by using its pre-defined keyword attributes, including “words”, “rotate” and “color”. The default CSS setting for the cloud is quite nice in that it does not require any additional styling.

The cloud below showcases customer review data relating to staff professionalism in a hospital. The word “nurse” indicates that a lot of customers evaluated their nurses in reviews and they had somewhat positive experiences. People were even more positive about the “staff” in general. On the other hand, there were various complaints around “bills”, “ultrasounds”, and “rude” and “unprofessional” staff.

But what are the main insights with respect to these reviews? Are we to conclude that experiences with the nurses were not as positive as with the rest of the staff? How positive were these experiences compared to nurse services in other hospitals? Do people mention the nurses and overall staff this frequently in other hospitals as well? Unfortunately, this single cloud cannot answer those questions.

Towards digging into that, we wanted to figure out how to get more context into the word cloud. However, when it comes to specific customization, the free version of ZingChart will not do the trick. For example, the only data I could pass in for a tooltip when hovering over the word is the word itself and its count. I would have to pay technical support fees to extend this feature. Because of this I decided to explore other libraries. In particular I decided to explore what I could do with the popular data visualization tool D3.

D3

D3.js is a dynamic, interactive, data-driven JavaScript library for producing data visualizations. The D3 word cloud library I used was d3-cloud, created by Jason Davies. It uses HTML5 canvas and sprite masks to achieve near-interactive speeds. The layout algorithm runs asynchronously, which makes it possible to animate words as they are placed without stuttering. The syntax of the library is fairly straightforward: it uses d3.layout.cloud() to construct a cloud, start() and stop() to run the layout algorithm, and functions such as font() and size() to specify attributes. The cloud is highly customizable. One function I particularly enjoy is random(): it sets the initial position and clockwise/counterclockwise spiral direction of each word, which means it can render two clouds with the same word placement.

This possibility was particularly intriguing to us, as it provided an option to layout a benchmark word cloud side-by-side with the initial word cloud. If both clouds could have exactly the same text and text position, the two could easily align side by side to provide a quick visual benchmark for each word. Thus, users could not only gain general insights of their own customer reviews, but also compare easily to various benchmarks such as industry averages or performance over a longer time period.

In the cloud below, I implemented this for one hospital’s reviews over two time periods, the last 3 months and the last 24 months. Immediately you can see a value to the benchmark. With the two clouds side-by-side (more recent reviews on the left) you can see that feedback about “doctors” and care related to “children” is in fact getting worse. You can also quickly see that sentiment around waiting (“wait”, “hour”, “minute”) is in fact getting more negative compared to some of the other words showing up in yellow here.

This is a big improvement, and I feel this is a good way to add context to a word cloud. However, we do run into some difficulties with this approach. In the case above, all of the words are in pretty much the same place, but this is primarily because the relative frequencies of the words are very similar in the two data sets. In some cases however, this is not true. In a followup post, next month, I will go through one of these cases in more depth and discuss the custom cloud we had to build to overcome these issues.


Finding Optimizations in Python With Program Profiling

This blog post discusses profiling methods, specifically for the Python programming language.

Within the data science team, one of the things we are working to build is a processing model for large amounts of textual and review data using natural language processing.

Because we are processing data at such a large scale, it is important that our model is properly optimized to reduce any unnecessary overhead. As such, it is important to identify which areas in our code are taking up the most time. This is where profiling comes in.

Program profiling is a form of analysis that measures things such as the memory usage, time usage, the usage of particular instructions, or the frequency and duration of function calls. It is a way to understand where the largest amount of resources are being spent in order to target optimizations to these areas.

Use Case

Our use case was to find optimizations in a series of Python files used in our model. In order to find which parts of the program were stalling execution, we used profiling. Python has many native and third-party profiling tools that allow for a range of analyses of runtime, memory usage, and visualization. Some of the tools we looked at were cProfile, line_profiler, memory_profiler and QCacheGrind. For our use case, we were most interested in profiling methods that let us see which parts of the program were using the most time, and whether there were any blocking resources.

Profiling Using the Standard Python Library

Profiling Python can be done with the standard Python library, as well as third party modules and programs.

The standard Python library provides three different implementations of the same profiling interface: cProfile, Profile and Hotshot. The most popular of the three is cProfile.

cProfile

cProfile can be run in terminal, as well as imported as a module in Python.

It reports profiling results per function, with the columns ncalls, tottime, percall, cumtime, percall (number of calls to the function, total time in the function excluding calls to other functions, time per call, cumulative time including other function calls, and time per cumulative call).

Example:

import cProfile
import re
cProfile.run('re.compile("foo|bar")')

Output:

    197 function calls (192 primitive calls) in 0.002 seconds

Ordered by: standard name

ncalls  tottime  percall  cumtime  percall filename:lineno(function)
    1    0.000    0.000    0.001    0.001 <string>:1(<module>)
    1    0.000    0.000    0.001    0.001 re.py:212(compile)
    1    0.000    0.000    0.001    0.001 re.py:268(_compile)
    1    0.000    0.000    0.000    0.000 sre_compile.py:172(_compile_charset)
    1    0.000    0.000    0.000    0.000 sre_compile.py:201(_optimize_charset)
    4    0.000    0.000    0.000    0.000 sre_compile.py:25(_identityfunction)
  3/1    0.000    0.000    0.000    0.000 sre_compile.py:33(_compile)

Although we are able to see timing information on a function basis, we aren’t able to see which lines specifically are taking up the most time.

Here is an example that runs in the terminal with cProfile and reduces the output to the top 33 lines with the highest cumulative time:

python -m cProfile -s 'cumulative' program.py > temp_file && head -n 33 temp_file && rm temp_file

Third Party Profiling Modules

Third party modules include line profiler and memory profiler for line by line profiling, and QCacheGrind for program visualization.

line_profiler – Line-by-line Timing

line_profiler is a third-party module that does line-by-line profiling of a Python program, showing the time spent on each individual line. After installing line_profiler, you use it by decorating the functions you want to profile with @profile. Then you run kernprof on your Python file to produce a .lprof profile that line_profiler can display.

Install:

pip install line_profiler

Example:

kernprof -l myProgram.py

python -m line_profiler myProgram.py.lprof

Timer unit: 1e-06 s

File: primes.py
Function: Proc2 at line 149

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
  149                                           @profile
  150                                           def Proc2(IntParIO):
  151     50000        82003      1.6     13.5      IntLoc = IntParIO + 10
  152     50000        63162      1.3     10.4      while 1:
  153     50000        69065      1.4     11.4          if Char1Glob == 'A':
  154     50000        66354      1.3     10.9              IntLoc = IntLoc - 1
  155     50000        67263      1.3     11.1              IntParIO = IntLoc - IntGlob
  156     50000        65494      1.3     10.8              EnumLoc = Ident1
  157     50000        68001      1.4     11.2          if EnumLoc == Ident1:
  158     50000        63739      1.3     10.5              break
  159     50000        61575      1.2     10.1      return IntParIO

In this output, we can see, for each line, the number of times it was executed, the time per execution, the total execution time, and the percentage of time used. This helps you zero in on which lines are actually causing slowdowns in your program.

memory_profiler – Line-by-line memory usage

memory_profiler is another third party package that is similar to line profiler. It does line by line profiling of a Python program with memory as opposed to time.

pip install -U memory_profiler

python -m memory_profiler example.py

Line #    Mem usage  Increment   Line Contents
==============================================
    3                           @profile
    4      5.97 MB    0.00 MB   def my_func():
    5     13.61 MB    7.64 MB       a = [1] * (10 ** 6)
    6    166.20 MB  152.59 MB       b = [2] * (2 * 10 ** 7)
    7     13.61 MB -152.59 MB       del b
    8     13.61 MB    0.00 MB       return a

The output is similar to line_profiler. In the output above, we can see memory usage jump when a large list is allocated and drop again when it is deleted. This is helpful if a program performs operations that require a lot of memory.

Visualization with QCacheGrind

QCacheGrind is a visual profiling tool. It can be used to view the call stack of a program and see the cumulative time usage of each function in the call stack. You can visually trace through the call stack, and even view the time usage line by line in the source file.

Installation:

pip install pyprof2calltree

brew install graphviz

brew install qcachegrind --with-graphviz

Use:

python -m cProfile -o myscript.cprof myProgram.py

pyprof2calltree -k -i myscript.cprof

Result and Comparison

Profiling helped us zero in on an iteration loop that was taking a large percentage of time. It turns out repeated index references to a large DataFrame object were driving a large percentage of the time usage. This is because while the Pandas DataFrame is a powerful data structure to apply vector operations and aggregation across large amounts of data, it’s inherently a slower data structure when it comes to accessing indexed rows repeatedly or iterating through a number of rows compared to a simple dictionary. After identifying this via profiling, the program was optimized by converting the DataFrame to a list of dictionaries.
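An illustrative sketch of the kind of change this led to (the column and function names are hypothetical): repeated row lookups on a DataFrame inside a hot loop are replaced by a one-time conversion to a list of dictionaries.

import pandas as pd

def slow(df):
    out = []
    for i in range(len(df)):
        row = df.iloc[i]               # every lookup pays DataFrame indexing overhead
        out.append(len(row["text"]))
    return out

def fast(df):
    records = df.to_dict("records")    # convert once to plain dictionaries
    return [len(r["text"]) for r in records]

df = pd.DataFrame({"text": ["great service", "long wait", "friendly staff"]})
assert slow(df) == fast(df)            # same result, but fast() avoids per-row overhead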

We found line_profiler to be the most useful in terms of finding which areas of code to optimize. Using line_profiler we can see the percentage time usage of each function we are interested in line by line. Tools such as cProfile and QCachegrind are able to give a broad perspective on which functions are taking the most time, but do not show which lines of the function are the trouble areas. memory_profiler is good for programs that use heavy amounts of memory, but for our use case memory was not limited.


Reputation.com Data Science Labs Stack

This blog post is an introduction to the technology stack we use within the Data Science & Data Engineering Teams at Reputation.com to build our Data Science Labs portal. The Data Science Labs portal serves as a platform for our team to prototype data products, and iterate and experiment with new ideas to develop the exact algorithms and data products that will provide the most insight and value for our customers. For many of the tools we are building, the product use-case is being developed alongside the algorithms, so the more quickly we can prototype and collaborate with the product, design, and engineering teams, the more effective we will be at providing the most valuable and useful data products within our customer-facing product stack.

The portal is being actively built and iterated upon by our team of data scientists, data engineers and interns. After a few rounds of experimentation over the course of 2016, we have found that the following stack provides the best combination of feature set and flexibility to help us be as efficient as possible in prototyping new data products.

Backend

Since most of the data science toolkits the team uses daily are Python-based (Jupyter/IPython notebooks, Pandas, Sklearn, TensorFlow), we naturally opted to go with a Python ecosystem for our Data Science Labs backend. We have found Flask to work well as our backend web framework because it has the following characteristics:

  • Micro framework
  • Rich documentation
  • Large open source community support
  • Simple to learn and use
  • Routing is easy
  • Small core codebase and extensible

We also considered Django and CherryPy. Django is much more of a full-fledged solution, providing a ton of features that our application doesn’t need—such as deep ORM integration. There’s also a steep learning curve to ramp up. CherryPy, on the other hand, is comparable to Flask in that it’s an easy-to-learn micro framework. But Flask won out because of its great community support, better documentation, and vast library of plugins and extensions.
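As a taste of why routing feels lightweight in Flask, here is a minimal sketch of the kind of prototype endpoint we build (the route and payload are hypothetical):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/topics/<client_id>")
def topics(client_id):
    # In the real portal this would call into one of our prototype models.
    return jsonify({"client": client_id, "topics": ["customer service", "wait time"]})

if __name__ == "__main__":
    app.run(debug=True)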

Frontend

We were looking for a frontend framework for our Data Science Labs that:

  1. is easy to ramp up
  2. provides a clean abstraction to help us separate our logic and view, and
  3. allows us to build reusable components.

We chose Vue.js over AngularJS and ReactJS. All three frameworks are built around reactive data binding (two-way in Angular and Vue). Angular, being the oldest of the three, has a lot of community support and a deep knowledge base, but it is a behemoth in terms of scale and size. Angular is also more inflexible in that it imposes a specific structure on how you lay out your codebase, making it much more opinionated than Vue.js or ReactJS.

ReactJS is rising in popularity and on its way to surpassing AngularJS as the dominant framework. It has many more advanced features than Vue.js, such as mobile rendering and server-side rendering, but neither are priorities for our project.

Vue.js, while being the newcomer, already has a large following and tremendous momentum, mainly due to its reputation as a minimalistic, simple framework. We find the syntax of Vue.js much more pleasant to work with than ReactJS’s JSX, producing code that is a lot more readable and maintainable. It is much easier to separate your CSS, JS and view files. And with the least amount of ramp-up time as far as learning curve is concerned, we find Vue.js to be the most suitable framework for our project. To summarize, Vue.js proved to be the right choice for us because it is:

  • easy to learn
  • very flexible, with high compatibility with other libraries
  • conducive to readable code
  • built around a core library focused on the view layer, which you can build on top of as you go

See the official Vue.js documentation for more detailed comparisons between Vue and other frameworks.

As I mentioned, this stack is actively evolving with every new prototype we build, so check back here for updates as this platform evolves.


Semantic Convolution with Word2Vec

At Reputation.com, we work with millions of online reviews from hundreds of sources. One of the unusual characteristics of reviews compared to the vast majority of text corpora is that, almost by definition, reviews are structured in such a way that they can be categorized (in one or many dimensions depending on the review site and/or industry). However, we often find ourselves doing text classification/tagging on topics that are not already labeled by the review site.  This article is an informal introduction to a set of techniques we have developed to leverage existing unlabeled corpora in conjunction with the labeled data. In particular, we present a semi-supervised learning algorithm for multi-label text classification.

In recent years, a lot of text classification projects have used supervised learning methods (Naive Bayes, SVM) primarily due to their substantial improvements over non-supervised strategies such as traditional clustering in NLP tasks. Until very recently, most NLP classification work was done with the traditional Bag of Words (BOW) approach – perhaps with a bit of context through the use of a limited range of N-grams and skip-grams. BOW is a feature extraction technique where the text is represented as the frequency of each word in the document, disregarding grammar and ordering but keeping multiplicity. In most cases, defining a pipeline combining the BOW feature extraction technique with a Tf-Idf transform and a simple classifier (Naive Bayes, SVM) produces decent results with respect to most classification metrics.

Semi-Supervised Learning with Word2Vec

In most tutorials, Word2Vec is presented as a stand-alone neural net preprocessor for feature extraction. Word2Vec generates a vector for each word in the text corpora in a higher-dimensional space, such that words that share contextual meaning are located in close proximity to one another. To use Word2Vec for classification, each word can be replaced by its corresponding word vector, and the word vectors are usually combined through a naive algorithm, such as addition with normalization or a cross product, to get a sentence or document vector. Then, using these document vectors, we could use a simple classifier for multi-label classification. The advantage of using Word2Vec over a simple BOW feature extraction technique is that it supports semi-supervised learning, since the vocabulary from both the labeled and unlabeled text can be used to generate the word vectors. This allows the words to have more contextual meaning. However, we have found that this approach does not appear to provide significant improvements over a BOW approach, especially when there isn’t a lot of labeled data for training the classifier.
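For concreteness, a sketch of the naive document vector described above (addition with normalization), assuming `word2vec_model` is a trained gensim Word2Vec model and `tokens` a cleaned token list:

import numpy as np

def doc_vector(tokens, word2vec_model):
    vecs = [word2vec_model.wv[t] for t in tokens if t in word2vec_model.wv]
    if not vecs:
        return np.zeros(word2vec_model.wv.vector_size)
    v = np.mean(vecs, axis=0)        # add the word vectors...
    return v / np.linalg.norm(v)     # ...and normalize the result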

Semantic Convolution for Low Support Topics

A common problem in multi-label text classification is a major imbalance of labels in a textual corpus. We often see cases where most (>60%) of the sampled data is about the most prevalent topic, and more than half the topic labels exist in <0.1% of the sampled data. Almost inherently with NLP and a BOW approach, this causes a p (number of features) >> n (size of training corpus) problem. Based on general rules of thumb, getting 1,000 training examples for a low support topic would require millions of labeled training examples, which is prohibitively expensive.

In this world of ‘big data’ the data itself is actually cheap, but developing a tagged training set can be expensive. In the course of our development, we devised an elegant and scalable way to develop and maintain a robust training set across tens of industries (this will be the topic of a separate blog post).

The premise of Semantic Convolution is simple: if a particular word is a good indicator of a particular label, then words with similar meanings (semantics) should also be good indicators of the label. Since we have qualitative evidence that Word2Vec vectors encode a semantic meaning, we can use it to help find words with similar meanings from non-labeled corpora. This allows us to apply a Semantic transform after getting the term frequencies in the BOW pipeline, and before applying the Tf-Idf transform. To apply the Semantic Transform, we use the Word2Vec data to generate a correlation matrix between words with similar contextual meaning in the vocabulary.

Given that vocabulary is a dictionary mapping each term to an index, the code to generate the correlation_matrix is:

import scipy.sparse

# Start from the identity so every term is correlated with itself.
correlation_matrix = scipy.sparse.identity(len(vocabulary), format="dok")

for word in vocabulary:
    try:
        # Top-5 most similar words, keeping only those with similarity above 0.5.
        similar_words = [x[0] for x in word2vec_model.most_similar(word, topn=5) if x[1] > 0.5]
    except KeyError:
        continue  # the word is not in the Word2Vec vocabulary

    for similar_word in similar_words:
        if similar_word in vocabulary:
            correlation_matrix[vocabulary[word], vocabulary[similar_word]] = 1

Using this correlation matrix we can generate the term-document matrix with the augmented term frequencies.

term_frequency_vector += term_frequency_vector * correlation_matrix

Applying this transformation with the correlation matrix increases the word count of all words with contextually similar meaning in the text. This improves the feature collection for low support topics, which allows more precise classification of reviews about low support topics with higher confidence. This allows small amounts of labeled data to be more useful for the machine-learning model, which reduces the cost of developing a robust training set. Also, as mentioned above, this leverages semi-supervised learning from the unlabeled data by building the vocabulary and Word2Vec vectors based on the entire text corpora.
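Putting the pieces together, a hedged sketch of the full pipeline for one category, assuming `texts` are raw reviews, `labels` the binary labels for that category, and `vocabulary` / `correlation_matrix` as built above (the choice of classifier is illustrative):

from sklearn.feature_extraction.text import CountVectorizer, TfidfTransformer
from sklearn.svm import LinearSVC

vectorizer = CountVectorizer(vocabulary=vocabulary)       # reuse the same term-to-index mapping
counts = vectorizer.fit_transform(texts)

# Semantic convolution: add the counts of contextually similar words.
augmented = counts + counts.dot(correlation_matrix.tocsr())

features = TfidfTransformer().fit_transform(augmented)
clf = LinearSVC().fit(features, labels)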

Conclusions

Ultimately, Semantic Convolution extracts more value from the limited labeled data and improves the performance of the machine learning algorithm on classification tasks, especially for the low support categories. Also, semi-supervised learning with Word2Vec leverages the information gained from the vast amounts of unlabeled data while increasing both the precision and the support of the machine-learning model.

Dweep Shah and Anthony Johnson


Review Classification with Neural Network Models

Introduction

Extracting meaning from online reviews is key to turning seemingly anecdotal reviews into actionable customer satisfaction insights that point to improvement opportunities or to authentic, potentially differentiating strengths. One way to do that is to apply machine learning to automatically read customer reviews and identify the most relevant topics that are the subject of each review. With this information, you can find themes in what customers are saying about a business across thousands of reviews, and then help businesses identify areas in which they are receiving a disproportionate number of negative reviews so that they can focus operational efforts on those areas and improve customer experience as well as their online reputation.

We have been working for a while on several approaches, models, and data sets to extract topics and categories from customer reviews with a high precision. In this post I will give an overview of a few neural network models that provide satisfactory results for physician-related reviews. To start, we built a taxonomy of categories that are relevant to physician reviews looking both at clinical patient experience topics from standard patient assessment surveys designed by CMS (Center for Medicare and Medicaid Services) as well as non-clinical topics related to parking, technology/amenities, and cleanliness that are commonly referred to in physician reviews. Then we gathered training data by having a group of crowd-sourced individuals tag a set of 10,000 reviews with the following categories (this is the subject of an upcoming blog entry):

  • Administrative Process
  • Bedside Manner
  • Cleanliness
  • Competence
  • Getting an Appointment
  • Likely/Unlikely to recommend
  • Parking
  • Responsiveness
  • Staff Courtesy
  • Technology/Amenities
  • Price/Billing issues
  • Wait Time

Given this training data, we used a biologically-inspired variant of Artificial Neural Networks to build a classifier that automatically assigns categories to online physician reviews. These neural network classifiers are based on how an animal’s visual cortex processes and exploits the strong spatially local correlation present in natural images. Those models are generally used for image recognition, but are being increasingly used in other fields, especially text classification. Given the promising results documented in this space, we decided to evaluate Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) with respect to our classification problem.

After an initial trial, we decided to focus our implementation on CNN models as they execute faster, are easier to understand, and had comparable results to RNN.

Principle of CNNs

The starting point of our CNNs is to represent a review as a matrix in which each row is a vector that represents a word. These vectors can be low-dimensional representations or one-hot vectors that index words into a vocabulary. Given this matrix, we apply several convolutional filters over groups of rows, each followed by 1-max pooling (the largest number from each feature map is recorded), in order to extract the meaning of the corresponding group of words. Finally, a softmax layer is applied to generate the assessed probabilities of the review belonging to each class.

CNN Implementation Approach

We generated a separate CNN model for each category, with the data broken into two classes: reviews that belong to that category and reviews that do not.

Thus, to fit all of the hidden parameters of these models, we fed them reviews from the training data that had already been categorized and adjusted the parameters incrementally to minimize a loss function (a function that represents the gap between the predictions and the true categories).
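As a hedged illustration of this per-category setup, the sketch below builds and trains one binary CNN per category. `X_train` (padded token-id sequences) and `train_labels` (a mapping from category name to 0/1 labels) are hypothetical names, and a sigmoid output stands in for the two-way softmax, which is equivalent in the binary case.

```python
import numpy as np
from tensorflow.keras import layers, models

def build_binary_cnn(vocab_size=20000, max_len=200, embed_dim=100):
    """Compact variant of the architecture sketched above, with a single filter
    width and a sigmoid output (equivalent to a 2-way softmax for two classes)."""
    inputs = layers.Input(shape=(max_len,), dtype="int32")
    x = layers.Embedding(vocab_size, embed_dim)(inputs)
    x = layers.Conv1D(64, 4, activation="relu")(x)
    x = layers.GlobalMaxPooling1D()(x)
    outputs = layers.Dense(1, activation="sigmoid")(x)
    model = models.Model(inputs, outputs)
    # The loss measures the gap between the predicted probability and the
    # human-assigned 0/1 label; training incrementally adjusts the hidden
    # parameters to shrink that gap.
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model

# X_train: padded token-id sequences for the training reviews; train_labels maps
# each category name to a 0/1 vector (both are hypothetical names for illustration).
models_by_category = {}
for category, y in train_labels.items():
    m = build_binary_cnn()
    m.fit(X_train, np.asarray(y), epochs=5, batch_size=64, validation_split=0.1)
    models_by_category[category] = m
```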

CNN Model Effectiveness

To assess the performance of these models, we split the 10,000 reviews into 8,000 for training and 2,000 for testing. Given a model built on the training data, we predicted whether each review in the test set belonged to each category and assessed the precision and recall of our predictions for each category.
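A sketch of that evaluation, reusing the hypothetical `models_by_category` from above and assuming the held-out 2,000 reviews are available as `X_test` and `test_labels`:

```python
# Illustrative per-category precision/recall on the held-out test split.
import numpy as np
from sklearn.metrics import precision_score, recall_score

for category, model in models_by_category.items():
    y_true = np.asarray(test_labels[category])
    # Threshold the predicted probability at 0.5 to get a 0/1 label.
    y_pred = (model.predict(X_test) > 0.5).astype(int).ravel()
    print(f"{category}: "
          f"precision={precision_score(y_true, y_pred):.2f} "
          f"recall={recall_score(y_true, y_pred):.2f}")
```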

For the largest categories, we found that our models delivered an overall precision of 81% and an overall recall of 75%. At first sight, those results did not appear very good. However, when we dug deeper, we found that when we considered each element of the crowd-sourced (human-generated) training data to be a prediction in itself, those tags exhibited precision and recall below 70% (measured against the consensus of the group). Thus, our model outperformed human classification.

Furthermore, looking deeper into the training data, we realized that some reviews were truly ambiguous and the categories were not precise or discerning enough, which resulted in a high degree of disagreement between humans evaluating the same review. After removing the most ambiguous reviews from the training data set, we observed a marked increase in the overall accuracy of the model. What to do about those ambiguous reviews, and how to fine-tune the categories, will be the subject of a future post.

Next Steps

The results from CNN models are promising, and we are pushing them further by experimenting with several modifications of the model, such as oversampling the training set to balance the data for each category, splitting reviews into characters instead of words, and initializing the word vectors with a low-dimensional Word2Vec representation. Stay tuned for further updates.
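As one hedged example, the Word2Vec initialization experiment could look roughly like the sketch below (gensim 4.x API); `tokenized_reviews` and `word_index` are hypothetical names for objects produced during preprocessing, and all sizes are assumptions.

```python
# Hedged sketch: seeding the embedding layer with Word2Vec vectors trained on
# the (unlabeled) review corpus.
import numpy as np
from gensim.models import Word2Vec

VOCAB_SIZE, EMBED_DIM = 20000, 100  # assumed, matching the earlier sketches

# tokenized_reviews: list of token lists; word_index: word -> integer id used
# when encoding reviews (both hypothetical names from preprocessing).
w2v = Word2Vec(sentences=tokenized_reviews, vector_size=EMBED_DIM,
               window=5, min_count=2)

embedding_matrix = np.zeros((VOCAB_SIZE, EMBED_DIM))
for word, idx in word_index.items():
    if idx < VOCAB_SIZE and word in w2v.wv:
        embedding_matrix[idx] = w2v.wv[word]

# The Embedding layer from the earlier sketches can then start from these
# weights, e.g. layers.Embedding(VOCAB_SIZE, EMBED_DIM, weights=[embedding_matrix]).
```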


Impact of Online Reviews on a Company’s Local SEO – Part II

A couple of months ago, we looked at the relationship between review site rankings on a business’s local SERP/SEO and the number of reviews of the business on those sites. We found a significant positive relationship between the number of reviews and how highly those sites ranked in local search results.

Most of the analysis that time around centered on automobile dealers around the country and where their Facebook and DealerRater presences ranked in Google search results targeting that dealer. This time around we expanded that analysis to multiple review sites and multiple industries and found that the relationship between review volume and local search ranking varies wildly by domain and industry. We also dug a little deeper into the data to try to estimate the value of adding reviews on these sites over time, and we found that new reviews are valuable in two ways. First, new reviews help review sites rise on search engine results pages, and second, the more reviews that site acquires, the better its chances are of staying at the top of the results page as well.

Reviews and SEO across sites and industries

First, let’s look at the relationship between review volume and domain ranking in local search for that same set of automobile dealers. (Note: None of these analyses include Google, since the Google review presence is usually anchored on the right-hand side of the page.)

The Facebook and DealerRater lines here match up pretty well to the data we presented before. We also see a correlation between review volume and domain rank for the other domains, but it is notable that the apparent impact of additional reviews varies a bit by source. For instance, for four of these domains the average rank of the review site for a location with no reviews is between 8.5 and 10. Having 100 reviews on DealerRater brings the expected rank of that domain down to the top half of the first page, whereas for cars.com and Facebook, we would still expect 100 reviews to leave that site below the fold when someone is searching for that location. Edmunds.com is even worse. It seems no matter how many reviews you get, Google is determined to pin your Edmunds presence to the top of the 2nd page.

This data would lead us to hypothesize that, on average, an additional review on DealerRater is worth considerably more to a car dealer than an additional review on one of these other sites. But before we explore that hypothesis a little more, let's look at similar data for a few other industries. Next, let's look at hospitals:


These are the review site domains that most commonly showed up when we googled over 1,000 US hospitals. Again we see the expected directional relationship: more reviews generally means a better SERP/SEO ranking. However, none of these curves is as steep as the steepest curves for auto dealers. It's also very interesting to note how much Google seems to value a healthgrades.com page, regardless of whether there are any reviews on it.


And here is the data for the Self-Storage Unit industry. There isn’t as much breadth in this industry, as there aren’t as many review sites with high volume, but it is very interesting to note that in the storage industry, Facebook has a very strong correlation between review volume and SERP/SEO rank.

All of this is very interesting, but it raises several questions. Most notably, what makes the SERP/SEO ranking of particular review sites seem to be so responsive to review volume in particular industries? And is there actually causation here or does something else explain why some of these correlations are so strong?

Review volume impact on local SEO over time

Let’s address the causation question by looking at some more dynamic data, specifically at how these rankings and volumes change over time. This is still a long way from a controlled experiment, but the case would be more compelling if we could show that as review volumes rise for a particular location on a particular site, the SERP/SEO ranking of that site for that location tends to improve (i.e., move toward the top of the page).

Over the last couple of months we gathered SERP/SEO data once a week for several thousand US auto dealers. We then looked at the rankings of the major review sites over time and at how changes in those rankings correlated with total review volume and with changes in review volume. To model this, we fit a Markov chain that predicted the probability of each weekly SERP/SEO ranking for a review site based on the domain, that site's ranking the previous week, the total number of reviews for that location on that site, and whether the number of reviews increased.
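For illustration only, the sketch below shows one way such transition probabilities could be estimated empirically with pandas. The DataFrame `df` and its columns (`location`, `domain`, `week`, `rank`, `review_count`) are hypothetical names, and the review-count buckets are assumptions rather than the cut-offs we actually used.

```python
import pandas as pd

# Hypothetical weekly panel: one row per (location, domain, week) with that
# week's SERP rank and total review count for the domain at that location.
df = df.sort_values(["location", "domain", "week"])
grp = df.groupby(["location", "domain"])
df["prev_rank"] = grp["rank"].shift(1)
df["got_new_review"] = grp["review_count"].diff(1) > 0
df["review_bucket"] = pd.cut(
    df["review_count"],
    bins=[-1, 0, 10, 50, float("inf")],
    labels=["0", "1-10", "11-50", "50+"],
)

# Empirical transition probabilities:
# P(rank this week | domain, rank last week, review bucket, got a new review)
transitions = (
    df.dropna(subset=["prev_rank"])
      .groupby(["domain", "prev_rank", "review_bucket", "got_new_review"])["rank"]
      .value_counts(normalize=True)
      .rename("prob")
)
print(transitions.head())
```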

The first thing we wanted to measure was this: does getting new reviews positively impact your search engine rank? According to our data, the answer appears to be yes. In the graph below we plot the predicted impact, according to our model, of getting a new review on one review site.

[Figure: predicted impact of a new review on the following week's ranking, by prior review count]

According to our data, after we normalize for domain, rank, and the prior total number of reviews, review sites that got at least one new review in a given week tended to be placed higher the following week than sites that did not get new reviews. Obviously this impact is much larger when you have no reviews or very few reviews (an average improvement of a third of a spot for sites getting their first review!), and it levels off fairly quickly once you have around a dozen reviews.

Our model spit out one other interesting insight: review volume is important not just for getting a site ranked highly on the SERP, but for keeping it there as well. Review site rankings drift from week to week, and our Markov chain model captures that drift. What the model found is that review sites with a high volume of reviews, regardless of where they ranked the week before, tended to drift toward the top of the page (or were more likely to stay there) more than review sites with very few reviews.

[Figure: average weekly ranking drift by review volume]

This graph plots how much an auto dealer’s review volume impacts the average drift of that ranking. In other words, if you have no reviews, your review site page will lose one spot every three weeks, on average, relative to the norm. If you have 50+ reviews, it will gain one spot every five weeks on average, relative to the norm. You might ask, “How can I gain a spot if I am already at the top?” Well, links in the top spot tend to lose that spot about 20% of the time; a site with 50+ reviews is much less likely to do so.
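As a rough illustration, an average-drift summary like the one behind this graph could be computed from the same hypothetical weekly DataFrame used in the transition sketch above (which already defines `prev_rank` and `review_bucket`):

```python
# Average week-over-week rank drift by review-volume bucket; negative drift
# means the page is moving up the results page relative to last week.
df["drift"] = df["rank"] - df["prev_rank"]
avg_drift_by_volume = (
    df.dropna(subset=["drift"])
      .groupby("review_bucket")["drift"]
      .mean()
)
print(avg_drift_by_volume)
```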

Conclusions

Hopefully this analysis shines a light on the value of generating a healthy review volume on the review sites you want your customers to find on search engines, and makes it clear that those reviews are valuable not just because they will help those review sites climb to the top of search engine results, but because they will help those sites stay there as well. Also be aware that these effects can vary considerably from domain to domain, and the most responsive domains may vary from industry to industry.