Using Aspect-Based Sentiment Analysis to Understand User-Generated Content

Introduction

User-generated content has grown dramatically in recent years. Much of this content is text-based, produced mainly on online forums and social media platforms, and it often contains users’ opinions about organizations or hot-button issues.

Businesses exist to provide goods and/or services, which means that communication and relationships with customers are crucial to their success. Analyzing customer feedback, whether reviews or complaints, shared on online and social media platforms can provide the insights needed to optimize customer service. Indeed, this kind of analysis of user-generated content is widely regarded as a key part of any brand strategy.

Despite the perceived benefits, it’s still a great challenge for businesses to parse and organize this large amount of unstructured data into more digestible and actionable insights. Unstructured textual data coming from disparate sources in the form of natural language is especially difficult to analyze manually. However, machine learning-based opinion mining techniques have the potential to enable automatic extraction of opinions and their corresponding polarities from such user-generated content. This approach is known as aspect-based sentiment analysis (ABSA).

Formally, sentiment analysis, or opinion mining, is the computational study of people’s opinions, sentiments, evaluations, attitudes, moods, and emotions. Aspect-based sentiment analysis involves two sub-tasks: first, detecting the opinion or aspect terms in the given text, and second, determining the sentiment associated with each detected aspect term. For example, in the review "The pizza was delicious, but the service was slow," the aspects are the food (positive sentiment) and the service (negative sentiment).

In this article, we’ll demonstrate how to build a natural language processing (NLP) pipeline that extracts meaningful insights from a large volume of customer reviews, in an attempt to automate the process of understanding opinions about a given subject from user-generated text.

Dataset

We’ll use the restaurant reviews dataset from the SemEval-2016 competition. The competition targeted the development of computational techniques for extracting the aspect terms mentioned in customer reviews, along with their associated sentiment scores.

We start by importing the required libraries:

# NLTK
import nltk
from nltk.corpus import stopwords
from nltk.stem import SnowballStemmer
nltk.download('stopwords')

#Spacy
import spacy
nlp = spacy.load('en_core_web_sm')  # the 'en' shortcut is deprecated in recent spaCy versions

# Other
import re
import json
import string
import numpy as np
import pandas as pd

import warnings
warnings.filterwarnings('ignore')

#Keras
from keras.models import load_model, Sequential
from keras.layers import Dense
from keras.preprocessing.text import Tokenizer
from keras.utils import to_categorical

#Scikit-learn (for encoding the label columns below)
from sklearn.preprocessing import LabelEncoder

The training dataset is loaded using the pandas read_csv() function. We also show the first 5 rows of the dataset using the head() function:

#load data
reviews_train = pd.read_csv("Training_Set_Restaurant_Cleaned.csv").astype(str)

#show first 5 records
reviews_train.head()

The aspect_category and sentiment columns are the target variables for the aspect and sentiment classifiers, respectively. We can obtain the total number of aspect categories using the following code:

#distribution of aspect categories
print(reviews_train.groupby('aspect_category').size().sort_values(ascending=False))

#how many categories
print("number of categories:", reviews_train.aspect_category.nunique())

In our case, we have a total of 13 aspect categories.

Model Training

Using the Keras library, we’ll build and train neural networks for both aspect category and sentiment classification. Keras is a neural networks API that enables fast experimentation through a high-level, user-friendly, modular, and extensible interface. Keras was created by François Chollet and can run on both CPU and GPU.

Defining the Neural Network Architecture

Let’s start by defining the aspect classifier architecture:

absa_model = Sequential()
absa_model.add(Dense(512, input_shape=(6000,), activation='relu'))
absa_model.add(Dense(256, activation='relu'))
absa_model.add(Dense(128, activation='relu'))
absa_model.add(Dense(13, activation='softmax'))  # one output neuron per aspect category
#compile model
absa_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

We’ll use a fully-connected network structure with three hidden layers plus an output layer. We first create a Sequential object and use the add function to add layers to our model. The Dense class defines a fully-connected layer, in which each neuron receives input from all the neurons in the previous layer.

The input shape is set to 6000, which is the maximum vocabulary size used when vectorizing the reviews (see the bag-of-words step below), with relu used as the non-linear activation function. Nonlinear functions transform the data so that the resulting transformed points can be effectively separated into different classes. The output layer comprises 13 neurons, one for each class. The softmax activation function returns a probability for each class; the predicted class is the one with the highest probability.
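
As a quick sanity check on the architecture, Keras can print each layer’s output shape and parameter count:

#inspect layer shapes and parameter counts
absa_model.summary()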

Once the architecture is specified, we need to configure the learning process by specifying an optimizer, a loss function, and accuracy metrics. “Learning” simply means finding a combination of model parameters that minimizes the loss function for a given set of training samples and their corresponding targets. Since the problem at hand is multi-class classification, categorical cross-entropy is specified as the loss function.
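
Concretely, for a single review with one-hot target vector $y$ and predicted class probabilities $\hat{y}$ over the 13 categories, the categorical cross-entropy loss is:

$$L(y, \hat{y}) = -\sum_{i=1}^{13} y_i \log \hat{y}_i$$

Because $y$ is one-hot, this reduces to $-\log \hat{y}_c$ for the true class $c$, so the loss is small exactly when the model assigns high probability to the correct category.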

Vector Representations of Words

To encode the reviews as vectors, we use a text vectorization technique known as Bag-of-Words (BoW). In this approach, we tokenize the words in each review and record the occurrence of each token, ignoring word order:

vocab_size = 6000 # We set a maximum size for the vocabulary
tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(reviews_train.review)
reviews_tokenized = pd.DataFrame(tokenizer.texts_to_matrix(reviews_train.review))
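
To make the bag-of-words representation concrete, here’s a small standalone illustration (with toy sentences of our own, separate from the dataset) of what the Keras Tokenizer produces:

from keras.preprocessing.text import Tokenizer

toy_reviews = ["the food was great", "the service was slow"]
toy_tokenizer = Tokenizer(num_words=10)
toy_tokenizer.fit_on_texts(toy_reviews)
print(toy_tokenizer.word_index)                    # word -> integer index
print(toy_tokenizer.texts_to_matrix(toy_reviews))  # one binary BoW row per sentence

By default, texts_to_matrix returns a binary matrix with one column per vocabulary index; each row simply marks which words appear in the corresponding review.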

We need to encode the aspect category column as well:

label_encoder = LabelEncoder()
integer_category = label_encoder.fit_transform(reviews_train.aspect_category)
encoded_aspect = to_categorical(integer_category)
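
On a toy label list (again our own example), LabelEncoder maps each string label to an integer, and to_categorical one-hot encodes those integers:

from sklearn.preprocessing import LabelEncoder
from keras.utils import to_categorical

toy_labels = ['food', 'service', 'food']
enc = LabelEncoder()
print(to_categorical(enc.fit_transform(toy_labels)))
# [[1. 0.]
#  [0. 1.]
#  [1. 0.]]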

The above three steps are repeated for the sentiment classifier. The output layer for this classifier, however, has 3 neurons, since there are 3 types of sentiment: positive, neutral, and negative. We use a separate label encoder (label_encoder_2) so that the aspect encoder remains available for decoding predictions later.

#model architecture
sentiment_model = Sequential()
sentiment_model.add(Dense(512, input_shape=(6000,), activation='relu'))
sentiment_model.add(Dense(256, activation='relu'))
sentiment_model.add(Dense(128, activation='relu'))
sentiment_model.add(Dense(3, activation='softmax'))  # positive, neutral, negative
#compile model
sentiment_model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'])

#create the bag-of-words matrix of the reviews (same as in the aspect step)
vocab_size = 6000 # We set a maximum size for the vocabulary
tokenizer = Tokenizer(num_words=vocab_size)
tokenizer.fit_on_texts(reviews_train.review)
reviews_tokenized = pd.DataFrame(tokenizer.texts_to_matrix(reviews_train.review))

#encode the sentiment labels with a second encoder
label_encoder_2 = LabelEncoder()
integer_sentiment = label_encoder_2.fit_transform(reviews_train.sentiment)
encoded_sentiment = to_categorical(integer_sentiment)

The Training Process

Our two models are ready to be trained. To train them, we call the fit() function on each model with the training data (reviews_tokenized), the target data (encoded_aspect or encoded_sentiment), the number of epochs, and the verbose parameter. Verbose lets us see the training progress for each epoch.

#fit aspect classifier
absa_model.fit(reviews_tokenized, encoded_aspect, epochs=100, verbose=1)
#fit sentiment classifier
sentiment_model.fit(reviews_tokenized, encoded_sentiment, epochs=100, verbose=1)
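
Since load_model was imported earlier, a natural follow-up (the file names here are our own choice) is to save the trained classifiers so that inference doesn’t require retraining:

#save the trained models (hypothetical file names)
absa_model.save('absa_model.h5')
sentiment_model.save('sentiment_model.h5')

#later, reload them without retraining
absa_model = load_model('absa_model.h5')
sentiment_model = load_model('sentiment_model.h5')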

The accuracy of our models can be improved further by tuning hyperparameters such as the number of epochs, the batch size, and the layer sizes.
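
As one illustrative option (our own addition, not part of the original pipeline), we can hold out a validation split and stop training once the validation loss stops improving:

from keras.callbacks import EarlyStopping

#stop when validation loss hasn't improved for 5 epochs, keeping the best weights
early_stop = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
absa_model.fit(reviews_tokenized, encoded_aspect, epochs=100, verbose=1,
               validation_split=0.2, callbacks=[early_stop])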

Finally, we test our models on a list of reviews, as shown below. Minor pre-processing, which includes lowercasing the reviews, is applied first.

test_reviews = [
    "Good, fast service.",
    "The hostess was very pleasant.",
    "The bread was stale, the salad was overpriced and empty.",
    "The food we ordered was excellent, although I wouldn't say the margaritas were anything to write home about.",
    "This place has totally weird decor, stairs going up with mirrored walls - I am surprised how no one yet broke their head or fall off the stairs"
]

# Aspect preprocessing
test_reviews = [review.lower() for review in test_reviews]
test_aspect_terms = []
for review in nlp.pipe(test_reviews):
    chunks = [chunk.root.text for chunk in review.noun_chunks if chunk.root.pos_ == 'NOUN']
    test_aspect_terms.append(' '.join(chunks))
test_aspect_terms = pd.DataFrame(tokenizer.texts_to_matrix(test_aspect_terms))
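
To see what the noun-chunk step extracts, we can run it on a single review (the exact output depends on the spaCy model version, so treat this as illustrative):

doc = nlp("the bread was stale, the salad was overpriced and empty.")
print([chunk.root.text for chunk in doc.noun_chunks if chunk.root.pos_ == 'NOUN'])
#something along the lines of: ['bread', 'salad']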
                             
# Sentiment preprocessing
test_sentiment_terms = []
for review in nlp.pipe(test_reviews):
    if review.has_annotation("DEP"):  # replaces the deprecated Doc.is_parsed in spaCy 3
        # keep lemmas of non-stopword, non-punctuation adjectives and verbs
        terms = [token.lemma_ for token in review
                 if not token.is_stop and not token.is_punct
                 and token.pos_ in ("ADJ", "VERB")]
        test_sentiment_terms.append(' '.join(terms))
    else:
        test_sentiment_terms.append('')
test_sentiment_terms = pd.DataFrame(tokenizer.texts_to_matrix(test_sentiment_terms))

# Models output (np.argmax over predict replaces the removed predict_classes method)
test_aspect_categories = label_encoder.inverse_transform(
    np.argmax(absa_model.predict(test_aspect_terms), axis=1))
test_sentiment = label_encoder_2.inverse_transform(
    np.argmax(sentiment_model.predict(test_sentiment_terms), axis=1))
for i in range(len(test_reviews)):
    print("Review " + str(i+1) + " is expressing a " + test_sentiment[i] + " opinion about " + test_aspect_categories[i])

The models performed fairly well in classifying our test reviews.

Conclusion

Aspect-based sentiment analysis (ABSA) can help businesses become customer-centric and place their customers at the heart of everything they do. It’s about listening to customers, understanding their voices, analyzing their feedback, and learning more about customer experiences, as well as their expectations for products or services.

While the use of computational approaches for mining customer opinions has proven quite promising, more needs to be done to improve the performance of these models. A plausible approach would be to use advanced word representation techniques, specifically contextual word embeddings.

Contextual word representations derived from pre-trained bidirectional language models (biLMs) have recently been shown to provide significant improvements to the state of the art for a wide range of NLP tasks. Among the popular contextual word embedding techniques is Google’s Bidirectional Encoder Representations from Transformers (BERT).

A contextual representation encodes each word based on the other words in the sentence. The performance of the classification model is therefore expected to improve significantly as a result of the rich contextual representations learned from user-generated content.
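
As a rough sketch of what this could look like (using the Hugging Face transformers package, which is our choice here rather than part of the original pipeline), contextual token vectors can be obtained from a pre-trained BERT model as follows:

from transformers import BertTokenizer, BertModel
import torch

bert_tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
bert_model = BertModel.from_pretrained('bert-base-uncased')

inputs = bert_tokenizer("The bread was stale.", return_tensors='pt')
with torch.no_grad():
    outputs = bert_model(**inputs)

#one 768-dimensional contextual vector per token: shape (1, num_tokens, 768)
print(outputs.last_hidden_state.shape)

These contextual vectors could then replace the bag-of-words features fed to the classifiers above.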
