When most of us picture the future of technology, one of the first things that comes to mind is a world where the mundane tasks of everyday life are handled by robots and computers. The image that always comes to my mind is Tony Stark and his virtual butler Jarvis.
In the movie, Tony Stark can write code, run his company, and presumably order a pizza simply by talking to Jarvis. Real-world counterparts like Apple’s Siri and Google Assistant seem to pale in comparison. They are getting better; gone are the days when a bad Siri moment could mean texting your mom an invite to a “tastefully” themed college party. Still, most virtual assistants can only execute a few key commands.
You probably already know that tools like Siri and Google Assistant use machine learning and AI to learn the nuances of language. The specific field responsible for this is called natural language processing, or NLP: the field of computer science that gives computers the ability to understand written and spoken language the way people do. This is a big task, since language usually involves more than words; factors like context and intent can change the meaning of a statement. Even in basic conversation, colloquialisms, contextual words, and synonyms can create big problems for computer systems.
But how far along is this science, and how long until NLP and machine learning become useful in everyday situations? What is the future of qualitative analysis with NLP?
Today, most NLP models have conquered the basics of language comprehension. They can read through a piece of text and break it down into parts of speech. They can compare this data to a dictionary to figure out, at a high level, what the piece of text is probably about. They can also group lines of text into topics, and use dictionaries of positive and negative words to discover sentiment.
Of course, these systems still need an analyst to verify their findings and fill in the gaps, but even this basic ability to dissect and summarize language is a huge step forward for qualitative analysts who would otherwise have to sift through thousands of lines of text on their own.
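To make this concrete, here is a minimal sketch of that basic dissection using the open-source Python library NLTK. The example sentence and the tiny positive/negative word lists are invented for illustration; a real pipeline would use a much richer lexicon.

```python
# A minimal sketch of basic text dissection with NLTK (pip install nltk).
# The example sentence and the tiny sentiment lexicon are invented for illustration.
import nltk

nltk.download("punkt", quiet=True)                       # tokenizer data
nltk.download("averaged_perceptron_tagger", quiet=True)  # POS tagger data

text = "The participants loved the new process but hated the paperwork."

# Break the text into tokens, then tag each token with its part of speech.
tokens = nltk.word_tokenize(text)
print(nltk.pos_tag(tokens))  # e.g. [('The', 'DT'), ('participants', 'NNS'), ...]

# A toy dictionary of positive and negative words for lexicon-based sentiment.
positive, negative = {"loved", "enjoyed", "great"}, {"hated", "awful", "slow"}
score = sum((w.lower() in positive) - (w.lower() in negative) for w in tokens)
print("Sentiment score:", score)  # above zero suggests overall positive text
```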
Use Cases for NLP in Qualitative Analysis and Beyond
With a core structure of NLP in place to understand the basics of language, the race is on to bridge the gap and transform this core technology into something that can be used in the world beyond academia. Many companies, ours included, are releasing products that leverage NLP to provide insights into the minds of stakeholders or improve the efficiency of customer interaction.
Virtual assistants: As mentioned above, Apple’s Siri and Google Assistant are examples of virtual assistants that take voice commands to execute daily tasks like adding an appointment, changing a song, or even adjusting the temperature of your home. Thanks to the massive amounts of data these programs learn from, and their use of AI, they are in a position to handle more advanced tasks, like distinguishing different people by the sound of their voice, or using text-to-speech technology to speak back to you in a unique voice.
Sentiment analysis on social media and customer feedback: NLP is becoming an important tool for businesses everywhere. Market research is enhanced by analyzing sentiment around different topics on social media. The same can be done with customer reviews to provide insight into the specific product features customers loved and hated, along with ideas for improvements that come straight from the source (see the short sketch after this list).
Qualitative research: For researchers who use qualitative tools like interviews and surveys in their work, NLP can seem too good to be true. Advances in areas such as speech-to-text (see Siri and Google Assistant above) mean that researchers can search through hours of recorded interviews and create transcripts in a fraction of the time.
Chatbots: Chatbots are often the first step in modern customer service interactions. By analyzing thousands of customer interactions, these bots can learn to recognize patterns in the way people speak and provide even better responses over time. The data from these bots can give valuable insights that qualitative analysts can use to improve the customer experience.
Translation services: These are a classic example of NLP at work; however, you may remember playing a game of telephone with these, translating a sentence back and forth to see how warped it could get. The good news is that these services have improved a lot, and are more reliable than ever before.
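As a rough illustration of the customer-feedback case mentioned above, the sketch below scores a few invented reviews with VADER, the lexicon-based sentiment analyzer that ships with NLTK.

```python
# A small sketch of sentiment analysis over customer feedback,
# using NLTK's VADER analyzer. The reviews here are invented examples.
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)

reviews = [
    "The battery life is fantastic, easily lasts two days.",
    "Shipping took three weeks and support never answered.",
    "Decent product, though the manual is confusing.",
]

analyzer = SentimentIntensityAnalyzer()
for review in reviews:
    scores = analyzer.polarity_scores(review)
    # 'compound' ranges from -1 (very negative) to +1 (very positive).
    print(f"{scores['compound']:+.2f}  {review}")
```

In practice an analyst would still review the borderline cases, but even this crude scoring can triage thousands of reviews in seconds.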
Key Problems for NLP
With the core technology of NLP in place and customers eager to discover its magic, the focus is on bridging the gap between working NLP models and customer expectations. This is tricky because several natural language processing problems must be solved before products can fit seamlessly into the lives of customers:
Contextual words and phrases: One of the most important parts of understanding human language is context. We use context clues all the time to fill in the gaps of what we’re saying. For example, if your friend says they had a good time on their date, you know from context that in this case the word “date” is referring to a date with a person instead of a date in the calendar.
Synonyms: Similar to context, we often have many different words that mean the same thing or almost the same thing. Adding to the complexity, different people will use the same word to mean different things. For NLP, this is solved by having a complete dictionary, which may include domain-specific information. The quality of a dictionary will depend on the training data given to the model, something we’ll touch on later.
Irony and sarcasm: Have you ever received a sarcastic text message and been confused about what to say next? That’s what it’s like for a computer model dealing with ironic or sarcastic statements. The default for computers is to take all information literally unless they are given training data that can show them what to look for in sarcastic statements.
Ambiguous language: In NLP, ambiguity means that a sentence or word has two or more possible interpretations. These kinds of statements are common in our language, so they are a major problem for developers to solve.
Lexical ambiguity is when a single word has more than one possible meaning or part of speech; “saw” can be either the past tense of “to see” or a wood-cutting tool. This is usually solved with context clues, as the sketch after this list shows.
Semantic ambiguity is when a whole sentence can be interpreted in more than one way. The sentence “I saw you using my new binoculars” could mean that I watched you through my binoculars, or that I watched while you were using my binoculars.
Colloquialisms, slang, and jargon: Similar to the problems of synonyms and contextual words, colloquialisms, slang, and jargon require a combination of complete dictionaries and context clues to sort out. Language is always evolving, so models constantly need new, accurate data to train from.
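To show how context clues can resolve lexical ambiguity in practice, here is a rough sketch using the classic Lesk algorithm from NLTK. It uses the textbook ambiguous word “bank” rather than “saw” (WordNet files the past tense of “to see” under the lemma “see”), and the sentences are invented.

```python
# A rough sketch of word sense disambiguation with the classic Lesk algorithm.
# "bank" is the textbook ambiguous word; the sentences are invented examples.
import nltk
from nltk.wsd import lesk

nltk.download("punkt", quiet=True)
nltk.download("wordnet", quiet=True)

sentences = [
    "I deposited my paycheque at the bank this morning.",
    "We had a picnic on the bank of the river.",
]

for sentence in sentences:
    tokens = nltk.word_tokenize(sentence)
    # Lesk picks the WordNet sense whose definition best overlaps the context.
    sense = lesk(tokens, "bank")
    print(sense.name(), "->", sense.definition())
```

Lesk is a simple word-overlap heuristic and gets plenty of cases wrong, which is exactly why modern models lean on learned context instead.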
Bridging the gap between functional technology and a usable product requires natural language processing models that can interpret context clues and process ambiguous situations. But how does that happen?
NLP models started with word-based dictionaries (e.g. WordNet) that hold information about the meaning of words, including details like their associated sentiment (e.g. joy = positive, hate = negative). These dictionaries can be manually created by developers who pre-determine a set of rules, but the reality is that rule-based systems take a long time to build and age poorly. Instead, machine learning models that leverage NLP develop their own dictionary, using neural nets and deep learning algorithms to build off training data fed to them by developers. Training data is created by having a group of people annotate a piece of text, like a Wikipedia article, often through crowdsourcing initiatives. By observing patterns in the data, the model builds an evolving dictionary to use in real-world applications. The quality of training data is therefore paramount to developing NLP models that are actually useful for qualitative analysis.
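As a peek at what such a dictionary actually contains, the snippet below queries WordNet for the senses of “joy” and SentiWordNet (a sentiment-scored layer over WordNet, also bundled with NLTK) for the sentiment attached to one of those senses.

```python
# A quick look at what a word-based dictionary like WordNet stores,
# with SentiWordNet layering sentiment scores on top of each word sense.
import nltk
from nltk.corpus import wordnet, sentiwordnet

nltk.download("wordnet", quiet=True)
nltk.download("sentiwordnet", quiet=True)

# WordNet holds one or more "synsets" (distinct senses) per word...
for synset in wordnet.synsets("joy"):
    print(synset.name(), "->", synset.definition())

# ...and SentiWordNet attaches positive/negative scores to each sense.
joy = sentiwordnet.senti_synset("joy.n.01")
print("joy.n.01 -> positive:", joy.pos_score(), "negative:", joy.neg_score())
```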
Training data is clearly an important part of making great NLP models, but getting data that reflects the majority of people can be difficult. A classic example of this is the phenomenon of WEIRD data. WEIRD stands for Western, Educated, Industrialized, Rich, and Democratic; it comes from a 2010 paper by Joseph Henrich, Steven Heine, and Ara Norenzayan, which points out that the broad claims behavioral scientists make about the human psyche are based on data from people who live in Western, industrialized, democratic societies and who are likely educated and rich compared to the rest of the world. The same can be said for the training data that has been used to teach machine learning models. The result is a model that is only capable of accurately analyzing qualitative data produced by other WEIRD people. Even when models use more realistic data from crowdsourcing or social media, they quickly reveal our own inherent biases.
One option to deal with this problem of WEIRD data is to broaden the experiences of the people annotating data. This creates "thick" data, which can give developers more bang for their buck during training. By training models on "thick" data provided by the people the model will work with, there is a greater likelihood that the model will be useful. This is one of the key differences between Amazon’s Alexa and Apple’s Siri. Alexa learns by aggregating the data from all customer interactions together so that it can improve over time. Your interaction with Alexa is informed by your neighbours, and their neighbours, and so on. Apple has taken a different route in the interest of data privacy; Siri won’t send your data back to Apple. The tradeoff is that it takes Siri longer to learn the nuances of human language, but customers enjoy a higher degree of data privacy.
Conclusion: The Future of Qualitative Analysis with NLP
Natural Language Processing is undoubtedly the key to making the interaction between everyday people and powerful technology tools feel seamless. When customers can speak directly to a virtual assistant or chatbot, companies save time and resources that they can redirect toward developing better products. Feedback from stakeholders can be analyzed to determine sentiment and inform the strategic planning of corporations, educational institutions, and even governments. Health authorities can use NLP to scan for signs of disease in the ways we speak and write. I’ll finally get to be Tony Stark.
For this to happen, NLP models need to understand how a typical person speaks and writes. This includes picking up on context clues, resolving ambiguity, and handling more advanced concepts like irony and sarcasm.
One of the best ways to speed up this process is to improve the quality of training data to better reflect the majority of people, rather than a narrow group already engaged in academic research.
When this is done, anyone who works with qualitative data will be strengthened by a robust Natural Language Processing toolkit, which will not only increase the efficiency of ongoing projects but also make this kind of research accessible to those with limited resources.
If you found this article useful, you might enjoy our newsletter. It’s a bi-monthly email that keeps you up to date on what we’re up to, along with articles on topics we find interesting.
If you want to dive deeper, sign up for a free, 30-minute consultation to see what Analytics can do for you.