
Sentiment Analysis in the Corporate Setting

by David Pinto


Consumer brands have been doing sentiment analysis for quite a while: they comb through unstructured customer feedback and decide whether it is positive or negative based on a raw score. Natural language processing tools take text from a variety of sources, such as emails, social media, and review sites, break down what people are saying, put words in context, and judge whether they are positive or negative for a brand.
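To make the idea concrete, here is a minimal sketch of raw-score sentiment scoring in Python. The tiny word lists, the scoring rule, and the sample feedback are assumptions made purely for illustration; commercial tools rely on far larger lexicons or trained models.

```python
# Minimal lexicon-based sentiment scoring (illustrative only).
POSITIVE = {"great", "love", "helpful", "fast", "recommend"}
NEGATIVE = {"broken", "slow", "refund", "disappointed", "worst"}

def raw_score(text: str) -> int:
    """Positive-minus-negative word count for one piece of feedback."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

feedback = [
    "Love the new app, support was fast and helpful",
    "Worst update ever, checkout is broken and I want a refund",
]
for text in feedback:
    score = raw_score(text)
    label = "positive" if score > 0 else "negative" if score < 0 else "neutral"
    print(f"{score:+d}  {label:<8}  {text}")
```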



Has this been done before?


When the US Federal Energy Regulatory Commission looked into one of the biggest corporate failures in history, Enron, it analyzed around 500,000 emails sent by employees, a collection that has since been made available to the public. The signs were there, but they had not been noticed at the time. Analysis of these emails, carried out years after the company went bankrupt in 2001, tells a compelling story.


After interactions between the company's top 150 executives were examined and index scores were computed at different points in time, the sudden bankruptcy filing started to make sense: around 30 months before the company went under, tensions rose. This in turn led to an initial accounting fraud investigation and ultimately to the demise of Enron. Further examination found that Enron was setting up partnerships to hide its losses, and that the official records kept by executives differed markedly from the emails they exchanged with one another. Had there been a way to track the prior habits of Enron's board of directors, the corporation might have been able to change its toxic habits and avoid both the cultural damage and the financial meltdown.
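As a rough illustration of how an index score evaluated at different points in time might be computed, the sketch below buckets per-email sentiment scores by month and flags the steepest drop. The records, field names, and values are invented for the example and are not taken from the real Enron corpus or from any published analysis.

```python
from collections import defaultdict
from statistics import mean

# Per-email sentiment scores, already computed upstream; values are
# illustrative only.
emails = [
    {"month": "1999-01", "score": 0.42},
    {"month": "1999-02", "score": 0.40},
    {"month": "1999-06", "score": 0.12},
    {"month": "1999-07", "score": 0.05},
]

by_month = defaultdict(list)
for e in emails:
    by_month[e["month"]].append(e["score"])

# Monthly index = average score of all mail sent that month.
index = {month: mean(scores) for month, scores in sorted(by_month.items())}
for month, value in index.items():
    print(month, round(value, 2))

# Flag the steepest month-over-month decline as a point worth investigating.
months = list(index)
drops = {m2: index[m2] - index[m1] for m1, m2 in zip(months, months[1:])}
worst = min(drops, key=drops.get)
print("largest drop:", worst, round(drops[worst], 2))
```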


How can it help my enterprise?


In the workplace, sentiment analysis could be used to uncover a wide range of issues, including:

  • Management perception: how people feel about leaders and the wider business.

  • Frustrated teams: especially those whose frustration may be linked to performance figures.

  • Reputational risk: employees spreading rumors.

  • Health and safety: people saying they do not feel safe, before a violation occurs.

  • Flight risk: employees who are likely to leave their jobs.

  • Diversity and inclusion: how people from certain groups fare in certain departments and how this affects other employees.

  • Misconduct: pinpointing rule-breaking and policy violations.


How can this be done?


Management can make changes right away based on immediate feedback and address issues that would otherwise remain unknown, for example after a new policy has been introduced or a significant change has been made to employee benefits.

The signs of a behavioral problem can be found in language, which helps you spot an employee issue before it gets worse and gives you the chance to do something about it. Predictive analytics means you can see whether the signs are there before an incident occurs.
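As a hedged sketch of that predictive idea, the example below trains a tiny text classifier (assuming scikit-learn is available) on messages labeled as resembling, or not resembling, language seen before past incidents. The training texts, labels, and risk score are illustrative assumptions, not any vendor's actual model.

```python
# Sketch: flag messages that resemble language seen before past incidents.
# Training data here is invented purely for illustration; a real deployment
# needs large, properly labeled datasets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "everything is on track, great sprint",
    "thanks, happy to help with the rollout",
    "this workload is unsustainable, nobody listens",
    "I am done raising this, management ignores every warning",
]
labels = [0, 0, 1, 1]  # 1 = resembles pre-incident language

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

new_message = ["no one is listening and the team is burned out"]
risk = model.predict_proba(new_message)[0][1]
print(f"estimated risk signal: {risk:.2f}")
```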

What about understanding context and sarcasm?


An academic group at a computational linguistics conference studied sentiment analysis of social media. They found that these tools could remember that a particular tweet was sarcastic because an operator told them so, but might fail to recognize sarcasm again when the context was different. This also raises questions about how inclusive sentiment analysis is. If the team that writes the "rules" for the analysis does not come from a wide range of backgrounds, the algorithm may only look for things that are common to that group and may miss the subtleties of other cultures, generations, or people who are neurodiverse.

This should be of little concern in a corporate setting: the volume of data is so vast that the system can treat sarcasm as part of a normal, roughly constant error rate.

As the story of Enron shows, however, the main selling point of sentiment analysis is that it can pick up on changes in employees' feelings that they might never state outright, giving it a depth of insight that other methods struggle to match. With more businesses moving to long-term remote and distributed working, organizations can quickly find problems and fix them, which helps retain good employees and reduces the risk of widespread dissatisfaction.


Okay, I’m in; what’s the solution?


ELEFense analyzes company communications in real time and builds a unified dashboard to measure corporate culture, sentiment, and keywords.

All data is analyzed at the aggregate level, so no individual can be identified, and machine learning makes the interpretations more accurate over time.


Employees usually say what they think management wants to hear when they fill out surveys, even ones run on a regular basis. By working in the background, sentiment analysis avoids this problem: it looks across many different data sources for comments and trends rather than asking employees for their thoughts directly.


This is especially important during stressful events like the pandemic, when employees are under more pressure than ever to tell their bosses what they want to hear. Interpreting and verifying survey responses requires a depth of analysis that is only practical with intelligent tools.

ELEFense can also be used predictively: the system can anticipate future departures by scanning employee communications for words that are often used in exit interviews. This means organizations can tell that someone is likely to leave up to nine months before they act on the decision.
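A minimal sketch of that keyword approach might look like the following: count exit-interview-style phrases per team and report only team-level rates. The phrase list, messages, and team names are assumptions for illustration, not ELEFense's actual vocabulary or logic.

```python
# Sketch: rate of exit-interview-style language per team (illustrative only).
EXIT_PHRASES = ["better opportunity", "no growth", "burned out", "updating my resume"]

messages = [
    {"team": "Payments", "text": "feeling burned out, there is no growth here"},
    {"team": "Payments", "text": "shipping the fix tonight"},
    {"team": "Support",  "text": "thanks, the new rota works well"},
]

def team_signal_rates(messages):
    counts, hits = {}, {}
    for m in messages:
        team = m["team"]
        counts[team] = counts.get(team, 0) + 1
        if any(p in m["text"].lower() for p in EXIT_PHRASES):
            hits[team] = hits.get(team, 0) + 1
    return {t: hits.get(t, 0) / counts[t] for t in counts}

for team, rate in team_signal_rates(messages).items():
    print(f"{team}: {rate:.0%} of messages contain exit-style language")
```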


In compliance-minded industries like financial services and healthcare, sentiment analysis can be used to look for signs of risk. For example, if certain keywords appear, the AI system can flag them as possible signs of fraud or harassment.


Some sentiment analysis can be done without the help of artificial intelligence; for example, keywords can be used to group conversations into themes. With machine learning, however, the algorithm can "learn" completely automatically how certain words and statements are affected by context.
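For the non-AI case, keyword-to-theme grouping can be as simple as the sketch below; the theme names and keyword lists are assumptions chosen for the example.

```python
# Sketch: group conversations into themes with plain keyword rules,
# no machine learning involved (illustrative keyword lists).
THEMES = {
    "workload":  ["overtime", "deadline", "capacity"],
    "pay":       ["salary", "bonus", "raise"],
    "wellbeing": ["stress", "burnout", "exhausted"],
}

def tag_themes(text):
    text = text.lower()
    return [theme for theme, words in THEMES.items() if any(w in text for w in words)]

print(tag_themes("Another missed deadline and unpaid overtime, I'm exhausted"))
# -> ['workload', 'wellbeing']
```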


Wait, is there a privacy issue?


The benefits of being able to search employee communications for clues about engagement are clear, but what about employees' right to privacy and possible data protection issues? The system does not look at any individual employee, only at groups of eight people or more, an approach designed to meet GDPR privacy requirements. No individual conversation can be viewed.


Throughout, ELEFense combines and anonymizes data so that individuals cannot be identified, and hides insights from small teams where a sudden change in behavior could be linked to just a few people.
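A simplified sketch of that aggregation step is shown below: sentiment is averaged per team, and any group smaller than the minimum of eight people mentioned above is suppressed. The data, field names, and helper function are illustrative and not ELEFense's implementation.

```python
# Sketch: team-level aggregation with small-group suppression (illustrative).
from statistics import mean

MIN_GROUP_SIZE = 8

def aggregate(records):
    """records: list of {'team': str, 'employee': str, 'sentiment': float}"""
    by_team = {}
    for r in records:
        by_team.setdefault(r["team"], []).append(r)
    report = {}
    for team, rows in by_team.items():
        people = {r["employee"] for r in rows}
        if len(people) < MIN_GROUP_SIZE:
            report[team] = "suppressed (group too small)"
        else:
            report[team] = round(mean(r["sentiment"] for r in rows), 2)
    return report

records = [{"team": "Ops", "employee": f"e{i}", "sentiment": 0.3 + 0.01 * i} for i in range(10)]
records += [{"team": "Legal", "employee": f"l{i}", "sentiment": -0.2} for i in range(3)]
print(aggregate(records))
```

Suppressing small groups before any figures leave the pipeline is what prevents a sudden change in behavior from being traced back to one or two individuals.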


Your electronic communication or data protection policies may already set out how the organization will collect and use employee data. Even so, a good general rule is that more transparency and consultation is better. Microsoft, for example, released a tool called Productivity Score in 2020 that claimed to show how employees used Teams, such as who they talked to in chat. After people objected that these insights amounted to employee surveillance, Microsoft removed the ability to see usernames.

Attitudes are shifting, though: 30 percent of employees said they were OK with their employers looking at their emails, compared with just 10 percent who said the same in 2015, and when an employer explained why monitoring was taking place, more than half of workers were fine with it. ELEFense data is combined in a way that makes employee identification impossible. If executed properly, no employee should have any real objection to their organization trying to make itself a happier workplace.

