<img height="1" width="1" style="display:none;" alt="" src="https://dc.ads.linkedin.com/collect/?pid=586106&amp;fmt=gif">

Contextual Sentiment

Quantify the sentiment of text with human-like accuracy

Overview

Understand and contextualize the sentiment of a piece of text just like a human. Using an ensemble of hybrid deep-learning models comprising sequential and feed-forward neural networks, the contextual sentiment knowledge function classifies sentiment with 96% accuracy. This near-human precision is achieved with a new class of models that outperform older methods reliant on positive/negative word counts. The knowledge function understands and interprets the relevant context, categorizing text the way a human would.
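As a rough, hypothetical sketch of such an ensemble (not Accrete's actual architecture), the snippet below averages the positive-class probabilities of a sequential (LSTM) classifier and a feed-forward classifier built on averaged token embeddings; the model names, vocabulary size, dimensions, and toy input are all illustrative assumptions.

import torch
import torch.nn as nn

VOCAB_SIZE, EMBED_DIM, HIDDEN_DIM = 10000, 64, 128

class SequentialSentiment(nn.Module):
    # Sequential model: an LSTM reads the text as an ordered token stream.
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.lstm = nn.LSTM(EMBED_DIM, HIDDEN_DIM, batch_first=True)
        self.head = nn.Linear(HIDDEN_DIM, 2)  # negative / positive logits

    def forward(self, token_ids):
        _, (h_n, _) = self.lstm(self.embed(token_ids))
        return self.head(h_n[-1])

class FeedForwardSentiment(nn.Module):
    # Feed-forward model: a small MLP over the mean of the token embeddings.
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB_SIZE, EMBED_DIM)
        self.mlp = nn.Sequential(
            nn.Linear(EMBED_DIM, HIDDEN_DIM), nn.ReLU(), nn.Linear(HIDDEN_DIM, 2)
        )

    def forward(self, token_ids):
        return self.mlp(self.embed(token_ids).mean(dim=1))

def ensemble_positive_probability(models, token_ids):
    # Average each model's softmax probability of the "positive" class.
    probs = [torch.softmax(m(token_ids), dim=-1)[:, 1] for m in models]
    return torch.stack(probs).mean(dim=0)

# Toy usage: random token ids stand in for a tokenized sentence.
models = [SequentialSentiment(), FeedForwardSentiment()]
tokens = torch.randint(0, VOCAB_SIZE, (1, 32))
print(ensemble_positive_probability(models, tokens))

Averaging the two probability outputs is what yields a continuous sentiment score between 0 and 1 rather than a hard positive/negative label.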

Key Features

  • Continuous stream -

    The function treats the text as a continuous stream rather than as a bag of words, and provides a probability measure that captures how positive or negative the text is.

  • Quantifying magnitude -

    Humans are generally good at qualifying and classifying statements into different classes, but not necessarily at quantifying their magnitude. Following this principle, the function is trained as a classifier and therefore achieves much better performance than its competitors.

  • Continuous Learning -

    The hybrid ensemble approach is used to maximize performance with respect to Bayes Human Error. As with every other function on our platform, this function can continually learn from new data while mitigating catastrophic forgetting (see the sketch after this list).
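The sketch below loosely illustrates continual learning with a replay buffer, one common way to mitigate catastrophic forgetting; it uses a simple incremental linear model as a stand-in for the hybrid ensemble, and every identifier and example text in it is assumed for illustration only.

from collections import deque
from sklearn.feature_extraction.text import HashingVectorizer
from sklearn.linear_model import SGDClassifier

vectorizer = HashingVectorizer(n_features=2**16)   # stateless text features
model = SGDClassifier(loss="log_loss")             # supports incremental updates
replay_buffer = deque(maxlen=1000)                 # sample of previously seen data

def learn_batch(texts, labels, replay_size=32):
    # Mix a slice of previously seen examples into each incremental update
    # so new data does not completely overwrite what was learned before.
    replay = list(replay_buffer)[:replay_size]
    all_texts = list(texts) + [t for t, _ in replay]
    all_labels = list(labels) + [y for _, y in replay]
    model.partial_fit(vectorizer.transform(all_texts), all_labels, classes=[0, 1])
    replay_buffer.extend(zip(texts, labels))

# Toy usage: two successive batches of newly labelled examples.
learn_batch(["Strong demand and rising margins.", "Guidance was cut again."], [1, 0])
learn_batch(["Record bookings this quarter."], [1])
print(model.predict_proba(vectorizer.transform(["Sales growth remained robust."]))[0, 1])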

Illustration
Demo
import requests

data = [
    {
        "Text": [
            "As Marvin shared, the home improvement backdrop remained strong, driven again by robust real residential investment and home price appreciation, which continues to encourage homeowners to engage in discretionary projects."
        ]
    }
]

# ACCRETE.SENTIMENT is the endpoint of the contextual sentiment knowledge function.
response = requests.post(ACCRETE.SENTIMENT, data=data)
print(response.json())

result:
[0.9927117349113222]
How It Works

The contextual sentiment function is built on the foundations of how humans perceive language and how we, as humans, sort things into categories. We are good at categorizing: on average, we agree about whether something is positive, negative, or neutral, but if asked to assign a quantitative score we would rarely land on the same number. This is the underlying principle behind the function. The ensemble of hybrid deep-learning models is first trained as a classifier for sentiment, rather than as a quantifier. The ensemble's probabilistic output is then fine-tuned to improve its value and performance for different use cases, e.g. trading in financial applications or measuring side-effects in pharmaceuticals. The models are built to achieve maximal performance with respect to the Bayes Human Error metric and to scale to problems that would be impossible for humans to comprehend.
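The snippet below is a minimal sketch of that "classify first, then reuse the class probability as a score" principle, using a simple TF-IDF and logistic-regression stand-in rather than the actual deep-learning ensemble; the texts, labels, and the example threshold mentioned in the comments are illustrative assumptions.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [
    "Revenue growth accelerated and margins expanded this quarter.",
    "Demand collapsed and the outlook was cut sharply.",
    "Results were broadly in line with expectations and guidance was maintained.",
    "Strong bookings and record free cash flow drove the beat.",
]
labels = [1, 0, 1, 1]  # human-style class labels: 1 = positive, 0 = negative

# Step 1: train as a classifier, because humans agree on classes, not on scores.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Step 2: reuse the probability of the positive class as a continuous sentiment
# score, which can then be calibrated or thresholded per use case (for example,
# a trading signal that only fires above 0.9).
score = model.predict_proba(["Home improvement demand remained strong."])[0, 1]
print(round(score, 4))

In practice, it is this probability-style output that can be calibrated or thresholded differently for each downstream use case.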

Ready to get started?

The simplest and fastest way to build your own AI

Get early access >>