Author | Aditya Khandekar
It's been a busy summer work-wise: we have been cooking up (literally!) AI-driven solutions for credit underwriting and fraud detection. There is huge promise in using Natural Language Processing (NLP) to extract signals from chat/voice/IVR interactions and use them for fraud detection. I read this quote recently, and it resonated in the context of account takeover fraud:
“Remember, behavior is the leading threat indicator. You can steal an identity, but you can't steal behavior.”
So, what is account takeover and how does it fit into an overall fraud system?
Account takeover occurs when a fraudster takes over a legitimate account and uses it for his or her own benefit. For example, in a credit card context, a fraudster could use phishing techniques to infiltrate your online banking account, change the billing address on your card account by calling the call center, request an add-on card, use it for large purchases, and disappear!
Generally, account takeover is a leading indicator of fraud, and if detected early using signals from structured and unstructured data, it can prevent transactional fraud downstream. In the flow below, an Account Takeover Engine (ATE) generates 'hot lists' of accounts every few hours, which inform the transaction fraud engine to monitor compromised accounts closely and alert customers as necessary.
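To make the hot-list idea concrete, here is a minimal sketch of what the ATE's output step might look like. The field names, threshold, and scores are all illustrative assumptions, not taken from any real system:

```python
from dataclasses import dataclass

# Hypothetical account-level takeover scores; in a real ATE these would
# combine signals from structured data and NLP-derived signals.
@dataclass
class AccountScore:
    account_id: str
    ato_score: float  # 0.0 (safe) to 1.0 (likely compromised)

def build_hot_list(scores, threshold=0.8):
    """Return account IDs whose takeover risk exceeds the threshold,
    highest risk first, for the transaction fraud engine to watch."""
    flagged = [s for s in scores if s.ato_score >= threshold]
    flagged.sort(key=lambda s: s.ato_score, reverse=True)
    return [s.account_id for s in flagged]

scores = [
    AccountScore("acct-001", 0.95),  # e.g. address change + add-on card request
    AccountScore("acct-002", 0.40),
    AccountScore("acct-003", 0.85),
]
print(build_hot_list(scores))  # ['acct-001', 'acct-003']
```

A job like this would run every few hours, feeding the downstream transaction fraud engine.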
I could go into more detail, but I want to move on to one component of the ATE system: getting signals from unstructured data using a powerful NLP engine.
Building powerful NLP engines
One of the drawbacks of traditional NLP is the amount of training data you need to feed it to get acceptable performance on predictions like "what is this chat transcript about?". One of our goals was: can we provide minimal training data and still get the NLP engine to "generalize" and perform well across the large corpus of text from which we want to derive signals?
With that objective, I was very intrigued by the potential power of two groundbreaking building blocks:
Word Embedding for data preparation
Convolutional Neural Nets (CNNs), which have mostly been used for image recognition
Let’s quickly discuss these two building blocks:
Word Embedding: Imagine representing each token of text with a 100-dimensional vector, produced by an algorithm that learns the meaning of a word from the "context" of the other words it is used with. So "laugh" and "joke" are related because they generally appear together. This explanation video offers a good technical description of it. There are pre-trained word embeddings available from Google (Word2Vec) and Stanford (GloVe). To make this real, in the diagram below you can find all the words that appear in the context of the word "problem".
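The "relatedness" of two words can be measured as the cosine similarity of their vectors. Here is a minimal sketch using toy 4-dimensional vectors; real embeddings would be 100-dimensional vectors loaded from a pre-trained model such as Word2Vec or GloVe, and the values below are made up for illustration:

```python
import numpy as np

# Toy stand-in vectors; a real embedding table maps tens of thousands
# of words to pre-trained 100-d vectors.
embeddings = {
    "laugh":  np.array([0.9, 0.8, 0.1, 0.0]),
    "joke":   np.array([0.8, 0.9, 0.2, 0.1]),
    "ledger": np.array([0.0, 0.1, 0.9, 0.8]),
}

def cosine_similarity(a, b):
    """Cosine of the angle between two vectors: near 1.0 for words
    that appear in similar contexts, near 0.0 for unrelated words."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["laugh"], embeddings["joke"]))   # high
print(cosine_similarity(embeddings["laugh"], embeddings["ledger"])) # low
```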
What is powerful is that word embeddings allow me to expand the vocabulary of the training text easily, without the need to add a large corpus of additional training data. I can then use this "enhanced" vocabulary while training my NLP model.
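One way to sketch this vocabulary expansion: for each word in the training data, pull in its nearest neighbours from the embedding space. The embedding values and words below are toy assumptions; in practice you would query a pre-trained model's similarity lookup over a full vocabulary:

```python
import numpy as np

# Hypothetical toy embedding table (3-d vectors for illustration).
embeddings = {
    "fraud":   np.array([0.9, 0.1, 0.0]),
    "scam":    np.array([0.8, 0.2, 0.1]),
    "theft":   np.array([0.7, 0.3, 0.0]),
    "invoice": np.array([0.1, 0.9, 0.2]),
}

def expand_vocabulary(seed_words, k=2):
    """Enlarge the training vocabulary by adding each seed word's k
    nearest neighbours in embedding space -- no extra labelled data."""
    def cosine(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    expanded = set(seed_words)
    for word in seed_words:
        neighbours = sorted(
            (w for w in embeddings if w != word),
            key=lambda w: cosine(embeddings[word], embeddings[w]),
            reverse=True,
        )
        expanded.update(neighbours[:k])
    return expanded

print(sorted(expand_vocabulary(["fraud"])))  # ['fraud', 'scam', 'theft']
```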
Convolutional Neural Nets (CNN): CNNs have been used extensively for image recognition tasks, but they are also very effective at identifying substructures in text data (tokens or sequences of tokens) in a way that is invariant to their position in a corpus of text.
CNNs built on top of word embeddings then become a feature extraction process: they extract salient features from documents, which are used for downstream document classification.
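The core of this feature extraction is "convolve then max-pool over time": each filter slides over every window of consecutive word vectors, and the maximum activation is kept, so a pattern is detected wherever it occurs in the sentence. A minimal NumPy sketch (toy dimensions and random values, standing in for a trained network):

```python
import numpy as np

rng = np.random.default_rng(0)

# A sentence as a sequence of 7 toy 8-d word vectors; a real system
# would look these up in a pre-trained embedding table.
sentence = rng.normal(size=(7, 8))

def conv_max_pool(sent, filters):
    """Slide each filter over every window of consecutive tokens, then
    max-pool over time: one salient feature per filter, regardless of
    where in the sentence the pattern occurred."""
    n_filters, width, dim = filters.shape
    n_windows = sent.shape[0] - width + 1
    features = np.empty(n_filters)
    for f in range(n_filters):
        activations = [
            np.sum(filters[f] * sent[i:i + width]) for i in range(n_windows)
        ]
        features[f] = max(activations)  # position-invariant max-over-time
    return features

# 4 filters, each spanning windows of 3 tokens (a trigram detector).
filters = rng.normal(size=(4, 3, 8))
print(conv_max_pool(sentence, filters).shape)  # (4,)
```

The resulting fixed-length feature vector feeds the downstream classifier; in a trained model the filter weights are learned rather than random.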
To understand how this all fits into a machine learning pipeline, see a simplified flow diagram below:
Focusing a bit deeper on the CNN architecture for feature engineering, I am leveraging a conceptual diagram taken from the paper "Convolutional Neural Networks for Sentence Classification".
NLP Engine Performance
So, how did the NLP engine do for tagging unseen text?
I started the article with the premise that I wanted to achieve high performance while training the NLP model on a small dataset. To that end, I trained the NLP engine on only 13 "made-up" examples of client interactions with a bank. Some samples are included below:
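The training set boils down to a handful of labelled (text, tag) pairs. The examples below are my own illustrative stand-ins, not the article's actual 13 samples; the label names follow the categories discussed later in the post:

```python
# Hypothetical seed examples in the spirit of the 13 made-up
# interactions; both texts and labels are illustrative.
train_data = [
    ("My card was declined at the grocery store again",
     "Usage Issues"),
    ("Someone changed my billing address without my permission",
     "Fraud"),
    ("I have been on hold for an hour, this is unacceptable",
     "Negative Customer Experience"),
]

for text, label in train_data:
    print(f"{label}: {text}")
```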
I then took "live" examples of interactions between the bank and its customers and scored them through the NLP engine. We got very good results. Let's look at some examples:
In the first example, the NLP engine was able to detect the subtle combination of "Usage Issues" and potentially "Fraud", though leaning more towards Usage Issues.
In the second example, it clearly identified the issue as a "Usage Issue" rather than a "Negative Customer Experience". One more thing: despite the many grammar issues in this live text, the NLP engine was able to ignore them!
Hopefully this post has given you a flavor of how NLP-driven signals can be used to drive higher-level use cases like fraud detection, and of some of the methodological advances used to build powerful, generalizable NLP engines.
Visit us at https://www.scienaptic.com/blog to read more thought-leadership articles on the practical application of analytics to real business problems.