LAW-GAME Consortium partner Helvia.ai presented a paper on conversational GPT models for few-shot text classification at the 5th Financial Technology and Natural Language Processing (FinNLP) workshop in Macau, China. The event took place from 19 to 25 August 2023 and drew an international audience eager to catch up on cutting-edge developments in AI.
The IJCAI-2023 Joint Workshop of the 5th FinNLP brought together researchers from the natural language processing, computer vision, speech recognition, machine learning, statistics and quantitative trading communities to expand research at the intersection of AI and finance.
Lefteris Loukas from Helvia.ai presented the paper titled “Breaking the Bank with ChatGPT: Few-Shot Text Classification for Finance”. The paper evaluates the best conversational GPT approaches for few-shot text classification, and in particular how to classify user messages using a) in-context learning; and b) contrastive learning with a very small set of examples instead of a large task-specific dataset. The most promising approaches, including traditional fine-tuning and in-context learning with Large Language Models (LLMs), are evaluated on a public dataset from the banking domain.
The results are relevant to any application that classifies user messages, and the insights are directly applicable to the LAW-GAME project.
The challenge of identifying the user intent in conversational bots
One of the main challenges when designing conversational bots is making sure that the bot correctly understands what the user has asked. This is difficult because people phrase their questions in a variety of ways, using different terms and expressions to convey the same meaning. That’s where intent detection comes in. Intent detection is a classification process that assigns the user’s inquiry to one of several categories based on its underlying intention. Once the chatbot understands the intent of the user’s inquiry, it can deliver the most appropriate response.
Through the application of NLP techniques and machine learning algorithms, chatbots can interpret user inquiries and categorize them according to their intended meaning, enabling tailored and effective responses. Nonetheless, classifying intents can be very difficult, particularly when categories overlap or only a limited number of training examples is available.
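To illustrate the classification idea, here is a minimal, self-contained sketch of intent detection: the query is compared against a handful of labelled example utterances and assigned the intent of its closest match. The intents and utterances below are illustrative placeholders, not data from the paper, and the bag-of-words similarity stands in for the far more capable models discussed later.

```python
from collections import Counter
import math

# Hypothetical intents, each with only a few labelled utterances
# (the "few-shot" situation described in the article).
EXAMPLES = {
    "card_arrival": ["When will my card arrive?", "Is my new card on its way?"],
    "exchange_rate": ["What exchange rate do you use?", "How are currency rates set?"],
    "lost_card": ["I lost my card, what do I do?", "My card is missing"],
}

def bow(text):
    """Bag-of-words vector: a Counter of lowercase tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * \
          math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def detect_intent(query):
    """Return the intent whose examples best match the query."""
    q = bow(query)
    scores = {
        intent: max(cosine(q, bow(u)) for u in utterances)
        for intent, utterances in EXAMPLES.items()
    }
    return max(scores, key=scores.get)

print(detect_intent("my card is missing"))  # → lost_card
```

Real systems replace the word-overlap similarity with learned sentence embeddings or a fine-tuned classifier, but the structure of the task, mapping a free-form message onto a fixed set of intents, is the same.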
The challenge of intent detection is particularly evident in the context of the LAW-GAME project, where Helvia.ai leads the development of interactive chatbots that support the project’s use cases. For LAW-GAME, intent detection involves an extra level of complexity, since only limited data is available to train the AI models.
Intent detection plays a critical part in creating chatbots that can offer precise and useful answers to user inquiries. However, it remains largely unexplored across several industries due to the scarcity of appropriate datasets. The presented paper contributes to connecting the industry with the latest academic advancements in intent classification.
Applying Generative LLMs to tackle the classification challenge across various domains
The study focused on the intent classification task using a real-world, open dataset (Banking77). The dataset comprises customer service queries related to finance, spread over 77 classes with significant semantic overlap.
Acquiring sufficient data for machine learning models can be challenging in a business setting. To address this, the team adopted a “few-shot setting”, which is more practical for organizations with limited annotated data. Helvia used various methods to train models with a small number of examples, including in-context learning with GPT-3.5 and GPT-4, fine-tuning masked language models (MLMs), and few-shot contrastive learning. The team also curated a representative subset through human expert annotation to tackle the challenges posed by class overlaps.
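The in-context learning approach can be sketched as follows: instead of training a model, a prompt is assembled from a few labelled demonstrations followed by the message to classify, and a conversational LLM is asked to complete the intent. The labels and examples below are illustrative and do not reproduce the paper’s actual Banking77 prompt; the API call itself is omitted, as providers and signatures vary.

```python
# Hypothetical few-shot demonstrations (query, intent) pairs.
FEW_SHOT_EXAMPLES = [
    ("When will my new card get here?", "card_arrival"),
    ("What rate do you convert currencies at?", "exchange_rate"),
    ("I can't find my card anywhere", "lost_card"),
]
LABELS = sorted({label for _, label in FEW_SHOT_EXAMPLES})

def build_prompt(query):
    """Assemble a few-shot classification prompt: an instruction,
    the labelled demonstrations, then the query to classify."""
    lines = [
        "Classify the customer message into one of these intents: "
        + ", ".join(LABELS) + ".",
        "",
    ]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Message: {text}")
        lines.append(f"Intent: {label}")
    lines.append(f"Message: {query}")
    lines.append("Intent:")
    return "\n".join(lines)

prompt = build_prompt("Is my replacement card on the way?")
print(prompt)
# The resulting string would be sent to a conversational LLM
# (e.g. GPT-3.5 or GPT-4), whose completion is the predicted intent.
```

This is what makes the approach attractive when data is scarce: the only “training set” is the handful of demonstrations embedded in the prompt.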
The research indicates that in-context learning with conversational LLMs can produce accurate results even with limited training data. Generative LLMs like GPT-3.5 and GPT-4 can outperform MLM models in scenarios where data is scarce, but they come with substantial costs. The findings extend beyond the finance industry and can be applied to other fields where accurate, prompt results are crucial despite limited examples.
In future work, the team plans to explore other generative open-source models and cost-effective ways of deploying LLMs.
For more information visit the LAW-GAME website at: https://lawgame-project.eu/
Stay tuned for the project’s latest updates by following our social media channels at:
LAW-GAME has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 101021714.