July 1, 2024

Cohere’s LLM Solutions Advance Knowledge Work in Financial Services

July 1, 2024 — While generative AI (GenAI) is reinventing how work gets done across industries, financial services (FS) is poised to reap the greatest productivity gains. According to Accenture, banks could see an impressive 30% boost in productivity, topping the more than 20 industries evaluated.

The reason for this is straightforward: the strengths of large language models (LLMs) align perfectly with both a top differentiator among competing banks — customer service — and a function that sits at the heart of the industry: knowledge work.

An intensely competitive environment has historically motivated financial services firms to adopt new technologies earlier than most other industries, the tech sector aside. With LLMs, all indications are that FS will again stand at the forefront of innovation.

McKinsey’s latest State of AI survey shows 41% of respondents from the industry investing anywhere from 6% to 20% of digital budgets in GenAI, behind only the tech sector (and on par with energy). A steady cadence of press releases from top FS firms touting their GenAI pilots, along with what can be seen in Cohere’s work, corroborates the high interest.

The same McKinsey research, however, shows FS lagging in adoption, ahead of only the healthcare sector to date. This likely stems from concerns about the security risks associated with LLMs, and it mirrors the cautious approach taken in the early days of the cloud.

To help more FS firms tap into GenAI use cases, this post will show how first movers are increasing knowledge worker efficiency and improving customer service, as well as address concerns for which mitigations exist.

Making Knowledge Work More Efficient

Knowledge work is at the core of financial services. Investment bankers, financial planners, credit analysts, wealth advisors, equity analysts, risk managers — the list of industry roles that deal with vast amounts of information daily could fill the rest of this article. LLM applications can support them, both in finding information and analyzing it.

Faster Knowledge Search and Synthesis

These workers spend significant chunks of their days sifting through and extracting insights from troves of financial data, product specs, procedural documentation, and spreadsheets. Numerous studies have quantified the substantial amount of time workers spend hunting for information, with estimates ranging from 2 hours to 3.6 hours per day. The search, retrieval, and summarization capabilities of LLMs offer a significant source of time savings.

Homing in on one role, wealth advisors must monitor capital markets closely to manage risks and help clients make wise investment choices. While traditional machine learning models have enabled institutions to amass vast amounts of data, advisors still must sift through a myriad of documents — from regulatory filings to economic reports — to find the right data.

AI knowledge assistants are changing this by distilling the latest financial statements and market analysis into concise summaries within seconds, and then answering follow-up questions to provide sources, clarifications, comparisons, and contextualization.

One financial technology company, whose customers include banks and asset managers, built a GenAI solution that enables advisors to quickly find information-rich documents, like 10-K filings, and analyze them. Advisors can, for instance, ask for a company’s gross margin and then ask what other information in the report — perhaps in the MD&A or notes — might have driven it to be unusually high or low. Or they can ask about a competitor’s position in the market based on statements in various documents, such as investment analyst notes, account filings, and earnings transcripts.

The solution is built on a retrieval-augmented generation (RAG) architecture that combines Cohere’s suite of models: Command R+, Embed, and Rerank. Embeddings ensure that the application extracts the information that best reflects the query’s context and how it relates to other information in the source documentation. Rerank improves the accuracy of search results to ensure that they’re matched with the advisor’s intent. And Command R+ can generate relevant responses and document summaries with source citations that ensure that the information delivered is verifiable and trustworthy.
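The retrieve → rerank → generate flow described above can be sketched in a few lines. This is a toy illustration only: the scoring functions below are simple stand-ins, not Cohere’s Embed or Rerank models, and a real system would call the model APIs at each stage.

```python
# Toy sketch of a RAG pipeline: coarse retrieval, precision reranking,
# then a grounded answer with numbered source citations. The scoring
# here is a word-overlap stand-in for real embedding and rerank models.

def embed_retrieve(query, documents, top_k=3):
    """Stage 1: coarse retrieval, scored by word overlap with the query."""
    q_words = set(query.lower().split())
    scored = [(len(q_words & set(d.lower().split())), d) for d in documents]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [d for _, d in scored[:top_k]]

def rerank(query, candidates, top_n=2):
    """Stage 2: reorder candidates, preferring exact phrase matches."""
    def score(doc):
        return (query.lower() in doc.lower(), len(doc))
    return sorted(candidates, key=score, reverse=True)[:top_n]

def generate_with_citations(query, sources):
    """Stage 3: compose an answer string grounded in cited sources."""
    cited = "; ".join(f"[{i + 1}] {s}" for i, s in enumerate(sources))
    return f"Answer to {query!r} grounded in: {cited}"

docs = [
    "Gross margin rose to 62% on lower input costs (10-K, MD&A).",
    "Revenue grew 8% year over year.",
    "The board declared a quarterly dividend.",
]
top = rerank("gross margin", embed_retrieve("gross margin", docs))
print(generate_with_citations("gross margin", top))
```

Keeping the citations attached to each generated answer is what makes the output auditable: an advisor can click through to the underlying filing rather than trusting the summary blindly.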

Improving Data Analysis

LLM solutions can do more than just find knowledge — they can also help analyze it. Enterprises can automate financial, operational, and tabular data analysis by leveraging LLMs to examine various factors and generate reports. Equipping an LLM with a Python console, for example, enables organizations to start analyzing spreadsheets and financial data automatically.
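The Python-console pattern can be sketched as a small tool the application exposes to the model: the model emits code as text, the application executes it against loaded data, and the result flows back into the conversation. In this illustration the “generated” snippet and the CSV data are hard-coded assumptions; a real system would receive the code from the LLM and would sandbox its execution.

```python
# Minimal sketch of giving a model a Python "console" tool for
# spreadsheet analysis. The generated_code string stands in for what
# an LLM would actually produce at runtime.
import csv
import io
import statistics

CSV_DATA = """quarter,revenue
Q1,120
Q2,135
Q3,150
Q4,180
"""

def run_python(code: str, data: str) -> dict:
    """Execute model-generated code with the parsed spreadsheet in scope."""
    rows = list(csv.DictReader(io.StringIO(data)))
    scope = {"rows": rows, "statistics": statistics}
    exec(code, scope)  # real deployments would sandbox this call
    return scope.get("result", {})

# Assumed model output: compute mean revenue and Q1->Q4 growth.
generated_code = (
    "revenues = [float(r['revenue']) for r in rows]\n"
    "result = {'mean': statistics.mean(revenues),"
    " 'growth_pct': round(100 * (revenues[-1] / revenues[0] - 1), 1)}\n"
)
print(run_python(generated_code, CSV_DATA))
```

The key design point is that the model never touches the data directly; it only writes code, and the application decides what that code is allowed to see and do.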

Financial AI assistants are also capable of executing sophisticated prompts that can scan across documents, extract common information, and organize it in a standardized format that makes it easier to spot trends. In one case, the managing partners of an investment firm use Command R+ as part of a solution that helps them assess performance across their portfolio. Every quarter, they ask leaders of portfolio companies to send an update on how their organization is performing on various dimensions. They feed the updates, along with a proprietary prompt, into Cohere’s generative model, which looks for patterns in the updates, such as sectors seeing the most growth or common attributes among their companies that correlate with success. The model then delivers its analysis to the partners in an easy-to-parse tabular format.

Better Customer Service Through Quick, Accurate Responses

Customer service quality is a top factor influencing where people choose to do their banking and other financial activities. In retail banking, for example, a 2024 J.D. Power survey found that poor customer service is almost as important as unexpected fees when it comes to why customers might switch banks. And banks with high customer satisfaction often see an increase in customer deposits.

While many factors can impact customer service, one significant challenge has been the speed and efficiency with which banking staff can respond to customer questions or issues. Bank workers often need to navigate many and varied documents, from policy manuals to product specs, to find relevant information, which can be time-consuming and cumbersome.

Building AI assistants with advanced retrieval capabilities and semantic search offers a massive performance upgrade over typical keyword searches. With turbocharged search, any role involved in customer service — from support agents to wealth advisors — can quickly surface more relevant answers to customer queries from voluminous knowledge stores and across databases. These assistants can even be configured to serve as virtual coaches, guiding staff through each step required to address a customer concern, such as a suspected account compromise.
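The difference between keyword and semantic search comes down to matching meaning rather than surface words. The sketch below makes the contrast concrete with hand-made three-dimensional “embeddings”; a production system would obtain high-dimensional vectors from an embedding model, and the vectors and document titles here are purely illustrative assumptions.

```python
# Toy contrast: keyword matching misses a relevant document when the
# query shares no words with it, while cosine similarity over
# (hand-made) embedding vectors still finds it.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b))
    return dot / norm

# Hypothetical vectors; dims roughly capture (fraud, fees, mortgages).
kb = {
    "Steps to lock a compromised account": [0.9, 0.1, 0.0],
    "Schedule of wire transfer fees":      [0.1, 0.9, 0.1],
    "Fixed-rate mortgage refinancing":     [0.0, 0.1, 0.9],
}
query = "customer reports suspicious card activity"
query_vec = [0.8, 0.2, 0.1]  # assumed embedding of the query

# Keyword search: no title shares a word with the query.
keyword_hits = [t for t in kb if any(w in t.lower() for w in query.split())]

# Semantic search: rank by vector similarity instead.
best = max(kb, key=lambda t: cosine(query_vec, kb[t]))
print(keyword_hits, "->", best)
```

A query about “suspicious card activity” contains none of the words in “lock a compromised account,” so keyword search returns nothing; the embedding comparison still surfaces the right procedure.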

And now, with newer capabilities like multi-step tool use, LLMs can also help support staff perform tasks that resolve issues faster. For example, after a recorded conversation with a customer, an LLM with tool use could ingest the chat transcript, update the support ticket based on the conversation, and then execute downstream functions, such as submitting a return request or modifying order details in a separate application.
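The multi-step tool-use pattern amounts to the model planning a sequence of named tool calls that the application dispatches to real functions. In this sketch the planned calls, ticket IDs, and tool names are hypothetical stand-ins for what a model would emit after reading a transcript.

```python
# Sketch of a tool-use dispatch loop: the application registers tools,
# and each (name, arguments) step planned by the model is routed to the
# matching function. State changes show the end-to-end effect.
tickets = {"T-101": {"status": "open", "notes": []}}
returns = []

def update_ticket(ticket_id, summary):
    """Append a conversation summary to the support ticket."""
    tickets[ticket_id]["notes"].append(summary)
    return f"ticket {ticket_id} updated"

def submit_return(order_id, reason):
    """File a return request in the (simulated) order system."""
    returns.append({"order": order_id, "reason": reason})
    return f"return filed for {order_id}"

TOOLS = {"update_ticket": update_ticket, "submit_return": submit_return}

# Assumed model plan, derived from a chat transcript in a real system.
planned_calls = [
    ("update_ticket",
     {"ticket_id": "T-101",
      "summary": "Customer requests refund for order O-77"}),
    ("submit_return", {"order_id": "O-77", "reason": "defective item"}),
]

results = [TOOLS[name](**args) for name, args in planned_calls]
print(results)
```

Because every step passes through the application’s dispatcher, the firm keeps an auditable record of exactly which actions the model triggered, which matters in a regulated environment.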

Managing the Risks

Understandably, financial services firms want to ensure that they can manage risks associated with the use of LLMs, just as they have done with the cloud and other innovative technologies. It’s not only sound practice to protect and serve clients, but it’s also necessary for compliance with strict regulations that the industry faces around data usage, model-driven decision-making, and other rules pertinent to the use of LLMs.

There are three primary concerns Cohere hears most often in conversations with customers: protecting sensitive data, avoiding intellectual property rights violations, and minimizing inaccuracies. Cohere takes these and other risks seriously given its sole focus on enterprise AI solutions. While a deep dive into the state of AI security is warranted, here Cohere will briefly share the mitigations that exist for these risks today.

For starters, Cohere models can be deployed on premises, ensuring that all data, including private client data, stays within the organization’s walls (Cohere also takes additional measures). And Cohere has long been committed to protecting customer data, privacy, and safety to help enterprises use this technology with peace of mind. As part of this, Cohere provides companies with additional protection in the area of intellectual property (IP), with indemnification against infringement claims. Finally, LLM systems like retrieval-augmented generation with inline citations help reduce inaccuracies and provide a method to audit results.

As with any technology, the adoption of LLMs in financial services is not without risks, but these can be managed with careful planning and mitigation strategies. The experience of integrating cloud technology into the financial sector provides a valuable precedent, showing that with the right approach, firms can successfully navigate these challenges and reap the benefits of new technologies.

Financial services firms have historically been among sectors that lead the way in leveraging innovative technologies effectively, and LLMs offer another opportunity — perhaps one of the most compelling in history — for them to do so. With sound implementation strategies and the right partners, these firms can mitigate risks and unlock substantial productivity gains, driving the industry forward once again.


Source: Cohere
