Amidst the dizzying set of launches, demos, and new startups around Generative AI, it is worth reflecting on where we are seeing the broadest use cases in the enterprise today and on the early lessons we are all taking away as we put the technology into play.
Mainstream enterprise use cases for Generative AI are emerging
We are seeing three enterprise use cases emerge into the mainstream, amidst a great deal of dialog and discussion in boardrooms across the corporate world:
First, in customer support, generative AI – including GPT-3 and other large language models – is transforming conversational chatbots into assistants that feel natural, are more accurate, and are better able to sense and react to tone and emotion. As a result, conversational AI in product support chatbots is one of the first enterprise use cases we see in the industry. These chatbots can search and query existing internal information and communicate in a human-like manner, answering queries and resolving common issues for customers. For companies already using some form of conversational AI, GPT improves response quality and customer satisfaction. For companies still running manual call centers, GPT becomes an attractive path to a more responsive, always-on, and more efficient operation.
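The retrieve-then-answer pattern behind such support chatbots can be sketched in a few lines. This is a minimal illustration, not a production design: the knowledge base, the keyword-overlap retrieval, and the `compose_answer` function are all hypothetical stand-ins – in a real deployment, `compose_answer` would be a call to a hosted LLM, which this sketch deliberately does not make.

```python
import re

# Hypothetical internal knowledge base mapping topics to support answers.
KNOWLEDGE_BASE = {
    "reset password": "Go to Settings > Security and click 'Reset password'.",
    "cancel subscription": "Open Billing and choose 'Cancel plan'.",
    "update payment method": "Under Billing, add the new card and set it as default.",
}

def retrieve(query: str) -> str:
    """Return the entry whose topic shares the most words with the query.

    A real system would use embedding-based semantic search instead of
    this simple keyword overlap.
    """
    q_words = set(re.findall(r"[a-z]+", query.lower()))
    best_key = max(KNOWLEDGE_BASE, key=lambda k: len(q_words & set(k.split())))
    return KNOWLEDGE_BASE[best_key]

def compose_answer(query: str, context: str) -> str:
    """Stand-in for the LLM call that phrases the retrieved context
    as a natural, human-like reply."""
    return f"Happy to help! {context} Let us know if anything is unclear."

def answer(query: str) -> str:
    return compose_answer(query, retrieve(query))

print(answer("How do I reset my password?"))
```

The key design point is that the model only phrases the answer; the facts come from the company's own internal information, which is what keeps responses grounded in current product documentation.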
Second, around business insights, one of the longest-standing challenges in data science has been the separation of the business user from the data scientist. The former best understands the business nuance and the questions to be answered, but only the latter can actually write the code to answer them. Generative AI now allows business users to ask questions in natural language. The AI converts these into SQL queries, runs them against internal databases, and returns the answer as a structured narrative – all within minutes. The advantage here isn't just efficiency; it is the speed of decision-making and the ability for business users to interrogate the data directly and interactively.
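The question-to-SQL-to-narrative pipeline can be sketched end to end. This is a toy illustration under stated assumptions: `translate_to_sql` is a hard-coded stand-in for the LLM translation step (a real system would prompt a model with the database schema and the user's question), and the "internal database" is an in-memory SQLite table invented for the example.

```python
import sqlite3

def translate_to_sql(question: str) -> str:
    """Stand-in for the LLM step that converts a question to SQL.

    A real implementation would send the schema plus the natural-language
    question to a model and return the generated query.
    """
    if "revenue by region" in question.lower():
        return "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
    raise ValueError("question not understood")

# Hypothetical internal sales table, held in memory for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany("INSERT INTO sales VALUES (?, ?)",
                 [("East", 120.0), ("West", 80.0), ("East", 30.0)])

sql = translate_to_sql("What is our revenue by region?")
rows = conn.execute(sql).fetchall()

# Return the result as a short narrative for the business user.
narrative = "; ".join(f"{region}: {total:.0f}" for region, total in rows)
print(narrative)  # East: 150; West: 80
```

In production, the generated SQL should run under a read-only database role and be validated before execution, since model-generated queries cannot be trusted blindly.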
Third, in programming automation, large language models are highly accurate across many languages – including programming languages. Software developers report cutting the time to write code and associated documentation by almost 50%. For example, Microsoft Power Automate – a tool for robotic process automation – can now be programmed in natural language, making task and workflow automation more intuitive and user-friendly. Not only is this more efficient than involving large teams of programmers and testers, it also reduces the time and iteration needed to get automation up and running.
Generative AI brings new sets of challenges to the enterprise
As generative AI gains momentum, there are several challenges we are finding enterprises need to keep an eye on, foremost among them:
Like any emerging technology, one of the largest challenges in executing Generative AI today is its relative immaturity. While Generative AI is great for personal experimentation with chatbots, mainstream enterprise applications are still early. Organizations deploying it must do much of the heavy lifting themselves: experimenting to find the best use cases, sifting through an ever-growing and confusing list of options (such as choosing between OpenAI's ChatGPT service and Microsoft's Azure OpenAI Service), and integrating it into numerous application workflows and business processes. The upshot is that much of this burden will fade as the technology matures and application providers race to incorporate it into their core offerings in an already-integrated fashion.
Second, one of the main pitfalls of Generative AI is the possibility of incorrect but convincing responses. Precisely because GPT is so fluent, there is a sizable risk that the responses it delivers sound right but are factually wrong. This can be a non-starter in industries where accuracy is critical, such as healthcare or financial services. Organizations must carefully choose the right application areas and then build the governance and oversight to mitigate this risk.
Third, companies need to pay attention to setting and managing corporate guidelines: data privacy and the confidentiality of protected corporate data are key to success. Defining and setting appropriate corporate guidelines should therefore be a first step. Beyond the risk of losing confidential, personally identifiable, or otherwise protected data, training publicly available language models on proprietary data carries an additional risk: it can lead to inadvertent loss of intellectual property, especially when results based on that training become available to competitors. Robust policies and thoughtful frameworks are hard to craft because they must balance the need for innovation on the one hand against the risks of Generative AI on the other.
Finally, finding the right balance between over-rotating toward a hyped-up technology and focusing on the highest-returning initiatives can be challenging. Organizations need to ensure that they allocate appropriate capital and resources to the most pressing initiatives. On the other hand, organizations that sit out too long, waiting for the technology to mature, risk missing the mainstreaming of AI in the industry, falling behind technologies that can meaningfully disrupt their business, and eroding their durable competitive advantages.
Best strategies for success today and in the long run
As organizations look to leverage Generative AI to drive innovation and growth, a few strategies can ensure success today and over the long term:
Through all the innovation on the horizon and organizations' experimentation with Generative AI, corporations must define and publish appropriate-use rules and privacy/confidentiality guidelines. This clarity of approach accelerates innovation while protecting broader corporate interests, and it provides a more mature and stable glide path for bringing Generative AI models into the mainstream of the enterprise.
Enterprises should set up a small, focused group charged with experimenting with Generative AI and reengineering core business processes. This group should report to the highest levels in the organization and be tasked with figuring out how to disrupt current processes and business models. Because the technology has the potential to upend existing ways of working, it requires sharp focus and clear sponsorship; making it someone's side project or hobby rarely delivers clear business results.
Finally, it is essential to continually evaluate emerging solutions in the Generative AI ecosystem. Many LLMs are already available today, and more are on the way, each with its own strengths and weaknesses. New commercially available models will include ones trained on specific industry domains or offering better enterprise-grade security, making them excellent choices. Similarly, as enterprise applications integrate Generative AI into their core, choosing the best way to adopt those capabilities requires thoughtful consideration. And as always, change management to embrace new ways of working is critical to realizing the full value of any technological change.