How to make the most of ChatGPT

We asked OpenAI’s ChatGPT model about the advice that Frontier Economics gives to its clients.

Its response was balanced - and reassuring: "Frontier Economics is a reputable firm and their advice is based on sound economic principles. However, you should always consider the specific advice and recommendations given in the context of your own situation and objectives."

That sounds fair. At Frontier we follow the latest developments in technologies that affect our clients: as good economists, we view this new generation of AIs as the latest in a long line of important technological step changes.

While large language models (LLMs) are not new, the integration of a later-generation GPT model into Microsoft’s search engine, Bing, has helped to raise awareness of their potential use cases and risks. OpenAI and its competitors have democratised access to AI by removing the barriers to entry posed by the investment, data and skillsets needed. In doing so they have created not only a wave of attention (and hype), but also new opportunities for innovation in economic models.

Since ChatGPT’s launch in late 2022, commentators have found it easy, and possibly dangerous, to overestimate the emergent capabilities of LLMs. But it’s also easy, and dangerous, to underestimate them. It’s clear that LLMs have the potential to disrupt industries, reshape value chains and change economic models.

The very democratisation of access to AI, and the promise of competition between publicly available models, means that their capabilities will be available both to your firm and to your competitors. Competitive advantage in any given sector may be gained and lost – but not because of access to the models. Rather, cognitive automation will differentially affect parts of your organisation and how it operates. Firms quick to take advantage may move ahead in their markets; others may fall behind.

So what is good advice at this time of transformation?  Well, it’s difficult to predict how the technologies will develop, but while humans are still in the loop I suggest three considerations for organisations today:

  1. Identify your use cases – and why. Nail down the use cases that are most relevant and economically important for your business and its economic model, given your objectives in your markets. Be clear what issue you are looking to solve or improve. Are you aiming to improve efficiency, take out cost, increase the robustness of your processes, reduce noise (unwanted variation) in decision making or improve customer service? We see use cases for each of these and many more.
  2. Test, learn - and innovate. To grasp how you can capture the benefits – in efficiency, creativity or accuracy - you need to test and learn (a simple illustration follows this list). Whichever approach you take - to build, buy or partner to access the capabilities you need - be sure that you are acquiring a better understanding of the risks and challenges as the tools evolve.
  3. Develop - and deepen - your policies for responsible AI. If you haven’t done so already, put in place your organisation’s policies on Responsible AI (RAI). Big Tech, the EU and, recently, the Italian privacy regulator and the UK government have all published guidelines - often very high level. As the AI said, it’s important to put the guidelines in the context of your own organisation and be more specific about what they mean for you.
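To make the "test and learn" point concrete, here is a minimal sketch of what a first pilot might look like: a handful of representative prompts sent to a hosted model, with the outputs logged for a human reviewer to score. It is illustrative only - the endpoint is OpenAI's public chat completions API, but the model name, prompts and output file are placeholders rather than recommendations.

```python
# A minimal "test and learn" harness: send a handful of representative
# prompts to a hosted LLM and log the answers for human review.
# Assumes an OpenAI API key in the OPENAI_API_KEY environment variable;
# the model name and prompts are placeholders for your own pilot use case.
import os
import json
import requests

API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]

pilot_prompts = [
    "Summarise this complaint in one sentence: 'My order arrived late and damaged.'",
    "Draft a polite reply asking the customer for their order number.",
]

results = []
for prompt in pilot_prompts:
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-3.5-turbo",  # swap in whichever model you are evaluating
            "messages": [{"role": "user", "content": prompt}],
            "temperature": 0.2,  # keep outputs fairly stable for comparison
        },
        timeout=30,
    )
    response.raise_for_status()
    answer = response.json()["choices"][0]["message"]["content"]
    results.append({"prompt": prompt, "answer": answer})

# Keep a record so a human reviewer can score accuracy, tone and risk.
with open("pilot_results.json", "w") as f:
    json.dump(results, f, indent=2)
```

The point is less the code than the loop around it: someone in your organisation reviewing the outputs, scoring them against your own criteria, and feeding that back into the next iteration.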

Law firms have moved quickly to test use cases that get right to the heart of their time-based economic models and ‘speed up their lawyers’. Firms have implemented AI legal assistants that can support some of the more standardised tasks. This makes them more productive, but it remains to be seen whether that will mean cheaper services for clients. If effective, AI-enabled lawyers may gain a short-term competitive advantage over firms that stick with more manual models. Indeed, firms that fail to move with the times may be doomed in the longer run, although AI is unlikely to replace lawyers altogether.

In the creative industries and social media, it’s now possible to combine LLMs with image-based, even video-based, generative AI tools to produce materials for tailored campaigns in a matter of minutes. These can be used as prompts for human creatives, but they also offer the possibility of democratising access to marketing campaigns for millions of small businesses. Copyright challenges will no doubt have to be navigated, but Dan’s Deli on the corner could soon have a cool, AI-designed re-branding, logo and social media campaign.  
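As a rough illustration of that workflow, the sketch below chains a text model (to draft a brand brief) with an image model (to draft a logo) via OpenAI's public APIs. The prompts, the image size and Dan's Deli itself are placeholders for the example; a real campaign would add human review, brand guidelines and the copyright checks noted above.

```python
# A hedged sketch of chaining a text model and an image model to draft
# campaign assets for a hypothetical small business ("Dan's Deli" from the
# text). Endpoints are OpenAI's public chat and image APIs; everything else
# (prompts, sizes, model names) is illustrative, not a recommendation.
import os
import requests

API_KEY = os.environ["OPENAI_API_KEY"]
HEADERS = {"Authorization": f"Bearer {API_KEY}"}

# Step 1: ask a text model for a short brand brief.
chat = requests.post(
    "https://api.openai.com/v1/chat/completions",
    headers=HEADERS,
    json={
        "model": "gpt-3.5-turbo",
        "messages": [{
            "role": "user",
            "content": "Write a one-paragraph brand brief and a one-line "
                       "logo idea for a family-run delicatessen called Dan's Deli.",
        }],
    },
    timeout=30,
).json()
print(chat["choices"][0]["message"]["content"])

# Step 2: pass a logo prompt to an image-generation model.
image = requests.post(
    "https://api.openai.com/v1/images/generations",
    headers=HEADERS,
    json={
        "prompt": "Minimal, warm logo for a family-run delicatessen called Dan's Deli",
        "n": 1,
        "size": "512x512",
    },
    timeout=60,
).json()
print(image["data"][0]["url"])  # link to the generated draft logo
```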

As the use of AI in financial services evolves, LLMs are likely to play an increasingly significant role in applications such as customer support and investment advice. Technological change has shaped and reshaped customer service down the years: from face-to-face agents and branches, to call centres onshore, offshore and back again, then online, mobile and chatbot-supported channels. Some robo-advisers already use Natural Language Processing (NLP) to analyse textual data and provide personalised investment recommendations. While these systems do not specifically use LLMs such as ChatGPT, they rely on similar underlying technologies. LLMs can now sense a customer’s intention on a call, or interpret the text they provide, more expertly. We therefore expect to see more testing, learning and deployment of AI-powered tools, given that they promise both to improve service and to lower costs. Plus ça change…
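To give a flavour of the customer-support use case, the sketch below asks an LLM to label the intent of an incoming message so that it can be routed to the right team. The intent labels, prompt wording and model choice are assumptions made for the example; only the public chat completions endpoint is taken as given.

```python
# An illustrative sketch of using an LLM to label the intent of a customer
# message in a financial-services support flow, so it can be routed to the
# right team. The intent labels and prompt wording are assumptions for the
# example; only the OpenAI chat completions endpoint is taken as given.
import os
import requests

INTENTS = ["balance_enquiry", "fraud_report", "investment_advice", "complaint", "other"]

def classify_intent(message: str) -> str:
    """Ask the model to pick one label from a fixed list for routing."""
    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "messages": [{
                "role": "user",
                "content": f"Classify the customer's intent as one of {INTENTS}. "
                           f"Reply with the label only.\n\nCustomer: {message}",
            }],
            "temperature": 0,  # keep the labelling as consistent as possible
        },
        timeout=30,
    )
    label = response.json()["choices"][0]["message"]["content"].strip()
    return label if label in INTENTS else "other"  # fall back if the model strays

print(classify_intent("I think someone has used my card in a shop I've never visited."))
```

Constraining the model to a fixed label set, and falling back to "other" when it strays, is one simple way of keeping a human-designed process in control of what the model does.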

Some tech leaders and academics have called for a pause in the training of AI systems more powerful than GPT-4. They argue that society needs time to reflect and regulators need time to catch up. The risks inherent in the current models are well documented, including the threat to privacy and the potential for bias and even hallucination (generative AIs such as ChatGPT have been shown to invent fake facts). However, the incentives to deploy the models appear significant, especially in competitive markets. RAI guidelines drawn up by law, financial services and media companies may not be as detailed or comprehensive as those from the tech giants, but they do exist. As AI becomes embedded in operating models, we will need to see deeper, more specific guidance for staff.

As for Frontier, we will be sure to follow sound economic principles to help our clients cut through the hype around generative AI - remembering, of course, that what works well for one client may not be the best advice for another, and that we need to consider your unique needs and context when formulating advice. In the meantime, we continue to reflect on how we ourselves can innovate in order to best serve our clients - in a responsible way.