What flavor of AI will be used in pricing?
We are in the middle of an AI renaissance, sparked by the enormous popularity of ChatGPT and its many applications. AI is not new to the pricing space. The classic heavy metal pricing applications, PROS, Vendavo and Zilliant, have been using advanced predictive analytics from their first solutions, which in the case of PROS goes all the way back to the mid-1980s. Their cloud competitor Pricefx also has some powerful approaches to optimizing prices based on a mashup of internal and external data.
At Ibbaka, we have been applying SNA (Social Network Analysis) techniques to find the underlying clusters in data that can shape the customer segmentation models informing our pricing design.
But there has been a sea change over the past year, driven by the scale and power of new approaches.
We reached out to the pricing community recently to ask …
Which approach to AI will be most relevant to pricing over the next three years?
Predictive Analytics - 76.3%
Large Language Models like GPT - 6.0%
Computational Game Theory - 14.0%
Other - 3.4%
As is usually the case, there were some interesting comments, and people did not always agree, especially on Large Language Models.
“It’s almost a no-brainer! Generative AI is very hot now, and everyone is trying to use it everywhere, but these tools have a different scope of application that is only indirectly related to pricing, if at all. Predictive analytics, on the other hand, provide us with price elasticity, demand curve, and other statistical models with numerous applications in pricing!”
“LLM’s don’t really help in pricing. However LLM’s will make designing pricing experiments and research really easy. I fed ChatGPT a hypothetical product and asked it to design a conjoint and it did it fairly well. Maybe tomorrow LLM will simulate buyer persona and answer this survey or any survey. So it’ll make research easier. I think LLM’s and generative AI doesn’t have direct application in pricing modelling per say. There are better models for pricing strategy I believe that are more tuned to pricing. Predictive analytics I think is already there.”
“I think LLMs will be tweaked to bring the benefits of its ability to create connections to data and describe better relationships between products and the market.”
Let’s put each of these three approaches in context and explore a few other interesting possibilities.
Predictive analytics and pricing
This is the established approach to using AI in pricing, and as the above poll suggests, most pricing people are comfortable with it. All of the major pricing software packages make use of this form of AI and are migrating their platforms to build their predictive models using machine learning generally and deep learning specifically. Deep learning is the approach to AI pioneered by researchers such as Geoffrey Hinton at the University of Toronto and Yoshua Bengio at the Université de Montréal.
Using predictive analytics for load balancing and price optimization is well established. The question to ask is how we can extend this approach. Some of the ideas being discussed put pricing in a wider context.
Predictive Engagement - can we predict the future engagement of users on software platforms? If we can, then we can use those predictions to make usage-based pricing more predictable, which would remove one of the common objections to this approach to pricing.
Renewal Probability and Price Optimization mashups - SaaS businesses are obsessed with renewals, as they underpin the business model and customer lifetime value. Customer satisfaction platforms look at usage, CSAT (Customer Satisfaction) and other data to predict the probability of renewal. Could pricing analytics be combined with user analytics to improve predictions of renewals? (A minimal sketch of this mashup follows the list below.)
Market Dynamics - there are complex interactions between price elasticity of demand and cross-price elasticity (the probability that a customer will defect to a competitor in the case of a price increase). Conventional pricing software has not been very good at modeling and predicting these interactions, which can be chaotic systems. It is possible that deep learning will do better, especially when used in Generative Adversarial Networks or GANs (see below).
See Understand your market's dynamics before you set your pricing strategy.
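To make the renewal mashup idea concrete, here is a minimal sketch that combines usage and pricing features in a single renewal-probability model. It assumes scikit-learn, and every feature name and all of the synthetic data are invented for illustration; a real implementation would pull these from customer success and pricing systems.

```python
# Minimal sketch: combine usage (engagement, CSAT) and pricing features
# in one renewal-probability model. All feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1_000

# Synthetic stand-ins for real customer data.
monthly_active_use = rng.uniform(0, 1, n)      # normalized usage
csat = rng.uniform(1, 5, n)                    # satisfaction score
discount_pct = rng.uniform(0, 0.3, n)          # negotiated discount
price_increase_pct = rng.uniform(0, 0.15, n)   # last renewal uplift

# Toy ground truth: usage and CSAT help renewal, price increases hurt it.
logit = 3 * monthly_active_use + 0.8 * csat - 6 * price_increase_pct - 2
renewed = (rng.uniform(0, 1, n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([monthly_active_use, csat, discount_pct, price_increase_pct])
X_train, X_test, y_train, y_test = train_test_split(X, renewed, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("Holdout accuracy:", model.score(X_test, y_test))

# The fitted model can then score renewal risk under a proposed price change
# before the change is rolled out.
```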
Computational game theory and pricing
One of the most exciting areas of AI is computational game theory or algorithmic game theory as it is also known.
Definition: an area at the intersection of game theory and computer science, with the objective of understanding and designing algorithms in strategic environments. See Computational Game Theory.
One of the galvanizing events in recent AI research came when DeepMind’s AlphaGo defeated human champion Lee Sedol in 2016. Computers had conquered chess earlier, as far back as 1997, when IBM’s Deep Blue defeated Garry Kasparov, but Go was thought to be a much harder problem. DeepMind (now part of Alphabet) trained AlphaGo on games played by human masters, but a follow-up program, AlphaGo Zero, was trained by playing games against itself, a self-play approach in the same adversarial spirit as a Generative Adversarial Network (see below). AlphaGo Zero is even more powerful and adaptive than the version trained on human data.
Some people took solace in the idea that chess and Go are games of perfect information. They believed that AIs would struggle to win at games like poker or contract bridge, where some information is hidden and psychology plays a big role. That confidence was misplaced.
In 2017, an AI named Libratus defeated a cadre of top poker players. It defeated them so badly that all four of the humans ended up down, with Libratus taking the entire pot.
The lead designer, Tuomas Sandholm, said that his team “designed the AI to be able to learn any game or situation in which incomplete information is available and "opponents" may be hiding information or even engaging in deception.” Does that sound like a lot of pricing situations to you?
There are still some challenges to using computational game theory for pricing, at least for the approach to pricing that Ibbaka advocates.
Positive sum games - the most effective pricing creates positive sum games in which buyers, sellers and even competitors can all win. This is still a hard problem for AIs built to find Nash equilibria.
Multiple players - many pricing situations involve more than two players, which is harder to solve than a two-player game (though this problem is being chipped away at, as the success of Libratus shows).
See Important future directions in computational game theory by Sam Ganzfried.
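To ground the Nash equilibrium point above, here is a minimal sketch of a two-player pricing game. The payoff numbers are invented for illustration; the code simply enumerates pure strategies and keeps the cells where neither firm can gain by deviating.

```python
# Minimal sketch: find pure-strategy Nash equilibria of a 2x2 pricing game.
# Payoffs are invented for illustration: each firm chooses High or Low price.
import numpy as np

strategies = ["High", "Low"]
# payoff_a[i, j] = firm A's profit when A plays strategy i and B plays j.
payoff_a = np.array([[10, 2],
                     [12, 4]])
payoff_b = np.array([[10, 12],
                     [2, 4]])

for i in range(2):
    for j in range(2):
        a_best = payoff_a[i, j] >= payoff_a[:, j].max()  # A cannot gain by deviating
        b_best = payoff_b[i, j] >= payoff_b[i, :].max()  # B cannot gain by deviating
        if a_best and b_best:
            print(f"Nash equilibrium: A={strategies[i]}, B={strategies[j]}")
```

In this toy game the only equilibrium is the mutual price cut (Low, Low), even though (High, High) would leave both firms better off. That is exactly the zero-sum trap that positive-sum pricing design tries to escape.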
Computational game theory is also being applied to evolutionary systems. See Steering evolution strategically. Pricing and packaging is a complex adaptive system embedded in several larger complex adaptive systems. Long term, computational game theory and computational evolution will be among the best ways to model pricing strategy and design. But the question we asked gave a three-year time horizon, and I think computational game theory is on more of a ten-year horizon: something to begin studying and exploring, but probably not the place for big bets at this time. I hope I am proven wrong.
Large language models (LLMs), content generation and pricing
Many people are sceptical about the application of LLMs like GPT-4 to pricing (see the comments above). There are many stories making the rounds of ChatGPT hallucinating or giving the wrong answer to technical questions.
Two thoughts on these objections.
Hallucinations are the tendency of LLMs to invent answers or facts that are not in their source materials. Humans do this too. I suspect it is essential to any creative production, and that if we want LLMs to contribute to creative work they will continue to hallucinate. This is actually one of the generative properties of language itself.
I am not impressed by people tricking ChatGPT into giving a wrong answer. Any tool can be badly used. We need to develop skills in the design and management of prompts. See What skills do I need to use ChatGPT?
Let’s take a step back. What is a Large Language Model?
One of the best explanations comes from Stephen Wolfram in What Is ChatGPT Doing … and Why Does It Work? LLMs are large (very large; GPT-4 is reported to have on the order of one trillion parameters) models built from text and other information on the Internet. GPT stands for Generative Pretrained Transformer, which is a good description of how it works. Generative in that it can generate new content. Pretrained in that the model must be built (trained) in advance, and once trained it is static. Transformer refers to the architecture, first described in the 2017 Google Research paper Attention Is All You Need. Given a set of inputs (prompts), an LLM generates a set of outputs. OpenAI prices both inputs and outputs. See How OpenAI prices inputs and outputs for GPT-4.
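Because inputs and outputs are priced separately, estimating the cost of a call is simple arithmetic. Here is a minimal sketch; the per-token rates are illustrative placeholders, not OpenAI's actual price list, which changes over time.

```python
# Minimal sketch: estimate the cost of one LLM call when input and output
# tokens are priced separately. Rates below are illustrative placeholders.
PRICE_PER_1K_INPUT = 0.03    # assumed $ per 1K prompt tokens
PRICE_PER_1K_OUTPUT = 0.06   # assumed $ per 1K completion tokens

def call_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the dollar cost of a single request."""
    return (input_tokens / 1000) * PRICE_PER_1K_INPUT + \
           (output_tokens / 1000) * PRICE_PER_1K_OUTPUT

# Example: a 1,500-token prompt that yields an 800-token answer.
print(f"${call_cost(1_500, 800):.4f}")  # $0.0930
```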
LLMs are language based, but language is much wider than we sometimes think. Software code is a kind of language. One way of thinking about mathematics is as a language. Logical reasoning is a kind of language. Value and pricing models, as sets of equations, are also made of language.
With the right set of prompts (the inputs to a model that generate the outputs), implemented sequentially and recursively, I believe that LLMs can generate the following (a sketch of such a chain follows the list):
Value propositions
Value drivers and value driver equations
Pricing models
Price levels
Price strategies in competitive situations (long term this will be done by applications built using computational game theory, but short term LLMs will provide a bridge solution)
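Here is a minimal sketch of such a sequential chain, where each output is repurposed as the next input. The llm() function is a hypothetical stand-in for whatever chat-completion API you use, and the prompt wording is purely illustrative.

```python
# Minimal sketch: a sequential prompt chain where each output feeds the next
# prompt. llm() is a hypothetical stand-in for a real chat-completion call.
def llm(prompt: str) -> str:
    """Placeholder for a real LLM call; wire this to your provider's API."""
    raise NotImplementedError

def run_chain(product_description: str) -> dict:
    results = {}
    results["value_proposition"] = llm(
        f"Write a one-paragraph value proposition for: {product_description}"
    )
    results["value_drivers"] = llm(
        "List the top 5 economic value drivers, each as an equation, for this "
        f"value proposition:\n{results['value_proposition']}"
    )
    results["pricing_model"] = llm(
        "Propose a pricing model (pricing metric, packages, fences) grounded "
        f"in these value drivers:\n{results['value_drivers']}"
    )
    return results
```

The order of operations matters here: the pricing model is only as good as the value drivers fed into it, which is why prompt management (below) becomes its own problem.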
To make this work will require a few things that are still being developed.
LLMs will need to be augmented with data from pricing applications, including value proposition and value driver libraries, pricing model libraries and the systems of equations used in pricing work (there is a lot of math under the covers in pricing). This means that new ways of representing these data structures, in forms digestible by the systems that build LLMs, may be needed. One way to do this will be with an open source LLM; LLaMA may be one place to explore this. (A minimal retrieval sketch follows this list.)
Prompt management systems will be needed. Value model development and pricing model design rely on many different inputs. Outputs are often repurposed as inputs. The order of operations helps determine the results. Software to manage prompts and to optimize them will be needed.
Evaluation and assessment are part of using an LLM. Assume the outputs have some errors or hallucinations and that a formal validation process is needed.
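As one illustration of the augmentation point above, here is a minimal retrieval sketch: value-driver library entries are embedded, the most relevant ones are retrieved, and they are stuffed into the prompt so the LLM is grounded in the firm's own data. It assumes the sentence-transformers package; the library entries and model name are illustrative.

```python
# Minimal sketch: retrieve the most relevant value-driver library entries and
# prepend them to a prompt, grounding the LLM in the firm's own data.
# Library entries are invented; the embedding model is one public example.
import numpy as np
from sentence_transformers import SentenceTransformer

library = [
    "Value driver: reduce unplanned downtime (hours saved x cost per hour)",
    "Value driver: lower inventory carrying cost (units x carrying rate)",
    "Value driver: faster onboarding (days saved x daily revenue per seat)",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
lib_vecs = model.encode(library, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k library entries most similar to the query."""
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = lib_vecs @ q  # cosine similarity on normalized vectors
    return [library[i] for i in np.argsort(scores)[::-1][:k]]

context = "\n".join(retrieve("pricing a predictive maintenance platform"))
prompt = f"Using these value drivers:\n{context}\n\nDraft a value model."
```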
Ibbaka has its work cut out as it moves to leverage LLMs in its pricing and customer value management platform Ibbaka Valio.
Generative Adversarial Networks (GANs) and pricing
Not covered in the poll were Generative Adversarial Networks or GANs. GANs pit two AIs, a generator and a discriminator, against each other so that each improves faster. The self-play training of AlphaGo Zero mentioned above is adversarial in the same spirit, though it is not a GAN in the strict sense.
This architecture will become standard in training the AIs used in pricing. GANs are already being used to train algorithmic trading programs. Alexandre Gonfalonieri covers some business uses in Integration of Generative Adversarial Networks in Business Models.
One can think of many ways to use this AI architecture in pricing applications.
Have pricing strategies compete over time and test their impact on market share, revenues and profitability (a minimal sketch follows below).
Generate different pricing metrics and combinations of pricing metrics and test for performance against different market structures (and then feed the results back in so that the market structure evolves).
Model market dynamics to understand how price elasticity of demand and cross price elasticity interact.
GANs will be combined with other AI approaches to accelerate and automate development.
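Here is a minimal sketch of the first idea in the list above: two pricing strategies competing over time through self-play. It is adversarial in spirit rather than a literal GAN, and the linear demand model and all parameters are invented for illustration.

```python
# Minimal sketch of adversarial self-play for pricing: two sellers repeatedly
# adjust prices against each other with a simple hill-climbing rule.
# The linear demand model and all parameters are invented for illustration.
import random

def profit(own_price: float, rival_price: float) -> float:
    # Toy demand: falls with own price, rises with the rival's price.
    demand = max(0.0, 100 - 2.0 * own_price + 1.0 * rival_price)
    unit_cost = 10.0
    return (own_price - unit_cost) * demand

prices = [30.0, 30.0]
random.seed(0)
for _ in range(2_000):
    for i in (0, 1):
        rival = prices[1 - i]
        candidate = prices[i] + random.uniform(-1, 1)  # propose a small move
        if profit(candidate, rival) > profit(prices[i], rival):
            prices[i] = candidate  # keep moves that improve against the rival

print([round(p, 1) for p in prices])  # prices drift toward the toy equilibrium
```

In this toy market the two prices drift toward the competitive equilibrium. Swapping in a richer demand model, or replacing the hill-climbing rule with learned policies, is where the GAN-style architecture comes into play.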
Social Network Analysis (SNA) and pricing
Social Network Analysis (SNA) is a branch of network science that studies how the nodes of a graph are connected and clustered. Ibbaka has been applying it for several years to find clusters in the data we use for market segmentation, value segmentation and pricing segmentation. Segmentation is the foundation of good pricing. The goal is to identify clusters of users that get value in different ways and then target the segments where you can deliver the highest differentiated value at a reasonable cost.
One of the keys to AI is developing ways to represent data that are amenable to AI. Graphs are one way to do this.
One can represent the many different pieces of data used in value and pricing analysis as a graph (a set of nodes and edges connecting them) and then use network science techniques to understand how value and price are connected, or how customer or user characteristics and behaviors shape willingness to pay or probability of upgrade or renewal. SNA is just one of many relevant approaches. One could use percolation to understand how price changes will be accepted in a market, measures of network centrality to understand how to connect value paths, and so on.
Thamindu Dilshan Jayawickrama gives a good overview of community detection in Community Detection Algorithms.
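For a flavor of what community detection looks like in practice, here is a minimal sketch using networkx. The customer graph is invented for illustration; in real work the edges would come from shared behaviors, shared value drivers or co-purchase data.

```python
# Minimal sketch: detect communities (candidate segments) in a graph of
# customers linked by shared behaviors. The graph is invented for illustration.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

G = nx.Graph()
G.add_edges_from([
    ("acme", "globex"), ("acme", "initech"), ("globex", "initech"),      # cluster 1
    ("umbrella", "stark"), ("stark", "wayne"), ("umbrella", "wayne"),    # cluster 2
    ("initech", "stark"),  # a weak bridge between the two clusters
])

for idx, community in enumerate(greedy_modularity_communities(G), start=1):
    print(f"Candidate segment {idx}: {sorted(community)}")
```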
Conclusions: Leveraging AI in Value Modeling and Pricing Design
Over the next three years, AI will completely transform how value models are developed, how pricing models are designed, how the impact of pricing actions is assessed and how pricing strategies are implemented.
Predictive analytics alone will not be enough to do this. Other approaches, including Large Language Models and Computational Game Theory, will be required. New ways of representing and connecting data and models will open up new capabilities and drive innovation.
Ideally, this transformation in how we price will move us away from pricing as a zero sum game and move us to positive sum games where buyers, sellers and competitors all benefit from better pricing design and execution. See How to negotiate price (getting to positive sum pricing).