What pricing metric for ChatGPT Plus?

Steven Forth is a Managing Partner at Ibbaka (currently on leave). See his Skill Profile on Ibbaka Talio.

OpenAI has announced a paid version of its compelling ChatGPT application. ChatGPT has attracted enormous attention, and users, since its launch in November 2022. It reached more than 100 million users in two months, faster than any other app in history.

With all that use have come scaling challenges. The app is often unavailable or unresponsive. This is a big problem for people who want to make it part of their workflow. There has been a lot of demand for more reliable access, even if people need to pay for it. OpenAI responded by announcing ChatGPT Plus on February 1, 2023. ChatGPT Plus is priced at US$20 per user per month. Paying users will get …

  • General access to ChatGPT, even during peak times

  • Faster response times

  • Priority access to new features and improvements

How did OpenAI come up with this price?

In January, people on the ChatGPT Discord server noticed the survey form shown below.

People in pricing recognized this as a standard Van Westendorp survey, one of the most common ways to get insight into price sensitivity and willingness to pay. I filled it out myself; my price points were …

  • Too low: $5 per month, I would question the quality

  • Starting to get expensive: $50 per month, I would hesitate before buying

  • A bargain: $10 per month, I would buy without hesitation.

Given that OpenAI ended up at a price of $20 per month, I think I am within the normal range.

Right now, ChatGPT Plus is only available to people in the US (I am in Canada), so I am on the waiting list but do not have access. I will buy at $20 per month; I would have bought at $30 or $40 per month, and maybe at $50 per month. I might have been willing to pay even more under a different pricing metric (more on this below).

Van Westendorp studies are meant to generate a graph like the one shown below. There are a number of survey tools, including Conjoint.ly, Momentive and Qualtrics, that offer preconfigured Van Westendorp studies. They are quite easy to run and to analyse, and are quite popular among some pricing consultants.

All of the responses get synthesized to determine a range of acceptable prices.
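The synthesis step can be sketched in a few lines of code. This is a minimal illustration, not any survey tool's actual method: the five responses are invented, and on a discrete price grid the curve crossing is approximated by the smallest gap between curves.

```python
# Minimal sketch of Van Westendorp synthesis with invented survey data.
responses = [
    # (too_cheap, bargain, getting_expensive, too_expensive) in $/month
    (5, 10, 20, 30),
    (15, 25, 40, 60),
    (8, 12, 18, 25),
    (20, 30, 50, 80),
    (10, 15, 25, 35),
]

# Candidate price grid: every price any respondent mentioned.
prices = sorted({p for r in responses for p in r})

def frac(pred):
    """Share of respondents satisfying a predicate."""
    return sum(1 for r in responses if pred(r)) / len(responses)

def too_cheap(p):
    """Share who would find price p suspiciously cheap (falling curve)."""
    return frac(lambda r: r[0] >= p)

def too_expensive(p):
    """Share who would find price p prohibitively expensive (rising curve)."""
    return frac(lambda r: r[3] <= p)

# The Optimal Price Point (OPP) is where the "too cheap" and "too expensive"
# curves cross; on a discrete grid, take the price with the smallest gap.
opp = min(prices, key=lambda p: abs(too_cheap(p) - too_expensive(p)))
print(f"Optimal price point: ${opp}/month")
```

The other standard intersections (indifference price point, marginal cheapness and expensiveness) are computed the same way from the "bargain" and "getting expensive" answers.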

Problems with the Van Westendorp approach to pricing research

There are a number of known problems with these surveys. Asking people how much they would pay for something, when they do not actually have to pay for it, invites them to game the process. Some people will give a lower price than they would actually pay in order to drive the price lower. Van Westendorp surveys tend to underestimate willingness to pay.

A bigger concern is that these surveys are not realistic, as they do not present alternatives. In the real world, most decisions are a choice between alternatives, and Van Westendorp does a poor job of representing them. The alternative approach is known generically as a conjoint study, where people are asked to choose between a number of alternatives with price being only one of the attributes (for ChatGPT, other attributes could be a service level agreement with an access guarantee, the ability to save and organize prompts, the ability to share prompts, and so on). Conjoint studies give deeper and more actionable insights than Van Westendorp, which is a blunt instrument.
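To make the contrast concrete, here is a toy version of what a conjoint study estimates: part-worth utilities for each attribute and a price coefficient, which together predict choice shares between bundles. All numbers below are invented for illustration; a real study would estimate them from respondents' choices.

```python
import math

# Invented part-worth utilities for hypothetical ChatGPT Plus attributes.
part_worths = {
    "sla_guarantee": 1.2,   # guaranteed access during peak times
    "save_prompts": 0.6,    # ability to save and organize prompts
    "share_prompts": 0.4,   # ability to share prompts
}
price_coefficient = -0.08   # utility lost per dollar of monthly price (assumed)

def utility(features, price):
    """Total utility of a bundle: attribute part-worths plus price penalty."""
    return sum(part_worths[f] for f in features) + price_coefficient * price

def choice_shares(alternatives):
    """Predicted choice shares via a multinomial logit model.

    alternatives: {name: (feature_list, monthly_price)}
    """
    u = {name: utility(f, p) for name, (f, p) in alternatives.items()}
    denom = sum(math.exp(v) for v in u.values())
    return {name: math.exp(v) / denom for name, v in u.items()}

# Two hypothetical plans: which would people choose?
shares = choice_shares({
    "basic_20":   (["sla_guarantee"], 20),
    "premium_35": (["sla_guarantee", "save_prompts", "share_prompts"], 35),
})
```

Unlike a Van Westendorp survey, this kind of model says not just what price is acceptable, but which attributes justify a higher one.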

Another issue is that Van Westendorp attempts to measure willingness to pay (WTP) but WTP is the outcome of effective value communication or delivery. This is especially important when introducing a new solution where people do not understand the value. Before you can price a solution you have to be able to say how it delivers value. The price has to track value. The pricing metric needs to track the value metric.

Value Metric: The unit of consumption by which a user gets value

Pricing Metric: The unit of consumption for which a buyer pays

The biggest problem with what OpenAI has done is different, though: it has jumped to a per-user pricing metric. Is per user per month the best pricing metric?

What is the best pricing metric for ChatGPT?

ChatGPT is not the only service offered by OpenAI. It sells access to its models in a variety of ways.

More to the point, OpenAI also provides the very popular DALL·E image generation solution and charges by the number of images generated.

One can embed DALL·E in an application as well. There, the price per image depends on the resolution. This is getting close to a value-based pricing metric.

Note the fundamental difference between how OpenAI is charging for DALL·E and how it is charging for ChatGPT.

For DALL·E it is charging for outputs; for ChatGPT it is charging for access, or more accurately, reliable access. Looking at Ibbaka's AI pricing model, we can see the difference.

Choice of pricing metric is the most important pricing choice; it matters much more than setting the price level. See The Three Key Pricing Questions.

We asked the experts at the Professional Pricing Society LinkedIn Group what pricing metric they would recommend for ChatGPT.

There were some good comments.

“I would start with a flat monthly fee, then measure usage stats across paying user segments, then answer the question again...”

“A monthly fee makes sense as the use cases are not yet clear, it is too early to settle on a pricing metric”

“Pricing based on meterage, perhaps based on a combination of tokens and prompts. Fee per user sounds nice however it doesn't account for the vastly different range of use from one user to another and would likely underprice for daily/corporate type use or overprice for more retail/casual user”

I answered ‘other’ to my own poll. My comment was as follows …

Number of tokens in the output.

I chose this because it meters use rather than access, and I think use is at least a small step closer to value. Tokens are how OpenAI (indeed, all large language models, or LLMs) processes input and measures output. OpenAI already prices its language models on tokens (read the fine print in the language model pricing).
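A token-based metric is easy to sketch. The per-token rate below is an assumption for illustration only (it is not ChatGPT's actual pricing), but it shows why the expert comment above worries about per-seat pricing: under a usage metric, casual and heavy users pay very different amounts.

```python
# Hypothetical token-metered billing. The rate is assumed, not OpenAI's.
PRICE_PER_1K_OUTPUT_TOKENS = 0.002  # USD, illustrative assumption

def monthly_bill(output_tokens_per_response, responses_per_month):
    """Bill a user on the number of tokens in the model's output."""
    total_tokens = output_tokens_per_response * responses_per_month
    return total_tokens / 1000 * PRICE_PER_1K_OUTPUT_TOKENS

# A casual user versus a heavy corporate user under the same metric:
casual = monthly_bill(300, 50)     # ~15K output tokens per month
heavy = monthly_bill(500, 2000)    # ~1M output tokens per month
print(f"casual: ${casual:.2f}/month, heavy: ${heavy:.2f}/month")
```

A flat $20 per seat overcharges the first user and undercharges the second; an output-token metric tracks use directly, and use is a step closer to value.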

I like that OpenAI is testing several different pricing models for the different ways it provides access to AI. It is still early days, and many different business and revenue models for AI are going to emerge.

The pricing models should be designed to gather as much information as possible. A per user per month model does not do this. OpenAI may want to experiment with a number of alternative pricing models to optimize its own learning.

Read other posts on pricing design