How New AI Functionality is Getting Priced: Q&A Follow Up 1

Steven Forth is CEO of Ibbaka. See his Skill Profile on Ibbaka Talio.

On Thursday, May 23rd, Mark Stiving and Steven Forth led a webinar “How New AI Functionality is Getting Priced.” The webinar attracted a lot of interest, with about 500 people across four continents registering.

You can see the webinar here.

There were a lot of questions asked in the chat. This is the first in a series of blog posts where we respond.

How New AI Functionality is Getting Priced: Q&A Follow Up 1 (This Post)

How New AI Functionality is Getting Priced: Q&A Follow Up 2

How New AI Functionality is Getting Priced: Q&A Follow Up 3

How are you using AI? Do you also mean an AI-based feature or product that uses other ML architectures besides transformer-based models?

The focus of our research has been on generative AI applications built on Large Language Models (LLMs) that use the transformer architecture, including patterns such as Retrieval Augmented Generation (RAG), as well as other generative architectures such as Generative Adversarial Networks (GANs) and other emerging approaches.
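For readers who have not worked with RAG, the minimal sketch below shows the shape of the pattern: retrieve relevant passages first, then ask the model to answer using only those passages. The search_index and call_llm functions are illustrative stubs we invented for a vector store lookup and an LLM completion call; they are not any particular vendor's API.

```python
# Minimal sketch of a Retrieval Augmented Generation (RAG) request.
# search_index() and call_llm() are illustrative stubs, not a real product's API.

def search_index(question: str, limit: int) -> list[str]:
    # Stub: a real system would run a vector similarity search over a document store.
    corpus = [
        "Passage A: background on the customer's pricing model.",
        "Passage B: notes on AI serving costs.",
        "Passage C: unrelated material.",
    ]
    return corpus[:limit]

def call_llm(prompt: str) -> str:
    # Stub: a real system would send the prompt to a hosted or local LLM.
    return f"(model answer grounded in {prompt.count('Passage')} retrieved passages)"

def answer(question: str, top_k: int = 2) -> str:
    """Retrieve supporting passages, then generate an answer grounded in them."""
    passages = search_index(question, limit=top_k)
    prompt = (
        "Answer the question using only the passages below.\n\n"
        + "\n".join(passages)
        + f"\n\nQuestion: {question}"
    )
    return call_llm(prompt)

print(answer("How is the new AI feature priced?"))
```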

That said, there are many other important ways to use deep learning to enhance value, and not all AI is or will be built using deep learning. One important example of the former is Google DeepMind’s AlphaFold for predicting protein structures. The underlying approach can be applied to many other domains where combinatorial explosion is combined with constraints.

The most impactful solutions may well be those that combine more than one approach to AI. An early example is the way Wolfram’s computational tools can be connected to generative AIs.

Do you think "democratized" systems should be free or low-cost?

This is a very big question, one of the key questions facing all of us over the next five years, and perhaps too big to be answered here. Personally, I (Steven) believe that having full access to generative AIs will be as necessary as access to electricity, and that learning how to work with them is as important as learning how to read. The best open source LLMs should perform as well as, or better than, any of the closed models, and governments may need to subsidize access.

One of the challenges with AI is that so much can be done at such a low price relative to the cost of inputs. Value-based pricing could reasonably price AI products in line with their value relative to human costs. However, as AI becomes ubiquitous, do you expect pricing to drop closer to the cost of inputs?

I don’t think this is a full representation of the costs of developing and operating generative AI-based applications.

  1. Development costs may go down, but much more functionality will be needed to be competitive. This is an example of a ‘Red Queen game.’ As the Red Queen in Lewis Carroll’s Through the Looking-Glass says, ‘You have to run as fast as you can to stay in the same place. If you want to get somewhere else you must run twice as fast as that.’ In economics, a Red Queen game is one in which all market participants are innovating and the baseline, or table stakes, is constantly improving.

  2. Customer support and customer success costs may drop quickly. This is one of the most obvious and most advanced applications of AI (we talked about Intercom and Fin AI in the webinar). On the other hand, customer service and customer success teams may be needed at the strategic level to make sure customers are getting the full benefit from AI-based innovations.

  3. Compute costs for generative AI applications are much higher than for conventional database applications. A prompt is not a SQL query and takes far more computing resources to process. Generative AI applications will have much higher operating costs, and this will have a big impact on everything from valuation metrics to the ability to offer free versions of software.
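To make the gap in point 3 concrete, here is a rough back-of-envelope sketch. Every number in it is an assumption chosen only to illustrate the order of magnitude; actual token prices, request sizes, and database costs vary widely.

```python
# Illustrative per-request cost comparison: LLM inference vs. a conventional DB query.
# All figures are assumptions for the sketch, not benchmarks or published prices.

LLM_PRICE_PER_1K_INPUT_TOKENS = 0.01    # assumed $ per 1,000 prompt tokens
LLM_PRICE_PER_1K_OUTPUT_TOKENS = 0.03   # assumed $ per 1,000 generated tokens
DB_COST_PER_QUERY = 0.000002            # assumed $ per conventional database lookup

def llm_request_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate the serving cost of a single generative AI request."""
    return (input_tokens / 1000) * LLM_PRICE_PER_1K_INPUT_TOKENS + \
           (output_tokens / 1000) * LLM_PRICE_PER_1K_OUTPUT_TOKENS

prompt_cost = llm_request_cost(input_tokens=1500, output_tokens=500)
print(f"One generative AI request: ${prompt_cost:.5f}")
print(f"One database query:        ${DB_COST_PER_QUERY:.7f}")
print(f"Ratio:                     about {prompt_cost / DB_COST_PER_QUERY:,.0f}x")
```

On assumptions like these, a single generative request costs thousands of times more to serve than a simple database read, which is why free tiers and gross margins need to be rethought.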

Conventional economic theory holds that prices in markets with perfect competition will fall until they are close to unit costs. Will B2B SaaS markets become closer to perfect competition (symmetric information, perfect substitutes)?

Will AI bring us closer to symmetric information and perfect substitutes? I am skeptical.
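For reference, the textbook benchmark behind the question is that under perfect competition, entry and exit drive price down to marginal cost and economic profit toward zero:

```latex
% Long-run equilibrium under perfect competition (standard textbook result)
P = MC = \min AC
\quad\Longrightarrow\quad
\pi = (P - AC)\,Q \to 0
```

Differentiated value, switching costs, and information asymmetries are exactly what keep real B2B SaaS markets away from this benchmark, which is the point about pricing power made below.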

The other part of this question is what will happen to the cost of generative AI applications, and to wages, if (when) human and AI performance converge. Given that current tax regimes favour AI, there will need to be big changes to taxation if there is to be a level playing field.

Will pricing for enhanced AI capabilities eventually be pressured downward (bundled with the product and incorporated into the product pricing tiers, rather than priced separately) with general market deployment?

I think what will happen is a bit different. Successful SaaS companies will be forced to re-platform so that AI is the platform upon which other applications are built. I think this is importantly different from AI functionality becoming part of existing functionality. (Mark likely disagrees with me on this, though; we will dig into it in one of the Impact Pricing Podcasts.)

Pricing power will come from being able to offer differentiated value. That will not change.

Will generative AI make it more or less difficult to create differentiated value?

That is the big question. The safe course is to assume that it will still be possible to create differentiated value, to do everything in one’s power to build that differentiated functionality, and then to find the pricing model that lets you capture your fair share of that value. (Karen Chiang and Rashaqa Rahman will be talking about this in their June 5 PeakSpan Master Class, “Maximizing Value: The Art of Measurement.”)

How can you verify that a synthetic user can provide a valid input?

This question came up in the context of Synthetic Users and their claims about:

  • User research without the users

  • User research without the recruitment

  • User research without the scheduling

  • User research without the cost

For any of these claims to be true, the synthetic users need to provide responses that give insight into actual users (at least until synthetic users become a class of users and buyers).

The only way to test this is with double-blind studies. Several companies are carrying out such studies (including Ibbaka), but the results are not yet public. I hope to see an independent body take on such work and share the results. We will let readers of this blog know what Ibbaka finds out.
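As an illustration of what one piece of such a study could look like, the sketch below compares scores given to real-user and synthetic-user responses by an analyst who does not know which is which, then uses a permutation test to ask whether the two groups can be told apart. The scores, sample sizes, and test are invented assumptions for illustration; an actual study would be designed and run far more rigorously, ideally by an independent party.

```python
import random
import statistics

# Illustrative only: invented scores a blinded analyst might give to interview
# answers from real users and from synthetic users on the same questions.
real_scores      = [4.1, 3.8, 4.4, 3.9, 4.2, 4.0, 3.7, 4.3]
synthetic_scores = [3.9, 4.0, 3.6, 4.1, 3.8, 3.7, 4.2, 3.5]

observed_gap = statistics.mean(real_scores) - statistics.mean(synthetic_scores)

# Permutation test: if the source labels carry no information, shuffling them
# should produce gaps at least this large fairly often.
pooled = real_scores + synthetic_scores
n_real = len(real_scores)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    gap = statistics.mean(pooled[:n_real]) - statistics.mean(pooled[n_real:])
    if abs(gap) >= abs(observed_gap):
        extreme += 1

p_value = extreme / trials
print(f"Observed gap in mean scores: {observed_gap:.2f}")
print(f"Permutation p-value: {p_value:.3f}")
# A large p-value means the blinded analyst could not reliably tell the two
# groups apart on this measure; that is necessary, but not sufficient, evidence
# that synthetic responses are a useful stand-in for real users.
```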

Read other posts on pricing AI