How New AI Functionality is Getting Priced: Q&A Follow Up 2

Steven Forth is a Managing Partner at Ibbaka. See his Skill Profile on Ibbaka Talio.

On Thursday May 23 Mark Stiving and Steven Forth led a webinar “How New AI Functionality is Getting Priced.” The webinar attracted a lot of interest, with about 500 people across four continents registering.

You can see the webinar here.

There were a lot of questions asked in the chat and Q&A. This is the second in a series of blog posts where we respond.

How New AI Functionality is Getting Priced: Q&A Follow Up 1

How New AI Functionality is Getting Priced: Q&A Follow Up 2 (This Post)

How New AI Functionality is Getting Priced: Q&A Follow Up 3

Maturing to performance-based pricing has been challenging for some organizations. In terms of a pricing revolution, how is AI expediting or fueling the ability to get to performance-based pricing?

I think there are three barriers to the adoption of outcome-based pricing.

  • Attribution - what led to the outcome, and who gets to take credit for it

  • Predictability - outcome-based pricing can make revenue (for the seller) and costs (for the buyer) hard to predict

  • Accountability - who is responsible for taking the actions that lead to the outcome, and what happens if agreed-on actions are not taken

To what extent will AI help address these barriers?

Causal models will be needed to manage attribution. This is already well advanced in healthcare and HEOR (Health Economics and Outcomes Research). Doing this at scale for B2B SaaS will require new toolsets. One that I am tracking is PyWhy, an ecosystem of open-source Python libraries (including DoWhy) for causal inference and causal modeling.
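Libraries like DoWhy automate this, but the core idea behind causal attribution, adjusting for a confounder before crediting a feature with an outcome, can be sketched in plain Python. The scenario and all numbers below are invented; the backdoor adjustment itself is a standard technique.

```python
import random

random.seed(0)

# Hypothetical synthetic data: did the account adopt the AI feature
# (treatment), is it a large account (confounder), and did it renew at
# a higher tier (outcome)? The true lift from adoption is set to 0.15.
rows = []
for _ in range(10_000):
    large = random.random() < 0.5                        # confounder
    adopted = random.random() < (0.7 if large else 0.3)  # large accounts adopt more
    p_outcome = 0.2 + 0.2 * large + 0.15 * adopted
    rows.append((large, adopted, random.random() < p_outcome))

def rate(rs):
    return sum(outcome for _, _, outcome in rs) / len(rs)

treated = [r for r in rows if r[1]]
untreated = [r for r in rows if not r[1]]

# Naive attribution over-credits the feature, because large accounts
# both adopt more and renew more.
naive_lift = rate(treated) - rate(untreated)

# Backdoor adjustment: compare within strata of the confounder, then
# average the per-stratum lifts weighted by stratum size.
adjusted_lift = 0.0
for stratum in (True, False):
    t = [r for r in treated if r[0] == stratum]
    u = [r for r in untreated if r[0] == stratum]
    weight = sum(1 for r in rows if r[0] == stratum) / len(rows)
    adjusted_lift += weight * (rate(t) - rate(u))

print(f"naive lift:    {naive_lift:.3f}")     # inflated by the confounder
print(f"adjusted lift: {adjusted_lift:.3f}")  # close to the true 0.15
```

DoWhy wraps this pattern (and much more robust estimators) in a model / identify / estimate / refute workflow; the sketch only shows why the adjustment matters for deciding who gets credit.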

Predictability is being solved with improved predictive models. The additional data being made available by AI will improve prediction. I think this will be solved in the next few years.

Accountability will be more difficult to manage. It is going to require trust and shared commitments enforced through smart contracts. Smart contracts are likely to be a key enabling technology.
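The accountability logic a smart contract would enforce can be illustrated with a toy model: payment releases only if the agreed actions were taken and the outcome was verified. A real smart contract would run on-chain; this sketch, with hypothetical names and amounts, only shows the shared-commitment rule.

```python
from dataclasses import dataclass, field

@dataclass
class OutcomeAgreement:
    """Toy outcome-based agreement (all terms hypothetical)."""
    base_fee: int                       # paid regardless
    outcome_bonus: int                  # paid only on a verified outcome
    required_actions: set = field(default_factory=set)
    completed_actions: set = field(default_factory=set)
    outcome_verified: bool = False

    def record_action(self, action: str):
        if action in self.required_actions:
            self.completed_actions.add(action)

    def verify_outcome(self):
        self.outcome_verified = True

    def settle(self) -> int:
        # No bonus if the buyer skipped agreed actions: the seller is
        # not accountable for an outcome the buyer did not enable.
        buyer_compliant = self.required_actions <= self.completed_actions
        if self.outcome_verified and buyer_compliant:
            return self.base_fee + self.outcome_bonus
        return self.base_fee

deal = OutcomeAgreement(base_fee=10_000, outcome_bonus=5_000,
                        required_actions={"train_staff", "migrate_data"})
deal.record_action("train_staff")
deal.verify_outcome()
print(deal.settle())  # only the base fee: "migrate_data" was never done
```

Encoding the agreed actions explicitly is the point: both sides can see, at settlement time, exactly which commitment was missed.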

Gartner is saying that accounting reporting is becoming continuous. Do you think AI will upgrade the connection between pricing and reporting, to drive development of nimble real-time pricing adjustments?

Yes. I think that is going to be essential, and it will be one of the biggest changes to enterprise pricing. If data is available in real time, configuration happens in real time, and systems have more and more complex interactions, then pricing will need to be real time as well.

This does not mean dynamic pricing, at least not as it is currently imagined. We need a new model for this, something that I think of as generative pricing. This graphic from the webinar is meant to signal that.

Consider a SaaS solution where an implementation project is needed and is charged to the client separately, currently on a Time and Materials (T&M) model. The customer expects a reduction in T&M fees with AI usage. How can this be managed so that it has minimal effect on project revenue?

The time needed for configuration, and its cost, are about to contract, in some cases from months to minutes (this is what Totogi is claiming and why we included it in the webinar).

Given this, a time-and-materials approach to configuration will NOT work going forward. If your revenue model depends on it, get ready to change.

A better question is “What is the value of a configuration?” and “How do we create additional value through our configuration services?” This goes beyond configuration to strategy, adoption and customer success.

Understand the value of configuration and adoption services and then price to share that value.
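Pricing to share value is simple arithmetic once the value is estimated. A back-of-envelope sketch, with entirely illustrative numbers:

```python
# Hypothetical value-based price for a configuration service. Instead
# of billing hours, estimate the economic value the configuration
# creates for the buyer and price to capture a share of it.

hours_saved_per_year = 2_000      # buyer effort the configuration removes
loaded_hourly_cost = 75.0         # buyer's fully loaded cost per hour
faster_time_to_value = 40_000.0   # value of going live months earlier

value_created = hours_saved_per_year * loaded_hourly_cost + faster_time_to_value

value_share = 0.25                # share of value captured (assumed here)
price = value_created * value_share

print(f"value created: ${value_created:,.0f}")  # $190,000
print(f"price:         ${price:,.0f}")          # $47,500
```

The hard work is in the inputs (hours saved, time-to-value), not the formula; that is where the strategy, adoption, and customer success services mentioned above earn their place.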

Steven, bias is important in pricing, and people have a bias against AI. How do you think about overcoming this potential double bias when adding an AI-enhanced feature to a product?

We have seen this in some work we did last year on a pilot system for price changes. The system used an LLM (Mistral) and a RAG architecture to generate price change recommendations. Some people accepted these recommendations because they were generated by an AI; others resisted them for the same reason.

There are a few things that we need to do.

  1. De-bias the models as much as possible. This mostly means de-biasing the training data. Before using data for training run tests to see what biases exist in the data. This is becoming a standard part of AI Ops. Splunk has a good explanation of this.

  2. Make sure that your AI platform supports traceability and attribution. Current AI answer engines like Perplexity do a good job of showing which sources inform an answer. This is a best practice.

  3. Use explainable AI. The black-box nature of deep learning models has been of concern for some time (the same could be said of the revenue optimization and dynamic pricing functions of the large pricing platforms). There is a lot of work being done on how to make AI easier to understand. Wikipedia has a good introductory article.
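The data-level bias test in step 1 can be as simple as comparing historical pricing outcomes across a segment the model should not condition on. A minimal sketch on invented deal data:

```python
import statistics

# Hypothetical pre-training bias check: before using historical deal
# data to train a price-recommendation model, compare discounts across
# a customer segment. A model trained on biased data will reproduce
# the gap. All deals below are invented.

deals = [
    # (segment, discount granted)
    ("enterprise", 0.22), ("enterprise", 0.25), ("enterprise", 0.30),
    ("smb",        0.08), ("smb",        0.12), ("smb",        0.10),
]

by_segment = {}
for segment, discount in deals:
    by_segment.setdefault(segment, []).append(discount)

means = {s: statistics.mean(ds) for s, ds in by_segment.items()}
gap = max(means.values()) - min(means.values())

print(means)
# Flag the dataset for review if segments received systematically
# different discounts (5-point threshold is an arbitrary example).
if gap > 0.05:
    print(f"bias flag: discount gap of {gap:.0%} between segments")
```

Production AI Ops pipelines run richer versions of this (statistical tests, many protected attributes), but the principle is the same: measure the bias in the data before the model learns it.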

I am not understanding real-time, configuration-driven pricing. Will you please elaborate and give an example?

Generative AI is being used to power real-time configuration of complex business applications. Totogi was given above as an example. Real-time configuration implies real-time pricing. If your system can be stood up and configured in minutes, buyers will not want to wait hours for a pricing proposal.
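One way to picture configuration-driven pricing: make the price a pure function of the live configuration, so any change the buyer makes yields a new price in the same interaction. The modules, rates, and seat price below are all invented for illustration.

```python
# Hypothetical rate card: price is recomputed directly from the
# current configuration, with no manual quoting step in between.
MODULE_RATES = {"billing": 400.0, "analytics": 250.0, "crm": 300.0}
PER_SEAT = 12.0

def price(config: dict) -> float:
    """Monthly price as a pure function of the configuration."""
    modules = sum(MODULE_RATES[m] for m in config["modules"])
    return modules + PER_SEAT * config["seats"]

config = {"modules": ["billing", "crm"], "seats": 50}
print(price(config))   # 700 + 600 = 1300.0

# The buyer toggles a module on in the configurator; the quote
# updates immediately instead of waiting for a proposal.
config["modules"].append("analytics")
print(price(config))   # 1550.0
```

Real systems add discounting rules and approvals on top, but the structural point holds: once configuration is real time, a price that lags it becomes the bottleneck.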

As we figure out pricing and take costs into consideration, if development and support costs come down while spending on AI goes up, will there be a balancing out?

This is an open question at present and will no doubt differ by SaaS vertical and solution. We will explore this more in the AI Monetization in 2025 Research Report that will be published in January 2025. You can get the 2024 report here.

I do not think development costs will actually go down; rather, we will be able to do more, and be expected to do more, for the same development budget. It is possible that we will also see a move to continuous development, where R&D becomes a variable operating cost.

I do expect compute costs to stay relatively high. Price per token (how most LLM companies price access to their models) may go down, but the number of tokens used in inputs and outputs is likely to increase quickly.
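The token arithmetic behind that expectation is worth making explicit. With hypothetical figures: even if price per token halves, total cost rises if token volume grows faster.

```python
# Back-of-envelope compute cost. All figures are invented for
# illustration; prices are expressed per 1M tokens, as most LLM
# providers quote them.

def monthly_cost(requests, in_tokens, out_tokens, price_in, price_out):
    """Monthly cost in dollars for a given request volume."""
    return requests * (in_tokens * price_in + out_tokens * price_out) / 1_000_000

# Today: short prompts and responses.
today = monthly_cost(100_000, in_tokens=1_000, out_tokens=500,
                     price_in=3.00, price_out=15.00)

# Next year: price per token halves, but richer context and agentic
# workflows multiply token volume tenfold.
next_year = monthly_cost(100_000, in_tokens=10_000, out_tokens=5_000,
                         price_in=1.50, price_out=7.50)

print(f"today:     ${today:,.0f}")      # $1,050
print(f"next year: ${next_year:,.0f}")  # $5,250 - 5x despite the price cut
```

This is why falling per-token prices do not automatically translate into falling compute costs for the seller.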

Support costs seem to be an open question. Most routine support will be provided by AI. Does that mean support costs will go down or will customers simply up their expectations? I suspect that both will happen and this will become a way to segment customers and offers.
