How New AI Functionality is Getting Priced: Q&A Follow Up 3

Steven Forth is CEO of Ibbaka. See his Skill Profile on Ibbaka Talio.

On Thursday, May 23rd, Mark Stiving and Steven Forth led a webinar “How New AI Functionality is Getting Priced.” The webinar attracted a lot of interest, with about 500 people across four continents registering.

You can see the webinar here.

There were a lot of questions asked in the chat. This is the third in a series of blog posts where we respond.

How New AI Functionality is Getting Priced: Q&A Follow Up 1

How New AI Functionality is Getting Priced: Q&A Follow Up 2

How New AI Functionality is Getting Priced: Q&A Follow Up 3 (This Post)

Isn’t the predicted increase in computing costs for vendors similar to what happened with hosting costs for SaaS?

No, I think this is quite different.

  1. SaaS and Cloud just moved the servers from the customer to the vendor; consolidating those servers was more cost-effective, so structural costs went down.

  2. Generative AI requires net new computing power.

  3. The compute needed to process a prompt to a large language model is several orders of magnitude greater than that needed to process a SQL query to a database (which is most of what B2B SaaS is doing); a rough back-of-the-envelope comparison follows below.
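
To make the "several orders of magnitude" point concrete, here is a minimal back-of-the-envelope sketch. Every figure is an assumption chosen for illustration (a mid-sized dense model, a short indexed database query), not a measurement; only the rough ratio matters.

```python
# Back-of-the-envelope comparison of compute per request.
# All figures are illustrative assumptions, not measurements.

SQL_QUERY_CPU_SECONDS = 0.005        # assume a typical indexed OLTP query: a few ms of CPU
CPU_GFLOPS = 50                      # assumed useful throughput of one CPU core

MODEL_PARAMS = 70e9                  # assume a 70B-parameter dense model
TOKENS_GENERATED = 500               # assume a 500-token response
FLOPS_PER_TOKEN = 2 * MODEL_PARAMS   # ~2 x parameters FLOPs per generated token (common rule of thumb)

sql_flops = SQL_QUERY_CPU_SECONDS * CPU_GFLOPS * 1e9
llm_flops = FLOPS_PER_TOKEN * TOKENS_GENERATED

print(f"SQL query:  ~{sql_flops:.2e} FLOPs")
print(f"LLM prompt: ~{llm_flops:.2e} FLOPs")
print(f"Ratio:      ~{llm_flops / sql_flops:.0e}x")  # roughly five orders of magnitude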

Do you think revenue recognition patterns might change as well as a result? Wall Street likes recurring, predictable revenue. If AI moves away from this, there will be an impact. Therefore, no matter what we think, there will be a bias toward predictable value delivery, even if AI suggests outcomes (which might be less predictable)?

Revenue recognition is an accounting issue and is well defined, so I don’t see a change here.

The real question is valuation metrics and benchmarks. For example, there are suggestions that for the Rule of 40, investors are weighting profitability over growth. I was recently told by Geoff Hansen at Garibaldi Capital that profit margin is weighted at 1 1/3 and growth at 2/3.
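
To see what that weighting does in practice, here is a minimal sketch. The 1 1/3 (i.e., 4/3) and 2/3 weights are those reported above; the two company profiles are invented for illustration.

```python
# Illustrative only: how a weighted Rule of 40 shifts the balance toward profitability.
# The 4/3 and 2/3 weights are as reported above; the company numbers are invented.

def rule_of_40(growth_pct, margin_pct):
    return growth_pct + margin_pct

def weighted_rule_of_40(growth_pct, margin_pct, w_margin=4/3, w_growth=2/3):
    return w_margin * margin_pct + w_growth * growth_pct

# A growth-led company and a profit-led company, both scoring 45 on the classic rule.
for name, growth, margin in [("Growth-led", 40, 5), ("Profit-led", 15, 30)]:
    print(name,
          "classic:", rule_of_40(growth, margin),
          "weighted:", round(weighted_rule_of_40(growth, margin), 1))

# Growth-led: classic 45, weighted ~33.3 -> falls below 40
# Profit-led: classic 45, weighted ~50.0 -> comfortably above 40
```

Under the same classic score, the profit-led profile now clears the bar while the growth-led profile does not, which is exactly the shift toward profitability described above.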

If standard operating margins go down because of the cost of computing and the cost of accessing large language models (most solutions will be built by enhancing a third-party model, open source or not), this will lead to a recalibration of valuations and new benchmarks. This has not happened yet, but it will, and it will take several years to work out.

Both Mark and Steven, considering the potential long-term impacts of generative AI on company valuations, how should B2B SaaS companies prepare for changes in investor expectations and market dynamics? What metrics should they prioritize to ensure sustained growth and attractiveness to investors in an AI-driven market?

This reads like a follow-up to the above question.

As Mark has said, the fundamentals have not changed, although the relative importance of each metric is changing (see note above on Rule of 40).

The most important metric is value to customer (V2C). Karen Chiang and Rashaqa Rahman are giving a webinar on June 5.

The other metrics remain important: growth and profitability as captured in the Rule of 40, along with the metrics below (a minimal worked example of the last three follows the list).

  • New customer acquisition

  • Average contract value (Does this change with your new AI-enabled solution?)

  • Net Revenue Retention (at the factor level)

  • Lifetime Customer Value (LTV)

  • Customer Acquisition Costs (CAC)

  • LTV/CAC
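
For readers less familiar with how the last three items fit together, here is a minimal sketch. All inputs are invented for illustration, and the churn-based LTV formula is a deliberate simplification (it breaks down, for example, when Net Revenue Retention is above 100%).

```python
# Minimal sketch of LTV, CAC and LTV/CAC. All inputs are invented for illustration.

average_contract_value = 30_000       # annual contract value, in dollars
gross_margin = 0.75                   # fraction of revenue kept after cost of service
annual_churn_rate = 0.15              # fraction of customers lost per year
sales_and_marketing_spend = 1_200_000
new_customers_acquired = 40

# Simple LTV approximation: annual gross profit per customer / churn rate
ltv = (average_contract_value * gross_margin) / annual_churn_rate

# CAC: acquisition spend divided by customers acquired in the period
cac = sales_and_marketing_spend / new_customers_acquired

print(f"LTV:     ${ltv:,.0f}")        # $150,000
print(f"CAC:     ${cac:,.0f}")        # $30,000
print(f"LTV/CAC: {ltv / cac:.1f}")    # 5.0; a common benchmark is 3x or better
```

The question in the list above is whether an AI-enabled solution moves these inputs: a higher average contract value raises LTV, while higher compute costs eat into gross margin and pull it back down.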

What are the best metrics for delivering a GPT-like vertical solution?

Value metrics. And these will depend on the specific vertical and the solution. Each solution will have its own metrics that represent its differentiation, as well as some standard metrics common to the vertical. CRM applications, for example, will include metrics on number of contacts, pipeline velocity, conversion rates, and so on.

In other words, the solution needs a value model. A properly trained and configured generative AI can do a lot of the heavy lifting here.
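
As a minimal sketch of what a value model can look like in code, here is a hypothetical example for a CRM-vertical solution. The drivers, improvement rates, and dollar figures are all invented for illustration; this is not Ibbaka's value modeling methodology, just the general shape of the idea.

```python
# Hypothetical sketch of a value model for a CRM-vertical AI solution.
# Drivers and figures are invented for illustration only.

from dataclasses import dataclass

@dataclass
class ValueDriver:
    name: str
    baseline: float          # customer's metric before adopting the solution
    improvement: float       # assumed fractional improvement from the solution
    dollars_per_unit: float  # economic value of one unit of the metric

    def annual_value(self) -> float:
        return self.baseline * self.improvement * self.dollars_per_unit

drivers = [
    ValueDriver("Qualified leads per year", baseline=2_000, improvement=0.10, dollars_per_unit=150),
    ValueDriver("Lead-to-deal conversions per year", baseline=300, improvement=0.05, dollars_per_unit=4_000),
    ValueDriver("Rep hours saved per year", baseline=12_000, improvement=0.15, dollars_per_unit=60),
]

total_value = sum(d.annual_value() for d in drivers)
print(f"Estimated annual value to customer: ${total_value:,.0f}")
# Pricing would then aim to capture some fraction of this value.
```

A generative AI's heavy lifting here is in proposing the drivers, estimating the baselines and improvements from customer data, and keeping them current.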

Other than that, the metrics mentioned above are relevant for any B2B SaaS solution, based on generative AI or not.

Gartner is saying that accounting reporting is becoming continuous. Do you think AI will upgrade the connection between pricing and reporting, to drive the development of nimble real-time pricing adjustments?

Yes. And I think this will be necessary as we move to real-time and dynamic configuration. Given the complexity of the task, this means that pricing will depend on an AI as well.

This approach to pricing will be very different from the current dynamic pricing systems used for revenue and yield management. It will be built on value models and will integrate many different types of data.
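
One hedged sketch of what such a loop could look like is below. The metric names, capture rate, guardrails, and adjustment rule are all assumptions made for illustration, not a description of any existing system.

```python
# Hypothetical sketch: nudging price within guardrails as continuously reported value metrics update.
# Metric names, thresholds and the adjustment rule are assumptions for illustration only.

def adjust_price(current_price: float,
                 realized_value: float,
                 target_capture_rate: float = 0.20,
                 max_step: float = 0.05,
                 floor: float = 0.0,
                 ceiling: float = float("inf")) -> float:
    """Move price a small step toward a target share of the value the customer is realizing."""
    target_price = realized_value * target_capture_rate
    step = max(-max_step, min(max_step, (target_price - current_price) / current_price))
    return min(ceiling, max(floor, current_price * (1 + step)))

# Example: continuous reporting shows the value the customer realized in each period.
price = 20_000.0
for period_value in [120_000, 135_000, 110_000]:
    price = adjust_price(price, period_value, ceiling=30_000)
    print(f"Adjusted price: ${price:,.0f}")
```

The interesting design questions are where the realized-value signal comes from (the value model above) and who sets the guardrails, since fully automated repricing will need to stay within terms customers have agreed to.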

As we figure out pricing and take costs into consideration, if development and support costs come down while more is spent on AI, will there be a balancing out?

This is a critical question. I think it will differ from vertical to vertical and solution to solution. Over time we will learn to be more parsimonious in our prompts and outputs, and computing will become more cost efficient, bringing down compute costs. But competition, and the need for more and more sophisticated solutions that integrate more and more data of different types, will push these costs back up.

On support, basic support costs are likely to go way down, but more sophisticated forms of support that help customers achieve strategic objectives will become more common. So I am not sure that support/success costs will go down overall, though the skills of the people providing the support will change.

Overall, it is going to be interesting to see how this plays out over the next decade. There will be a lot of work for strategic pricing experts!

Read other posts on pricing AI