Crossing the Enterprise Generative AI Chasm
AI solution providers and infrastructure providers have committed to spending $1T on AI-related capex over the next few years, in anticipation of high demand for and heavy use of generative AI enterprise solutions. A few years ago, the automotive industry committed investments of similar scale toward the development and deployment of battery electric vehicles (BEVs) and other new energy vehicles (NEVs), anticipating strong consumer demand. Today, however, as a result of slower-than-expected demand, automakers are reducing their investment commitments and run the risk of ending up with unused manufacturing capacity. What return on investment (ROI) does an enterprise need from each generative AI solution it deploys to confidently cross the generative AI chasm and to justify the huge investments in software, data, and infrastructure that have already been committed?
In a previous post, I described the ingredients of a successful generative AI enterprise application. To answer the question posed above, the enterprise will need to establish the target ROI and the period over which it will be achieved. For example, should the generative AI solution provide a 3x, 5x, 10x, or even higher ROI over one year, five years, or even longer? Enterprises must also recognize that crossing the generative AI chasm and achieving the desired ROI will require, in addition to technology, new business models, new organizational priorities, and a culture shift.
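To make such a target concrete, here is a minimal back-of-envelope sketch of how an ROI multiple over a chosen horizon could be framed; every figure in it (benefits, costs, horizon) is a hypothetical placeholder, not a benchmark.

```python
# Back-of-envelope ROI multiple for a candidate generative AI solution.
# All numbers are hypothetical placeholders used only to illustrate the framing.

def roi_multiple(annual_benefit: float, upfront_cost: float,
                 annual_running_cost: float, years: int) -> float:
    """Total benefit over the horizon divided by total cost over the horizon."""
    total_benefit = annual_benefit * years
    total_cost = upfront_cost + annual_running_cost * years
    return total_benefit / total_cost

# Hypothetical solution: $2M/year in measurable benefit, $1.5M to build
# (people, data, software, infrastructure), $0.5M/year to operate.
print(roi_multiple(2_000_000, 1_500_000, 500_000, years=1))  # 1.0x after one year
print(roi_multiple(2_000_000, 1_500_000, 500_000, years=5))  # 2.5x after five years
```

Under these made-up assumptions the solution barely breaks even in year one and reaches 2.5x only over five years, which is exactly the trade-off between multiple and horizon that the enterprise must decide on.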
Even though it may be instructive to look for analogs as far back as the first internet wave of the late nineties, I want to consider two different areas: the use of BEVs by corporate fleets, and the adoption of cloud computing by the enterprise. It is worth analyzing what is holding back the adoption of BEVs today, as well as what facilitated the enterprise's broad adoption of cloud computing and how long it took for that adoption to reach its current level.
BEVs and Cloud Computing
Fleet managers are considering BEVs and NEVs for three reasons: to reduce their organization's carbon footprint, increase their fleet's uptime, and reduce the fleet's total cost of ownership (TCO). Use cases ranging from urban package delivery to public transportation and long-haul logistics have different ROI multiples and different periods over which to achieve the projected ROI. Today, the high upfront cost of these vehicles under existing business models, their range limitations, and the inadequate charging and energy transmission infrastructure that hurts their operation and uptime all make it difficult to achieve an acceptable ROI within the desired period. As a result, fleet operators continue to delay the broad adoption of commercial BEVs and NEVs, leading automakers to reduce their investment commitments.
About ten years ago the enterprise started to broadly adopt cloud computing, crossing the proverbial chasm and giving rise to many successful cloud-based solution providers. This happened fifteen years after the technology was first introduced to the enterprise, and only after early adopters started reporting the ROI they had achieved with these solutions. The ROI came from lower IT costs and capex, greater deployment flexibility, and easier system integrations, as well as from a new IT solutions business model (subscriptions and paying only for what you consume). Even then, crossing the chasm and achieving the desired ROI took about fifteen years and required a painful and lengthy cultural change within corporate IT organizations. The transition period was characterized by intense arguments about the merits and limitations of cloud-based solutions compared to their on-premise equivalents. Therefore, in addition to technology, business models and organizational culture will determine how quickly the enterprise crosses the generative AI chasm.
Crossing the Generative AI Chasm
Crossing the generative AI chasm will require the enterprise to ask a few important questions:
- Does my AI solution need to access a commercial foundation model like GPT-4, or would fine-tuning an open-source model with my enterprise data be adequate for the solution?
- What business model do I need to use for the solution I’m developing?
- Which of my teams are ready to deploy and take advantage of the resulting solution?
To achieve broad enterprise adoption, and not only in high-tech enterprises, a generative AI solution applied to a particular use case must deliver an ROI commensurate with the investment it required (in people, data, software, and infrastructure) and the perceived execution risk that was assumed, over a period consistent with the company's strategic objectives. The selected use cases must solve a hard problem, ideally one associated with cost reduction, revenue generation, or improved customer satisfaction. Use cases that focus on other components of productivity improvement, such as time savings, risk mitigation, and employee well-being, can be considered as candidates, but they are never viewed as being as important as the three mentioned above.
Consider the role of cloud computing in the Customer Relationship Management (CRM) use case: Merrill Lynch was Salesforce's biggest early adopter and used CRM for revenue generation. Use cases such as corporate document summarization, email response generation, marketing collateral creation, and other tasks where generative AI excels today may still be cheaper to accomplish with low-cost employees than with a generative AI solution such as GPT-4 or Gemini, particularly once one considers the inferencing costs the generative AI solution provider incurs. Today such costs are not reflected in what solution providers charge enterprise users. The availability of abundant venture and private equity capital, as well as the rich balance sheets of AI infrastructure providers such as Google, Microsoft, Amazon, NVIDIA, and a few others, enables this discrepancy.
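As a rough illustration of why this pricing gap matters, the sketch below compares the per-document cost of AI summarization at today's list prices with an assumed unsubsidized cost and with a low-cost employee; every number in it is a made-up placeholder, since actual token prices, subsidy levels, and labor rates vary widely and change frequently.

```python
# Per-document summarization cost under three hypothetical scenarios.
# Every figure below is a made-up placeholder used only to show how the
# comparison shifts once subsidized inferencing and solution overhead are counted.

TOKENS_PER_DOC = 6_000            # assumed input + output tokens per summary
LIST_PRICE_PER_1K_TOKENS = 0.03   # assumed price the enterprise is charged today
TRUE_COST_MULTIPLIER = 5          # assumed gap between list price and the provider's
                                  # actual inferencing cost
OVERHEAD_PER_DOC = 0.25           # assumed amortized fine-tuning, integration,
                                  # and human-review cost per document
HUMAN_COST_PER_DOC = 0.60         # assumed cost for a low-cost employee per summary

subsidized_ai = TOKENS_PER_DOC / 1_000 * LIST_PRICE_PER_1K_TOKENS
unsubsidized_ai = subsidized_ai * TRUE_COST_MULTIPLIER + OVERHEAD_PER_DOC

print(f"AI at today's list price: ${subsidized_ai:.2f}/doc")    # $0.18
print(f"AI at assumed true cost:  ${unsubsidized_ai:.2f}/doc")  # $1.15
print(f"Low-cost employee:        ${HUMAN_COST_PER_DOC:.2f}/doc")
```

Under the subsidized list price the AI option looks clearly cheaper; once an assumed true inferencing cost and solution overhead are included, the ordering can flip, which is the discrepancy described above.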
There is no question that over time (two years? five years? fifteen years?) the available LLMs will be able to consistently produce superb summaries, marketing collateral, and even more important creations at a fraction of today's cost, even after accounting for the costs of model fine-tuning and specialization, which today's model training and inferencing cost estimates typically ignore, resulting in an acceptable ROI for both the provider and the enterprise. But we are not there yet.
Experimentation that aims to build familiarity and comfort with generative AI calls for simpler use cases; it helps the enterprise understand the technology and perhaps even test new business models. However, projects that demonstrate generative AI's ROI must address hard business problems that are critical to the enterprise. Some of these may be industry-specific and very narrow, while others may apply across industries. An experiment may last six months, but developing and deploying a solution to a critical problem may take years, during which technology, business models, and culture need to be examined and likely transformed. When selecting the problem to address with generative AI once the experimentation phase is complete, enterprises must consider three types of opportunities:
- Rethinking existing business processes and recreating them with a generative AI-first approach.
- Augmenting existing business processes by introducing generative AI-based components.
- Focusing on new processes and developing them using a generative AI-first approach.
Applying generative AI to customer support or computer programming, as many enterprises across different industries do today, may give the enterprise encouraging results during the experimentation phase. But improving customer satisfaction and reducing software development costs, respectively, requires a multipronged effort that involves more than simple prompt engineering, basic LLM fine-tuning, or naive Retrieval-Augmented Generation (RAG), and it is not as easy or inexpensive as it may have appeared during an experiment.
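For concreteness, the sketch below shows what such a "naive" RAG pipeline typically amounts to: embed the question, retrieve the top-k most similar chunks, and stuff them into a prompt. The `embed` and `llm_generate` functions are hypothetical placeholders for whatever embedding model and LLM endpoint an enterprise actually uses; this is meant only to illustrate the baseline that, the argument goes, is insufficient on its own.

```python
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder: call your embedding model of choice here."""
    raise NotImplementedError

def llm_generate(prompt: str) -> str:
    """Placeholder: call your LLM endpoint of choice here."""
    raise NotImplementedError

def naive_rag_answer(question: str, chunks: list[str], k: int = 3) -> str:
    # Embed every chunk and the question, then rank chunks by cosine similarity.
    chunk_vecs = np.stack([embed(c) for c in chunks])
    q_vec = embed(question)
    sims = chunk_vecs @ q_vec / (
        np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(q_vec) + 1e-9
    )
    top_chunks = [chunks[i] for i in np.argsort(-sims)[:k]]
    # "Stuff" the retrieved chunks into a single prompt and ask the model.
    prompt = (
        "Answer the question using only the context below.\n\n"
        "Context:\n" + "\n---\n".join(top_chunks) +
        f"\n\nQuestion: {question}"
    )
    return llm_generate(prompt)
```

A pipeline like this says nothing about chunking strategy, re-ranking, evaluation, access control, or workflow integration, which is where most of the real cost and effort in a production customer-support or coding deployment tends to go.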
Only once we measure the ROI of a few such use cases involving critical problems, and find it to be satisfactory, will venture capitalists and other investors be able to say with any certainty whether we have been investing at the right level, in the right technologies, and at reasonable valuations. Until then, generative AI solution providers, infrastructure providers, and their investors are speculating on a grand scale. And the lessons from cloud computing, and more recently from battery electric and new energy vehicles, teach us that this approach typically leaves many parties disappointed.