Recently, Goldman and Sequoia published thought pieces expressing skepticism over generative AI’s impact and the ROI of AI-related capex (e.g. GPU purchases, data centers, etc.). Ed Zitron also wrote a widely circulated blog post, leaving Twitter abuzz with FUD over AI adoption.
So, are the AI skeptics right that AI is mostly a house of cards, and that a trough of disillusionment is imminent? First, it’s important to note there are “levels” to AI skepticism, which go roughly like this:
“Full-on” AI skeptics (“Gen AI is mostly hype and useless, stochastic parrots, we are running out of data anyway, all this reminds me of web3”) - Ed Zitron
“Partial” AI skeptics (“Gen AI is useful, but not useful enough to warrant $250bn of capex, or use cases may take too long to emerge”) - GS’s Jim Covello, David Cahn from Sequoia.
The common factor in these views is doubt over whether AI’s utility, timing, and impact can justify $250bn+ of capital expenditure. Where is the $600bn of “AI revenue” going to come from to pay for the infrastructure buildout?
But what if I told you that all this can be paid for if AI helps the big cloud providers (e.g. AWS, Azure) migrate on-prem workloads to the cloud... which is a $500bn/year opportunity? This way of thinking - I’ve found - makes it much easier for investors to stomach the bullish thesis on AI adoption.
In this post, I’ll offer both “bottom-up” and “top-down” views on this matter, after doing 300+ enterprise calls on AI at AWS and through consulting - so this isn’t armchair commentary. For the short run (<2 years), I believe AI capex will be easier to justify than the skeptics think, even though sentiment can oscillate in the meantime.
To keep this debate grounded, I will comment strictly on enterprise AI adoption and leave out commentary about job market impact, etc. (that’s a separate post; ironically, the more bullish you are on AI, the more you should be concerned about job market impact). Also, this is not stock market commentary, since AI adoption isn’t the only driver of stock returns - there are just too many moving pieces, like interest rates.
But in short, there are five reasons not to feel too skeptical about generative AI’s commercial impact.
Reason #1: There is no “$600bn hole”
First, there’s no “$600bn hole” in AI capex (as per Sequoia’s piece), because AI - in “the worst case” - is just a better sales funnel for cloud services.
Think of AI as the perfect sales acceleration mechanism for migrating the last remaining “on-prem” workloads onto the cloud. This “on-prem” piece is still roughly 50-70% of worldwide IT spend (depending on who you ask), representing roughly $300bn-$500bn of addressable spend per year that is up for grabs. So AI’s real value is that it provides a clear incentive for on-prem workloads to be integrated and modernized, and this migration represents the “real money” to be made by hyperscalers.
So taking a step back, the reason hyperscalers can confidently splurge on Nvidia chips is that they know it strengthens the economics of moving to the cloud. Worst case, they sell more EC2 instances, BigQuery nodes, Azure Fabric, whatever. In that worst case, hyperscalers don’t care whether generative AI itself makes money in year 1, year 2, etc.
Everyone that’s been holding out on trying cloud vendors is suddenly making AWS and GCP accounts to try Bedrock and Gemini. And in the process, they are now having conversations about migrating database workloads, data warehouses, and so on, because everything plays better together within the same AWS or GCP ecosystem.
Cloud provider revenue is at a ~$300bn run rate, so if GenAI lifts this by just 10% per year, that’s already $30bn. OpenAI is also doing $4bn ARR and growing fast. That’s roughly $35bn of lift across the board, and it’s more than enough to justify $200bn of capex (at least on Excel).
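Here’s a minimal back-of-envelope sketch of that math; the run rate, lift percentage, and capex figure are the rough assumptions from the paragraph above, not reported numbers:

```python
# Back-of-envelope sizing: is AI-attributable revenue in the ballpark of AI capex?
# All inputs are rough assumptions from the discussion above, not reported figures.
cloud_run_rate_bn = 300    # approximate annual cloud provider revenue run rate, in $bn
genai_lift = 0.10          # assumed GenAI-driven lift to cloud revenue per year
openai_arr_bn = 4          # OpenAI's approximate ARR, in $bn
ai_capex_bn = 200          # rough annual AI capex to justify, in $bn

incremental_bn = cloud_run_rate_bn * genai_lift + openai_arr_bn
print(f"Incremental AI-attributable revenue: ~${incremental_bn:.0f}bn/year")   # ~$34bn
# Simple payback ignores margins and growth; it's only a sanity check on magnitudes.
print(f"Simple payback on ${ai_capex_bn}bn of capex: {ai_capex_bn / incremental_bn:.1f} years")
```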
Per Andy Jassy (AWS) from the last earnings call:
We expect the combination of AWS' reaccelerating growth and high demand for GenAI to meaningfully increase year-over-year capital expenditures in 2024, which given the way the AWS business model works is a positive sign of the future growth. … And we don't spend the capital without very clear signals that we can monetize it this way. We remain very bullish on AWS. We're at $100 billion-plus annualized revenue run rate, yet 85% or more of the global IT spend remains on-premises. And this is before you even calculate GenAI, most of which will be created over the next 10 to 20 years from scratch and on the cloud.
Of course, one risk to this thesis is that more compute goes to the edge (and outside of servers), but no one in the market is thinking that far ahead.
Reason #2: 90%+ of enterprise AI consumption is programmatic and invisible to end users
Much of AI skepticism comes from the claim that “there’s no killer app for LLMs/AI yet”. This is just wrong on so many levels. The killer app for LLMs is unstructured data processing and mining, and these workloads are already saving companies hundreds of millions in data labeling costs - and companies have barely scratched the surface. That said, this workload is largely invisible to the public or any armchair pundit, so it’s understandable that AI feels like web3.
In short, it’s possible that people are underestimating the impact of generative AI because their mode of use is ChatGPT or its equivalents, which are arguably not extremely useful for everyone. But when LLMs are programmatically applied to clean up and label millions of rows of unstructured data, the technology is suddenly far more useful.
And all that “background” LLM consumption is not visible to people. Out of sight, out of mind, they say - and this introduces a cognitive bias that leads many to underestimate LLMs’ utility. This will probably be the case for a while: most LLM workloads are not ChatGPT interactions but things like document processing, log cleaning, internal apps, and so forth.
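To make “programmatic” concrete, here’s a minimal sketch of what one of these background workloads looks like. The model name, prompt, and label set are illustrative placeholders, and real deployments run this over millions of rows in batch pipelines rather than a loop - but the shape is the same: no end user ever sees the model call.

```python
# Minimal sketch of "invisible" LLM usage: labeling unstructured text in bulk.
# Assumes the OpenAI Python SDK; model name, labels, and prompt are placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

LABELS = ["billing", "bug_report", "feature_request", "other"]

def label_ticket(text: str) -> str:
    """Ask the model to bucket one support ticket into a fixed label set."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; pick whatever model/price point fits the job
        messages=[
            {"role": "system",
             "content": f"Classify the support ticket into exactly one of: {', '.join(LABELS)}. "
                        "Reply with the label only."},
            {"role": "user", "content": text},
        ],
        temperature=0,
    )
    label = response.choices[0].message.content.strip()
    return label if label in LABELS else "other"

tickets = [
    "I was charged twice for my subscription this month.",
    "The export button crashes the app on large files.",
]
for t in tickets:
    print(label_ticket(t), "<-", t)
```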
Also, don’t forget that if an internal app is actually useful for an enterprise and is a huge competitive differentiator, they have no incentive to publicize it! But don’t conflate lack of visibility with lack of usefulness.
Reason #3: Enterprises are serious and committed
Enterprises are actually serious about generative AI - more than the AI skeptics can imagine - and will stay in the game until it “clicks”. Most generative AI projects are C-suite priorities, since everyone sees this as a generational threat.
This is very different from cloud adoption or application modernization moves, where you could get by selling to some VP of Engineering or line of business (LOB). Because these projects have powerful sponsors, some of the crappy POCs will eventually get cleaned up and make it into production. Product managers are being asked to include Gen AI in their roadmaps. Sure, mistakes will be made, but things will get done, and the impact will be felt eventually.
That’s especially the case with low-hanging-fruit use cases such as internal knowledge management, marketing automation, support, etc. - and these projects are on track to make it into production within a year or two. The bottleneck isn’t resolve; it’s best practices. Some early adopters like Morgan Stanley already have production deployments, which only adds fuel to these initiatives.
In other words, we are mostly talking about the speed of adoption, not whether the adoption will happen at all.
Reason #4: New industries take time to develop
The impact of generative AI can seem pretty underwhelming until new industries that leverage AI, like affordable robotics, actually emerge.
Automation is hard to “feel” and appreciate viscerally, compared to seeing a demo of AI-powered robots picking up and folding clothes. There’s a limit to how much one can get excited about automating web browser processes. Simply put, much of AI skepticism may stem from the fact that we are still doing household chores ourselves.
So where are the revenue-generating applications? That’s a valid question - but direct revenue generation from AI will most likely not come from the software industry; it will come from industrial or hard-biotech use cases, which take a long time to bring new products to market. We just need to be patient, though I think the market is already discounting this future.
Reason #5: Multi-modal hasn’t even gotten started
Multi-modal AI hasn’t even really started: even if it takes 5+ years for LLMs to create new non-software industries (an assumption), multi-modal AI alone may keep the GPUs busy and worth buying. Models like Sora (text-to-video) cost more compute to train than LLMs at this point, and the market hasn’t figured out how to run inference on them cheaply, which is why OpenAI is delaying the launch of Sora.
Will Sora actually generate enough “revenue” for OpenAI to make this worthwhile? I don’t know. But does it matter? Not really, since that won’t stop Google and OpenAI from competing in this space.
In a nutshell, I think AI skeptics are primarily conflating the lack of visible AI adoption (from a consumer’s perspective) with generative AI being useless. The progress is happening mostly invisibly (programmatic use of LLMs), and enterprise AI applications are still mostly half-baked - not good enough to replace entire departments of workers. But can they get good enough to help companies do a 20% reduction in force and run smoothly? You bet.
Why it can still all go up in flames
Ok, so one dark horse actually isn’t whether enterprise AI adoption will be impactful - it’s whether the COGS of delivering AI apps falls too fast and too far for there to be “net-new” value creation, because we all just run out of ideas on how to make money. In other words, if someone invents and commercializes technology to serve GPT-4.5-level models at $1 per 1 trillion tokens, we may be in a weird place.
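To put that number in perspective, here’s a quick sketch of the implied cost compression; the “current” price point is a rough assumption for frontier-model API pricing at the time of writing, not a quoted rate:

```python
# How extreme "$1 per 1 trillion tokens" would be versus today's API pricing.
# The current price is a rough assumption (~$5 per 1M tokens for a frontier model).
current_price_per_1m_tokens = 5.0           # assumed, in $ per 1M tokens
hypothetical_price_per_1m = 1 / 1_000_000   # $1 per 1T tokens = $0.000001 per 1M tokens

compression = current_price_per_1m_tokens / hypothetical_price_per_1m
print(f"Implied cost compression: ~{compression:,.0f}x cheaper than today")  # ~5,000,000x
```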
But that’s a subject of another post.