OpenAI’s biggest worry isn’t DeepSeek
DeepSeek accelerated timelines, but we will just be accelerating, not changing course
Since DeepSeek R1’s release, there has been an avalanche of opinions on topics ranging from Nvidia to Chinese AI supremacy to, my favorite, Jevons paradox. Unfortunately, separating signal from noise has been difficult, given the sheer volume of opinions.
In this post, I will discuss DeepSeek’s actual impact on the AI ecosystem, as opposed to the hype and speculation. In last week’s post, I predicted that DeepSeek would climb virally into the U.S. consumer and enterprise markets. Well, it has happened (it is now available on AWS and Azure) - so now what?
Internet sentiment is fickle: the pendulum swung overnight from “OpenAI will be the undisputed winner” to “OpenAI has no moat, because China and Meta will commoditize AI models.” The truth is somewhere in between.
So let’s think more deeply about how exactly DeepSeek may affect the “current order” led by OpenAI and Nvidia.
OpenAI Will Be Just Fine; the Speed of Enterprise AI Adoption Is the Main Issue
One of DeepSeek’s biggest contributions was showing that a simple reinforcement learning approach - rather than complicated tree search or other machinery - can produce reasoning models. DeepSeek also independently confirmed that spending more compute at inference time produces better results. Lastly, DeepSeek laid out a path to another 2-5x of cost optimization in LLM training.
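To make “simple reinforcement learning” a bit more concrete, here is a minimal sketch of the group-relative scoring at the heart of a GRPO-style setup, the kind of recipe DeepSeek describes for R1: sample several answers per prompt, reward them with simple rule-based checks (e.g., is the final answer correct?), and score each answer against its own group of samples instead of training a separate value model. The function name and toy rewards below are illustrative assumptions, not DeepSeek’s actual code.

```python
import numpy as np

def group_relative_advantages(rewards):
    """GRPO-style scoring: normalize each sampled answer's reward
    against the mean and std of its own group of samples."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

# Toy example: 8 answers sampled for one math prompt, rewarded 1.0 when
# the final answer matches the reference and 0.0 otherwise.
rewards = [1.0, 0.0, 0.0, 1.0, 0.0, 0.0, 1.0, 0.0]
print(group_relative_advantages(rewards))
# Correct answers receive positive advantages and incorrect ones negative,
# so the policy update pushes probability toward the reasoning traces that
# led to correct answers - no learned critic or tree search required.
```

That group-relative trick is what makes the approach “simple”: the reward is a cheap rule-based check, and the baseline comes for free from the other samples in the group.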
But if these things are true, then DeepSeek also proved that OpenAI - which has been sitting on these insights about reasoning models for at least a full year - has been quietly climbing the scaling-law curves. Of course, OpenAI had to bear the extra cost of innovation as the pioneer, but many people underappreciate the importance of OpenAI’s lead.