Prompts Are the New IP—How Prompts Are Quietly Eating All Your "Business Logic"
The "business logic" of most apps is getting absorbed into prompts, which are becoming mission-critical assets.
As AI models get better, an interesting thing is happening: prompts are slowly becoming repositories of a company’s “business logic”, distilling domain expertise, trade secrets, and more. The key distinction, of course, is that prompts are written for machines, not humans.
In fact, more "business logic" within applications is already moving away from traditional code and into prompts. The prompts from leading AI products are starting to resemble standard operating procedures (SOPs) or manuals for human employees, as opposed to chatbot instructions.
If you have recently found yourself writing longer and more “ambitious” prompts, that’s what I mean.
Consider Anthropic’s system prompt for Claude 3.7 Sonnet. It is long and sophisticated, full of do’s and don’ts and nuance - written, basically, like an employee onboarding document. As reasoning models improve, this trend of prompts doing more of the “heavy lifting” is accelerating, mainly because AI is already human-level at instruction following.
Thus, prompts are fast becoming mission-critical business artifacts, like SOPs but with even more leverage, since they can plug into AI agents and run 24/7. They will contain sensitive internal processes, proprietary information, and critical business insights - in short, they are the key intellectual property of the post-LLM era. And this goes beyond prompts alone: what really matters is the overall AI system architecture.
Unfortunately, as of 2025, many enterprises haven’t yet recognized this shift, and treat prompts as second-class citizens relative to application code, data, and models.
For example, most orgs still treat AI projects as traditional data science or ML team projects - prioritizing application code and infra tooling over writing good prompts. These are essentially “old habits” that made sense when AI models were weak, but they are depreciating fast, slowing companies down and costing money.
So in this post, I will:
explain how prompts are now “eating” business logic,
show how companies that don’t recognize this shift will lose agility and be slower to adopt innovations,
and walk through the typical mistakes companies make, and why they are costly.
Prompts are eating “business logic”
While new AI agent frameworks and new models are fun to talk about - and certainly command a big chunk of AI “mindshare” - the real advancement of late has been AI models’ ability to understand user intent and follow instructions.
This enhanced instruction-following significantly reduces the ROI of the complex, custom-engineered solutions that were historically necessary as workarounds for earlier models’ limitations.
A relevant analogy: if you hire someone with no training, you have to break every concept into small pieces and assign simple tasks; otherwise, you will overwhelm them. But if you hire someone with a PhD, you can get away with handing them a big manual and asking them to just “learn it”.
Consequently, better AI models reduce the bespoke heavy lifting needed to make things work. And as AI agents proliferate, we can expect enterprise applications to shrink in codebase size, as business logic increasingly consolidates around the new triumvirate of prompts, models, and data - as opposed to code.
This also means AI agent architectures are getting simpler. You no longer need to write complicated dialog trees or “agent swarms” to spell out, declaratively and explicitly, how an agent should respond.
Instead, as of March 2025, you can write an AI agent in a single file, like this seat booking agent example (from OpenAI’s Agents SDK, slightly modified). It now takes perhaps less than 200 lines of code to create a decent customer service bot, and most of the “business logic” is the prompt:
f"""
# System context
You are part of a multi-agent system called the Agents SDK...
...
# Your role
You are a seat booking agent. If you are speaking to a customer, you were probably transferred from the triage agent. Use the following routine to support the customer.
# Routine
1. Ask for their confirmation number.
2. Ask the customer what their desired seat number is.
3. Use the update seat tool to update the seat on the flight.
...
10. ...
# SOP
Here are the relevant protocol(s) for you.
...
If the customer asks a question that is not related to the routine, transfer back to the triage agent.
"""
Note that this prompt could be drafted by just about anyone, in Google Docs, Word, Notion, etc. It has sections for:
System context (system prompt)
Role description (“Your role”)
SOP
…
Aside from the prompt, the actual code the end developer needs to write is typically less than 200 lines, and can be as short as the snippet below:
# A lightly edited version of OpenAI's Agents SDK customer service example;
# assumes PROMPT (above), AirlineAgentContext, and the update_seat tool are
# defined as in that example.
from agents import Agent

seat_booking_agent = Agent[AirlineAgentContext](
    name="Seat Booking Agent",
    handoff_description="A helpful agent for United Airlines that can update a seat on a flight.",
    instructions=PROMPT,
    tools=[update_seat],
)
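For completeness, here is roughly how such an agent might be invoked - a minimal sketch using the SDK’s Runner, assuming AirlineAgentContext can be constructed with defaults as in OpenAI’s example:

from agents import Runner

# One conversation turn: the SDK handles the model call, tool invocations
# (e.g., update_seat), and handoffs, all driven by the routine in the prompt.
result = Runner.run_sync(
    seat_booking_agent,
    "Hi, I'd like to change my seat to 12A.",
    context=AirlineAgentContext(),
)
print(result.final_output)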
Thus, writing this prompt gets you roughly 20-50% of the way to a working agent; exactly how much depends on the complexity of your use case and the type of agent you are building. For complex, domain-specific apps (such as a biochemistry research agent), the importance of custom models and datasets increases dramatically.
But here’s the takeaway:
Prompts (and the overall solution architecture) are eclipsing application code as the value driver.
Application code’s value is already diminishing with AI coding assistants and “vibe coding” tools.
Successfully navigating this paradigm hinges on robust solution architecture and thoughtful system design rather than traditional software engineering alone - a skill that is increasingly commoditized. Companies quick to recognize and adapt to this shift will secure a lasting competitive advantage.
Unfortunately, in 2025, most companies are still running their AI programs like traditional data science projects, and making costly mistakes.
Mistakes Companies Make in Building AI Agents
But perhaps due to inertia, many organizations mistakenly consider prompts solely within the purview of tech teams. This is fundamentally misguided: data science or tech teams alone shouldn’t bear full accountability for the success of AI agent and automation projects.
This misconception also encourages poor engineering practices, such as embedding prompts directly into codebases, which is insecure and creates unnecessary gatekeeping. Restricting prompts to codebases alienates critical business stakeholders who ideally should own and influence these prompts.
For example, imagine the above customer support prompt were embedded directly into backend code.
Every time a policy or script needs updating, developers must make changes and redeploy the application. This approach severely slows down business agility and unnecessarily burdens tech teams.
But if you think about it, the developer is not the main stakeholder here - it’s the GM of the customer support business! The GM should be at least as accountable for the prompt as the developer.
Effectively, managing prompts as mere application code treats them like “second-class citizens”: prompts are viewed as sub-components of broader AI applications rather than as independent artifacts that deserve separate management.
Such an approach is outdated and ineffective, not to mention insecure:
Prompts are distillations of a company’s business knowledge - so the GMs, managers, and other domain owners should be part of the prompt crafting process. After all, it is their business that will be amplified by AI agents. Unfortunately, many AI project teams don’t collaborate with business domain experts nearly closely enough.
Companies instead should manage prompts independently through platforms accessible to both technical and business stakeholders.
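To make that concrete, here is a minimal sketch of the decoupling, continuing the earlier seat booking example: the agent pulls its instructions from an external store at runtime instead of a hardcoded string. The load_prompt helper and the file-based store are hypothetical stand-ins for a real prompt management platform:

from pathlib import Path
from agents import Agent

PROMPT_STORE = Path("prompts")  # hypothetical store; in practice, a managed platform

def load_prompt(name: str, version: str = "latest") -> str:
    # Fetch the current text of a named, versioned prompt. A business owner can
    # edit the entry behind this lookup without a code change or redeploy.
    return (PROMPT_STORE / name / f"{version}.txt").read_text()

seat_booking_agent = Agent[AirlineAgentContext](
    name="Seat Booking Agent",
    handoff_description="A helpful agent for United Airlines that can update a seat on a flight.",
    instructions=load_prompt("seat_booking_agent"),  # fetched, not hardcoded
    tools=[update_seat],
)

The point is not the file read itself; it’s that prompt changes now follow a content workflow (edit, review, version) rather than a software release cycle. (The Agents SDK also accepts a callable for instructions, so the prompt can even be re-fetched on every run.)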
Part of the issue is tooling and culture. Although sophisticated prompt management software exists (e.g., LangSmith, Braintrust), most of it caters primarily to developer personas; business stakeholders are not treated as primary users of these applications.
Therefore, there remains a clear need for tools that facilitate seamless collaboration between business and technical teams. Databricks has introduced some features for business stakeholder collaboration in its data lineage products, but there is a lot left to be desired.
These issues represent just a few emerging anti-patterns. Ignoring this transformation will prove costly as prompts increasingly become central to AI operations, especially with LLMs growing more "agentic."
Not All Prompts Are Equal: Where to Prioritize Effort
Some readers may misconstrue my thesis as saying that all prompts are important, so let me clarify: not all prompts are equal, just as not all business processes are equally important.
To prioritize which prompts need to be optimized first, recognize clearly that the business value of prompts varies significantly by context:
Commodity Prompts: Certain prompt types—customer support chatbots, FAQ responses, basic translation tasks—are relatively commoditized. While these prompts still require periodic optimization, they won’t drive outsized competitive differentiation.
High-Value Proprietary Prompts: Conversely, prompts embedded into tasks directly tied to your profit centers—investment trading strategies for hedge funds, pricing and negotiation policies for sales teams, detailed internal compliance procedures for regulated industries—are your crown jewels. These are the prompts you must invest heavily into, obsessively iterating, safeguarding, and optimizing.
Identify these high-value prompts early, allocate your top talent and tightest security measures there, and rigorously track their performance outcomes. Such prompts become core competitive advantages, worth protecting fiercely and improving continuously.