Welcome to the first edition of Notes by Sherin Joseph Roy.
If you're reading this, you probably found me through one of my posts about building enterprise AI systems, or maybe we connected on LinkedIn or daily.dev. Either way, I'm grateful you're here.
I want to start this newsletter with a confession: I almost didn't launch it. There are already too many AI newsletters promising insights, too many founders sharing "lessons learned," too many voices competing for attention in your inbox. What could I possibly add that isn't already being said better by someone else?
Then last week, I sat in a meeting with a Fortune 500 CTO who said something that changed my mind. We were discussing why their team had stopped using the AI coding assistant they'd purchased six months earlier—a tool that cost them significant money and had passed all their technical evaluations with flying colors.
His answer was brutally simple: "It worked. We just couldn't afford to trust it."
That sentence has been rattling around in my head ever since. Because it perfectly captures the paradox we're living through right now in enterprise AI.
The Numbers Everyone Is Talking About
The 2025 Stack Overflow Developer Survey just dropped, and one data point is causing existential conversations in every tech company I talk to: 84% of developers now use AI tools in their workflow. That's up from 76% last year. Adoption is accelerating faster than almost any technology shift in recent memory.
But here's the part that keeps me up at night: only 33% of those same developers actually trust the output. Positive sentiment toward AI tools has dropped from 70% in 2024 to 60% this year. We're not just using AI tools more—we're trusting them less.
This isn't a temporary dip. This is the market telling us something fundamental about how we've been building and selling AI products. We optimized for adoption and forgot to optimize for trust.
What the Trust Tax Actually Costs
At DeepMost AI, we've spent roughly $200,000 learning what happens when enterprises deploy AI systems they can't quite trust. That number includes failed pilots, extended evaluation periods, custom integration work that ultimately got shelved, and opportunity costs from deals that stalled in procurement because we couldn't satisfactorily answer questions about reliability.
The trust tax shows up in ways that don't appear in any product metrics. It's the senior engineer who reviews every line of AI-generated code with the same scrutiny they'd apply to a junior developer's first pull request. It's the operations team that maintains manual verification steps alongside automated AI decisions "just to be safe." It's the executive who greenlights the AI pilot but refuses to let it touch production systems until it's proven itself for six months.
These aren't irrational behaviors. They're appropriate responses to systems that make confident recommendations based on incomplete understanding. The problem is that most AI products are designed to hide uncertainty rather than communicate it clearly.
The Meeting That Changed Everything
Three months ago, we deployed an AI agent for a logistics company that was supposed to optimize their supply chain routing. The agent performed exceptionally well in testing—finding cost savings, identifying inefficiencies, suggesting improvements that their experienced operations team confirmed were technically sound.
Then we watched the session recordings of how the team actually used it in their daily workflow. They would ask the agent for a recommendation, spend fifteen minutes verifying it manually using their existing tools, discuss it in a team meeting, and only then implement the change. The AI was supposed to save them time. Instead, it added a new step to their existing process.
When we asked why they weren't trusting the recommendations more directly, the operations manager said something I'll never forget: "Your AI is probably right. But if it's wrong and I don't catch it, we lose six figures in misrouted shipments. I can't afford to assume it's right."
That's when I realized we'd been solving the wrong problem. We were building AI that was impressively capable. What we needed was AI that was appropriately uncertain.
What "Appropriately Uncertain" Actually Means
Most AI systems are trained to always provide an answer. During development, they're penalized for saying "I don't know" or asking clarifying questions. This makes sense for consumer applications where user experience depends on smooth, confident interactions.
It makes zero sense for enterprise systems where wrong answers cost real money.
After that logistics deployment, we rebuilt our entire agent architecture around a principle we now call transparent autonomy. Our AI agents still operate autonomously and make recommendations. But they also explicitly communicate the following (there's a rough code sketch of what this looks like right after the list):
What information they used to make the recommendation
What assumptions they had to make
How confident they are in the solution
What additional context would increase their confidence
When they should escalate to a human for verification
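To make that concrete, here's a minimal sketch in Python of the shape a recommendation like this can take. The class, field names, and the 0.8 review threshold are illustrative placeholders I'm using for this newsletter, not our actual schema:

```python
from dataclasses import dataclass, field

# Illustrative sketch of a "transparent autonomy" recommendation.
# Field names and the threshold are assumptions, not a real schema.

@dataclass
class AgentRecommendation:
    action: str                          # the recommended change
    inputs_used: list[str]               # data the agent actually consulted
    assumptions: list[str]               # gaps it had to fill in on its own
    confidence: float                    # 0.0 - 1.0 self-assessed certainty
    missing_context: list[str] = field(default_factory=list)  # what would raise confidence
    needs_human_review: bool = False     # escalate instead of auto-applying

def finalize(rec: AgentRecommendation, review_threshold: float = 0.8) -> AgentRecommendation:
    """Route low-confidence or assumption-heavy recommendations to a human."""
    if rec.confidence < review_threshold or rec.missing_context:
        rec.needs_human_review = True
    return rec

rec = finalize(AgentRecommendation(
    action="Reroute lane 14 shipments via the secondary hub",
    inputs_used=["last 90 days of lane-14 shipment data", "current carrier rate card"],
    assumptions=["no seasonal demand spike next quarter"],
    confidence=0.62,
    missing_context=["seasonal demand patterns for Q4"],
))
print(rec.needs_human_review)  # True -> surface to the operations team, don't auto-apply
```

The important part is the last field: a recommendation that isn't confident enough doesn't execute, it asks.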
This approach makes our demos less impressive. There are moments when our agent says "I need more information about your seasonal demand patterns before recommending this routing change" instead of just generating a solution. Investors hate it. They want to see the AI confidently solving problems, not asking questions.
But our production deployment success rate went from 30% to 85% after we made this change. Because enterprises don't care about impressive demos. They care about systems they can trust enough to actually use.
The Bigger Pattern I'm Seeing
The trust gap in AI tools isn't just about coding assistants or enterprise agents. It's showing up everywhere AI is being deployed in high-stakes environments:
In healthcare, doctors use AI diagnostic tools but still order the same tests they would have ordered anyway because they can't risk being wrong based on an AI recommendation they don't fully understand.
In finance, trading algorithms generate suggestions that human analysts verify manually before execution because the cost of an error is too high to delegate completely.
In legal, AI document review tools find relevant cases but lawyers still read them all because missing something due to AI error could constitute malpractice.
The pattern is consistent: AI tools get adopted for their promised efficiency gains, but organizations immediately build verification layers around them because they can't afford to trust them completely. The net result is often slower processes, not faster ones, because you're doing both the AI work and the human verification work.
What I'm Learning to Build Differently
The companies that will win the next phase of enterprise AI aren't the ones building the smartest algorithms. They're the ones building the most trustworthy systems. This requires different architectural choices (I've included a rough sketch of how they fit together after the list):
Design for Auditability: Every AI decision should come with a clear explanation of how the system reached it. Not just "here's my recommendation" but "here's what I considered, here's what I prioritized, here are the trade-offs I made."
Build in Confidence Scoring: AI systems should be able to assess their own certainty. Low-confidence recommendations should automatically surface for human review rather than executing blindly.
Create Escalation Pathways: When AI encounters situations it can't handle confidently, it should be able to ask for help with specific questions rather than either guessing or failing silently.
Optimize for Learning, Not Just Performance: Systems should get smarter over time by incorporating human feedback on their decisions, not just running the same model repeatedly.
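If you want a feel for how these principles connect in practice, here's a deliberately simplified Python sketch. Everything in it, including the field names and the 0.75 confidence floor, is an assumption for illustration rather than a description of our production system:

```python
import json
from datetime import datetime, timezone

# Illustrative only: one way to wire auditability, confidence scoring,
# escalation, and feedback capture into a single decision record.

CONFIDENCE_FLOOR = 0.75  # below this, escalate instead of executing

def make_decision_record(recommendation, considered, tradeoffs, confidence):
    """Bundle the 'why' with the 'what' so every decision is auditable."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "recommendation": recommendation,
        "considered": considered,          # inputs the system looked at
        "tradeoffs": tradeoffs,            # what it prioritized and gave up
        "confidence": confidence,
        "status": "pending_review" if confidence < CONFIDENCE_FLOOR else "auto_approved",
        "human_feedback": None,            # filled in later to close the learning loop
    }

def record_feedback(record, accepted, note=""):
    """Capture the human verdict so future decisions can learn from it."""
    record["human_feedback"] = {"accepted": accepted, "note": note}
    return record

decision = make_decision_record(
    recommendation="Consolidate the Tuesday and Thursday runs into one shipment",
    considered=["warehouse capacity report", "carrier lead times"],
    tradeoffs="prioritized cost over delivery speed",
    confidence=0.68,
)
print(json.dumps(decision, indent=2))  # status: pending_review -> escalate with specific questions
```

The point isn't this particular structure. It's that the explanation, the confidence, and the human verdict travel with the recommendation instead of living in someone's head.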
These principles sound obvious when stated plainly. But implementing them requires resisting every instinct toward making your AI seem more capable than it actually is. It requires accepting that "I need more information" is sometimes the right answer, even though it makes for a less impressive demo.
I'm starting this newsletter because I keep having conversations with other founders, product builders, and technical leaders who are wrestling with similar challenges. Not just in AI, but in building any technology that's supposed to make important decisions or handle complex, contextual problems.
The public conversation about AI is dominated by two extremes: uncritical hype about how AI will transform everything, and alarmist warnings about how AI will destroy everything. There's very little practical discussion about the messy middle ground where most of us are actually working—trying to build AI systems that are genuinely useful without being overconfident about what they can reliably do.
This newsletter is my attempt to document what I'm learning in that messy middle. Some weeks I'll write about specific technical challenges we're solving at DeepMost. Other weeks I'll share founder reflections on building a company, hiring a team, or navigating the weird dynamics of the enterprise AI market. Occasionally I'll just write about interesting patterns I'm noticing in how technology and human organizations interact.
What I won't do is pretend I have everything figured out. I'm learning in public, making mistakes, and trying to build something meaningful in a space that's changing faster than anyone can keep up with. If that sounds interesting, I'm glad you're here.
What's Coming Next
In the next few editions, I'm planning to write about:
The context problem: Why most AI tools feel like talking to a stranger, and what we're building to fix it
Hiring for AI products: What I look for when hiring engineers and designers who understand contextual intelligence
Building in Bangalore: The unexpected advantages and challenges of building an AI company outside the typical tech hubs
The first 100 days: Honest reflections on what the early days of founding DeepMost actually looked like
But I'm open to feedback. If there are specific topics you want me to explore, or questions about building AI products or founding a company, reply to this email. I read everything, and subscriber questions often become the best newsletter topics.
One Ask
If this resonated with you, I'd appreciate it if you'd forward it to one person who might find it valuable. I'm not trying to build a massive audience—I'm trying to build a community of people who care about doing this work thoughtfully.
Thank you for being here at the beginning.
Building something meaningful,
Sherin Joseph Roy
Co-Founder & Head of Products, DeepMost AI
Bangalore, India
P.S. — I'm always curious about what problems people are actually facing when building or deploying AI systems. If you're working on anything related to enterprise AI, contextual intelligence, or human-centered product development, I'd love to hear from you. Just hit reply.

