title: "Learning from Setbacks: Three Failures That Shaped My Approach to Building"
date: 2025-10-15
author: David Sanker
The contract review system we built was technically impressive. It could do almost everything. That was the problem.
I've spent enough time in law to know that "can do almost everything" usually means "does nothing well." And I've spent enough time writing code to know that complexity is where projects go to die quietly. But knowing something intellectually and learning it in your bones are two different experiences. I had to earn this one.
Over the years, three failures in particular have shaped how I think about building — whether that's a legal tech product, a startup, or the structure of someone's professional life. They don't feel like war stories anymore. They feel like the foundation.
TL;DR
- Learning from mistakes is essential to building anything that lasts.
- Each failure carries a lesson specific to its context — and applicable far beyond it.
- Realigning goals and methods after a setback often produces the most durable innovation.
Key Facts
- Automated contract review system design led to overcomplexity.
- AI healthcare project failed due to regulatory compliance issues.
- Predictive patient care AI needed to consider GDPR and HIPAA.
- Inconsistent datasets led to unreliable market trend forecasts.
- "Less is more" strategy improved subsequent AI system designs.
The Misstep of Overcomplexity
What Happened
Early in my career, I was part of a team building an automated contract review system for law firms. The vision was genuine — contract analysis is tedious, high-stakes, and ripe for better tooling. We had machine learning, natural language processing, and enough enthusiasm to paper over the warning signs.
What we built was extraordinary in its ambition and exhausting in its use. Every edge case got a feature. Every stakeholder request made it into the next sprint. By the time we shipped, the system required extensive training just to navigate, and the attorneys it was meant to help were still reaching for their highlighters.
We had optimized for comprehensiveness. We should have optimized for clarity.
What It Taught Me
There's a version of this failure I see in people's careers, too. We load up our professional lives with every credential, every side project, every responsibility someone asks us to take on — until the whole thing becomes too heavy to move forward with any speed. My legal training made me thorough. My engineering instincts made me want to solve every problem at once. What coaching eventually taught me is that the question isn't what can we add — it's what actually matters here.
Simplicity isn't a lack of ambition. It's the discipline to stay close to what's essential.
Practical Applications
- Run regular feedback sessions with actual users before finalizing features — not after.
- Prioritize intuitive design over feature count.
- Build for scalability: simpler systems are far easier to expand without introducing instability.
Ignoring the Regulatory Landscape
What Happened
The second failure stings a little differently, because I should have seen it coming. We were developing an AI application for predictive patient care — real-time data analysis to anticipate adverse health events. The clinical potential was real. The team was sharp. And we walked straight into a wall we'd failed to map.
The wall was regulatory compliance. GDPR in the EU. HIPAA in the US. Patient data is governed by frameworks that exist for genuinely important reasons, and we had treated compliance as something to sort out after we'd proven the concept. That sequencing cost us the project.
What It Taught Me
Here's what I understand now, having practiced law and built products: regulation isn't the enemy of innovation. It's the terrain. You don't complain that gravity makes engineering harder — you build with it in mind from the start.
My legal background made me fluent in this, eventually. But fluency doesn't prevent arrogance. We were excited about what we were building, and excitement has a way of crowding out due diligence. The lesson wasn't "lawyers should lead tech projects." It was that legal and technical thinking need to be in the room together from day one, not introduced to each other at the finish line.
This is something I come back to when I'm working with founders and professionals who are building something new — in their businesses or in their lives. The constraints you ignore early become the crises you manage later.
Practical Applications
- Bring legal expertise into the development process early, not as a final review.
- Build a habit of monitoring regulatory developments — they shift, and your systems need to shift with them.
- Create compliance checklists tailored to your specific frameworks (GDPR, HIPAA, and whatever else governs your domain).
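A tailored checklist can be as simple as a small data structure that makes open requirements impossible to ignore. This is a minimal, hypothetical sketch — the frameworks and item wording are illustrative, not legal guidance:

```python
from dataclasses import dataclass, field

# Hypothetical compliance checklist. Framework names and requirements
# are illustrative examples, not an exhaustive or authoritative list.
@dataclass
class ComplianceItem:
    framework: str       # e.g. "GDPR", "HIPAA"
    requirement: str
    satisfied: bool = False

@dataclass
class ComplianceChecklist:
    items: list[ComplianceItem] = field(default_factory=list)

    def open_items(self) -> list[ComplianceItem]:
        """Return requirements not yet marked satisfied."""
        return [i for i in self.items if not i.satisfied]

checklist = ComplianceChecklist([
    ComplianceItem("GDPR", "Lawful basis documented for each data flow"),
    ComplianceItem("HIPAA", "PHI encrypted at rest and in transit"),
])
print(len(checklist.open_items()))  # both items still open
```

The point isn't the code — it's that compliance becomes a tracked artifact in the project, reviewed alongside the feature backlog rather than after it.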
Overlooking Data Quality
What Happened
The third failure is the most instructive, in a quiet way. We were building a machine learning model to forecast market trends. Sophisticated algorithms, serious computational resources, experienced team. The predictions were unreliable, and stakeholder trust eroded fast.
The problem wasn't the model. The problem was the data we fed it. Inconsistent sources, poor validation, gaps we'd glossed over because the volume looked sufficient. A model is only as good as what it learns from — and ours had learned from noise.
What It Taught Me
There's a parallel here to the conversations I have with people who feel like their decision-making is off, their instincts unreliable, their sense of direction fuzzy. Often, the issue isn't their capacity to reason. It's the quality of the inputs — the assumptions they're treating as facts, the stories about themselves they've never examined, the data they inherited from someone else's blueprint for a good life.
Garbage in, garbage out is a technical principle. It's also a deeply human one.
When I started taking data quality seriously — in my projects and in my own thinking — things became more reliable. Not perfect. But trustworthy.
Practical Applications
- Implement data validation protocols before training, not as an afterthought.
- Audit your data sources regularly — relevance and accuracy degrade over time.
- Build a team culture where data integrity is understood as foundational, not bureaucratic.
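A validation gate before training doesn't need to be elaborate to catch the kind of gaps we glossed over. Here's a minimal sketch — the field names and the 5% missing-value threshold are assumptions for illustration, not a recommended standard:

```python
# Minimal pre-training data-validation sketch. Field names and the
# missing-value threshold are illustrative assumptions.

def validate_records(records, required_fields, max_missing_ratio=0.05):
    """Reject a dataset whose missing-value ratio exceeds a threshold.

    Returns (passed, observed_ratio).
    """
    missing = 0
    total = len(records) * len(required_fields)
    for row in records:
        for f in required_fields:
            if row.get(f) is None:
                missing += 1
    ratio = missing / total if total else 1.0
    return ratio <= max_missing_ratio, ratio

ok, ratio = validate_records(
    [{"price": 10.0, "volume": 100}, {"price": None, "volume": 80}],
    required_fields=["price", "volume"],
)
# One of four values is missing, so the ratio lands well above 5%
print(ok, ratio)
```

A check like this runs in seconds; discovering the same gaps through an unreliable model runs in months.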
Key Takeaways
- Embrace simplicity: the features that directly address real needs are worth more than ten that might be useful someday.
- Integrate compliance: legal and regulatory thinking belongs in the design process, not the review process.
- Prioritize data quality: even sophisticated systems fail without clean, consistent, well-governed inputs.
- Stay close to stakeholders: alignment isn't a kickoff meeting. It's an ongoing practice.
FAQ
Q: How do I prevent making my AI project too complex for users?
Start with the smallest useful version of the thing. Build from there based on what real users actually struggle with, not what you imagine they might want. Simplicity is a design decision that requires active discipline — complexity accumulates on its own.
Q: What should be the first step in ensuring AI projects comply with regulations?
Bring legal expertise into the room before you've fallen in love with your architecture. It's much easier to design for compliance than to retrofit it. Know which frameworks apply to your domain — GDPR, HIPAA, sector-specific guidelines — and treat them as design constraints, not obstacles.
Q: Why is data quality crucial in machine learning projects?
Because a model learns exactly what you teach it. If your training data is inconsistent, incomplete, or biased, your model inherits those problems and amplifies them. Validation protocols and regular audits aren't optional — they're what separates a trustworthy system from an expensive guess.
A Final Thought
What I've noticed, looking back at these three failures together, is that none of them were caused by a lack of intelligence or effort. They were caused by misaligned attention — focusing on the impressive thing rather than the right thing.
That pattern shows up in careers and lives just as much as it shows up in product development. We over-engineer our schedules. We ignore the regulatory environment of our own values. We make decisions on bad data — stories we've accepted about who we are and what we're capable of.
The road I've taken — law, code, startups, coaching — has been less a strategic plan and more a series of honest responses to what each failure revealed. Each pivot came after something broke in a way I couldn't ignore.
I don't think the setbacks were the price of the journey. I think they were the journey.
So here's the question I'll leave with you: What's the last failure you actually learned from — not just survived, but let reshape how you work? And is there one you're still moving past too quickly?