Why Software Development Engineers in Test represent the future of quality assurance, and why enterprises that don't invest in SDET talent are setting themselves up for failure.
Julian Morley
I've watched the role of Software Development Engineer in Test (SDET) evolve from a niche position into one of the most critical roles in enterprise software development. Yet, many organizations still don't understand what SDETs do, why they're essential, or how to properly leverage their unique skill set. This misunderstanding costs enterprises millions in poor software quality, slow release cycles, and technical debt.
Let me be blunt: if your enterprise is still treating testing as an afterthought performed by manual QA teams, you're already behind. The companies winning in today's fast-paced software landscape have realized something fundamental—quality cannot be bolted on at the end. It must be engineered in from the beginning, and that requires SDETs.
Here's an uncomfortable truth: most enterprise software is undertested. Not because companies don't care about quality, but because traditional testing approaches simply cannot keep pace with modern development velocity.
I've seen this pattern repeatedly: feature velocity climbs, the manual regression backlog grows, and testing quietly becomes the bottleneck that every release waits on.
This isn't a people problem. It's an approach problem.
Manual testing scales linearly: double your features, double your testing time. Automated testing breaks that relationship, because once the right infrastructure and expertise are in place, the marginal cost of covering a new feature approaches zero.
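To make that concrete, here is a minimal sketch of what near-zero marginal cost looks like in a data-driven test: covering another business rule means adding one row of data, not scheduling another manual regression pass. The pricing function and its tiers below are hypothetical stand-ins, not any particular system.

```python
import pytest

# Hypothetical system under test: a pricing rule we want to keep covered.
def validate_discount(customer_tier: str, order_total: float) -> float:
    """Return the discount rate for a given customer tier and order size."""
    if customer_tier == "enterprise" and order_total >= 10_000:
        return 0.20
    if customer_tier == "enterprise":
        return 0.10
    if customer_tier == "pro":
        return 0.05
    return 0.0

# Each new business rule is covered by adding one tuple, not one more
# manual test pass. The suite's runtime and maintenance cost barely move.
@pytest.mark.parametrize(
    "tier, total, expected",
    [
        ("enterprise", 25_000, 0.20),
        ("enterprise", 5_000, 0.10),
        ("pro", 5_000, 0.05),
        ("free", 5_000, 0.0),
    ],
)
def test_discount_rules(tier, total, expected):
    assert validate_discount(tier, total) == expected
```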
That's where SDETs come in.
The confusion around the SDET role stems from its hybrid nature. SDETs aren't traditional QA testers who learned to code. They're software engineers who specialize in quality infrastructure.
An SDET thinks differently from both pure developers and traditional testers:
Unlike traditional developers: their first instinct is to ask how the system fails, not just how to make the feature work.
Unlike traditional QA: they attack testing problems with engineering, building code and infrastructure rather than executing scripts by hand.
The best SDETs I've worked with think like security researchers—constantly asking "how could this break?" and building automated systems to verify it won't.
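Here is a small sketch of what that mindset can look like in code, using property-based testing with the Hypothesis library: rather than a handful of hand-picked inputs, you state properties that must always hold and let the tool hunt for inputs that break them. The normalize_username function and the properties asserted are hypothetical examples, not a prescription.

```python
from hypothesis import given, strategies as st

# Hypothetical function under test: normalizes usernames before storage.
def normalize_username(raw: str) -> str:
    return raw.strip().lower()

# Instead of a few happy-path cases, assert properties that should hold
# for *any* input and let Hypothesis search for counterexamples.
@given(st.text())
def test_normalization_is_idempotent(raw):
    once = normalize_username(raw)
    assert normalize_username(once) == once

@given(st.text())
def test_normalization_never_leaves_surrounding_whitespace(raw):
    result = normalize_username(raw)
    assert result == result.strip()
```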
In mature enterprises, SDETs create the infrastructure that makes continuous delivery possible:
Test Frameworks
Test Environments
Quality Gates
Observability for Testing
Notice the pattern? SDETs build systems that scale, not test cases that don't.
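To pick just one of those, here is a hedged sketch of a quality gate: a small CI step that fails the build when line coverage slips below an agreed threshold. It assumes a Cobertura-style coverage.xml report (the format coverage.py can emit); the file path and the 80% threshold are illustrative, not recommendations.

```python
import sys
import xml.etree.ElementTree as ET

# Assumed inputs: a Cobertura-style XML report and a team-agreed threshold.
COVERAGE_REPORT = "coverage.xml"
MIN_LINE_RATE = 0.80  # 80% line coverage required to pass the gate

def main() -> int:
    root = ET.parse(COVERAGE_REPORT).getroot()
    # Cobertura-style reports expose overall coverage as a line-rate attribute.
    line_rate = float(root.attrib["line-rate"])
    if line_rate < MIN_LINE_RATE:
        print(f"Quality gate failed: {line_rate:.1%} < {MIN_LINE_RATE:.0%}")
        return 1
    print(f"Quality gate passed: {line_rate:.1%} line coverage")
    return 0

if __name__ == "__main__":
    sys.exit(main())
```

The point is not this particular script; it is that the gate is a system running on every change, not a person running a checklist.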
The shift to microservices, cloud-native architectures, and continuous delivery has made testing exponentially more complex.
Consider a traditional monolithic application: one codebase, one deployment, one environment to stand up, and a single regression suite that exercises everything in one place.
Now consider a modern microservices architecture: dozens or hundreds of independently deployed services, each on its own release cadence, plus all of the contracts, network calls, and partial-failure modes between them.
I recently worked with a SaaS company running 120 microservices. They calculated that comprehensive manual testing would require a QA team of 200 people working around the clock. The actual QA team had 15 people.
The solution wasn't hiring 185 more manual testers. It was hiring 8 SDETs who built testing infrastructure that automated 85% of what would have required manual effort.
Let's do the math. The manual testing approach means paying for a QA organization large enough to keep pace, a headcount bill that grows linearly with every feature you ship. The SDET approach means a small team of engineers plus the automation infrastructure they build, whose cost stays roughly flat as coverage grows.
After year one, the SDET approach costs 80% less and runs orders of magnitude faster. The ROI is undeniable.
But the real value isn't cost savings—it's enabling velocity. With automated testing, you can deploy daily instead of monthly. That competitive advantage is priceless.
Here's where most enterprises fail: they hire SDETs but treat them like traditional QA.
The "Automating Manual Tests" Trap
Organization: "We have 5,000 manual test cases. Please automate them."
This is backwards. Most manual test cases are redundant, outdated, or test the wrong things. Blindly automating them creates expensive, brittle test suites that become maintenance nightmares.
Better approach: SDETs should analyze what actually needs testing, design an efficient test strategy, then build automation that implements that strategy.
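One lightweight way to encode such a strategy, offered as a sketch rather than a prescription: tag tests by the risk they guard against and let each pipeline stage select the subset it needs. The pytest markers and the checkout example below are hypothetical.

```python
import pytest

# Markers let the pipeline pick a risk-based subset instead of replaying a
# flat list of converted manual cases, e.g.:
#   pytest -m smoke      -> every commit, runs in minutes
#   pytest -m critical   -> every merge to main
#   pytest               -> full regression, nightly
# (In a real setup the markers would be registered in pytest.ini.)

def checkout_total(prices: list[float], tax_rate: float) -> float:
    """Hypothetical stand-in for the behaviour we actually care about."""
    return round(sum(prices) * (1 + tax_rate), 2)

@pytest.mark.smoke
def test_checkout_total_happy_path():
    assert checkout_total([10.0, 5.0], 0.10) == 16.5

@pytest.mark.critical
def test_checkout_total_with_no_items_is_zero():
    assert checkout_total([], 0.10) == 0.0
```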
The "QA Team Firewall" Pattern
Organization: "Developers write code, SDETs test it."
This creates the same bottleneck as traditional QA, just with better tools. SDETs become gatekeepers instead of enablers.
Better approach: SDETs build testing infrastructure and frameworks that developers use to test their own code. SDETs focus on system-level testing and framework maintenance.
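Here is a rough sketch of what that division of labor can look like in pytest: the SDET platform team ships a shared fixture in conftest.py, and the product team writes its own tests against it. The environment variables, base URL, and endpoints are hypothetical.

```python
# conftest.py -- shipped by the SDET platform team, picked up automatically by pytest.
import os
import pytest
import requests

@pytest.fixture(scope="session")
def api_client():
    """A pre-configured HTTP session pointed at the shared test environment."""
    base_url = os.environ.get("TEST_ENV_URL", "http://localhost:8080")  # hypothetical
    session = requests.Session()
    session.headers["Authorization"] = f"Bearer {os.environ.get('TEST_TOKEN', 'dev')}"
    session.base_url = base_url  # attached so teams never hard-code environments
    yield session
    session.close()

# test_orders.py -- written by the product team that owns the orders service.
def test_create_order_returns_id(api_client):
    response = api_client.post(f"{api_client.base_url}/orders", json={"sku": "ABC-1"})
    assert response.status_code == 201
    assert "id" in response.json()
```

The SDETs own the fixture, the environments behind it, and the conventions; the developers own the assertions about their service's behavior.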
The "Tools Team" Mistake
Organization: "SDETs just maintain our test automation tools."
This relegates SDETs to support roles, wasting their engineering talent on tickets and maintenance.
Better approach: SDETs are engineers who solve quality problems through automation, not IT support for test tools.
In high-performing enterprises, SDETs are:
Embedded in Development Teams
Building Platforms, Not Tests
Driving Quality Culture
Here's something nobody wants to admit: being an excellent SDET is harder than being an excellent developer.
Think about what SDETs must know:
Core Software Engineering
Testing Specialization
Infrastructure Knowledge
Domain Expertise
An SDET needs to be a full-stack engineer who also happens to be a testing expert. That's a rare combination, which explains why good SDETs are expensive and hard to find.
Most enterprises struggle to hire SDETs because they're looking for the wrong profile.
"QA with automation skills"
The job posting reads like a manual QA req with a test-automation tool bolted on at the end.
This attracts testers with scripting abilities, not engineers who can build testing platforms.
"Junior developers we can train"
Taking fresh graduates and teaching them testing is backwards. They lack the engineering maturity to build scalable systems.
Hire engineers who love quality: strong software engineers who gravitate toward reliability, tooling, and the question of how things break.
Then teach them your domain and testing specifics. Engineering fundamentals transfer; manual testing experience often becomes baggage.
Grow from within:
Some of the best SDETs I know were developers who got frustrated with poor testing and decided to fix it themselves.
I'd be remiss not to address the elephant in the room: "Can't AI just write all our tests?"
The short answer: No.
The longer answer: AI is a powerful tool that excellent SDETs will leverage to be even more effective. But AI cannot replace the engineering judgment and system thinking that good testing requires.
AI makes SDETs more productive. It doesn't make them unnecessary. If anything, it raises the bar—now SDETs can focus on higher-value problems instead of writing boilerplate.
Here's my prediction: in five years, every successful enterprise software organization will have an SDET-to-developer ratio of at least 1:10. Organizations that maintain that ratio will ship faster, more reliably, and with higher quality than their competitors.
The enterprises that view testing as a cost center to be minimized will struggle with poor software quality, slow release cycles, and mounting technical debt.
Meanwhile, enterprises that invest in SDET talent will ship faster, more reliably, and with higher quality, and they will compound that advantage with every release.
If you're a technology leader wondering how to improve your organization's testing maturity:
Start small: hire one or two experienced SDETs, embed them in a single delivery team, and let them prove the model on a real release.
Invest in the role: pay SDETs like the engineers they are, give them a real career path, and resist the urge to treat them as a tooling help desk.
Change the culture: make quality the whole team's job, with developers testing their own code on the infrastructure SDETs provide.
The future of enterprise software quality doesn't lie in larger QA teams running more manual tests. It lies in SDETs building intelligent systems that make quality automatic, scalable, and fast.
The enterprises that realize this first will have an insurmountable advantage. The rest will wonder why they can't keep up.
What's your experience with SDETs and test automation in enterprise environments? I'm always interested in hearing different perspectives—feel free to reach out to continue the conversation.