Clients know and trust us for this… and yet we know that the hype and rhetoric around AI will continue - because, after statements such as “Within 3-6m AI will be writing 90% of all code and within 12m essentially all of the code” (Dario Amodei, March 2025) and “in the next year, the vast majority of programmers will be replaced by AI” (Eric Schmidt, April 2025), how could it not?
Being AI-first requires a robust understanding of the real-world state of play. We wanted to validate (or not) the bold statements being made by these big players and undertook some significant research last year - consulting with our community of delivery partners, clients and friends across much of Europe, South America and beyond, and then sense checking their responses against prominent external sources such as Stack Overflow and GitHub.
Respondents were sourced from right across the development lifecycle (front-end, back-end, devops and QA) and the insight was tested and corroborated with leaders holding positions as varied as CTO, Product Leader and Head of Engineering. Experience levels were also broad: from 3-5 years all the way up to 20+ years.
Distilling all the data down, the insight was both consistent and clear:
The next 12 months will bring faster code delivery but require more review steps and new skills too.
Experience and enablement tip the scales of productivity gain - jumping from half day wins to 1+ day a week.
Dabbling and experimentation no longer cut it, it’s time for mandated frequent use.
From coding to curating… senior-led review and testing are the quality linchpin.
Mind the SDLC gap… it’s time to look outside of the IDE.
Entering into this survey, our goal was always to sense-check the big talk and provide a real-life view of today’s state of play. We’re quietly confident that we’ve done just that, and it’s our hope that each insight is helpful and practical (and absolutely not hype!).
“AI is not always the answer. You don’t have to try to move it into every part of your team. As the models will progress… the next phases will be cutting costs down and transforming architectures… into something more optimized and cost efficient.”
Delivery Manager & AI Team Lead, Delivery Partner, Europe
500+ engineers
Read on to:
Learn more about the state of AI-augmented engineering, and what is happening across the 100s of engineering teams we are working with.
Benchmark your own teams’ progress in AI-augmented engineering.
Get ideas of what to do next with your engineering team as they adopt (or scale their adoption of) AI tools.
Let’s get started…
In short, yes there are significant gains to be had - but not without significant overhead too.
"AI is like a junior developer… would you trust a junior to ship your production code? Of course not."
CTO, Delivery Partner, Europe
50-100 engineers
In our survey, 49% of respondents expected to do significantly more with the same effort, while 54% expected to spend more time reviewing and validating AI-generated code. This outlook is reflected in the Stack Overflow Developer Survey 2025, where only 3% of all respondents highly trust code generated by AI.
“It really helped speed up the good engineers… but it is a double-edged sword for someone who’s just starting… if someone’s a junior and you expect them to handle it on their own, I’m not sure they are going to be sped up at all.”
Delivery Manager & AI Team Lead, Delivery Partner, Europe
500+ engineers
Over 4 in 10 expect that they will need to develop new skills to stay competitive as software engineers. Many of the CTOs and Heads of Engineering we have spoken to have stressed the importance of providing structured onboarding and learning, particularly for juniors, in order to address this and reap the available benefits for the long term.
What have those we’ve spoken to confirmed they will be investing in?
Making onboarding an AI-first ritual: for example setting a clear service level objective (such as time-to-first PR ≤ 5 working days for new joiners or new stacks) and instrumenting it.
Giving juniors a framework, not just a licence to ‘use AI’ - by providing starter prompt packs (codebase tours, test stubs, docs summarisation), ‘golden’ examples and review templates as well as buddying them up with a senior ‘editor’.
Prioritising professional development plans: coaching their team members on secure prompting and test generation with deep-dives in architecture and safety for team leads.
Rebalancing time for seniors and team leads toward AI-assisted code reviews and architecture oversight - and making this explicit in job specs.
Setting out AI tool usage policies with libraries of ready-to-use prompts, approved and validated through repeat use.
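As a purely illustrative sketch of the “instrument it” point above: the time-to-first-PR service level objective could be tracked with a few lines of Python. The joiner names, data shape and five-day threshold here are our own assumptions for illustration, not a prescribed tool.

```python
from datetime import date

# Hypothetical joiner records: (name, start date, date of first merged PR)
joiners = [
    ("alice", date(2025, 3, 3), date(2025, 3, 6)),
    ("bob",   date(2025, 3, 10), date(2025, 3, 19)),
]

SLO_DAYS = 5  # illustrative SLO: first PR within 5 working days


def working_days(start, end):
    """Count weekdays elapsed between start (exclusive) and end (inclusive)."""
    days, d = 0, start
    while d < end:
        d = date.fromordinal(d.toordinal() + 1)
        if d.weekday() < 5:  # Monday-Friday only
            days += 1
    return days


for name, start, first_pr in joiners:
    elapsed = working_days(start, first_pr)
    status = "within SLO" if elapsed <= SLO_DAYS else "SLO missed"
    print(f"{name}: first PR after {elapsed} working days ({status})")
```

In practice the joiner and PR dates would come from your HR system and version control history rather than a hard-coded list; the point is simply that the SLO becomes a number you can report on.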
At Deazy we’re addressing speed-to-benefit with an accelerator scaffold built into our onboarding that names training champions, drops in a lightweight playbook and stands up a simple outcomes dashboard - all to ensure that clients feel the benefits without so much of the pain.
“AI should buy speed and additional value …but you still need engineers to review, shape tests, and make sure quality holds. I’m going to bring it into personal development plans and ask: why aren’t you using these tools?”
CTO, Series B Scale-Up
20+ engineers
7 in 10 of the respondents surveyed across our network of partners reported at least a ½ day a week productivity lift, but 4 in 10 reported even larger gains, beyond 1 day a week. 90% of extensive AI users also said they now ship faster (against just 60% of those who use it occasionally).
“Integrating AI into the IDE… we get about 15% improvement; regular coding 10–12%, and unit tests more than 80% performance increase.”
AI Lead, Delivery Partner, Europe
100-500 engineers
Our data tells us it’s the engineers with more years’ experience who are most likely to hit the 1+ day a week productivity lift. And though 9 in 10 report faster ramp-up and onboarding time when using AI, this again clearly skews towards the seniors.
It confirms that use of the tools alone does not necessarily deliver transformative productivity gains and that experience is key if you’re to leverage the full benefit of AI-enabled engineering, whether within or throughout the SDLC.
In essence, patterns and judgement matter - as does the skill of your team.
Enablement is what helps push teams over the threshold, and teams with formal training show a higher share of organisational-level productivity gains. Survey results show a staggering 90% of teams with formal, workshop-based training reporting >10% productivity gains, compared with only 60% of teams with no formal training at all.
“We can now write scripts in 10 minutes that would have taken two or three days.”
Head of Engineering, Transport & Logistics
60+ engineers
There are, however, a number of habits that seem to contribute to this greater productivity uplift, all of which you can take away and implement:
Track the gains and set targets. It’s solid practice for any business, but particularly here: make sure you know what you’re working with by asking your team, at least quarterly, ‘how many hours a week are you saving using AI tools?’
Focus on the places that bank hours (inside the IDE to start with), homing in on specific habits such as code reviews with AI-generated unit tests on every pull request.
Make senior know-how contagious by capturing their prompt and checklist patterns for reviews, tests and docs and ‘buddying them up’ - pairing juniors with seniors to spread the behaviours and maximise the return.
Finally, nudge the ‘middle cohort’. Developers stuck at 11-25% gains are your swing group, and targeted clinics and templates will tip them into the next level, where hours and impact jump.
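To find that swing group in your own numbers, a minimal sketch like the following could bucket self-reported gains into cohorts. The sample figures and the cohort boundaries (≤10%, 11-25%, >25%) are illustrative assumptions based on the bands discussed above.

```python
from collections import Counter

# Hypothetical self-reported productivity gains (%), e.g. from a quarterly survey
reported_gains = [5, 8, 12, 15, 18, 22, 24, 30, 40, 55]


def cohort(gain_pct):
    """Bucket a reported gain into illustrative cohorts; bounds are assumptions."""
    if gain_pct <= 10:
        return "early"
    if gain_pct <= 25:
        return "middle (swing group)"
    return "advanced"


counts = Counter(cohort(g) for g in reported_gains)
for name, n in counts.most_common():
    print(f"{name}: {n} engineers")
```

With the sample data above, five of the ten engineers land in the 11-25% swing group - the cohort where targeted clinics and templates would be aimed first.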
“On repeatable flows, that’s where I see a lot of use… it can speed up with general prompts and you are more like a code reviewer at that point.”
Delivery Manager & AI Team Lead, Delivery Partner, Europe
500+ engineers
56% of partners called their AI usage extensive, with 2 in 3 engineers surveyed using AI tools multiple times a day. And there are bigger gains for frequent users - with those using AI multiple times daily ~1.5× more likely to report >25% productivity gains than anyone else.
It’s a statistic backed up by other sources. Stack Overflow’s 2025 survey shows 51% of engineers using AI tools daily, and reporting strong benefits for doing so too.
“The whole team needs to utilize it in order for everything to be successful.”
Delivery Manager & AI Team Lead, Delivery Partner, Europe
500+ engineers
The long and short of it: treat AI as a habit to mandate, not just a tool to provide. It’s the consistency of this repeated, embedded use that delivers the bigger gains both our clients and partners report.
“The biggest boost of productivity is related especially to MVPs, prototypes - greenfield projects.”
Tech Lead, Delivery Partner, Europe
500+ engineers
At Deazy we address this by prioritising what we call ‘habit-embedded squads’: we bring prompt libraries, IDE macros, PR checklists and test-first AI patterns with us to a project so exemplar behaviours stick from day one, delivering faster ramp-up, steadier quality and less enablement drag on your leads. We also look to ‘prove it fast’ - running 4-6 week pilots on a single stream and tracking the relevant metrics to show the delta. If it works, scale it; if not, iterate.
For actionable ideas for your own teams, here are a number of things we have seen work well (some of which, of course, you may be doing already):
Set adoption targets for your team and publish the results to keep them accountable:
DAU (Daily active users) - e.g. 100% of engineers using AI daily
% of pull requests (PR) with AI-assisted tests
Time-to-first-PR; showing how quickly a new joiner submits their first change
Additionally, mandate AI tool usage and adoption as part of personal development plans within relevant engineering teams
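To make those adoption targets concrete, here is a minimal, hypothetical sketch of how the DAU and AI-assisted-test metrics above might be computed and published. The field names and sample data are invented for illustration; real numbers would come from your tooling’s usage logs and PR metadata.

```python
# Hypothetical daily usage log and PR records (names and fields are illustrative)
engineers = ["alice", "bob", "carol", "dan"]
used_ai_today = {"alice", "bob", "carol"}  # engineers who used an AI tool today
prs = [
    {"id": 101, "ai_assisted_tests": True},
    {"id": 102, "ai_assisted_tests": False},
    {"id": 103, "ai_assisted_tests": True},
]

# DAU: share of engineers using AI tools today
dau_pct = 100 * len(used_ai_today) / len(engineers)

# Share of pull requests carrying AI-assisted tests
ai_test_pct = 100 * sum(p["ai_assisted_tests"] for p in prs) / len(prs)

print(f"AI DAU: {dau_pct:.0f}% of engineers")             # 75% in this sample
print(f"PRs with AI-assisted tests: {ai_test_pct:.0f}%")  # 67% in this sample
```

Publishing figures like these weekly - however they are gathered - is what keeps the targets accountable rather than aspirational.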
“Our devs are getting better at thinking like SaaS engineers, not just point-solution builders.”
Head of Innovation, Global Law Firm
In an echo of insight 1, it is clear that review and testing are an increasingly important part of the AI engineering development lifecycle and that senior engineers are already moving ‘up-stack’.
Our data shows that 1 in 2 engineers expect this shift to review and validation of AI-generated code, and this aligns with the Stack Overflow 2025 Developer Survey, where 45% of developers report increased time spent debugging AI-generated code - emphasising the necessity for high-level review and oversight.
“AI can follow the rules that you provided, but it also gives you a lot of bugs if developers don’t review everything and be confident.”
Tech Lead, Delivery Partner, Europe
500+ engineers
AI tools are shifting senior engineers’ time towards review, requirements and architecture: seniors are 5x more likely than juniors to use AI tools for requirements gathering and 1.5x more likely to use them for review and quality assurance. Juniors, on the other hand, report the highest use of AI for learning - a fact consistent with the data in insight 3.
“Seniors become super-powered… more focus on code review, QA and architecture.”
Senior, Full-Stack, Europe
11-50 engineers
It’s information that CTOs we have spoken to recognise, and as a result they acknowledge the need for change. To address this? They tell us they are:
Planning for seniors to spend more time on requirements clarification, AI-assisted review, and sign-off and less on raw coding.
Codifying the “AI review” step and adding explicit checks in PR templates (e.g., AI-generated? yes/no, verification notes and tests updated by AI) as well as making it part of Definition of Done.
Pairing mid-level implementers with senior ‘editors’ - and running weekly defect/post-hoc reviews of AI-introduced issues to refine prompts and playbooks.
Adjusting career ladders and recognising ‘AI orchestration’ (prompt patterns, guardrails and review frameworks) as a senior competency as opposed to simply raw coding throughput.
“With good supervision from someone senior, I think AI-augmented engineering tools are great.”
Delivery Manager & AI Team Lead, Delivery Partner, Europe
500+ engineers
Where all four previous insights point to the expanding use of AI within the integrated development environment (IDE), our survey clearly highlights that this is not the case upstream and downstream - creating what we’re calling the ‘SDLC gap’.
Clearly there is strong AI adoption in implementation tasks, whether that be code generation, testing, refactoring or documentation. But uptake in, for example, requirements gathering and analysis, and in deployment/monitoring (28% and 8% respectively), is markedly low… and the pressure around that gap is now building.
“Anything you can do to automate steps in a project lifecycle from idea through to production has merit.”
CTO, Series B Scale-Up
20+ engineers
With 49% of respondents expecting to do more with the same effort and 32% feeling the push to deliver faster, it is clear that there is demand for ever-increasing impact. And that, in turn, highlights that those upstream and downstream activities (e.g. requirements, release and ops) must be the next application zones for impactful ROI from AI.
The roadmap? Move from ad-hoc inner-loop wins to end-to-end patterns such as AI-assisted requirements, test gates and release checks - and add explicit ‘human-in-the-loop’ quality steps for assurance around outputs and deliverables that you can trust.
And for a quick win? Pilot AI in one additional SDLC stage within the next quarter, track the progress, and roll out further from there.
With all of the gains to be had it’s easy to forget that our growing use of AI creates a whole new set of risks. So we asked our respondents, and the feedback was insightful.
It’s not that AI will replace engineers, but that AI adopted without structure, standards and human-in-the-loop controls creates new exposure - not least a loss of deep technical understanding, a weakening of capability and a significant talent development gap as junior developer roles change perhaps forever.
Leaders worry that over-reliance on AI could erode architectural judgment, debugging capability and systems thinking, especially if engineers stop interrogating outputs. They fear that AI-generated code may introduce subtle security flaws or insecure dependencies that create downstream risk. And they question whether AI-generated code will remain understandable and evolvable over time, or whether it will fragment codebases.
They also worry that defaulting to AI-suggested solutions potentially narrows creativity and deep problem-solving - which is what we’re all here for in the first place.
The risk summed up? It’s that AI augmentation without guardrails creates a set of problems we’re perhaps not even looking at yet.
One-day-a-week productivity gains are no joke, and with nearly 90% of our respondents believing that AI will cut project timelines, it’s pretty clear that AI-enabled engineering is here and the benefits are real. But that the “vast majority of programmers will be replaced by AI” this year (that Eric Schmidt quote, April 2025)? We say, definitively, no.
Yes, the efficiencies are evident but so is the fact that there is distance to be covered when it comes to quality and trust. And of course there's that list of risks to be considered too.
“AI should buy speed and additional value… but you still need engineers to review, shape tests, and make sure quality holds. I’m going to bring it into personal development plans and ask: why aren’t you using these tools?”
CTO, Series B Scale-Up
20+ engineers
Here at Deazy? We’re looking to a future where AI is integrated across the SDLC with a systematic approach that addresses risk, amplifies gains and increases trust.
We’re committed to benchmarking and raising AI engineering and AI product development capability across all of our partners to create consistency, quality, and assurance at scale; and we continue to develop accelerators, frameworks and ways of working that enable repeatable and responsible AI adoption for consistently better delivery and product outcomes for all.
And we're already planning our next survey and report so keep an eye out for its publication date - anticipated to be later this year.
To everyone who contributed to this report, from the engineers and delivery partners who shared their experiences, to the CTOs and Heads of Engineering who helped validate our findings - thank you. Your insight has been invaluable in shaping a grounded view of what AI-augmented engineering really looks like in practice.
And we’d love your feedback: what resonated, what surprised you, and where you’re seeing results of your own. If you’d like to discuss any of these topics in more depth or explore how Deazy can support your team’s AI adoption journey, please get in touch with us at hello@deazy.com or look us up on LinkedIn.
Image by DC Studio on Freepik