Why developers still matter in the age of agentic AI


AI is quickly becoming a part of how software is written, tested, and delivered. From GitHub Copilot’s continued growth to the decline in traffic on traditional forums like Stack Overflow, there’s a clear shift under way in how developers work, learn, and share knowledge.

“Embedded, in-context tools give engineers real-time information to […] accelerate their development independently,” said Ed Keisling, Chief AI Officer at Progress Software. “They don’t need to spend as much time searching or asking coworkers for help.”

That shift marks a change in what it means to be a developer.

From coders to systems thinkers

With AI now capable of generating large chunks of code, some wonder whether developers are becoming more like prompt engineers – crafting clear instructions for machines to execute. But Keisling doesn’t see it as a clean break.

“I don’t necessarily see it as either-or,” he said. “We’re not yet at the point where developers can just manage fleets of agent developers, especially for mission-critical enterprise systems. But AI is a tool, and developers need to understand how to use it well.”

That means thinking differently about how code is designed and structured. Traditional coding tasks still exist, but broader, systems-level thinking is becoming more important.

“It’s always been about understanding the problem and how best to solve it,” Keisling said. “That’s what sets experienced developers apart. With AI, there’s even more pressure to get those decisions right up front – AI can miss edge cases or overbuild something that isn’t really needed.”

In short, the coding itself may get easier, but the thinking behind it gets harder. Developers are being asked to think more like architects and product owners – people who see the full picture and can steer projects in the right direction from the start.

What makes agentic AI different?

While tools like Copilot offer intelligent code suggestions, the rise of agentic AI introduces a new level of autonomy. These agents don’t just autocomplete lines of code – they plan, delegate, and execute entire tasks with minimal human input.

Keisling described the difference this way: “It’s probably a disservice to call LLM coding tools a fancy autocomplete, but that’s a helpful way to think about them. With agentic development, you’re delegating portions of your work to the AI. You describe what needs to be done, and it goes off and does it.”

Some teams are already experimenting with assigning basic tickets – bug fixes, small enhancements, documentation updates – to autonomous agents. The agents can interpret goals, break down tasks, and generate working code. In some cases, they can even test and validate their own work.
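
In code terms, that workflow is a loop: plan the work, execute each step, validate the result. The Python sketch below is purely illustrative – the Ticket structure and the plan, execute, and validate functions are hypothetical stand-ins for the model calls a real agent framework would make.

```python
# Illustrative sketch of an agent handling a basic ticket:
# plan the work, execute each step, then validate the outcome.
# All functions are hypothetical stand-ins for real model calls.
from dataclasses import dataclass, field


@dataclass
class Ticket:
    title: str
    description: str
    steps: list[str] = field(default_factory=list)


def plan(ticket: Ticket) -> list[str]:
    # A real agent would ask an LLM to break the goal into steps.
    return [f"reproduce: {ticket.title}", "write fix", "add regression test"]


def execute(step: str) -> str:
    # A real agent would generate and apply code changes here.
    return f"completed '{step}'"


def validate(results: list[str]) -> bool:
    # A real agent might run the test suite and linters here.
    return all(r.startswith("completed") for r in results)


def handle(ticket: Ticket) -> bool:
    ticket.steps = plan(ticket)
    results = [execute(step) for step in ticket.steps]
    return validate(results)


if __name__ == "__main__":
    ok = handle(Ticket("Fix off-by-one in pagination",
                       "Page 2 repeats the last item of page 1"))
    print("agent run passed validation" if ok else "escalate to a human")
```

The key design point is the final branch: when validation fails, the work escalates to a person rather than being merged anyway.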

“As that matures,” Keisling said, “it will completely transform how we think about development.”

The change is subtle now, but it’s building momentum. If today’s autocomplete tools are the first draft, autonomous agents may soon be capable of finishing the entire document.

Guardrails and accountability

One of the biggest concerns with agentic AI is what happens when it acts without direct human initiation. If an AI agent pushes flawed code, who’s responsible? Keisling is clear on this point: accountability doesn’t go away just because a machine is involved.

“The team is still accountable for any output the AI generates,” he said. “It should be treated as their own.”

He emphasised the need for oversight at every stage. No matter how advanced AI becomes, its code still has to pass through the same development, staging, and production pipelines. And it should be subject to the same reviews, standards, and tests that apply to any human-written code.

“You need CI/CD pipelines. You need policy enforcement. You need performance and regression testing. Without this, you will lose the trust of your customers,” he said.
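
What such a gate might look like in practice: the short Python sketch below runs the same checks on every change, regardless of who – or what – wrote it. The tool choices here (ruff for linting, pytest for tests) are assumptions; a real pipeline would substitute whatever checks the team already enforces.

```python
# Illustrative CI gate: AI-generated code passes through the same
# checks as human-written code. Tool choices (ruff, pytest) are
# assumptions; substitute whatever your pipeline already runs.
import subprocess
import sys

CHECKS = [
    ["ruff", "check", "."],           # style and common-bug linting
    ["pytest", "--maxfail=1", "-q"],  # unit and regression tests
]


def main() -> int:
    for cmd in CHECKS:
        print(f"running: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            # Any failure blocks the merge, human- or AI-authored.
            print(f"gate failed on: {' '.join(cmd)}", file=sys.stderr)
            return result.returncode
    print("all gates passed")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```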

It’s not just about protecting against mistakes. It’s about ensuring the output reflects the values, priorities, and quality standards of the team behind it.

Rethinking code reviews

Code reviews have always been a key part of engineering culture. They catch bugs, foster collaboration, and help junior developers learn. But as more of the codebase comes from machines, it’s worth asking whether this tradition still holds up.

Keisling believes reviews are more important than ever.

“AI can still write buggy, inefficient, and insecure code,” he said. “Reviews help keep everything connected and consistent. They’re also a key part of learning, quality control, and making sure your development is ethical and secure.”

That doesn’t mean AI can’t help. Automated checks for syntax, formatting, and known vulnerabilities can speed up the process. But humans still need to ensure that the code makes sense in context – that it does what it’s supposed to do, and does it well.

“Of course, you should use AI to bolster your pipeline,” Keisling said. “But that doesn’t eliminate the need for a human-in-the-loop review.”
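
One way to encode that principle is to let automation reject changes but never grant final approval. The sketch below is a hypothetical illustration – the ChangeSet fields and the automated check are invented for the example – but it shows the shape of the rule: checks filter, humans sign off.

```python
# Illustrative human-in-the-loop review: automated checks can
# reject a change, but only a named human reviewer can approve it.
# The ChangeSet fields and the check logic are hypothetical.
from dataclasses import dataclass


@dataclass
class ChangeSet:
    diff: str
    author: str                         # a person or an AI agent
    human_reviewer: str | None = None   # set once a human signs off


def automated_checks(change: ChangeSet) -> list[str]:
    # Stand-ins for real linters, security scanners, and test runs.
    problems = []
    if "TODO" in change.diff:
        problems.append("unresolved TODO in diff")
    return problems


def can_merge(change: ChangeSet) -> bool:
    # Automation filters; it never grants final approval.
    if automated_checks(change):
        return False
    return change.human_reviewer is not None


if __name__ == "__main__":
    change = ChangeSet(diff="fix: clamp page index", author="agent-7")
    print(can_merge(change))   # False: no human sign-off yet
    change.human_reviewer = "priya"
    print(can_merge(change))   # True: checks pass and a human approved
```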

Open vs closed: Choosing your tools

As AI development tools multiply, developers are facing another choice: stick with popular proprietary platforms like OpenAI, or explore open-source alternatives that offer more transparency and control.

Keisling understands the appeal of both.

“I think it’s important that developers have a choice,” he said. “We’ve all had those moments where we have to change how we work just to fit what a tool can do – and that’s frustrating.”

Open-source tools offer the ability to dig into the model, adjust how it works, and ensure it aligns with internal goals. But they also require more effort to support and scale. Proprietary systems, meanwhile, offer convenience and strong integrations – but may limit customisation and lock users into a specific way of working.

“The open-source models will offer transparency and customisation,” Keisling said. “The closed systems will likely offer scale and support. Each organisation has to decide what’s most important for their needs.”
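
One pattern that keeps that choice open is a thin abstraction layer, so application code never depends on a specific provider. The Python sketch below is illustrative – LocalOpenModel and HostedClosedModel are hypothetical stand-ins, not real SDKs – but it shows how callers stay backend-agnostic.

```python
# Illustrative provider abstraction: code against one interface so a
# team can swap an open-source model for a hosted one (or back)
# without rewriting callers. The backends here are hypothetical.
from abc import ABC, abstractmethod


class CodeAssistant(ABC):
    @abstractmethod
    def complete(self, prompt: str) -> str: ...


class LocalOpenModel(CodeAssistant):
    """Stand-in for a self-hosted open-source model."""
    def complete(self, prompt: str) -> str:
        return f"[local model] completion for: {prompt}"


class HostedClosedModel(CodeAssistant):
    """Stand-in for a proprietary hosted API."""
    def complete(self, prompt: str) -> str:
        return f"[hosted API] completion for: {prompt}"


def suggest_fix(assistant: CodeAssistant, snippet: str) -> str:
    # Callers never know which backend is in use.
    return assistant.complete(f"Suggest a fix for:\n{snippet}")


if __name__ == "__main__":
    for backend in (LocalOpenModel(), HostedClosedModel()):
        print(suggest_fix(backend, "def add(a, b): return a - b"))
```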

The future of development

Looking ahead, Keisling believes the act of writing code will become more of a conversation than a task.

“Developers will be managing fleets of AI agents, using natural language and visual interfaces,” he said. “Andrej Karpathy described it as developers wearing an Iron Man suit. I think that’s a great way to think about it.”

Instead of typing every line, developers will guide AI through high-level goals, constraints, and priorities. They’ll shape systems by talking through the problems, not by writing out every step of the solution.

That shift will demand a different kind of skill set. Technical ability will still matter, but communication, product thinking, and ethical judgment will matter just as much.

The tools may be changing fast, but the job remains rooted in solving problems, understanding users, and building things that work. AI can help with the heavy lifting. But the thinking? That’s still up to the humans.
