How AI and LLMs Are Reshaping Software Development

Every developer I know has an opinion about AI in software development. As someone who writes code daily and has integrated LLMs deeply into my workflow, I want to cut through the hype and talk about what's genuinely changing, what's overhyped, and what most developers are missing entirely.
The Productivity Shift Is Real
Let me start with the honest truth: LLMs have made me measurably faster at certain types of work. Boilerplate code, unit tests, data transformation functions, API integration code — these are tasks where I used to spend 30-60 minutes that now take 5-10 minutes. That’s not a marginal improvement; it’s a fundamental shift in how I allocate my development time.
But the productivity gain isn’t evenly distributed across all programming tasks. Writing a complex trading algorithm? AI helps with syntax but can’t design the strategy. Debugging a race condition in a concurrent system? The LLM generates plausible-sounding but often incorrect solutions. Architecting a microservices system? AI can suggest patterns but can’t evaluate the trade-offs specific to your context.
The developers who benefit most from AI are the ones who understand where it excels and where it fails, and who adjust their workflow accordingly.
Where AI Actually Excels
Code generation from clear specifications. When I know exactly what I want — “write a SvelteKit server endpoint that accepts a POST request with these fields and inserts into this database table” — AI generates accurate, working code in seconds. The specification is the hard part; the implementation is the easy part that AI handles well.
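To make that concrete, here is a framework-free sketch of the kind of logic such a prompt produces. The field names, table, and validation rules are hypothetical, and a real SvelteKit version would live in a +server.ts file; the point is that once the spec is precise, this code is mechanical.

```typescript
// Hypothetical sketch of AI-generated endpoint logic for "accept a POST
// with these fields and insert into this table". Schema is illustrative.

interface NewUser {
  email: string;
  displayName: string;
}

// Validate the parsed JSON body; return null if it doesn't match the spec.
function parseNewUser(body: unknown): NewUser | null {
  if (typeof body !== "object" || body === null) return null;
  const b = body as Record<string, unknown>;
  if (typeof b.email !== "string" || !b.email.includes("@")) return null;
  if (typeof b.displayName !== "string" || b.displayName.length === 0) return null;
  return { email: b.email, displayName: b.displayName };
}

// Build a parameterised insert: placeholders, never string interpolation.
function buildInsert(user: NewUser): { sql: string; params: string[] } {
  return {
    sql: "INSERT INTO users (email, display_name) VALUES (?, ?)",
    params: [user.email, user.displayName],
  };
}
```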
Test writing. This is perhaps the highest-value use case. Most developers under-invest in testing because writing tests is tedious. AI makes it nearly effortless. I describe the function’s expected behaviour, and the LLM generates comprehensive test cases including edge cases I might have missed.
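A sketch of what that looks like in practice, with a hypothetical utility function: the table of cases below is the kind of output an LLM typically produces from a one-line description of expected behaviour, including the boundary and invalid-input cases people skip.

```typescript
// Hypothetical utility plus the edge-case table an LLM will generate
// from "clamp a value to 0-100, treating NaN as 0".

function clampPercent(value: number): number {
  if (Number.isNaN(value)) return 0;
  return Math.min(100, Math.max(0, value));
}

// [input, expected] pairs, including edges a human might not bother with.
const cases: Array<[number, number]> = [
  [50, 50],    // in range
  [-1, 0],     // below lower bound
  [101, 100],  // above upper bound
  [0, 0],      // boundary
  [100, 100],  // boundary
  [NaN, 0],    // invalid input
];

for (const [input, expected] of cases) {
  if (clampPercent(input) !== expected) {
    throw new Error(`clampPercent(${input}) !== ${expected}`);
  }
}
```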
Documentation. Generating JSDoc comments, README files, API documentation — these are tasks that AI handles extremely well because the input (code) is structured and the output (documentation) follows predictable patterns.
Refactoring. “Convert this class-based component to use hooks” or “rewrite this function to use async/await instead of callbacks” — pattern transformation is exactly what LLMs are designed for.
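The callback-to-async transformation is worth seeing side by side. This is a stand-in example (the lookup is faked with a timer), but the before/after shape is exactly the mechanical rewrite an LLM performs.

```typescript
// Before: callback style. The lookup itself is a hypothetical stand-in.
function fetchUserIdCb(name: string, cb: (err: Error | null, id?: number) => void): void {
  setTimeout(() => {
    if (name.length === 0) cb(new Error("empty name"));
    else cb(null, name.length); // stand-in for a real lookup
  }, 0);
}

// After: the same logic wrapped in a Promise and consumed with await.
function fetchUserId(name: string): Promise<number> {
  return new Promise((resolve, reject) => {
    fetchUserIdCb(name, (err, id) => (err ? reject(err) : resolve(id!)));
  });
}

async function main(): Promise<void> {
  const id = await fetchUserId("ada");
  console.log(id); // 3 in this stand-in implementation
}

main();
```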
Where AI Falls Short
Architecture and system design. AI can tell you about design patterns, but it can’t evaluate whether a particular pattern fits your team’s capabilities, your deployment constraints, or your scaling requirements. Every architectural decision involves trade-offs that require contextual understanding AI simply doesn’t have.
Debugging complex issues. When a bug involves the interaction between multiple systems — a WebSocket connection dropping because of a reverse proxy configuration that interacts with a specific browser’s timeout behaviour — AI generates plausible explanations that are often wrong. Real debugging requires systematic elimination of hypotheses, which requires understanding the full system context.
Security. This is a critical blind spot. LLMs generate code that works but often introduces subtle security vulnerabilities: SQL injection through string interpolation, XSS through unsanitised output, timing attacks in authentication code. Every AI-generated code snippet that touches user input or authentication needs careful human review.
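The SQL injection case is the easiest to demonstrate. Both functions below "work" on friendly input, which is exactly why the first one slips through review; the query text is illustrative.

```typescript
// Vulnerable: user input is spliced into the SQL text itself.
function findUserUnsafe(email: string): string {
  return `SELECT * FROM users WHERE email = '${email}'`;
}

// Safe: the SQL text is fixed; input travels separately as a parameter.
function findUserSafe(email: string): { sql: string; params: string[] } {
  return { sql: "SELECT * FROM users WHERE email = ?", params: [email] };
}

const hostile = "' OR '1'='1";
// The unsafe version now matches every row in the table:
//   SELECT * FROM users WHERE email = '' OR '1'='1'
console.log(findUserUnsafe(hostile));
// The safe version's SQL text is unchanged regardless of input.
console.log(findUserSafe(hostile).sql);
```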
MCP Servers: The Bridge Between AI and Real Work
The most underrated development in AI tooling is the Model Context Protocol (MCP). MCP servers allow LLMs to interact with real-world tools — databases, file systems, APIs, deployment pipelines — through a standardised interface.
What this means in practice is that instead of copying code from an AI chat window and pasting it into your editor, the AI can directly read your codebase, understand your project structure, run your tests, and make changes. It’s the difference between having a consultant who reads a description of your code and one who can actually look at it.
I’ve built custom MCP servers that connect AI assistants to our deployment pipeline, our monitoring dashboards, and our database schema. The result is that AI interactions go from “generic advice based on my description” to “specific recommendations based on actual project state.”
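To give a feel for the mechanics: MCP is built on JSON-RPC 2.0, and a tool call is a request naming a tool and its arguments. The sketch below is not the official SDK API, just a schematic dispatcher showing the shape of the exchange; the tool names are hypothetical.

```typescript
// Schematic of an MCP-style tool call: a JSON-RPC 2.0 request naming a
// tool and its arguments, dispatched to a handler. NOT the real SDK.

interface ToolRequest {
  jsonrpc: "2.0";
  id: number;
  method: "tools/call";
  params: { name: string; arguments: Record<string, unknown> };
}

type ToolHandler = (args: Record<string, unknown>) => string;

// Hypothetical tools a custom server might expose to the model.
const tools: Record<string, ToolHandler> = {
  "schema.describe": (args) => `describing table: ${String(args.table)}`,
  "deploy.status": () => "last deploy: green",
};

function handle(req: ToolRequest): { id: number; result: string } | { id: number; error: string } {
  const tool = tools[req.params.name];
  if (!tool) return { id: req.id, error: `unknown tool: ${req.params.name}` };
  return { id: req.id, result: tool(req.params.arguments) };
}
```

The value isn't the dispatcher itself; it's that the model's requests are now grounded in your actual schema and deploy state instead of your description of them.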
CLI Tools Powered by AI
Another area where AI is genuinely transformative is in building command-line tools. Traditional CLI tools require you to remember flags, syntax, and options. AI-powered CLI tools let you describe what you want in natural language.
Instead of memorising git log --oneline --graph --decorate --all, you can type “show me the branch history visually.” Instead of constructing complex database queries, you describe the data you want. The AI handles the translation to the correct syntax.
I’ve built several internal CLI tools that wrap our infrastructure management in a natural language interface. The onboarding time for new developers dropped dramatically because they don’t need to learn our custom tooling — they just describe what they want to do.
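The translation step is the whole trick. A real tool sends the request text to an LLM; the stub below substitutes a keyword table so the structure is visible without an API call. The intents and commands are illustrative, not our actual tooling.

```typescript
// Minimal sketch of natural-language-to-command translation. A real CLI
// would ask an LLM; this stub uses a keyword table to show the shape.

const intents: Array<{ keywords: string[]; command: string }> = [
  { keywords: ["branch", "history"], command: "git log --oneline --graph --decorate --all" },
  { keywords: ["uncommitted", "changes"], command: "git status --short" },
];

function translate(request: string): string | null {
  const words = request.toLowerCase();
  for (const intent of intents) {
    if (intent.keywords.every((k) => words.includes(k))) return intent.command;
  }
  return null; // a real tool would fall back to the LLM here
}

console.log(translate("show me the branch history visually"));
```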
The Real Change: How We Spend Our Time
The most profound impact of AI on development isn’t faster coding — it’s the reallocation of developer time. When implementation becomes faster, the bottleneck shifts to design, planning, and quality assurance.
The best developers in an AI-assisted world are the ones who invest more time in understanding the problem before writing code, who think carefully about architecture and trade-offs, and who review AI-generated code with a critical eye.
AI hasn’t changed what makes a great developer. It’s amplified the difference between developers who think deeply about problems and those who just write code. The thinking was always the hard part. Now it’s the only part that humans need to do.
Myles Ndlovu builds algorithmic trading engines, crypto platforms, and payment infrastructure for emerging markets. Read more about Myles or get in touch.