
October 08, 2024

Where AI Coding Assistants Fall Short: Challenges in Software Engineering

Over the past few years, I’ve used various AI tools to help me write code, including ChatGPT, Claude, Gemini, Copilot, Cursor, and Unity Muse. I’ve mostly been writing small Python scripts to automate parts of my life, but I’ve also experimented with Unity projects, AI agents, and time series transfer learning. While these tools are helpful for basic tasks, they fall short when tackling even moderately complex challenges. However, I’ve found that I can often get a decent solution by providing the right context. What works for me is an iterative approach to context augmentation:

  1. Task Analysis: Identify the specific information the LLM needs to complete the task.
  2. Manual Context Enrichment: Manually curate and provide relevant context in the prompt.
  3. Iterative Refinement: Submit the prompt, evaluate the output, and refine by adding more context until the LLM produces an adequate solution.
  4. Context Analysis: Analyze what additional information improved the LLM’s performance.
  5. Automation Planning: Devise strategies to programmatically retrieve and incorporate similar context for future queries (a minimal sketch of this step follows the list).
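
To make the last step concrete, here is a minimal sketch of what programmatic context retrieval might look like. Everything in it is illustrative: `retrieve_context` is a naive keyword matcher, and `llm_complete` is a placeholder for whatever LLM client you use.

```python
# Illustrative sketch of step 5: programmatic context retrieval.
# `llm_complete` is a placeholder for your LLM client of choice.

def retrieve_context(task: str, codebase: dict[str, str]) -> str:
    """Naive retrieval: include files whose source mentions a task keyword."""
    keywords = {w for w in task.lower().split() if len(w) > 3}
    relevant = [
        f"# --- {path} ---\n{source}"
        for path, source in codebase.items()
        if any(k in source.lower() for k in keywords)
    ]
    return "\n\n".join(relevant)

def solve_with_context(task: str, codebase: dict[str, str], llm_complete) -> str:
    """Build a context-augmented prompt and hand it to the model."""
    prompt = f"Context:\n{retrieve_context(task, codebase)}\n\nTask: {task}"
    return llm_complete(prompt)
```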

Below are some of the key challenges that LLMs struggle with:

Complex Project Dependencies

Modern software projects involve multiple libraries, modules, and interdependencies. AI assistants often lack a full view of the dependency tree, which leads to inaccurate suggestions. Without the comprehensive context that human developers carry, they struggle to see how each piece fits into the larger system.
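
One way to narrow this gap is to extract the dependency information yourself and feed it to the assistant. A rough sketch using Python's standard `ast` module to map each file in a project to its imports (the directory name is just an example):

```python
import ast
from pathlib import Path

def project_imports(root: str) -> dict[str, list[str]]:
    """Map each Python file under `root` to the modules it imports."""
    imports: dict[str, list[str]] = {}
    for path in Path(root).rglob("*.py"):
        tree = ast.parse(path.read_text(encoding="utf-8"))
        mods: list[str] = []
        for node in ast.walk(tree):
            if isinstance(node, ast.Import):
                mods.extend(alias.name for alias in node.names)
            elif isinstance(node, ast.ImportFrom) and node.module:
                mods.append(node.module)
        imports[str(path)] = sorted(set(mods))
    return imports

# Example: summarize the dependency map as extra prompt context.
# print(project_imports("my_project"))
```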

Long-Term State Management

LLMs are largely stateless: beyond the temporary context window of a conversation, they don't remember interactions over time. This makes it hard for them to track long-term architectural decisions or keep project state consistent across sessions, resulting in fragmented suggestions.
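
One workaround is to keep that state outside the model and re-inject it into every prompt. A minimal sketch, where the `decisions.json` log and its format are invented for illustration:

```python
import json
from pathlib import Path

DECISIONS_FILE = Path("decisions.json")  # hypothetical project-level log

def record_decision(summary: str) -> None:
    """Append an architectural decision to the persistent log."""
    decisions = json.loads(DECISIONS_FILE.read_text()) if DECISIONS_FILE.exists() else []
    decisions.append(summary)
    DECISIONS_FILE.write_text(json.dumps(decisions, indent=2))

def decisions_preamble() -> str:
    """Render the log as a prompt preamble so every query carries the same state."""
    if not DECISIONS_FILE.exists():
        return ""
    decisions = json.loads(DECISIONS_FILE.read_text())
    return "Architectural decisions so far:\n" + "\n".join(f"- {d}" for d in decisions)
```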

Ambiguous Specifications

Software requirements are rarely perfectly defined. Human developers use intuition and experience to fill in gaps in vague specifications, whereas AI may misinterpret ambiguous requirements, leading to output that doesn’t align with project goals.
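
A toy example of how a vague spec underdetermines the code: "return the top users" admits at least two reasonable readings, and an assistant can only guess which one was intended.

```python
# "Return the top users" is ambiguous: top by signup date? by activity?
users = [
    {"name": "alice", "joined": "2023-01-05", "posts": 42},
    {"name": "bob", "joined": "2024-06-01", "posts": 7},
]

# Reading 1: most recently joined.
newest = sorted(users, key=lambda u: u["joined"], reverse=True)

# Reading 2: most active.
most_active = sorted(users, key=lambda u: u["posts"], reverse=True)

# Both are "correct" implementations of the same vague requirement.
```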

Handling Non-Standard Libraries

AI chatbots are well-versed in common libraries and frameworks but struggle with non-standard or niche libraries that are poorly documented or proprietary. When developers rely on custom tools, AI often falls back on generic advice with little practical value.
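
When the niche library is your own, one mitigation is to extract its documentation programmatically and paste it into the prompt. A sketch using the standard `inspect` module, where `my_custom_lib` is a stand-in for any internal package:

```python
import inspect

def module_summary(module) -> str:
    """Collect the signatures and docstrings of a module's public functions."""
    lines = []
    for name, obj in inspect.getmembers(module, inspect.isfunction):
        if name.startswith("_"):
            continue
        sig = inspect.signature(obj)
        doc = inspect.getdoc(obj) or "(no docstring)"
        lines.append(f"{name}{sig}\n    {doc}")
    return "\n\n".join(lines)

# Usage (placeholder module name):
# import my_custom_lib
# prompt_context = module_summary(my_custom_lib)
```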

Complex Logical Reasoning

Complex algorithms demand more than an understanding of syntax; they require deep logical reasoning. AI often struggles here, producing code that looks correct but fails under real-world conditions. Logical nuances that call for sustained, multi-step thought are particularly challenging for LLMs.
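
A small but representative example of code that reads correctly yet fails on real input: this interval merger is fine on sorted data and silently wrong otherwise, exactly the kind of nuance that slips past pattern matching.

```python
def merge_intervals(intervals):
    """Merge overlapping intervals. Looks right, but assumes sorted input."""
    merged = []
    for start, end in intervals:
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend the last interval
        else:
            merged.append([start, end])
    return merged

assert merge_intervals([[1, 3], [2, 6]]) == [[1, 6]]  # passes: input sorted
print(merge_intervals([[5, 7], [1, 3], [2, 6]]))
# Prints [[5, 7]] instead of the correct [[1, 7]].
# The fix is one detail: iterate over sorted(intervals).
```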

Error Handling and Debugging

Debugging combines reading error messages, understanding system behavior, and forming hypotheses about root causes. While AI can point out syntax errors, it often falls short when diagnosing complex runtime errors, especially those involving interactions between multiple system components.
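
One partial remedy is to hand the model the same evidence a human debugger starts from: the full traceback plus the local variables at the point of failure. A sketch using the standard `traceback` module:

```python
import traceback

def failure_report(func, *args, **kwargs):
    """Run func; on failure, return a traceback (with locals) as LLM context."""
    try:
        func(*args, **kwargs)
        return None
    except Exception as exc:
        tb = traceback.TracebackException.from_exception(exc, capture_locals=True)
        return "".join(tb.format())

def buggy(xs):
    return sum(xs) / len(xs)  # ZeroDivisionError on an empty list

report = failure_report(buggy, [])
if report:
    print(report)  # paste this, plus the source, into the prompt
```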

Maintaining Code Quality

Maintaining code quality involves understanding best practices, weighing trade-offs, and managing technical debt. AI can suggest functional code, but it may ignore long-term maintainability or efficiency, quietly accumulating debt. Quality goes beyond correctness; it is about building reliable and scalable software.
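
A classic Python illustration: the code below passes a first test, yet its mutable default argument leaks state between calls, precisely the kind of maintainability bug an assistant can hand you:

```python
def add_tag(tag, tags=[]):          # looks fine and passes a one-off test
    tags.append(tag)
    return tags

print(add_tag("a"))   # ['a']
print(add_tag("b"))   # ['a', 'b'] -- the default list is shared across calls

def add_tag_safe(tag, tags=None):   # the maintainable version
    tags = [] if tags is None else tags
    tags.append(tag)
    return tags
```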

AI and LLM-based coding assistants are helpful for many tasks, but their limitations become evident with complex software engineering challenges.

In the coming weeks, I plan to explore each of these obstacles in more detail and investigate how we can overcome them programmatically.