AI Changed My Coding Style

I’ve found that coding with an AI partner has dramatically transformed my coding practices and style, and some of those changes are even heretical. When I started at Google, one of the first things you had to do was learn Google’s strict style guidelines for code. These guidelines were designed to ensure that code was well written, documented, and easily readable as people switched between files written by different Googlers, past and present. Teams at Microsoft had something similar, though their style guide was trickier for humans, with all the szHungarianNotation.

Now, with AI coding assistants becoming more common, style guidelines are likely to change a bit to optimize for working with these new tools. Here are some changes I’ve made, some deliberately and some organically, to be more productive when collaborating with AI on code.

Disclaimer: I’ve been working primarily on new projects from scratch, I’m a ‘middling programmer’, and I’ve been working solo recently, so much of this may not apply to developers with existing codebases and hundreds of other engineers. Still, my suspicion is that some of these changes will start creeping into even those style guides soon, and I believe many coders will find these observations interesting even if they work on large teams or gigantic legacy monorepos.

Flatter, Even More Modular Code Structure

One of the most significant changes in my coding style has been the shift towards a flatter, more modular code structure. I now keep my code with flattened OO hierarchies, in larger files, and with fewer external file dependencies. Lately, I even drop the OO entirely and go ‘functional’. This approach serves two primary purposes:

1. It makes it easier to copy and paste code snippets into AI tools without needing to include or explain additional context.
2. It allows for atomic execution from the command line, enabling quick validation of AI-suggested changes.
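As a sketch of what that “atomic execution” can look like, here is a hypothetical self-contained module (the file and function names are illustrative, not from a real project): no project-internal imports, and a command-line entry point that validates the file in one run.

```python
# price_utils.py: a self-contained module with no project-internal imports,
# so the whole file can be pasted into an AI chat without extra context.

def apply_discount(price: float, percent: float) -> float:
    """Return the price reduced by the given percentage, rounded to cents."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

if __name__ == "__main__":
    # Atomic validation from the command line: `python price_utils.py`
    assert apply_discount(100.0, 25) == 75.0
    assert apply_discount(80.0, 50) == 40.0
    print("ok")
```

Because the file has no hidden dependencies, an AI-suggested change can be verified in seconds with a single `python price_utils.py`.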

This approach extends to CSS. Heretical, yes, but I almost always use inline CSS now. We are supposed to separate CSS from the content, but that separation requires a lot of extra context to be passed into the LLM, or it hallucinates the styles for you! So I now often use inline CSS, both inside the HTML and inline in the JavaScript that generates the DOM. It doesn’t look as pretty to a human, but it’s better for the machine. If you think about it, the separation of CSS isn’t just for reuse; it’s for human readability, and the machines don’t need it.
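To make the idea concrete, here is a hypothetical helper (in Python, for consistency with the rest of this post) that emits markup with the styling inlined, so a single paste gives an LLM both structure and presentation. The helper name and style values are illustrative assumptions, not from a real codebase.

```python
# A hypothetical helper that emits HTML with inline styles, so the styling
# travels with the markup when the snippet is pasted into an LLM chat.

def styled_button(label: str) -> str:
    style = ("background:#06f;color:#fff;border:none;"
             "padding:8px 16px;border-radius:4px")
    return f'<button style="{style}">{label}</button>'

html = styled_button("Save")
```

There is no separate stylesheet for the model to hallucinate; everything it needs to restyle the button is in the one string.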

This structural change has led to more modular, easily testable code. I find myself creating more entry points and writing code that can be tested in smaller, more focused units. This not only improves the overall quality of the code but also makes it easier to collaborate with AI tools.

Beyond Simple Co-Pilots

Interestingly, I’ve moved away from IDE-based copilots and code-completion tools. While these tools can be helpful for simple tasks, I’ve found a more effective approach when working with AI:

Instead of relying on line-by-line suggestions, I now copy/paste larger code chunks or entire files into the AI and ask for specific changes or improvements. This method allows the AI to understand the broader context of the code and provide more meaningful, holistic suggestions. It also lets me exclude code from the context that might confuse the AI, or tempt it to change or clean up legacy code along with all its callers and callees, which introduces a lot of risk.

AI as a Pair Programmer

Large Language Models (LLMs) have become my virtual pair programmers. I frequently use them to:

- Get thoughts on a new function I’ve just written
- Request general improvements to a piece of code
- Brainstorm alternative approaches to solving a problem

This collaboration often leads to more elegant and efficient solutions than I might have come up with on my own. It’s like having a knowledgeable colleague available 24/7 to bounce ideas off of and get instant feedback. Unlike a code review at Google, the best part is that I can ignore the AI’s input without leaving a digital trail or starting an argument that lasts for days.

Personalized AI Context

To maintain consistency across my projects and maximize the effectiveness of my AI collaborations, I’ve established a set of rules that I apply across all my LLM interactions. These include:

- Preferring two-space indentation in Python
- Using the latest Python release
- Specifying (nearly latest) versions of all dependencies
- Favoring Font Awesome icons to avoid SVGs and additional libraries
- Always requesting full code output to avoid truncation issues

By setting these preferences upfront, I ensure that the AI-generated code aligns with my coding style and project requirements, reducing the need for manual regex fixups after each paste.
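One lightweight way to apply such standing rules is to prepend them to every request. Here is a minimal sketch, assuming a hypothetical `build_prompt` helper; the preference text mirrors the list above and any real LLM client would be substituted where the prompt string is consumed.

```python
# A sketch of prepending a personal style preamble to every LLM request.
# build_prompt is a hypothetical helper, not part of any real API.

PREFERENCES = """\
- Use two-space indentation in Python.
- Target the latest Python release.
- Pin (near-latest) versions of all dependencies.
- Prefer Font Awesome icons over inline SVGs and extra libraries.
- Always output complete files, never truncated snippets.
"""

def build_prompt(task: str) -> str:
    """Wrap a task description with the standing style rules."""
    return f"Follow these standing rules:\n{PREFERENCES}\nTask: {task}"

prompt = build_prompt("Add a health-check endpoint to the Django app.")
```

The same preamble can be pasted manually at the top of a fresh chat thread if you are not going through an API at all.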

Fresh Starts for Interfaces

When it comes to user interfaces, I’ve discovered an interesting phenomenon: starting fresh with HTML or SwiftUI often yields better results than trying to collaborate with AI on existing interfaces.

The reason is simple: when you start from scratch, the code is formatted and structured in ways that the AI expects and generally uses. This leads to fewer mistakes and assumptions in the AI’s future changes or additions to the interface code.

Global Requirements First

My approach to starting new projects has also evolved. I now begin by defining global requirements such as:

- Responsiveness
- Overall look and feel (e.g., “modern/flat”)
- Key design elements

By establishing these overarching guidelines before diving into specific features, I create a more cohesive foundation for the project. This approach also helps the AI understand the project’s direction from the outset.

Maintaining Full Context

One of the most powerful aspects of using AI in the coding process is the ability to maintain a full context of the project’s evolution. I achieve this by:

- Starting projects within the LLM
- Keeping separate chat threads for each major part of my project, none too large for the context window
- Making changes via the AI, even for minor edits that might be quicker to type manually

This practice ensures that the LLM has a chronological, full-context history of the creative intent, bugs, and iterations. It’s incredibly valuable for later reference and allows the AI to provide more informed assistance as the project progresses — and it doesn’t repeat earlier mistakes.

LLM Consistency is Key

Through trial and error, I’ve learned the importance of sticking with a single LLM per project or file. Different LLMs have varying coding styles and assumptions, which can lead to inconsistencies if you switch between them.

Lately, I’m using Claude.ai for most of my coding tasks. The reasons for this choice include:

- Its large context window, which is almost invaluable for complex projects
- Competitive API pricing, especially with the introduction of the Sonnet model
- The ability to use the same LLM for both writing code and executing prompts

This consistency is particularly valuable when working with code that includes LLM prompts/API calls. Yes, as mentioned above, keep those prompt strings inline with the calling code instead of in a separate data file. The same LLM understands both the code and the prompt, optimizing their interaction and leading to better overall results.
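A minimal sketch of that inline-prompt pattern follows. The prompt constant sits directly beside the function that uses it, so one paste carries both into a chat. The `call_llm` parameter is a hypothetical client hook, not a real library; the default stub only exists so the sketch runs offline.

```python
# Keep the prompt string next to the API call, not in a separate data file,
# so one paste gives the LLM both. call_llm is a hypothetical client hook;
# the default stub just echoes a prefix so the sketch runs without a network.

SUMMARIZE_PROMPT = (
    "Summarize the following bug report in one sentence, "
    "keeping any error codes verbatim:\n\n{report}"
)

def summarize_report(report: str, call_llm=lambda p: p[:60]) -> str:
    """Format the inline prompt and hand it to the (pluggable) LLM client."""
    return call_llm(SUMMARIZE_PROMPT.format(report=report))

summary = summarize_report("Crash in parser, error E1234.")
```

In real use you would pass your actual client function as `call_llm`; the point is only that prompt and caller live in the same file.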

Embracing New Technologies with Confidence

One of the most empowering aspects of working with AI is the confidence it instills when using the newest widgets and APIs for various platforms. In the ever-changing world of SDKs, staying current with the latest tools and features can be challenging. However, with AI as a coding partner, I find myself more willing to experiment with and implement the latest platform features, because I don’t have to learn most of their usage and nuances up front.

The AI’s knowledge base, especially given Claude 3.5 Sonnet’s recent training cutoff, often includes up-to-date information on the latest APIs and widgets. This means I can quickly get guidance on how to use new features effectively, understand best practices, and avoid common pitfalls. This confidence has allowed me to push the boundaries of my projects and deliver more interesting solutions.

Enhanced Code Quality and Style

Another significant benefit I’ve noticed is that AI tends to make my code far more ‘Pythonic’ by default. When working in Python, the AI-generated code often adheres to Python’s philosophy of readability and simplicity. This not only makes the code more elegant but also more maintainable in the long run.
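As an illustrative before/after (the function names are mine, purely for demonstration), this is the kind of ‘Pythonic’ rewrite an LLM tends to produce unprompted: the explicit accumulator loop becomes a list comprehension with identical behavior.

```python
# Illustrative before/after: the style of rewrite an LLM often suggests.

def evens_squared_verbose(numbers):
    # The hand-rolled accumulator style I might write first.
    result = []
    for n in numbers:
        if n % 2 == 0:
            result.append(n * n)
    return result

def evens_squared_pythonic(numbers):
    # The comprehension an LLM typically proposes instead.
    return [n * n for n in numbers if n % 2 == 0]

assert evens_squared_verbose(range(6)) == evens_squared_pythonic(range(6)) == [0, 4, 16]
```

Both versions return the same list; the second simply says what it computes rather than how.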

Moreover, when I specifically ask for it, the AI often generates more efficient and readable implementations of my code. It’s like having a senior developer constantly reviewing and refactoring your work, suggesting improvements you might not have considered. This has dramatically improved the overall quality and performance of my codebase and has been an excellent learning tool for improving my own coding skills.

A Simple Yet Powerful AI Trick for UI/UX Improvement

One of the most effective AI techniques I’ve discovered, as embarrassingly simple as it might sound, is to present UI/UX code to the AI and simply instruct it to “make it prettier” or “make it better”. This approach has yielded surprisingly impressive results.

Here’s why this works so well:

1. Leveraging AI’s Broad Knowledge: AI models like LLMs have been trained on vast amounts of code, including numerous examples of well-designed interfaces. By asking it to improve your UI/UX, you’re tapping into this broad knowledge base.

2. Overcoming Designer’s Block: Sometimes, as developers, we can get stuck in our own design patterns or struggle to envision improvements. The AI can offer fresh perspectives and ideas we might not have considered.

3. Quick Iterations: This approach allows for rapid iteration. You can quickly generate multiple variations or improvements, then cherry-pick the best designs.

4. Learning Opportunity: By analyzing the changes the AI suggests, you can learn new design patterns, CSS tricks, or modern UI practices that you might not have been aware of.

5. Customization: You can further refine your request by specifying design principles or styles you prefer, like “make it more minimalist” or “add a modern, flat design feel”.

More generally, and somewhat ego-threateningly, when I ask for more general, less specific help, better code and design comes out more often than not.

This simple technique of asking AI to “make it prettier” or “make it better” has become one of my go-to strategies for quickly elevating the visual quality of my UI/UX code. It’s a testament to how AI can augment our design skills and help us create more appealing interfaces with minimal effort.

Seamless Multi-Language Development

One of the funniest and most practical benefits I’ve experienced is the ease of switching between multiple programming languages. In my work, I often juggle between HTML, JavaScript, Python, Django, and Bash, among others. Before integrating AI into my workflow, I would occasionally mix up language-specific syntax, trying to use .push() in Python or .append() in JavaScript.

With AI assistance, these cross-language mistakes have become far less frequent. The AI keeps track of the language context and provides appropriate syntax and methods for each language. This has not only reduced errors but also increased my productivity by allowing me to flow more seamlessly between different parts of a project that might require different languages.

It’s almost like having a polyglot pair programmer who can effortlessly switch between languages, keeping you on track and preventing those small but frustrating syntax errors that can interrupt your coding flow.
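The .push()/.append() slip mentioned above is easy to demonstrate; this toy snippet shows the JavaScript habit failing in Python and the idiom an AI assistant steers you back to.

```python
# The classic cross-language slip: list.push() is JavaScript; Python uses append().
items = []
try:
    items.push(1)    # JavaScript habit: lists have no push(), so this raises
except AttributeError:
    items.append(1)  # the Python idiom the AI nudges you back toward
```

With an AI watching the context, the wrong method rarely makes it into the file in the first place.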

Quantifiable Productivity Boost

Perhaps the most compelling argument for integrating AI into the coding process is the tangible increase in productivity. In my experience, I’ve found that I’m approximately twice as productive when using LLMs compared to coding without them. This 2X productivity boost is not just about writing code faster; it encompasses various aspects of the development process:

1. Faster Problem Solving: AI helps in quickly exploring different approaches to solving a problem, often suggesting solutions I might not have immediately considered.

2. Reduced Debugging Time: With cleaner, more consistent code generated with AI assistance, I spend less time debugging and fixing errors.

3. Efficient Learning of New Technologies: As mentioned earlier, AI’s assistance in working with new APIs and technologies significantly reduces the learning curve, allowing me to implement new features more quickly.

4. Streamlined Documentation: AI helps in generating and improving documentation, a task that often takes considerable time but is crucial for maintainability.

5. Code Refactoring and Optimization: The ability to quickly get AI suggestions for code improvements means I can refactor and optimize code more frequently and efficiently.

This doubling of productivity has not only allowed me to complete projects faster but has also given me more time to focus on the creative and strategic aspects of development. It’s important to note that this productivity increase came with practice and learning how to effectively collaborate with AI tools. The key is finding the right balance between leveraging AI capabilities and applying your own expertise and creativity.

While the overall productivity gain is significant, it’s important to note that working with AI-generated code comes with its own set of challenges. Debugging simple errors in AI-suggested code can be annoying, but I’ve found that it’s often much faster than debugging my own errors and omissions. This trade-off contributes to the overall productivity boost, as the time saved in generating code and solving complex problems outweighs the time spent on occasional AI-induced errors.

Navigating Challenges in AI-Augmented Coding

While the benefits of AI-augmented coding are substantial, it’s crucial to be aware of potential pitfalls and how to navigate them effectively. Here are some challenges I’ve encountered and the strategies I’ve developed to address them:

1. Copy-Paste Pitfalls

One of the most time-consuming issues I initially faced was related to transferring code between the AI chat window and my development environment. Errors often crept in during this process, either from overwriting existing code or missing crucial parts of the AI’s suggestions.

Solution: I now always request full code generation for changes from the LLM. This approach ensures that I have the complete context and can more easily spot any inconsistencies when integrating the AI’s suggestions into my codebase.

2. Balancing AI Assistance and IDE Features

While many developers use AI-powered coding assistants integrated directly into their IDEs (like GitHub Copilot), I’ve found that I’m much more effective when I have more control over the context in which I’m asking the AI for help.

Strategy: Instead of relying solely on in-IDE AI assistants, I maintain a separate dialogue with the LLM. This allows me to provide more comprehensive context, ask follow-up questions, and get more tailored assistance. The trade-off of switching between windows is outweighed by the quality and specificity of the AI’s input.

3. Overreliance on AI

It’s easy to become overly dependent on AI suggestions, potentially stunting your own problem-solving skills or leading to a lack of understanding of the code you’re working with.

Mitigation: I make a conscious effort to understand every piece of AI-generated code before implementing it. I often ask the AI to explain its suggestions or to provide alternative approaches. This not only ensures that I comprehend the code I’m using but also serves as a valuable learning opportunity.

4. Maintaining Code Consistency

When frequently incorporating AI-generated code, there’s a risk of ending up with a codebase that lacks consistency in style or approach.

Approach: I’ve established a set of guidelines for AI interactions, including coding style preferences and project-specific requirements. I communicate these to the AI at the beginning of each session and periodically remind it of these guidelines. Additionally, I perform regular code reviews to ensure overall consistency.

5. The Temptation of Constant Refactoring

An unexpected side effect of working with AI-generated code is the heightened awareness it brings to the quality and style of your existing codebase. This phenomenon reminds me of an old rule from my time at Google: clean up any file/code you touch. However, the situation with AI takes this to a new level.

When integrating LLM-written code, which often looks cleaner and more elegant, I sometimes find myself distracted by the urge to clean up my legacy code. The contrast in style between the AI-generated code and the existing codebase can be stark, making the older code seem suboptimal or outdated.

While this increased attention to code quality can be beneficial, it also presents challenges:

1. Time Management: The temptation to refactor existing code can be a significant time sink. What starts as a quick cleanup can turn into a major refactoring session, potentially derailing your current task.

2. Scope Creep: Constant refactoring can lead to scope creep in your projects. A simple feature addition might spiral into a large-scale rewrite if you’re not careful.

3. Balancing Act: It becomes crucial to balance the desire for a uniformly clean codebase with the practical needs of project timelines and stability.

To manage this, I’ve adopted a few strategies:

- Scheduled Refactoring: Instead of immediately refactoring legacy code, I make notes and schedule dedicated time for cleanup tasks. This helps maintain focus on current objectives while still addressing technical debt.

- Incremental Improvements: When I do touch legacy code, I make small, incremental improvements, and factor them out into a separate code chunk with comments, rather than wholesale rewrites. This approach aligns with the original Google principles without derailing current work. At least that’s what I tell myself.

- Refactoring Sprints: Periodically, I dedicate entire sprints to refactoring and bringing older parts of the codebase up to current standards. This allows for focused improvement without constantly disrupting feature development.

While the quality boost that comes from AI-assisted coding is generally positive, it’s important to manage the indirect effects it can have on your approach to existing code. By being aware of this tendency and having strategies to handle it, you can maintain a balance between improving your codebase and staying productive on current tasks.

6. The Paradox of Increased Ambition

One of the most unexpected outcomes of integrating AI into my coding process has been its impact on project scope and ambition. While AI has undoubtedly increased my productivity in terms of code generation and problem-solving speed, I’ve found that, somewhat paradoxically, I’m often taking longer to deliver projects. Here’s why:

1. Expanded Project Scopes: The efficiency gained from AI assistance has made me more ambitious in defining project scopes. Features that I might have previously considered too time-consuming or complex suddenly seem within reach.

2. Pursuit of Perfection: With AI’s ability to quickly generate and iterate on code, I often find myself pursuing more perfect or comprehensive solutions rather than settling for “good enough.”

3. Exploration of Alternatives: AI’s capacity to suggest multiple approaches to a problem encourages more exploration of alternative solutions, which while beneficial for quality, can extend development timelines.

4. Feature Creep: The ease of adding new features or improvements with AI assistance can lead to feature creep, where the project continuously expands beyond its original scope.

5. Learning New Technologies: AI’s vast knowledge often introduces me to new technologies or techniques that I’m tempted to incorporate into projects, leading to additional learning curves and implementation time.

This situation presents a new challenge: balancing the expanded possibilities that AI offers with practical project management and delivery timelines. To address this, I’ve had to develop new strategies:

- Stricter Scope Definition: I now spend more time upfront clearly defining and limiting project scope, being more resistant to scope changes even when they seem easily achievable with AI.
- Time-Boxing Exploration: I set strict time limits for exploring alternative solutions or new technologies, forcing myself to make decisions and move forward.
- Minimum Viable Product (MVP) Focus: I’ve renewed my commitment to the MVP concept, using AI to reach a solid MVP quickly before considering expansions or improvements.
- Regular Progress Reviews: I conduct more frequent progress reviews, assessing whether the current direction aligns with project goals and timelines.

While the increased ambition sparked by AI capabilities can lead to more feature-rich and innovative solutions, it’s crucial to remain aware of its impact on overall project timelines. The key is to harness the productivity gains of AI while maintaining discipline in project scope and time management.

7. Technical Hiccups and Frustrations

While AI has dramatically improved my coding process, it’s not without its technical challenges. Two issues, in particular, have been significant time sinks in my workflow:

Incomplete Code Generation

One of the most frustrating issues I’ve encountered, particularly with models like GPT and Claude, is incomplete code generation. This problem seems to occur more frequently when dealing with longer context windows or generating larger code blocks. The AI might generate code that appears complete at first glance, but upon closer inspection, crucial sections are missing: no ellipsis, no placeholder comment, just absent, and not even aligned on a newline boundary.

What makes this particularly challenging is that when you ask the AI to fix or complete the code, it often acts as if the missing section is already there. It’s almost as if there’s an invisible markup tag or some other quirk in the AI’s internal representation that breaks up the output.

To mitigate this:

- I’ve learned to carefully review all generated code, especially longer snippets.
- When I spot missing sections, instead of asking the AI to “fix” the code, I often find it more effective to ask it to regenerate the entire function or module.
- For crucial or complex pieces of code, I sometimes ask the AI to generate the code in smaller, more manageable chunks.

Chat Thread Errors

Another significant issue I’ve faced, specifically with Claude, is the occasional occurrence of a general error message that renders the entire chat thread inaccessible. This can be incredibly frustrating, especially if you’ve been working on a complex problem or have built up a lot of context in that thread.

To address this:

- I’ve started to more frequently save important code snippets or insights from the chat to external files.
- For complex projects, I try to break discussions into smaller, more focused chat threads rather than relying on a single, long conversation.
- I wish there were a feature to save draft threads, clone them quickly, or implement some form of versioning to prevent complete loss of work.

These technical hiccups serve as a reminder that while AI coding assistants are powerful tools, they are still evolving technologies. It’s crucial to have backup strategies and not to rely entirely on the persistence of chat threads or the perfection of code generation.

Conclusion

AI-augmented coding represents the future of software development — and our conventions need to change with it. The key to success lies not in resisting this change, but in learning to effectively collaborate with AI tools while maintaining our critical thinking and decision-making skills. Humans should focus on what they are good at — and delegate the things AI is better at, to the AI.

 — Jason Arbon

Heemeng (Chris) Foo

Leadership in Quality Engineering, Test and Engineering Excellence. Startup advisor.


I was just at #elevate2024 and companies like Gitlab and HackerRank are already trialling coding interviews with coding assistants. They report that it allows them to focus on more crucial skills like design and tradeoffs instead of the usual boilerplate stuff.

Anna Royzman

Technology Leader | International Speaker & Trainer | Quality Leadership Visionary►Organizing groundbreaking conferences


Thank you for this thorough experience report Jason Arbon. I am not using copilot in coding, but my usage of genAI aligns well with your observations and conclusions. I am also in pursuit of quality, now that the "typing" (coding) part is auto-generated. Thank you for being a pioneer in sharing your experiences. I don't see many honest articles like that.
