TL;DR: Rigorous testing remains essential; AI-generated code isn't bug-free.
Tips:
- Test each workflow carefully, as the AI frequently leaves placeholders or incomplete functionality.
- Clearly specify errors and device specifics to the AI for effective debugging.
- Leverage browser developer tools, component variable tracking, and console logging to pinpoint errors and feed detailed descriptions to the AI.
- For complex issues, ask the AI for advice.
- Avoid abandoning your main project chat and starting over unless absolutely necessary. Instead, create dedicated debugger chats to isolate and resolve complex issues without cluttering your main project chat or losing context.
- Prompt the AI to improve application performance and employ best practices.
AI significantly speeds development but does not eliminate the need for rigorous testing. The current AI tools do not appear to do any deep workflow or bug testing on their own. You'll need to go in and tell the AI to fix each issue you find, which can sometimes lead to loops where the fix gets stuck. Below I describe some of the scenarios I encountered and the ways I was able to resolve them.
Note that as you debug, if you want the AI to diagnose an error or recommend a fix without updating the code, specify in the chat that you do not want it to make a code update. Otherwise, if you report an error or ask it a question about a feature working suboptimally, it will update the code automatically.
The first thing I tested was basic workflows. I found many AI-generated components were initially placeholders or lacked complete functionality. Testing each workflow meticulously helped quickly identify these feature gaps, which I could then tell the AI to complete.
To fix bugs, I started by specifying exactly what was failing and the series of steps to reproduce it. For my deployed app I also provided information such as browser, operating system, and device. This was often enough for the AI to effectively diagnose and correct errors. If the first fix doesn't work, you can simply tell the AI that it didn't work and it will try a new approach; a few iterations of this typically resolve the issue.
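As an illustration, a bug report prompt might look something like this (the details here are made up, not from my app):

```text
The "Save Photo" button does nothing when I click it.
Steps to reproduce: 1) upload an image, 2) crop it, 3) click Save Photo.
Expected: the photo appears in my gallery. Actual: nothing happens, no error shown.
Environment: Chrome 126 on Windows 11, desktop.
Please diagnose the cause and tell me your recommended fix before changing any code.
```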
Unfortunately, the AI can sometimes find itself in a loop where it is unable to fix a bug and doesn't know how to move forward. If that happens, you can open the browser's Developer Tools > Console to see what errors pop up and feed that console error information into the chat to help the AI pinpoint the bug. Another option is to ask the AI to create a debugging box in the app that displays the component variable values so you can figure out which one is misfiring. V0.dev also added console logging to help debug when an error wasn't fixed on the first try.
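To make the debugging-box idea concrete, here is a minimal sketch of what such a component might look like in a React/TypeScript app of the kind V0.dev generates; the component name and the state values shown are illustrative, not from my app:

```tsx
// Hypothetical debug panel that overlays current component values on screen,
// so you can watch which value is misfiring as you click through a workflow.
export function DebugPanel({ values }: { values: Record<string, unknown> }) {
  // Hide it outside development so it never ships to users.
  if (process.env.NODE_ENV === "production") return null;
  return (
    <pre
      style={{ position: "fixed", bottom: 8, right: 8, background: "#f3f3f3", padding: 8, fontSize: 12 }}
    >
      {JSON.stringify(values, null, 2)}
    </pre>
  );
}

// Usage inside the component you suspect is misbehaving:
// <DebugPanel values={{ isPlaying, trackUrl, volume }} />
```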
If you know which part of the code is breaking and you are comfortable with code, you can copy the failing code into a different AI tool and ask it to fix the error. I personally didn't feel comfortable enough to use this technique, since I couldn't follow enough of the code to know whether it would risk breaking other dependencies in the code. But I did sometimes describe the bug I was seeing to a different AI, ask its opinion on how to fix it, and then feed that recommendation back into the AI that had my application so it could implement the fix.
For stubborn bugs, I found that isolating problems in separate chats within the same AI tool proved useful. For example, I had difficulty getting the music file to deploy to Vercel. I used a separate chat focused solely on creating a solution for deploying music files to Vercel. The AI determined the solution was to use a Vercel Blob URL with direct loading instead of API routes. I then took that solution and fed it back into the main app chat, which fixed the issue.
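For reference, a direct-loading approach along those lines might look roughly like the sketch below. This is my own illustration, not the code the AI produced; it assumes the @vercel/blob package with its BLOB_READ_WRITE_TOKEN configured, and the file name is made up:

```ts
// One-off server-side script: upload the music file to Vercel Blob and print its public URL.
import { put } from "@vercel/blob";
import { readFile } from "node:fs/promises";

const file = await readFile("./public/theme-song.mp3"); // illustrative file name
const { url } = await put("theme-song.mp3", file, { access: "public" });
console.log(url); // the app can then set <audio src={url}> to this URL
```

The key difference from my original setup is that the audio element loads the Blob URL directly rather than streaming the file through an API route.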
As part of your quality control testing, you'll also want to test the application's performance. Vercel has some features you can turn on to monitor app performance, but this is something you'll want to consider yourself as well. For example, I knew that my app included large image file uploads and would be using local browser storage, which is limited. I asked the AI what it recommended to improve performance for the image storage, and it advised compressing the images. After the image compression was implemented, the app was noticeably faster and it retained all four images in the local browser cache. Application performance is an example of where you have to ask the AI what it considers best practices and then tell it to implement them; don't assume it's doing this on its own.
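To give a sense of what a compression step like that can involve, here is a rough browser-side sketch, not the actual code the AI wrote for my app; the maximum width, JPEG quality, and storage key are illustrative values:

```ts
// Compress an uploaded image in the browser before caching it in localStorage.
// localStorage is typically limited to around 5 MB, so storing raw photos fails quickly;
// re-encoding to a smaller JPEG data URL keeps several images under the limit.
function compressImage(file: File, maxWidth = 1024, quality = 0.7): Promise<string> {
  return new Promise((resolve, reject) => {
    const img = new Image();
    img.onload = () => {
      const scale = Math.min(1, maxWidth / img.width);
      const canvas = document.createElement("canvas");
      canvas.width = Math.round(img.width * scale);
      canvas.height = Math.round(img.height * scale);
      canvas.getContext("2d")!.drawImage(img, 0, 0, canvas.width, canvas.height);
      resolve(canvas.toDataURL("image/jpeg", quality)); // compressed data URL
    };
    img.onerror = reject;
    img.src = URL.createObjectURL(file);
  });
}

// Usage: localStorage.setItem("photo-1", await compressImage(selectedFile));
```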