This is the first post in our Chatbot Chronicles series.

Recently, I conducted an experiment to test if ChatGPT, an AI language model, could generate a quick MVP from a client brief. The experiment included copywriting, logo/branding, coding, hosting, and deployment, all completed within a day, amidst other daily tasks.

Although ChatGPT 4 was available and performed better, I used version 3.5 to avoid GPT-4's messages-per-day cap.

Spoiler alert: it worked! However, it's worth noting that my 20 years of experience in web development helped the process. I also set myself an extra challenge by using a JS framework I hadn't worked with before (Next.js).

Here's a breakdown of how ChatGPT performed:

Copywriting:

ChatGPT performed well in creating copy for the app. The more context provided, the better the results. For an app that required users to select preferences from a list of options, ChatGPT was quick to generate text for those options, saving me a lot of time.
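To give a flavour of what this looked like in the app, here's a minimal sketch of a preference picker. The option ids and labels are my own placeholders, not the actual client copy ChatGPT generated:

```javascript
// Hypothetical sketch of the preference options ChatGPT helped write the copy for.
// Ids and labels are illustrative placeholders, not the real client brief's content.
const preferenceOptions = [
  { id: "outdoors", label: "Outdoor adventures" },
  { id: "food", label: "Food and drink" },
  { id: "culture", label: "Museums and galleries" },
];

// Toggle an option id in the user's current selection and return the new selection.
function togglePreference(selected, id) {
  return selected.includes(id)
    ? selected.filter((s) => s !== id)
    : [...selected, id];
}
```

With copy like this in a plain data structure, swapping in ChatGPT's generated text was a copy-and-paste job rather than a rewrite.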

Logo/Branding:

ChatGPT created a relevant SVG icon for the client brief, which was sufficient for an MVP. However, when I asked ChatGPT to create another SVG icon, the results were less successful, with the icon looking nothing like the subject. For now this saves no time, and it would have been quicker to use an existing icon from a library like Heroicons or to ask a designer to create one.

Functionality:

My experience as a developer was crucial in determining the three screens required for the app. ChatGPT built each screen one at a time, requiring several refinements at each stage. However, it was good at remembering the previous state of a file, making it easy to copy and paste code from the chat window into my editor.
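The three screens formed a simple linear flow. A minimal sketch in plain JavaScript, where the screen names are my own placeholders (the post doesn't name the actual screens):

```javascript
// Hypothetical three-screen flow; names are placeholders, not the real app's routes.
const SCREENS = ["select-preferences", "review-choices", "results"];

// Return the screen that follows `current`, or null on the final or an unknown screen.
function nextScreen(current) {
  const i = SCREENS.indexOf(current);
  return i === -1 || i === SCREENS.length - 1 ? null : SCREENS[i + 1];
}
```

Deciding on a flow like this up front meant each ChatGPT session could focus on one screen at a time.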

Debugging issues:

There were a few instances where the app crashed after a code change. However, following ChatGPT's instructions and pasting the error message back into the chat allowed the bot to fix the error or suggest further code changes to debug the problem.

Deployment:

ChatGPT selected a free Next.js hosting service (Vercel) and provided detailed installation and configuration instructions. However, for database hosting, it believed Heroku still offered a free tier for PostgreSQL databases (which is no longer available), so I opted for the paid mini tier instead.
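One detail worth getting right regardless of host is failing fast when the database connection string is missing. A minimal sketch, assuming the app reads a `DATABASE_URL` environment variable (a conventional name on both Vercel and Heroku; the actual variable in my project may differ):

```javascript
// Hedged sketch: read the Postgres connection string from the environment.
// `DATABASE_URL` is an assumed name; check your host's dashboard for the real one.
function getDatabaseUrl(env = process.env) {
  const url = env.DATABASE_URL;
  if (!url) {
    throw new Error("DATABASE_URL is not set; configure it in your hosting dashboard.");
  }
  return url;
}
```

Throwing at startup beats a cryptic connection error deep inside a request handler.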

Other features:

ChatGPT exceeded my expectations when I asked it to write a README.md file for the repo. It not only provided installation instructions and deployment commands but also listed the environment variables a new developer would need to set up.
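For a sense of what that section covered, the environment-variables part of such a README might look like this (variable names here are hypothetical, not the project's actual ones):

```markdown
## Environment variables

- `DATABASE_URL`: PostgreSQL connection string
- `NEXT_PUBLIC_BASE_URL`: public base URL of the deployed app
```

Having this written down meant a new developer could get the project running without asking me anything.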

Improvements:

  • ChatGPT's training data cuts off in 2021, so updates to frameworks since then can cause some hiccups.
  • Copying and pasting between the bot and the code editor was also a bit cumbersome. However, exciting tools like Bloop.ai promise tighter code integration in the future.
  • Although the UI was functional, it was also basic. ChatGPT made modest improvements when I asked it to refine layouts, but it didn't meet the standards of our creative team at Sauce.
  • Anecdotes around the Sauce office suggest not everyone's experience has been as smooth, so we will be looking into how different people interact with the bot and how that affects the responses.

Overall, the experiment demonstrated that AI language models like ChatGPT can generate quick MVPs from client briefs with a bit of creativity and technical know-how. It's an exciting development for Sauce: it can lower costs for clients looking for proof-of-concept solutions while embracing the "fail fast" agile mentality.