Architecture Maestro from Now On
Over the past couple of years, I’ve used GPT-3 almost every day to prototype, experiment, and generally offload as much business and technical work as I can.
This past week I’ve been using GPT-4, and it’s a big leap over GPT-3.
With GPT-4, my coding approach has become predominantly AI-native. Where GPT-3 still left me as the primary coder, GPT-4 positions me as the “architecture maestro”: I oversee the codebase structure, requirements, objectives, testing, and AI management, and GPT-4 does all of the coding.
This workflow can be summarized as:
- Paste in the existing code relevant to my objectives
- Outline desired changes or additions to the code
- Evaluate GPT-4’s output
- Request further modifications or additions
For example, I maintain a home NAS containing over 20 years of family photos, managed by Ruby scripts that leverage rsync for file organization (by year/month/day) and Glacier for backups.
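For a sense of what those scripts do, here’s a minimal sketch of the organization step (the paths are hypothetical placeholders, and the Glacier upload is omitted):

```ruby
# Minimal sketch of the date-based organization step (not my actual script);
# paths are hypothetical, and the Glacier backup step is left out.
require 'fileutils'

SOURCE = '/mnt/nas/incoming'
DEST   = '/mnt/nas/photos'

Dir.glob(File.join(SOURCE, '**', '*.{jpg,JPG,jpeg,JPEG}')).each do |path|
  taken  = File.mtime(path)                              # date used for the folder layout
  target = File.join(DEST, taken.strftime('%Y/%m/%d'))   # e.g. /mnt/nas/photos/2023/03/18
  FileUtils.mkdir_p(target)
  # -a preserves timestamps/permissions; --ignore-existing avoids re-copying
  system('rsync', '-a', '--ignore-existing', path, "#{target}/")
end
```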
This past weekend, I pasted my existing code into the GPT-4 sandbox and requested, “pull out the exif data from photos, use that to extract any geo-data, use APIs to determine city/region/country from that, and put all the useful info (camera name, shutter speed, etc) in index.json files in each folder along with the full file path and relevant information in a hash format.”
GPT-4 ran for 30 seconds or so. I read through the code and it looked great:
(a) it rewrote parts of my existing code so the basic directory crawling and rsync logic was cleaner
(b) it suggested gems for EXIF parsing and reverse geocoding and wrote the code to use them
(c) it added (and documented) the code that creates the index.json files (roughly the kind of thing sketched below)
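I won’t reproduce GPT-4’s output verbatim, but the heart of it looked roughly like this sketch (it assumes the exifr and geocoder gems, which are my guesses rather than necessarily what GPT-4 chose):

```ruby
# Sketch of the per-folder indexing step, assuming the exifr and geocoder gems.
require 'exifr/jpeg'
require 'geocoder'
require 'json'

def index_folder(folder)
  entries = {}
  Dir.glob(File.join(folder, '*.{jpg,JPG,jpeg,JPEG}')).each do |path|
    exif = EXIFR::JPEG.new(path)
    info = {
      'camera'        => exif.model,
      'shutter_speed' => exif.exposure_time&.to_s,
      'taken_at'      => exif.date_time&.to_s
    }
    if exif.gps # reverse-geocode only when the photo carries GPS coordinates
      place = Geocoder.search([exif.gps.latitude, exif.gps.longitude]).first
      info['city']    = place&.city
      info['region']  = place&.state
      info['country'] = place&.country
    end
    entries[File.expand_path(path)] = info # full file path => hash of useful info
  end
  File.write(File.join(folder, 'index.json'), JSON.pretty_generate(entries))
end
```

Each folder ends up with an index.json keyed by full file path, which is the hash layout I asked for.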
I replaced my old code and ran it. Oops, an error with the gem. I pasted that into the chat, and GPT-4 replied: “Looks like you’re using an older version of that gem, try replacing the line of code [x] with [y].” That fixed the error, and everything ran perfectly.
Hmm, “I’d like this to run faster” I thought. “Make it faster”. GPT-4 dutifully pointed out that the IO-bound operations could be parallelized, and created a refactor to do that.
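The idea is simple: the per-folder work mostly waits on disk and the geocoding API, so it can run across a few threads. Something like this sketch (not GPT-4’s literal output) captures the shape:

```ruby
# Sketch of the parallelization idea: the per-folder work is IO-bound,
# so spread it across a handful of threads.
folders    = Dir.glob('/mnt/nas/photos/*/*/*')  # hypothetical year/month/day dirs
per_thread = [(folders.size / 8.0).ceil, 1].max

folders.each_slice(per_thread).map do |chunk|
  Thread.new { chunk.each { |folder| index_folder(folder) } } # index_folder from the sketch above
end.each(&:join)
```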
“It sure would be great to have a progress bar”. Done. GPT-4 seamlessly integrated one and pointed out that, due to the multi-threading, it would need to use a mutex.
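In practice that looks something like the following, assuming the ruby-progressbar gem (again, my guess at the gem); the mutex just makes sure two threads never update the shared bar at the same moment:

```ruby
# Progress bar sketch, assuming the ruby-progressbar gem; the mutex
# serializes increments to the shared bar across worker threads.
require 'ruby-progressbar'

folders    = Dir.glob('/mnt/nas/photos/*/*/*')  # hypothetical layout, as above
per_thread = [(folders.size / 8.0).ceil, 1].max
bar        = ProgressBar.create(title: 'Indexing', total: folders.size)
mutex      = Mutex.new

folders.each_slice(per_thread).map do |chunk|
  Thread.new do
    chunk.each do |folder|
      index_folder(folder)                 # from the earlier sketch
      mutex.synchronize { bar.increment }  # avoid racing on the shared bar
    end
  end
end.each(&:join)
```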
Except for the minor gem version error, GPT-4’s output was flawless and efficient. The entire process took mere minutes, saving me a day’s worth of manual coding.
I’m the architecture maestro and AI will do the coding from now on.