Are you really using AI?
Yesterday I was scrolling through my feed when I came across a short video by Allie K. Miller, who’s something of a thought leader in the AI business space. Her point was that you probably already use AI, but are you really using it? AI can enable so many novel use cases that merely enhancing your old processes is kind of a waste of its full potential.
It made me stop and think. Like many, I’ve been using AI tools here and there - asking ChatGPT to help with writing, generating some ideas, maybe summarizing content. But am I actually reimagining how I work?
Allie mentioned how after 10 years of creating AI content (7 of those without AI tools), she completely rebuilt her workflow. Now she dictates to Otter AI while walking, runs her content through multiple AI systems, and formats everything in Beehiiv with auto-links. More steps, but apparently 80% time saved. That’s not just enhancement.
I wonder if most of us are in that “scratching the surface” phase with AI that Allie mentions. We’re asking AI to help with tasks we already do, rather than rethinking what tasks we should be doing in the first place.
I’m obviously not using AI to its fullest potential either. My current workflows still look pretty much like they did a couple of years ago, with some AI assistance sprinkled in rather than being rebuilt around it. There’s something to think about there.
(By the way, I don’t often subscribe to newsletters, but I do follow Allie’s work. If any of this piqued your interest, you might want to check out her newsletter at AI with Allie.)
Vibe coding an ant colony simulator
When it comes to household chores, ironing is one of my favorites, as it often lets me shorten my “Watch later” playlist on YouTube (771 videos there as of writing this…). The last time I was ironing I watched this cool video of an ant colony simulator:
The fun thing is that it creates fascinating emergent behavior from a few simple rules. The ants walk around randomly, leaving behind a trail of pheromones like breadcrumbs pointing back home, and when they encounter food, they pick some up and follow the pheromone trail back to the nest. While carrying food, they leave behind another type of pheromone that points the ants to the food source.
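To make those rules concrete, here is roughly what that loop looks like as code. This is my own minimal toy sketch in plain JavaScript, not the code from the video and not anything from the AI experiments described below; the grid size, deposit amounts, and evaporation rate are arbitrary placeholders.

```javascript
// My own toy version of the two-pheromone rule, for illustration only.
const W = 50, H = 50;
const nest = { x: 5, y: 5 };
const food = { x: 44, y: 44 };

// Two pheromone grids: "home" marks the way back to the nest,
// "food" marks the way to the food source.
const homePher = Array.from({ length: W }, () => new Float32Array(H));
const foodPher = Array.from({ length: W }, () => new Float32Array(H));

const clamp = (v, lo, hi) => Math.max(lo, Math.min(hi, v));
const dirs = [[1,0],[1,1],[0,1],[-1,1],[-1,0],[-1,-1],[0,-1],[1,-1]];
let delivered = 0;

function step(ant) {
  // Searching ants sniff for the food trail, returning ants for the home trail.
  const follow = ant.carrying ? homePher : foodPher;
  let best = [ant.x, ant.y], bestScore = -Infinity;
  for (const [dx, dy] of dirs) {
    const nx = clamp(ant.x + dx, 0, W - 1), ny = clamp(ant.y + dy, 0, H - 1);
    const score = follow[nx][ny] + Math.random();   // noise = random wandering
    if (score > bestScore) { bestScore = score; best = [nx, ny]; }
  }
  [ant.x, ant.y] = best;

  // Deposit the *other* pheromone: searchers drop breadcrumbs pointing home,
  // carriers drop the trail that leads other ants to the food.
  (ant.carrying ? foodPher : homePher)[ant.x][ant.y] += 1;

  if (!ant.carrying && ant.x === food.x && ant.y === food.y) {
    ant.carrying = true;
  } else if (ant.carrying && ant.x === nest.x && ant.y === nest.y) {
    ant.carrying = false;
    delivered++;
  }
}

const ants = Array.from({ length: 50 }, () => ({ x: nest.x, y: nest.y, carrying: false }));
for (let t = 0; t < 20000; t++) {
  ants.forEach(step);
  // Evaporation, so stale trails fade instead of dominating forever.
  for (const grid of [homePher, foodPher])
    for (const col of grid) for (let y = 0; y < H; y++) col[y] *= 0.999;
}
console.log(`food deliveries after 20000 steps: ${delivered}`);
```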
Recreating this seemed like an interesting exercise in vibe coding, where you just describe your need in your own words and let an AI do the actual programming for you. Vibe coding seems to split opinions sharply: some find that it levels the playing field and makes coding accessible to anyone, while others loathe the idea of an influx of AI-generated trash.
My take is that AI is a tool. Just as you don’t become a better photographer by buying a more expensive camera, you don’t become a (better) programmer by using smarter tools. If you are prone to creating bugs without AI, then AI just enhances your ability to create those bugs faster.
But sometimes it does not matter. If you’re working on a hobby project for yourself and you’re only running it locally, it might as well have all the security holes in the world. The situation quickly changes, however, once you deploy your application online and it has a backend storing any kind of state.
Another important reason for this exercise, besides wanting to watch virtual ants, was simply to keep up with and assess the current state of AI code generation. We seem to be heading towards ever more advanced and omnipresent AI tools, so it’s best to start getting familiar with them now.
So, how did it go? I’ve had successes with vibe coding before, so I was very optimistic. I wish I could tell you it was a great success: that I gave ChatGPT one prompt and it gave me a working piece of code right away, and then I spent an hour watching my new virtual ant colony do its thing. Alas, it was not a great success.
I started with ChatGPT 4o and described to it what I wanted: a simple ant colony simulator that runs in a browser, where the ants leave pheromone trails and follow them. Emergent behavior from simple rules, like in the YouTube video I had watched. The model quickly blurted out an HTML file with JavaScript, and it was very promising. The simulation even launched successfully and ants began crawling around! You could even place the nest, the food, and some walls, though the UI was clunky – you couldn’t even see the item you were placing while the simulation was running.
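For illustration, this is the kind of placement preview I had in mind. It’s my own rough sketch, not ChatGPT’s code, and the 'arena' canvas id is just an illustrative name:

```javascript
// My own sketch of the missing placement feedback: remember which tool is
// selected and draw a translucent ghost of it under the cursor on every
// frame, even while the simulation keeps running.
const canvas = document.getElementById('arena');   // hypothetical canvas id
const ctx = canvas.getContext('2d');
let currentTool = 'wall';                           // 'nest' | 'food' | 'wall'
let cursor = { x: 0, y: 0 };
const placed = [];                                  // read by the simulation loop

canvas.addEventListener('mousemove', e => { cursor = { x: e.offsetX, y: e.offsetY }; });
canvas.addEventListener('click', () => placed.push({ tool: currentTool, ...cursor }));

// Called at the end of every animation frame, after drawing the ants.
function drawPlacementOverlay() {
  ctx.globalAlpha = 0.5;                            // translucent preview
  ctx.fillStyle = currentTool === 'food' ? 'green'
                : currentTool === 'nest' ? 'brown' : 'gray';
  ctx.fillRect(cursor.x - 5, cursor.y - 5, 10, 10);
  ctx.globalAlpha = 1.0;
}
```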
I started to iterate on the app with ChatGPT, giving it feedback on what worked and what didn’t, and it improved a bit. Then, however, the system seemed to start losing context: features that had worked stopped working randomly in new iterations. I restarted the process a few times with new prompts and with the o3-mini-high model, but the results were similar. The AI either went for an overly complex model that didn’t work, or kept it too simple and added shortcut rules that broke the idea of emergent behavior arising from simple rules. For example, the AI thought it would be great if an ant carrying food could head directly to the nest, with no need to follow any pheromone trail. Needless to say, that isn’t what I wanted.
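The difference is easiest to show in code. The snippet below is my own illustration of the two behaviors, not anything the models actually produced; senseHomePheromone is a hypothetical helper that would read the trail strength from the pheromone grid:

```javascript
// The shortcut the models kept reaching for: an ant carrying food simply
// turns straight toward the nest, a "magic compass" that kills the emergence.
function returnHomeShortcut(ant, nest) {
  ant.heading = Math.atan2(nest.y - ant.y, nest.x - ant.x);
}

// What I actually wanted: the ant only senses the local home-pheromone trail
// and steers toward whichever nearby direction smells strongest.
// senseHomePheromone(x, y) is a hypothetical helper returning trail strength.
function returnHomeViaPheromones(ant, senseHomePheromone) {
  let bestHeading = ant.heading;
  let bestStrength = -Infinity;
  for (const turn of [-0.4, 0, 0.4]) {              // sample left, ahead, right
    const h = ant.heading + turn;
    const strength = senseHomePheromone(ant.x + Math.cos(h) * 5,
                                        ant.y + Math.sin(h) * 5);
    if (strength > bestStrength) { bestStrength = strength; bestHeading = h; }
  }
  ant.heading = bestHeading;
}
```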
Next I tried a local Qwen2.5-Coder, running on Ollama and Open WebUI. This seemed promising at first, as Open WebUI even opened a browser preview frame next to the prompt while the model was coming up with the code. However, that approach fell apart when the model split the HTML and the JavaScript into different files, which the UI wasn’t made to handle. I might have retried with a refined prompt asking it to output all the code into the same file, but I think it had already fallen for one of the mistakes I had seen ChatGPT make. Also, I couldn’t find a way to restart the preview frame if I lost it due to, say, refreshing the window, which was a let-down.
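If I retry this, I’ll probably spell out the single-file requirement in the prompt. Something like the skeleton below is what I would ask for; it’s my own hand-written illustration, not actual Qwen2.5-Coder output:

```html
<!DOCTYPE html>
<html>
<head><meta charset="utf-8"><title>Ant colony (single file)</title></head>
<body>
  <canvas id="arena" width="600" height="400"></canvas>
  <script>
    // All of the JavaScript lives inline in the same file, so a chat UI's
    // preview pane can render it without resolving a separate .js file.
    const ctx = document.getElementById('arena').getContext('2d');
    ctx.fillStyle = 'brown'; ctx.fillRect(50, 195, 10, 10);   // nest
    ctx.fillStyle = 'green'; ctx.fillRect(540, 195, 10, 10);  // food
    // ...the actual simulation loop would replace these placeholders...
  </script>
</body>
</html>
```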
Finally, I tried the new Google AI Studio with the Gemini 2.5 Pro Experimental 03-25 model. Previously I had just given the models the task of creating the app in one go, but now I took a different approach. I told the model I wanted it to make me an ant colony simulator, but not yet: we should plan it first.
So, next I asked Gemini to explain to me, like I’m five, how ants in real life find food and then their way back to the nest. I adapted this approach from Zach Barth, founder of Zachtronics, my favorite puzzle game studio. He mentioned in some interview that when he wants to make a new thematic game about, say, chemistry, he checks the teaching material on that subject aimed at 10-to-15-year-olds, because that hits the sweet spot of being plausible enough for most people without getting overly complicated.
Gemini provided me with the description, I asked a few clarifying questions, and off we went again. This time I made the additional request to use the p5.js library, which is apparently very popular for this kind of app needing simple graphics. Unfortunately, Gemini’s result was the only one that did nothing at all when run! I quickly traced that to an invalid reference to the HTML element that was supposed to host the arena, though, and once that was fixed the ants started crawling again. But, long story short, Gemini made roughly the same mistakes that ChatGPT had made, like giving the ants a magical compass pointing to the nest, despite explicitly being told to have them follow the pheromone trails.
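For the record, the fix was of this flavor: p5.js has to attach its canvas to an element that actually exists on the page. The snippet below is my own minimal example rather than Gemini’s code, and the 'arena' id is just an illustrative name for the host element:

```javascript
// Minimal p5.js wiring: the canvas must be parented to an existing element,
// otherwise nothing shows up where you expect it.
function setup() {
  const canvas = createCanvas(600, 400);
  canvas.parent('arena');   // must match e.g. <div id="arena"></div> in the HTML
  background(220);
}

function draw() {
  // Placeholder: a single wandering dot instead of the full ant simulation.
  fill(0);
  noStroke();
  circle(random(width), random(height), 3);
}
```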
In the end this was all about the journey, not the end result, so I don’t mind not having my own ant colony simulator. Key takeaways:
- Copy-pasting generated code around is cumbersome. The generation should be integrated directly into your actual development environment. You can even do all of this with just your phone, but that is very cumbersome too.
- If you try to avoid coding, you’ll just end up debugging someone else’s (the AI’s) code. I, for one, usually prefer writing new code to debugging old code, and when you haven’t written the code yourself, you have no mental model of it, so debugging takes a lot of time.
- The technology is advancing fast. I have no doubt that some other model could already have created a proper ant colony simulator, or that the models I tried now will be able to do it in a year.
…and indeed I hope to revisit this topic in the future.