On the dangers of natural language programming
AI is taking the world by storm at the moment. Everywhere you look, there it is. Whether it's the latest model eking out another 0.5% on a benchmark, talk of an AI bubble that will imminently burst, a newspaper accidentally copy-pasting a chat model's response, or someone getting cancelled for even asking an AI a question, there is no doubt it is winding its way into our lives, one way or another.
I, too, have spent considerable time researching, testing, and using these tools. I've written papers, presented my thoughts, and built solutions. I’ve relied on them too much. And I’ve relied on them too little.
I think they are exceptional tools that have already transformed how we live, and will continue to do so. I don't want to talk about the future of AI, how it will change our profession, whether AGI is around the corner (I don't think so), or if the bubble will pop. I want to share a note of caution, because I think these tools can do more damage than we realise.
Why share these thoughts?
The reason I want to share my thoughts, unprovoked I might add, is that I hold a strong opinion that how we do things is often more important than the thing we actually do. A deed may be good, but the intent selfish. A solution may work, but the process to get there unethical. The result may be correct, but the method flawed.
Does a correct solution make the world better, or do the rigour and lessons of how we arrived at it?
In one of my talks on AI, I frame what I call the "Journey of Knowledge". I discuss how over time, we have shortened the distance from ignorance to knowledge.
- Cave paintings: Not portable. The distance to knowledge was long, both metaphorically and literally.
- Writing: Portable knowledge. Sharable, but it took effort to copy. The distance shortened.
- The Printing Press: Gutenberg's killer app. Copies were made and distributed globally. The distance shortened further.
- The Internet: Instant access. The world at our fingertips. The distance became almost non-existent.
In all these cases, the final step still required us to consume, understand, and interpret the knowledge we sought. We had to synthesise that information into a solution, decision, or action. The interface changed, but the final step remained ours to perform.
AI, particularly Large Language Models (LLMs), promises to skip this step. We no longer need to interpret knowledge; we can simply ask for a solution directly. The interface has changed again, but this time, the final step is outsourced to the machine.
What is the risk of skipping that step?
Dijkstra saw it coming
Edsger W. Dijkstra, a pioneer of computer science, had strong views on this. In his late-1970s essay On the foolishness of "natural language programming" (EWD667), he argued that using natural language as a programming interface is inherently flawed. He believed that natural language is ambiguous, imprecise, and context-dependent, making it unsuitable for programming tasks that require clarity and precision.
Today, we have natural language programming. In fact, we have natural language decision-making, action-taking, and problem-solving. We can ask an AI to write code, make decisions, solve problems, and take actions on our behalf using conversational English. It has never been easier to make mistakes and be misunderstood en masse.
It is interesting to think that in almost all aspects of life we have aimed for more precision, clarity, and less ambiguity. The phonetic alphabet, Morse code, programming languages, formal logic, mathematical notation: all of these systems were created to reduce ambiguity. We built them because natural language failed us in technical contexts.
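To make the ambiguity concrete, here is a small, hypothetical illustration (my own, not Dijkstra's): the same plain-English request admits several defensible readings, and code forces us to pick one.

```python
# A toy example of natural-language ambiguity: "remove the duplicates
# from this list of names" sounds like one instruction, but it is not.

names = ["Alice", "bob", "ALICE", "Bob", "alice"]

# Reading 1: remove exact duplicates, keeping the first occurrence.
seen = set()
unique_exact = [n for n in names if not (n in seen or seen.add(n))]
print(unique_exact)   # ['Alice', 'bob', 'ALICE', 'Bob', 'alice'] -- nothing removed

# Reading 2: treat names as duplicates regardless of case.
seen = set()
unique_folded = [n for n in names if not (n.casefold() in seen or seen.add(n.casefold()))]
print(unique_folded)  # ['Alice', 'bob']

# Both are "correct" answers to the same English sentence. A formal
# interface (a signature, a type, a spec) forces the choice to be made
# explicitly; conversational English quietly leaves it to chance.
```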
So why are we now so eagerly embracing an imprecise, ambiguous interface for tasks that require clarity and precision?
Broken windows and red robots
I don't like the word "lazy." It carries a moral judgment that implies we just don't care, that we act the way we do because of a moral shortcoming. I think a lot more goes into the decision to take a shortcut. I don't think it’s because we are "lazy" in the traditional sense. I think it’s because we are drowning.
Since the COVID-19 pandemic, the move to working from home and to being always available has lowered the barriers to engaging with work. With that, our boundaries blurred. We no longer needed to take time to transport ourselves to work. Meetings could be back-to-back because they were simply a click away. The need to "be at work" was no longer a physical requirement. We could be at work, at home, and everywhere in between. The "distance" to work had shortened, too. Couple this with the stress and uncertainty of the pandemic and its economic impacts, and what you get are people who are always "on."
Having driven South African roads for nearly two decades, I have seen a deterioration in driving standards. I see this as a proxy for how people behave when they feel overwhelmed. It used to be that skipping a red robot (traffic light) was a rare, late-night occurrence when the streets were empty. Now, you see it in broad daylight. You see it in heavy traffic. It is almost a weekly occurrence that a car behind me will swerve around me to take the turn while I am stopping at a stop street. People are cutting corners, skipping rules, and taking shortcuts more often.
Why? Partially, it's the "Broken Windows" theory: once you see three other people do it with no consequences, the social contract erodes. If everyone else is rushing, why should you wait?
But deeper than that, it is a symptom of a lack of space. We are impatient, rushed, and overwhelmed. When we are overwhelmed, tired, or rushed, the effort of following the rules feels like a luxury. We make a trade-off: we trade the effort of obeying the rule for the perceived benefit of maintaining momentum. It's not malice, or laziness, but rather a coping mechanism.
One might argue "no harm, no foul." If we are far from other cars and we didn't cause an accident, what is the harm?
The harm is the removal of predictability. We don't follow rules for the sake of rules. We follow them because they create a predictable environment. When we remove that predictability, we increase the cognitive load on everyone else. The roads become less safe, more stressful, and more chaotic. The vicious cycle continues.
The trade-off of effort
When we use AI to skip the thinking step, generating code we don't review or entrusting it with decisions we don't understand, we are skipping that stop street. We are doing it not for productivity, but because we are overwhelmed. We are trading the effort of thinking, reviewing, and understanding for the perceived benefit of speed and convenience. We have deadlines, meetings, tasks, backlogs, burnout. Taking the time to understand the problem from first principles, to question the assumptions, and to validate the solution feels like a luxury we don't have.
We are under immense pressure to deliver, grow, exceed the targets, and do more. We are told that "done is better than perfect" (a phrase I have come to loathe). We are told to maximise productivity. The economy squeezes 100% utilisation out of us, leaving 0% for thought, exploration, or care.
So when the AI offers that shortcut, we take it. Not because we are lazy, not because we can't write the code, make the decision, or solve the problem ourselves. But because we don't have the mental bandwidth left to do so; our environments have used it all up. We skip the stop street because we are already late.
We use AI to rush through the boring parts of our work to get to the result. We skip the thinking, the solving, the questioning, so we can save time and deliver the next task sooner. But what are we saving time for?
You may have heard the term "sustainable abundance". It refers to a future where resources are abundant and AI does all the work, leaving us with unlimited leisure time. But what did the great thinkers do with their leisure time? They thought. They solved problems. They created. We are rushing so that, one day, we can do the very thing we are avoiding now.
Not just the curmudgeoning of a nostalgic millennial
I am not arguing against the use of AI. I use it daily and it has transformed my work and productivity. I am arguing for mindfulness in how we use it. Yes, use it to remove the drudgery of your day-to-day tasks: extracting data from unstructured sources, finding syntax, churning through and structuring your messy notes, or generating documentation for your code (a favourite of mine). These are all worthy uses.
Don't let it do the thinking for you. Use AI to reclaim the time to formalise, to ponder, and to fail. We need to stop treating "thinking time" as "wasted time." And in a larger sense, we need to reclaim our environments. We need to value thoughtfulness over speed, quality over quantity, and understanding over output. Pause to have your coffee. Take the lunch break. Walk outside and feel the sun. Make your bed when you wake up. Think of these as red robots in your day. Moments to stop, reflect, and proceed with intention.
It won't be easy. I struggle with it myself. But sometimes we just need space. The time to stop at the red robot, look around, and proceed with intention, even if everyone else is speeding past you.