Now that AI can code...: Can it, though?
Whenever I see a phrase like “Now that LLM/GPT/AI can <VERB>…”, I do what I think anyone with even a little bit of knowledge about AI (and especially generative models) does and think, “Hold on there a minute. Can it <VERB>?” If I’m wrong, I’m sure someone will correct me, but my understanding is that the transformer-based large language generative models (cough ChatGPT cough) that are all the rage right now don’t really do anything except generate text passages.
So, no, it can’t <VERB>.
But, you might ask, “Isn’t generating text basically the same as having someone type out that same text?” To which I would respond, “That depends on whether you think the product is the only thing that matters. Otherwise, no.” Despite (potentially) producing the same product, the underlying understanding that determines that product is very different for humans and AI, and I think that ignoring (or not recognizing) this difference leads some folks to believe that AI is more capable than it is.
What is this difference? Human doing originates from a concept or an idea. This concept or idea forms the basis of what is made: for example, what text or code gets written to solve a particular problem or accomplish a particular effect. The doing of transformer-based models, by contrast, is entirely driven by how they go about knowing (for lack of a better word) and what they know, and they only really know one kind of thing: what words are usually found near each other. Now, they know a lot (and I mean a lot) of these nearby-word relationships, but they don’t know more than that. Using this as their basis for doing, though, means that each word in a generated text passage has been chosen based on how likely it is to be found there, near the other words in the passage.
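To make that concrete, here is a deliberately tiny sketch in Python. It is nothing like a real transformer (the corpus is made up and the “model” is just a table of word pairs), but it shows what generation driven purely by nearby-word relationships looks like:

```python
import random
from collections import defaultdict

# Toy illustration only: a table of which words have followed which,
# built from a made-up corpus. Real transformers are vastly more
# sophisticated, but the spirit is the same: generation is driven by
# which words have been seen near which.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Record, for each word, every word that has come right after it.
followers = defaultdict(list)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word].append(next_word)

# Generate a "passage" one word at a time, choosing each next word only
# by how often it has appeared after the previous one.
passage = ["the"]
for _ in range(8):
    choices = followers.get(passage[-1])
    if not choices:
        break  # dead end: this word was never seen before another word
    passage.append(random.choice(choices))

print(" ".join(passage))  # e.g. "the cat ate the mat the fish"
```

The output can look sentence-like without the program having any idea what a cat or a mat is, which is the point: the fluency comes from the word-pair statistics, not from any concept behind them.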
Let’s think about the statement, “Now that AI can code…” This is a sentiment I’ve seen around the Internet, lamenting (or perhaps crowing) that the professional coder is becoming obsolete. The success of a transformer-based model in producing a coherent product (in this case, functional code) is rooted in its training: what data (and how much, especially how much) it has learned from to establish those nearby-word relationships. If it has learned from a lot of code samples and examples, then it might be able to successfully guess and output a text passage that happens to work the way code does.
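That “happens to work” is worth dwelling on, and it is checkable. Here is a small sketch of the difference between text that is code and text that merely resembles it; the two strings are invented stand-ins for model output:

```python
# Invented stand-ins for generated output: one string happens to be
# valid Python, the other only looks code-shaped.
generated_passages = [
    "def add(a, b):\n    return a + b",
    "def add(a, b)\n    return a plus b",
]

for text in generated_passages:
    try:
        # compile() only checks syntax; running the code and getting the
        # behavior you actually wanted are further hurdles still.
        compile(text, "<generated>", "exec")
        print("compiles:", text.splitlines()[0])
    except SyntaxError:
        print("only resembles code:", text.splitlines()[0])
```

Passing a syntax check is only the first bar, of course; whether the passage does what you needed done is a separate question entirely.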
When it works, what it comes up with can seem downright magical. I think it’s important to understand, though, that the thing it is coming up with is more a kind of product than a particular or specific product: a generic rendition versus a bespoke one. I don’t think this is a distinction without a difference, either. It might be able to pull together code to create an app, but can it create your app? It has no conceptual understanding of its product or of coding in general. Code gets generated the way it does because the model has learned that the words found in a given code snippet have been encountered together.
Here’s my lukewarm take: the current generation of AI can’t (doesn’t) code; it can only generate text that resembles code. I don’t think it’s the coder who, ultimately, will need to worry about the relevance of their profession. There will be a period of pain, to be sure, as C-suiters and managers dream of the cost savings from getting rid of those pesky engineers while still being able to have a product. The reality, though, is that you can use AI to build a product, but you need coders to build your particular, specific product.
tl;dr/Conclusion:
Can text passages generated by a large language model sometimes resemble code? Sure. And is this resemblance enough that the generated text can be executed like code? Sometimes. But can we say that generating code-like text is the same as actually coding? I wouldn’t.