…probably not, no. Not today, anyway.

An excellent video from Paul Hudson at Hacking with Swift explains.

What is remarkable about the code ChatGPT (and, I assume, other large language models) can produce is how close it comes to a correct solution, and yet how wrong it ends up being.

All is not lost, of course. A human who knows how to write SwiftUI code could quickly fix these flaws. One potential benefit here is saving time by having the computer write all the boilerplate code for you: a fancy form of auto-complete. Another benefit is in educating new and existing programmers. ChatGPT can spit out code which compiles (sometimes) and follows the syntax of the language. For a student, or anyone new to a programming language, the ability to get code samples that (a) attempt to solve your specific problem and (b) are written in proper syntax could remove a barrier to entry to learning that language. Personally, I know the frustration of learning a new programming language and trying to write code I already know how to write in another language, while fumbling the syntax in the new one. It is infuriating.

On the other hand, there’s a huge risk surface where ChatGPT’s code introduces subtle bugs. If the person using ChatGPT to generate the code is incapable of discerning those bugs, or of properly testing for them, that will¹ lead to serious problems. What if the code from Paul’s demo video somehow skipped the final second in each minute? Or skipped one second every 20 minutes? That sort of bug could easily go undetected, and off-by-one errors are not at all rare in programming.
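To make that risk concrete, here is a minimal Swift sketch (hypothetical, not from Paul’s video) showing how easily a half-open range can silently drop the final second of a minute:

```swift
// Hypothetical countdown helper: builds the list of seconds a timer
// would display in one minute. Not taken from Paul's actual code.
func secondsInMinute() -> [Int] {
    // A plausible buggy version: the author forgets that ..< already
    // excludes the upper bound, so second 59 silently vanishes.
    // return Array(0..<(60 - 1))   // only 59 ticks

    // Correct version: 0..<60 yields 0 through 59, all 60 ticks.
    return Array(0..<60)
}

print(secondsInMinute().count)
```

The buggy variant still compiles, still runs, and still looks like a countdown; only a test that counts the ticks, or a person watching closely, would notice the missing second.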

  1. Yes, this will happen. Perhaps not straight away, but even the very best human programmers cannot catch every logic bug they might introduce. And I’ve yet to see evidence that automated testing / code validation is on par with, let alone superior to, human beings in this area. ↩︎