Don't Do It: Vibe Code Your Embedded System
Embedded designers and programmers tend to be a conservative lot: they love to delve into the latest technology, but they adopt it only after careful consideration. One of those technologies is vibe coding, an offshoot of generative AI. OK, maybe not an offshoot, but it's definitely based on using large language model (LLM) tools in AI-assisted software development.
Computer-generated programs by non-programmers have been a goal for decades, espoused in the creation of COBOL with its English-like statements and in 4th-generation programming tools that built apps by filling in a form. However, none comes close to chatbots, which will spit out a program in minutes in any programming language you ask for, one that will compile and run, usually, on whatever hardware you ask it to target. That is, of course, assuming that the chatbot doesn't lie, cheat, steal, or otherwise go rogue with some hallucination.
What is Vibe Coding?
In general, vibe coding uses a chatbot or an LLM integrated into a development tool, such as an integrated development environment (IDE), that can accept text prompts. One can ask for Ada code to blink an LED on a particular board, such as a Raspberry Pi, and the chatbot will respond with a complete program. You can even ask how to compile and link it.
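The kind of program a chatbot hands back for such a prompt tends to look like the sketch below. This is a hypothetical illustration, not actual chatbot output; it's written in Python rather than the Ada mentioned above, and the `set_pin` function is an invented stand-in for a real GPIO write so the code runs without hardware (a real answer would call a board-specific library such as RPi.GPIO).

```python
# A sketch of the kind of blink program a chatbot typically returns.
# set_pin is an invented stand-in for a hardware GPIO write so this
# runs anywhere; on a real Raspberry Pi, a library call would be used.
import time

def set_pin(pin: int, level: bool) -> None:
    """Simulated GPIO write: just report the pin state change."""
    print(f"GPIO{pin} -> {'HIGH' if level else 'LOW'}")

def blink(pin: int = 17, times: int = 3, period_s: float = 0.01) -> None:
    """Toggle the pin on and off the given number of times."""
    for _ in range(times):
        set_pin(pin, True)
        time.sleep(period_s / 2)
        set_pin(pin, False)
        time.sleep(period_s / 2)

blink()
```

The point isn't that this code is hard to write; it's that the chatbot produces something of this shape instantly, complete and plausible-looking, which is exactly what makes the approach so tempting.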
This may appear remarkable to anyone who hasn't had to do it from scratch. Even someone who has written code for an embedded system might be impressed. It's a big leap, though, from a simple example to a more complex embedded system with thousands of lines of code (or a lot more) and myriad modules and libraries.
For trivial requests, the results work well, since the training data of most LLMs has been stolen (taken) from internet sites where examples abound. Queries for functionality or coding examples often come back from a chatbot interaction as those same examples, sometimes with modifications. Still, developers need to remember how generative AI works.
Processing a chatbot prompt involves multiple steps, including breaking the natural language query into tokens and then building up the response one token at a time. The absence of actual thought and analysis is obscured by the impressive nature of the results.
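That tokenize-then-predict loop can be sketched in a few lines. This is a toy illustration only: `toy_tokenize` and `toy_next_token` are invented stand-ins (a real LLM uses a learned subword vocabulary and a neural network to score every candidate token), but the overall structure — split the prompt into tokens, then repeatedly append one predicted token until a stop condition — is the same.

```python
# Toy sketch of token-by-token generation. The tokenizer and "model"
# here are canned stand-ins, not a real LLM; only the loop structure
# mirrors how generative AI actually builds a response.

def toy_tokenize(prompt: str) -> list[str]:
    """Split a prompt into crude word-level tokens."""
    return prompt.lower().split()

def toy_next_token(context: list[str]) -> str:
    """Stand-in for the model: pick the next token from a canned table
    based only on the most recent token in the context."""
    canned = {"led": "blink", "blink": "loop", "loop": "<end>"}
    return canned.get(context[-1], "<end>")

def generate(prompt: str, max_tokens: int = 10) -> list[str]:
    context = toy_tokenize(prompt)
    output = []
    for _ in range(max_tokens):
        token = toy_next_token(context)
        if token == "<end>":          # stop condition
            break
        context.append(token)          # the response extends the context
        output.append(token)
    return output

print(generate("make the led"))  # -> ['blink', 'loop']
```

Nothing in that loop plans, verifies, or reasons about the program being emitted; each step just picks a likely next token, which is why fluent output is not evidence of analysis.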
Assuming the generative AI tools don't make major mistakes or hallucinate, there's still the issue of prompt engineering. Systems allow follow-up prompts to clarify a response or ask for changes, but one needs to keep in mind the imprecise nature of most text-based or verbal interactions. It's why legal documents assign precise meanings to words and phrases, and why the syntax and semantics of programming languages are more precise still.
One only needs to look back at dictation machines and the speech-to-text tools of the past to understand the limitations of today's prompt-driven AI tools. Voice recognition has improved greatly, but even at 99.99% accuracy, corrections still must be made. And making them using voice commands or prompts can be tedious at best. Even the best of today's systems needs a person to oversee the transcripts it produces from Zoom calls and webinars.
Likewise, many complex systems are designed or specified using the Object Management Group's Unified Modeling Language (UML). UML is highly structured and a far cry from AI prompts. Here, the level of specification is more precise, though often limited or incomplete. Trying to get AI prompts up to even this level of specification, which embedded applications need, is, at least to me, a lost cause.
Resist the Push for Vibe Coding from Above
Fortune magazine’s “MIT report: 95% of generative AI pilots at companies are failing” highlights the challenges of using generative AI for projects in general, including those trying to replace people with agentic AI tools. Some of these projects might be critical corporate projects, but one hopes not.
A useful reminder to the powers that be: Something as basic as replacing a compiler or IDE is already a critical and difficult change, never mind jumping on the AI bandwagon. Even switching suppliers of software tools and middleware has implications, but those are minor compared to adding generative AI to the mix without considering how it integrates into the development process.
The inclusion of generative AI in IDEs and other development tools has been rapid, but it's being pushed more by the suppliers than by developer demand. It's true that developers want to try these tools and will use them if they meet their requirements. However, being lured by the kind of chatbot feedback provided by search engines overlooks how challenging it is to obtain useful and accurate results.
Beware of Integrating Agentic AI
Vibe coding is just one aspect of AI that embedded developers must contend with. Another is agentic AI, which essentially puts AI models in the driver's seat, literally in some cases, such as cars and robots. These systems are designed to operate autonomously with little or no direction from a human.
Vibe-coded agentic AI would seem to be the holy grail of programming, especially when it comes to things like robotics. It will be useful at a high level, but beware of relying on repeated AI prompting as a stand-in for agile-style development: edge cases that may be critical to safety and security are likely to have big holes because of the imprecise nature of prompt coding.
Not All AI is Bad
AI models, from convolutional neural networks (CNNs) to chatbots, have been integrated into applications for decades (OK, maybe only two), but that is much different from the vibe coding now being used to generate applications.
If you and your company aren’t liable for your embedded code or product, then have at it now. Otherwise, you might want to be more conservative in adopting and integrating generative AI tools like vibe coding.
Generative AI and the latest AI incarnations are useful; however, they should be used with care and understanding. Too many issues crop up around reliability, repeatability, safety, and security to jump on the AI bandwagon without looking at what one is jumping on, as well as where it's going.