A compiler has mostly fixed rules for translation. English, on the other hand, is often ambiguous: there are many ways to implement something based on a verbal description.
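To make the ambiguity concrete, here is a made-up example: a verbal spec like "sort the users by name" admits at least two implementations that both look correct, and an AI "compiler" could silently pick either one (the function names and data are hypothetical, just for illustration):

```python
# Two correct-looking implementations of the same verbal spec:
# "sort the users by name"

users = ["alice", "Bob", "carol"]

# Reading 1: case-sensitive sort (uppercase letters sort first in ASCII)
def sort_case_sensitive(names):
    return sorted(names)

# Reading 2: case-insensitive sort
def sort_case_insensitive(names):
    return sorted(names, key=str.lower)

print(sort_case_sensitive(users))    # ['Bob', 'alice', 'carol']
print(sort_case_insensitive(users))  # ['alice', 'Bob', 'carol']
```

Neither implementation is wrong with respect to the English description; the spec simply doesn't say which one was meant.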
Programming by using the AI as a "compiler" would likely lead to many bugs that are hard or impossible to trace without knowing the underlying implementation. And hitting compile again may produce an accidentally correct implementation, leaving you none the wiser about why the test suddenly passes.
It's fine as an assistant that generates boilerplate code, warns you about some bugs and issues, and maybe produces a baseline implementation.
But by the time you've described exactly what you want and how, you may as well just write some higher-level code.
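A rough sketch of this point, with an invented requirement: once the English is precise enough to be unambiguous, it is about as long as the high-level code it describes.

```python
# Hypothetical spec, written precisely enough to be unambiguous:
# "Given a list of order amounts, return the sum of the amounts
#  that are strictly greater than 100."
def total_large_orders(amounts):
    # The code is barely longer than the precise English spec above.
    return sum(a for a in amounts if a > 100)

print(total_large_orders([99, 150, 200, 50]))  # 350
```

The fully disambiguated spec ("strictly greater", "sum", "list of amounts") has already done most of the work the code does.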
> A compiler has mostly fixed rules for translation.
Some compilers are simple, while others are complicated. An AI compiler would of course be very complicated, but it would still have "fixed rules"; it's just that those rules would be decided by the model itself. If you are a software dev, you are also an English-to-xyz-language compiler: you do what your client tells you to do more or less correctly, right? Junior devs do what senior devs tell them to do roughly correctly, right? An AI compiler would be the same thing.
> Programming by using the AI as a "compiler" would likely lead to many bugs that are hard or impossible to trace without knowing the underlying implementation.
Bugs would be likely if your AI compiler was dumb. The probability of bugs would drop drastically if your AI compiler were trained more, or on better data.
> It's fine as an assistant that generates boilerplate code, warns you about some bugs and issues, and maybe produces a baseline implementation.
That is the state of AI today: what you are describing are the capabilities of current AI models. But I don't see how this is a criticism of the idea of AI compilers itself.
> But by the time you've described exactly what you want and how, you may as well just write some higher-level code.
Again: the smarter your model, the more you can abstract.