To be clear, I like AI; I’m a fan. I think it will give humans marvelous insight into many things, but I don’t think it will make software better. I also don’t trust the current AI companies, for obvious reasons (they seem to be run by industry plants), but this post will just be about the technical aspects.
I’m a programmer who used to try to design better tools, thinking better tools lead to better software and more happiness. But when I started getting more serious about designing an OS, I realised the quality of software actually seems to degrade in correlation with better tooling.
BASIC games still work after decades; C++ games break between Windows versions. MenuetOS is reliable enough to run industrial control panels on; other operating systems are steaming piles of shit that nobody would trust attached to a motor.
And there are probably a few reasons for this:
- High-level languages encourage absolute n00bs from the university system to present themselves as qualified experts without learning any fundamentals (they don’t seem to teach things like assembler so much anymore, resulting in general incompetence)
- High-level languages have basically solved every problem already, usually in a very mediocre one-size-fits-all way, which encourages people to think less about ideal solutions
- Toolchain issues can significantly complicate design, deployment, debugging and other things in ways that just aren’t a problem in C or assembler
- Languages with security features appear to give people a false sense of security: you still have to actually think about security problems, not just “muh code is impervious to buffer overflows”
I will still release some high-level tools, AI toys and the like – but more because I enjoy these things than for technical or marketing reasons. For now I’m finding more enjoyment in replicating the old ways of development on newer/faster hardware, and I’m just not having a lot of problems using C.