The end of classical computer science is coming, and most of us are dinosaurs waiting for the meteor to hit, says Matt Welsh.

"I came of age in the 1980s, programming personal computers like the Commodore VIC-20 and Apple IIe at home. Going on to study computer science in college and ultimately getting a PhD at Berkeley, the bulk of my professional training was rooted in what I will call 'classical' CS: programming, algorithms, data structures, systems, programming languages."

"When I was in college in the early '90s, we were still in the depth of the AI Winter, and AI as a field was likewise dominated by classical algorithms. In Dan Huttenlocher's PhD-level computer vision course in 1995 or so, we never once discussed anything resembling deep learning or neural networks--it was all classical algorithms like Canny edge detection, optical flow, and Hausdorff distances."

"One thing that has not really changed is that computer science is taught as a discipline with data structures, algorithms, and programming at its core. I am going to be amazed if in 30 years, or even 10 years, we are still approaching CS in this way. Indeed, I think CS as a field is in for a pretty major upheaval that few of us are really prepared for."

"I believe that the conventional idea of 'writing a program' is headed for extinction, and indeed, for all but very specialized applications, most software, as we know it, will be replaced by AI systems that are trained rather than programmed."

"I'm not just talking about CoPilot replacing programmers. I'm talking about replacing the entire concept of writing programs with training models. In the future, CS students aren't going to need to learn such mundane skills as how to add a node to a binary tree or code in C++. That kind of education will be antiquated, like teaching engineering students how to use a slide rule."

"The shift in focus from programs to models should be obvious to anyone who has read any modern machine learning papers. These papers barely mention the code or systems underlying their innovations; the building blocks of AI systems are much higher-level abstractions like attention layers, tokenizers, and datasets."

This got me thinking: Over the last 20 years, I've been predicting AI would advance to the point where it could automate jobs, and it's looking more and more like I was fundamentally right about that, and all the people who pooh-poohed the idea over the years in conversations with me were wrong. But while I was right about that fundamental idea (and right that there wouldn't be "one AI in a box" that anyone could pull the plug on if something went wrong, but a diffusion of the technology around the world like every previous technology), I was wrong about how exactly it would play out.

First, I was wrong about the timescales: I thought it would be necessary to understand much more about how the brain works, and to work algorithms derived from neuroscience into AI models, and, looking at the rate of advancement in neuroscience, I predicted AI wouldn't reach its current state for a long time. While broad concepts like "neuron" and "attention" have been incorporated into AI, practically no specific algorithms have been ported from brains to AI systems.

Second, I was wrong about the order: I thought "routine" jobs would be automated first, and "creative" jobs last. It turns out that what matters is "mental" vs "physical". Computers can create visual art and music just by thinking very hard -- it's a purely "mental" activity, and computers can do all that thinking in bits and bytes.

This has led me to ponder: What occupations require the greatest level of manual dexterity?

Those should be the jobs safest from the AI revolution.

The first that came to mind for me -- when I was trying to think of jobs that require an extreme level of physical dexterity and pay very highly -- was "surgeon". So I now predict "surgeon" will be the last job to get automated. If you're giving career advice to a young person (or you are a young person), the advice to give is: become a surgeon.

Other occupations safe (for now) against automation, for the same reason, would include "physical therapist", "dentist", "dental hygienist", "dental technician", "medical technician" (e.g. those people who customize prosthetics, orthodontic devices, and so on), and so on. Also "nurse", at least those who routinely do physical procedures like drawing blood.

Continuing in the same vein but going outside the medical field (pun not intended but allowed to stand once recognized), I'd put "electronics technician". I don't think robots will be able to solder any time soon, or manipulate very small components, outside of initial assembly, which does seem to be highly amenable to automation. But once electronic components fail, to the extent it falls to people to repair them rather than throw them out and replace them (which admittedly happens a lot), those humans aren't going to be replaced any time soon.

Likewise "machinist" who works with small parts and tools.

"Engineer" ought to be ok -- as long as they're mechanical engineers or civil engineers. Software engineers are in the crosshairs. What matters is whether physical manipulation is part of the job.

"Construction worker" -- some jobs are high pay/high skill while others are low pay/low skill. Will be interesting to see what gets automated first and last in construction.

Other "trade" jobs like "plumber", "electrician", "welder" -- probably safe for a long time.

"Auto mechanic" -- probably one of the last jobs to be automated. The factory where the car is initially manufacturered, a very controlled environment, may be full of robots, but it's hard to see robots extending into the auto mechanic's shop where cars go when they break down.

"Jewler" ought to be a safe job for a long time. "Watchmaker" (or "watch repairer") -- I'm still amazed people pay so much for old-fashioned mechanical watches. I guess the point is to be pieces of jewlry, so these essentially count as "jewler" jobs.

"Tailor" and "dressmaker" and other jobs centered around sewing.

"Hairstylist" / "barber" -- you probably won't be trusting a robot with scissors close to your head any time soon.

"Chef", "baker", whatever the word is for "cake calligrapher". Years ago I thought we'd have automated kitchens at fast food restaurants by now but they are no where in sight. And nowhere near automating the kitchens of the fancy restaurants with the top chefs.

Finally, let's revisit "artist". While "artist" is in the crosshairs of AI, some "artist" jobs are actually physical -- such as "sculptor" and "glassblower". These might be resistant to AI for a long time. Not sure how many sculptors and glassblowers the economy can support, though. Might be tough if all the other artists stampede into those occupations.

While "musician" is totally in the crosshairs of AI, as we see, that applies only to musicians who make recorded music -- going "live" may be a way to escape the automation. No robots with the manual dexterity to play physical guitars, violins, etc, appear to be on the horizon. Maybe they can play drums?

Finally, "magician" is another live entertainment career that requires a lot of manual dexterity and that ought to be hard for a robot to replicate -- one for those of you looking for a career in entertainment. Not sure how many magicians the economy can support, though.

The end of programming - Matt Welsh

#solidstatelife #genai #codingai #technologicalunemployment


in reply to Wayne Radinsky

"For example as people become smarter with the use of computers" - interesting, I have been thinking lately that people are getting dumber and dumber using computers... As a consequence of the aforementioned ladder of abstraction (and simply human evolution leading to the reversed Flynn effect) and the development of the UI on computers (including smartphones) in general.
in reply to Wayne Radinsky

People, at the very least, become deskilled by working with this stuff.
in reply to Wayne Radinsky

Yes, that's one of the main reasons for the reversed Flynn effect. They cite attention deficit and harmful chemicals as well.
in reply to Wayne Radinsky

“For example as people become smarter with the use of computers” - interesting, I have been thinking lately that people are getting dumber and dumber using computers


Yeah, I think I phrased that wrong. I wasn't referring to the genetics of brains making people smarter, but rather that the combination of a brain with the aid of a computer makes a person smarter. But I do agree that growing dependence on computers could possibly result in reduced brain smartness, though I kinda doubt it. Who knows.

As interesting as the Flynn effect studies are, I'm not sure you can depend too much on their results. Pretty small sample sizes over a relatively short period of time:

Flynn effect and its reversal are both environmentally caused
https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6042097/

in reply to Wayne Radinsky

One thing I would caution about the role of AI and computers. We tend to think of them competing with humans and their brains. But that isn't necessarily the main change that is happening. Rather, what may be happening is that the context is changing. Computers and AI might bring new and different ways of doing things. So, it's the new context that humans will deal with that is at issue.
in reply to Wayne Radinsky

AI output is derivative.
AI can't do anything a human can't do better, faster, for less.
in reply to Wayne Radinsky

One thing it can do is image recognition - which is going to be very useful for blind people. Phones will be able to run the model and describe whatever it's pointed at without needing a network connection.
in reply to Wayne Radinsky

Yes of course a human aide can also do that, but having the freedom to manage without one will be an important option.
in reply to Wayne Radinsky

A human can do that better.

When people say "AI is making this possible" they really mean "This is stuff so worthless to us, that we won't hire a person to do it".

That is literally all it means.

And when coding jobs are axed because of AI, it is still the same message "This work is so worthless to us, that now that we can get not-a-person to do it, y'all are out of here".

They don't care about the quality. They foolishly believe the AI will improve, despite eating its own diarrhetic shit from here on out.

in reply to Wayne Radinsky

the combination of brain with the aid of a computer makes a person smarter

I'm not sure this is a notion that is generally applicable -- like in many things, real-life scenarios are complex. "SQL" certainly made people vastly more productive, and the use of SQL instilled a good practical understanding of Set Theory logic. But, once you move up the ladder of abstraction from this already very abstract database interface, you are bound to lose a great deal of understanding of what it is you are doing. That's not smart.

in reply to Wayne Radinsky

Tom, no that's not quite right. SQL is a good example. Before relational database theory, people were guessing at how to organize database schemas. With a theory of "normalization", technical people began to organize data differently, systematically, around the idea of "normal forms". Similarly, with the advent of object orientation, tech people began to organize procedures around the idea of message passing. Functional programming taught people how to control and manage "side effects." These techniques made people smarter about representing data and procedural organization, and made a huge difference in simplifying programs. And btw, not to be too pedantic about it, relational theory is based on relational algebra, but I'd have to dig through the cobwebs of my mind to say how that is more than set theory logic.
in reply to Wayne Radinsky

Tom, OK, you meant set operations such as join, union, intersection, etc. So the SQL abstractions are: selected items, sources, set operations on those sources, and a selection condition. Thus, rather than writing procedural code to query the database, you just declare the desired values of these four parameters. This works because the underlying standard query engine code only has to be written once for everyone. So, the relational database idea is a good example of computer inspired abstraction. (But this also required another abstraction - functional dependency, which is achieved by normalizing the data.)
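
To make that concrete (a minimal sketch, using a hypothetical customers/orders schema purely for illustration), the four parameters line up with the clauses of one declarative statement:

    SELECT c.name, o.total                                  -- selected items
    FROM orders o JOIN customers c ON o.customer_id = c.id  -- sources, combined by a set operation (join)
    WHERE o.total > 100                                      -- selection condition

No loops, no index lookups, no file handling: you state what you want, and the query engine code, written once for everyone, works out how to get it.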
in reply to Wayne Radinsky

So, the relational database idea is a good example of computer inspired abstraction.

This was my initial point. You are missing what followed. If we move upward from this level of abstraction, you lose all of this: set operations such as join, union, intersection, etc., and the SQL abstractions themselves: selected items, sources, set operations on those sources, and a selection condition.

These are the basic operations, but things can get significantly more complicated from these basic operations as well. Who is going to be able to work out what they need to do to get what they want when they have no basic understanding? They won't have the mental machinery in place, nor the symbols to employ.

in reply to Wayne Radinsky

... I guess I am positing that there are "ideal" levels of abstraction to work within for various kinds of problems. Once we move away from that level, we lose a certain kind of understanding - and most likely, some portion of our ability to solve the problem at hand.
in reply to Wayne Radinsky

Just to clarify, the part about "The end of classical computer science" was written by Matt Welsh, who wrote the linked-to blog post. The part about "cerebral computation" vs "sensory/motor system computation" was written by me. That's my commentary/addendum.

I think the mistake the original author (Matt Welsh) made was putting in timelines. He puts in time points like "I am going to be amazed if in 30 years, or even 10 years, we are still approaching CS in this way." If there's one thing I have learned from 2 decades in "futurology", it's that predicting what will happen is a lot easier than predicting when something will happen, or even in what order.

It's perfectly fine to say AI can't do software "design", or that it can't do "maintenance and upkeep" right now but it's a mistake to say it can't do those things forever. 10 years ago people thought AI couldn't write in natural language or generate art -- heck, people 10 years ago thought computers beating humans at the Chinese game of Go was decades away, if it was ever going to happen. (It happened in 2016.)

Someday, AI will design software, write software, and do all the maintenance and upkeep. But it's foolish to try to predict when. AI seems to have been racing ahead in the last few years, catching everyone by surprise with advancement after advancement. It's tempting to extrapolate this fast rate of change indefinitely out into the future. But the AI field might get "stuck" for a time, and have to wait for advancements in computing or algorithms. It's happened before.

in reply to Wayne Radinsky

But we can't afford to keep it powered right now, and our budget for energy use keeps shrinking.

If it is smothered in the crib, none of that will happen.

in reply to Wayne Radinsky

which tech is going to win the battle for computing resources (including energy): (a) so-called AI; (b) crypto-currencies?
in reply to Wayne Radinsky

AI will design software, write software, and do all the maintenance and upkeep. But it’s foolish to try to predict when. AI seems to have been racing ahead in the last few years

AI is like an invasive weed in a plowed field. The "crop" is at a disadvantage. We have got to find our area of superiority, or we will get overgrown.

in reply to Wayne Radinsky

While “musician” is totally in the crosshairs of AI, as we see, that applies only to musicians who make recorded music – going “live” may be a way to escape the automation. No robots with the manual dexterity to play physical guitars, violins, etc, appear to be on the horizon. Maybe they can play drums?

I'm still not sure of this. The machines can "make" music, but can they make it meaningful to humans... you know, something like "Pinball Wizard".

in reply to Wayne Radinsky

The machine would somehow have to "know" what is meaningful and what is not.
in reply to Wayne Radinsky

Some people sneer at AI gobbling up resources. Ha, look at energy consumption around the world.

The human brain uses about 25 watts of electricity, I'm told, and that's because it's very miniaturized. And there are a few billion people times 25 watts. So, people gobble up quite a bit of energy too. Won't be long before computers start miniaturizing too.
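
(Rough back-of-the-envelope arithmetic using those figures: roughly 8 billion people times 20-25 watts each works out to something like 160-200 gigawatts of continuous power devoted to human brains worldwide.)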

All this worry and sneering about computers is probably a bit misplaced. Like Wayne says, it's pretty hard to predict the future. I'm reminded of the story about the farmer driving his horse-drawn wagon and passing a newfangled Model A Ford stranded by the side of the road with a flat tire, and the farmer laughs as he passes and says, "get a horse."

in reply to Wayne Radinsky

The human brain uses about 25 watts of electricity, I'm told, and that's because it's very miniaturized.

Yet humans are extremely wasteful of "brainpower" or potential brainpower. Look what a crow can do, using about two orders of magnitude less.

in reply to Wayne Radinsky

The brain is more or less optimized for adaptability to an environment, to survive long enough to reproduce itself. This organic process has created vast diversity over the eons. Computers just happen to be added to the mix at this point in the ongoing creation of diversity.
in reply to Wayne Radinsky

Yes, but will their "evolution" be more or less successful?
in reply to Wayne Radinsky

The brain uses exactly 0 W of electricity.

If you fail to understand that hard fact, you're going to make bad decisions, like this fella:

https://newrepublic.com/article/180487/balaji-srinivasan-network-state-plutocrat

in reply to Wayne Radinsky

It is always overlooked just how utterly stupid tech execs are.
They are worse than pond scum.
in reply to Wayne Radinsky

Andreas, that seems to be important to you, but it isn't clear what your point is.
in reply to Wayne Radinsky

Like our muscles, our brains consume a lot of energy every day—approximately 20 watts to maintain spontaneous neuronal activity.
in reply to Wayne Radinsky

And exactly zero of those watts is electricity.

Do you understand basic biology?

in reply to Wayne Radinsky

What is your point? You eat food, metabolize it, and the brain consumes part of the energy. You don't like the use of watts as a casual conversational term? Suggest a better way then.
in reply to Wayne Radinsky

I was pointing out that you were spouting gibberish.

A supercomputer emulating a brain a hundredth the size of an ant's is a completely different sort of problem than 1000 humans eating a sandwich.

AI is a wasteful and stupid technology, and we cannot afford to use any electricity on it, because we are already consuming more electricity than is sustainable.

Handwaving about "miniaturization" just underscores how poor your understanding of any of this is.

A brain has tens of billions of neurons, and trillions of synapses.

The largest AI has the equivalent of maybe 10,000 neurons, and uses megawatts of power to run it all.

That has nothing to do with "miniaturization".

Actual neurons are just magnitudes more efficient than their emulations, and there is no reason to think this will ever change.

in reply to Wayne Radinsky

Ok, you thought I was talking nonsense. I think my comments made some sense, but whatever.

Human brains have about 86 billion neurons, and probably hundreds of trillions of interconnections. But keep in mind that much of that is devoted to sensory/motor and bodily regulatory information processing. If you narrow that down to processing language or images, then current AI machines do begin to approach brain connectivity complexity. So, you may be wrong along this dimension.

AI sometimes uses billions of "neurons", well within range of doing useful intelligent things.

The comparison between transistors and neurons has everything to do with miniaturization. I have no idea why you say it doesn't. It is true that organic neurons are vastly more efficient than transistors and the use of linear algebra to compute over them. But that is far from the larger story.

Predicting that AI capacity will not change is about like Bill Gates saying we'll never need more than 640k memory, or DEC's Olsen saying that we'll never have computers in the home. Hahaha.

So, your point was that current AI is nowhere near brain complexity, and that it is gibberish to think otherwise. And you say it is unrealistic to think AI will ever approach brain capacity. And, apparently, you think it is wasteful to pursue the AI line of research.

I think I understand your point now.

in reply to Wayne Radinsky

Also, keep in mind that current AI costs are mostly in developing the foundation models. Once that is done, the cost and energy consumption of actually using AI is very low. The next few generations of the iPhone will do amazing intelligence on the phone - on the phone - get it?
in reply to Wayne Radinsky

Intelligence can be thought of as information compression. Much of human intelligence is a result of culture having encoded a compressed scheme of reality in such a way that people can learn the shared encoding scheme. Similarly with AI. Moral of the story is that people who mock and sneer at AI for being overhyped and too resource intensive might not understand what the bigger picture is.

True, in some cases "get a horse" might be the answer, but evolution has many tricks up its sleeve, and one should not keep all their eggs in one basket of contempt. So to speak.
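
One way to make the compression framing concrete, if it helps: a predictive model that assigns probability p to what actually happens next can encode it in about -log2(p) bits, so better prediction literally means shorter descriptions. That's the usual argument for treating prediction and compression as two sides of the same coin.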

in reply to Wayne Radinsky

I'm not sure how we got onto the relevance of miniaturization, but looking at the numbers for a current "system on a chip", it seems processors are beginning to scale up toward brain size. That sure does put a spin on what we think of as classical programming. Just imagine you are a programmer and you have a computer at your disposal the size of current mobile phone computers. Billions of transistors running trillions of operations per second! Kinda boggles the mind:

The A17 Pro boasts 19 billion transistors and a 6-core CPU, with two high-performance cores ..., and four high-efficiency cores.

The 16-core neural engine can process up to 35 trillion operations per second, .... There are also additional dedicated engines for [graphics].

in reply to Wayne Radinsky

exactly zero of those watts is electricity

Technically there is an amazing amount of actual electrical activity happening in the brain. Any movement of ions across a gradient can be termed “electrical activity”. More specifically, neuronal action potentials are formally considered “electrical activity”, at least by the definition that the NIH uses.

https://www.ncbi.nlm.nih.gov/books/NBK546639/

in reply to Wayne Radinsky

The 16-core neural engine can process up to 35 trillion operations per second, … There are also additional dedicated engines for [graphics].

One of the fascinating things about the brain is that it functions in a variety of modalities, some of which operate in parallel. Then there is the hybrid digital/analog nature of the brain as well. We are still only scratching the surface of how all this actually works.

in reply to Wayne Radinsky

The brain does generate electrical potentials, yes. That's not the same as consuming electricity, and very much not the same as drawing power from the grid, which is what would be required for the stupid comparison the tech-lords are trying to float right now.
in reply to Wayne Radinsky

FFS, AI hype-jockeys are always trying to dishonestly exaggerate their bullshit.

Like, six months ago they were really desperately trying to compare Parameters to Neurons.

Now apparently it's comparing transistors to neurons.

Both are completely false comparisons.

The only reasonable comparison is between Layers and Neurons.

And even then, it's clear that one layer is less powerful than a neuron, although whether it's 7 times or 70 times, we don't really know.

Bottom line: As I said, the heaviest AI is not even a tenth the power of the brain of an ant.
In other news, no, LLMs do not understand the bullshit they spout.

And in yet other news, AI providers are downgrading expectations across the board, because this absolute bullshit technology is only generating profits for the hardware manufacturers, and that too will not keep up once the bubble bursts.

in reply to Wayne Radinsky

"LLMs do not understand the bullshit they spout."

LLMs have met their match here.

"the heaviest AI is not even a tenth the power of the brain of an ant."

Are you measuring ant power in watts? Lol.