(This article is Part-2 in a series; for Part-1, go here.)
The company is called Final Spark.
Their innovation?
—Tiny synthetic human brains hooked up to electrodes…to solve the problem of powering AI, like ChatGPT…because AI requires lots of power to run.
The even more explosive claim is: these tiny brains in the lab have their performance improved by a SYSTEM OF REWARDS AND PUNISHMENTS.
What?!
For “good work,” the brains get injections of pleasure-inducing dopamine. For “bad work,” random chaotic disturbing electrical stimulation.
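To picture what that kind of scheme looks like in principle, here is a minimal sketch of an operant-conditioning feedback loop, written in plain Python. This is not FinalSpark's actual system, which has not been published in this kind of detail; every function name below (read_response, deliver_reward, deliver_punishment) is a hypothetical stand-in for lab hardware, and the numbers are arbitrary.

```python
import random

# Hypothetical sketch of an operant-conditioning loop.
# NOT FinalSpark's protocol; names and thresholds are illustrative only.

def read_response(stimulus: float) -> float:
    """Stand-in for measuring the tissue's electrical output.
    Here it is just a noisy echo of the input stimulus."""
    return stimulus + random.gauss(0, 0.1)

def deliver_reward() -> None:
    """Placeholder for the 'reward' channel (e.g., a dopamine dose)."""
    print("reward: dopamine-like signal delivered")

def deliver_punishment() -> None:
    """Placeholder for the 'punishment' channel (e.g., chaotic stimulation)."""
    print("punishment: chaotic electrical stimulation applied")

def training_loop(target: float, trials: int = 5, tolerance: float = 0.2) -> None:
    """Reward responses that land close to the target; punish the rest."""
    for trial in range(trials):
        response = read_response(stimulus=target)
        error = abs(response - target)
        if error < tolerance:
            deliver_reward()
        else:
            deliver_punishment()
        print(f"trial {trial}: response={response:.3f}, error={error:.3f}")

if __name__ == "__main__":
    training_loop(target=1.0)
```

The sketch only shows the shape of the feedback loop: a reward or punishment is delivered based on how the tissue responds. Whether the thing on the receiving end can actually register reward and punishment is exactly the question raised below.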
Yet, Final Spark claims these little brains aren’t alive or conscious.
If that’s true, how can reward-punishment work at all?
Can you reward a rock or a piece of a wall?
Something is going on here. I don’t know what the hell it is. Maybe the reward-punishment is just a simulation; it isn’t authentic. Or maybe it isn’t actually working. But if it is real and operating and makes these little brains perform more efficiently, then the brains are alive at some level.
Unless you can train an old drill in your garage to shine brighter and cut faster by smearing it with dopamine.
Call me crazy, but I don’t like the idea of tissue from brains like mine or yours captured in a lab undergoing operant conditioning.
I’m sure technocrats couldn’t care less. But they don’t care about a long list of human concerns.
I had a rather long conversation with ChatGPT about the reward-punishment system applied to the tiny brains.
I’m printing it here because I want you to see the effect of GPT when it’s engaged in a way that requires it to “think and reason.” Not just spit back data.
Note the questions I’m asking and its responses as we go back and forth. Keep in mind that it’s responding at top speed. With no lag.
Aside from general explanations that don’t satisfy, I have no idea how GPT does this. And I suspect there are many software pros out there who don’t really have any idea, either.
People who underestimate the power of AI or think it really can’t take over MANY functions of society don’t know what they’re talking about.
For example, I have no doubt that with a bit of reengineering and a few human techs for the installation, ChatGPT could handle all the paperwork and record keeping of 10 giant corporations all at once—and displace a whole lot of employees before they knew what hit them.
OK. Now follow the conversation I had with GPT about reward-punishment, and catch the distinctions it’s able to make, instantaneously, without blinking or raising a sweat: