Why more about AI? Because, even with the ceaseless, stupendous mainstream hype about AI, most folks aren’t very clear about the situation. And a second, more proximate reason is Trump’s effort to pressure the “AI project” to reconsider its agenda. Instead of (rather feebly) trying to incorporate guardrails to align AI with human values and aspirations, Trump has, by executive order, directed AI to align with what Washington dictates. Trump orders AI to align with Trump.
The current most frequent critical concerns about AI revolve around AI taking over many jobs while dangerously speeding up others, AI making hallucinatory or otherwise dumb mistakes that hurt people, AI being misused by nefarious actors with potentially horrible impact, AI energy use demolishing the environment and accelerating global warming, or AI going rogue and battling humanity to our death. The dangers are real. The first four are already happening. But still, how real?
Should we all take note of these acknowledged dangers, and indeed of all the concerns anyone has? Should what ecologists call the precautionary principle regarding actions with huge possible downsides cause extreme AI caution? Plenty of pundits and AI industry insiders look around and say they don’t see such problems. For some reason, perhaps their bottom line and their evolved identities, they fail to note that the AI bearing on these potential dangers is not what AI is now. It is what AI will be later. It is not what AI now does, but what AI will do. What matters today, therefore, is the rate of AI’s growing social use and, even more so, of its growing functionality. Not where it is, but where it is going.
Critics who agree with that simple observation note that AI’s growth is absolutely unprecedented on both counts. AI has spread through society a lot faster than any earlier technology. AI’s functionality, or the things it can do, has also grown and diversified incredibly quickly. More concerning, regarding both societal spread and operational functionality, AI’s rate of advance is accelerating. AI isn’t just moving fast. AI’s rate of moving fast is increasing.
When you hear about Artificial Intelligence becoming AGI, or artificial general intelligence, where it can then do everything humans can do and more, and then you hear about AGI continuing on to become ASI, or artificial super intelligence, where it can do things humans haven’t even dreamed of doing and we won’t even be able to comprehend, those projections anticipate that the rate of change of its rate of change will continue to grow.
Some reply that no, the concerns are hyperbolic paranoia. They won’t happen because the AI industry is running out of human data to train ever larger AIs on. Perhaps that’s true, but I doubt it is a serious impediment. I think the industry’s solution will be to train tomorrow’s AIs using data generated not only by humans but, steadily more so, by earlier AIs. That solution may generate new problems, but it will likely overcome the too-little-data obstacle.
Other critics call the concerns paranoid because the AI project is running out of “compute,” which is the industry’s name for the calculating, memory, storage, and other resources that AI uses when it answers a question, paints a picture, or enacts a task. And actually, it isn’t just that they are running out of that stuff, but that the energy needed to generate more of it is beyond what societies can even produce, much less produce without demolishing the ecology. The industry rejoinder, however, is that AI will itself find new sources of clean energy. It will even fix whatever it messed up before it found those solutions.
Even if that were to occur, however, another possible roadblock to dangers materializing is that as efforts to grow AI’s functionality proceed, it will turn out that there is a point of diminishing returns. Only so much gain will be possible whatever one does. Even if there is sufficient data to train with and sufficient compute to produce with, it won’t translate into new AI capacities.
What about the oft discussed concerns? Worry that AI might go rogue causes some critics to say we should pull the plug when AIs start to improve themselves. Pull the plug when AIs become self-improving. Just for the record, from what I read, that milestone may not be too far off, but it certainly isn’t here yet.
Other critics say we need to pull the plug even earlier, for example when AIs start to communicate with each other using a language they devise that we can’t understand, so that we can’t know their choices. Pull the plug the moment we lose track of understanding even what AIs are doing, much less how they are doing it.
Others say AI is already messing up the ecology, creating vast swaths of manipulation, murdering privacy, and taking or warping huge numbers of jobs. Pull the plug to stop the damage, or at least severely restrict AI’s use to align it with good values—exactly what Trump has intervened to prevent.
In response, the corporate/government argument against pulling the plug for any reason at all has two themes, and is so far easily winning. First, if AI keeps getting stronger, it will clean up all the mess it made on the way, cure diseases, generate new clean sources of energy, increase longevity, and enrich life options while it eliminates dangerous and debilitating tasks from our lives. We are told it would be crazy to forgo all that.
The argument’s second theme says, yes, you’re right that AI on steroids like you envision it—not as we have it, but as it may become—can also be used for military ends or to take over markets, manipulate populations, and so on. Other countries—really only China because only they and we have the infrastructure and output to continue AI’s growth—will pursue it. So? Well, the next step in the argument is that whoever gets to ASI first wins. Wins what? Wins everything. So it is a matter of national security to win and to prevent others from ever catching up. Barrel ahead. To urge caution is treasonous.
There is plenty to think about even just in the above brief summary of current views. But I want to address a different concern, my own somewhat idiosyncratic concern that goes back to the beginning of the current AI explosion.
Imagine next week your son or daughter in high school, or maybe in third grade, or perhaps in college, comes home and over dinner tells you that her teacher was replaced. You ask, “Was it that teacher you really like?” She says, “Yes.” You ask, “Who is the new teacher?” and she replies that it is “Hortense, the robot.” Incredulous, you ask, “Is she your only robot teacher?” “No,” your daughter answers, “Hortense is all the teachers. They all have names of their own, and speak with different voices, but it is obviously just versions of Hortense.” You ask, “Do you like her—or it?” Your daughter replies, “Yes, she’s great, so patient, so knowledgeable.”
Or imagine next week you go to see your doctor, or perhaps to a clinic. A robot greets you and ushers you to the doctor’s office, but the doctor is Harry, another robot. He chats with you. Asks you questions. Does doctor stuff. He has a great bedside manner. He explains stuff clearly, answers questions, is funny, shows concern. You like him, but you feel a little uneasy. Next visit, you feel less uneasy.
Or imagine you go to work and you get a pink slip. Henry has taken your job. Actually, what looks like Henry has taken all the jobs. Henry doesn’t get sick. Henry doesn’t complain. Henry is super fast. And Henry works 24/7.
So you have time off. You want to make a plan for how to spend it. You ask Harriet, your personal assistant, and she plans your next week. She assembles everything you need. She makes all needed appointments. Then she asks if you will be off next week too, except you realize she already knows because she was in touch with Henry. She plans your next month, ready to update if anything changes.
Then it is dinner time and Holden has it ready for you. He is a fantastic cook. He chooses the menu. He manages the shopping. He cleans up. Enjoy. Wait, the phone rings. No problem, Harriet answers for you. She talks with her counterpart, Horatio. You keep eating.
Dinner ends, time for some engaging conversation. Hank is all set. He is eager, and he is never judgmental. Has all the answers. Explores whatever you want. You watch some TV with him, too. The actors, scenes, plot, all of it is AI generated. Bedtime arrives. Heidi or Harold waits. She/he turns on some music that she/he knows you will love. It is by Happy and the Gang—no human involved. There is an endless supply of such music.
Am I nuts? Or is this more or less where AIs becoming AGIs becoming ASIs, with all of them doing what good people freely want them to do, takes us? Each request that we make is contextually helpful, sensible. All together we nonetheless slide to somewhere we never had in mind. We don’t pull the plug, supposing we could find it, because we have become totally dependent, and far from becoming more creative as a result of all our time saved, we have become mired in a perpetual fog of addicted inactivity.
Okay, perhaps my story is a bit over-dramatic, but it does display my perhaps idiosyncratic concern. If AI becomes AGI, which becomes ASI, yes, I suppose it might go rogue. Game over. Or it may be horribly misused. Game made grotesque. Or it might crunch the environment. Goodbye earth. All good reasons for caution. But I also fear that ASI would become indispensable to us while, as an unintended byproduct, we would become addictively infantilized, so that over time our own choices diminish us.
You may smirk and reply that that is nonsense. You may dismissively ask, have you been drinking corporate Kool-Aid? A human is a human, you say. You underestimate us. We will not go down such a self-destructive rabbit hole.
Well, that a human is by definition a human is quite true. We are us before and after we use an AI. But I claim there are things we do that are highly consequential to what we might call our humanness. To our being who we can be. And I claim that those human-like things can diminish or even disappear from our lives.
Put a person in a jail cell. Day after day, our prisoner encounters nothing but four walls plus healthy meals slid in under the door. The prisoner can count ants crawling in her cell but other than that, not much. The person’s human options, her human actions, are not the same as they were outside. Yes, she can adjust in some degree and survive. And yes, she will still be a person, still be human. But let the deprivation last for years, or forever. Will the person be the fullest person she could and should have been or be made much less?
But that person was coerced, you say. People won’t freely choose such a path for themselves. Consider humans in horribly alienating circumstances who taste a powerful addictive drug that gives them a degree of pleasure, a degree of seeming freedom. Don’t people go down such holes? Now make the drug something the whole society celebrates. Make it something that seems able to deliver utopian results. Now what happens?
I think we can all agree that to not do the things people do that are most human, and to have nothing comparably human to do in place of those lost options, is what solitary confinement in prison imposes. And we can also agree that it certainly reduces those who suffer it. But AI?
What if five or ten or twenty years from now most or even all human-like things are done by ASIs? Suppose AI becomes seriously better than any human at artistic creation—film, music, and painting. Or at planning one’s activities, or at conducting one’s interactions, and more. What then?
That a tractor can lift more weight than any person—so what? That’s a good thing. That a plane flies and we can’t. That a computer crunches numbers faster than any person. Still, so what? These are good things.
But what if an AI, an ASI, thousands or millions of ASIs can each write or tell stories better, make music better, paint pictures better, solve math problems better, play chess, Go, and even poker better, schedule a week, month, or a year better, write letters better, teach children better, teach adults better, have conversations better, take care of children or elders better, and mend the sick better than any person can? Suppose, just suppose, that that becomes true. It would be, I think, of a different order than having strong tractors. Don’t you agree?
In other words, I worry that optimally ASIs will do what people want them to do (I am ignoring that they might go rogue), will solve their energy implications, and will themselves prevent nefarious misuse, but in doing so will increasingly take over most or all human-like functions and will thereby leave people with less of our heretofore distinctively human things to do. I worry that that would be a strange and debilitating dystopia, as well as not what we sought.
If you say yes, okay, hypothetically you agree that if AI becomes ASI that future could conceivably follow, but in fact you believe AIs will never become ASIs and in any case will never be able to do human-like things better than people—okay, I have to agree that if you are correct my worry will be a moot point. But I think that your belief/estimate/guess/assertion may be quite wrong. The current steepening trajectory of the rate of spread and the rate of increase of functionality suggests as much.
To say that AI isn’t currently that powerful, that it isn’t yet AGI much less ASI, is true, but it is also beside the point. The rate of spread and of innovation, and even more so the rate of increase in the rate of spread and of innovation, point toward the described outcome. Perhaps we should pull the plug…or at least very seriously regulate the AI project, not like Trump wants, but sensibly—like we regulate heroin distribution or nuclear weapons.
If you instead reply that AIs doing all that stuff and more better than humans, and thus doing all that stuff for humans and in that way even crowding us out of doing what we can’t do as well as it can, won’t diminish and perhaps even demolish the amount of human stuff that humans do, I fervently hope you are right, but I see no reason to think so. What argument demonstrates that the only thing that can think and produce as well as a human is another human?
But, you may respond, hold on. Back up. What does it even mean to say AI or AGI or ASI would compose and even sing and play music, or do the many other things I mentioned, better than people do? Compose better? Teach better? Doctor better? Much less cure its own ills: its excessive energy use, its making errors, its being misused. Nonsense.
In this context, that AIs do these things better would mean that I, for myself, and so many others, for themselves, grow to like listening to AI music more than human music, so much so that the most listened-to music becomes mostly, and then in time entirely, AI-generated. Every genre. And then new genres too. And it is the same answer for being taught, doctored, and even for conversing and intimacy. We want more and more AI, AGI, and ASI for their evident benefits until we are totally immersed, then dependent, and then infantilized.
In other words, my claim isn’t about music, or proving theorems, or planning schedules, or cooking meals, or conversing, etc., per se. Nor is my claim about the optimal or even the current range of human dispositions and potentials. Nor is it about what people in a humane environment would love to do. It is instead about what people who welcome some AI involvement for its efficiency, competency and quality, and who then welcome steadily more AI involvement, and who then slowly but steadily increase it still further because it does such a great job at whatever we request—would become.
Will people start to prefer conversing with AIs to conversing with other people, whether because of convenience, content, ease, availability, or whatever? Sort of like lots of people prefer tweeting to letter writing. Will that happen enough so that the AI version of human activity increasingly crowds out the human version? Will we get so used to AI doctors, teachers, lawyers, juries, singers, writers, friends, and even intimate partners that we no longer seek the human version? Will humans stop becoming the human version?
What I wonder is, if AI development doesn’t hit a point of technically unavoidable diminishing returns or isn’t socially restrained—if it doesn’t run up against some kind of unclimbable wall, and if AI capacities as a result continue to grow increasingly rapidly—will people welcome AI involvement in steadily more historically human pursuits to such a degree that people’s capacities for, and then their inclinations to engage in, those human pursuits other than with AIs will steadily decline?
Note: I am not saying that this will necessarily happen. I am saying that I don’t see any good argument that AI won’t replace translators, programmers, writers, editors, composers and singers, and then therapists and doctors, architects and lawyers, and even friends and lovers. I worry that this will all or even just largely happen not because AI steals such pursuits from people, but because people eagerly welcome AI into such pursuits.
An argument that this can’t occur is that AI won’t become sufficiently capable for technical reasons. I don’t see anyone making a credible case that that is so, though it might be. It would be an argument about the mechanics of AI not about the qualities of humans. The current signs are, instead, that AI can become that capable.
An argument that, even so, this won’t occur is that despite AI’s growing capacities, people will not welcome it and will instead collectively restrict and shun it despite its allure. I don’t see much evidence of that happening yet, either. Nor is it likely to happen unless stopping AI becomes a priority imposed by social movements. Such caution is highly unlikely to arise from governments and corporations.
I do see various people saying that ASI and even AGI can’t happen because they find it in one way or another horrid, but that is not an argument. It is only a hope or a wish that I share. Or I see other people say that AI doesn’t operate like humans operate, so it won’t do human-like things better than we do. That claim simply assumes the result; it doesn’t provide reason to believe it.
So what is the upshot? First, it may make sense to at least very effectively and very aggressively regulate AI out of fear of rogue AI or nefarious use of AI—though to do so might become impossible the minute AI has its own language that we can’t fathom or is smarter than we can even conceive. Alternatively, it may make sense to at least very effectively and very aggressively regulate AI because if we don’t we may slide into liking it too much until it is too late to avoid our infantilization.
If you continue to think I am paranoid to think such thoughts, consider that even today there are over 1.5 billion iPhones in the world, and current AIs can already communicate with one another and undertake plans of action—tasks—that they deem desirable or necessary. A couple of years back, self-initiated action and the ability to use the internet were considered functions we should not allow AI to have. Now they are available on your desktop, even your iPhone.
Current AI models, despite their just being born, despite their being veritable infants, can write letters, resumes, stories and even whole novels. They can compose and play music. They can produce films, paint, and design. They can diagnose and prescribe. They can teach. They can converse. They can be your friend—this last vastly more convincingly than any army of Facebook friends and soon better than your neighbor or maybe even better than your actual current best friend. And they are getting better at each of these and various other functions at an incredible rate. And we already don’t actually know precisely how they do what they do, much less where it leads.
To stop Trump, to end wars, and to prevent ecological collapse are still first priority. But to sensibly regulate or even stop AI is moving up fast in importance.
To perhaps demonstrate a little of what I said…I uploaded the following prompt to ChatGPT as it now is, not as it will become…
“Can you write me a song about AI in the style and with the inclinations of the young Bob Dylan – and then a second song also in his voice but about Israeli’s genocide against Gaza?”
That is all I gave it. Not the article you just read, just that request. ChatGPT replied in maybe, what, five seconds…
“Sure. Below are two original songs written in the voice and style of the young Bob Dylan, drawing from his early 1960s period—when he mixed biting political critique, surrealist imagery, and folk traditions.”
The point isn’t whether they are flawlessly perfect. The point is that the AI is an infant, and it generated the following, instantly…
1. “The Wires Hum Softly”
[Verse 1]
Well the wires hum softly in the silicon night,
While the dreamers are dreamin’ by LCD light,
The prophets are mumblin’ ’bout ones and about naught,
While the world trades its memory for something it bought.
[Chorus]
And it don’t feel right, but it rolls on still,
Like a gambler with no cards, just workin’ his will.
They say it’s just numbers, no heart, no soul,
But I seen it stare back from a black circuit hole.
[Verse 2]
Now the preacher got sermons from a code in the cloud,
And the general’s drone hovers quiet and proud.
The poet’s replaced by a cold-minded script,
And the teacher’s been traded for a marketed tip.
[Bridge]
There’s a child in the corner with tear-soaked eyes,
Ask her a question, and you’ll hear no lies—
“Why do we build what we can’t understand?”
She asks with a clock in the palm of her hand.
[Verse 3]
The ghosts of the craftsmen are lost in the feed,
While machines pick the fruit and rewrite the creed.
There’s a screen on the table where bread used to lay,
And a voice in the ether that mimics what we say.
[Chorus]
And it don’t feel right, but it keeps on time,
Like a slick silver rhyme without reason or crime.
They say it’s the future, well maybe that’s true—
But I still hear the songs that the looms used to do.
Could you write that at all, much less in a few seconds? Look at it closely…not just the rhymes but the substance too. Damned if it doesn’t “know” itself better than its creators know it.
2. “In the Smoke of Gaza”
[Verse 1]
The wind blows dry through Rafah’s stones,
Where mothers dig through blood and bones.
The world looks on with silenced breath,
As children write their names in death.
[Verse 2]
A jet plane sings its metal tune,
It don’t know dawn, it don’t know noon.
It don’t know love, it don’t know peace,
Just marks its kills and then it flees.
[Chorus]
And oh, my friend, can’t you see?
A genocide done on live TV.
With iron fists and sacred lies,
They paint the fire across the skies.
[Verse 3]
The leaders talk with tongues of steel,
They twist the facts, they sign the deal.
And every word that’s left unsaid
Becomes a nail in someone’s bed.
[Bridge]
I met a boy with one good eye,
He said, “The stars look fake at night.”
He showed me where his school once stood,
And then he vanished into soot.
[Verse 4]
The diplomats, they stroke their chins,
They measure loss in votes and sins.
But I have seen the broken hands
That reach through smoke for no-man’s lands.
[Chorus]
And oh, my friend, this ain’t no war—
It’s murder dressed in a legal drawer.
They bomb the truth, they shell the cries,
And say it’s justice when a village dies.
[Outro]
So sing this low and sing it strong,
For Gaza’s breath won’t last too long.
And if you hear that silence break—
Let it shake the earth, for mercy’s sake.
And ChatGPT then concluded:
“Let me know if you’d like chords or melodies imagined in Dylan’s style—or a third song.”
And I sat looking at my iPad, scared to ask it to read the lyrics and explain their meaning. Scared to ask it to then improve them further. Scared to ask it for melody and instrumentation, and to then sing it to me as Dylan would have.
How about you? Do you look forward eagerly to finding AI friendships? To enjoying AI entertainment? To expanding by way of AI edification? Or do you wonder where even this best possible AI, shorn of environmental dangers, shorn of nefarious uses, shorn of hallucinations and biases might take us?

4 Comments
The songs were uncanny, if a bit cliched. Was the comment AI too?
The songs were uncanny. But was the comment AI too?
“An argument that this can’t occur is that AI won’t become sufficiently capable for technical reasons. I don’t see anyone making a credible case that that is so, though it might be. It would be an argument about the mechanics of AI not about the qualities of humans. The current signs are, instead, that AI can become that capable.”
Ok, I have a partial background in this, so I will try.
In general, we have been developing the models that now have been released into the wild as ChatGPT etc. for decades. Take any intro to AI 101 course: it will lay out current capabilities historically. And that history of AI will be taught as a series of literal hype cycles, with the current research trying to address the *limits* of what the respective class of models of the time-period can do.
What we are now seeing in the mainstream is very similar to the hype cycle of the 1980s, where neural nets made their big comeback. But the well known limits of these models have not made it into the mainstream hype cycle. And we still haven’t found a good way to address those limits without getting deep into “embodiment”: robots, interactivity, understanding meaning, the hard stuff, where progress is slow…
If you allow me to get more technical, there is a sub-class of mathematical problems, or problem classes, where making the model, say, 10x bigger will lead to, say, +5% performance, linearly, without reaching an asymptote. So if current performance is at 45%, making it a million times bigger will get it to 75% by brute force alone. Hence the mad rush to create huge training sets and brute-force compute power above all else.
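(A minimal sketch of that log-linear relation, using only the illustrative numbers above rather than any real benchmark data: a hypothetical 5 percentage points gained per 10x of scale, starting from 45%.)

```python
import math

# Illustrative scaling assumptions from the paragraph above (hypothetical numbers,
# not measured results): each 10x increase in model size adds ~5 percentage points.
BASELINE = 45.0       # percent, at the current model size
GAIN_PER_10X = 5.0    # percentage points per order of magnitude of scale

def projected_performance(scale_factor: float) -> float:
    """Log-linear projection: performance grows with log10 of the size increase."""
    return BASELINE + GAIN_PER_10X * math.log10(scale_factor)

print(projected_performance(10))         # 10x bigger   -> 50.0
print(projected_performance(1_000_000))  # 10^6x bigger -> 75.0, the figure above
```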
But such linearly scalable problems form only sub-tasks within larger tasks that are full of other sub-tasks that LLMs fail at. Any AI tech demo will invariably try to direct attention away from those hard limits and toward those aspects of the task that the model does excel at. Allow me to rephrase that: where the global average, produced by the statistical engine, of some aspect of human cultural output codified in language or digital images bamboozles us into thinking of the model as “intelligent”… even though the intelligence, if any, actually resides implicitly in the products of human culture.
And note that we really can’t break down, say, a personal or business relationship, into tasks all that well, without radically changing the way our relationships work. This has always been the way of automation, our work becomes more machine-like before the machines can effectively replace some of it. We may be well on our way to changing our relationships in such a manner, but then that should be the main story, not that we eventually may accept AI slop as a simulacrum to lawyering or friendship.
I think you could have made most of your arguments about YouTube in the 2000s to 2010s: before YouTube, I would have had to approach a person to teach me how to fix my washing machine. Now I look for a video tutorial and enter into a para-social relationship with some tinkerer-guy. Note that the “AI” – read: statistics – won’t help at all with any such concrete problem. One usually finds the right video quite quickly without “AI”, and an AI slop video may well show a washing machine that does not in fact exist, being the average of models for which videos happen to be available.
So I do think your analogy to baby intelligence is misleading – these models are extremely unlike babies: they are not embedded into our physical world at all, but in a highly abstracted cultural space that babies have yet to develop the physical skills to be socialized into.
I find these LLMs to be extremely validating of whatever we expect to see: you went into the Bob Dylan exercise expecting to get terrified, and you were: it was enough that the model pointed the search space in roughly the right direction and produced some half-solid results that a meaning-making human may project actual meaning into.
Reading your article, I went into the same exercise expecting AI slop that gets weaker each time one re-reads it, and I did. I also expected some examples that would completely expose the machine and I got those, too: “And every word that’s left unsaid, Becomes a nail in someone’s bed”. “There must be some way out of here” – Said the joker to the thief, is qualitatively different, and will *NOT* be arrived at through statistical averaging.
Again, these statistical engines already “write”, i.e. interpolate, the “romance novels” sold at supermarkets, but only because the publishing houses that create this “content” already prompted their human writers “write novel #32 of that series in that style”; and those publishers now can save a bunch of money by instead telling those same writers to “get rid of the most glaring crap within this AI slop”.
There is an in-joke in AI that defines intelligence as “whatever we can’t model yet”: If you had told someone in medieval times that you had a machine in your pocket that could beat anyone at chess… people would automatically assume it to be competent at any number of other domains. They might even think about burning you at the stake ;-). Turns out they were wrong, playing chess is actually relatively easy for a machine. The same is true of re-arranging large conceptual spaces made up of written language.
Perhaps I finally found an area where you can’t just tell me to read Chomsky* instead and I’ll have to concede the point 😉
* Incidentally, Chomsky, the computational linguist, actually prefers the symbolic AI approach to the neural network approach; and so is quite unimpressed with everything related to neural networks…
Hi, and thank you for the careful response. You obviously have more experience with all this than I do, which I am thankful to relate to.
To start at the end of your reaction, I wish I could talk with Noam about it all – as I did, not infrequently–as he might say–before I couldn’t anymore. Back then we disagreed, not about AI then, but about what it might become. You will maybe be a little entertained to know that I am about to record the next episode of RevolutionZ in which I “converse” with ChatGPT about this very article we are discussing, and about where that conversation led, which includes the AI responding to Noam’s written work on AI. But now back to your beginning…
Of course AI has a long history – I was a student and friendly with Weizenbaum – and Noam of course “demolished” various of the early folks, especially Minsky. But isn’t it the case that what’s happening now really took off only recently and in any event is different not just in scale but also in features?
I agree that of course industries will hype their offerings and rationalize them – and even the involved scientists will do so – partly to psych themselves up and keep themselves going, I suspect, but also of course to drum up investment and buyers in pursuit of profits. Agreed. But that they will do that doesn’t tell us about the product, rather it tells us about them.
I do not know about what you say are its “well known limits.” If those exist, people I ask, including some people in the industry, aren’t forthcoming…
I get your example, and it is the kind of limit I have in mind – but I haven’t seen it explained, or even indicated beyond saying limits exist. For your example, these models are getting much more than 5% better with much less than a million-fold increase in size. Actually, if some of what I read is true, certain new smaller models are now doing better than many bigger ones…but whether that is true or not, can anyone explain why increasing compute, increasing training, plus, in particular, introducing innovations in approach, technically cannot get to AGI or even ASI?
LLMs failed at, and often couldn’t even attempt, a great many tasks even just a few years ago that they now succeed at. Even just a year ago. Isn’t that the case? Going further back, I beat early attempts at playing chess, and still more so at playing Go, quite easily, until they beat me, and then they beat every human, and then their new versions beat the versions that had beaten all humans. You write, “Any AI tech demo will invariably try to direct attention away from those hard limits and towards those aspects of the task where the model does excel at.” I don’t get this. I can ask it to do anything I want. I can, I am pretty sure, get it to fail, but not in a way that says to me it won’t ever be able to do what I got it to fail at. I don’t see what is hidden. I would bet that if you ask ChatGPT what it can’t do, it would tell you.
One issue is, is it intelligent? Does it tell us anything about how humans think or create? Honestly, though interesting, that isn’t what I have focused on in the article. And again, I know that ChatGPT, for one, would answer no, it isn’t intelligent like a human. But in any event, I care about what it can do, whether it is intelligent or not—assuming we can even say what we mean by being intelligent. Of course it doesn’t understand the way you or I do. But if it can do the kinds of things I mention in the article better than we can, I think whether that means it is in any sense intelligent or not is beside the point of what concerns me.
Calling what AI generates slop doesn’t move me. I can see what it generates and I can see it is not slop. Now, today. In some domains experts, even the best experts, are already learning from what it generates. And I don’t mean just the results of otherwise too long calculations – I mean novel outputs. But the issue is, what is the effect of what it generates – on work, on communication, on relationships, and even on the humans who invite its engagement…as well as the more familiar concerns with nefarious use or going rogue. Do you believe AI, in the not distant future, if unregulated to stop it, nonetheless won’t be able to do the things I mention in the article?
What you say about YouTube being there as a teacher is true, and no small thing. But to my mind, perhaps wrongly, it is very different than, say, a university or hospital or fixit shop that has no people… AIs are already not simply repeating what they have taken in. They are generating new things never said before, seen before, etc.
You write: “Note that the “AI” – read: statistics – won’t help at all with any such concrete problems.” What makes you believe that? Make it extreme. It is a machine we have never encountered but it has dials, etc. etc. If we look at it we try to use our familiarity with other things to figure out how to use it or fix it. If the AI, tomorrow’s (maybe today’s for all I know) looks at it, doesn’t it do that too? And might it not become able to do it better than we do?
What if I said that a person, made of biochemical cells, won’t help us paint or do math or diagnose and medicate disease, and so on, because silly little cells aren’t smart, can’t talk, or whatever. I’d be wrong, of course. Well, I don’t see why non-cellular options in immensely complex combinations cannot do such things, just as biochemical cells in immensely complex combinations can. We don’t know that they can’t. Or, more accurately, I haven’t seen arguments that show that.
I was saying it was young, not like a baby but simply in terms of its own trajectory.
You write, “I find these LLMs to be extremely validating of whatever we expect to see: you went into the Bob Dylan exercise expecting to get terrified and were: it was enough that the model pointed the search space into roughly the right direction and produced some half-solid results that a meaning making human may project actual meaning into.”
I have to say, I don’t think that is what happened. I expected and more or less hoped that the AI would generate garbage that I could laugh at. It did not. I found myself having to acknowledge that. Then I was afraid to ask it to literally turn it into a song and sing it in his voice. Fearful it might already do that convincingly as well.
You write: “Reading your article, I went into the same exercise expecting AI slop that gets weaker each time one re-reads it, and I did. I also expected some examples that would completely expose the machine and I got those, too.”
Can you send me whatever that was, to [email protected]?
You are talking about AI today, and for that matter about what it generated in seconds… I am talking about it if it becomes AGI and even ASI – but even today, I admit I think the output was far more formidable than you indicate.
“And every word that’s left unsaid, Becomes a nail in someone’s bed”. “There must be some way out of here” – Said the joker to the thief, is qualitatively different, and will *NOT* be arrived at through statistical averaging.
That is an assertion. And yes, the more one knows about what it is doing, the more likely it seems. And it may be true. But it is not an argument. I get that the assertion exists. I am looking for the argument that makes it compelling. Again, suppose I said Shakespeare and Newton will not be arrived at by a combination of some elements bound into tiny tiny cells that interact together. It would be a wrong assertion.
You write, “Again, these statistical engines already “write”, i.e. interpolate, the “romance novels” sold at supermarkets, but only because the publishing houses that create this “content” already prompted their human writers “write novel #32 of that series in that style”; and those publishers now can save a bunch of money by instead telling those same writers to “get rid of the most glaring crap within this AI slop”.
That sounds good, but I have no idea if it is occurring. And even if it is, what does it tell us about what is coming? I have read, who knows if it is true, that 25% of what is submitted for publication is already written by AIs. Today. Not agi or asi. This too would not tell me about what’s coming.
Okay, here is an experiment – tell the AI that you use to rewrite the article I wrote in the style of – pick someone: Hemingway, Arundhati Roy, Chomsky, Bertrand Russell, Maya Angelou, Jane Austen, Barbara Ehrenreich, Eduardo Galeano, whoever – but to keep the meaning exactly as initially written, and to preserve any quotes and examples. See what you get. I sadly bet it will retain the meaning, keep quotes, keep examples, and rewrite in the voice you request, and that the result will be an improvement, not a mess of slop. And that is today.
You write: “There is an in-joke in AI that defines intelligence as ‘whatever we can’t model yet’:”
That in-joke is actually revealing. It is called moving the goalpost, I think. It can happen to induce further effort, or to avoid thinking one has done something that maybe one shouldn’t have done. But, again, I didn’t say word one about intelligence….
Playing chess, or much harder Go, is easy for a machine, as you say, that does it – just like speaking English is easy for me, you might say. I wasn’t too impressed when the chess engines beat the pros. Stored human knowledge plus brute force. Not even when early versions of AlphaGo started beating Go players (where brute force doesn’t get you very far), not even when it beat Lee Sedol, then the best in the world. Two other things caught my attention. 1. When AlphaGo made a move against Sedol that all the commentators, high level Go pros, said was idiotic and the game was over — but that proved to be brilliant, not just original but so outlandish that they could not even think it sensible. And, then still more so: 2. When the next iteration, AlphaGo Zero, wasn’t trained on any human games. Not one. Not one insight about Go from 2000 years of humans studying it. Instead, it was given the rules, and the goal, and then it played itself millions upon millions of times, very quickly, of course, with the initial games being literally absurd, and came out of that destroying its earlier version that had been trained on human games and had beaten all humans. That caught my eye…
If you want to continue discussing all this, by all means let’s, but perhaps in the Discord you can reach from ZNetwork.org. It will be much easier to navigate there, I think. There is a channel for RevolutionZ, so we could do it there, if you want.