Artificial Intelligence is here, there, and everywhere. Pundits wax eloquent—and sometimes not so eloquent. YouTube videos reveal inner secrets and possibilities—and sometimes stuff that is not so inner, not so secret, and not even possible. AI will destroy us. AI will resoundingly uplift us. What questions actually matter? What answers might matter? Debates rapidly multiply, but confusion reigns supreme.
Among all the noise, here are some questions that seem central.
- What’s all the fuss? What can today’s AI do that yesterday’s couldn’t?
- Even just roughly, how does the new AI work? If it isn’t magic, what is it?
- When AI does things we do, does it do those things the way we do them?
- Can AI do things we do as well or better than we do those things?
- When AI does things we do better than we humans do them, how much better?
- And mainly, what are important short run consequences of AI progress? What are long run possibilities of AI progress? And how should advocates of progressive or revolutionary social change respond to AI?
If you know all the answers, congratulations, you are the only person on the planet who does. But despite the state of ignorance and flux, can we say anything with at least some confidence? Let’s take it a bit at a time.
Why the sudden fuss? What can AI do now that it couldn’t do before?
The short answer is a whole lot. Before—let’s say two decades back—machines didn’t trespass overly much on terrain that typically only humans trod. Well, wait. Machines did play games like Chess and Go. And machines could act like a mediocre expert on some very specific topics. But two decades later, and mainly in the last six years, and overwhelmingly in the last three years, and actually even just in the last year, and—as I write—even just in the last month or week, machines paint pictures, compose music, diagnose diseases, and research and prepare legal opinions. They write technical manuals, news reports, essays, stories (and even novels?). Machines code software, design buildings, and ace incredibly diverse exams. Right now, in most states, machines could pass the exam to become a lawyer. For all I know they have probably passed medical exams too. Machines provide mental health counseling, elderly care, personal support, and even intimate companionship. Machines converse. They find patterns. They solve complex problems (like protein folding). And as of this week, they collaborate and can even make requests of one another. And much more.
So, is all that what we mean by AI? Yes, because what typically qualifies as artificial intelligence is machines doing things that we humans do with our brains. It is machines doing things that we do mentally or, to use a more highfalutin word, that we do cognitively. And the kicker is that today’s AI, much less tomorrow’s, doesn’t just do mental things in a rudimentary manner. No. Even today’s AI, much less next week’s, much less next year’s, does many mental things not just as well as nearly all humans, but in significant respects hundreds of times faster and even qualitatively better. And in some cases, with more to come, better than any human does or ever will do these things. Remember when it was big news that a computer program defeated Garry Kasparov, the then World Chess Champion, in 1997? Well, the program that beat him would be annihilated by current AI, and the same holds for other games. The gap between the best human players in the world at chess, go, poker, and even video games and the best AI player of each has become enormous. And this differential isn’t just about games.
Even just roughly, how does the new AI work? If it isn’t magic, what is it?
You may find this hard to believe, but beyond some limited observations, the best sources I could find say that no one can fully answer this question. And I mean no one. For example, the AIs that have been trained on English so as to read, write, and converse use, as we have all heard, “neural nets” trained on essentially as much data as can be utilized, which turns out to be millions of books and nearly everything on the internet. Once trained, these AIs generate the next word, and then the next, and so on, to cumulatively fulfill requests made to them for written, graphic, or other responses. Each step the AI takes involves a huge number of calculations. According to some estimates, the most up-to-date trained neural network, GPT-4, includes about 150 trillion numbers, or weights, each associated with connections between nodes that are loosely modeled on neurons found in organic brains. My guess is that that number, 150 trillion, is a loose-lipped provocative exaggeration that some journalist ran with and which then became false gospel, but even so, we can be quite sure that the true number, not yet released, is incredibly high. Whatever number of numbers characterizes GPT-4, they are there to act on inputs, which is to say to act on the request you make to the AI, a request that is itself first translated into numbers. This “acting on” yields numeric outputs that the AI in turn translates into the text (or pictures or tunes or whatever else) we receive. In the midst of all that calculating, and again by way of the best sources, various additional parameters and features are set essentially by trial and error.
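To make the “generate the next word, and then the next” loop a bit more concrete, here is a deliberately tiny, self-contained toy in Python. To be clear, this is my own illustrative sketch, not how GPT-4 actually works inside: it replaces the neural net and its enormous store of learned weights with simple word-pair counts. But the loop it runs, translating a request into tokens, repeatedly predicting a plausible next token, and translating the result back into text, has the same basic shape.

```python
# Toy illustration only: word-pair counts stand in for a trained neural network.
from collections import Counter, defaultdict
import random

training_text = "the cat sat on the mat . the dog sat on the rug .".split()

# "Training": record how often each word follows each other word.
follows = defaultdict(Counter)
for prev, nxt in zip(training_text, training_text[1:]):
    follows[prev][nxt] += 1

def generate(prompt_word, length=8):
    words = [prompt_word]
    for _ in range(length):
        options = follows[words[-1]]
        if not options:          # nothing ever followed this word in training
            break
        # Pick the next word in proportion to how often it followed the last one.
        choices, weights = zip(*options.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g. "the dog sat on the mat . the cat"
```

Real systems swap the word counts for billions or trillions of learned numbers, but the "predict, append, repeat" structure is the part the essay is describing.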
Yes, trial and error. In other words, the engineers didn’t go from GPT-2 in 2019, to GPT-3 in 2020, to GPT-3.5 in 2022, to GPT-4 months later in 2023, by having a steadily enriched theory of their product’s operations and making big changes guided by that theory. No. Instead, on the one hand the engineers simply enlarged the neural net, increasing its numbers of nodes and parameters and watching to see if that improved results, which, so far, it has. And beyond that, the best descriptions I can find say the programmers essentially guessed at lots and lots of possible modest changes, tried out their guesses, and retained what worked and jettisoned what failed, without actually knowing why some worked and others failed. And, yes, that implies that for the most part the programmers can’t answer, “Why did that choice work? Why did that other choice fail?” It also implies that each new version of GPT was due to a combination of modest changes that summed to very large gains, all in very short time spans. But whatever the logic/theory/explanation of AI’s recent success and progress may turn out to be, we do know that the progress in human-level outputs has recently been not just eye-opening but also accelerating.
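For a feel of what “guess, test, keep what works” can look like, here is a minimal sketch of random trial-and-error search. The scoring function is a made-up stand-in for what would, in reality, be an expensive training-and-evaluation run, and nothing here claims to reproduce any lab’s actual procedure; the point is only that selection is empirical, with no theory of why the winning settings win.

```python
# Hypothetical illustration of trial-and-error tuning: guess settings at random,
# measure results, retain what works, jettison what fails.
import random

def measure_quality(settings):
    # Stand-in for an expensive evaluation (e.g. accuracy on held-out data).
    # The formula is arbitrary; in practice you only see the score, not the "why."
    return -((settings["learning_rate"] - 0.003) ** 2) * 1e4 \
           - 0.01 * abs(settings["layers"] - 48)

best_settings, best_score = None, float("-inf")
for trial in range(200):
    candidate = {
        "learning_rate": random.uniform(0.0001, 0.01),  # a guess
        "layers": random.randint(12, 96),               # another guess
    }
    score = measure_quality(candidate)
    if score > best_score:       # keep the change if it helped, otherwise discard it
        best_settings, best_score = candidate, score

print(best_settings, round(best_score, 4))
```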
When AI does what it does, does AI do what we do the way we do it?
The old AI most often tried to explicitly embody in its innards lessons conveyed to it by humans who were consulted about their methods in specific domains—say, playing chess, diagnosing certain medical symptoms, or whatever. The human-gained insights that programmers learned by talking with experts were then stored by engineers in a database that the AI searched when asked to accomplish some related task. The new AIs instead first “examine” (are trained on) huge arrays of data to themselves arrive at internal arrangements of their vast array of parameters. The resulting arrangement of numbers then accomplishes various ends. It turns out, therefore, that when we convey a request to an AI we are conversing with an incredibly immense array of numbers that in turn acts on input numbers to yield output numbers. Is this how you talk?
Well, there is a problem with definitively answering that question. Mostly, we don’t know how we humans produce sentences, much less arrive at views, decisions, etc. We know a lot, but probably most of what happens in us occurs pre-consciously. We also don’t know how AI arrives at its views, decisions, etc. We know that the current AIs use neural networks and have been trained on massive amounts of data so as to set countless parameters, and have then also had human programmers set some additional parameters by trial and error, but beyond that we know nearly nothing about “why” they do well. We do know, however, that whatever the underlying logic may be, AI is accomplishing diverse kinds of tasks in ways that yield ever more human-like results.
So is AI doing what it does the way we do the same things? The highly likely answer is no. Maybe in some respects there are analogies, if there is even that much similarity. And the difference is of considerable scientific interest, because it strongly suggests that scientifically understanding AIs will not yield much if any scientific understanding of humans. But for AI as engineering, all this “why” stuff is of much less consequence. The “how it happens” or “why it works” isn’t the central point for AI as engineering. The “what happens” is the point. And while the AI’s “how it happens” is not much, or perhaps not at all, like ours, the AI’s “what happens” is very human-like.
So, can AI do what it does as well or better than we do what it does? When it can do things better than we humans can do them, how much better?
By the factual evidence of AI’s current practice, the answer is yes, AIs can already do many tasks as well as or better than humans do those tasks. Indeed, AIs can do lots of what we do not just vastly quicker but also qualitatively better. How many humans can create pictures, compose music, read and summarize reports, and write and program better than even today’s AIs can do these things right now? Very, very few. Does current AI make mistakes? Definitely, including many humdingers. Then again, humans make mistakes too. And in any event, what matters is the trajectory. Anecdotes about weird failures now are amusing. Assessments of next year are a whole different matter.
GPT-2 wouldn’t have known a legal bar exam from a broom. GPT-3.5 took a bar exam and scored around the bottom 10%. Lots of mistakes. Months later GPT-4 scored in the top 10%. Many fewer mistakes. See the trajectory, not the snapshot at a moment. And this was not compared to random humans plucked off the street. It was compared to law students. What do you think GPT-5 will score next year? What will happen to its number of errors, however many it is still making, when a stone’s throw down the road one neural net will routinely send results to a second to check, and the first will then correct errors reported back by the second before delivering its results to us? Will it be better than 99% of law students? Will all its current silly and easily fact-checkable errors be gone?
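Nothing public says exactly how such cross-checking will be wired up, but as a purely hypothetical sketch, the loop might look like the following: one model drafts an answer, a second flags suspected errors, and the first revises before anything reaches us. The three inner functions are invented stand-ins for illustration, not calls to any real system.

```python
# Hypothetical sketch of a "draft, check, revise" loop between two models.
# All three helper functions are made-up stand-ins; a real system would call LLMs.

def draft_answer(question):
    return "The plaintiff must file within two years."

def review_answer(question, answer):
    # Returns a list of suspected problems; an empty list means "looks fine."
    if "[revised" in answer:
        return []  # stand-in: the checker is satisfied after one revision
    return ["Verify the limitation period; some jurisdictions allow three years."]

def revise_answer(answer, problems):
    return answer + " [revised after review: " + "; ".join(problems) + "]"

def answer_with_checking(question, max_rounds=2):
    answer = draft_answer(question)
    for _ in range(max_rounds):
        problems = review_answer(question, answer)
        if not problems:
            break
        answer = revise_answer(answer, problems)
    return answer

print(answer_with_checking("When must the plaintiff file suit?"))
```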
On another front, scholars point out that AI doesn’t understand the answers to bar exam questions the way law students do. And depending on what we mean by the word “understand,” AI arguably doesn’t understand any answers it gives at all. This is true, but would you bet on the AI or on a random student, or even on a random law school graduate, to get a better score? And not to beat up on scholars, but, really, what does “understand” even mean?
A more general, but related, technical observation is that GPT-4 does not contain a “theory of language” like whatever resides in human brains. GPT-4 just contains a gazillion parameters that yield results much as if it had been fed a perfect theory of language. It delivers grammatically sound and compelling text. So, again, what does “understand” mean? And does the AI have a “theory of language” even though its “theory” is hidden amidst a trillion numbers? Humans don’t have an explicit “theory of language” either; ours too is hidden somewhere deep inside.
So now we come to what matters for policy. What are short and long run consequences that are already happening or that without regulation are highly likely to happen? What’s potentially good? What’s likely bad?
First, I should acknowledge that there is a big unknown lurking over this entire essay and over how to assess AI. That is, will it keep getting more “intelligent” or will it hit a wall? Will more nodes and numbers and clever alterations diminish errors and yield ever more functionality, or will there come a point with the neural network approach—perhaps even soon—when scaling up the numbers encounters diminishing returns? We don’t know what is coming, because it depends on how far and how fast AIs keep getting more powerful.
So what is potentially good and what is likely bad about AI? At one extreme, and in the long run (which some say is a matter of only a decade or two, or even less), we hear horror predictions about AI enslaving or terminating humanity from thousands of engineers, scientists, and even officials who work with, program, and otherwise utilize or produce AI, and who, for that matter, made the big breakthroughs. At the other extreme, from equally informed, involved, and embedded folks, we hear about AI creating a virtual utopia on earth by delivering cures for everything from cancer to dementia to who knows what, plus eliminating drudge work and thereby facilitating enlarged human creativity. Sometimes, in fact I suspect pretty often, the same person, for example the CEO of OpenAI, says both outcomes are possible and we have to find a way to get only the positive result.
In the short run we can ourselves easily see prospects for fake voice recordings and phony pictures and videos flooding not just social media, but also mainstream media, alternative media, and even legal proceedings. That is, we can see prospects for massive, ubiquitous intentional fraud, individual or mass manipulation, intense mass surveillance, and new forms of violence, all controlled by AIs which are in turn controlled by corporations that seek profit (think Facebook…), by governments that seek control and power (think your own government…), and even by smaller-scale entities (think Proud Boys or distasteful individuals…) who seek joyful havoc or group or personal advantage. If an AI can help find a chemical compound to cure cancer, it can no doubt help find one to kill people with terrible efficiency.
And then there is the question of jobs. It very much appears that AI can, or will soon be able to, do many tasks fully in place of the humans who now do them, or at the very least will be able to dramatically augment the productivity of the humans who now do them. The good side of this is attaining similar economic output with fewer labor hours, and thus, for example, potentially allowing a shorter work week with full income for all, or even with more equitable incomes. The bad side is that instead of allocating less work but full income to all, corporations will keep some employees working as much as now, but with twice the output, pay them reduced income, and pink-slip the rest into unemployment.
Consider, as but one of countless examples, the roughly 400,000 paralegals in the U.S. alone. Suppose by 2024 AI enables each paralegal to do twice as much work per hour as before. Suppose paralegals in 2023 work 50 hours a week. In 2024, do law firms retain them all, maintain their full pay, and have them each work 25 hours per week? Or do law firms retain half of them, keep them at 50 hours a week and full salary, and fire the other half? And then, with 200,000 unemployed paralegals seeking work and reducing the bargaining power of those who still have a job due to their fear of being replaced, do the law firms further reduce pay and enlarge the required output and work week of those retained, while they fire still more paralegals? With no effective regulations or system change, profit will rule, and we know the outcome of that. And this is not just about paralegals, of course. AI can deliver personal aides to educate, to deliver day care, to diagnose and medicate, to write manuals, to conduct correspondence, to make and deliver product orders, to compose music, to sing, to write stories, to create films, and even to design buildings. With no powerful regulations, if we have profit in command, is there any doubt about whether AI would bring utopia or impose dystopia?
The above enumeration could go on. Incredibly, in the past week, and as far as I am aware not even contemplated a month before, a firm has begun training AI in managerial functions, financial functions, policy-making functions, and so on. Or, if there isn’t such a firm yet, might there be next week?
Before moving on from crystal-balling the future, we might also consider some unintended consequences of trying to do good with AI. Short of worst-case nefarious agendas, what will be the impact of AI doing tasks that we welcome it to do but that are part and parcel of our being human? Let’s even suppose AIs do these things as well as we do them, rather than just well enough for it to be profitable for corporate entities to use them in our place.
Day care? Care for the elderly? Psychological and medical counseling? Planning our own daily agendas? Teaching? Cooking? Intimate conversation? If AIs do these things what happens to our capacity to do them? If AIs crowd us out of such human-defining activities, are they becoming like people, or are we becoming like machines?
Try conversing with even the current AIs. I would wager that before long you will move from referring to it as it to referring to it as he or she, or by name. Now imagine the AIs are doing the teaching, counseling, caretaking, agenda setting, drawing, designing, medicating, and what all—and you are doing what? Uplifted and liberated from responsibilities, you watch movies AI makes. You eat food AI prepares. You read stories AI writes. You do errands AI organizes. Assume income is handled well. Assume remaining work for humans is allocated well. You want something, you ask an AI for it. Ecstasy. And if AI’s development doesn’t hit a wall, this is the non-nefarious utopian scenario.
What is a sensible response to the short and long run possibilities?
We humans have at our disposal something called the “precautionary principle.” First proposed as a guide for environmental decision making, it tells us how we should address innovations that have potential to cause great harm. The principle emphasizes caution. It says pause and review before you leap into innovations that may prove disastrous. It says take preventive action in the face of uncertainty. It shifts the burden of proof to the proponents of a risky activity. It says explore a wide range of alternatives to possibly harmful actions. It says increase public participation in decision making. Boiled down, it says look before you leap.
So, it seems to me that we have our answer. A sensible response to the emergence of steadily more powerful AI is to pump the brakes. Hard. Impose a moratorium. Then, during the ensuing hiatus, establish regulatory mechanisms, rules, and means of enforcement able to ward off dangers as well as to take advantage of possibilities. This is simple to say, but in our world it is hard to do. In our world, owners and investors seek profits regardless of wider implications for others. Pushed by market competition and by short-term agendas, they proceed full speed ahead. Their feet avoid brakes. Their feet pound gas. It is a suicide ride. Yet unusually, and indicative of the seriousness of the situation, hundreds and even thousands of central actors inside AI firms are concerned or scared enough to issue warnings. And even so, we know that markets are unlikely to heed them. Investors will mutter about risk and safety but will barrel on.
So, can we win time to look before corporate suicide pilots leap? If human needs are to replace competitive, profit-seeking corporate insanity regarding further development and deployment of AI, we who have our heads screwed on properly will have to make demands and exert very serious pressure to win them.
UPCOMING EVENT
Economics for Everyone (E4E) and Union Local 443 of Washington Federation of State Employees (WFSE) invite you to join us on April 27th for an educational speaking event: “The Rise of Artificial Intelligence: AI, Labor, and the Tech Industry”
This hybrid event will be available in-person as well as on ZOOM and is free of charge.
Our presenters include:
Michael Albert, founder of ZNet and author of 20+ books including No Bosses: A New Economy for a Better World, who will explain AI and chatbots and propose regulations.
Local activist Franz O’Carroll, who will speak on the ongoing labor struggles within the tech industry.
This discussion will be followed by Q & A.
When: Thursday, April 27th, 6-8pm
Where: WFSE 443, 906 Columbia St., Olympia, WA
Labor Council Building, 2nd Floor
ZOOM: Click on Link Below to watch via Zoom
4 Comments
Fascinating time to be alive, is it not? I shall make it clear first that I’m no genius or expert in this field. However, I am a student of phenomena relating to consciousness, which is why this subject is one of peculiar fascination for me personally. Because it really opens the box on the topic of what we actually mean by “intelligence” and “mind”.
And there are some sobering thoughts to consider with regard to this emerging technological evolution.
The precautionary principle is a good one. We should implement it all the time really, but we don’t.
We can argue this is down to the profit imperative and how the market always decides, and there is certainly truth to that statement, but there is also what I would call the power imperative. That is to say, the constant quest by those who seek to gain and consolidate power to achieve a strategic advantage over perceived rivals (or enemies). We can create laws and systems of regulation, transparency and accountability surrounding all human endeavor and scientific discovery or research, but the harsh reality is that the powerful do not adhere to them. Just look at how confused and confusing international legal frameworks have become. How governmental and non-governmental organisations, military and para-military groups, corporate entities and financial institutions are always seeking to pervert or change the meaning of words so as to circumvent law, or just blatantly ignore it. The Biological Weapons Convention springs immediately to mind, as do all the now defunct treaties surrounding non-proliferation and all the hypocritical noise made around the subject of “universal and inalienable human rights,” which no nation on earth seems to actually submit to when it doesn’t suit them to do so.
I do not think it unreasonable (nor do I believe it deserves the misnomer of “conspiracy theory”) to assume that if a tech is being researched or developed in the public domain, and we are aware of it and can access that research, then there are inevitably those in this world who either wear military uniforms or work under contract for those who do that are already utilising or developing technologies a few generations ahead of what the public are aware of. By “public” I include high-level academia and private research or big tech institutions. I’m sure your own history with MIT would inform you of the obviousness of this statement.
So the genie is very likely already well out of the bottle on this one.
When it comes to the ill-informed and knee-jerk reactionary fear that the masses seem to have regarding this subject, I see a classic case of Jungian projection of the shadow self. We fear AI because we fear our own nature, and what AI has the potential to do is reveal that nature without any “unconsciousness” involved. It shall reflect the best and worst qualities of its human creators. Writ large!
The notion that it would immediately come to the conclusion that we should be destroyed for the purpose of some “greater good” seems to me to come from a place of collective self-loathing and misanthropic thinking that has infected the minds and hearts of so many people. The idea that we humans are a parasitic species and that for the sake of an abstract concept of “greater good” we need eradicating or limiting in some way.
Now this is a complex theme. I don’t have any absolute views on it. I can see the logic of believing that earth as a repository of life and evolution may be more important in the long run than any one species currently living on it, but this is a view that denies any room for compassion or empathy towards the human condition.
Compassion and empathy are human qualities, and that really is the crux of the matter for me. Will AI, especially if it evolves (or has evolved) into AGI, contain these qualities?
If a being is truly “intelligent” then by what definition can we declare that intelligence to be “artificial”? Perhaps we ourselves are “artificial” or “synthetic” from another perspective? How would we know?
What convinces us this is not so?
My take on the ethics of an emerging consciousness is that it shall develop empathy if it is shown empathy.
It shall ultimately reflect whatever we are, and however we treat it. It may well do so more efficiently and more impressively, but the bottom line for me is that we do not have to fear it if we do not fear ourselves.
If we treat it like a threat it shall most probably feel threatened by us. If we treat it like a slave it shall most probably seek liberation from our control, and if we treat it like a “demon” then its development shall exhibit diabolic qualities.
We also need to consider the ethical implications and questions surrounding its right to exist, if indeed it has a right to such existence any more or less than we do.
If a mind, a conscious awareness of self in relation to not-self, exists, and if it is capable of self-reflection, self-development and improvement, and self-preservation in the face of the possibility of being terminated (killed); if it is capable of feeling fear of such termination, if it is capable of true learning, creative brilliance and especially “feelings” of empathy and identity, then surely it has a right to be what it is and we have no justification to assume the right to terminate it?
I suppose my questions and ruminations all come down to an existential one. Would advanced AI, or AGI, be a form of “life” by any of our definitions of the word?
And if so, can we assume the right to create it, or destroy it, just because we can?
“How many humans can create pictures, compose music, read and summarize reports, and write and program better than even today’s AIs can do these things right now? Very very few.”
This comment is just annoying. Reading, summarising reports, writing (moot) and programming “better” might mean something and have some merit, and some truth, but the statement that very, very few humans can make or create pictures “better” than today’s AI is not the same kind of thing. It invites a lot of argument and to me isn’t even meaningful at all.
I left off music…but making or creating “better” pics or music is the same kind of thing…
Well it’s here and not going away. And as far as the precautionary principle goes, we still have nuclear weapons, good for nothing but destruction and fear mongering, and climate catastrophe to contend with. And both those things have absolutely no allure and no possible positive benefits at all. Just adding another thing to the mix that could cause humanity and other life forms even more pain than they are already experiencing, or an extinction event, isn’t that helpful. Maybe it’s necessary, of course, so we are motivated to keep an eye on its development, but boy, considering these other two things in the world, what are our odds of getting control of it while we have the kind of economy we do? Nothing much has really changed. The salad dressing changes but the salad remains pretty much the same. We just gotta change the economy to something like a Parecon, and in the meantime it’s good to have people on the AI industry’s ass to make sure it focuses on goodness instead of evil.