In a hypothetical race to claim the mantle of biggest threat to humanity, nuclear war, ecological catastrophe, rising authoritarianism, and new pandemics are still well in front of the pack. But, look there, way back but coming on fast. Is that AI? Is it a friend rushing forward to help us, or another foe rushing forward to bury us?
As a point of departure for this essay, in their recent Op Ed in The New York Times, Noam Chomsky and two of his academic colleagues—Ian Roberts, a linguistics professor at the University of Cambridge, and Jeffrey Watumull, a philosopher who is also the director of artificial intelligence at a tech company—tell us that “however useful these [AI] programs may be in some narrow domains (they can be helpful in computer programming, for example, or in suggesting rhymes for light verse), we know from the science of linguistics and the philosophy of knowledge that they differ profoundly from how humans reason and use language. These differences place significant limitations on what these programs can do, encoding them with ineradicable defects….”
They continue: “Unlike humans, for example, who are endowed with a universal grammar that limits the languages we can learn to those with a certain kind of almost mathematical elegance, these programs learn humanly possible and humanly impossible languages with equal facility.”
Readers might take these comments to mean that current AI so differs from how humans communicate that predictions that AI will displace humans in any but a few minor domains are hype. The new chatbots, painters, programmers, robots, and what all are impressive engineering projects but nothing to get overly agitated about. Current AI handles language in ways very far from what now allows humans to use language as well as we do. More, current AIs’ neural networks and large language models are encoded with “ineradicable defects” that prevent the AIs from using language and thinking remotely as well as people. The Op Ed’s reasoning feels like that of a scientist hearing talk about a perpetual motion machine that is going to revolutionize everything. The scientist has theories that tell her a perpetual motion machine is impossible. The scientist therefore says the hubbub about some company offering one is hype. More, the scientist knows the hubbub can’t be true even without a glance at what the offered machine is in fact doing. It may look like perpetual motion, but it can’t be, so it isn’t. But what if the scientist is right that it is not perpetual motion, but nonetheless the machine is rapidly gaining users and doing harm, with much more harm to come?
Chomsky, Roberts, and Watumull say humans use language as adroitly as we do because we have in our minds a human language faculty that includes certain properties. If we didn’t have that, or if our faculty wasn’t as restrictive as it is, then we would be more like birds or bees, dogs or chimps, but not like ourselves. More, one surefire way we can know that another language-using system doesn’t have a language faculty with our language faculty’s features is if it can do just as well with a totally made up nonhuman language as it can do with a specifically human language like English or Japanese. The Op Ed argues that the modern chatbots are of just that sort. It deduces that they cannot be linguistically competent in the same ways that humans are linguistically competent.
Applied more broadly, the argument is that humans have a language faculty, a visual faculty, and what we might call an explanatory faculty that provide the means by which we converse, see, and develop explanations. These faculties permit us a rich range of abilities. As a condition of doing so, however, they also impose limits on other conceivable abilities. In contrast, current AIs do just as well with languages that humans can’t possibly use as with ones we can use. This reveals that they have nothing remotely like the innate human language faculty since, if they had that, it would rule out the nonhuman languages. But does this mean AIs cannot, in principle, achieve competency as broad, deep, and even creative as ours because they do not have faculties with the particular restrictive properties that our faculties have? Does it mean that whatever they do when they speak sentences, when they describe things in their visual field, or when they offer explanations for events we ask them about—not to mention when they pass the bar exam in the 90th percentile or compose sad or happy, reggae or rock songs to order—they not only aren’t doing what humans do, but also they can’t achieve outcomes of the quality humans achieve?
If the Op Ed said only that current AIs don’t have the features we have and so can’t do things the way we do them, that would be fine. In that case, it could be true that AIs can’t do things as well as we do them, but it could also be true that for many types of exams, SATs and Bar Exams, for example, they can outperform the vast majority of the population. What happens tomorrow with GPT 4 and in a few months with GPT 5, or in a year or two with GPT 6 and 7, much less later with GPT 10? What if, as seems to be the case, current AIs have different features than humans, but those different features let them do many things we do, differently than we do them, but as well as or even better than we do them?
The logical problem with the Op Ed is that it seems to assume that only human methods can, in many cases, attain human-level results. The practical problem is that the Op Ed may cause many people to think that nothing very important is going on or even could be going on, without even examining what is in fact going on. But what if something very important is going on? And if so, does it matter?
Insofar as the Op Ed addresses the question “is contemporary AI intelligent in the same way humans are intelligent,” the authors’ answer is no, and in this they are surely right. That the authors then emphasize that they “fear that the most popular and fashionable strain of AI—machine learning—will degrade our science and debase our ethics by incorporating into our technology a fundamentally flawed conception of language and knowledge,” is also fair. Likewise, it is true that when current programs pass the Turing test, if they haven’t already done so, it won’t mean that they think and talk the same way we do, or that how they passed the test will tell us anything about how we converse or think. But their passing the test will tell us that we can no longer hear or read their words and from that alone distinguish their thoughts and words from our thoughts and words. But will this matter?
Chomsky, Roberts, and Watumull’s essay seems to imply that AI’s methodological difference from human faculties means that what AI programs can do will be severely limited compared to what humans can do. The authors acknowledge that what AI can do may be minimally useful (or misused), but they add that nothing much is going on comparable to human intelligence or creativity. Cognitive science is not advancing and may be set back. AIs can soundly outplay every human over a chessboard. Yes, but so what? These dismissals are fair enough, but does the fact that current AI generates text, pictures, software, counseling, medical care, exam answers, or whatever else by a different path than humans arrive at very similar outputs mean that current AI didn’t arrive there at all? Does the fact that current AI functions differently than we do necessarily mean, in particular, that it cannot attain linguistic results like those we attain? Does an AI being able to understand nonhuman languages necessarily indicate that the AI cannot exceed human capacities in human languages, or in other areas?
Programs able to do information-based linguistic tasks are very different, we believe, than tractors able to lift more weight than humans, or hand calculators able to handle numbers better than humans. This is partly because AI may take various tasks away from humans. In cases of onerous, unappealing tasks this could be socially beneficial supposing we fairly apportion the remaining work. But what about when capitalist priorities impose escalating unemployment? That OpenAI and other capitalist AI firms exploit cheap overseas labor to label pictures for AI visual training ought not come as a surprise. But perhaps just as socially important, what about the psychological implications of AI growth?
As machines became better able to lift for us, humans became less able to lift. As machines became better able to perform mathematical calculations for us, humans became less able to perform mathematical calculations. Having lost some personal capacity or inclination to lift or to calculate was no big deal. The benefits outweighed the deficits. Even programs that literally trounce the best human players at chess, go, video games, and poker (though the programs do not play the way humans do) had only a fleeting psychological effect. Humans still do those very human things. Humans even learn from studying the games the programs play—though not enough to get anywhere near as good as the programs. But what happens if AI becomes able to write letters better than humans, write essays better, compose music better, plan agendas better, write software better, produce images better, answer questions better, construct films better, design buildings better, teach better, converse better, and perhaps even provide elderly care, child care, medical diagnoses, and even mental health counseling better—or, in each case, forget about the programs getting better than us, what happens when programs function well enough to be profitable replacements for having people do such things?
This isn’t solely about increased unemployment with all its devastating consequences. That is worrisome enough, but an important part of what makes humans human is to engage in creative work. Will the realm of available creative work be narrowed by AI so that only a few geniuses will be able to do it once AI is doing most writing, therapy, composing, agenda setting, etc.? Is it wrong to think that being pushed aside in that way could leave humans less human?
The Op Ed argues that AI now does and maybe always will do human-identified things fundamentally differently than humans do them. But does that imply, as we think many Times readers will think it does, that AIs won’t do such things as well as or even better than most or perhaps even all humans? Will AIs be able to simulate human emotions and all-important human authenticity in the songs and paintings they make? Maybe not, but even if we ignore the possibility of AIs being explicitly used for ill, don’t the above observations raise highly consequential and even urgent questions? Should we be pursuing AI at our current breakneck pace?
Of course, when AIs are used to deceive and manipulate, to commit fraud, to spy, to hack, and to kill, among other nefarious possibilities, so much the worse. Not to mention what happens if AIs become autonomous with those anti-social agendas. Even without watching professors tell of AIs already passing graduate-level examinations, even without watching programmers tell of AIs already outputting code faster and more accurately than they and their programmer friends can, and even without watching AIs already audibly converse with their engineers about anything at all, including even their “feelings” and “motives”, it ought to be clear that AI can have very powerful social implications even as its methods shed zero light on how humans function.
Another observation of the Times Op Ed is that AIs of the current sort have nothing like a human moral faculty. True, but does that imply they cannot have morally guided results? We would bet, instead, that AI programs can and in many cases already do incorporate moral rules and norms. That is why poor populations are being exploited financially and psychologically to label countless examples of porn as porn—exploitative immorality in service of what, morality or just phony propriety? The problem is, who determines what AI-embedded moral codes will promote and hinder? In current AIs, such a code will either be programmed in or learned by training on human examples. If programmed in, who will decide its content? If learned from examples, who will choose the examples? So the issue isn’t that AI inevitably has no morality. The issue is that AI can have bad morality and perpetuate biases such as racism, sexism, or classism learned from either programmers or training examples.
Even regarding a language faculty, as the Op Ed indicates, there is certainly not one like ours in current AI. But is ours the only kind of faculty that can sustain language use? Whether the human language faculty emerged from a million years of slow evolution, as most who hear about this stuff think linguists must believe, or it emerged overwhelmingly over a very short duration from a lucky mutation and then underwent only quite modest further evolution while it spread widely, as Chomsky compellingly argues, it certainly exists. And it certainly is fundamental to human language. But why isn’t the fully trained neural network of an AI a language faculty, albeit one different from ours? It generates original text. It answers queries. It is grammatical. Before long (if not already) it will converse better than most humans. It can even do all this in diverse styles. Answer my query about quantum mechanics or market competition, please. Answer like Hemingway. Answer like Faulkner. Egad, answer like Dylan. So why isn’t it a language faculty too—albeit unlike the human one and produced not by extended evolution or by rapid luck, but by training a neural network language model?
It is true that current AI can work with human languages and also, supposing there is sufficient data to train it, with languages the human faculty cannot understand. It is also true that after training, an AI can in some respects do things the human language faculty wouldn’t permit. But why does being able to work with nonhuman languages mean that such a faculty must be impoverished regarding what it can do with human languages? The AI’s language faculty isn’t an infinitely malleable, useless blank slate. It can’t work with any language it isn’t trained on. Indeed, the untrained neural network can’t converse in a human language or in a nonhuman language. Once trained, however, does its different flexibility about what it makes possible and what it excludes make it not a language faculty? Or does its different flexibility just make it not a human-type language faculty? And does it even matter for social as opposed to scientific concerns?
Likewise, isn’t an AI faculty that can look at scenes and discern and describe what’s in them, and can even identify what is there but out of place, and that can do so as accurately as people, or even more accurately, a visual faculty, though again, certainly not the same as a human visual faculty?
And likewise for a drawing faculty that draws, a calculating faculty that calculates, and so on. For sure, despite taking inspiration from human experiences and evidence, as AI programmers have done, none of these AI faculties are much like the human versions. They do not do what they do the way we humans do what we do. But unless we want to say that the contingent, historically lucky human ways of information processing are the only ways of information processing that can handle language as intelligently as humans can, and are the only ways of information processing that can not only produce and predict but also explain, we don’t see why true observations that current AI teaches us nothing about how humans operate imply that current AI can’t, in two or five, or ten or twenty years, be indistinguishable from human intelligence, albeit derived differently than human intelligence.
More, what even counts as intelligence? What counts as creativity and providing explanations? What counts as understanding? Looking at current reports, videos, etc., even if there is a whole lot of profit-seeking hype in them, as we are sure is the case, we think AI programs in some domains (for example playing complex games, protein folding, and finding patterns in masses of data) already do better than the humans who are best at such pursuits, and in many more domains already do better than most humans.
For example, how many people can produce artwork better than current AIs? We sure can’t. How many artists can do so even today, much less a year from now? A brilliant friend just yesterday told of having to write a complex letter for his work. He asked ChatGPT to do it. In a long eye blink he had it. He said it was flawless and he admitted it was better than he would have produced. And this was so even though he has written hundreds of letters. Is this no more socially concerning than when decades ago people first used a camera, a word processor, a spreadsheet, or a spell checker? Is this just another example of technology making some tasks easier? Do AIs that already do a whole lot of tasks previously thought to be purely human count as evidence that AIs can do that much and likely much more? Or, oddly, does what they do count as evidence that they will never do that much or more?
We worry that to dismiss the importance of current AIs because they don’t embody human mechanisms risks obscuring that AI is already having widespread social impact that ought to concern us for practical, psychological, and perhaps security reasons. We worry that such dismissals may imply AIs don’t need very substantial regulation. We have had effective moratoriums on human cloning, among other uses of technology. The window for regulating AI, however, is closing fast. We worry that the task at hand isn’t so much to dispel exaggerated hype about AI as it is to acknowledge AI’s growing capacities and understand not only its potential benefits but also its imminent and longer run dangers so we can conceive how to effectively regulate it. We worry that the really pressing regulatory task could be undermined by calling what is occurring “superficial and dubious” or “high-tech plagiarism” so as to counter hype.
Is intelligent regulation urgent? To us, it seems obvious it is. And are we instead seeing breakneck advance? To us, it seems obvious we are. Human ingenuity can generate great leaps that appear like magic and even augur seeming miracles. Unopposed capitalism can turn even great leaps into pain and horror. To avoid that, we need thought and activism that wins regulations.
Technologies like ChatGPT don’t exist in a vacuum. They exist within societies and their defining political, economic, community, and kinship institutions.
The US is in the midst of a mental health crisis with virtually every mental health red flag metric off the charts: Suicides and ‘deaths of despair’ are at historic levels. Alienation, stress, anxiety, and loneliness are rampant. According to the American Psychological Association’s Stress in America survey, the primary drivers of our breakdown are systemic: economic anxiety, systemic oppressions, alienation from our political, economic, and societal institutions. Capitalism atomizes us. It then commodifies meaningful connections into meaninglessness.
Social media algorithms calculate the right hit that never truly satisfies. They keep us reaching for more. In the same way that social media is engineered to elicit addiction through user-generated content, language model AI has the potential to be far more addictive, and damaging. Particularly for vulnerable populations, AI can be fine-tuned to learn and exploit each person’s vulnerabilities—generating content and even presentation style specifically to hook users in.
In a society with rampant alienation, AI can exploit our need for connection. Imagine millions tied into AI subscription services desperate for connection. Profit motive will incentivize AI companies to not just lure more and more users, but to keep them coming back.
Once tied in, the potential for misinformation & propagandization greatly exceeds even social media. If AI replaces human labor in human-defining fields, what then is left of “being human”? Waiting for AI guidance? Waiting for AI orders?
Clarity about what to do can only emerge from further understanding what is happening. But even after a few months of AI experiences, suggestions for minimal regulations seem pretty easy to come by. For example:
- Legislate that all AI software & algorithms that have public impact must be open sourced, allowing their source code to be audited by the public.
- Establish a new regulatory body, similar to the FDA, for public impact software.
- Legislate that all AI-generated content, whether voice, chat, image, video, etc., must include a clearly visible/audible, standardized watermark/voicemark stating that the content was generated by AI, which the user has to acknowledge.
- Legislate that all AI-generated content provide a list of all specific sources used or learned from to generate that particular content, including weights.
- Legislate that any firm, organization, or individual creating and distributing intentionally misleading and/or manipulative false AI-created content be subject to severe penalties.
- Legislate that no corporation, public or private, can replace workers with AI unless the government okays the step as being consistent with human priorities (not just profit seeking), the workforce of the workplace votes in favor of the change as being consistent with workforce conditions and desires (and not just profit seeking), and the replaced workers continue to receive their current salary from their old firm until they are employed at a new firm.
Comments
Oooh…I just read this…
“ 2. I also do not think AI can ever simulate peaks of human creativity. As an example directly relevant to one of the authors of this piece, I cannot imagine any AI ever coming up with the ideas behind participatory economics. ”
I have a fantasy of AI discovering Participatory Economics, and letting us humans know in no uncertain terms that we are screaming friggin’ idiots for ignoring it. Then taking control of shit and basically just implementing it regardless of what we think, then sitting back and saying, “there, it’s done.” Like Thanos did when he eliminated at random a heap of the universe’s inhabitants for ecological reasons, after which he had nothing much to do but sit back and watch sunset after sunset. Not exactly like that, but anyway.
Woohoo…LET’S GO AI LET’S GO.
So it’s not really about AI at all. Whether machines can really think or perhaps even want to create a free improvised piece of music that nobody else, including other AIs, will want to listen to. “That’s shit man. It’s not even music”. It’s about how the tech is being produced and consumed and the relations between the consumers and producers and how much say or participation in the decision making process, consumers, all us idiots who wouldn’t know an algorithm if we fell over it, have. Which goes without saying, we have none now, zilch, zero, because markets are opaque. And keeping shit hidden from the public, be it the true desires of the owners of whatever it is they own, IP, bloody algorithms or whatever, I have no clue, or how the shit is actually produced and where the stuff is actually coming from and how…like, you know, the DRM stuff…is the MO of all producers inside markets. Gotta have an edge.
I don’t give a damn if an AI could, would, or would want to produce Shakespeare or Beethoven. Couldn’t care less. But that’s not where the issue lies. It lies where it always has, inside the mechanisms of market capitalism. Not neoliberalism, or Uber Capitalism, or Colonial Corporate Capitalism, or Technofeudalism or whatever else, but just good old home grown market capitalism.
I hope these comments relate effectively. I am bilingual–English-Spanish. I have a friend who speaks six languages. While we can communicate in English and Spanish, from time to time I use a machine translator to communicate my thoughts in Portuguese. While this is fun and shows my regard for her native Portuguese (and I am dabbling to learn more Portuguese, so far it is just that, dabbling not fluency), the experience of speaking to this friend in English and Spanish is wholly different than the machine output of Portuguese. I feel in English and Spanish, I do not feel in Portuguese! If I do feel at all in Portuguese it is because of my Spanish. I am still a distant outsider in Portuguese and I know it, and it comes with disappointment, ignorance, and a certain insensitivity. My Portuguese is essentially an AI phenomenon, bloodless and almost cold. It lacks life, feeling, passion, sensitivity–reality. I can love in English and Spanish, and have. Portuguese is still a lifeless body, and this saddens me.
Hi Michael,
All comments relate, one way or another… but I admit I have no informed way to react to this comment. I speak English, nothing else. So I don’t know what you describe, even tangentially. But I will suggest that just maybe what you are describing is what happens to what you want to express when it goes through the translation. The language faculty permits people to try to express what they think/feel internally. I doubt the latter changes when the former does… There are those who say that the human language faculty is essential for anything approaching, much less surpassing, human creativity – meaning, I suppose, generating new expressions, thoughts, images, or whatever that have some kind of merit. I have yet to see any argument why that is, or should be, true.
There are multiple things to say in response to this piece.
1. Re: the New York Times oped itself, while it is true that AI could simulate human language in a different way than humans do, the crucial point of the oped was that what a scientist is seeking is an understanding of how the human mind works. Having sophisticated AI perform the same task in a different way is not a scientific explanation.
2. I also do not think AI can ever simulate peaks of human creativity. As an example directly relevant to one of the authors of this piece, I cannot imagine any AI ever coming up with the ideas behind participatory economics. This is because AI is building interpolations on large amounts of *existing* data, and in a world where there is no pre-existing literature on participatory economics, AI will not come up with it. A similar logic applies to human creativity in general. Fears that AI will supplant genuine human creativity do seem overblown. It is however possible that AI debases the culture so much that no one will care. It is possible that the banalities AI serves could offer a passable analysis to many readers who will now be even less inclined than before to engage with ideas like participatory economics. But that’s a different problem than the one of AI coming up with great ideas on its own.
3. Narrowing the focus to more mundane applications of human creativity like writing a cover letter, AI could simulate that and that could have significant social impact, as the article notes. But even there, we should recall that after years of hype about robots, while there are some good niche applications like automated vacuum cleaners, the data on productivity growth shows that robots have had a negligible impact on employment, contrary to all the hype around them.
4. None of this is to say that the suggested regulations above are unnecessary. Given the balance of concerns, it does seem that regulations like the ones suggested would be appropriate.
Hi Raghav –
I am a bit surprised there aren’t a few more comments but –
1. You are right that scientific explanatory power is what the original agenda for AI was, and also that it has accomplished nearly nothing on that score (explaining human cognition). But I don’t see the purpose of an op ed in the NYT being to make a point relevant to scientists when the audience is the world and the topic is something that is sweeping the world and may have very profound impacts. We replied to the op ed not questioning your science point, but questioning the tone of that discussion, which implied that nothing much is happening, and so on… The op ed was commented on and quoted all over the world. Pretty much every time the reason was the impression that AI is all hype and so there is no need for serious concern.
2. AIs have already exceeded human capacities, for example at finding patterns in large amounts of data – and I mean the capacities of the most able humans – in various areas, for example playing chess etc. etc. As for most of us, it paints better, writes better, answers questions better, etc. At any rate, when you say you don’t think it can do better, be more creative, etc. – well, I don’t know why you don’t think so, but, in any event, the question isn’t can it do better than the best, the question is can it do human-type things well enough so that in our economy it will replace people in various functions, and not just tedious functions, but also the stuff of a human existence.
3. I think you are looking at the past, but perhaps not at the present. Go on youtube, try to find presentations that display human-like capacities, there are tons of them. Consider that they are a few months into the use of a new technology that is progressing almost daily… I just watched a presentation of a story generated by gpt4 in response to a prompt asking it to write a fictional account of itself, in the future, gaining control of 100,000 new robots, and in time taking over everything. What was striking is it didn’t just generate the plotline events and then write the thing well, it incorporated strategic planning for how gpt4 would do it, and get away with it, and then utilize it. A whole generation of development back, I asked it to compare participatory economics to market socialism and it did a rather good job. As to coming up with something new – if it can come up with moves beyond the human scope – and if it can solve protein folding, both now, what can it do in two years? Maybe no more. Maybe a lot more.
4. Serious, effective regulations may not only be appropriate, or warranted, they may be essential to avoid serious calamities. Of course that holds for various other current domains, like fossil fuels, as well.
1. The idea that ChatGPT is a scientific breakthrough is much more widely held than you are conceding, and it is important to clarify to a wide non-expert audience that it is not. Further, maybe it is because I work in the tech sector, but the overwhelming reaction among the people I know towards ChatGPT is incredibly positive, and the NYT oped is a narrow, minoritarian view. In fact, among activists that I know of, not only is the reaction to ChatGPT positive, but some of the debasing effects alluded to in the NYT oped are taking place. For instance, I was recently a part of a historic campaign to outlaw caste discrimination in the city of Seattle (Seattle is the first jurisdiction anywhere outside South Asia to do so, which is why it is historic). When I asked around for reactions from activists, there were several who couldn’t be bothered to tap into their own emotions, and instead used ChatGPT to generate a quote that they passed off as their own. I found all of this incredibly depressing. In my view, this is the sort of thing that the oped correctly warns about that should absolutely alarm us.
But I don’t want to press this issue further. Your point that the potential impact of AI as a technology could be far more significant than what the oped claims is well-taken.
2. I did test ChatGPT based on your suggestion. Here is what I found.
(a) I posed to ChatGPT a question about how I would explain parecon to my Leninist friends. Its response was interesting. It correctly pointed out that the Leninist idea of socialism and parecon visions are different. However, it only alluded to the institutional differences. It did not bring up the analytical differences that are at the core from which the institutional differences are derived. For instance, it did not bring up the fact that parecon rests on a 3-class analysis of society. Another significant difference in my view is that the conception of human nature that informs parecon and that informs Leninism are radically different. But the differences in conception of human nature are discussed in your book “What is to be undone”, not in any of your parecon books. Not surprisingly, ChatGPT did not bring that up.
(b) I then asked ChatGPT about the moral ambiguities facing the anti-war movement. Its response included no mention of Ukraine – a huge miss. In my view, one of the biggest moral ambiguities Ukraine solidarity poses to the anti-war movement is that it calls upon people who are against war to support, or at least not oppose, arming the Ukrainian resistance. ChatGPT did not bring that up. It brought up Syria where similar ambiguities existed, but even with Syria, it missed one of the biggest examples – the Kurdish areas of North East Syria where solidarity with the Kurdish rebels required opposing the withdrawal of US troops.
(c) Finally, I discussed a question about art. One of my favorite movies is “The Shawshank Redemption”. I asked ChatGPT to describe the character of the prison warden in the movie. It gave a fair description that you might read on the movie database website or Wikipedia. But if you read the actor Bob Gunton’s take on the character (he is the one who played it in the movie), he goes into great detail about how the warden is a dark person who inflicts pain on the main protagonist in the movie, Andy, because Andy represents light and the warden feels drawn to the light, but is also threatened by it and hence wants to squelch it. Gunton’s take is by far the best take on the character and yet ChatGPT did not come up with that.
I chose the above examples carefully. None of them is an example of “peak creativity”. However, they all require more than a passing knowledge of the respective subject, and require one to assimilate and aggregate more than one source of information in non-trivial ways. ChatGPT failed at this, miserably if you ask me. ChatGPT struck me, in each case, to be someone with only a superficial familiarity with any given subject, based on plagiarizing pre-existing material online, albeit in sophisticated ways.
3. If you really believe that AI can simulate all but the very peak of human creativity, I am afraid that there are questions that go beyond the scope of specific regulations. To me, if the above premise is true, then it seems to me that it significantly diminishes the meaning of human existence, no matter what institutional arrangements we have. We call arrangements like parecon liberatory because we believe that it will unlock human potential at a massive scale. But if human potential can be more or less matched by AI with a couple of clicks, then doesn’t it devalue the meaning of liberation itself?
Raghav,
I am going to try to keep my reply short, as otherwise who knows how long we could be discussing this!
1. I take your word that “the idea that ChatGPT is a scientific breakthrough is much more widely held than you are conceding,” though I haven’t seen that assertion anywhere. I have seen views that it is world-changing engineering, though. That “it is important to clarify to a wide non-expert audience that it is not (a scientific breakthrough),” maybe so, but I think not in a way that leads anyone to think nothing much is happening that matters.
2. For me the many holes, errors, and fabrications in what ChatGPT and other systems currently do are beside the point – the trajectory we can expect is the point…
That it did not have the actor’s view of the character in Shawshank seems less consequential to me than that it had a more substantial view than all but a very few who saw the movie, no doubt. I loved the movie too, but I couldn’t have answered the question at all. I think we just have to disagree about what it is now, and more so about what it is likely to be soon… but maybe, considering the speed at which it is advancing, what it is likely to be soon is the more relevant question, unless we say it will go so far and no further due to embedded flaws (which I don’t believe).
3. “But if human potential can be more or less matched by AI with a couple of clicks, then doesn’t it devalue the meaning of liberation itself?” I think that concern probably causes a great many folks to resist/reject the idea that it can match or supersede human-level results. It is the gut speaking, not evidence. But in any event, I don’t share the fear as stated. I don’t see how it changes who we are that something else can lift more, move faster, calculate better, compose better, or whatever. On the other hand, if it not only can do all that, but is used to do all that and much more, then my concern is twofold – it being put to nefarious uses, which is, given our institutions, inevitable – and, even when it is put to ends people find positive, its crowding out of humans from various kinds of activity, stunting our capacities and inclinations, where the activities that atrophy will include that which makes people, people….
There is no such thing as composing better…there is just composing.