Let me tell you, ChatGPT-like AI will not change our world

Yong Jin Park, Howard University, Washington, DC, United States, lpark@law.harvard.edu

PUBLISHED ON: 17 Mar 2023

This piece cautions against both celebratory and ominous tales about the power of generative AI. It reflects on the potential of ChatGPT-like AI, comparing it to the 1990s invention of commercial internet providers and their contribution to the growth of the internet. ChatGPT's contribution, by comparison, will be minimal, given the ways its AI will, by design, perpetuate social bias and thus put its business model at risk. The piece points to regulatory environments outside of Silicon Valley, and to how 1990s internet law, such as Section 230 of the US Communications Decency Act, may shape the change generative AI brings about.

Hype about generative AI

A person you know has declared that ChatGPT will change our world. That person is Bill Gates. ChatGPT is like the invention of the internet, Mr. Gates argued: it will help us write faster, make our offices more efficient, and become the most significant game changer in education. The fact that Microsoft, the company Gates founded, is the lead investor in OpenAI (Crunchbase, 2023) does nothing to dim Mr. Gates's optimism.

Others disagree. Pundits warn that ChatGPT will make our world worse: students will cheat, journalists and programmers will lose their jobs, and lobbyists will exploit the AI to hijack elections. Surely, predicting the future is a risky business. But these predictions will turn out to be largely wrong, and I say this because OpenAI's ChatGPT, and self-trainable AI in general, is shockingly underwhelming.

AI has become a fact of our lives, as human decisions increasingly depend on ChatGPT-like automation (Helberger & Diakopoulos, 2023). This dependence shows no sign of wavering, fuelled by AI's drastic capacity to find 'the most likely pattern' in existing documents, just as ChatGPT and similar AIs such as Bard, Chatsonic, or Chinchilla do. In some areas, AI has become better than humans, imitating the works of Leonardo da Vinci or Vincent van Gogh and combining them into new art pieces.

But are these signs of a doomsday scenario or grounds for rosy optimism?

The answer depends on two things: 1) technicalities: whether generative AI like ChatGPT has the genuine capacity to generate the content that people fear and marvel at; and 2) economics: whether content produced by AI threatens the prevailing business models of digital platforms. If we set our baseline at the dawn of the industrial revolution, we see a prophet who shrugs and says the world has never had access to such an intelligent machine, one that surveys scattered pieces of knowledge. But if we move our baseline to Silicon Valley in the mid-1990s, the prophecies about the power of ChatGPT lose their charm.

Lessons from the 1990s internet

Look at the birth of the commercial internet. In 1996, Jerry Yang's Yahoo! had barely left a footprint in the online directory business. In 1998, two Stanford PhD students, Larry Page and Sergey Brin, had just published a beta version of Google search. When these old technologies were new, however, they revolutionised the internet: Yahoo! brought millions of websites under one directory, and Google ranked those sites by relevance. Along the way, they brought in advertising by delivering users (and their personal data) to advertisers. The 1990s revolution was this capacity to curate intelligent databases that pulled together an infinite number of personally relevant pages to sell advertising. People on their couches could instantly find organised answers, which had never been accessible up to that point in human history.

What is new about generative AI is its capacity to scan millions of data files, detect the most likely patterns in existing databases, and imitate the prevailing ones. The difference is that ChatGPT uses not just metadata, as Google does, but actual content to create a proxy close to the source files, and it does so anew each time. In Google search, the job of creating this proxy falls to users; in ChatGPT, the AI does it for them, inviting technologically deterministic fears about what it might do next (Neuman, 2016).

Technicalities of generative AI

ChatGPT, however, is only as good as the databases from which it generates curated responses to a human request. Yes, it writes a letter, but the letter is a curated proxy whose validity is bounded by the preexisting corpus on which the model was trained. A racist response to a ChatGPT prompt such as 'write about Barack Hussein Obama' is not a blunder of the AI but a function of the data it uses. This is largely predictable, given the way its databases crawl a preexisting universe of news, comments, opinions, and just about every source material that can be collected: an awful way to reinforce the stereotypes built into those data sources. We saw the same data-source problem in 2008, when Google introduced search autocomplete, only to find that its suggestions for Black people, Latinos, Asians, women, and others turned out to be racially or gender biased.
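To see why the output is a function of the data, consider a deliberately minimal sketch in Python: a bigram model that, like far larger systems, simply emits the continuation it has seen most often. This is emphatically not how ChatGPT is implemented (it relies on transformer neural networks trained on billions of documents), and the toy corpus and function names below are invented for illustration; but the underlying principle, that whatever skew dominates the training data dominates the output, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a generative model trains on.
# (Invented for illustration; real training corpora hold billions of documents.)
corpus = (
    "the senator praised the bill . "
    "the senator praised the vote . "
    "the senator criticised the press ."
).split()

# Count bigram frequencies: which word most often follows each word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word: str) -> str:
    """Return the continuation seen most often in the corpus."""
    return follows[word].most_common(1)[0][0]

# The 'generated' word simply mirrors whatever dominates the data:
print(most_likely_next("senator"))  # -> 'praised' (2 of 3 occurrences)
```

Swap the toy corpus for a web-scale one carrying stereotyped associations, and the same logic reproduces those stereotypes at scale; no one writes the bias in, the frequencies do.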

The business model of generative AI

In terms of business, ChatGPT will be too unpredictable to become a stand-alone advertising model. Copyright infringement raises liability issues, and here it is not a human but an AI algorithm that commits the errors, potentially saddling ChatGPT itself with responsibility. It is harder to build advertisements around risky content that purports to be one complete piece of art, writing, and so on. A subscription-based ChatGPT will also be difficult, since building vibrant user communities may prove tough when a social stigma attaches to using concocted content not based on individuals' genuine efforts.

Looking at the future: changes in and outside of Silicon Valley

The likely scenario? For a clue, look at the history of the online industry's financial success built on search advertising (Napoli, 2019). Given the risks ChatGPT carries, it is safe to bet that it will build its business on top of search. It will collect demographic data, create database profiles tied to certain searches, and deliver advertisements targeted at search prompts. Notice that this is an invention of the 1990s, when data-based advertising began matching individuals with personally relevant pages. We see this surveillance capitalism at play repeatedly (Zuboff, 2019). ChatGPT will reinforce data-driven business models and determine who should be targeted, included, or excluded. What the new ChatGPT-like AI reproduces is the old human bias baked into the system, which its designers must find a way to fix.

Ironically, what will change our world will happen outside of Silicon Valley.

The US Supreme Court is considering the scope of Section 230 of the 1996 Communications Decency Act, which (in the US context) strictly separates, and therefore protects, content delivery from content creation. Regardless of how US courts decide, the Court of Justice of the European Union could rule that similar protections are outdated in EU jurisdictions, thus classifying AI as not just surveilling and delivering content but creating it. This would be a game changer for the European Union as well as for the US, moving responsibility from users to AI algorithms.

Recall the Gonzalez v. Google case, in which the family of a person killed by the terrorist group ISIS sued Google over YouTube's recommendation of ISIS videos. In February 2023, the US Supreme Court heard each side's arguments but was puzzled by the position that the algorithm is liable for 'recommending (good or bad) content'. Where is the line to be drawn? Looking into the future, we can imagine cases like Gonzalez v. Google in which legal responsibility falls on the AI, although the likelihood of this shift is unknown. Either way, it will be a story of how 1990s internet law changes the future of AI.

Acknowledgment

The author is indebted to the vibrant discussions at BKC, Harvard Law School, and thankful for the kind editorial support of the journal Internet Policy Review. The viewpoints expressed in this piece are solely those of the author.

References

Crunchbase. (2023, March). Organisation OpenAI. https://www.crunchbase.com/organization/openai

Helberger, N., & Diakopoulos, N. (2023). ChatGPT and the AI Act. Internet Policy Review, 12(1).

Napoli, P. M. (2019). User data as public resource: Implications for social media regulation. Policy & Internet, 11(4), 439–459.

Neuman, W. R. (2016). The digital difference: Media technology and the theory of communication effects. Harvard University Press.

Zuboff, S. (2019). Surveillance capitalism and the challenge of collective action. New Labor Forum, 28(1), 10–29.
