Ctrl alt existential dread

Written by Ethan Gray

Illustration by @georgiabrownillustration

Following its launch on 30 November 2022, ChatGPT became the fastest application to reach the 100 million user milestone – doing so in only 2 months. For reference, TikTok took around 9 months, and Instagram took nearly 2.5 years. Since 2022, artificial intelligence (AI) has consumed headlines and infiltrated business, entertainment and leisure worldwide. Founders have launched swathes of startups in dozens of countries, tech giants plan to invest hundreds of billions of dollars in research and infrastructure, and governments are struggling to figure out how the technology could and should be regulated. Every few weeks, a new product or tool emerges that promises to upend traditional ways of working and creating.

Technologies that promise to transform societies have a mixed history of success and failure. It is crucial to understand how AI got to where it is today to grasp where it will go. We know what the world looked like before widespread AI adoption. What will it look like after? 

Conversing with ChatGPT can seem surreal.

It was the first time people could easily access the frontier of AI: a system capable of holding a multi-faceted dialogue on almost any topic. The technology that underpins the model, deep learning and neural networks, has been around for some time. As impressive as they are, large language models (LLMs), the type of AI receiving most of the coverage, are, in essence, incredibly complex prediction engines. The latest models from OpenAI, the creator of ChatGPT, are GPT-4o (Omni) and o1. o1 is OpenAI's most advanced reasoning model, and it takes longer to formulate a response to questions. Just as a person would ponder a question about the nature of gravitational interaction longer than they would 2+2, so will o1.

This extra time allows the prediction engine to generate more robust logic. GPT-4o is a multi-modal system that can simultaneously interpret text, images, video and voice, with a rumoured 1.8 trillion parameters. Parameters define a language model's behaviour and allow it to predict what should be said based on a user's input. Astronomical amounts of data are used to train the weights of these parameters and teach the model how it should respond to different inputs. Companies then use a process called reinforcement learning from human feedback to tune the model to favour outputs deemed most useful to people.
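To make the 'prediction engine' idea concrete, here is a deliberately toy sketch in Python. The vocabulary, scores and prompt are invented for illustration, not drawn from any real model: a trained LLM learns billions of weights from data rather than using a hand-written table.

```python
import math

# A hand-written toy: real models learn billions of these weights
# from training data instead of using a fixed table like this one.
vocab = ["Paris", "London", "pizza", "blue"]
scores = {"Paris": 4.0, "London": 2.5, "pizza": 0.1, "blue": 0.2}

def softmax(values):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(v) for v in values]
    total = sum(exps)
    return [e / total for e in exps]

prompt = "The capital of France is"
probs = softmax([scores[token] for token in vocab])

# The model 'responds' by picking from this distribution,
# one token at a time.
for token, p in sorted(zip(vocab, probs), key=lambda pair: -pair[1]):
    print(f"{prompt} -> {token}: {p:.1%}")
```

Scale that loop up to a vocabulary of tens of thousands of tokens, swap the hand-written table for over a trillion learned weights, and repeat it once per generated word, and you have the skeleton of a modern LLM.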

It is important to know some AI history to understand how the field developed and to grasp the kind of upward trajectory it may be on. The following outline is ChatGPT's own summary of AI history, from the advent of digital computers to the development of Generative Pre-trained Transformer (GPT) models.

AI is not flawless, and many of the kinks are still being ironed out. While models are often greater than the sum of their parts, training data and tuning can only go so far in improving their efficacy. The prediction engine makes mistakes: riddles that humans can easily solve will sometimes trip up models, and sometimes they hallucinate entirely false information.

Because language models are usually correct and interpret questions with nuance, it is easy to treat everything they say as fact. That is a mistake.

Their output should be double-checked. However, it would be foolish not to recognise how powerful they have become in such a short period. According to OpenAI, GPT-4 not only passes but beats most test takers on exams such as the bar exam, the LSAT, the GRE and most AP exams.

While LLMs and the application ecosystems around them are advancing rapidly, they are not the only kind of AI that stands to shift the way entire industries operate. AI will usher in significant changes across modern society, and medicine and scientific research could be some of the greatest beneficiaries. In 2020, Google's DeepMind, an AI research lab, released AlphaFold 2, a deep-learning model specifically designed to solve the protein folding problem.

There are over 300 000 000 types of proteins on Earth, each made up of strings of amino acids folded on top of one another. Before AlphaFold, researchers could spend years unravelling a single protein's structure. AlphaFold can now provide predictions within hours that are so accurate the protein folding problem is considered solved. Ewan Birney, the director of EMBL's European Bioinformatics Institute, said: 'This will be one of the most important datasets since the mapping of the human genome.'

In 2024, AlphaFold 3 was released, extending those predictions to DNA, RNA and other molecular structures. It is already being deployed to research how new drugs will affect humans, find new ways to combat infectious diseases and uncover how to break down plastics. It saves researchers precious time and funding and accelerates the development of technologies that can save lives.

However, as with any technology, bad actors can use positive advancements for nefarious purposes.

For example, AIs like AlphaFold could be used to develop new biological weapons. Splitting the atom unlocked abundant fossil fuel-free energy and the most dangerous weapons that humans have ever created. 

All fundamental breakthroughs create a new balancing act that society and its structures must adapt to. Will these new tools be used to help address inequalities and improve the human experience, or instead be directed towards violent and selfish ends?

This is most apparent with the internet and social media platforms. In 1995, 14% of Americans were online; Facebook would not launch for another decade, and the iPhone did not exist. Today, over 5 billion people use the internet and social media, and nearly 1.5 billion have an iPhone. Society had yet to adapt to social media and daily internet usage before the arrival of photorealistic computer-generated content and college-level essays produced from a single prompt.

The internet decreased the cost of distributing all types of media to almost nothing, radically changing the business models of legacy channels like TV, print media and radio. It gave users a deluge of content and information to consume. But in almost all cases, that content still had to be made by someone. That is no longer true. Soon, both distribution and content creation will effectively cost nothing.

Disinformation generated by AI systems, such as deepfakes or well-written fake news posts linking to model-created websites, is already here. You might have consumed half a dozen pieces of AI-generated content online and be none the wiser.

In 2016, Russia’s Internet Research Agency (IRA) waged a campaign to influence the US election, in one of the first mainstream cases of deliberate disinformation dissemination on a national scale.

While effective, the initiative required a huge amount of manpower: the IRA still had to create every account, write every post and doctor every photo. Now computers can do it all, and the flood of content will be asphyxiating. Tech companies are well aware of the problem but can only do so much to flag and filter content at scale. According to Statista, a data provider, 691 million fake accounts were removed from Facebook in the fourth quarter of 2023 alone. There is simply no way companies can keep up with the sheer volume of information.

Notably, statements generated by models are also increasingly persuasive.

A report released in April 2024 by Anthropic, the creator of Claude 3, one of the most advanced frontier language models, assessed how persuasive Claude was relative to humans. The difference is no longer statistically significant, and the models are still improving. When coupled with specific information on users, such as their search history, demographics and income, individuals or groups could generate targeted persuasive misinformation at a scale far beyond anything society has seen. Similarly, society has never had to grapple with a tool that can mimic the human creative process so well.

With modern AI models, relatively high-quality art, videography, music and creative writing are all just a few prompts away. Sora, an unreleased technology from OpenAI, can generate entire high-definition video worlds from a single prompt. Suno, an AI music generator, can produce a 1980s disco song about the history of the World Cup sung in Paul McCartney's voice. Midjourney, an image generation model, can create photo-realistic images and other artistic portraits, the only limiting factor being your imagination.

That is to say nothing of the extensive text and writing tools that can produce entire books or screenplays. It is a challenging reality to come to terms with: these were all domains where, just a few years ago, only humans had the cognitive capacity to work. An increasing share of the more than 100 000 songs uploaded to streaming platforms each day is AI-generated, and the Writers Guild of America partially premised its 2023 strike on not being replaced by AI.

A slew of ethical and legal questions is swirling around AI-generated creative content. Pairing these tools with the human development process can lead to novel and truly unique creations. However, models are inherently derivative, as the limits of their training data define their creations, at least for now.

What happens when someone creates and profits from AI-generated work that uses someone else's music, copyrighted images or a journalist's reporting? The New York Times is suing OpenAI over this right now, and there is much to be litigated over what counts as 'fair use' with AI. One argument posited by the companies developing these models is that all human creations build on and incorporate pieces of what came before them; AI models, they contend, are doing the same thing.

In a conversation with Ezra Klein, Holly Herndon, an American composer, refers to AI as collective intelligence, because these systems effectively index human knowledge during training. This idea certainly has some merit.

The issue arises when a model's output copies another's work too closely. But where is the line drawn? I don't think anyone has a clear answer yet.

Three things are certain.  

One way or another, AI will transform entire industries. Governments will either need to define fair use for AI tools, or the courts will establish it through litigation. And humans will remain indispensable to the creative process, even though it seems like AI can replace much of what they do well.  

In the realm of white-collar office work, there has been a continuous drumbeat of hyperbolic claims that mass layoffs are imminent because AI will replace everyone. While there will certainly be changes to how people work, that is unlikely for positions that demand multiple layers of thought or nuanced extrapolation from ambiguity.

Certain jobs, such as customer support, are riper for replacement. Klarna, a financial technology company, replaced the work of 700 customer support agents with AI and saw a 25% drop in repeat customer inquiries. This is an area where AI excels: models can be indexed with a company's information, their ability to recall that information and handle Q&A is excellent, and they can operate in almost any language and handle speech-to-text well. That said, mass workforce replacement in other domains is not imminent. The nature of work will change, and people who learn to use AI in their workflow will excel, because it amplifies someone's capacity to get things done. Established companies with large workforces, and governments, adopt technology more slowly than it is developed, giving people time to familiarise themselves with new tools. Nobody knows exactly how AI will fit into every industry, job and company, so people should not be idle as it develops; they should prepare for whatever changes stick by learning to leverage AI to their advantage.
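To illustrate the 'indexed with company information' pattern behind the customer-support example above, here is a deliberately toy sketch in Python. The FAQ entries and the word-overlap scoring are invented for illustration; production support bots use embedding-based search over real company documents and pass the retrieved context to an actual LLM.

```python
# Toy retrieval step for a support bot: find the company document
# most relevant to a question, then hand it to a language model.
faq = {
    "refunds": "Refunds are processed within 5 business days.",
    "shipping": "Standard shipping takes 3-7 business days.",
    "returns": "Items can be returned within 30 days of delivery.",
}

def retrieve(question: str) -> str:
    """Pick the FAQ entry sharing the most words with the question."""
    q_words = set(question.lower().split())
    def overlap(entry):
        topic, text = entry
        return len(q_words & set((topic + " " + text).lower().split()))
    return max(faq.items(), key=overlap)[1]

question = "How long do refunds take?"
context = retrieve(question)
# In a real system, `context` and `question` would now be sent to an
# LLM, which can answer in the customer's own language.
print(f"Context: {context}")
```

The design point is the division of labour: the index supplies accurate company facts, while the model supplies fluent, multilingual conversation on top of them.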

Much of the alarm stemming from new AI technologies is rooted in how they challenge assumptions about what makes humans unique on Earth. As Ezra Klein put it in a recent podcast on AI: 'One of the chilling thoughts that I have about it is that its fundamental message is that you are derivative, you are replaceable.' 

The same could be said of several past technologies: the industrial revolution replaced huge swathes of human labour with machines. The difference with AI is that it does not just automate monotonous physical tasks or hand humans a new tool for interacting with the world. These systems may be nothing more than incredibly complex mathematical models, but they are getting better at what the human brain does across several previously untouchable domains, even if they do not understand why they are doing what they are doing. That is a new paradigm.

This is why conversations surrounding AI become existential so quickly.

Alarming statements from influential scientists and entrepreneurs like Elon Musk, Sam Altman and Gary Marcus, as well as the sector's rapid advancement, have put regulators on alert. In a 2023 opinion piece in The Economist, Gary Marcus and Anka Reuel, two academics specialising in AI, wrote: 'It is in this context that we call for the immediate development of a global, neutral, non-profit international agency for AI, with guidance and buy-in from governments, large technology companies, non-profits, academia and society at large, aimed at collaboratively finding governance and technical solutions to promote safe, secure and peaceful AI technologies.'

Key stakeholders hope that such multilateral organisations can help humanity harness the power of AI, while developing and deploying it safely. The industry is growing and innovating rapidly. Nobody knows who the long-term market winners will be or what applications will be created. 

That is why it would be irresponsible to over-regulate the development of AI technology: it must be allowed to mature, and its potential for benefiting humanity is too high. That said, there are clear cases of potentially dangerous misuse at scale, and too many countries are involved in its development for there to be no coordination or communication. A respected international agency could help define core safety and regulatory standards for individual countries to base their own AI legislation on, and could keep regulators informed about the nature of the technology's development.

Technology and society have always had a reciprocal relationship; each enables the change and development of the other. Assessing the world before and after different technological or societal inventions is a great way to catalogue history. The development of agriculture 10 000 years ago allowed humans to congregate in cities and specialise in new types of labour, and created a need for property rights. The invention of the printing press in the 1450s helped launch the European Renaissance and spread the Protestant Reformation. The Morrill Land-Grant system of universities in the US created a new model for funding basic scientific research. The Industrial Revolution enabled the scale of destruction wrought by two world wars and the interconnected global system that developed in their aftermath. Atomic weapons fundamentally reshaped great power conflict and the calculus of international relations.

In all these cases, technology and society changed over time in response to one another. There is typically a painful adjustment period, particularly when society struggles to reorient itself around new technology.

Today's digital technologies are advancing faster than society can cope with, while most domestic and international institutions have not changed meaningfully in decades. Instantaneous global communication and social media give anyone access to endless information streams from around the world; neither our social structures nor our brains evolved to operate in a world like that.

In the last few years, researchers, journalists and politicians have built a greater understanding of the internet's impact on humans. However, changes in social and government institutions have not meaningfully manifested beyond privacy laws and some congressional hearings where politicians could rail against tech CEOs for soundbites. Before we could fully adapt, AI and its potential for change took centre stage. Just as we study the impact of the Industrial Revolution, people 100 years from now will compare their world to the one before ubiquitous AI technologies.

Sundar Pichai, the CEO of Alphabet, Google’s parent company, has said that AI will be the biggest technological shift in our lifetimes and might be more significant than the internet itself. That statement has yet to be proven true, but it is noteworthy that people developing the industry's cutting edge believe it. In his 2023 book, Invention and Innovation, Vaclav Smil warns readers to be sceptical of claims touting ever-accelerating technological growth and outlines several failures throughout history. 

It is right to be cautious of blustering claims until the adoption of new tools has been borne out. However, it would be foolish not to recognise AI's potential or to plan for a world with widely adopted AI systems. If it has even half the impact the internet has had on humanity, substantial work lies ahead. Nobody knows what stage of development we will be at three years from now, so the best thing you can do is familiarise yourself with these tools and grapple with how they could impact your life. If history is a guide, governments and companies will be quick to capitalise on the gains of new technologies but too slow to mitigate their downsides.
