
A.I.? Oh no

Writer: Michael Trotter-Lawson

Artificial intelligence is the ability of machines, as opposed to any form of natural life, to perceive, synthesize, and infer information. Artificial intelligence, aka AI, was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, each followed by disappointment and the loss of funding (known in the field as an "AI winter"), then by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches since its founding, including simulating the brain, modeling human problem solving, and imitating animal behavior. Recently, highly mathematical-statistical machine learning has come to dominate the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.


The field was founded on the assumption that human intelligence "can be so precisely described that a machine can be made to simulate it". This raised philosophical arguments about the mind and the ethical consequences of creating artificial beings endowed with human-like intelligence, issues that myth, fiction, and philosophy have explored since antiquity. Computer scientists and philosophers have since suggested that AI may become an existential risk to humanity if its rational capacities are not steered toward beneficial goals. Today, however, many uses of AI are ethically questionable at best. These include the use of AI in building social media algorithms, which I wrote about a couple of weeks back, and the rise of AI chatbots like ChatGPT, which are surging in popularity right now.


Chat Generative Pre-trained Transformer, more commonly known as ChatGPT, is a chatbot developed by OpenAI that launched in November of last year. ChatGPT is the fastest-growing consumer application to date, with over 100 million users. It garnered this popularity thanks to its ability to produce remarkably human-like writing, in addition to performing tasks like composing music, writing and debugging computer programs, emulating a Linux system, and much, much more. It has, however, demonstrated a lack of factual accuracy at times, a major flaw considering how popular the app has become. This phenomenon in AI is known as “hallucinating”, and it leads chatbots like ChatGPT to confidently give inaccurate information. In one example, a tester asked ChatGPT for "the largest country in Central America that isn't Mexico." ChatGPT responded with Guatemala, when the answer is, in fact, Nicaragua. Sam Altman, CEO of OpenAI, has himself said that AI's benefits for humankind could be “so unbelievably good that it's hard for me to even imagine.” He has also said that in a worst-case scenario, A.I. could kill us all.
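
If you're curious what asking ChatGPT that question looks like in code, here's a rough sketch using OpenAI's Python library as it worked around ChatGPT's launch. The model name and placeholder API key are my own illustrative assumptions, not anything from OpenAI's marketing:

```python
# A minimal sketch of posing the geography question to ChatGPT through
# OpenAI's Python library (interface as offered around launch).
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",  # assumed: the model behind ChatGPT at launch
    messages=[
        {"role": "user",
         "content": "What is the largest country in Central America "
                    "that isn't Mexico?"},
    ],
)

# Note the reply reads equally confident whether it's right or wrong --
# nothing in the response flags a hallucination, which is exactly why
# they're so easy to miss.
print(response["choices"][0]["message"]["content"])
```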


ChatGPT is just the latest in a very wide swath of modern artificial intelligence programs to spark controversy. Another OpenAI product, for example, is DALL-E, an application that creates digital images from a given text prompt. DALL-E and its competitors accomplish this by training on pools of billions of images scraped from across the internet, then generating new images based on what they can infer from their respective datasets. They can produce images in a wide range of styles, from mimicry of famous artists' painting techniques to photorealistic but surreal pictures. This has led to an influx of debate and controversy: artists quickly felt threatened by the onset of AI art, since systems like DALL-E can create in seconds images that once took an artist hours upon hours to accomplish. In addition, these AI art programs are trained on datasets of people's work, usually without their permission, making AI art derivative by definition.
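
And "a given text prompt" really is all it takes. Here's a sketch in the same vein as the ChatGPT example above, again using the era-appropriate interface of OpenAI's Python library, with a made-up prompt of my own:

```python
# A minimal sketch of generating an image with DALL-E through OpenAI's
# Python library (same assumed package and placeholder key as above).
import openai

openai.api_key = "YOUR_API_KEY"  # hypothetical placeholder

response = openai.Image.create(
    prompt="a photorealistic but surreal city, painted in the style "
           "of Vincent van Gogh",
    n=1,                # number of images to generate
    size="1024x1024",   # output resolution
)

# The API returns a URL to the finished image rather than raw pixels.
print(response["data"][0]["url"])
```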


Let’s move on to AI’s impact on the rest of the world, creative and otherwise. In our neo-capitalist landscape, the most profound effect AI will have is on the job market. I wrote about automation in terms of robotics back in October of last year, but artificial intelligence is progressing to a point where much more than physical labor-based jobs are at risk. AI has already taken on roles you’re aware of, like a great deal of front-end support and automated call systems. However, with the rapid growth of artificial intelligence’s capabilities, many jobs once thought safe from automation are now at risk. Just recently, Buzzfeed announced that many of its infamous quizzes would be written by AI, while laying off much of its writing staff. The long-term effects of artificial intelligence on employment are still hotly debated, though, with some estimates claiming up to 47% of U.S. jobs are at "high risk" of potential automation, while others classify only 9% of U.S. jobs as "high risk".


Technological unemployment is only one risk arising from AI’s sudden growth; there are many others you should be aware of. For instance, AI provides several tools that are particularly useful for authoritarian governments. Smart spyware and facial and vocal recognition enable widespread surveillance, and machine learning run on that surveillance data can classify potential enemies of the state and prevent them from hiding. Recommendation systems can precisely target propaganda and misinformation for maximum effect, while deepfakes make that misinformation easier to produce. Plus, advanced AI can make centralized decision making more competitive with liberal, decentralized systems such as markets. While these capabilities are most alarming in authoritarian states, they have the potential to undermine democracy right here in the U.S. as well.


Even without malicious intent, artificial intelligence has already created a host of problems through unintentional bias. Bias can be inadvertently introduced by the way training data is selected, and it can also emerge from correlations in that data. AI is often used to classify individuals into groups and then make predictions assuming each individual will resemble other members of their group; often, that assumption is unfair. An example of this is COMPAS (Correctional Offender Management Profiling for Alternative Sanctions), a commercial program widely used by U.S. courts to assess the likelihood of a defendant becoming a recidivist. ProPublica claims that the COMPAS-assigned recidivism risk of black defendants is far more likely to be overestimated than that of white defendants, even though the program was never told the races of the defendants. Algorithmic bias can also lead to unfair outcomes when AI is used for credit rating, hiring, and even content moderation on social networks like YouTube. Last year, the Association for Computing Machinery, at a conference in Seoul, South Korea, presented and published findings recommending that until AI and robotics systems can be demonstrated to be free of bias, they are unsafe, and that the use of self-learning neural networks trained on vast, unregulated sources of flawed internet data should be curtailed. Without government intervention, however, that seems incredibly unlikely.
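
How can a model be biased by race when it's never told anyone's race? A toy experiment makes it concrete. The sketch below is entirely hypothetical: synthetic data, a made-up "zip code" proxy, and scikit-learn's off-the-shelf logistic regression, not anything COMPAS actually does. The point is just that when historical labels are skewed against one group and some innocent-looking feature correlates with group membership, the unfairness comes out the other end anyway:

```python
# Hypothetical demo: a classifier never shown the protected attribute
# still produces unequal false-positive rates, because a proxy feature
# leaks the group and the training labels are already skewed.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 20_000

group = rng.integers(0, 2, n)        # protected attribute (hidden from model)
true_risk = rng.normal(0.0, 1.0, n)  # identical behavior in both groups
reoffends = true_risk > 0.5          # ground-truth outcome

# Skewed historical labels: some group-1 non-reoffenders were recorded
# as reoffenders anyway (think heavier policing of one neighborhood).
labels = reoffends.copy()
skew = (group == 1) & ~reoffends & (rng.random(n) < 0.25)
labels[skew] = True

# The model sees a noisy risk measure plus a "zip code" proxy that
# correlates with group -- but never the group itself.
risk_obs = true_risk + rng.normal(0.0, 0.5, n)
zip_proxy = group + rng.normal(0.0, 0.3, n)
X = np.column_stack([risk_obs, zip_proxy])

model = LogisticRegression().fit(X, labels)
predicted = model.predict(X)

for g in (0, 1):
    innocent = (group == g) & ~reoffends
    fpr = predicted[innocent].mean()  # share of non-reoffenders flagged
    print(f"group {g}: false positive rate = {fpr:.2%}")
```

Run it and group 1's false-positive rate comes out markedly higher than group 0's, echoing in miniature the disparity ProPublica reported, without the model ever seeing the group label.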


ChatGPT has shattered a lot of laypeople’s understanding of artificial intelligence. It recently cleared Google’s coding interview and was deemed eligible for a level 3 engineering position, with a salary of approximately $183,000. Simultaneously, according to some users, the ChatGPT-powered Bing is getting unhinged and argumentative. The truth is, no one knows what the future of AI has in store, but here’s what ChatGPT has to say about it:


“What have we learned about the state of artificial intelligence today? Well, to put it bluntly, it's complicated.


On the one hand, we've seen incredible advancements in the field of AI, with impressive breakthroughs in areas like computer vision, natural language processing, and machine learning. These developments have the potential to transform everything from healthcare to transportation, and could help us solve some of the world's most pressing challenges.

But on the other hand, we've also seen some serious concerns emerge about the ethics and impact of AI. From issues around bias in algorithms, to questions about job displacement and even existential threats to humanity, the stakes are high and the risks are real.


So what do we do with all of this? Well, for starters, we need to take these concerns seriously and engage in thoughtful, nuanced discussions about how to build and deploy AI in ways that are safe, ethical, and beneficial for all. This means involving experts from a wide range of fields - including computer science, philosophy, and social science - in the conversation, and prioritizing transparency, accountability, and responsible governance.


And as we navigate this complex terrain, it's worth remembering that AI is not an inevitable force of nature, but rather a product of human design and intention. It's up to us to shape the direction of this technology, and to ensure that it aligns with our values and goals as a society.


So let's roll up our sleeves and get to work, because the future of AI is in our hands."
