Silent Semi-Killers | Extinction-level impacts of AI that are right under our noses.
TLDR:
- There exists a class of catastrophic AI outcomes, Silent Semi-Killers, that lead to major (but not directly extinction-level) effects on our society and are hiding in plain sight
- These effects arise from our society's need to increase productivity (or the value of productivity / work output)
- Automating certain tasks often erodes the cognitive development, necessary skills/intuition, and mental-health benefits that those tasks provide
- While AI doesn't need to be ripped out of these domains, we need more time to analyze and restructure our systems to deal with its introduction
- Overview
- Examples of Silent Semi-Killers
- Root Issue: Rapidly Moving Tech in Underprepared Systems with a Need for Productivity
Overview
When we think about extinction/p(doom) scenarios, we often separate them into two categories:
- Immediate collapse: bioweapons, robot takeover, some form of AI-assisted warfare, etc.
- Gradual but visible changes: growing concentration of wealth, job losses, political instability.
These scenarios will be obvious. The second type may unfold a bit more under the radar, but people will know when they (or their peers) get laid off, or when the government begins to collapse. However, I think there exists a third type, one that is, in my view, among the most underrepresented ways AI could lead to the collapse, or at least the destabilization, of humanity. Many of these scenarios are occurring as we speak.
I call these scenarios “Silent Semi-Killers”, as they are:
- Silent: Already adopted, or in the process of being adopted, these habits/uses of AI don't seem harmful on the surface, yet they lead to catastrophic effects. Usually, they are justified as gains in "productivity"
- Semi-Killer: They won't necessarily take out the entire human race, but they will significantly change (most likely for the worse) certain core aspects of us and our society. They will also take a very long time to play out.
I'm using the word "scenario" a bit liberally here. For each one, I'll describe the exact AI usage occurring right now and how it could lead to a near-doomsday situation for humanity. At the end of this essay, I'll discuss two main trends these scenarios share:
- They are all rooted in a view embedded in our society: whatever increases your productivity/work output the most is best.
- AI can exist in many of these domains; however, we need time to adjust our systems accordingly.
Examples of Silent Semi-Killers
A Loss in Critical Thinking: AI in Education
I want to start with the most well-known silent semi-killer: AI in education. The education system is still very hesitant about using AI, which is good. Yet, as we know, students continue to use AI in the classroom.
In high school and college, AI is often used to offload tasks so students can spend time on work that feels more important. Here, the issue is a lack of motivation. Your average high school or college student wouldn't feel the need to use AI as a shortcut (or for cheating) in a class if they knew the material might come up on a test (where they don't get AI) or mattered to their career. This is why AI usage at this level isn't as threatening. However, I am worried that we will end up with more specialized but less well-rounded people in society. Sure, you could implement some complex algorithm from scratch, but knowing how to analyze society and history (something you would gain from the gen-ed you used AI on) is pretty important too. I want to clarify that neither students nor faculty are at fault here; it mainly comes down to our society's obsession with anything that increases productivity/work output, and with the work that seems most directly connected to our careers. Undergrads and high schoolers are being pushed to rapidly advance their careers, completely skipping (or giving little attention to) fundamental lower-level classes.
The most dangerous place for AI use in education, though, is among younger students. The unrestricted access students have to ChatGPT or Claude (outside of school) to complete their work is very scary. Again, we shouldn't blame the students; we are taught to take the fastest approach to getting our work done (another product of our emphasis on productivity). I won't harp on this point too much: we can all agree that middle and elementary schoolers using AI for their work is very damaging to their critical-thinking and problem-solving skills.
A common way the education system has responded to AI usage is by adopting it and incorporating it into classrooms. While I'm not an education expert, I believe this is the wrong choice; instead, we need to be stricter about AI usage. All of this is obviously easier said than done, but the bottom line is that the education system needs more time to adapt to AI.
The next part is speculation on the effects of this usage. While test scores may tell us something, these effects will not be immediately apparent. Certain students in better-funded education systems (or with parents who have the time to monitor their kids' education, which will become increasingly rare in the future) will avoid this mess. There will likely be more disparity in people's education, and thus we will deal with lots of polarization (from the lack of critical thinking), increased reliance on AI (bad habits tend to stick), and a more destabilized society (as a product of everything else).
A Loss in the Corporate Cycle: "Non-Replacing AI" in the Workforce
The first example was very broad; the next two are more focused, yet no less important. Here, "non-replacing AI" simply means AI that assists workers rather than replacing them, as opposed to AI taking over factory jobs, customer service roles, data analysis, fast-food chain jobs, etc. The use of AI assistance in certain roles is very damaging to the long-term stability of a company or field. I'll sketch a couple of scenarios to illustrate this idea, then describe the general trend below:
- Use of AI in a doctor's office: Imagine if, instead of having an intern interview a patient and take notes before the doctor comes in, it were all done by an AI. What's lost in this scenario is the intern's development of the skills the current medical system expects of trained doctors. The doctor was once in the intern's shoes, and the same goes for the doctor that they once took notes for. While the job might get done better, and the intern may have other things to do, something valuable may be lost when we throw AI into this equation.
- Software engineers: Here, the cycle under threat is the pipeline from junior to senior devs. Studies suggest that junior devs lose more from AI use than senior devs do. Following the trend above, when the senior devs retire, who will take their place? One would imagine a junior developer, but would they have the intuition to manage projects at an operational level?
These scenarios don't go too in-depth, but something crucial is lost when we rush to insert AI into the marketplace. Again, we often do this for productivity's sake, especially in the workforce, where one's competitors may be using AI. What isn't considered is the long-term stability of AI within these systems. While not as impactful on our society as the education example, we will see continued hits to our workforce. Instead of rushing to use AI in certain scenarios, companies should consider the long-term negative consequences that may arise. Yet, with stubborn investors who only worry about the next day's stock price, it's hard to imagine corporate leaders being given this time.
A Loss in a Healthy Lifestyle: Automating Mundane but Crucial Tasks
The last scenario may be the most underrepresented: the increasing automation of our lives. This is not exclusive to AI, but it is obviously accelerated by it. We are constantly encouraged to be more productive. Yet while computers get faster, transportation gets quicker, and workflows get optimized, the time in our day stays constant. As we jam-pack our days with more information and more time spent on high-level cognitive work, our brains become more strained. Outside of one's job, the mundane time spent searching for that one file on your computer, washing the dishes, making coffee, or just waiting for something to load is crucial for your mental health and wellness. Those repetitive, low-attention tasks are where you give your mind space to think and rest. This problem didn't start with AI; it dates back to the Industrial Revolution and the start of automation. In this day and age, however, AI is the final straw that will break many lifestyles. People feel the need to optimize everything and squeeze every possible second out of their tasks to increase productivity. Yet with every increase comes a decrease, and it's not so obvious that AI is a root issue when burnout occurs (even though, in large part, it is).
Root Issue: Rapidly Moving Tech in Underprepared Systems with a Need for Productivity
I think that, while it may be a stretch to include these in the category of "doomsday scenarios," they are often left out of the conversation and need to be treated with the same urgency. The scary part is that this isn't the same as, say, emergent misalignment in LLMs, which can mostly be solved through technical developments. This requires everyone's cooperation, both to understand AI and its impact on a domain AND to develop/adopt systems for AI usage. It also requires agreements: when some parties start using AI to increase productivity, everyone else follows to survive. Note that "parties" here could be other companies, but also your fellow workplace peers, other students, etc. At the same time, we drop busy work, skill-building exercises, and mindfulness routines because nothing motivates us to keep them. The point is that no one is intentionally trying to lose all of this; instead, the capitalist (I'm sorry to get all leftisty, but that's the real root) world we live in mixes up our priorities in the chase for productivity. The solution is that we need to slow down AI integration and conduct a critical analysis of the systems we live in. This will probably not happen, and instead we may see our society become much more unstable in the future, even if we reach a deal on not using AI for bioweapons, war, or any other mainstream threat to humanity.