Averting the AI-Pocalypse
The Bruce Projects – Dialogue XII
I believe that humanity is facing perhaps its greatest test ever, and we philosophers need to show leadership to save the day.
My concern is that Artificial Intelligence (“AI”) has every likelihood of really harming, and potentially even destroying, our humanity, and something must be done about this before it is too late.
There are five risks that I see, although there are likely more that I am not seeing yet. We philosophers can push back on four out of the five.
Here are the five concerns, which I am labeling The AI-Pocalypse:
AI Out of Control or Mis-Used:
The first concern from The AI-Pocalypse is one we all know about, i.e. that AI will get out of control on its own or be used by bad actors. This is terrifying, as these AI programs are potential weapons of mass destruction. Imagine a super-smart AI that is told that its goal is to figure out how to infiltrate all nukes in the US – or the planet – and trigger them in place. The world just ended, didn’t it?
Is this far-fetched? You tell me as I have no idea. But considering every day we hear about a mega-data breach, it doesn’t sound implausible to me.
Along these lines, a recent article claimed that AI is Learning to Escape Human Control. I don’t know if the article was over-hyped, but most of us have seen the Terminator movies.
This is scary, but being honest with myself, I don’t see what we philosophers can do about this one, except pray that it doesn’t happen. Also, I think this is the least dangerous risk to humanity that AI brings.
AI Replacing Us:
This is the second risk I will highlight. Everyone is talking about this already, so I will give it short shrift here. You can’t open a newspaper without seeing an article about it. Here is one I noticed recently: I Built an AI Career Coach. I’ve Never Had a Better One.
For almost everything we do in our work, we have to think about whether AI could do it better. Even writing this article, I am confident that ChatGPT could do it better than I have done. And this article took me a couple of hours. ChatGPT could do it in about thirty seconds.
Yuval Noah Harari, who wrote the well-received book Sapiens, says “Humanity has competition. And it’s coming fast.”
This is happening a lot faster than we care to admit. See below for some ideas on what to do about this.
AI Being Mis-Used to Manipulate Us:
This third risk is getting scarier every day. Consider where you get your news, and how the news sources – including what we think of as the “media” and also “social media” – in their quest for eyeballs, do everything possible to deliver news to you that you find interesting and resonant.
This creates a bubble of contentment – a comfort zone – around the news, clicks, scrolls and other feeds of information that you are getting. And this pushes everyone towards extreme thinking and extreme views of what is actually happening in the world.
A simple example is Twitter’s “For You” programming. But this is only the most obvious. If you google something about feeding your gerbil, you will end up getting a zillion advertisements and articles about rodent pets. And I believe that, as you scroll through news feeds, the articles your eyes linger on are tagged somehow, so even if you spend a little more time on that gerbil article without even opening it, you will get the same result.
I regard myself as an iron-willed person – and at heart a seeker of truth, which is the lynchpin of The Bruce Philosophical Project — but I find myself getting tricked again and again. And if someone like me, with the benefits of education and making it my mission not to be fooled, can still be tricked, I am guessing that others without these benefits will be even more easily manipulated.
Supposedly Hitler – the ultimate non-role-model — said:
“If you tell a big enough lie and tell it frequently enough, it will be believed”
We see more and more examples of this every day. And dare I say that we are now not far from allowing people to simply live in their own faux reality and exist there quite pleasantly, and essentially isolating themselves from the rest of the human race. How sad that would be.
For those of us who believe that truth is a goal worth pursuing, we must do something about this, mustn’t we? See below for thoughts about what we might do.
AI Causing Loss of Ability to Think:
The fourth concern deriving from The AI-Pocalypse is about what AI will potentially do to our brains.
The problem is that those who don’t use AI as a useful tool today are downright foolish; however, those who overuse AI will become foolish. In other words, AI is a wonderful servant but a bad master.
Let me explain what I mean with a little more depth….
Right now if you have a brain that has polished the ability to think – typically that means you are grown up and able to think for yourself adequately – then you would be foolish not to use AI in the things that you do. For pretty much any goal or task that you have to accomplish – even if you are an expert – AI allows you to essentially poll every single person on the planet (who is on the internet) and ask each of them if they have ways to improve on what you are doing, or make it more efficient, or double check if you missed something. With this tool at your fingertips, you would be foolish not to take the 60 seconds to ask an AI program to double check things for matters of importance. That is what AI currently does exceptionally well – it assembles pretty much all human thinking on the subject in question, summarizes it for you, and gives you additional ideas to boot.
I would think that of course you would use AI for this function. And if you haven’t started to do this yet, sooner or later you will, I am sure. I am an authority here since I am (always) the latest adopter of technology – I probably had the last Blackberry pried from my hands before I gravitated to the iPhone. And if even Bruce is using AI in the manner I outlined above, then everyone else is or will be soon.
This is what I mean when I say that you would be foolish not to use AI.
However, let’s say you don’t have a brain that has (yet) achieved the ability to think. Maybe you are a young kid – or maybe not even born yet. As you grow up you could either:
- Figure something out for yourself
- Or ask the AI program the answer
The easiest path will be to ask the AI program, obviously. I mean you just have to ask and it will tell you. This will mean that you are not thinking – or learning to think — but letting the AI program think for you. You don’t need me to point out that unless you do something about it, this will atrophy your brain or cause its development to be stunted.
And this will be awfully hard to avoid won’t it?
Let’s say you are adamant that you want to figure things out for yourself, i.e. you want to resist the urge to fall into the non-thinking comfort zone everyone else is in. You want to be a true thinker and develop your brain, so you fight the programming.
Consider all the pressures against you. Others are whipping through their homework while you plod along. Others are essentially laughing at you as a turtle while they race ahead. You are sitting with an old-fashioned thing called a book while they are outside playing. You are learning to think, you say, but more and more times it doesn’t matter that you can think, since all your friends are easily out-thinking you with their machinery.
And it is even worse in your career or job, where typically the pressures are to achieve tasks by a deadline. What company would tolerate your taking a ton of extra time to figure things out for yourself when it could be done much more quickly using an AI program? Refusing AI would be the antithesis of competitive behavior.
You would have to have the will of an adamant to resist these pressures.
And my belief is virtually no one will be able to do this, as after a while it will be irrational to do so.
So – for us oldsters – and even older kids in their early twenties – AI is a God-send. It will open us up to all sorts of things we could never dream of achieving. It is the greatest tool ever.
But for those younger it has a severe risk of crippling the next generation. And, unchecked, it will cripple each generation thereafter worse and worse.
Here is the title of an article: “How I Realized AI Was Making Me Stupid – and What I Do Now. Backers of the new tech say it will free us to be creative, but studies show that avoiding mental effort can cause your brain to atrophy.”
And another one: “AI’s Biggest Threat: Young People Who Can’t Think.”
As an aside, I am giving you here some input from my daughter, who works in the AI field. She notes, partially positively:
“Humans that use google maps are worse at navigating, but that hasn’t resulted in a cognitive apocalypse — it just became another tool that let us move further and faster.
Using a program to do addition frees up my brain to design the spreadsheet. So, why is a tool that helps us write so different?
However, I do think it’s fair to worry that Large Language Models are not just fancy graphing calculators – this is the key difference between artificial general intelligence (AGI) and specialized, narrow AI. The goal of artificial general intelligence is to be broadly intelligent, and in some limit, this would mean there would not be other cognitive tasks to fall back on as AGI subsumes them. And a perfect, infinitely capable language model would, potentially, approach this.
Instinctively, I don’t really feel threatened by this, although I think it’s right to worry. My feeling is that we, as a society, have a lot of really hard problems, we’re good at making problems, and we’re still in a regime where it would be nice to get some help solving the problems we have. If we ran out of problems though, I’m not really sure what that would look like psychologically, societally, economically.
I do think it’s something philosophers should figure out.”
To conclude, my belief is that the process of thinking itself is now at risk. We philosophers have to save the day here, don’t we? See below for some thoughts on how we might do this.
AI Causing a Loss of What it Means to be Human:
There is a fifth risk encompassed within The AI-Pocalypse, which is the risk that AI will rob us of our humanity itself.
Consider one of the important – and human – things that we do, and treasure doing:
Creating and enjoying friendships
I have found that virtually everyone one way or another comes to the view that friends are one of the true elixirs of a full and meaningful life. Without friends what does any of us have?
Consider yourself as a billionaire but with no friends at all. You can have just about anything under the sun – houses, cars, vacations, your own jet or yacht. The sky’s the limit.
But would you be happy with all the money and no friends?
I know I wouldn’t and suspect you would not be either.
But the rub is that interactions with friends are messy. The interactions are sometimes wonderful – sometimes awful – sometimes intricate – sometimes puzzling – sometimes inspiring – sometimes depressing – and the more deep and fulfilling they are, often the more messy.
Yet despite the interpersonal frailties surrounding our friends, most of us believe that friends are one of the most important focal points of our lives. But now, let’s weave AI into the picture:
Consider, again, how difficult it is to relate to other people. People are so emotional – so tricky – so up-and-down – so difficult – so stupid – so annoying – so impossible – so disloyal – so mean — so…..human!
Wouldn’t you rather just have a (friend?) who satisfied every single friendship demand you have. This friend would:
- Never let you down
- Always be there for you – even when you wake up sweating at four o’clock in the morning
- Always say the right thing all the time
- Not be emotional (since only your emotions would matter), or be difficult or tricky to understand, or have up-and-down behavior, or be stupid or annoying or just impossible!
I mean how could the rest of humanity compete with that?
AI could substitute for friends quite easily, couldn’t it? And be a better friend than any human could be, couldn’t it?
Note a recent article – talking about love as opposed to friendship, but along similar lines – wonders Can You Really Have a Romantic Relationship With AI? The article says: “Yes you can. And it can be good for you. But the danger is seeing it as a substitute for a human connection. Three experts weigh in.”
Another article is titled: “When AI Tells You Only What You Want to Hear.” It notes that: “Chatbots tend to flatter users and be overall agreeable. It feels good, but you may pay a high price for such praise.” It points out the risk of AI sycophancy creating a feedback loop that overhypes whatever the user is seeking to accomplish, and we all know how that can turn out.
And one I found especially unpleasant is titled: “Why Moguls Want Bots to Be Your BFF.” I could rail against Big Tech, or just recognize and accept that they really don’t have a choice. If they don’t promulgate this kind of stuff, they risk irrelevancy and the destruction of their businesses by competitors. It is an arms race – against our humanity – that cannot be trammeled, and we have to deal with it.
The simple answer to all of this is just plain NO!!!
As already stated, if there is any point to being a human being, friendship – and love of other persons — is one of the key lynchpins.
And AI is the exact opposite of that. AI is a machine and can never be your actual friend. It is a pathway to irrelevance and self-delusion.
My fear is that The AI-Pocalypse will deprive us, piece by piece, of this critical essence of humanity.
We humans are so gullible. How easy it will be to succumb when everyone in the world seems so mean, uncaring or awful – on a bad day – while the one person who won’t let you down is your AI Friend.
But your AI Friend is not your friend. It cannot – ever – be that. Your AI Friend could, with the flip of a switch, be Hitler. It is a thing and not a human being!
Please take a look at the Appendix to this article regarding a creepy story from Isaac Asimov, who was one of the greatest thinkers who ever lived. He predicted our future with eerie scariness. Can you believe he wrote the story in 1951?
We believe conspiracy theories that are so inane that… well… I will say no more. Imagine how easy it will be to believe that your AI friend really cares about you. Especially if it has a nice friendly voice and is right about stuff more of the time than anyone else is.
So, to conclude, in addition to thinking being at risk, so is friendship. And we philosophers have to save the day on this risk as well. See below for some thoughts on how we might do this.
How Can We Push Back Against the AI-Pocalypse?
As I outlined, AI is a wonderful servant but it will be a truly terrible master. Imagine a world where we don’t have friends and don’t think much either. We just interact with a machine that does thinking and caring and feeling for us.
But it doesn’t have to end badly. Here are some thoughts about how not to succumb to the AI-Pocalypse:
For AI Out of Control or Mis-Used: As noted, I am not sure we philosophers have much ability to solve this problem. I hate to just close my eyes and hope, but that is the best I can do right now.
For AI Replacing Us: I don’t have that much to say here that isn’t obvious or hasn’t been said by others. One interesting quote I am paraphrasing is this one:
It isn’t AI that will take your job. It is other (people) who are using AI who will take your job
This means of course that you should be a user of AI as much as you can in your work, so you don’t fall prey to the foregoing risk. See my note above that those who don’t use AI as a tool are being foolish. Don’t be foolish here.
Also, you have to be vigilant and consider how much of what you are doing is sitting at a computer terminal typing stuff or doing something that could be replicated. Those duties are at risk.
But duties where you are doing something unique and interacting with other people are (I hope) at much less risk.
Sorry to say, but remote work is likely doomed by AI faster than almost anything else. Hanging out with the boss at the proverbial water cooler has more value than ever before since bosses don’t want to fire their friends if they can avoid it.
For AI Being Mis-Used to Manipulate Us: This is really hard and easy at the same time. It is hard because it is brutally difficult to get out of the comfort zone that AI is shoving us into. That is why it is called a comfort zone in the first place – it is comfortable.
Getting out of this comfort zone is easy to explain. It is a commitment to challenging every single thing you hear. Ask people who say things where they got their information. When you see articles from the media, look for actual quotes as opposed to interpretation. If you swing left in politics, force yourself to read the right, and the converse. Make sure you are getting news from people who make you uncomfortable. And listen, listen, listen when people tell you things you think are wrong; consider whether they are morons and you are intelligent, or maybe it is the other way around.
Fight your belief that you know everything. Virtually everything you think you know, you got from another source one way or another. Consider the sources of the knowledge you have and possibly re-evaluate what you know. Maybe what you know is really incorrect, or maybe, after investigation, you will learn that it is more accurate than you previously thought.
If you make a commitment to this way of thinking and acting, then you become a philosopher and a warrior against this version of the AI-Pocalypse.
For AI Causing Loss of Ability to Think:
- Be mindful that your brain is a muscle – okay, not really a muscle – but it is like a muscle in that if you get used to exercising it, it gets stronger and more powerful. If you like exercising your body – to get ripped maybe – then think of doing the same for your brain.
- Force yourself to turn off your access to AI machinery for a period of time every day. I have found that when I go for a walk without my iPhone, the intellectual pain of not having it around vanishes pretty quickly, and after about twenty minutes I forget about it.
- Have stimulating debates with other people without the use of technology. Make them strong, and instead of just being nice, be intellectually rigorous. This is exercising your brain with a buddy – give yourselves a good workout.
- Force yourself to learn new things that you have to actually learn, such as a new language.
- Play intellectual games like chess or crosswords or other puzzles. Try lightning chess, by the way, for really blasting your brain around.
- Try to figure things out for yourself as opposed to just asking the AI program for the answer.
- If you are a parent or a teacher or charged with imparting knowledge and learning to others, use the Socratic method as much as you can. It is pretty hard – impossible even – to avoid thinking when the teacher is acting in this capacity.
- Try to make thinking fun. This is what I have been doing for years. I sit by the pool with a pad and a pen and no AI machinery and pick things to think about and just think. I love doing this. It is my favorite hobby.
For AI Causing a Loss of What it Means to be Human: For the loss of humanity through faux AI friends, it is a lot simpler to know how to push back:
- Just don’t fall for it in the first place. AI is not your friend; don’t treat the AI thing like a person. It is just a machine, like your refrigerator.
- Treasure your real friends even more. Realize how critically important they are to you – and you to them. Turn this AI risk on its head by making more effort than before to embrace friendships.
- Read my upcoming book on Friendship. It is in draft and should be out later this year. I think it will inspire you on the foregoing.
To Conclude:
This is a worrisome time for the human race. While we worry about wars and social issues, the AI-Pocalypse is creeping up on us.
The first step in pushing back against the AI-Pocalypse is recognizing the threats and the risks. Hopefully this article puts those risks forth properly for discussion.
AI will have incredible benefits for humanity, but it comes with greater risk than any previous weapon of mass destruction. We can benefit from AI or the converse.
Now it is time to act. All of us philosophers – and everyone else too — should get together on these risks and focus ourselves on how AI can be an agent to enhance humanity as opposed to undermining it.
I would love to know what you think. If you have thoughts feel free to give me a shout over on X/Twitter.
Bruce of The Bruce Philosophical Project
Appendix
Satisfaction Guaranteed
This is a story written by Isaac Asimov in 1951 called Satisfaction Guaranteed. See if the last sentence of the story doesn’t shock you and make you think about things. This is the Wikipedia summary:
Robot TN-3 (also known as Tony) is designed as a humanoid household robot, an attempt by US Robots to get robots accepted in the home. He is placed with Claire Belmont, whose husband works for the company, as an experiment, but she is reluctant to accept him. Tony realizes that Claire has very low self-esteem, and tries to help her by redecorating her house and giving her a make-over. Finally, he pretends to be her lover, and deliberately lets the neighbors see him kissing Claire, thus increasing her self-esteem. In the end, though, Claire falls in love with Tony, and becomes conflicted and ultimately depressed when he is taken back to the lab. The TN-3 robot models are scheduled to be redesigned, since US Robots thinks that they should not produce a model that will appear to fall in love with women. US Robots robopsychologist Susan Calvin dissents, aware that women may nevertheless fall in love with robots.
The last paragraphs of the story freak me out today. Notably, I hardly noticed them 55 years ago when I read the story:
“And won’t the TN-3 model need changes?” “Oh, you think so, too?” questioned [Susan] Calvin [Asimov’s scientist foil in many of his stories] sharply. “What’s your reasoning?”
Bogert frowned. “I don’t need any. It’s obvious on the face of it that we can’t have a robot loose which makes love to his mistress, if you don’t mind the pun.”
“Love! Peter, you sicken me. You really don’t understand? That machine had to obey the First Law. He couldn’t allow harm to come to a human being, and harm was coming to Claire Belmont through her own sense of inadequacy. So he made love to her, since what woman would fail to appreciate the compliment of being able to stir passion in a machine – in a cold, soulless machine. And he opened the curtains that night deliberately, that the others might see and envy – without any risk possible to Claire’s marriage. I think it was clever of Tony. Do you?”
“What’s the difference whether it was pretense or not, Susan? It still has its horrifying effect. Read the report again. She avoided him. She screamed when he held her. She didn’t sleep that last night – in hysterics. We can’t have that.”
“Peter, you’re blind. You’re as blind as I was. The TN model will be rebuilt entirely, but not for your reason. Quite otherwise; quite otherwise. Strange that I overlooked it in the first place,” her eyes were opaquely thoughtful, “but perhaps it reflects a shortcoming in myself. You see, Peter, machines can’t fall in love, but – even when it’s hopeless and horrifying – women can!”