Ideas from philosophy and good AI
Monday, 6 November 2023
One of the ancient questions in philosophy is, “what do you have to do to be a good man, or to live a good life?”. At the moment, there are a number of meetings going on all over the world trying to decide on the ‘goodness’ of artificial intelligence (AI), and, much like the parents of a slightly wayward teenager, how AI can be kept on the straight and narrow until it grows up.
The question we should really be asking is, “what makes something good?” But the answer to that is much more complex than it at first seems. To a child, a good parent is someone who lets them eat sweets, lets them stay up late, and lets them watch TV or play computer games all day. A ‘bad’ parent is someone who places limits on those activities and makes them learn their spellings, recite their tables, and read books. However, the adult version of that child may well disagree with the view of their younger self if they have failed to achieve all that they were capable of and are unhappy with how their life has turned out.
Another question to ask is whether the same decision or activity is always the right one in order to be a good person or to achieve a good outcome. You can add to that whether the same right or wrong decisions apply in all cultures at all times. Here’s an example. A man walks into a crowded room and starts firing a gun at the other people in the room. Is that a good thing to do? Hopefully, most people feel the answer is ‘no’, but would want to know more. How about if the room is full of people who are about to destroy the world – including you? Does that make the murders acceptable? In how many films or TV shows has murder been made acceptable because a ‘bad’ person has been stopped from doing harm? If I were writing an algorithm about when my AI could kill people, it would have to be quite a complicated one. The point I’m trying to make is that the rules we live by are quite complex and often unstated.
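To see how quickly that complexity piles up, here is a deliberately crude sketch. Everything in it – the action names, the context flags, the rules themselves – is my own invention for illustration, not a real safety system; the point is that every clause encodes a human judgement that someone had to make and could argue about.

```python
# Toy sketch: trying to encode "is this action acceptable?" as explicit
# rules. Every clause below is a contestable human judgement.

def action_acceptable(action, context):
    """Return True if the toy rule set permits the action in this context."""
    if action != "fire_gun":
        return True  # this sketch only covers one action
    if context.get("room_occupied") and not context.get("imminent_threat"):
        return False  # harming bystanders is out
    if context.get("imminent_threat"):
        # Even here, real ethics would ask about proportionality,
        # certainty of the threat, and alternatives to violence.
        return context.get("no_non_violent_alternative", False)
    return False

print(action_acceptable("fire_gun", {"room_occupied": True}))  # False
print(action_acceptable("fire_gun", {"room_occupied": True,
                                     "imminent_threat": True,
                                     "no_non_violent_alternative": True}))  # True
```

Even this caricature needs three context flags to handle one scenario, and it still ignores culture, proportionality, and uncertainty – which is exactly the problem.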
Let’s go back to the fifth century BCE. Socrates said that a good man does not concern himself with petty personal wants but only whether his actions are good and just. Although, of course, that hasn’t told us what is meant by good or just. However, it gives us a starting point for our AI.
Aristotle in the fourth century BCE suggested that a good man is the man who acts and lives virtuously and derives happiness from that virtue. He introduced the idea of virtue. I’ve not yet heard anyone talk about virtue in association with an AI.
Plato, who came between Socrates and Aristotle, suggested four virtues: prudence, fortitude, temperance, and justice. Aristotle muddied the waters a little by suggesting that a virtue can be defined as a point between a deficiency and an excess of a trait. The point of greatest virtue lies not in the exact middle, but at a golden mean sometimes closer to one extreme than the other. I say “muddied the waters” because that makes it harder to code into an algorithm or to train an AI (or a human) on.
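Just to show what I mean, here is one hypothetical way the golden mean could be turned into a scoring function. The trait scale, the position of the mean, and the tolerance are all my own illustrative guesses, not anything Aristotle specified – which is rather the point about how hard this is to pin down.

```python
# Toy model of Aristotle's golden mean: a trait runs from deficiency (0.0)
# to excess (1.0), and virtue peaks at a mean that need not be 0.5.

def virtue_score(trait_level, mean_point, tolerance=0.2):
    """Score 1.0 at the mean, falling linearly to 0.0 as the trait
    drifts towards deficiency or excess."""
    return max(0.0, 1.0 - abs(trait_level - mean_point) / tolerance)

# Courage: the mean between cowardice (0.0) and recklessness (1.0),
# placed closer to the reckless end, as Aristotle allowed it might be.
courage_mean = 0.65

print(virtue_score(0.65, courage_mean))  # 1.0 - peak virtue
print(virtue_score(0.10, courage_mean))  # 0.0 - cowardice scores nothing
```

Notice how many arbitrary choices even this caricature needs: where to put the mean, how sharply virtue falls off, whether the fall-off is symmetric. A real AI would face all of those decisions, multiplied across every trait.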
Marcus Aurelius, the Stoic philosopher in the second century CE, said, “Waste no more time arguing what a good man should be. Be one.” What he’s suggesting is that we’re all wasting our time discussing being good; we should lead by example and live a good life. I like the idea of just getting on and doing it. However, having done stuff all day, how can I know at the end of it whether I have been doing good or not?
Thomas Babington Macaulay in the 19th century came up with a quote that seems to apply to much AI research across the world: “The measure of a man's character is what he would do if he knew he would never be found out.” Or maybe I’m just a little cynical about people who are training AIs to hack mainframes? Perhaps people working on AI are like the parents of teenagers, helping them to understand the need for kindness, honesty, courage, generosity, and integrity. These virtues can help to make the AI ‘good’. By cultivating virtues within the AI, we could, hopefully, shape its decisions.
Bertrand Russell, who died in 1970, said that the good life is one inspired by love and guided by knowledge. Clearly AIs are being fed lots of information – and, again hopefully, not too many alternative facts – but I have not sat through an AI presentation where someone mentioned the word ‘love’. It’s suggested that someone following Russell’s ideas will lead a good life with a deep sense of fulfilment. AIs don’t do feelings – unless you know better? No-one expects an AI to feel happy at the end of a day’s work.
Those working on AIs might do well to remember the words often attributed to Ralph Waldo Emerson in the 19th century: “The purpose of life is not to be happy. It is to be useful, to be honourable, to be compassionate, to have it make some difference that you have lived and lived well.” Again, I don’t know whether the word ‘honourable’ is in the mind of people training AIs, but hopefully the AI is making some difference, in a positive way.
My thinking at the moment is that AI is neither good nor bad; it is only the use that people put it to that will make it seem either one or the other. Like every other invention, it will lead to change, but it will also lead to new jobs being created. I am sure that there will be an arms race as bad actors use AI to attack mainframes and the good guys use AI to protect them. I am also uncertain whether legislation is going to be the most successful way to control AI. Large western governments will try this route because it’s the way they try to control everything else, but offshore development will continue regardless. What I am suggesting is that AI developers should look back at over 2000 years of philosophical thinking to decide what the right thing is to do when training the AI they are working on.
If you need anything written, contact Trevor Eddolls at iTech-Ed.