A Prelude To The Ethics Of Artificial Intelligence
Gone are the days when your company or organization must decide if they will use artificial intelligence. It is now just a matter of how.
With the rapid increase in AI adoption, it was inevitable that some ethically questionable use cases would pop up. Students using chatbots like ChatGPT to write papers is an obvious example, but the reverse is equally worrying. As reported in The New York Times, a business professor at a Boston-area university was allegedly using ChatGPT to grade papers and mistakenly left the prompt in when returning the comments to students. Given the soaring cost of higher education, the student was understandably concerned and requested a tuition refund for the course.
While this situation is clearly ethically compromised - don't tell your students or employees not to use chatbots and then turn around and do it yourself - the majority of AI practices likely fall into a gray area. It would therefore be handy to have black-and-white ethical guidelines. In theory, not too much to ask. In practice, it would take an entire career of research, writing, and teaching to fully flesh out all the ethical implications associated with generative models.
But there is a distinction that allows us to establish some general best practices when dealing with AI.
The discussion of how to ethically approach artificial intelligence or machine learning began long before the actual technology emerged. The genesis can likely be traced to the landmark 1950 paper 'Computing Machinery and Intelligence' by Alan Turing. The paper introduced the concept of the Turing test, a method for determining whether a machine can exhibit what humans understand as intelligence.
In simplest terms, the Turing test puts a machine behind a curtain and asks whether a human, asking it a series of questions on the other side of that curtain, can tell if it is a machine. If the person is not able to discern whether it is a machine or another person giving the responses, the machine passes the test.
Historically, virtually no machine could pass the Turing test. What we were left with was a technology that is not intelligent by human standards, and is therefore an object. This determination then shapes the ethical conversation around that machine. You do not need to treat it as something with agency. Rather, it should be viewed as any other tool, a means to an end.
Examples of this kind of object technology could include computers, telephones, or automobiles. The ethical questions that come up for these machines are not about the things in themselves but as objects for our use, such as issues of equality of access, any potential programming bias, or the privacy of the information they store.
Although ChatGPT and other large language models may exhibit certain patterns in their responses that can help identify them as machines - such as tone or consistency - those cues are far from easy to notice. A Stanford University study from last year found that ChatGPT did pass the Turing test, and the technology has only gotten better since.
What this means is that ChatGPT and similar AI have human-like intelligence in that they are not discernibly different to the naked eye. In other words, we may have crossed into the machines-as-subjects era. By extension, they should be treated as ends in themselves.
According to the 2007 AI Magazine article, 'Machine Ethics: Creating an Ethical Intelligent Agent,' treating AI as a subject means that ethical questions about it should be 'concerned with ensuring that the behavior of machines toward human users, and perhaps other machines as well, is ethically acceptable.'
The ethical landscape here concerns the machines in themselves: how they behave, and how you act toward and relate to them, accounting for societal values, context, and logic. In other words, the ethics of human relationships.
AI is bringing change in all areas of life. But is it a subject or an object? In subtle but significant ways, you can make a case for both. We can be sure it is not neutral. Only by solving this riddle can we deal with the difficult ethical questions that come with the technology.