The more artificial intelligence enters our lives, the more important ethics and philosophy become. Below are four book recommendations.
The ethics of artificial intelligence is the part of the ethics of technology specific to robots and other artificially intelligent beings. It is typically divided into roboethics, a concern with the moral behavior of humans as they design, construct, use and treat artificially intelligent beings, and machine ethics, which is concerned with the moral behavior of artificial moral agents (AMAs). (Wikipedia)
Artificial intelligence has close connections with philosophy because both share several concepts, including intelligence, action, consciousness, epistemology, and even free will. Furthermore, the technology is concerned with the creation of artificial animals or artificial people (or, at least, artificial creatures), so the discipline is of considerable interest to philosophers. These factors contributed to the emergence of the philosophy of artificial intelligence. Some scholars argue that the AI community’s dismissal of philosophy is detrimental. (Wikipedia)
|Robot Ethics: The Ethical and Social Implications of Robotics:|
The book is an excellent primer on ethics and philosophy. It is definitely accessible to an undergraduate student—perhaps in the context of an undergraduate engineering ethics course. It is also a valuable reference for roboticists, providing an awareness of the social concerns related to their research.
Prominent experts from science and the humanities explore issues in robot ethics that range from sex to war.
Robots today serve in many roles, from entertainer to educator to executioner. As robotics technology advances, ethical concerns become more pressing: Should robots be programmed to follow a code of ethics, if this is even possible? Are there risks in forming emotional bonds with robots? How might society—and ethics—change with robotics? This volume is the first book to bring together prominent scholars and experts from both science and the humanities to explore these and other questions in this emerging field.
Starting with an overview of the issues and relevant ethical theories, the topics flow naturally from the possibility of programming robot ethics to the ethical use of military robots in war to legal and policy questions, including liability and privacy concerns. The contributors then turn to human-robot emotional relationships, examining the ethical implications of robots as sexual partners, caregivers, and servants. Finally, they explore the possibility that robots, whether biological-computational hybrids or pure machines, should be given rights or moral consideration.
Ethics is often slow to catch up with technological developments. This authoritative and accessible volume fills a gap in both scholarly literature and policy discussion, offering an impressive collection of expert analyses of the most crucial topics in this increasingly important field.
|Life 3.0: Being Human in the Age of Artificial Intelligence:|
The book begins by positing a scenario in which AI has exceeded human intelligence and become pervasive in society. Tegmark refers to different stages of human life since its inception: Life 1.0 referring to biological origins, Life 2.0 referring to cultural developments in humanity, and Life 3.0 referring to the technological age of humans. The book focuses on "Life 3.0", and on emerging technology such as artificial general intelligence that may someday, in addition to being able to learn, also be able to redesign its own hardware and internal structure.
The first part of the book looks at the origin of intelligence billions of years ago and goes on to project the future development of intelligence. Tegmark considers short-term effects of the development of advanced technology, such as technological unemployment, AI weapons, and the quest for human-level AGI (Artificial General Intelligence). The book cites examples like DeepMind and OpenAI, self-driving cars, and AI players that can defeat humans in chess, Jeopardy!, and Go.
After reviewing current issues in AI, Tegmark then considers a range of possible futures that feature intelligent machines and/or humans. The fifth chapter describes a number of potential outcomes that could occur, such as altered social structures, integration of humans and machines, and both positive and negative scenarios like Friendly AI or an AI apocalypse. Tegmark argues that the risks of AI come not from malevolence or conscious behavior per se, but rather from the misalignment of the goals of AI with those of humans. Many of the goals of the book align with those of the Future of Life Institute.
The remaining chapters explore concepts in physics, goals, consciousness and meaning, and investigate what society can do to help create a desirable future for humanity.
|Our Final Invention: Artificial Intelligence and the End of the Human Era:|
James Barrat weaves together explanations of AI concepts, AI history, and interviews with prominent AI researchers including Eliezer Yudkowsky and Ray Kurzweil. The book starts with an account of how an artificial general intelligence could become an artificial super-intelligence through recursive self-improvement. In subsequent chapters, the book covers the history of AI, including an account of the work done by I. J. Good, up to the work and ideas of researchers in the field today.
Throughout the book, Barrat takes a cautionary tone, focusing on the threats artificial super-intelligence poses to human existence. Barrat emphasizes how difficult it would be to control, or even to predict the actions of, something that may become orders of magnitude more intelligent than the most intelligent humans.
|Superintelligence: Paths, Dangers, Strategies:|
Superintelligence: Paths, Dangers, Strategies is a 2014 book by the Swedish philosopher Nick Bostrom from the University of Oxford. It argues that if machine brains surpass human brains in general intelligence, then this new superintelligence could replace humans as the dominant lifeform on Earth. Sufficiently intelligent machines could improve their own capabilities faster than human computer scientists, and the outcome could be an existential catastrophe for humans.
It is unknown whether human-level artificial intelligence will arrive in a matter of years, later this century, or not until future centuries. Regardless of the initial timescale, once human-level machine intelligence is developed, a "superintelligent" system that "greatly exceeds the cognitive performance of humans in virtually all domains of interest" would follow surprisingly quickly, possibly even instantaneously. Such a superintelligence would be difficult to control or restrain.