by J. Joseph Miller
Although it is commonly understood among philosophers that science fiction can be of great use in understanding certain philosophical problems, moral theorists have been slower to take up science fiction as an explanatory tool.1 Unfortunately, I shall have to leave that broad observation at the level of assertion for now, as it is far beyond the scope of this article to demonstrate such a grand claim. Instead, I propose to explore the relationship between utilitarianism and Isaac Asimov’s Foundation series. In an article entitled “Ethical Evolving Artificial Intelligence: Asimov’s Computers and Robots,” Patricia Warrick explores what she calls Asimov’s “ethical technology.” There she argues that Asimov’s robots are programmed, in a Skinnerian behaviorist fashion, to regard “John Stuart Mill’s concept of ‘the greatest good for the greatest number’… [as] the essential element in the criteria for designing the [behaviorist] ideal” (191). In this essay, I would like to follow up on Warrick’s claim that Asimov’s robots are, in some sense, utilitarians. Indeed, I shall go one step beyond Warrick, arguing that the major plot moves in the series of novels and short stories that make up Asimov’s extended future history (a history that includes the events in The End of Eternity (1955), the Robot stories and novels, and the Foundation novels) are ultimately motivated by utilitarianism.2 Specifically, I will argue that the progression of the series can be read as a set of ever-more-precise answers to a set of related objections to utilitarianism, a set that I will call calculation problems.
Of course, Asimov himself does not explicitly describe his fiction as an exercise in utilitarian moral theory, but that is hardly surprising, for few outside the world of professional philosophy explicitly attach labels to their moral beliefs. Nonetheless, most people’s untutored beliefs about moral theory can be carved up into three general classes. There are the utilitarians (such as Bentham, Mill, Peter Singer, and David Hume) who hold that morality really is fundamentally concerned with producing the best overall consequences. Then there are the Kantians (Kant himself, along with contractarians like Locke and Rawls) who hold that morality is crucially concerned with the protection of autonomy. Finally, there are the virtue theorists (Aristotle, Aquinas, and Nussbaum, for instance) who hold that character is what counts most in morality. For most people, one of these three ways of thinking about the world colors much of their worldview.3 Asimov’s own view, at least as expressed in the Foundation series, is motivated by a generally utilitarian approach; indeed, throughout his future history, Asimov expresses a commitment to promoting the greatest good for all of humanity, an explicitly utilitarian goal. One of the central questions in his fiction, then, is how best to go about achieving that greatest good. I shall argue that many of the essential features of Asimov’s future history can be read as attempts to answer that very question.