Course description

This course is about

  • the ethical implications of computer programs and robots, and
  • how to implement ethics or morality in computer programs and robots.

Sophisticated computer programs and robots already have significant ethical impacts on our lives. Consider: automated computer systems that decide which social-media posts are shown to us, or that decide which loan applications are denied, or that decide which job applications are forwarded, or that recommend which incarcerated people are granted parole. Consider self-driving cars that must decide whether to favor the car's occupants or outside pedestrians. Consider autonomous robotic health-care assistants that cajole patients to do things they might not want to do. Consider autonomous military drones that decide whom to kill. How do we check that those systems are being fair and unbiased, and treating people ethically or at least legally? Crucially, how can we possibly program ethics or human morality into computer programs and robots?

This course emphasizes the importance of applying human moral psychology to these issues. Every topic relies on human psychology: What is the psychology of fairness? What is the psychology of explaining or justifying an action? What is the psychology of responsibility and blame? What is the psychology of attributing moral standing or rights? Do those judgments vary across cultures? Importantly, what are the various psychological and evolutionary functions of human morality, and should those same functions be mimicked or acknowledged in artificial moral agents?

This course is structured as readings with discussion. Students are expected to do extensive reading every week, and to be prepared to discuss the readings in class.