Innovations in data science and artificial intelligence (AI) have a central role to play in supporting global efforts to combat Covid-19. The versatility of AI technologies is enabling scientists and technologists to address an impressively broad range of biomedical, epidemiological, and socio-economic challenges. This wide-reaching scientific capacity is, however, also raising a diverse array of ethical challenges. The need for researchers to act quickly and globally in tackling the coronavirus demands unprecedented practices of open research and responsible data sharing at a time when innovation ecosystems are hobbled by proprietary protectionism and a lack of public trust. Moreover, societally impactful interventions like digital contact tracing are raising fears of “surveillance creep” and are challenging widely held commitments to privacy, autonomy, and civil liberties. Pre-pandemic concerns that data-driven innovations may reinforce entrenched dynamics of social inequality have likewise only intensified, given the life-and-death consequences of biased and discriminatory public health outcomes. The following offers five steps toward responsible research and innovation that need to be taken to address these concerns. It presents a practice-based path to responsible AI design and discovery centered on open, accountable, equitable, and democratically governed processes and products. When taken from the start, these steps will not only enhance the capacity of innovators to tackle Covid-19 responsibly; they will also help to set the data science and AI community down a path that is both better prepared to cope with future pandemics and better equipped to support a more humane, rational, and just society of tomorrow.