Why We Need to Develop an Ethical Approach to Technology
Julie Walsh, associate professor of philosophy at Wellesley, and her colleague Eni Mustafaraj, associate professor of computer science, are working to make ethics of technology, in particular computational, data-driven technology, a fundamental part of the liberal arts curriculum. They are focusing on how to teach future data scientists and software engineers to think of their work as not just technical but also ethical—an urgent imperative, given the tremendous social impact of new technologies.
“We want students to think deeply about their responsibilities as creators of future technologies. To ask themselves: Why are we doing this work? Who will benefit?” says Mustafaraj.
The two professors have received a National Science Foundation grant totaling nearly $400,000 over three years that will allow them to collaborate closely with students through methods, ethics, technology, and research (METER) seminars. It will also fund a three-year postbaccalaureate research position at the College. Mustafaraj, Walsh, and their seminar students will interview alums working in tech about the ethical considerations they face in their jobs; assess the College’s curriculum to determine how well it prepares students for ethical decision-making; and research ways to better prepare graduates to evaluate and make ethical decisions about evolving technologies.
Collaboration with students is essential to the project and its long-term success, say Mustafaraj and Walsh: students, equipped with the skills, frameworks, and knowledge the project builds, will be best positioned to act as change agents.
Walsh and Mustafaraj met at a faculty event in 2015, where they bonded over a mutual love of speculative fiction. Stories set in imagined futures, such as 2001: A Space Odyssey, The Matrix, and Ex Machina, depict the intersection of tech and human society in interesting ways, raising important and prescient moral questions, they noted. They have since been working together in various capacities, including as guest lecturers in each other’s classes.
“Students tend to see professors as isolated in their offices … but they love to hear them talk, and especially disagree with each other, outside the classroom,” says Walsh. “We wanted to erode the artificial boundaries between departments and disciplines.”
Before collaborating on the NSF proposal, they created a series of AI ethics labs, which were cut short by the pandemic. Renamed Tech Ethics, the labs resumed in spring 2022, bringing together students in Walsh’s PHIL 222: Ethics of Technology class and Mustafaraj’s CS 315: Data and Text Mining for the Web course.
Mustafaraj and Walsh say the goal of interdisciplinary ethics training for technologists is not to teach students what the “right” ethical decision is for any one case or question. Instead, by providing them with the analytical tools to assess the stakes of specific situations, they hope to empower students to ask questions of themselves, their communities, their employers, and their colleagues.
Just as programming and engineering must be learned, ethical decision-making is an acquired skill set. “Being a good person is really hard,” says Walsh. “People think they’re good without thinking about what frameworks or decision matrix they’re using … it takes training to think about this stuff.”
In the METER seminars, students will learn to evaluate the myriad issues raised by technology through historical and contemporary frameworks; conduct behavioral interviews and collect, collate, and analyze data; and have an opportunity to help design the first iteration of a comprehensive ethics of technology program—one that will be embedded across the College’s liberal arts curriculum and potentially serve as a model for peer institutions.
“Our long-term vision,” Mustafaraj and Walsh state in their grant application, “is to turn Wellesley into a leader in issues of ethics and equity in digital technology, especially with respect to the aspects of concern for women and other groups that experience systemic bias and harm.”
Mustafaraj, a data scientist who studies web-based systems and their complex interactions with people and social structures, says computers are not to blame for tech’s worldly ills, which include deepening economic inequality through job automation, weakening democratic systems through disinformation, and spreading and amplifying systemic bias, as when Amazon’s AI recruiting tool downgraded applications from graduates of women’s colleges.
Contrary to headlines, “the web is a democratizing force … where once there were gatekeepers, now there are few institutional protections,” she says. Theoretically, people are empowered with greater access to information and opportunity, but without robust regulation, power ends up in the hands of a few tech giants.
“Tech as it is currently deployed by corporations is primarily about extracting value from us for their commercial benefit, and only as a side effect to allow us to flourish in our pursuits,” Mustafaraj says.
To change that trajectory, she says, we must train computer scientists and tech leaders from diverse backgrounds who can bring different perspectives to their work, a recurrent theme in Mustafaraj’s teaching. “We need a deeply heterogeneous group to create tech that is more inclusive, and less racist,” she says.
Walsh and Mustafaraj’s project will bring together philosophy and computer science, students and alums, to shape the future of tech education at Wellesley and, as the next generation enters the workforce, to help ensure tech enhances rather than erodes human possibility.