
Sleepwalking Into Our Future

Five silhouettes of people in varying colors with binary code
By Robyn Kontra Tanner

Cal Poly faculty have a thing for intersections — conceptual territory where seemingly distinct ideas, fields of research and industries collide.

Since 2007, faculty in the Ethics + Emerging Sciences Group (EESG) have found a home on the fertile ground where moral philosophy meets technological frontiers like artificial intelligence and autonomous vehicles. Today, Cal Poly is home to the largest cluster of tech ethicists in North America grounded in philosophy or moral theory.

Headshot of Professor Patrick Lin

Patrick Lin
Professor of Philosophy, Director of the Ethics + Emerging Sciences Group and Affiliate Scholar at Stanford Law School’s Center for Internet and Society

“Public interest in technology ethics has just blown up in the last 10 years or so,” says philosophy Associate Professor Ryan Jenkins. “I think the cause of that has been a growing realization that technology is not this sort of neutral tool.”

Professor Patrick Lin, founder and director of the EESG, says the group illuminates subtle issues and hidden tradeoffs, such as those between convenience and privacy, or individual and collective interests. Even in tools that prolong human life or make work more efficient, the ethical implications can be an afterthought. He says the world needs forward-looking philosophy experts to be “canaries in the coal mine” of potentially dangerous advances, getting ahead of them with well-reasoned analysis before unintended consequences play out in the real world.

“We’re often among the very first to scope out a field, map it out, and by doing that, we end up framing a research agenda that people around the world can just take off with,” says Lin. “There’s greater awareness, but that also means there’s a lot more work to be done.”

Headshot of Ryan Jenkins

Ryan Jenkins
Associate Professor of Philosophy and the Women and Gender Studies Department and Senior Fellow of the Ethics + Emerging Sciences Group

This moment has been a long time coming, say Lin and Jenkins. Since the debuts of the iPhone, Facebook and YouTube, everyday tech has become far more complex and less transparent. Advanced apps and smart devices now harvest unparalleled amounts of data and connect it through the internet. Most recently, the pandemic has thrown our dependence on tech and society’s ethical standards into stark relief. The result is a kind of technological inertia, a creep forward in the way we allow technology to weave itself into all facets of our lives. It can be hard to rein in — and it can change the trajectory of our species.

“We’ve been doing this throughout all of human history, but the stakes have never been higher, given the outsized impact that technology has on the modern world,” says Lin. “We are all sleepwalking into our future if we’re not being deliberate about the technology we design and deploy.”


A Mind of Its Own

Artificial intelligence (AI) enables computers to perform tasks that previously required humans, like recognizing speech or images, translating between languages, and making final decisions in critical situations. Most people are probably familiar with how AI powers voice assistants like Siri and Alexa, or eerily pertinent online advertising. But it’s also involved behind the scenes in everything from finance to hiring practices to slowing the spread of COVID-19 on university campuses.

“A lot of the applications seem to reduce social choices and ethical choices into algorithms, which means you’re doing ethics by statistics, you’re doing ethics by math,” says Lin of his frequent concerns about AI. “Something about that approach deeply misses the point of ethics.”

“Generally, U.S. legislators and regulators are loath to regulate preemptively — they want to see a victim first.” — Professor Patrick Lin

Lin uses the example of AI in hiring: software that approves or rejects job applicants based on historical models of a company’s best employees.

“That may seem to make sense at first — until we recognize that past hiring practices could have been, and often were, discriminatory,” says Lin. “So, an AI that’s trained on a biased historical model will carry that bias forward, unintentionally perpetuating that discrimination in a textbook garbage-in, garbage-out problem in computing.”

“A lot of people will say that actually starts with the data that are fed into the AI, the way algorithms train models to generate particular outcomes,” says Jenkins. “What that means is that you need to be thinking about the way that specific social practices or institutions are likely to generate certain kinds of data.”
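To make the garbage-in, garbage-out point concrete, here is a minimal, hypothetical sketch. The records, the screening rule and the threshold are all invented for illustration, not drawn from any real hiring system; the point is only that a naive rule learned from skewed historical hires simply reproduces that skew.

```python
# Hypothetical illustration of "garbage in, garbage out" in automated hiring.
# The records and the screening rule below are invented for this sketch.

from collections import Counter

# Past hires skew heavily toward School A because of biased human decisions,
# not because School A candidates actually performed better.
historical_hires = [
    {"school": "A", "performed_well": True},
    {"school": "A", "performed_well": True},
    {"school": "A", "performed_well": False},
    {"school": "B", "performed_well": True},  # strong hire, but the group is underrepresented
]

# "Training": measure how often each group shows up among past hires.
counts = Counter(h["school"] for h in historical_hires)
total = sum(counts.values())
hire_rate = {school: n / total for school, n in counts.items()}  # A: 0.75, B: 0.25

# "Screening": approve applicants whose group was common in the biased record.
def screen(applicant: dict, threshold: float = 0.5) -> bool:
    return hire_rate.get(applicant["school"], 0.0) >= threshold

print(screen({"school": "A"}))  # True  -- the historical skew is carried forward
print(screen({"school": "B"}))  # False -- the bias is now automated
```

Any real fix has to start upstream, with the historical record itself — which is exactly the point Lin and Jenkins are making about the data fed into these systems.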

Context is key. Lin points out that AI-driven facial recognition software may help find lost children, but it runs into privacy and social justice concerns when used in policing scenarios. There’s rarely a one-size-fits-all solution, and it’s even more difficult to apply sweeping government regulation to something evolving this fast.

“Generally, U.S. legislators and regulators are loath to regulate preemptively — they want to see a victim first,” says Lin. “In the European Union, they tend to be more proactive about it, but being proactive also creates a risk that you’re going to be heavy-handed and stifle innovation.”

Instead, some tech companies boast about how they apply ethical standards from within, or at least make it appear that way to satisfy consumer demand.

“There’s a gold rush in Silicon Valley for technology ethics; the flip side of that is a lot of what’s getting peddled is fool’s gold — it’s ethics washing,” says Jenkins. He says tech giants’ marketing messages don’t always reflect the depth of analysis necessary to parse lasting ethical impacts of a new tool. “They’re incentivized to fake it a lot of times.”

Lin and Jenkins are seeing some shifts when it comes to checking the power of AI. For the first time, the National Highway Traffic Safety Administration is investigating nearly a dozen Tesla Autopilot crashes that have caused injuries and deaths, specifically cases in which cars on Autopilot have crashed into emergency vehicles.

In academia, some AI-focused conferences are beginning to require an ethics review as a guardrail for controversial papers. Most notably, United Nations leaders called for a moratorium on the sale and use of AI technology that poses a risk to human rights, like facial recognition tools, until more safeguards are developed.

“Things are starting to move along because folks are recognizing the real harms that could come from this technology,” says Lin.


High Tech Troops

The idea of lethal autonomous weapons — machines that can select and fire on human targets without human oversight — may have officially made the jump from science fiction to the battlefield. A United Nations report on a March 2020 skirmish in Libya said such a weapon was used, but stopped short of saying anyone was killed in the attack.

“It’s still extremely difficult to overcome that deep sense of unease that surrounds this technology.” — Associate Professor Ryan Jenkins

“There’s been a lot of hand wringing, to put it lightly, in the academic community about the development of robots like this,” says Jenkins. He jokes that it often conjures visions of the Terminator. “But it’s also a classic case that’s ripe for philosophical analysis because it’s very hard to say what is wrong with this kind of technology.”

The arguments for these kinds of weapons hold that they may kill fewer civilians overall than a human soldier or pilot; in theory, autonomous weapons may be more accurate because they won’t get tired, they won’t be vengeful, they won’t be racist, and they may be less likely to commit war crimes.

“But even if we suppose all that’s true, it’s still extremely difficult to overcome that deep sense of unease that surrounds this technology,” Jenkins says. Instead, he’s working to better clarify the objection for a more productive debate beyond the numbers of lives saved or taken. “A lot of people say it’s disrespectful to treat your enemy this way — it treats the enemy as vermin and it turns warfare into pest control. It’s even worse than turning war into a video game, which is what people say about drones.”

Other weapons try to find “a middle ground between shouting and shooting,” says Lin. For example, a device called an active denial system, a large microwave gun developed a decade ago, can make targets feel as though their skin is on fire, prompting them to flee. Lin says the gun could be used to disperse crowds in dangerous situations, like military checkpoints where a vehicle is not stopping or groups aren’t obeying commands.

“But here’s the problem: The military is generally bound by international laws and treaties that prohibit targeting noncombatants or civilians,” says Lin. “That’s exactly what this ‘pain ray’ is doing. You’re indiscriminately firing into a crowd, and you don’t know if there are combatants or not, so it immediately seems unethical.”

The EESG often analyzes new tools like this, but faculty also use their research and roles in prominent policy groups to look at the bigger picture. It’s not just for the sake of argument — Lin and Jenkins point to the philosophical reasoning that underpins international law and treaties governing wartime tactics.

“Superior technology alone does not win you wars,” says Lin, who is a member of the Center for a New American Security’s Taskforce on AI and National Security. “I like to remind folks that we have weapons that can [perform] surgical precision strikes, but what if it were more cost effective and more effective overall to just use diplomacy or humanitarian aid? That goodwill could help us avoid these conflicts in the first place.”


To Protect and Serve

While military ethics has been documented for at least 2,000 years, police ethics is a surprisingly undertheorized field, says Jenkins. At this critical moment for policing, the EESG is helping address the knowledge gap.

Jenkins and Lin, along with senior philosophy lecturer Keith Abney and researchers at the University of Florida, are in the midst of a National Science Foundation-funded project studying artificial intelligence that can be used to predict crime. The study acknowledges that policing is under intense scrutiny from the current social justice movement. Many argue that this kind of algorithm-driven technology has the power to reinforce systemic racism, put personal privacy at risk, and exacerbate distrust of law enforcement. The project plans to publish recommendations for law enforcement and computer scientists by 2023.

In the meantime, other nonlethal tech continues cropping up in precincts around the country. The NYPD recently tested a new “Digidog” in dangerous scenarios. These remotely operated robot dogs use surveillance cameras to scope out crises like hostage situations without risking the life of an officer. The robots can’t deploy weapons, but some lawmakers worry that may be around the corner.

“Robots can save police lives, and that’s a good thing,” Lin said in an article for WIRED. “But we also need to be careful it doesn’t make a police force more violent.”

According to Lin, studies show police are more likely to use some kind of force if they have a nonlethal option. And everything from robots and tear gas to rubber bullets, real canines and even robot suicide-bombers is available to officers today.

“Weapons like tasers do increase the use of force,” says Lin. “It makes it easier for police officers to escalate the situation and choose force, even though it’s not deadly force … as opposed to doing everything they can to deescalate.”


Facing the Future

But it’s not all doom and gloom.

“We don’t think that technology ethics is all about slapping hands,” says Jenkins, pointing to medical advances as one clear way tech benefits humanity. “And one of the skills that we want to impart is identifying those benefits and amplifying them at the same time that we’re trying to mitigate negative consequences.”

Lin, Jenkins and their colleagues in the Philosophy Department teach a spectrum of courses that help students build a toolkit of foundational ethics ideas, including basic human rights, dignity and consent. As a result, students can use ethical concepts as evaluation criteria in design thinking. Future engineers, programmers, scientists and business professionals can weigh a product’s effect on personal privacy while assessing that product’s efficiency. The courses also give students practice articulating their concerns and advocating for thoughtful tech advances.

“Here are some other things to think about when you’re thinking about making the world a better place,” Jenkins tells his students. “I think it shows that there can be a constructive collaboration between the humanities and STEM.”

And that collaboration is paying off: new professionals at the intersection of ethics and technology are finding careers in a fast-growing job market in the public and private sectors. Cal Poly’s Strategic Research Initiative focused on the technology workforce is fueling a variety of projects that aim to strengthen the university’s ability to study the trajectory of this growing field and prepare multidisciplinary tech leaders, specifically in the cybersecurity and aerospace industries.

“I wish I were in grad school at this time because there are so many opportunities,” says Lin. His advice to budding tech ethicists: don’t be afraid to explore understudied topics and technologies. They are cropping up so fast, there is no way to cover them all at once.

“Let’s seek out new rabbit holes to explore,” he says. “It’s important to be familiar with a wide range of fields, because you’re not going to be able to anticipate where the dots are that you need to connect.”