Machines Rising

For decades, technology has been getting steadily smarter—to the point that we’re now experiencing a revolution in the way computer systems can process massive quantities of complex data, learn, adapt and make decisions that once required a human mind.

The applications for such artificially intelligent systems are endless, matched only by their potential to drastically change the way humans live, learn and work. Will artificial intelligence improve the way we live? Will it render humans obsolete? Is there a middle ground? Are the benefits worth the costs? And have we already come too far to turn back?

In this special section, Cal Poly professors from each of the university’s six colleges weigh in on the rise of a smarter class of machine — promising applications, potential challenges, and ways these systems will reshape the future.

TMI and the Informed Voter

If there’s one person at Cal Poly who’s equipped to define exactly what artificial intelligence is, it might be computer science professor Foaad Khosmood — and even so, his definition is pretty flexible.

“For years and years, artificial intelligence was just experts writing down in a coded way what they were doing in real life,” says Khosmood. “Within the last 15 to 20 years, we had this explosion of data becoming available — millions and billions of pieces of data for machines to look at and learn from.”

That massive level of data analysis, Khosmood says, is teaching machines to accomplish an increasingly complex range of tasks. And given enough computing power, the more data a machine has access to, the better it can do its job. The rise of social media and the ubiquitous selfie, for example, has driven a dramatic improvement in facial recognition software, as AI researchers now have access to untold billions of publicly available images of human faces.

Using these capabilities, Khosmood is addressing a widespread problem that gets little attention — lack of transparency in state governments. “In California, we have legislative hearings and meetings in Sacramento, but they aren’t written down,” he says. “Everything is recorded on video, but they are not in text format, they are not searchable, and therefore you have to watch like 90 hours of C-SPAN just in case somebody says something about a given topic. No one does that — even political reporters don’t do that anymore.”

Khosmood is working with the Institute for Advanced Technology and Public Policy, a team of Cal Poly faculty and student researchers founded by former state legislator Sam Blakeslee, to address this issue with the help of artificial intelligence. The project, called Digital Democracy, is a perfect test case for AI applications that refine a machine’s ability to learn based on an overabundance of information. Using natural language processing — Khosmood’s doctoral dissertation subject — along with voice analysis and face recognition technology, the Digital Democracy system digests thousands of hours of video to discover relevant information for concerned citizens interested in a given issue.
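The core idea is easy to sketch in miniature: once hearing video has been transcribed, even a simple text index makes it searchable by topic. The toy Python example below is purely illustrative; the transcript segments and matching logic are invented for this article, and the real Digital Democracy pipeline layers speaker identification, voice analysis and far more sophisticated natural language processing on top of this basic idea.

```python
# Illustrative sketch only: a toy keyword index over hypothetical transcript
# segments, showing how transcribed hearing video becomes searchable text.
# The segments and field layout below are invented for this example; the real
# Digital Democracy pipeline is far more sophisticated.
from collections import defaultdict

# Hypothetical transcript segments: (video_id, start_time_in_seconds, text)
segments = [
    ("hearing_042", 125.0, "The committee will now hear testimony on water rights."),
    ("hearing_042", 3610.5, "Our analysis shows groundwater levels declining statewide."),
    ("hearing_087", 48.0, "I move that we table the water rights amendment."),
]

def build_index(segments):
    """Map each lowercase word to the video segments in which it appears."""
    index = defaultdict(list)
    for video_id, start, text in segments:
        for word in text.lower().split():
            index[word.strip(".,;:!?")].append((video_id, start))
    return index

def search(index, query):
    """Return (video, timestamp) pairs whose transcript contains every query word."""
    results = None
    for word in query.lower().split():
        hits = set(index.get(word, []))
        results = hits if results is None else results & hits
    return sorted(results or [])

index = build_index(segments)
print(search(index, "water rights"))  # both hearings, with timestamps
```

A query for “water rights” returns both hearings along with timestamps, the kind of pointer that would otherwise take hours of video-watching to find.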

"Within the last 15 to 20 years, we had this explosion of data becoming available — millions and billions of pieces of data for machines to look at and learn from."Click To Tweet

Right now, Digital Democracy is focused primarily on state governments, because federal legislative proceedings are already pretty well covered. “State governments are a critical part of how our country works, but it’s an area that’s missing a lot of attention,” says Khosmood.

It’s a very specific set of tasks focused on a specific need, but for Khosmood and others working to advance the field of artificial intelligence, each additional capability is a step in the direction of more-advanced general AI systems.

The Food You Eat

Fresh produce is a delicate thing. Even within the food industry, fruit crops stand out for requiring large amounts of human labor at every stage of production.

“These aren’t desirable jobs for most people,” says Bo Liu, a professor of bioresource and agricultural engineering at Cal Poly. “It’s tedious and labor intensive, so if we can change those low-skill jobs to high-skill jobs, I think that’s a benefit.”

Liu and his research partner, computer science graduate student Jeremy Kerfs, are working to develop a more streamlined solution to this challenge: a robot swarm aimed at monitoring crop health and maximizing yield. The project is funded by the California State University Agricultural Research Institute and is a collaboration with the Cal Poly Strawberry Center, a group of faculty and students developing applied research with the state’s strawberry industry.

"Drones are coming, and they’re going to have a lot of applications."Click To Tweet

It’s an idea straight out of science fiction, with humans and autonomous robots working hand-in-hand. From the air, a small fleet of aerial drones patrols a strawberry field along a pre-programmed set of waypoints, taking photos from hundreds of feet in the air and feeding the images to human farmers on the ground. The farmers analyze the images for a variety of factors, determining whether the plants in the field are growing properly. If there’s a problem spot — say malnourished plants or a pest infestation — the farmers activate a second fleet of ground drones to take a closer look.

“Right now, you have to send people out to these spots to check out plants one-by-one,” says Liu.

In addition to finding and diagnosing problem spots in the field, the ground rovers have another function: helping farmers plan for the harvest. Using a subset of artificial intelligence known as deep learning, the rovers take and analyze images of the buds and fruit on the vines, calculating how much yield the farmer can expect — even predicting when each fruit will ripen. The system uses TensorFlow, Google’s open-source machine learning library; its predictions are made in real time and become more accurate as it processes more data.
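For readers curious what such a model looks like in code, here is a minimal TensorFlow/Keras sketch of an image classifier in the same general spirit. The class labels, image size and commented-out data directory are assumptions made purely for illustration; the article does not describe the Cal Poly team’s actual models or training data.

```python
# A minimal TensorFlow/Keras sketch of an image classifier in the same spirit
# as the one described above. The class labels, image size and data directory
# are assumptions made for illustration only.
import tensorflow as tf

NUM_CLASSES = 3  # hypothetical labels: unripe, ripening, ripe

model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255, input_shape=(128, 128, 3)),
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Hypothetical training step: expects folders of labeled fruit photos,
# e.g. images/unripe, images/ripening, images/ripe.
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "images", image_size=(128, 128), batch_size=32)
# model.fit(train_ds, epochs=10)
```

Trained on enough labeled photos, a model of this general shape could assign each new image a ripeness category, the kind of per-plant signal that feeds into a yield estimate.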

“Finding people to work the harvest is hard in the short period it takes fruit to ripen,” says Liu. “If we can predict how many people you need, how many trucks and how many resources, farmers can schedule ahead and save time and money.” Those enhancements to the operation, he says, should create more and better produce at a better price for consumers.

Liu’s system is still in its developmental phase — he and his students are building the ground rovers now, but the aerial drones are already patrolling the skies and the deep learning protocols are ready to launch. Once the system is implemented, Liu says that additional research may allow human farmers to automate more and more of the process, and eventually this robot swarm approach could spread to other areas of agriculture.

“Drones are going to be used to monitor crops. They could be used to monitor water quality, to monitor livestock,” says Liu. “Drones are coming, and they’re going to have a lot of applications.”

Looking Inside You

Imagine you’re lying on a treatment table, receiving radiation for a lung tumor. A focused beam of radiation is about to fire into your body — but it must be perfectly aimed. With every precise hit, it kills a part of the tumor that is ravaging your lungs. With every miss, a healthy part of your body dies.

Those hits and misses rely on a variety of factors. They depend on whether the scanner is positioned correctly; whether there’s unexpected visual interference on the images used to target the tumor; whether you’re inhaling or exhaling or slightly twitching or fidgeting nervously.

Oftentimes in sensitive medical situations, a life-or-death course of action relies on the precise alignment of two very complex sets of images. In the scenario described above, the job of figuring out how to align multiple images of the body is critical to aiming the radiation beam, and it’s currently done by hand by a radiation physicist. It takes time and it’s difficult, and if it’s done wrong or not quickly enough, the consequences to the patient are severe.

Enter Cal Poly mathematics professor Dana Paquin and her work in automated medical image registration. Over the past few years, Paquin has been developing a mathematical technique that would allow an automated system to perfectly line up the images that guide these types of treatments, regardless of visual interference.
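At its core, image registration is an optimization problem: find the transformation that best lines one image up with another. The toy Python sketch below recovers a simple translation between two synthetic images by minimizing their squared intensity difference. It is only meant to illustrate that core idea; it is not Paquin’s method, which must handle the rotation, deformation and visual artifacts that appear in real medical scans.

```python
# Illustrative sketch only: align two images by finding the translation that
# minimizes their squared intensity difference. This is not Paquin's method;
# real medical registration must also handle rotation, deformation and
# imaging artifacts.
import numpy as np
from scipy.ndimage import shift
from scipy.optimize import minimize

def registration_cost(offset, fixed, moving):
    """Sum of squared differences after shifting the moving image by `offset`."""
    moved = shift(moving, offset, order=1, mode="nearest")
    return np.sum((fixed - moved) ** 2)

# Synthetic test case: a bright square, and the same square shifted by (-3, 2).
fixed = np.zeros((64, 64))
fixed[20:40, 20:40] = 1.0
moving = shift(fixed, (-3, 2), order=1, mode="nearest")

# Search for the translation that best re-aligns the two images.
result = minimize(registration_cost, x0=[0.0, 0.0],
                  args=(fixed, moving), method="Powell")
print("Recovered offset:", result.x)  # expect roughly (3, -2)
```

In real systems, the translation would be replaced by transformations with many more degrees of freedom, and the simple squared-difference measure by similarity metrics better suited to medical images.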

“This is a very labor- and time-sensitive process, and of course is subject to human error and inaccuracy,” says Paquin. “The long-term goal of this research is that an intelligent computer system will compute and continuously update the optimal beam angles in real time as the patient lies on the treatment table.”

Paquin’s algorithms would give doctors and radiation physicists a powerful tool to increase the success of their treatments. “The computer would make the process more efficient and much more accurate, freeing up more time for the medical professionals to analyze other aspects of the treatment plan,” she says. “This would result in a significant improvement in the amount of radiation delivered to the tumor while minimizing the radiation received by healthy tissue.”

"The long-term goal is that an intelligent computer system will compute and update the optimal beam angles in real time as the patient lies on the treatment table."Click To Tweet

Are there any risks to taking this sensitive process out of the hands of medical professionals and allowing a computer system to make this life-or-death decision on its own? With the potential improvements in speed and accuracy, Paquin sees little downside — and she suspects the patient on the table wouldn’t either.

The Shape of Cities

Picture the downtown area in your city. How much space is devoted to operating and storing cars? If your town is anything close to average, odds are that about 70 percent of the usable area is occupied by roads and parking lots.

But Cal Poly city and regional planning professors Billy Riggs and Mike Boswell see a surprisingly near future where that may not be the case.

“Autonomous vehicles are only marginally starting to impact our city design,” says Riggs. “There is still a long way for policy to go in terms of responding to this technology that may be with us sooner than we all expect.”

Using finely tuned algorithms, autonomous vehicles, they say, could eventually operate much closer to each other than we currently drive. They could zip smoothly past and around each other in choreographed intersections, eliminating the need for traffic signals. They could even drop off their occupants and zip off to parking structures miles away, waiting to be summoned again later.

“I started building some class projects around some of the implications of these autonomous vehicles, and one student did a project on how far parking could be away from San Luis Obispo,” says Riggs. “To summon a car in 10 minutes, it could almost be in Morro Bay. It’s really incredible from a proximity standpoint, when you really start to rethink some of this.”

"Autonomous vehicles are only marginally starting to impact our city design. This technology may be with us sooner than we all expect."Click To Tweet

Although the full impacts of autonomous vehicles have yet to be felt, there is historical precedent for how new transportation technology reshapes the way we build our cities. “We’ve seen that throughout the history of cities, anytime there is a major revolution in transportation technology, it tends to have a profound impact,” says Boswell. “Suburbs practically didn’t exist prior to the automobile, and now suburbia is a dominant pattern of cities on our landscape.”

But as Riggs and Boswell dug further into the implications of this looming revolution, they began to notice something troubling: for most local governments, the impacts of developing technologies are not being factored into city plans.

“An easy response is to keep doing what we are doing and wait and see,” says Boswell. “But the problem is that a lot of the decisions we make in urban planning are 50-year decisions. You just have to go to any of our major cities and see that aspects haven’t changed in 50 to 100 years.”

Earlier this year, Riggs and Boswell published an op-ed titled “No Business as Usual in an Autonomous Vehicle Future” in which they urged city leaders and planners around the nation to be more cautious in the face of profound technology shifts that could explode within the next 10 to 15 years.

“In these decisions, inaction is not policy inaction,” says Riggs. “Thoughtful inaction can actually be a smart policy move.”

According to Riggs and Boswell, a lot is riding on the choice between business as usual and keeping a wary eye on emerging technology. “We can make it such that if autonomous vehicles change the future in a dramatic way, we can still react to that,” says Boswell. “Stranded assets is the term you are starting to hear a lot now, but it’s broader than just stranded assets. These are 50-year decisions you can’t undo.”

How Safe Is Your Job?

Perhaps as you’ve been reading the preceding essays, you’ve been wondering what happens to the people who were doing the work about to be taken over by autonomous systems — the reporter, the farm worker, the radiation tech, even the professional driver. Cal Poly economics professor Eric Fisher’s current research into changes in workforce participation has piqued his personal interest in that same question.

“The way we increase prosperity from generation to generation is that we invent things,” says Fisher. “It’s a process of creative destruction — every invention that makes things better is in net a huge boon to society, but it also means that some sectors of the economy are going to die off.”

These latest shifts driven by artificial intelligence are an accelerated version of the trend that’s been reshaping the workplace since the Industrial Revolution began in the 18th century. “One hundred years ago, roughly 90 percent of the American workforce was in agriculture, and now only about two percent of American workers feed all people,” he says. “We can produce more stuff with fewer workers, and it means America is richer, but the people who would have been on an assembly line are now working service jobs or writing magazines for Cal Poly.

“The real effects of automation have not yet been felt in the American labor force,” he adds. “The changes over the last 10 years are actually very minor compared with where we are going.”

Fisher cites science philosopher Michael Polanyi and his 1966 book, “The Tacit Dimension,” in describing which kinds of jobs will be the easiest to replace with computer systems: If you can write down the entire process step-by-step, a machine can do it. “I could write down parts of Eric Fisher’s job as an economics professor, and that’s what companies are trying to do with online courses,” he says. “But the parts of being a professor that are more creative — developing new research, writing articles and putting out new knowledge — that would be pretty hard to write down.”

"The future will belong to people who can combine creative skills with technical skills."Click To Tweet

Obviously, where jobs go, money follows. “What we are experiencing right now with skewed wealth distribution in this country is less about the one percent versus the 99 percent — it’s really the nature of technological progress when computer programs are hollowing out middle class jobs,” he says. “As far as we can tell for the indefinite future, for the next 100 years, that’s going to continue to happen.”

So which jobs are going to be most in demand as automation continues to impact the labor economy? “The future will belong to people who can combine creative skills with technical skills — my daughter is artistic and she’s a computer nerd, so she will be in good shape,” Fisher says. “Cal Poly has been at the forefront of those types of careers.”

But in the broad sweep, the changes in the kinds of jobs humans perform won’t just affect the economy — they will impact all aspects of society. “All through history, every society had to face what would happen if we didn’t have enough food. Are we going to starve to death? We don’t face that anymore,” he says. “The typical poor person in America has a better standard of living than almost all human beings in all of history.

“Now we face the problem of being too rich, so just a few people can make all the things we need. We have affluence, but middle class people are losing meaning in their lives,” he says. “You will see more and more aspects of society where people find meaning outside of their work, and you can already see that happening now. So the problem becomes a political or sociological one, about what makes a stable society, and that’s trickier to deal with.”

Before We Go Any Further…

For philosophy professor Keith Abney, the questions and challenges associated with artificial intelligence are a key component of his work as a member of Cal Poly’s Ethics + Emerging Sciences research group.

“Artificial intelligence is giving humans new capabilities, and for each of these capabilities there is a real question about what we ought or ought not to do,” says Abney, summarizing the research group’s central question. “Should we go whole hog into this new technology, or should we restrain ourselves? What kind of regulations are appropriate? And are we looking at the consequences, both intended and unintended?”

Abney and his research partner, philosophy professor Patrick Lin, write articles and consult for companies and government agencies trying to find an ethical way through the long-range implications of new and imminent technological advances. But unlike other tools humans have developed throughout history, AI doesn’t just expand what we ourselves can do.

“Historically, ethics has been tied up with accounts of human nature,” says Abney. “The possibly unique and unprecedented ethical issue is that AI may offer the promise of a new moral entity outside of our own species.”

The classic image of such an entity — the fully conscious, super-intelligent robot — is a staple of our visions of the future. According to Abney and his group, this type of system could be a game-changer for humanity. And despite the ubiquitous “killer robot” scenario that fills our sci-fi fantasies, there are some serious upsides to developing such a system.

“It could be possible to develop a superintelligent machine, a system whose capabilities exceed ours in every way, and that may actually help us solve existential risk problems,” he says. “Is there a way to solve global warming that will be acceptable to everyone? Is there a way to achieve nuclear nonproliferation so we don’t have to worry about blowing everyone up? A superintelligence of this sort could solve these truly existential problems, and that would be a reason to go full speed into developing such a general superintelligence.”

A more immediate question for the Ethics + Emerging Sciences group than the rise of general superintelligences is what to do about the AI systems that make life-or-death decisions right now. “Should we allow robots to make killing decisions? That’s one we’ve been working on for eight or nine years in a military context, and it continues to be a focus because of our more recent work on self-driving cars,” Abney says. “With any kind of robots that are going to have any kind of substantial autonomy, they are going to have to make ethically impactful decisions, and those are the decisions that we spend some time worrying about.”

Even when death isn’t on the line, the advent of artificial intelligence brings some very complicated choices, says Abney. Do we embrace autonomous technologies that generally make life better, but at the cost of people’s livelihoods? What do trust and intimacy mean if your beloved pet, your parents’ caretaker, or even your child’s best friend is a robot?

The implications of handing critical decisions over to a non-human intelligence, whether involving life-or-death ethics or just changing the way we live day-to-day, may be either exhilarating or terrifying. But the crux of Abney’s work with the Ethics + Emerging Sciences group isn’t ultimately a search for answers — it’s about urging policymakers and tech creators to simply stop and think.

To read more about the Ethics + Emerging Sciences Group’s work in artificial intelligence and autonomous technology, check out Patrick Lin’s recent Forbes article, “Is Tesla Responsible for the Deadly Crash On Auto-Pilot? Maybe.”