Expert Interview

Telescope Magazine: So you disagree with the view that technology and all its innate attributes are shaping the world?

Kevin Slavin: I don't think it's technology that does it; it's people. All the technologies we have could go in any direction, depending on where we humans place our emphasis. I think today's technologies are expressing cultural values centered on optimization, and their impact is enormous.

But maybe that's not a great thing. There has always been a move toward optimization, but other elements, such as craftsmanship and experience, could act as a countervailing force. Admittedly it's very hard to counteract this move, but doing so isn't really acting against the technology itself. It's acting against the underlying values that overemphasize optimization.

Telescope Magazine: Now that artificial intelligence (AI) is built into various technologies, it is conceivable that technologies themselves will be automated. What's your take on that?

Kevin Slavin: There are basically two ways that algorithms get built. One is that they're designed by humans, and that's kind of scary because it means some individual's worldview is going to be imposed on us at an unimaginable scale. But the alternative, in which an algorithm is written by no living human being, is even scarier. And yet that's becoming much more common.

There is a specific subset of algorithms called genetic algorithms, and they don't start with a premise or a worldview. The way genetic algorithms, and a lot of other contemporary machine learning, work is that they don't have an idea about how to get a result; they just know what the result must be and figure out everything that has to be done to get there. For example, if you want to make a car more aerodynamic, you don't have to start with all your principles of aerodynamics. You just say you want to reduce the drag, and the AI will find a way to get there.
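The goal-only approach Slavin describes can be illustrated with a toy genetic algorithm. This is a minimal sketch, not any system from the interview: the five-parameter "shape" and the `drag` function are hypothetical stand-ins, and the code encodes no aerodynamic principles at all; it only ranks candidates by the stated goal.

```python
import random

def drag(shape):
    # Hypothetical stand-in for a drag measurement: lower is better,
    # with the (unknown to the algorithm) optimum at all zeros.
    return sum(x * x for x in shape)

def mutate(shape, rate=0.3, sigma=0.1):
    # Randomly perturb some parameters; the algorithm never "knows" why
    # a change helps, it only keeps shapes that score better.
    return [x + random.gauss(0, sigma) if random.random() < rate else x
            for x in shape]

def evolve(pop_size=30, genes=5, generations=200, seed=0):
    random.seed(seed)
    pop = [[random.uniform(-1, 1) for _ in range(genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=drag)                   # rank by the goal alone
        survivors = pop[:pop_size // 2]      # selection: keep the best half
        # refill the population with mutated copies of survivors
        pop = survivors + [mutate(random.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=drag)

best = evolve()
print(drag(best))  # much lower drag than a random starting shape
```

The result mirrors Slavin's point about interpretability: the evolved shape scores well, but nothing in the process produces an explanation of *why* its particular parameters work.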

The problem is that what's presented is an answer without an explanation. That's okay if it's just making a car more aerodynamic, but what if the issue is more complex, like figuring out which of 10,000 prisoners should get parole? The system may go through the problem and say these 5,000 should get parole, but you can't ask the computer system why it reached that conclusion. It just doesn't know why. On the question of interpretability, the system falls short.

How algorithms shape our world
Source: https://www.ted.com/

Telescope Magazine: Interpretability? You mean whether humans can comprehend the process that goes on within the system?

Kevin Slavin: Right. And it's worrisome that we're delegating greater and greater parts of our everyday lives to uninterpretable systems. I think that's why cultural operations are so important. Artists look to interpret what seems uninterpretable. They also look to provide explanations for things in the world. In the past, the world held enormous natural mysteries, such as where thunder comes from and why we get sick, and that was where artists lived. Now we have these other mysteries, and I'm most interested in the artists who deal with those kinds of mysteries.

Kevin Slavin

Telescope Magazine: You have chosen various technologies for different projects, including GPS, QR codes, and certain kinds of social media. How do you choose which technology to focus on? Do you use any set formula?

Kevin Slavin: I go by my instinct. Nothing more.