Ken Goldberg

In Media Res
Ken Goldberg interviewed by Jeffrey Inaba and Jesse Seegers

As an engineer, artist and roboticist, as well as the director of the Berkeley Center for New Media at the University of California, Berkeley, Ken Goldberg knows a thing or two about content management, or at least how to cultivate a diverse set of interests. Examining the role of media in analyzing and understanding contemporary culture, his research bridges epistemology, aesthetics and technology. His art project Telegarden from 1996 was an early exploration at the intersection of art and internet social behavior. Goldberg talks with Volume about how ideas can be understood as media, how inefficiency can be innovative and why we should focus on defocusing.

KG: One of the things we’re trying to do at the Berkeley Center for New Media is to take a long view, to not limit new media to the digital. My colleague Howard Rheingold is working on a BCNM symposium on ‘attention literacy’ that addresses this idea. For example, the alphabet and language are both pre-digital, but definitely media. Media facilitate perception. What we commonly think of as ‘mass media’ – television, newspapers, etc. – essentially acts as a lens. A telescope is a medium: it was new media in 1610. What happens when a new medium enters a culture? It is technological, but what are the broader implications? The telescope had far-reaching consequences: for the church, for Descartes and for the emergence of modernity. So if you accept that, then many scientific instruments – the x-ray, atomic energy and microscopes – are media. And then, to push that a little further, one can think of an idea as a medium.

JS: Like how a metaphor brings new meanings by acting as a medium between two things?

KG: Exactly. A metaphor allows you to see in a new way. Take the Theory of Relativity. I might argue that it’s a medium in the following sense: there were data and observations of the planets that didn’t quite add up. Suddenly, there was a new theory that worked at extremes approaching the speed of light. All the data snapped into place. It was like focusing with a new lens. And this is true of psychoanalytic theory, of postmodern or post-structuralist theory. They operate as organizing frameworks. Part of this approach is designed to engage faculty who think new media is what the computer scientists do. BCNM now has over 110 faculty members from thirty different departments. It’s been a very interesting process of expanding the definition, because it raises the question of accessibility. There’s a tension between wanting to be welcoming and not wanting to reveal everything at once. If we expose too much, it loses its allure.

JI: In calling ideas a medium, do you mean that it could be any idea, irrespective of when and from which discipline it may have originated? For example, you’re suggesting that there may be a renewed awareness of a concept from the past that in turn generates a lens for understanding the present in a new light. The idea of ‘new media’ has been popularized by the rise of digital technology, but you see it as a term describing a more general interpretive tool that can arise from any field, past or present.

KG: Certainly my intentions are polemical. New media is related to, but not dependent on, technology. It is instead part of a broader agenda to structure and organize the world. The ‘lens’ metaphor interests me because one can pull back to focus. A medium operates when it works. There are bad media and theories that don’t work so well, theories that are in fact terrible. You can have bad lenses. We want to be critical in terms of how media can help us focus, make corrections, or sharpen our understanding of something. I’ve been thinking of the Berkeley Center for New Media as a medium itself, because structurally it facilitates perception. For example, students say, ‘I don’t just want to be an historian or an architect, but also want to talk with engineers and philosophers.’ So they’ll gravitate toward a group that encourages them to engage in that dialogue and learn from each other.

When you first mentioned ‘content management’, I got ready to be bored. But the issues of access and control in the architecture of buildings and websites are much more interesting. I teach a class in relational database theory, so I’ve been thinking lately about the relational model, which is based on the mathematics of relational algebra. In the 1960s databases were ad hoc structures in which, over time, the data would become inconsistent and ultimately wreak havoc. Computer scientist Ted Codd invented the relational model and its algebra, a set of elegant abstractions that made it possible to guarantee properties about the data using logical equations. Abstractions also apply to content management in architecture.

I’ve been closely watching the design of the new Berkeley Art Museum. Toyo Ito’s building is very different from the current building, which is a forbidding concrete Brutalist structure. In contrast, the new building emphasizes access, but is also concerned with protecting the art from theft, damage and light. Ito’s lightweight construction has very thin walls that promote an abstract sense of openness. Ito conveys this by unpeeling linear planes. He wants to construct it from millimeter-thick sheets of steel. In contrast to Serra’s heavy, weighty solids, Ito wants a lightweight form, almost like an onion skin. Seismically it’s very beneficial to design a light building, because it functions like a shell and is very resilient. To the public its curvature and openings must convey accessibility, but the building must also protect the art. That’s a content management issue.
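
As a rough illustration of the relational idea Goldberg sketches here – Codd’s algebra of selection, projection and join over sets of rows, composed like logical equations – the following minimal Python sketch shows those operators at work on two invented tables. The table names, fields and data are hypothetical examples for this article only, not anything from Goldberg’s course or the museum project.

# Minimal sketch of Codd-style relational operators over sets of rows.
# A row is a frozenset of (attribute, value) pairs so it can live in a set;
# the example tables below are invented for illustration.

def row(**fields):
    return frozenset(fields.items())

def select(relation, predicate):
    # Selection (sigma): keep rows satisfying a logical condition.
    return {r for r in relation if predicate(dict(r))}

def project(relation, *attrs):
    # Projection (pi): keep only the named attributes.
    return {frozenset((a, v) for a, v in r if a in attrs) for r in relation}

def join(left, right):
    # Natural join: combine rows that agree on all shared attributes.
    out = set()
    for l in left:
        for r in right:
            dl, dr = dict(l), dict(r)
            shared = dl.keys() & dr.keys()
            if all(dl[a] == dr[a] for a in shared):
                out.add(frozenset({**dl, **dr}.items()))
    return out

# Hypothetical example: artworks and the galleries where they hang.
artworks = {row(art_id=1, title="Untitled No. 1", gallery="A"),
            row(art_id=2, title="Untitled No. 2", gallery="B")}
galleries = {row(gallery="A", light_level="low"),
             row(gallery="B", light_level="high")}

# "Which works hang in low-light galleries?" as one algebraic expression:
answer = project(select(join(artworks, galleries),
                        lambda r: r["light_level"] == "low"),
                 "title")
print(answer)  # {frozenset({('title', 'Untitled No. 1')})}

Because every query is built from the same few operators, properties of the data (such as which attributes a result must contain) follow from the algebra itself rather than from ad hoc code – which is the guarantee Goldberg attributes to Codd’s abstractions.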

JI: So you’re saying that in a building such as a museum, the experience is designed to be a continual entry into new spaces, and that while these spaces display information or objects (in this case art), they may very well also protect and limit full access to that information and those objects. Like buildings, websites can draw your interest and then sustain it with additional thresholds and levels of participation.

KG: Yes, some websites reveal themselves over time. In the Telegarden we let anyone look at the garden. If you register, you’re allowed to participate as well. You can water the plants and in time even be given a seed to plant.

JS: Was this always how you wanted to do it?

KG: Well, we learned that visitors won’t stay long at an installation unless you whet their appetite, and they won’t stay long if you give them everything right away. People prefer to be teased.

JI: In a previous interview you mentioned that your telepresence projects exploit the interrelationship between abstraction and reality. Because telepresence deals with communication over long distances, the interaction necessarily involves abstraction. At the same time, it’s possible to achieve a high degree of one-to-one human exchange. Rather than trying to make the experience even more of a real-world interaction by further bridging the gap between the digital and the real world, you’re interested in testing just how artificial or fake we believe the online world to be.

KG: Right. There’s an extraordinary capacity for deception online, especially with things like politics or pornography. They lure you in with something that seems plausible on the surface. I’m interested in the epistemological question of where you can be confident about what you’re seeing and where it is appropriate to be skeptical. The broader question is how to develop scenarios that are deliberately ambiguous, if you will. For example, in 1997 we did a piece called Legal Tender. We took a pair of hundred-dollar bills, announced that one was counterfeit and that we needed help to determine which bill it was. The website explained that you were going to be a participant in this ‘laboratory’. If you registered for the site, you would be presented with a random section of a bill and a series of tests to perform. One of them was the ‘burn test’, which almost everybody chose.

JI: What is the ‘burn test’? Do real dollar bills burn differently than other types of paper?

KG: They do. The burn test brought a hot soldering iron into contact with the bill. We then displayed a reminder that there is a federal statute against burning or defacing currency and asked, ‘Do you accept responsibility for this? Yes or no.’ Users had already typed in their personal information and email addresses. We wanted to create a moment of hesitation. Although much of the internet is trivially accessible, and hence there’s little sense of engagement, I’m interested in heightening emotions.

JS: Through virtual means?

KG: Yes. A painting can do that, as can a good book or film. Certain museums do that as well – the Prado has few barriers between you and the paintings, as well as very few guards. You can walk right up to the work and put your hand on it. But in the crowded, jaded internet environment, it’s a challenge to create anything visceral. Where on the internet can you get that kind of visceral engagement? New media are creating an epidemic of distraction. I don’t know if you feel the same way, but I can no longer keep up with e-mail. Between Facebook and voicemails, you’re constantly checking. We’re all becoming obsessive-compulsive. It’s almost impossible to focus. But risk tends to have a very focusing effect. Say you’re designing a new website and you’re trying to create real engagement. An element of risk encourages the visitor to pay attention. Games are one way to do this; they tap into our primal instinct to compete.

JI: So risk has become a way to generate a sense of focus at a time when we’re inundated with distractions. It also seems that being distracted or unfocused is now the default work mode. Whether we like it or not, the attention we give to any task at hand is shared with the attention we give to others of equal priority. Rather than attempting to ignore those other things that demand our attention in order to focus on the task at hand, do you see potential in defocusing? Of not focusing in, but letting it all blur or fuzz together? Could this apply not just to our effort to be ‘efficient’ at work, but also serve as a method of inquiry and experimentation, a way to collect information or make more intelligent decisions? Does focusing assume a limited capacity to process information, whereas defocusing assumes a human capacity to take in and assemble more information than we have recognized until now? For example, when I was growing up, the Wonder Kid was the kid who could do homework while watching TV or getting high and yet do very well. Absorbing distractions didn’t necessarily mean one would underperform. It seems that a mode of working today might be that you process all of the demands and needed actions from your e-mails, SMSs and voicemails while also, as a matter of course, simultaneously thinking through and writing a fifteen-thousand-word report. That instead of needing ‘quiet time’ to write, for example, disruptions are things we can deal with and in fact feed off of.

KG: Progress is often non-linear; regressions can enable conditions to move forward. A good example of this is television. Our parents’ generation said it was rotting our minds, that it was an overload of spoon-fed, low-quality information. It did shape our minds, but television also gave us some of the conditions needed for the development of the internet. Many major innovations – from Google to Facebook to the internet – have come from inventors and innovators who are under 25. Defocusing leads to new forms of innovation and sets the conditions for something new.

JI: One last question: you mentioned that in the course you co-taught with Hubert Dreyfus, the Berkeley professor of philosophy, you questioned efficiency. What exactly does that mean? Is it the idea that productivity also stems from multiple actions or purposes or non-specialization?

KG: The course brought together students from philosophy and engineering. They worked in teams to assemble things in the most efficient ways possible. We were also looking at the origins of the obsession with efficiency that characterized the 20th century. Going back to Frederick Taylor and his time-and-motion studies: his first book was called The Principles of Scientific Management. He created the idea of quantifying work. Before that, there wasn’t anything like it. Henry Ford was a big fan. They recorded a worker’s every motion. This coincided with the rise of phenomenology. The microscopic analysis of individual motions developed independently of, but around the same time as, phenomenology – the internalized analysis of individual perception and experience. That was where we started, but we then focused on Heidegger’s insights into technology. Heidegger defined a series of ‘epochs’, starting with religion-based societies and moving into the industrialized epoch, which is efficiency-based. Heidegger saw a step beyond that, a trend towards flexibility. Similarly, we discovered in the course that efficiency is more sustainable when given some slack. Systems can be adapted to multiple purposes, rather than being optimized for a specific task. Consider the computer: it can be adapted to do many different things. More recently, stem cells and genomics. Another example, nanotechnology, is presented as a universal technology that can be applied to all sorts of things. That’s the rhetoric, at least. We’ve fallen in love with nanotubes: for any problem we face – if we want a high-temperature superconductor, for example – ‘try a nanotube’!

JS: My plants aren’t growing…

KG: Just ‘try a nanotube’!

LINKS
Ken Goldberg's Website
Berkeley Center for New Media
Telegarden