If you’ve Googled anything lately (hah), you’re probably familiar with Google’s new service, Knowledge Graph. You’ve probably seen the little info boxes that pop up on the right side of the page when you’re searching for a person, place, or issue. It’s a little laughable that Google produces elaborate videos to market such changes, but I guess when you own YouTube you maybe feel like everything you do has to be announced in as many forms of media as possible. It’s OK, Google. I understand the feeling. These are the stirring questions of our time: Would Facebook care more about this delicious sandwich, or Twitter?
Mawkish promotional videos about cheerleaders with disabilities aside, there’s no doubt that the way we search and research has changed, is changing, will continue to change. Digital epistemology is different. Tim Adams of The Observer rhapsodized recently on the way Google and Knowledge Graph have changed the very idea of a “search,” arguing that having a question is a qualitatively different notion today: “Search’s sense of questing purpose has already gone the way of other pre-Google concepts, such as ‘getting lost’.”
So, how does this all work? It’s called the “semantic web”: essentially, an increasingly detailed and complex system of metadata tags (which search engines use to categorize data, e.g. is this a title? is this a person? is this a date? is this the name of a movie?) helps the web chart relationships among pieces of information. Notice how quickly “the web” becomes the actor here—it’s because the web is, for the most part, the actor. Coders set up the tagging system, but the links between the tags, and the interpretation of those links, happen, well, organically. Metadata becomes a language that computers in Google data centers can read. Interpreting language is, of course, nothing new to computers. But suddenly computers are interpreting enormous numbers of complex articles we’ve written for human consumption (this is the key for me: they’ve been taught to read natural language) and are charting meaningful human relationships among those articles: authors, dates, children, spouses, theories.
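To make the idea concrete, here’s a minimal sketch of what “metadata as a language computers can read” looks like in practice. The HTML snippet uses schema.org-style microdata markup; the little parser is my own toy illustration in Python’s standard library, not anything like Google’s actual crawler, and the facts in the snippet are just example values.

```python
# Toy illustration: semantic markup turns prose into machine-readable facts.
# The snippet follows schema.org microdata conventions; the parser is a
# hypothetical stand-in for a search engine's crawler.
from html.parser import HTMLParser

SNIPPET = """
<div itemscope itemtype="https://schema.org/Person">
  <span itemprop="name">Barack Obama</span>
  <span itemprop="height">1.85 m</span>
  <span itemprop="jobTitle">44th President of the United States</span>
</div>
"""

class MicrodataParser(HTMLParser):
    """Collects itemprop values into a flat {property: text} dict."""
    def __init__(self):
        super().__init__()
        self.facts = {}
        self._prop = None  # the itemprop whose text we're about to read

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            self._prop = attrs["itemprop"]

    def handle_data(self, data):
        if self._prop and data.strip():
            self.facts[self._prop] = data.strip()
            self._prop = None

parser = MicrodataParser()
parser.feed(SNIPPET)
print(parser.facts["name"])    # Barack Obama
print(parser.facts["height"])  # 1.85 m
```

The point is that the same page a human reads as prose carries an invisible second layer of typed statements (this string is a *name*, this one a *height*, and both belong to a *Person*), and it’s that layer the semantic web runs on.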
OK, I lied earlier. Turns out I’ve got some stirring questions that don’t have anything to do with sandwiches: What is language? What is interpretation? What does it mean for a computer to “make meaning”? And then, maybe, too, what is consciousness? (Whoa, getting a little heady there, better stop with the questions.)
Georgetown Linguistics professor Ron Scollon asserts that “social action and discourse are inextricably linked,” which is hard to argue with—with every utterance we reflect and reproduce the social structures that make and construct us—but he adds, “these links are sometimes not at all direct or obvious, and therefore in need of more careful theorization” (1). Scollon proposes mediated discourse theory as a means of theorizing those links. But with the semantic web, I believe the web has already begun to develop its own theory of links between social action and discourse online—a theory grounded partly in Bourdieu but more in Lawrence Lessig’s free culture work. This is theory in the definitional sense, “a concise systematic view of a subject,” but theory without a clear author.
No longer do pages statically exist in solitary harmony, like my OED and my NAEL side-by-side on the shelf. Rather, by reading us reading the web, the semantic web begins to map links between the things we search for and the more complex discursive web behind the ways we mean. The semantic web is what Edwin Hutchins calls a “socio-technical system” (266), a place where human memory and machine memory interweave and merge to create a system that would be impossible using the affordances of just one or the other. Each user’s search history offers a very detailed look at that user’s online habitus (at least as Scollon defines the term), concrete real-time social actions occurring over a long period, combined into aggregated contextualized experience. And because the semantic web is created by the way we search, link, and click, it is the ultimate in emic systemic theorizing: the categories quite literally create themselves. They are ever-updating, changing as new information and pages are added, responding to click-length and other indications of user satisfaction. For example, to deliver you the page below, Google must know, when you type the phrase “how tall is barak oboma”:
Literacy problems? Never fear, Google is here—for all your “how tall is barak oboma” needs!
- you probably meant to say “Barack Obama” (woohoo it can spell—and it continues giving the same result even for such bastardizations as “ho tal is brak obom,” “jow tal is brk obam,” and “how tell are brk obam”),
- Barack Obama is a person (compare, for example, searching for the phrase “barak oboma” on Amazon.com, which returns the David Maraniss biography titled Barack Obama—”how tall is barak oboma” returns, hilariously, only this life-sized cutout),
- you are looking for the height of Barack Obama (“how tall is” means “height”),
- it knows the height of Barack Obama (and that height is a quality of a person, and since he’s a person, heck, let’s display some other key facts we know about him),
- you probably want the information displayed in feet and inches because you’re searching in the US (but here it is in meters, too, just in case), and
- as a follow-up query, you’re more likely to be interested in knowing how tall Malia Obama is than how tall other presidents have been.
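Just the first of those steps, tolerating misspellings, is worth pausing on. Here’s a toy sketch of how a misspelled query might get matched to a known entity, using edit-distance-style fuzzy matching from Python’s standard library. To be clear: the entity table, the crude question-stripping, and the similarity cutoff are all invented for illustration; Google’s actual spelling and entity-resolution models are vastly more sophisticated.

```python
# Toy sketch: resolving a misspelled query to a known entity.
# difflib's similarity matching stands in (very loosely) for a real
# spelling-correction model; KNOWN_ENTITIES is a hypothetical toy
# knowledge base, not Google's.
import difflib

KNOWN_ENTITIES = {
    "barack obama": {"type": "Person", "height": "1.85 m (6 ft 1 in)"},
    "malia obama": {"type": "Person"},
}

def resolve(query):
    """Strip the question scaffold, then fuzzy-match the remainder."""
    name = query.lower().replace("how tall is", "").strip()
    match = difflib.get_close_matches(name, KNOWN_ENTITIES, n=1, cutoff=0.6)
    return (match[0], KNOWN_ENTITIES[match[0]]) if match else (None, None)

entity, facts = resolve("how tall is barak oboma")
print(entity)           # barack obama
print(facts["height"])  # 1.85 m (6 ft 1 in)
```

Even this crude version survives “barak oboma”; the striking thing about the real system is that it survives “ho tal is brak obom” too, and then goes on to infer intent, entity type, units, and likely follow-ups.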
That’s a lot of meaning-making for a free service. Google has become our collective consciousness: Laurence Kirmayer, quoted in David Howes, says, “‘mind’ is located not in the brain but in the relationship of brain and body to the world” (227). The semantic web is beginning to map those relationships—between real and virtual space, between thought and person, between events and news about those events. Of course, “free service” obscures some of the issues. Google makes money by knowing stuff about you and your browsing habits, which it markets to advertisers as demographic information. But Google also knows that by giving you the best possible search service and making you happy as quickly as possible, you’re unlikely to go anywhere else for search. And it works—no one uses Bing.
All this meaning-making understandably makes some people anxious: we don’t know, really, what Google’s ideological position is, beyond its list of bold, manifesto-esque statements about putting users first and not being evil. And as Google becomes social actor and social mediator and social theory, that ideological position becomes even more important.
The Getty (famous LA art museum) is a mass of red tourist photos on the left. UCLA is the mass of blue local photos on the right.
But it’s not just Google bots making sense of this vast mass of metadata. The semantic web goes beyond search. Real live humans can do all kinds of interesting things cross-referencing data, too. Photographer Eric Fischer, for example, has mapped geotagged data on photos uploaded publicly to Flickr, to create awesome pictures of where tourists take photos and where locals take photos. He first found photos taken in Los Angeles (possible thanks to geographic metadata), then looked through submitters’ histories: if they’d submitted mostly photos from Los Angeles, going back more than a month, they were probably locals. If they primarily submitted photos in some other locale, they were probably tourists. Then he mapped the dots in different colors, creating really neat webbed images of how tourists and locals experience the city differently. As a native Angeleno, I’ve gotten into an embarrassing number of arguments with people who spent a week in LA and can’t understand how I could love a city that’s so plasticky and gross. Now I’ve got visual evidence that they see a very different city from the one I know.
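Fischer’s heuristic, as I’ve described it above, is simple enough to sketch. The photo records, field names, and the exact one-month threshold below are my own invented stand-ins; Fischer worked from Flickr’s real geotag and timestamp metadata.

```python
# Rough sketch of a locals-vs-tourists heuristic like Fischer's.
# Records and thresholds are hypothetical illustrations.
from datetime import date

def classify(photos, city="Los Angeles"):
    """Label a photographer 'local' if most of their photos come from
    the city and their history there spans more than a month."""
    in_city = [p for p in photos if p["city"] == city]
    if len(in_city) <= len(photos) / 2:
        return "tourist"
    span = max(p["date"] for p in in_city) - min(p["date"] for p in in_city)
    return "local" if span.days > 30 else "tourist"

history = [
    {"city": "Los Angeles", "date": date(2012, 3, 1)},
    {"city": "Los Angeles", "date": date(2012, 5, 20)},
    {"city": "Paris", "date": date(2012, 4, 2)},
]
print(classify(history))  # local
```

From there it’s just cartography: plot each photo as a dot colored by its photographer’s label, and the two cities, the tourists’ LA and the locals’ LA, separate themselves on the map.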
So, yes. I, for one, welcome our new Google overlords.