
Robots or People: Is There a Place for People in the Future of Information?

Feb 19, 2014 By Richard Smith

If you read my last post about how data are turned into information and then how information is turned into knowledge, you might remember the two lists of activities that all start with the letter C.

To recap, data ("random facts") are turned into information (organized, useful) by activities that can be remembered by five Cs:

  1. contextualize
  2. categorize
  3. calculate
  4. correct
  5. compress

Information, in turn, is turned into knowledge ("information that is actionable") by four Cs:

  1. compare
  2. consequences
  3. connections
  4. conversation 

Data Are Turned into Useful Information by Computers.

If you compare the first five to the second four (and in so doing, enact comparison, one of the very activities that turn information into knowledge!), you might be struck by the extent to which the first list is made up of activities that we often assign to computers. Especially simple things like compressing, calculating, and correcting (e.g., through a checksum on credit card numbers).
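The credit card checksum mentioned above is a real, well-known example of a computer "correcting" data: the Luhn algorithm, which validates card numbers by doubling every second digit from the right and checking that the total is divisible by ten. A minimal sketch:

```python
def luhn_valid(number: str) -> bool:
    """Return True if `number` passes the Luhn checksum used on credit card numbers."""
    digits = [int(ch) for ch in number if ch.isdigit()]
    total = 0
    # Walk right to left; double every second digit, subtracting 9 if it exceeds 9.
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A mistyped digit (or most transpositions) breaks the checksum:
# luhn_valid("4111 1111 1111 1111") is True; change the last digit and it is False.
```

A single mistyped digit changes the sum and fails the check, which is exactly the kind of mechanical "correcting" that needs no human judgement at all.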

Even what might seem like an activity that would require human intervention (e.g., to provide the "context" for the location of a photo) is now done automatically by computers (as happens when your cell phone camera tags an image with GPS coordinates, or software locates the faces in a photo).

Turning Information into Actionable Knowledge Is Done More and More by Computers, Too.

The second list, the activities that move information into the realm of knowledge, seems to be much more of a human domain. But lately that has been changing. It isn't exclusively human anymore. Think of the example of anti-lock brakes in a car. Sensors in the wheels, the engine, the drivetrain, and perhaps elsewhere in the car measure the speed of the vehicle, forward and lateral acceleration, rotation (or not) of the tires, and perhaps the air temperature, the humidity, and so on. All of those are clearly data.

But there is a computer, and software, in the car that turns those data into information. It is collecting data, correcting errors (e.g., an anomalous temperature reading), contextualizing by separating the tire rotation from the speedometer readings, and calculating speed, temperature, and the probability of sliding. As a result, raw data from many sensors are turned into information about the vehicle.
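The "correcting" step above can be sketched in a few lines. This is an illustrative toy, not how any real automotive controller works; the function name and the 5-degree tolerance are assumptions made up for the example:

```python
import statistics

def corrected_reading(recent: list[float], new: float, tolerance: float = 5.0) -> float:
    """Data -> information: discard an anomalous sensor reading.

    If the new reading is more than `tolerance` away from the median of
    recent readings, treat it as a glitch and fall back to the median.
    """
    median = statistics.median(recent)
    return median if abs(new - median) > tolerance else new
```

A sudden 85-degree spike amid readings near 20 would be replaced by the median, while a plausible 21-degree reading passes through untouched.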

The car doesn’t just use that information to beep or put up a warning light. It is likely too late for that, anyway. It turns that information into action: with a quick conversation between the sensors, a consideration of consequences, a comparison between the present situation and examples in a database, drawing connections between temperature and lack of rotation in the wheels despite forward momentum, it draws a conclusion: the car is in a skid. And, when you step on the brakes, it doesn’t just apply the brakes evenly. It pumps them, very rapidly, so that you can maintain control while slowing down. It has turned information into action, the very definition of knowledge. And this isn’t just any knowledge. This is knowledge that saves lives.
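The information-to-action step can also be sketched. Again, this is a deliberately simplified illustration of the idea, not a real ABS controller: the slip formula is the standard wheel-slip ratio, but the sensor layout and the 20% threshold are assumptions invented for the example:

```python
def wheel_slip(vehicle_speed: float, wheel_speed: float) -> float:
    """Information: fraction by which a wheel lags the vehicle (0 = rolling freely)."""
    if vehicle_speed <= 0:
        return 0.0
    return max(0.0, (vehicle_speed - wheel_speed) / vehicle_speed)

def brake_command(vehicle_speed: float, wheel_speeds: list[float],
                  slip_threshold: float = 0.2) -> str:
    """Action: release (pump) the brakes if any wheel is slipping past the threshold."""
    if any(wheel_slip(vehicle_speed, w) > slip_threshold for w in wheel_speeds):
        return "release"  # a wheel is locking up: ease off so it keeps rotating
    return "apply"        # wheels are tracking vehicle speed: brake normally
```

Cycled many times per second, alternating "apply" and "release" is the rapid pumping described above: information (slip ratios) turned directly into life-saving action.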

So even though the list of activities (compare, consequences, connections, conversation) seems like something that only humans do, more and more this is something we trust to computers. Of course, humans are deeply involved in the programming of those computers, so you could argue that it is still a "human" that is creating that knowledge.

Google’s Computers Are Smarter Than Humans.

To give an even more vivid example, consider a story from Google researchers: in their effort to develop computer programs that can recognize elements in pictures (and serve them up when you search for "cat videos," for example), they have begun to lose track of how the computers are accomplishing this. According to the story in The Register:

This means that for some things, Google researchers can no longer explain exactly how the system has learned to spot certain objects, because the programming appears to think independently from its creators, and its complex cognitive processes are inscrutable. This "thinking" is within an extremely narrow remit, but it is demonstrably effective and independently verifiable (Clark 2013).

Kind of makes you wonder, doesn’t it? If you’d like to read a (fictional) example, check out Ken Perlin’s blog, from early November 2013, starting at Anna, part 1.

Note: Ken Perlin is a founding advisor of the Master of Digital Media program and was visiting professor at the school in 2012 and 2013. Ken will be at The Centre for Digital Media for an extended residency during the summer of 2014. We can ask him if this is really fiction.


Clark, Jack. "If this doesn't terrify you... Google's computers OUTWIT their humans." The Register. November 15, 2013. Available at http://www.theregister.co.uk/2013/11/15/google_thinking_machines/

Perlin, Ken. "Anna." Ken's Blog. November 1, 2013. Available at http://blog.kenperlin.com/?p=13401