Data Viz: Design Frontier
Editorial note: Nicholas Felton's talk is now embedded below. Gretchen's original remarks follow.
Chasing the "Open-Mouth Moment"
I typically don't appreciate it when conference speakers get up on stage and narrate a tour of their life's work. But with this group of presenters, the "portfolio show" felt fresher and more engaging. Designers are doing a lot of cool art and semi-professional experiments with data that are a far cry from the corporate apps and products most product designers work on. Aaron Koblin's work was by far the most evolved and moving. He's probably best known for the Arcade Fire music video done last year in HTML5, but he also shared contributions from thejohnnycashproject.com, where a music video is being crowdsourced to great effect.
Zach Lieberman of Parsons also showed personal artwork that was quirky and experimental. He talked about the "open-mouth moment," when people watching something start to stare in wonder. His focus on making tools that serve him better than those made by Adobe was instructive. Projects like the EyeWriter tool he made for Tempt1, a graffiti artist with ALS, show how focusing on cheap, experimental, low-fidelity output actually pushes things farther. His corporate work for Toyota resulted in the iQ font, a quirky custom typeface created by sensors tracking a driving car.
Jesse Louis-Rosenberg and Jessica Rosenkrantz are two designers whose work harnesses complex biological algorithms to "grow" one-of-a-kind products like jewelry and lamps. The Hyphae product line will likely spawn a whole genre of products that will seem trite at some point, but until then it's intriguing to hear about. And by the time others finally copy them, they will doubtless have moved on. I can't hope to explain how their algorithms work, given the intricacies of the math and science behind them, but I did at least grasp the process: they get inspired by something in nature like an ammonite shell, reverse engineer how it might have evolved, and then produce a pattern in virtual space that would be impossible and irrational in reality. And in the end, they print it in 3D, making it a reality after all.
A personal highlight was seeing Nick Felton (@feltron) share the story of making his annual report about his father's life. He addressed the hard work of digging through analog data not intended for the type of project he undertook. He discussed the design decisions behind the report's signature elements, and showed rejected versions. His highly personal story was a nice break from the other technical and portfolio discussions.
Processing... It's Not Just for Experimentation Anymore
While I had heard of Processing before Eyeo, I'd thought of it as the engine for countless student portfolio pieces that seemed interesting but a little "so what?" After seeing Jer Thorp show several projects he's been involved with, I now understand that Processing is a tool that the design community can't ignore.
The most powerful example was Cascade, a tool that the R&D department at the NYT has developed to track the flow of stories through social media streams. This homegrown tool looks at a single story that the NYT puts out and visualizes the "cascades," or threads, that influential people start on services like Twitter. Using this tool, the NYT has learned how different types of stories live on, some dying in hours, some living for weeks. Jer Thorp (@blprnt) is clearly a leader in putting Processing to work for real purposes, not just artistic expression. Others in the interactive field should watch and learn as his work evolves.
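The core idea behind a tool like Cascade can be sketched in miniature. This toy is my own illustration, not the NYT's actual implementation: assume each share records which earlier share (or the original story, marked `None`) prompted it, and measure how deep each thread runs.

```python
from collections import defaultdict

def cascade_depths(links):
    """Toy model of a story 'cascade': each share points at the share
    that influenced it (None means it links the story directly).
    Returns each share's depth in the resulting tree.
    Hypothetical data model, not the NYT Cascade internals."""
    children = defaultdict(list)
    roots = []
    for share, parent in links:
        if parent is None:
            roots.append(share)       # a seed: shared the story itself
        else:
            children[parent].append(share)

    # Walk each tree from its seed, recording depth as we go.
    depth = {}
    stack = [(r, 0) for r in roots]
    while stack:
        node, d = stack.pop()
        depth[node] = d
        for child in children[node]:
            stack.append((child, d + 1))
    return depth

shares = [("t1", None), ("t2", "t1"), ("t3", "t1"), ("t4", "t2")]
print(cascade_depths(shares))
```

A long-lived story would show deep, branching trees over days; a short-lived one, a shallow burst of seeds with no follow-on depth.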
The 9/11 Memorial has gotten a great deal of press, and it figured prominently at Eyeo, but it is still a fantastic story. Using Processing, Jer Thorp and Jake Barton were able to help the very conscientious architects honor the 1400 adjacency requests that covered 2900 names on the memorial. What impressed me most was not only the use of the algorithm for the lion's share of the work, but also the ability of designers to finish the job to human aesthetic standards.
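The adjacency problem has a simple core that can be sketched: given a set of names and requested pairings, group the names so that every requested pair lands in the same cluster, leaving the final aesthetic placement to the designers. The union-find sketch below is my own toy illustration under that assumption, not the actual algorithm Thorp and Barton used.

```python
from collections import defaultdict

def cluster_names(names, adjacency_requests):
    """Group names so each requested pair ends up in the same cluster.
    A toy union-find sketch; the real memorial layout also had to
    respect panel capacities and many aesthetic constraints."""
    parent = {n: n for n in names}

    def find(n):
        # Follow parent pointers to the cluster root, compressing the path.
        while parent[n] != n:
            parent[n] = parent[parent[n]]
            n = parent[n]
        return n

    for a, b in adjacency_requests:
        parent[find(a)] = find(b)     # merge the two clusters

    clusters = defaultdict(list)
    for n in names:
        clusters[find(n)].append(n)
    return list(clusters.values())

names = ["A", "B", "C", "D", "E"]
requests = [("A", "B"), ("B", "C")]
print(cluster_names(names, requests))
```

The point of the toy is the division of labor the talk described: an algorithm satisfies the hard pairing constraints at scale, and humans arrange the resulting clusters by eye.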
Data Visualization, Interpretation and the Future
While the achievements to date were impressive, there was something about the constant theme of "interpreting the data" that didn't sit quite right with me. Amanda Cox of the NYT interactive graphics department made a great point when she said, "Nothing important is ever headlined: 'Here is some data. Hope you find something interesting.'" She acknowledged that the act of making the graph or visualization can change a reporter's opinion of what is revealed within. The examples she shared were striking. Even as someone who finds baseball a yawn, I got something out of the piece about Mariano Rivera's pitching.
She also showed the "Big Board" election results visualization that she and her colleagues prefer to the more popular map view. She pointed out that while it is less graphically interesting, it is actually better at revealing surprises. I found her presentation highly relevant to more commercial design problems, and appreciated her presentation style.
Certainly, the NYT can't just dump data on the public and run. Delivering analysis and commentary is the core of their business. But in the closing panel on Data and Social Justice, a "data journalist" raised the question I'd been thinking all day: to what extent does interpretation impose a slanted meaning on the data? With a fixation on "interpretation" and making meaning, designers working in data viz run the risk of distortion. I had been hoping to hear about projects that were an evolution of the "Many Eyes" project by Fernanda Viégas and Martin Wattenberg, where both the upload of data and the application of visualizations are open to the public to explore. After all, in a world where we have more and more data and the ability to crowdsource, why not leave the door open for many interpretations? At the very least, why not publish conflicting or alternate visualizations? A journalist often reports on different views of a story. Why can't data do the same?
When we consider the kinds of data we can and should collect at the civic level in service of things like social justice, we will also need to consider evolving our data visualization approaches to be less monolithic and "one-time use." The public's ability to gain "data visualization literacy" and work with data over longer cycles will become crucial.
I also hope to see future Eyeo Festivals be less of a portfolio show and more about sharing the processes and backstories behind the great work people are doing.