Experimental Category Entries

'Train a Brain' - Explaining Artificial Intelligence in a museum setting

Company American Museum of Natural History

Introduction Date November 20, 2017

Project Website

Why is this project worthy of an award?

The last room in the exhibition "Our Senses" focused on how technology helps humans augment their sensing capabilities. We are most familiar with the capture side, the sensors that collect data representing us, but that is only one side of the equation. What the data says about us, and how the data can see us, is the other. Using machine learning and artificial intelligence algorithms, we have started offloading perception, the part where we make sense of the data we receive, to computers. Much has been written about machine learning recently, and with toolkits released by major tech companies such as Google and Apple, more people are exposed to the concept than ever before. Explaining how a machine learning algorithm is trained, however, is difficult to do, especially in a museum setting.

Our exhibit lets visitors participate in the process of "training a brain" to recognize objects they assemble themselves. The experience presents visitors with a challenge: using laser-cut acrylic pieces placed on a sensing surface, they must attempt to create a shape that the computer will recognize. Personal combinations for flowers, faces, and houses, among others, challenge our "AI brain" to guess. A projection-mapped interface displays instructions and results, while a Kinect and capacitive sensors power the table.
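For readers curious what "training a brain" looks like in code, here is a minimal, hypothetical sketch of the kind of loop the exhibit describes: each visitor arrangement becomes one labeled example, and a small classifier learns to guess the category. The capture function, labels, and model choice are illustrative assumptions, not the museum's actual implementation.

```python
# Minimal sketch of crowd-sourced training (hypothetical; the exhibit's
# actual model and capture pipeline are not published).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

LABELS = ["flower", "face", "house"]  # categories visitors assemble

def grab_table_image():
    """Stand-in for a Kinect frame of the sensing surface; here we just
    fabricate a 32x32 grayscale image."""
    return np.random.rand(32, 32)

# Each visitor arrangement becomes one labeled training example.
X = np.stack([grab_table_image().ravel() for _ in range(300)])
y = np.random.randint(len(LABELS), size=300)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
model.fit(X_train, y_train)  # "training the brain" on visitors' shapes

guess = LABELS[model.predict(X_test[:1])[0]]
print(f"The AI brain guesses: {guess}")
```

With real Kinect frames in place of the fabricated images, the same fit-then-predict loop is all the "brain" needs to start guessing at new visitor creations.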

What else would you like to share about your design? Why is it unique and innovative?

This exhibit incorporates cutting-edge machine learning and computer vision algorithms and presents them in a way that is fun and easy to understand. The training data for our model is crowd-sourced from our visitors, making this one of the first museum experiences trained by its own audience. The interface and instructions were carefully designed to guide users through a complex and often abstract concept, making it playful and approachable for all audiences.

Who worked on the project?

Brett Peterson, Director/Developer; Hélène Alonso, Executive Producer; Hayeon Hwang, Developer; Joseph Levit, Writer; Lissa Anderson, Graphic Designer; Victoria Azurin, Assistant Producer

View the project video: https://youtu.be/lc-4b6SM0Yk


TransVision: Exploring the State of the Visual Field in the Age of Extreme Augmentation

Company Harvard University Graduate School of Design

Introduction Date May 9, 2018

Project Website https://www.jiabaoli.org/transvision/

Why is this project worthy of an award?

(There are three links to videos, one for each machine, that explain the essence of this project.) Human perception has long been influenced by technological breakthroughs. An intimate mediation of technology lies between our direct perceptions and the environment we perceive. Through three extreme ideal types of perceptual machines, this project defamiliarizes and questions the habitual ways in which we interpret, operate in, and understand a visual world mediated by digital media. The three machines create: Hyper-sensitive vision – a speculation on social media's amplification effect and our filtered communication landscape. Hyper-focused vision – an analogue version of searching behavior on the Internet. Hyper-commoditized vision – a monetized vision that meditates on the omnipresent advertisements targeting our entire visual field. The site of intervention is the visual field in a technologically augmented society. All three machines have both an internal state and an external signal. This duality allows them to be seen from outside and experienced from inside.

What else would you like to share about your design? Why is it unique and innovative?

This project contains three ideal types of perceptual machines that meditate on how digital media affect our perception of the world and our social interactions. Modern society has seen an increase in allergies and intolerances, and hypersensitivities are emerging not only medically but also mentally. Technology has a mutual-reinforcement effect: people tend to become less tolerant because they interact even less with people of different backgrounds and opinions, owing to the Internet's ability to connect selectively and to filter information. Digital media as mediator reinforce people's tendency to overreact through the viral spread of information and the amplification of opinions, making us hypersensitive to our social-political environment. The patterns of intolerance to signals that we see in our immune system, we also see in our mental responses to our environment, to mental stimulation, and to the distribution of the sensible. Under the current social-political media condition, we devise more and more structures to aggressively filter this environment, both in terms of digital media and in terms of physical interactions like what we eat. By creating an artificial allergy to redness, the first machine manifests the nonsensical hypersensitivity devised by digital media.

Vision works well when we have an overview of the total system, but the way we search in digital media is through little steps, from link to link: a tactile experience, as if feeling a landscape. We can never see it as a whole because it is not a continuous space. Instead, we look through a pinhole and build up everything without an overview. This searching function lets us reduce chance and encounters, so we can search for something in an extremely focused way and filter out everything else. The second machine is the extreme version of possessing only one sense for one thing. With a pneumatic system made of silicone that reacts to the sensing of light in front of both eyes, the wearer gains stereovision to distinguish directions for navigating in space. Depriving the wearer of all other sensory experience and leaving only one signal channel, this hyper-narrow, focused, and filtered vision is an analog version of searching behavior on the Internet.

The commodification of the visual field requires observers who can rapidly consume visual information. The downside is the extreme overload of information that has to be packed into the visual field to make the most of every second we spend looking at something. It prevents any kind of contemplative relationship to the world. A meditative relationship to what we are staring at is no longer possible because everything has an overlay of commercial information trying to extract value from us. The visual field becomes a commodity with real estate value. By creating tension between the meditative state and the consumptive state, the third machine contemplates how augmenting the visual field with new technologies affects our relationship to the world in this particular social-economic context.

More information at this link: https://www.dropbox.com/sh/pdu7333sxb8vh41/AADD5d_KZjDKrTEHEXa5bE6Pa?dl=0
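As a concrete illustration of the first machine's behavior, here is a toy sketch of how an "allergy to redness" might be sensed in software: measure the fraction of the camera's view that is red and trigger a reaction past a threshold. The camera index, HSV bounds, and threshold are illustrative assumptions; the actual machine's implementation is not described here.

```python
# Toy "allergy to redness": react when red dominates the visual field.
# Camera, color bounds, and threshold are assumptions for illustration.
import cv2
import numpy as np

cap = cv2.VideoCapture(0)  # any webcam standing in for the headset's camera
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue bands.
    mask = cv2.inRange(hsv, np.array([0, 120, 70]), np.array([10, 255, 255])) \
         | cv2.inRange(hsv, np.array([170, 120, 70]), np.array([180, 255, 255]))
    red_fraction = np.count_nonzero(mask) / mask.size
    if red_fraction > 0.15:  # arbitrary "sensitivity" threshold
        print(f"Allergic reaction! Red fills {red_fraction:.0%} of the view.")
    cv2.imshow("visual field", frame)
    if cv2.waitKey(30) == 27:  # press Esc to stop
        break
cap.release()
cv2.destroyAllWindows()
```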

Who worked on the project?

Creators: Jiabao Li, Honghao Deng; Video: Ostin Zarse; Advisor: Panagiotis Michalatos


Udacity Universe

Company Udacity

Introduction Date March 27, 2018

Project Website https://www.udacity.com/universe

Why is this project worthy of an award?

What if we could reimagine transportation with millions of autonomous systems in a massive simulation? What if we could reimagine education with millions of online students learning in a massive shared space? We believe Udacity Universe is the world's first technology capable of answering these exhilarating questions. Udacity Universe is a massive shared simulation learning tool that will enable millions of students to coordinate autonomous agents, such as self-driving cars, drones, and flying cars, in large-scale 3D simulations of entire real-world cities, from Dubai to San Francisco and beyond. Udacity Universe is built on data-driven models of city populations, transportation tasks, vehicle dynamics, and more.

Most importantly, Udacity Universe is a groundbreaking learning tool that will allow students to explore and collaborate as they develop real-world solutions to real-world problems. With Udacity Universe, students from diverse fields such as artificial intelligence, autonomous systems, and virtual reality will inhabit, and collaborate in, the same virtual space. These students will not only learn the state-of-the-art technologies powering cutting-edge autonomous systems today; they will also work together on the design decisions that will shape the smart cities of the future. Will air traffic management rely on centralized coordination systems or distributed vehicle-level autonomy? If self-driving cars reliably taxi passengers to and from flying-car landing pads, could such coordinated systems provide sufficient throughput to serve a significant fraction of urban transportation demand? These questions are currently the subject of pure public speculation or closed private innovation. Udacity Universe will empower students to analyze such questions collaboratively and empirically in an open setting.

Through partnerships with pioneering companies such as Unity Technologies and WRLD, we are leveraging cutting-edge technologies to expand Udacity Universe into a wide array of new environments. With contributions from leaders like the Dubai Future Foundation, we can ground innovation in real-world data and models drawn from the city of Dubai. And with the contributions of ambitious social-good organizations like Zipline, which brings vaccines and blood to remote areas of Rwanda via life-saving drone delivery, we can ensure the work of Udacity Universe serves the greater good. Udacity Universe represents collaborative learning in the service of solving global challenges.
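To make "data-driven models of city populations" concrete, here is a toy sketch of what a population synthesizer might sample. The zones, trip rules, and counts are invented for illustration; they are not Udacity's models or data.

```python
# Toy "population synthesizer": sample one day of trips between city zones.
# Zones, rules, and counts are illustrative assumptions only.
import random

random.seed(7)
ZONES = ["Downtown", "Marina", "Airport", "Suburbs"]

def sample_trip(hour: int) -> tuple:
    """Morning demand flows toward Downtown; evening demand flows home."""
    if 6 <= hour < 10:
        return ("Suburbs", "Downtown")
    if 16 <= hour < 20:
        return ("Downtown", "Suburbs")
    origin, dest = random.sample(ZONES, 2)
    return (origin, dest)

trips = []
for _ in range(1000):  # one simulated day for 1,000 residents
    hour = random.randrange(24)
    origin, dest = sample_trip(hour)
    trips.append((hour, origin, dest))

rush = sum(1 for h, o, d in trips if 6 <= h < 10 and d == "Downtown")
print(f"{rush} of {len(trips)} trips are morning commutes into Downtown")
```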

What else would you like to share about your design? Why is it unique and innovative?

We designed Udacity Universe at every layer to support exploration and collaboration on real-world challenges.

"Exploration" in Udacity Universe comes to life through engaging visualizations and open-source software. Visualizations allow students to explore questions like, "How do traffic flows vary by time of day?" and "How might I better utilize my ground and air fleets?" They also allow students to vividly share their work with everyone from fellow students to potential employers. Open-source software allows students to hack on every layer of the "smart city stack," from high-level population synthesis, transportation modeling, and fleet operation all the way down to low-level vehicle controls. These opportunities are largely absent in open communities today, as most innovation occurs behind the closed doors of mobility companies. But we specifically designed Udacity Universe so that the core game engine is extremely lightweight, and everything that lends itself to innovation and investigation is completely open source and accessible (largely written in sophisticated yet simple Python scripts).

"Collaboration" is critical for learning and problem-solving, so it is important that we optimize for collaboration as we build our online communities. This means addressing challenges ranging from the technical (scale) to the logistical (time zones) to the emotional (minimal in-person interaction). For these reasons, we first and foremost designed Udacity Universe so that collaboration is both advantageous and engaging. We group students into teams, organize competitions around transportation-system optimization, and create social incentives to encourage students to work together. And since collaboration happens not among tens of students but among tens of thousands, we also design for scale. We build on top of SpatialOS, a platform that allows massively multiplayer online games to scale arbitrarily by dynamically allocating cloud computing resources. It is our hope that cooperation and competition at massive scale will result in new and different communities, innovative problem-solving, and dynamic collaboration.

Last but certainly not least, Udacity Universe delivers "real-world challenges" through world-class simulation. We leverage models of real cities ranging from San Francisco to Dubai through our partnership with 3D-mapping leader WRLD. We also simulate high-fidelity autonomous systems using custom dynamic models of self-driving cars, drones, and flying cars. Given our city and vehicle models, we then put them to work on real-world transportation tasks, combining data-driven "population synthesizers" with real data from our partner the Dubai Future Foundation. Finally, we work with partners like Zipline, a social-good startup that delivers emergency aid to remote areas of Rwanda with life-saving drone technology, to ensure we focus student efforts on areas of high impact for social good.

Udacity Universe is a technology of massive scale and scope, built on advanced technologies and with diverse partners. But from top to bottom, the design aims to deliver on the promise of exploration and collaboration on real-world challenges.
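As a sense of the "low-level vehicle controls" layer students could hack on, here is a minimal sketch of a kinematic bicycle model stepped through time. This is a standard textbook stand-in under assumed parameters, not Udacity's actual dynamics code.

```python
# Illustrative vehicle-dynamics sketch: a kinematic bicycle model.
# Wheelbase, rates, and the maneuver are assumptions for illustration.
import math
from dataclasses import dataclass

@dataclass
class CarState:
    x: float = 0.0        # position east (m)
    y: float = 0.0        # position north (m)
    heading: float = 0.0  # yaw (rad)
    speed: float = 0.0    # m/s

WHEELBASE = 2.7  # metres, a typical sedan

def step(state: CarState, accel: float, steer: float, dt: float) -> CarState:
    """Advance the car one tick: integrate speed, yaw, then position."""
    speed = state.speed + accel * dt
    heading = state.heading + speed / WHEELBASE * math.tan(steer) * dt
    return CarState(
        x=state.x + speed * math.cos(heading) * dt,
        y=state.y + speed * math.sin(heading) * dt,
        heading=heading,
        speed=speed,
    )

# Accelerate straight for 2.5 s, then hold a gentle left turn, at 20 Hz.
car = CarState()
for tick in range(100):
    steer = 0.0 if tick < 50 else 0.1
    car = step(car, accel=1.0, steer=steer, dt=0.05)
print(f"final pose: ({car.x:.1f}, {car.y:.1f}) m, heading {car.heading:.2f} rad")
```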

Who worked on the project?

Jake Lussier (Product Lead), Chris Gundling (Simulation Engineer), Aaron Brown (Simulation Engineer), Dominique Luna (Software Engineer), Christian Plagemann (VP Learning Products). Partners: WRLD, Dubai Future Foundation, Zipline, Unity Technologies


Unseen Oceans Intro Waves

Company American Museum of Natural History

Introduction Date November 20, 2017

Project Website

Why is this project worthy of an award?

As you enter the darkened glass doors of the exhibition, you are immediately transported to a shoreline. The downward projection fills the floor with drone footage of slow-crashing waves. The effect is so realistic that visitors retreat to keep their feet dry. The opposite wall merges the title and accompanying prose with shimmering ocean surfaces. The sounds of ocean and narration are kept equally sparse, allowing the viewer to feel instantaneously immersed in the space. The result is a quietly disarming introduction to a larger exhibition about an ocean we think we know, but are only now discovering.

What else would you like to share about your design? Why is it unique and innovative?

The two-channel video projection is deceptively simple and free of distracting embellishment. It acts as a threshold that also invites visitors to linger as waves wash over them. The scale of the downward projection was selected to showcase the most satisfying build and crash of a wave. It is site-specific, yet designed to adapt to other venues: the piece works at different sizes and even retains the option to separate the two channels into distinct, singular experiences.

Who worked on the project?

Ariel Nevarez, Director; Hélène Alonso, Executive Producer; Karolina Ziulkoski, Art Director; Chris Cyphers, Cinematographer; Matt McCorkle, Sound Designer; Kara Green, Voice Actor; Joseph Levit, Writer; Victoria Azurin, Assistant Producer

View the project video:


Urban Furniture: Mapped Empathies (Prototype)

Company Estudio Guto Requena

Introduction Date May 1, 2018

Project Website https://gutorequena.com/empatias-mapeadas/

Why is this project worthy of an award?

While researching definitions of empathy, I read an anonymous quote that has always stuck with me: "Empathy is feeling with the heart of another." Mapped Empathies is an experimental research project that explores the possibilities of adding new poetic layers to urban furniture through interactive digital technologies. While regular public urban furniture seeks to solve practical problems (bus stops, trash cans, shade, benches, bike stations), we believe in the potential of a new era of street furniture with added technologies to improve our sense of collectivity, belonging, and memory, empowering communities to build a better society that stimulates empathy. We must shape cities and citizens through love and affection.

Mapped Empathies was recently prototyped in wood to test shapes, hardware, and software, and above all to gauge people's reactions. The prototype was exhibited for fifteen days, and the results were surprising and stunning. The audience was overwhelmed by emotion and truly connected with each other; some people even cried. At Estudio Guto Requena, we are obsessed with merging new digital technologies with emotion. We want to create immersive urban furniture that invites people to disconnect from their daily lives for a while, connect with each other and with the stranger sitting next to them, and thus reconnect with their own selves.

What else would you like to share about your design? Why is it unique and innovative?

Mapped Empathies was designed to be open source and copyleft: to be reproduced anywhere and then improved by the creative community and social entrepreneurs, stimulating digital fabrication, fab labs, and a more collaborative production process. It was created following Universal Design principles, being as intuitive as possible and allowing the same democratic experience for all visitors regardless of age, ability, or situation. Kids or elders, big or small, blind or deaf, or in a wheelchair: we will all be connected through the exact same experience.

Architecture has a fundamental role in stimulating sensation. Mapped Empathies was designed with the aid of parametric design (computer-generated form) and was inspired by temples and meditation spots. The resulting prototype is an organic wood structure digitally fabricated on a CNC machine. Its shape echoes that of a cathedral, comfortably housing a small group of people who are unknown to each other.

Citizens' heartbeats were recorded in real time at the touch of a finger via sensors installed on the benches. This vital data was sent to speakers and lights that transformed the architecture into a large sculpture of emotions, a place for human connection. Every individual heartbeat can be heard, and the generative music software gradually mixes and transforms the heartbeats into a symphony driven by the vibrant pulse of life. Lights follow the same sensitive rhythm, creating effects that assist in the immersion process.

Mapped Empathies is a performative urban furniture piece. The experience works when visitors connect in an interactive and dreamlike form that blurs the boundaries between the street, technology, and our feelings. At a time when so many seek to put up walls that separate us, we believe in the potential of design merged with information and communication technologies to build emotional bridges that remind us that we are always connected.
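As a rough sense of the heartbeat-to-architecture mapping, here is a minimal sketch: read each sitter's pulse, blend the beats into one waveform, and drive the lights from it. The sensor readings, mixing rule, and brightness mapping are illustrative assumptions, not the studio's generative software.

```python
# Illustrative sketch: blend sitters' heartbeats into one pulse for lights.
# Sensor values and the mixing rule are assumptions for illustration.
import math

def read_heartbeats():
    """Stand-in for the benches' fingertip pulse sensors; returns the
    current beats-per-minute for each occupied seat."""
    return [64, 72, 81]  # three strangers sharing the bench

def blended_pulse(bpms, t):
    """Mix every heartbeat into one waveform driving lights and speakers."""
    return sum(math.sin(2 * math.pi * (bpm / 60) * t) for bpm in bpms) / len(bpms)

for tick in range(10):
    t = tick * 0.1  # seconds
    level = blended_pulse(read_heartbeats(), t)
    brightness = int((level + 1) / 2 * 255)  # map [-1, 1] to LED 0..255
    print(f"t={t:.1f}s  light={brightness:3d}")
```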

Who worked on the project?

Creation: Guto Requena; Sound design, lighting, and interactive system development: Felipe Merker Castellani and Nikolas Gomes; Parametric Design: Guilherme Giantini; Production and Assembly: GTM Cenografia; Production collaborator: Vitor Reis; Photos: Lufe Gomes