Experimental Category Entries

Raw Factory - an alternative human-nature relationship

Company Gray Area Art and Technology

Introduction Date May 3, 2017

Project Website http://www.shihanzhang.com/raw-project/

Why is this project worthy of an award?

Raw Factory envisions an alternative human-nature relationship by proposing an innovative manufacturing system called Biological Nutrition Control (BNC). It is a factory, but also an incubator. Raw Factory feeds nutritious water to worms and guides them to decay tunnels in the wood; the worms then become food that guides woodpeckers to excavate a larger cavity in the wood as a container. Raw Factory opens up new possibilities in which humans provide habitats and food for animals, and in return, animals manufacture our daily household consumer products, such as ashtrays, tableware, and furniture. The example shown in the project is an ashtray, which works as an ironic reminder of environmental problems for its owner whenever they see and use it.

Raw Factory creates delightful human-nature co-creation experiences. It invites customers to curate patterns as templates for the animals, which then develop organic and artistic forms within the pieces. The final piece is an unexpected gift from nature and the animals, and it carries sustainable values to its owner as a personal statement.

Raw Factory is also an example of how natural processes can offset industrial mass production by harnessing what nature already does well. Energy consumption is nearly non-existent: the system uses bio-energy instead of electricity, thereby reducing carbon emissions. It requires less labor, and the final products are the remnants of feeding worms and woodpeckers. Raw Factory is conceptual but grounded in solid biological research: the species are chosen based on the animals' diets and the local food chain, and the factories are designed as animal habitats based on their living and preying habits. Raw Factory helps the public envision an alternative future and builds momentum to make this preferable future happen.

What else would you like to share about your design? Why is it unique and innovative?

Raw Factory envisions an alternative model of human-nature interaction by proposing an innovative manufacturing system called BNC, which stands for Biological Nutrition Control. Raw Factory invites its customers to act as curators: they choose a pattern that sets up form constraints but leaves space for nature to generate forms on its own; worms and woodpeckers then come in and fully unfold their creativity to finish each piece.

Design challenges: This project started with an investigation of the language of raw through form, material, and texture, with gesture added as another category. I was inspired by the dynamic raw gesture of worms decaying wood and started using this gesture to shape the form of the ashtray. After my research, I realized that the holes the worms create are too narrow to serve as a cavity for cigarette butts, which is one of the main functions of an ashtray. That became my second challenge. I then asked: what if I could invite another raw gesture to create the container for me? Finally, I arrived at the BNC manufacturing system, which involves two raw biological gestures to create the raw ashtray. I overcame the challenge of defining the raw form of the ashtray by designing a system that casts customers as curators, worms as hole borers, and woodpeckers as container excavators, co-creating each unique raw form.

Scientific research background: This is a conceptual design project, but it is grounded in scientific biological research. I used a top-down approach to define the specific species and materials by following the food chain. I chose the Nuttall's woodpecker, which lives locally in northern California and has a smaller beak that won't damage the holes too much. Nuttall's woodpeckers mostly feed on oak, a hardwood that won't catch fire when made into an ashtray. I then defined the worm species: the larvae of the Red Oak Borer and the Goldspotted Oak Borer, two common beetle larvae that feed on oak. The factories are designed based on this biological research. The worm factory is designed to keep the worms from escaping while they decay the wood, and to provide the right temperature and moisture for the worms to grow healthily. The woodpecker factory is designed to match the woodpecker's natural preying height range and to give it a firm perch to grip and sit on.

Environmental benefits: The system is sustainable and environmentally friendly. BNC uses nutritive water to guide the worms to decay tunnels in the wood, and then uses the worms as nutrition to guide the woodpeckers to excavate the butt containers. It uses bio-energy instead of electricity to manufacture consumer products, reducing energy usage and carbon emissions. The ashtray itself works as an ironic reminder of environmental problems for its owner whenever they see and use it. This project forecasts an alternative future of manufacturing and a vision of how we can live with nature in harmony while also creating market value.

Who worked on the project?

Shihan Zhang - Designer



Scratch Nodes

Company miLAB

Introduction Date February 1, 2018

Project Website https://www.scratchnodes.com/

Why is this project worthy of an award?

Scratch Nodes are programmable devices for children's group outdoor play. They were designed and built at the IDC Media Innovation Lab as a research platform for better understanding the effect of technology, and specifically coding, on outdoor play. Each device is equipped with a variety of sensors and outputs that children can program for use in any game they invent. The Nodes allow children to enhance outdoor play by adding a "digital layer" of communication, measurement, lights, color, vibration, and sound to their games. By using coding and digital technology, we aim to preserve and even enhance the benefits of both outdoor play and technology in children's development. The Scratch Nodes were developed in an iterative design process that included play tests with children at every step of the way.

What else would you like to share about your design? Why is it unique and innovative?

Challenge: Compared to children in the 1970s, children today spend 50% less time in unstructured outdoor activities. As technology takes an increasingly prominent place in our lives, time previously spent outdoors in active, social, and creative play is overtaken by screen time. That said, technology for children also holds many potential advantages. Coding, for example, can develop computational thinking skills related to creativity and problem solving. We set out to address this decline in outdoor play by integrating technology and coding with a traditional outdoor play object.

Our approach: Since technology is a great motivator for children, we integrated sensors and digital feedback with a traditional outdoor play object: a stick. By developing a graphic coding platform, we allow children to create their own games with the devices. We chose an existing, popular coding platform and enhanced it with the ability to program the devices, focusing on social play. Our research shows that simply adding technology to outdoor play can easily damage its advantages, so we use the Scratch Nodes to study the nuances of how technology can enhance outdoor play rather than harm it.

The Scratch Nodes: Each device is equipped with an acceleration sensor, a gyro, and a push-button for input, and 27 RGB LEDs, a vibration motor, and a speaker for output. Children can program the devices to function in different ways in any game they invent, using a custom tablet version of Scratch 3.0 developed with the help of the MIT Media Lab Scratch team. Together, the devices and the programming platform offer unique features that enable creative and social play. We hope to encourage today's children to experience and explore outdoor play, while empowering them with the creative tools programming offers. A minimal sketch of the kind of rule a child might program onto a Node follows.
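To make the "digital layer" concrete, here is a minimal Python sketch of one such child-authored rule: "when the stick is shaken, flash red and buzz." The driver classes, threshold, and loop are hypothetical stand-ins written for illustration only; the actual devices are programmed through the custom tablet version of Scratch 3.0 described above, not through Python.

```python
import math
import random
import time

# Hypothetical stand-ins for the Node's sensor/output drivers; the real
# Scratch Nodes firmware and APIs are not documented in this entry.
class Accelerometer:
    def read(self):
        # Simulated (x, y, z) reading in g for this self-contained sketch.
        return (random.uniform(-2, 2), random.uniform(-2, 2), random.uniform(-2, 2))

class LedRing:
    def fill(self, r, g, b):
        print(f"LEDs -> ({r}, {g}, {b})")

class Vibration:
    def pulse(self, ms):
        print(f"vibrate {ms} ms")

SHAKE_THRESHOLD_G = 1.5  # assumed tuning value, not a published spec

def shake_magnitude(sample):
    x, y, z = sample
    return math.sqrt(x * x + y * y + z * z)

def run_shake_rule(accel, leds, vibro, rounds=5):
    """Toy version of a child-programmed rule: 'when shaken, flash red and buzz'."""
    for _ in range(rounds):
        if shake_magnitude(accel.read()) > SHAKE_THRESHOLD_G:
            leds.fill(255, 0, 0)   # flash red on a shake
            vibro.pulse(200)       # short haptic buzz
        else:
            leds.fill(0, 0, 64)    # idle color between shakes
        time.sleep(0.1)

if __name__ == "__main__":
    run_shake_rule(Accelerometer(), LedRing(), Vibration())
```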

Who worked on the project?

Tom Hitron, Iddo Wald, Andrey Grishko, Hadas Erel, Netta Ofer, Idan David, Oren Zuckerman


SenseNet

Company United Technologies Research Center

Introduction Date July 18, 2017

Project Website

Why is this project worthy of an award?

Nearly all commercial buildings lack any kind of system for detecting, notifying about, and responding to a release of hazardous biological or chemical agents. This is mostly because the cost and operational requirements of such a system exceed the perceived benefits, despite the serious consequences of a release. United Technologies Research Center (UTRC) is working on a solution to these limitations. The system, known as SenseNet, leverages advances in smoke detection for commercial buildings, advances in biological and chemical agent detection, and state-of-the-art decision support enabled by advances in machine learning.

In 2017, the team completed the definition of the proof-of-concept system architecture for SenseNet and held a Preliminary Design Review covering sensor communication requirements and scalability to nationwide facilities. The team conducted tests using CO2 as a simulant that validated the model predictions and confirmed the capability of a low-regret response to affect dispersion and provide additional response time inside a commercial office building. In addition, models for release and response scenarios covered a school, a hotel, a convention center, and low- and high-rise buildings. The team also demonstrated the dual-role smoke/biological agent sensor (using a light detection and ranging, or LIDAR, technique) and determined bio-surrogate sensitivity and sensor response to potential interferents via government-witnessed testing at a third-party laboratory.

The incident and emergency management market is expected to grow from $93.44 billion in 2018 to $122.94 billion by 2023. SenseNet is poised to drive this further, as it makes advanced detection and response systems accessible to a much broader array of building types and classes, significantly enhancing occupant and first-responder safety across a range of facilities. Furthermore, UTRC's innovation directly supports the Department of Homeland Security's key objective to build cost-effective sensors and systems that deliver dual-use protection against intentional threats that happen less frequently, as well as day-to-day monitoring of fire, building health, and similar threats. This will help DHS achieve its goal of making threat detection ubiquitous in order to save lives and protect critical building infrastructure. This project is the result of funding provided by the Science and Technology Directorate of the United States Department of Homeland Security under contract number D16PC00118.

What else would you like to share about your design? Why is it unique and innovative?

In the life safety business, we are seeing a shift from "Detection and Alarm" (D&A) toward "Detection and Response," where we go beyond informing to incite immediate action, e.g., altering air flow settings to limit the propagation of a harmful agent. The change is an increased focus on doing what we can to limit the threat to building occupants with what we have (HVAC air flow, elevator positions, video forensics, access control) in the first five minutes before first responders arrive on site. These first few minutes are critical: the overall risk level is at its lowest (but rising), and we need to take advantage of this time. In 2017, UTRC completed release-and-response models and validation experiments that show dramatically improved outcomes (e.g., a >50% reduction in spaces reaching lethal levels with a simple HVAC response) using such an approach. UTRC is helping to drive this necessary culture shift in the role of system providers: rather than just informing first responders of the situation upon arrival, UTRC aims to deliver a more effective solution that mitigates risks in these crucial first moments.

UTRC's work on SenseNet is changing the way that life safety teams deal with disasters. For example, public health officials are often the only ones able to declare that a bio-agent has been detected. Fire departments often define what a life safety system can and cannot do. Heating, ventilation, and air conditioning (HVAC) engineers install, commission, and maintain air handling systems. Industry relies on codes and standards to limit legal product liabilities. Moving to the "detect and respond" approach that UTRC is working toward will blur these lines and require a new vision shared across these communities. In 2017, UTRC coordinated a session at the Annual ASHRAE Meeting in Long Beach, California, entitled "What is the Prospect for Low-Cost Chemical and Biological Threat Detection and Response in Commercial Buildings?", with presentations from UTRC, PNNL, the Department of Defense (DoD), and the Department of Homeland Security (DHS), in order to educate and gather feedback from the broader building community and facilitate the SenseNet program's commercialization.

Currently, the sensors and system architecture are being implemented and will be tested in a building at the Edgewood Chemical and Biological Center to validate the application of the system by a third party. Additionally, the team is engaged with leading developers and building owners to enable data collection within a real-world deployed environment.
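As a rough illustration of the "detect and respond" idea described above, the Python sketch below maps a fused detection likelihood to tiered, low-regret building actions. The thresholds, zone model, and action strings are invented for this sketch; SenseNet's actual decision support is based on machine learning and validated dispersion models, not these fixed rules.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    zone: str
    agent_likelihood: float  # fused estimate in [0, 1] that an agent is present

# Invented thresholds: cheap, reversible actions fire early; alarms fire late.
LOW_REGRET_THRESHOLD = 0.3
ALARM_THRESHOLD = 0.8

def respond(reading: SensorReading) -> list[str]:
    """Tiered response: act within the first minutes, before responders arrive."""
    actions = []
    if reading.agent_likelihood >= LOW_REGRET_THRESHOLD:
        # Reversible HVAC changes that slow dispersion and buy response time.
        actions.append(f"isolate air handling for zone {reading.zone}")
        actions.append("increase outside-air fraction in adjacent zones")
    if reading.agent_likelihood >= ALARM_THRESHOLD:
        actions.append("notify first responders with zone map and video forensics")
    return actions

if __name__ == "__main__":
    for likelihood in (0.1, 0.4, 0.9):
        print(likelihood, respond(SensorReading("3F-East", likelihood)))
```

The point of the tiering is the low-regret principle from the text: because the early actions are cheap and reversible, they can be triggered at low confidence without waiting for the high-confidence declaration that an alarm requires.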

Who worked on the project?

Tim Wagner, Associate Director, CCS Program Office, United Technologies Research Center
Hayden Reeve, Associate Director, CCS Program Office, United Technologies Research Center
David Lincoln, Staff Research Engineer, Applied Physics, United Technologies Research Center
Russ Taylor, Staff Research Engineer, Thermal Management, United Technologies Research Center
Peter Harris, Associate Director, Technology and Platforms, CCS
Jon Hughes, Director of Product Management, Fire Systems, Kidde Edwards



Senses Neural Pathways Interactive

Company American Museum of Natural History

Introduction Date November 20, 2017

Project Website

Why is this project worthy of an award?

The playful interface of this interactive is inviting and easy to understand, while at the same time it unveils complex information about animal and human sensing in a visual and compelling way. The piece allows museum visitors to compare visual, aural, and olfactory proficiency between a person and other mammalian species. Visitors walk up to models of a human, coyote, and dolphin head and touch sensors linked to the eyes, nose, or ears. This triggers data visualizations on a large projected display: diffusion MRI scans show the neural pathways through the brain, connecting the sensory organ to the region of the brain where its interpretation center lies. By rotating the heads, visitors can compare the differences between brains and access information about them. Patrons quickly see how much we are alike in some ways, but also how each species has strengths based on what it needs to be successful.

What else would you like to share about your design? Why is it unique and innovative?

We imported raw data from animal and human diffusion MRIs into a Unity environment. We invite our visitors to navigate a three-dimensional environment using physical controls shaped like the animals' heads. The interface is powered by multiple technologies, including capacitive touch sensors and rotary potentiometers connected to a microcontroller, which activate several layers of data within the Unity environment. A simple, clean interface used to activate complex data is what makes this piece most innovative.
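For illustration, here is a minimal Python sketch of the kind of sensor-to-engine message flow just described, in which touch and rotation readings become events a Unity scene could parse. The sensor names, JSON format, and simulated values are assumptions made for this sketch, not the exhibit's documented protocol.

```python
import json
import random

# Hypothetical microcontroller-side sketch: poll the head's sensors and emit
# one JSON event per head. Everything here is invented for illustration.
SENSORS = ("eyes", "ears", "nose")

def poll_touch():
    """Stand-in for reading the capacitive touch pads on one head model."""
    return random.choice(SENSORS) if random.random() < 0.5 else None

def poll_rotation():
    """Stand-in for reading the rotary potentiometer under the head (0-1023)."""
    return random.randint(0, 1023)

def make_event(head: str) -> str:
    """Build one JSON event the Unity scene could parse to drive the display."""
    event = {"head": head, "rotation": poll_rotation()}
    touched = poll_touch()
    if touched:
        event["touch"] = touched  # would trigger the matching neural-pathway overlay
    return json.dumps(event)

if __name__ == "__main__":
    for head in ("human", "coyote", "dolphin"):
        print(make_event(head))  # in the exhibit this would go over serial/USB
```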

Who worked on the project?

Brett Peterson, Director/Developer
Hélène Alonso, Executive Producer
Eui Joon Kim, Developer
Gregory Berns, Neuroscience Advisor
Joseph Levit, Researcher and Writer
Jack Cesareo, Preparator
Genaro Mauricio, Preparator
Lissa Anderson, Graphic Designer
Sasha Nemecek, Editor
Victoria Azurin, Assistant Producer

View the project video: https://youtu.be/LTj4peIWvYI


shapeShift: Rich VR Haptics With Shape-Changing Robotic Displays

Company Stanford University

Introduction Date October 23, 2017

Project Website http://shape.stanford.edu/research/shapeShift/

Why is this project worthy of an award?

shapeShift reimagines the future of our interaction with computers and digital media by extending the interaction space from 2D screens to the physical world. In doing so, it leverages our inherent spatial reasoning skills and our ability to haptically interact with and manipulate the world around us. Virtual reality (VR) applications aim to engage more of our senses through immersive experiences, but try reaching out to an object and your hand strikes nothing but air. What if, instead of finger swipes and clicks, you could reach out with your whole hand to touch, feel, move, grasp, and rotate objects and elements?

shapeShift is a new type of robotic interface that moves on a tabletop and is capable of dynamically rendering things that you can not only see but also feel and manipulate. It uses a grid of actuated pins, like a pin art toy, to bring physical shapes to life in less than a second. Because it sits on a mobile platform, rendered objects can be grasped and moved by the user. Unlike existing interfaces that overload our visual senses, shapeShift is designed as a multimodal interface that leverages our hands' rich abilities to feel and manipulate things. This novel type of interface, which engages our innate haptic and spatial reasoning skills, can be used to create new experiences in all sorts of fields, from medicine and education to gaming and design.

In VR applications, the shapeShift robot can track your hands, move under your fingers, and quickly change its shape to re-create the virtual content you are reaching out to touch. Adding tactile sensations to a virtual world offers new potential for richer, more immersive, and more compelling experiences. shapeShift is like a digital clay that designers can use not only to design in the physical world but also to test designs with users, speeding up the overall design and development process for physical prototypes. For example, car designers could quickly test dashboard layouts and interactions that users can both see in VR and feel through shapeShift. With a multimodal computing interface, we can design educational applications that are more inclusive of people's wide range of abilities: shapeShift's tactile modality can help create new experiences for the blind and visually impaired to teach abstract and complex scientific concepts.

shapeShift is a step toward a future where computers blend into our environments and our interaction with digital content better matches our natural abilities. We imagine future interfaces will enable us to do even more by leveraging the skills we already have and use without even thinking about them.
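As a rough sketch of the core rendering step such a pin display implies, the Python below samples a virtual shape's height field at each pin center and clamps it to the pins' travel range. The grid size, pin pitch, travel range, and example shape are assumptions made for illustration, not shapeShift's published specifications.

```python
import math

# Assumed display parameters for this sketch only.
GRID_W, GRID_H = 12, 24       # pins across and along the display
PIN_TRAVEL_MM = 50.0          # maximum pin extension
PIN_PITCH_MM = 7.0            # center-to-center pin spacing

def hemisphere_height(x_mm: float, y_mm: float, radius_mm: float = 35.0) -> float:
    """Height field of a hemisphere centered on the display surface."""
    cx = GRID_W * PIN_PITCH_MM / 2
    cy = GRID_H * PIN_PITCH_MM / 2
    d2 = (x_mm - cx) ** 2 + (y_mm - cy) ** 2
    return math.sqrt(radius_mm ** 2 - d2) if d2 < radius_mm ** 2 else 0.0

def render(height_fn) -> list[list[float]]:
    """Sample the shape at each pin center; clamp to the actuator's range."""
    frame = []
    for row in range(GRID_H):
        y = (row + 0.5) * PIN_PITCH_MM
        frame.append([
            min(PIN_TRAVEL_MM, height_fn((col + 0.5) * PIN_PITCH_MM, y))
            for col in range(GRID_W)
        ])
    return frame  # each value becomes a position target for one pin actuator

if __name__ == "__main__":
    for row in render(hemisphere_height):
        print(" ".join(f"{h:4.1f}" for h in row))
```

In a VR scenario like the one described above, the same sampling would run every frame against the virtual object under the tracked hand, so the pins re-form the surface just before the fingers arrive.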

What else would you like to share about your design? Why is it unique and innovative?

Working toward our vision of the future of haptic interaction with digital media through shape-changing interfaces, we have begun exploring several application areas. We have collaborated with Volkswagen's Virtual Engineering Lab on applications that help their designers prototype car interiors more quickly. We are also working to predict user intent and touch when users interact with a virtual scene, in order to enhance the immersive experience. We have also prototyped applications to make computer-aided design (CAD) accessible to the blind and visually impaired. Currently, most CAD applications rely on a graphical user interface and have a steep learning curve. We are using shapeShift to help visualize and validate designs: instead of creating a model and then waiting for it to 3D print for verification, shapeShift can quickly render the design and enable more iteration. Beyond visualization, we are also exploring new interactions that allow people to create 3D models in the physical world through gestures, instead of having to translate 2D drawings into 3D geometry. Lastly, we are conducting more structured, controlled studies to understand how tangibles benefit people. In one study, shapeShift was used to explore a spatial map layout; the added spatial movement allowed users to navigate the map 24% faster, with measurably less frustration and mental load.

While several shape-changing interfaces have been developed in the past, such as the MIT Media Lab's inFORM, shapeShift expands the design space in several key ways that allow us to explore new interactions. First, shapeShift is compact and mobile, allowing users not only to touch rendered objects but also to move and grasp them in space. shapeShift's mobility also lets us create the illusion of an infinite virtual surface and support multiple users interacting with it simultaneously. Second, shapeShift has a very dense array of pins that allows it to render more meaningful shapes. shapeShift is also unique in that all materials and procedures necessary to re-create the platform have been open sourced. To make it easy to access, the platform uses mostly off-the-shelf components along with laser-cut and 3D-printed parts. Moreover, the design is modular, so one can build as much or as little as necessary for the application context or available resources. We hope these decisions help increase access to this type of tangible interface, which in the past has mostly been limited to research settings. By increasing access, we hope to inspire designers, engineers, and researchers to prototype and develop new applications and interactions with physical interfaces.

Who worked on the project?

Alexa F. Siu, Eric J. Gonzalez, Shenli Yuan, Jason B. Ginsberg, Sean Follmer