Experimental Category Entries
Aguahoja: Programmable Water-based Biomaterials for Digital Fabrication
Company The Mediated Matter Group, MIT Media Lab
Introduction Date March 2, 2018
Project Website
Why is this project worthy of an award?
Nature made us half water. With water, natural ecologies facilitate the customization and tunability of an organism’s physical and chemical properties—through growth and biodegradation—as a function of both internal and external conditions. This cycle of birth, adaptation, and decay allows ecosystems to use materials in perpetuity. In old-growth forests and coral reefs, waste is virtually non-existent. Within this framework, matter produced by one member of an ecosystem, living or nonliving, inevitably fuels the lifecycle of another. Importantly, water is the vessel through which this matter is transformed and utilized, with unparalleled energy and resource efficiency. By contrast, the built environment is composed of inanimate objects that are designed to perform a finite set of predefined functions, with as little variability as possible. The pace at which we build these structures has required us to extract raw materials from the earth and transfer them far from their native habitats faster than they can be replenished. When their function is served or outlived, they become permanent waste in our landfills and oceans; the majority of plastics, woods, glass, and metals are never recycled. Aguahoja recapitulates Nature’s material intelligence in the design and fabrication of the grown environment. The environmentally responsive biocomposites used here are composed of the most abundant biopolymers on our planet - cellulose, chitosan, and pectin - which are parametrically compounded, functionally graded, and digitally fabricated to create biodegradable composites with functional, mechanical, and optical gradients across length scales ranging from millimeters to meters. In life, these materials modulate their properties in response to heat and humidity; in death, they dissociate in water to fuel new life. To design this behavior, generative fabrication algorithms have been implemented to enhance the strength of the panels and compensate for their weaknesses.
These include air pressure modulation and nozzle size variation to tune line diameter, and speed variation to modulate resolution and layer alignment. Chemically, surface roughness, weight distribution, tensile strength, and hydrophilicity are controlled via modulation of pH and the relative concentrations of molecular components. In combination, these interactions enable the structure to dynamically fluctuate between a rigid ‘shell’ and a flexible ‘skin,’ depending on environmental heat and humidity. At the local level, these relationships serve as parametric design inputs to algorithms that generate graded geometric patterns and property distributions, which respond to specific environmental conditions such as sunlight and rain. Regionally, these patterns are mapped to structural gradients that enhance or compensate for variability in material properties enabled by slight alterations to chemical formulae, multi-material interactions, and fabrication parameters. Globally, property gradients coalesce to distribute loads toward the central spine, absorb deformation from heat and humidity, and dissociate from the outside in. Derived from organic matter, printed by a robot, and shaped by water, this work points toward a future where we grow ecologies rather than assemble megalopolises. As such, Aguahoja embodies the Material Ecology design approach to material formation and decay by design; it is a realization of the ancient biblical verse “From Dust to Dust”―from water to water.
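To make the local-level idea concrete, here is a minimal Python sketch of a parametric mapping in which an environmental input (a normalized humidity-exposure field) drives a fabrication parameter (printed line width). All names, value ranges, and the linear rule are illustrative assumptions for this sketch, not the project's actual algorithms.

```python
# Hypothetical sketch: an environmental field drives a fabrication parameter.
# The function names, bounds, and the linear mapping are assumptions, not
# the Mediated Matter Group's actual model.

def line_width_mm(humidity: float,
                  min_width: float = 0.5,
                  max_width: float = 3.0) -> float:
    """Map a normalized humidity exposure (0..1) to a printed line width.

    Assumed rule: higher humidity exposure -> wider, more robust lines.
    """
    if not 0.0 <= humidity <= 1.0:
        raise ValueError("humidity must be normalized to [0, 1]")
    return min_width + humidity * (max_width - min_width)


def grade_panel(humidity_field):
    """Turn a per-cell humidity field into a per-cell line-width field."""
    return [[line_width_mm(h) for h in row] for row in humidity_field]
```

A real pipeline would derive fields like these from simulated sunlight and rain exposure and feed them into the robotic fabrication parameters (air pressure, nozzle size, speed) described above.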
What else would you like to share about your design? Why is it unique and innovative?
To utilize Nature’s material ecology, we naturally looked to one of its most successful architectural structures, the tree. Here, a single structural material, wood, is produced and modified by living cells in response to environmental conditions. These cells coordinate construction and augmentation that enable parametric oscillation between mechanical, functional, and optical properties, with minimal resources and a very basic library of building blocks. In order to support leaves and cantilever into branches, wood has to be stiff; otherwise, trees would droop under their own weight. It also has to be strong to resist shear forces from wind. Wood must be tough to resist and recover after damage, and must also be lightweight to avoid buckling under its own weight. Synthetic materials lack both these properties and the ability to respond to their environment. Plastics are ductile, bricks are weak, glass is brittle, and steel is prohibitively heavy. In trees, dynamism between living cells, the extracellular environment, and the hierarchical structures they create enables such tunability across length and time scales. Our work recapitulates Nature’s integration of data-driven inputs during material synthesis, fabrication, and decomposition to create optimized, dynamic structures that only temporarily divert their building blocks away from the natural resource cycles that enabled their synthesis. Here, we control the composition, structure, and properties of matter across multiple length and time scales by isolating molecules from natural waste streams, simulating their behavior, and synthesizing new materials. We then utilize molecular composition and material properties to inform robotic fabrication parameters such as nozzle diameter, height from the substrate, air pressure, and speed, in order to vary line thickness and other geometric properties that relate to structural, mechanical, and optical performance.
Accordingly, templating multiple compositions of continuous pectin skins and chitosan/cellulose shell enables us to design not only multiscale structural and functional properties, but also the dissociation constant (Kd) and sequence. In particular, pectin's natural translucency and viscoelasticity allow it to act as skin, while the variable ductility and strength of chitosan and cellulose enable structural patterns to function as a responsive shell. The tradeoff in dominance between the two relates to their relative composition, proportion, pH, gas permeability, hydrophilicity, and surface features. Tuning these functional knobs can enable adaptive utility and functional gradients that can, in turn, be parametrically mapped onto a structure at various length scales. In particular, dynamic relationships between the pH, surface roughness, and hydrophilicity of pectin skins and chitosan/cellulose shell yield vastly different shell colors, which correlate to differential stiffness, strength, shape change, brittleness, and dissociation constant (Kd) of large-scale panels. Accordingly, some artifacts exhibit dramatic changes in conformation in response to humidity and heat, while others darken or lighten as the seasons change. Some are brittle and transparent, with a glassy texture, while others remain flexible and tough, like leather. Despite their emergent diversity, these artifacts are mediated by climate, and in death, they dissociate in water to continue the natural resource cycles that enabled their synthesis.
Who worked on the project?
Neri Oxman – Primary Investigator, Associate Professor of Media Arts and Sciences
Jorge Duro-Royo – Project Lead, Research Assistant
Josh Van Zak – Research Assistant
Andrea Ling – Research Assistant
Yen-Ju Tai – Research Assistant
Nicolas Hogan – Research Assistant
Barrak Darweesh – Research Assistant
Christoph Bader – Research Assistant
View the project video:
AI Driven Mobile Speech Coaching App - Performance Support & Training Reinforcement
Company Orai & Mandel Communications
Introduction Date March 21, 2017
Project Website https://www.oraiapp.com/
Why is this project worthy of an award?
The 2017 Workplace Productivity and Communications Technology Report by Webtorials found that businesses lose an average of $11,000 per employee every year due to ineffective communications and collaboration. WHAT is said and HOW it is said can energize and move important business projects, activities, and decisions forward... or stop them dead in their tracks. Unskilled and average communicators lose far too often, and poor communication skills are costing businesses millions of dollars every day. One way to address this pain point is to make employees go through LMS-style static training content (videos, PDFs) on communication skills development. Although this is scalable, it has proven ineffective. On the other hand, hiring a 1:1 speech coach is highly effective but does not scale across an organization. And industry research shows that acquiring new behavioral skills (versus concepts) requires repetition; people must try a new behavior multiple times before it becomes practiced enough to be comfortable and effective. We sought to create a solution that brings the best of both: effective yet scalable, offering an easy way to practice and instill a new behavior. Orai partnered with Mandel, a communications coaching company, to create a “Mobile Speech Coach” powered by artificial intelligence. Using Mandel’s communications content, Orai provides instant feedback on the user’s energy, pace, and filler words. Users can easily share their recordings with their managers, and Orai even provides accent support for non-native English speakers. The communication skills lessons reinforce traditional training workshop experiences but can also be used as a stand-alone training experience. And when needed, the content can be tailored to the enterprise’s operating environment, so it reflects the operating culture of the company, further cementing long-term behavior change.
Today, sales, technical, marketing, finance, and IT professionals, managers, and executives from all functional areas in various organizations are using Orai, with the Mandel methodology inside, to think, speak, and generate positive results. Sales and marketing professionals practice their value propositions before a critical customer call. IT professionals use Orai before seeking internal funding for their projects, and executives practice with Orai before delivering high-stakes presentations. Here’s a testimonial from a Sr. Manager at a large technology company: “I tried out the ORAI app last night for a presentation I am giving next week to a group of CIO/CEO’s and after 4 run throughs, I’ve already cut down on my filler words by 50% and gotten my pauses way better to keep the pace understandable. If any of you have any presentations coming up, I highly recommend trying the app out with them to see how it can help you improve.” Thanks to the combination of new technology, an engaging user experience, and award-winning communications content, speaking with impact and clarity is now within reach of every employee.
What else would you like to share about your design? Why is it unique and innovative?
Being able to generate actionable feedback on a user’s communication skills from a machine was challenging. What was even harder was visually displaying this on a mobile device. We interviewed hundreds of public speakers, speech coaches, and public-speaking professors about how they give feedback to their learners. We distilled their nuanced ways of giving feedback into quantitative methodologies and rules that we could train a machine on. We even read numerous computational-linguistics papers on how humans perceive good and bad speech. Simultaneously, we tested several UI/UX flows to find how best to present feedback on a small screen. This is still a work in progress, and the app that’s live on the App Store is iteration #5! Today, with a tap of a button, Orai analyzes your voice, picks up on filler words like um, tells you if youarespeakingtoofast… or.too.slow and provides a transcript of what you just said, which highlights your varying tone projection… or lack thereof. And very soon, Orai will be able to analyze your posture, body language, and facial expressions to provide 7-point real-time feedback better than any human could. Orai has been featured in TechCrunch, Fast Company, and Wired, and over 100,000 people across the world have downloaded the app. Since launch, Orai has counted more than 100,000 “ums” in people’s speech. And after just 3 sessions, users have improved their communication skills by up to 30%. Last month, Orai was a finalist in Fast Company’s World Changing Ideas Awards.
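As a rough illustration of two of the metrics described above (filler-word counts and speaking pace), here is a minimal Python sketch. The filler list, pace thresholds, and function names are assumptions for illustration, not Orai's actual model, which works from live audio rather than a finished transcript.

```python
# Toy sketch of transcript-based speech metrics. All thresholds and the
# filler-word list are illustrative assumptions.
import re

FILLERS = {"um", "uh"}  # assumed single-word fillers for this sketch


def count_fillers(transcript: str) -> int:
    """Count occurrences of filler words in a transcript."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return sum(1 for w in words if w in FILLERS)


def words_per_minute(transcript: str, duration_s: float) -> float:
    """Speaking pace as words per minute over the recording duration."""
    words = re.findall(r"[a-z']+", transcript.lower())
    return len(words) / (duration_s / 60.0)


def pace_feedback(wpm: float, slow: float = 120, fast: float = 160) -> str:
    """Bucket a pace value into coarse feedback (assumed cutoffs)."""
    if wpm < slow:
        return "too slow"
    if wpm > fast:
        return "too fast"
    return "conversational"
```

A production system would also need speech-to-text with word timestamps, which is where the hard machine-learning work described above comes in.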
Who worked on the project?
Danish Dhamani (Co-Founder & CEO, Orai), Paritosh Gupta (Co-Founder & CTO, Orai) and Diane Burgess-Faber (Vice President of Solution Design and Client Engagement, Mandel Communications)
View the project video: https://www.youtube.com/watch?v=ocX1QEzIey8
Airbus Transpose: Flying Reinvented, Booking Reimagined
Why is this project worthy of an award?
We are painfully aware of how tiresome it is to book plane tickets. Now imagine asking passengers not only to book their flights but also to figure out what they want out of a flight experience. Our challenge was to keep the passenger’s online booking experience simple, stress-free, and engaging. Idean worked together with the A^3 by Airbus Transpose team to design this booking experience from existing user research. The outcome: a patented design of an online service to discover, book, and personalize travel on a Transpose flight. We kept our focus on two primary archetypes: Planners and Non-Planners. While planning trips, Planners like to feel in control, while Non-Planners prefer flexibility. We accommodated the needs of both types of users as the overall booking experience was devised. The booking process was divided into stages to systematically guide users through discovering, choosing, and customizing a unique journey. Through the process of designing this captivating online booking experience, we established five design principles:

1. Simplified choices: To reduce decision-making effort and keep users engaged, Transpose offers four core Collections. Intelligent recommendations of curated experiences are generated for the passenger, based on the best combinations available at the most suitable times.

2. Energetic visual brand: We infused the experience with energy from vibrant colors and playful iconography to encourage users to explore different Collections and keep them engaged through the booking process.

3. Thoughtful tone of voice: We evolved a conversational tone of voice to differentiate Transpose as an exciting, adventurous new paradigm. Instead of standard “seats” and “flights,” we used more fitting terms like “Experiences” and “Journeys.”

4. Playful interactions with time: Instead of defaulting to a standard schedule, we designed the itinerary as an interactive and flexible guide. Discreet animations guide the user to drag and drop or add/delete Experiences on the timeline. The interactions are intuitive enough to do away with wordy instructions. We first finessed animations in After Effects and subsequently built an HTML prototype to test their efficacy with end users.

5. Scalability: The playful, multi-colored isometric iconography easily scales into a visual system that can be used by multiple airlines. We were inspired by the isometric views of the Transpose interior designs in our choice of iconography.

The user experience was complex when it came to booking a series of experiences, but with these design principles, we were able to express our vision of how a user would navigate the site.
What else would you like to share about your design? Why is it unique and innovative?
Airbus A^3 Transpose is an audacious and revolutionary concept that dares to challenge the future of air travel. It is time to disrupt the commercial aviation industry and surpass passenger expectations of what it means to fly on a plane. We envision air travel that sets itself apart by fearlessly rejecting the jaded ‘one-size-fits-all’ model. Bid goodbye to the tedium of long, cramped, tethered flights and say hello to an exotic cocktail in the lounge followed by a relaxing massage at the spa while your kids play in the jungle gym. Imagine all this delight in a plane that is newly outfitted with curated modules and customized based on passenger needs, seasons, and even flight destinations. Transpose is the outcome of radical collaboration among a global team of partners and consultants, ranging from designers, researchers, and engineers to behavioral economists -- we’ve brainstormed, designed, and simulated high- and low-fidelity physical prototypes and executed live flight experiences with passengers. A future-focused innovation like Transpose called for an equally advanced online booking experience. And this is where Idean stepped in. An essential part of the Transpose experience takes place even before the passengers reach the airport: the task of planning their customized flight experience. Our ambition was to energize a prospective passenger’s weary task of booking plane tickets while simplifying the added complexity of planning their new in-flight experiences.
Who worked on the project?
Sampo Jalasto, Creative Director, Idean
Sunita Ram, Project Lead & UX Designer, Idean
Darin Hansford, Program Director, Idean
Luis Munguia, Visual Designer, Idean
Helder Silva, Visual Designer, Idean
Hesam Khodabakshi, Visual Designer, Idean
Kia Alavi, UX Designer, Idean
Justin Dawson, UI Developer, Idean
Bryan Downing, UI Developer, Idean
Milja Hakala, Project Manager, Idean
View the project video: https://drive.google.com/file/d/0ByEijC9dfA8iSXNCQVl1RVdhVGc/view?usp=sharing
Alibaba ET Brain
Company Wolff Olins
Introduction Date February 10, 2018
Project Website
Why is this project worthy of an award?
ET Brain represents a big leap in Artificial Intelligence. It is the first AI platform to mimic the structures and patterns of our human brains, meaning it learns, adapts, and arrives at solutions quicker and more accurately than humans. ET Brain is already part of everyday life in China, keeping city infrastructures flowing and its people safer. It’s also being rolled out for Industry, Medical, Aviation and Environmental uses – and was unveiled to the world as the digital partner of the Olympics at Pyeongchang 2018. While the power of its technology is impressive, our client knew they faced problems bringing ET Brain to a global audience. High on the list was removing the fear and anxiety associated with AI in general: we needed to shift the narrative away from Skynet and Black Mirror towards something much more positive and human. The challenge could be summed up as giving something invisible, inhuman and intangible an identity that people could relate to and, ultimately, trust. It wasn’t enough to make ET Brain globally recognisable; users interact with ET Brain every day, and we needed to make that service seamless and simple. We had to go beyond traditional visual identity thinking. We imagined ET Brain as an evolutionary technology, one that was not only highly advanced but could help humanity evolve too. To live up to this promise, users needed to always understand what ET Brain was doing and why it arrived at its conclusions. By itself the logo is designed to look like a friendly, approachable face, but we also gave the logo a kind of soul. We built a series of facial-like expressions and movements to convey emotions and responses that are based on universally understood human gestures. Each gesture, such as waiting, listening, thinking and speaking, gives ET Brain the ability to respond and interact with users in real-time, helping it communicate globally without words and build trust in an invisible, hard-to-understand technology.
What else would you like to share about your design? Why is it unique and innovative?
This identity is one of the first examples of a new type of identity that we are calling Intelligent Identities. As tech becomes more integrated into the world around us, brands will need identities that are more intelligent and built for this world. None of the standard VI rules apply: you’re relying on sounds, voice, gestures and behaviours as much as (possibly more than) the mark itself — smart, interconnected and responsive assets, intrinsically connected to new technologies and platforms. An Intelligent Identity helps a brand have a conversation at a human level, on a broad scale. It helps people do more. Intelligent Identities listen, creating a genuine and lasting dialogue with people. Above all, they seek to build an emotional connection with people, and aim to influence lives in positive ways. For ET Brain this meant thinking about where it lives and bringing that personality across when it is an avatar, a presenter, a voice, an interface, and even a 4ft character! If we got it wrong, and didn’t create a coherent, recognisable, dependable intelligence across any of those touchpoints, we would destroy trust very quickly. We used animation principles to humanise every gesture of the logo, and we created quirky, soft, approachable sounds to invite people to be curious and step towards it rather than shy away from it. We also created a strong visual world for ET Brain, showing it interacting with people and things in our world where possible.
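The gesture set described for the logo (waiting, listening, thinking, speaking) behaves like a small finite-state machine. The sketch below is a hypothetical Python rendering of that idea; the event names and transition table are illustrative assumptions, not Wolff Olins' actual implementation.

```python
# Hypothetical finite-state machine for the logo's gesture states.
# States come from the text; events and transitions are assumptions.

TRANSITIONS = {
    ("waiting", "user_speaks"): "listening",
    ("listening", "utterance_ends"): "thinking",
    ("thinking", "answer_ready"): "speaking",
    ("speaking", "done"): "waiting",
}


def step(state: str, event: str) -> str:
    """Advance the gesture state; unrecognised events keep the current state."""
    return TRANSITIONS.get((state, event), state)
```

Modelling the identity this way is what lets the same gesture vocabulary stay coherent whether it is rendered as an avatar, a voice, an interface, or a 4ft character.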
Who worked on the project?
Emma Barratt, Creative Director
Sidney Lim, Senior Designer
Larissa Maris, Programme Director
Matthias Hoegg, Senior Animator
Tom Bennett, Designer
Franc Falco, 3D Modeller
View the project video: https://vimeo.com/255443293
AlterEgo: Intelligence Augmentation through Silent Speech Interface
Company MIT Media Lab
Introduction Date March 4, 2018
Project Website https://www.media.mit.edu/projects/alterego/overview/
Why is this project worthy of an award?
Looking at the history of computing, one could argue that machines have invariably been regarded as extrinsic objects that we interact with – early mainframes, personal desktops, smartphones, and social robots, among others. However, personal computing was originally designed as a platform with the intent to directly augment human intellect. The aim of the system, named AlterEgo, is to make computers and machine intelligence an internal extension of a person, as opposed to an external object that a human user interacts with (smartphones, desktops, smart speakers, mainframes, etc.), as has been the norm for more than half a century. AlterEgo aims to dovetail humans and computers into single entities - such that computing, the internet, and artificial intelligence would weave into human personality as a “second self” and act as an internal adjunct to our own cognitive abilities. The non-invasive wearable system enables computing, artificial intelligence, telecommunications, and the internet as a cognitive extension of a human through a silent, bi-directional internal speech interface. The AlterEgo platform gauges internal human speech through a peripheral neuromuscular interface, transmits information to a computer, and then relays relevant information back to the user through bone conduction. This keeps the interaction completely internal to the user - with computers, AI assistants, or other people - without unplugging the user from her environment, and allows her to transmit and receive streams of information without any observable action and with complete discretion. Such a platform opens up a wide range of possibilities and in turn seeks to change our longstanding relationship with technology.
This platform allows a human user to connect to the internet and access the knowledge of the web in real time as an extension of the user’s self - a user could internally vocalize a Google query and get the resultant answer through bone conduction without any observable action at all. The system has implications for telecommunications, where people could communicate with the ease and bandwidth of vocal speech with the addition of the fidelity and privacy that silent speech provides. The system acts as a digital prosthetic memory, and therefore enables effectively unlimited memory for a human user - the user could internally record streams of information and access them at a later time through the system. The system allows a human user to be a nested node in an internet-of-things (IoT) network, such that the user could control diverse appliances and devices without any discernible movements. A central goal of the AlterEgo platform is to democratize artificial intelligence and make it accessible to directly benefit humans. One example application uses the AlphaGo engine in conjunction with AlterEgo, enabling a human user to access the expertise of an AI in real time, as though it were part of the user herself. The platform thereby lets anyone play Go like an expert, in a demonstration of how AlterEgo could augment human decision-making through machine intelligence in the near future.
What else would you like to share about your design? Why is it unique and innovative?
The design of AlterEgo puts the human user at the center, with the aim of amplifying human cognition by making the interaction with a computer an internal and personal one. The concept behind the interface is to treat the internal speech articulators and the peripheral somatic system like an electronic low-amplitude, low-frequency oscillator, and to record the sparse neuromuscular electrical motifs generated by neurological activation of internal muscles during internal speech as clues from which to infer that speech. These endogenous electric signals are picked up non-invasively from the surface of the skin by a set of electrodes embedded in the wearable system. We believe it is imperative that an everyday interface not invade a user's privacy and private thoughts - and therefore not have physical access to the user's continuous brain activity. This is a basis of our design, and to that end AlterEgo is a flip on traditional approaches to interfacing humans and computers: we do not read the brain directly, but instead read deliberate electrical neuromuscular signals from the lower face and neck, which are induced by extremely subtle activation of the internal speech organs when a person deliberately, but internally, speaks - a fundamental advantage. In this way, a person has absolute control over what information to transmit to a computer or another person, while also conversing with a computing device privately (without anyone in the vicinity noticing that the user is transmitting any information at all). We use this to facilitate a novel user interface where a user can silently communicate in natural language and receive auditory output through bone-conduction headphones, enabling discreet, bi-directional interaction with a computing device and providing a seamless form of intelligence augmentation that people can feel comfortable using without being concerned about the device invading their privacy.
The key feature of the system is that a user can have an internal, private conversation with a computer or another person while keeping their own thoughts private, without the system having any direct access to the brain. This indirect approach records internal vocalization - a sweet spot between thinking and speaking out loud - that is private while also giving the user absolute control over what information to transmit to another person or a computer.
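As a toy illustration of the signal-processing shape such a pipeline might take, the Python sketch below extracts two classic surface-EMG features (root-mean-square energy and zero-crossing count) from a signal window and assigns the window to the nearest class centroid. Every name and the classifier choice here are assumptions for illustration; AlterEgo's actual recognition model is not described in this text.

```python
# Toy sEMG classification sketch: simple features + nearest-centroid labels.
# Feature choices and the classifier are illustrative assumptions only.
import math


def rms(window):
    """Root-mean-square energy of one signal window (a common sEMG feature)."""
    return math.sqrt(sum(x * x for x in window) / len(window))


def zero_crossings(window):
    """Count sign changes between consecutive samples."""
    return sum(1 for a, b in zip(window, window[1:]) if a * b < 0)


def features(window):
    """Bundle the two features into one vector."""
    return (rms(window), zero_crossings(window))


def nearest_centroid(feat, centroids):
    """Assign a feature vector to the closest class centroid (Euclidean)."""
    return min(centroids, key=lambda label: math.dist(feat, centroids[label]))
```

A real system would train the class centroids (or a far richer model) on electrode recordings of internally vocalized words, but the overall shape - window, featurize, classify - is the same.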
Who worked on the project?
Arnav Kapur - Graduate Researcher
Shreyas Kapur - Undergraduate Researcher
View the project video: