Experimental Category Entries
Shutterstock Composition Search
Company Shutterstock
Introduction Date October 12, 2017
Project Website https://www.shutterstock.com/showcase/compositionsearch/
Why is this project worthy of an award?
Shutterstock is on the front lines of improving the future of visual search technology using pixel data, deep learning, and artificial intelligence. For marketers, finding an image with the right copy space can be a time-consuming process of scrolling through pages of results in a collection of more than 190 million images. Shutterstock has invested in its machine learning technology to improve the customer experience and free up more time for productivity and creativity. Composition Search enables an entirely new search experience. Built on Shutterstock’s next-generation visual similarity model, this tool allows users to specify one or more keywords, or to search for copy space, and arrange them spatially on a canvas to reflect the specific layout of the image they are seeking. This patent-pending tool uses a combination of machine vision, natural language processing, and state-of-the-art information retrieval techniques to find strong matches against complex, spatially aware search criteria. For example, a user can look for images of wine and cheese where the wine is on the left and the cheese is on the right. By simply moving the placement in the search, users see the requested changes reflected in the image results. Shutterstock customers can then license and edit the image for use in their work.
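As an illustration of the kind of spatially aware scoring described above, the following is a minimal, hypothetical sketch (invented names and data; not Shutterstock's actual implementation). A query places keywords on a canvas as normalized boxes, and each candidate image is assumed to come with pre-computed concept regions from a vision model; the score rewards images whose matching concepts sit where the query placed them.

```python
# Hypothetical sketch of spatially aware composition scoring.
from dataclasses import dataclass

@dataclass
class Region:
    concept: str          # e.g. "wine", "cheese", "copy space"
    x0: float; y0: float  # normalized box corners, 0..1
    x1: float; y1: float

def overlap(a: Region, b: Region) -> float:
    """Intersection-over-union of two normalized boxes."""
    ix = max(0.0, min(a.x1, b.x1) - max(a.x0, b.x0))
    iy = max(0.0, min(a.y1, b.y1) - max(a.y0, b.y0))
    inter = ix * iy
    union = ((a.x1 - a.x0) * (a.y1 - a.y0) +
             (b.x1 - b.x0) * (b.y1 - b.y0) - inter)
    return inter / union if union else 0.0

def score(query: list, image_regions: list) -> float:
    """Each query term contributes its best spatial match among same-concept regions."""
    total = 0.0
    for q in query:
        candidates = [r for r in image_regions if r.concept == q.concept]
        total += max((overlap(q, r) for r in candidates), default=0.0)
    return total / len(query)

# "Wine on the left, cheese on the right"
query = [Region("wine", 0.05, 0.2, 0.45, 0.9), Region("cheese", 0.55, 0.2, 0.95, 0.9)]
image = [Region("wine", 0.10, 0.3, 0.40, 0.8), Region("cheese", 0.60, 0.3, 0.90, 0.8)]
print(round(score(query, image), 3))
```

Moving a box in the query immediately changes which images score highly, which mirrors the behavior described above when users rearrange placements on the canvas.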
What else would you like to share about your design? Why is it unique and innovative?
Composition Search is the latest innovation leveraging Shutterstock’s investment in deep learning, following the launch of Reverse Image Search and Visually Similar Search last year. These innovations were developed by Shutterstock’s in-house computer vision team, whose focus is on creating new ways to search for visual content and providing an unparalleled customer experience. Shutterstock’s new experimentation site, Showcase, demonstrates the company’s commitment to new and innovative search tools. Additional artificial intelligence tools on the site include: Reveal, a Google Chrome extension that lets users select any image online and find a similar photo within Shutterstock’s collection; Copy Space, a tool that lets users search specifically for images with space for text and then select where and how much copy space is needed; and Refine, a tool that lets users select, from the first page of search results, the images most similar to what they are looking for, after which Shutterstock’s technology surfaces other images with a similar style and other commonalities to those selections.
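As a rough illustration of how a Refine-style step might work under the hood (a hypothetical sketch, not Shutterstock's implementation), one common approach is to average the embeddings of the images a user selects and re-rank the collection by cosine similarity to that centroid:

```python
# Illustrative "refine by example" re-ranking sketch (hypothetical, not Shutterstock code).
import numpy as np

def refine(selected: np.ndarray, collection: np.ndarray, top_k: int = 10) -> np.ndarray:
    """selected: (m, d) embeddings of liked images; collection: (n, d) candidate embeddings."""
    centroid = selected.mean(axis=0)
    centroid /= np.linalg.norm(centroid)
    normed = collection / np.linalg.norm(collection, axis=1, keepdims=True)
    scores = normed @ centroid              # cosine similarity to the taste centroid
    return np.argsort(-scores)[:top_k]      # indices of the most similar images

rng = np.random.default_rng(0)
liked = rng.normal(size=(3, 128))           # embeddings of the user's selections
candidates = rng.normal(size=(1000, 128))   # embeddings of the wider collection
print(refine(liked, candidates, top_k=5))
```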
Who worked on the project?
Michael Ranzinger, Principal Software Engineer
Nicholas Lineback, Software Engineer III
Tyler McCann, Product Manager, Customer Accounts
Rose Matsa, Product Design Manager
View the project video: https://www.youtube.com/watch?v=1KxXtI-0DMg
Sketching Interfaces
Company Airbnb
Introduction Date October 24, 2017
Project Website https://airbnb.design/sketching-interfaces/
Why is this project worthy of an award?
As the design systems movement gains steam and interfaces become more standardized, it seems likely that AI-assisted design and development will be baked into the next generation of tooling. With Sketching Interfaces, we lowered the time it takes to test an idea by translating low-fidelity, hand-drawn components into high-fidelity prototypes in real time. The project immediately sparked interest in the broader community, inspiring follow-on projects such as Microsoft’s Ink to Code and a write-up on FloydHub.
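To make the sketch-to-prototype idea concrete, here is a minimal, hypothetical sketch of the final mapping step (invented class and component names; Airbnb's actual pipeline is not detailed in this entry). A classifier is assumed to have labeled each detected stroke group; the labels are then mapped onto design-system components and emitted as a renderable spec:

```python
# Hypothetical sketch-to-prototype mapping step.
from dataclasses import dataclass

@dataclass
class Detection:
    label: str      # class a drawing classifier might predict, e.g. "button"
    x: int; y: int; w: int; h: int
    confidence: float

# Assumed mapping from sketch classes to design-system components.
COMPONENT_MAP = {"button": "DesignSystem/Button",
                 "image": "DesignSystem/Image",
                 "text_block": "DesignSystem/Text"}

def to_prototype(detections, min_conf: float = 0.6):
    """Turn low-fidelity detections into a high-fidelity component spec."""
    spec = []
    for d in sorted(detections, key=lambda d: (d.y, d.x)):   # rough reading order
        if d.confidence < min_conf or d.label not in COMPONENT_MAP:
            continue
        spec.append({"component": COMPONENT_MAP[d.label],
                     "frame": {"x": d.x, "y": d.y, "w": d.w, "h": d.h}})
    return spec

drawn = [Detection("image", 20, 20, 280, 160, 0.92),
         Detection("text_block", 20, 200, 280, 40, 0.81),
         Detection("button", 20, 260, 120, 44, 0.74)]
print(to_prototype(drawn))
```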
What else would you like to share about your design? Why is it unique and innovative?
As it currently stands, every step in the design process, and every artifact produced, is a dead end. Work stops whenever one discipline finishes a portion of the project and passes responsibility on to another discipline. From stakeholder meetings to design and engineering, requirements become explorations, which then become mockups and prototypes that are handed off to developers. This is how final products come to be. Each of these cumbersome steps is, at its core, a translation of shared meaning into a different medium in progression toward a common goal, with skilled experts in each domain acting as translators. By using machine learning, we demonstrate a potential continuity between disciplines and, in doing so, increase product development speed.
Who worked on the project?
Benjamin Wilkins (Design Technologist), Jon Gold (Design Technologist)
View the project video: https://airbnb.app.box.com/file/283711550566
Snaptivity
Company R/GA London
Introduction Date February 2, 2017
Project Website http://www.snaptivityapp.com/
Why is this project worthy of an award?
Collecting numerous data points throughout our partner sports stadiums, our AI predicts when and where the next big audience reaction will happen. This triggers our robotic cameras, capturing candid fan moments. We meet the needs of the selfie culture, but leave fans free to watch the game without distraction. Each photo is enhanced with dynamic creative content triggered by on-pitch action, and a smart user experience drives fans to social platforms. Snaptivity enables brands to be part of a fan’s memories, sharing the right message for every unique moment of sports drama.

We’re bringing live sports into the digital age. 1.5 million people flow into stadiums across the UK every weekend to experience the thrill of live sports. Nothing beats watching the game from inside the stadium, but capturing that excitement on individual phones is hard, and often ruins the moment. Additionally, as the stadium experience has become more immersive, fans have developed ‘banner blindness’ – they no longer notice traditional sponsorship. We were looking for a game-changer – a genuine and memorable way for teams and sponsors to be part of fans’ live sport experiences while keeping fans glued to the pitch, not to their smartphones. We wanted to create an easy-to-install, scalable solution for bringing innovation into stadium infrastructure – a way to increase the value stadiums can offer to fans, teams and brands.
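As a simplified illustration of this trigger logic (a hypothetical sketch only; the production system is described as a learned model over many sensor streams), a camera zone could fire when crowd noise spikes well above its recent baseline:

```python
# Hypothetical crowd-reaction trigger for one stadium zone.
from collections import deque

class ReactionTrigger:
    def __init__(self, window: int = 30, ratio: float = 1.8):
        self.history = deque(maxlen=window)   # recent volume readings for this zone
        self.ratio = ratio                    # spike threshold relative to the baseline

    def update(self, volume: float) -> bool:
        baseline = sum(self.history) / len(self.history) if self.history else volume
        self.history.append(volume)
        return volume > baseline * self.ratio

trigger = ReactionTrigger()
readings = [62, 60, 63, 61, 64, 118]          # a sudden roar in the stands
for t, v in enumerate(readings):
    if trigger.update(v):
        print(f"t={t}: fire robotic cameras covering this zone")
```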
What else would you like to share about your design? Why is it unique and innovative?
Instead of photographing the action on the pitch, we’re photographing the fans. Snaptivity enhances fans’ experiences of live sport and challenges mainstream use of in-stadium technology. We’re using IoT, robotics and live data in a new, positive and rewarding way – reassuring fans that their most memorable moments will be captured and ready to share whenever they want.

Each photo is matched with creative content, using contextual information from the fan and data from the match. Brands or teams can then add bespoke creative content, which can also be dynamically matched to the on-pitch moment that triggered it. Along with a smart user experience that drives social sharing in fan communities, we generate unique, high-quality, authentic fan content as fast as the match is played. This offers teams and sponsors a genuine way to contribute to fan conversations on social media as they happen, and to create memories that last.

During the live match, our crowd movement and volume sensor data feeds our AI to predict when and where the next big audience reaction will happen and to set the cameras snapping. Coupled with our spatial AR-based stadium mapping system, our API-based solution enables us to deliver the right photo to the right fan at the right time. Our system also enables teams and brands to overlay dynamic content on each photo – targeting down to the seat number – so that brands are part of a specific fan moment, sharing the right emotion and message for every unique moment of sports drama. This relevance, together with our simple user experience, encourages sharing with the fan community on social media. Snaptivity offers teams and sponsors an authentic way to be a value-add contributor to fans’ conversations as they happen, and in their photo memories long after match day.

In sports stadiums around the world, including Edgbaston Cricket Ground, Wembley Stadium, Twickenham Stadium and eight FIFA 2018 World Cup host stadiums in Russia, we are transforming the way that teams and sports sponsors connect with fans. We install our custom robotic cameras, internally connecting them to all our sensors through our AI on a mesh network. During the match itself, the system runs at the speed of the on-pitch action, taking photos when excitement in the crowd spikes. We use trigger moments during a match to layer dynamic creative content onto the fan photos. Our partners can create content for the overlays in advance, or create it live in response to what’s happening on pitch. This means their messages will always be relevant for the fans, ready to share, and earning the brand or campaign an authentic role in the fans’ conversations on social media.
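The seat-level delivery and overlay matching described above could look roughly like the sketch below (a hedged illustration with an invented data model; the entry describes an AR-based stadium map and an API-based delivery layer, not this exact code). A photo is delivered to a fan only if the capturing camera covers that fan's seat block, and the overlay is chosen from the on-pitch moment that triggered the capture:

```python
# Hypothetical seat-targeted photo delivery and overlay selection.

# Stadium map: which camera covers which seat blocks (assumed structure).
CAMERA_COVERAGE = {"cam_12": {"block_N3", "block_N4"}, "cam_13": {"block_N5"}}

# Overlay templates keyed by the on-pitch moment that triggered the capture.
OVERLAYS = {"goal": "sponsor_goal_frame.png", "last_over": "sponsor_final_over.png"}

def deliver(photo_id: str, camera: str, trigger_event: str, fan_seat: str):
    """Return a delivery payload if this camera's photo covers the fan's seat block."""
    block = fan_seat.rsplit("-", 1)[0]                 # e.g. "block_N3-017" -> "block_N3"
    if block not in CAMERA_COVERAGE.get(camera, set()):
        return None
    return {"photo": photo_id,
            "overlay": OVERLAYS.get(trigger_event, "sponsor_default.png"),
            "seat": fan_seat}

print(deliver("p_8841", "cam_12", "goal", "block_N3-017"))
```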
Who worked on the project?
Olly Paulovich - Co-Founder and CMO
Amit Pate - CEO
Robert Northam - Creative Director, Visual Design
Rosie Flood - Producer
Jessica Ryde - Senior Consultant
Ed Steadman - Junior Video Editor
Martin Spurway - Senior Experience Designer
Julen Saenz - Visual Designer
James Temple - EVP Chief Creative Officer EMEA
Nicolas Olivieri - Senior Director of Programs, R/GA Ventures
Matt Webb - Managing Director, IoT UK Accelerator
Lisa Ritchie - Program Director, R/GA Ventures
Chloe Cronyn - Content Producer
View the project video: http://judgeseyesonly.com/snaptivity
Socializing AI: a human experience with Watson
Why is this project worthy of an award?
Artificial Intelligence is everywhere. More than 8.3 billion devices around the world are turning on lights, brewing cups of coffee, and controlling the comfort of our homes with the ease of AI. But few among us understand how these systems work, and even fewer can effectively communicate those ideas to others in a way that is clear and approachable. Challenging ourselves to transform the conversation around Artificial Intelligence, IBM set out to create a way to explain these complex and technical ideas in a clear, reasoned, and human voice.

To tell the story of AI, we looked to history for inspiration. We were particularly inspired by Ray and Charles Eames’ iconic work for IBM at the World’s Fair, a visionary exhibit that told of the partnership between computers and society. This inspiration led to an idea with humanity at the core of AI: the Watson Experience Center. We created collaborative meeting spaces where clients, researchers, and students learn about Artificial Intelligence through real-world examples of AI working to help people across industries, demystifying AI and reshaping misconceptions in the process. Our visitors span all levels of technical expertise, from technically savvy CEOs to students in our local Pathways in Technology Early College High Schools (P-Tech) partnership. Every visitor leaves with a clearer understanding of Watson and AI and feels inspired by its potential for society.

Our guides invite visitors to look under the hood of AI technology: how we transform terabytes into insights, how we teach Watson to learn, how we train it, and how we utilize it; the power of AI presented with a human perspective. Blending storytelling with design, data, and expert-guided interaction, we present immersive AI experiences that are clear, reasoned, and approachable. Every use case starts with a deep-dive workshop that leverages skills from our product, creative, interaction design, and spatial engineering teams. Over the course of six months, our teams iteratively refine the story, interactions, end-to-end prototypes, and spatial engineering to create the applications. Using gesture-based interaction and in-depth data visualizations, we demo how AI systems make decisions and what we are doing with them today: on natural disasters and financial crimes, oil rigs and art galleries, headlines and product lines. Our 300° immersive room allows humans to see massive amounts of data through the lens of machine learning, along with the endless patterns, insights, and opportunities it promises.

The centers run at full capacity throughout the year, presenting these stories of Watson to thousands of groups, totaling 15,000 visitors across 25 industries, and sparking their imaginations while seeding their knowledge of AI. Visits lead to a 93-point increase in Net Promoter Score, with a measurable increase in understanding of AI and Watson.
What else would you like to share about your design? Why is it unique and innovative?
The IBM Watson Experience Centers are permanently branded experiences, with locations in New York City, San Francisco, and Cambridge. The centers are invitation only, and their primary audience is potential customers of IBM Watson. Each location acts as a hive of creative and technical Watson activity. The centers are a physical representation of the IBM Watson brand and challenge both misconceptions of AI and visitors’ preconceived notions of IBM.

By leveraging our decades of research and expertise in Artificial Intelligence, we created a form of spatial exploration that allows non-technical visitors to comprehend data at scale and see how AI works. By integrating science, technology, and creative expression, we make clear to clients and the world the value of AI, its practical applications, and its potential for society, creating vignettes that use real-world applications of the technology and live data to present an accurate view of AI.

Our applications must serve dynamic and evolving groups of users. We have to develop real examples of Watson technology, pulled from our product offerings, that are intuitive for our Experience Leaders. We also need to craft stories that demonstrate the use cases with bright ideas, discernible visualizations, and language that is accessible and not pedantic. We have adopted the ethos of "show, don't tell," a UX pattern of progressive evidence: rather than presenting a waterfall of information, we use Watson's findings and discoveries as doorways the user follows, so visitors co-discover the insights Watson is uncovering. The intuitive UX, combined with the unique form factor of a 300° immersive room, has led us to create practical and innovative practices of experience design that we can apply to AI products.

Our team’s task is to overcome some unique challenges: reframing the conversation about AI and its misconceptions, creating interactions and experiences for a compelling form factor, and helping to challenge clients’ assumptions about a 107-year-old technology company. We create experiences that go beyond our users’ immediate needs, responding to the dynamic changes in our visitors’ and the public’s understanding of the technology.
Who worked on the project?
IBM Watson
Jeffrey Coveyduc, Executive Director, Executive Stakeholder
Kai Young, Program Director, Project Stakeholder
Patrick Muse, Program Director, Project Stakeholder
Jenny Woo, Design Lead, Creative Direction & Strategy
Rob Harrigan, Design Lead, Creative Direction & Brand Integration
Fredrick Benson, Product Manager, Product Management & Oversight
Tom Wall, Immersive Engineering, Development and Deployment
Taimur Shah, Immersive Engineering, Development and Deployment
Joe Harding, Immersive Engineering, Development and Deployment
Aquaris Anderson, Project Coordinator, Budget Coordination & Approvals

Local Projects
Jake Barton, Principal, Creative Oversight
Marijana Wotton, Account Director, Account Management & Leadership
Jeanne Angel, Project Manager, Project Management & Oversight
Christina Latina, Senior Art Director, Creative Direction & Design
Nina Boesch, Senior UX Designer, User Experience & Design
Myles Bryan, Senior Motion Designer, Visual Effects & Motion Design
Angelica Jang, Senior Motion Designer, Visual Effects & Motion Design
Crystal Law, Motion Designer, Motion Design & Editing
Danny Well, Lead Visual Experience Designer, Visual Design
Vidya Santosh, Senior Visual Experience Designer, Visual Design

Oblong Industries
Pete Hawkes, Interaction Design Director, Interaction & UI/UX Design
John Carpenter, Design Lead, Interaction & Data Visualization
Samson Klitsner, Interaction Designer, UI/UX Design
Brandon Harvey, Engineering Director, Software & Hardware Strategy
Justin Shrake, Engineering Lead, Software Architecture
Michael Schuresko, Effects Engineer, Visual Effects
Tom Jakubowski, Engineer, Software & Data Engineering
Tom DiNetta, Engineer, Software & Data Engineering
David Schweinsberg, Engineer, Software & Data Engineering
Aaron Rice, Engineer, Software & Data Engineering
View the project video: https://drive.google.com/file/d/1WP17KbMrNMDpP_EkiaFUS3pExRVzPD0m/view?usp=sharing
Sounds Like You: An experiment in Music and Personalization
Why is this project worthy of an award?
We all have our favorite songs, the ones that move us, the ones that remind us of our favorite places, people, and times. But what if you could have your own song? A song made entirely for you, based on the musical traits that are the ‘DNA’ of all your favorite songs? This is Sounds Like You, an experiment in music and personalization in which we make a song just for you, based entirely on the songs that move you. After you pick the songs that appeal to you most, the program offers a personalized song based on your music tastes, pulling from over 10,000 songs across nine genres: R&B, Rock, Dance, Latin, Jazz, Rap/Hip Hop, Country, Pop, and Classical. Created by Tool NA and powered by Pandora’s Music Genome Project, this is a first-of-its-kind application to bring engaging content to the consumer.
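As a rough illustration of building a listener profile from song traits (a hypothetical sketch with invented attribute names; the real Music Genome Project uses hundreds of expert-rated traits, and the production experience is far richer than this), the picked songs' traits can be averaged into a taste profile and matched against candidate compositions:

```python
# Hypothetical trait-profile matching sketch.
import math

SONG_TRAITS = {
    "song_a":    {"tempo": 0.80, "acoustic": 0.20, "vocal_harmony": 0.60, "minor_key": 0.10},
    "song_b":    {"tempo": 0.70, "acoustic": 0.30, "vocal_harmony": 0.70, "minor_key": 0.20},
    "catalog_1": {"tempo": 0.75, "acoustic": 0.25, "vocal_harmony": 0.65, "minor_key": 0.15},
    "catalog_2": {"tempo": 0.20, "acoustic": 0.90, "vocal_harmony": 0.10, "minor_key": 0.80},
}

def profile(picks):
    """Average the genome-style traits of the songs a listener picked."""
    keys = SONG_TRAITS[picks[0]].keys()
    return {k: sum(SONG_TRAITS[p][k] for p in picks) / len(picks) for k in keys}

def closest_composition(taste, candidates):
    """Pick the pre-composed piece whose traits sit nearest the listener's profile."""
    def dist(name):
        return math.sqrt(sum((taste[k] - SONG_TRAITS[name][k]) ** 2 for k in taste))
    return min(candidates, key=dist)

taste = profile(["song_a", "song_b"])
print(closest_composition(taste, ["catalog_1", "catalog_2"]))   # -> "catalog_1"
```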
What else would you like to share about your design? Why is it unique and innovative?
Sounds Like You leverages Pandora’s proprietary Music Genome data and facial analysis technology to power a first-of-its-kind personalized music experience.
Who worked on the project?
Pandora
David Bornoff, Sr. Director, Strategic Solutions
Susan Panico, SVP, Strategic Solutions
Michelle Alexander, Manager, Music Analysis
Steve Hogan, Director, Music Analysis

Tool
Jeff Crouse, Artist/Back-end Developer
Josh Jetson, Creative Director
Spencer Fink, Interactive Designer
Gwen Vanhee, Creative Technologist
Jan Vantomme, Creative Technologist
Gary Gunn, Composer/Sound Design
Matt Kahn, Sr. Innovation Producer
Michael Bucchino, HOP, Innovation
Adam Baskin, EP, Innovation
Oliver Fuselier, Managing Partner
Dustin Callif, Managing Partner
View the project video: