Papers

2023

FibeRobo: Fabricating 4D Fiber Interfaces by Continuous Drawing of Temperature Tunable Liquid Crystal Elastomers

Jack Forman, Ozgun Kilic Afsar, Sarah Nicita, Rosalie Hsin-Ju Lin, Liu Yang, Megan Hofmann, Akshay Kothakonda, Zachary Gordon, Cedric Honnet, Kristen Dorsey, Neil Gershenfeld, and Hiroshi Ishii. 2023. FibeRobo: Fabricating 4D Fiber Interfaces by Continuous Drawing of Temperature Tunable Liquid Crystal Elastomers. In Proceedings of the 36th Annual ACM Symposium on User Interface Software and Technology (UIST '23). Association for Computing Machinery, New York, NY, USA, Article 19, 1–17. https://doi.org/10.1145/3586183.3606732

DOI: https://doi.org/10.1145/3586183.3606732
We present FibeRobo, a thermally actuated liquid crystal elastomer (LCE) fiber that can be embedded or structured into textiles to enable silent and responsive interactions with shape-changing, fiber-based interfaces. Three definitive properties distinguish FibeRobo from other actuating threads explored in HCI. First, the fibers exhibit rapid thermal self-reversing actuation with large displacements (∼40%) without twisting. Second, we present a reproducible UV fiber-drawing setup that produces hundreds of meters of fiber with a sub-millimeter diameter. Third, FibeRobo is fully compatible with existing textile manufacturing machinery such as weaving looms, embroidery, and industrial knitting machines. This paper contributes the development of temperature-responsive LCE fibers, a facile and scalable fabrication pipeline with optional heating-element integration for digital control, mechanical characterization, and the establishment of higher hierarchical textile structures and a design space. Finally, we introduce a set of demonstrations that illustrate the design space FibeRobo enables.

The Stranger -........ -.-..- -. –...-.

Kyung Yun Choi and Hiroshi Ishii. 2023. The Stranger -........ -.-..- -. –...-. In Companion Publication of the 2023 ACM Designing Interactive Systems Conference (DIS '23 Companion). Association for Computing Machinery, New York, NY, USA, 74–76. https://doi.org/10.1145/3563703.3596648

DOI: https://doi.org/10.1145/3563703.3596648
This interactive art installation creates a strange feeling, as if you are speaking to yourself in a soliloquy while in the presence of an invisible entity that might or might not be you. The automated typewriter recognizes your speech and translates it into Morse code, excluding the words ’I’ and ’you’ and their letters. It creates a new perception of distance and captures the moment of losing authority over your own thoughts. Even as you watch the typewriter transcribe what you just said into Morse code, the resulting sentences composed of dots, dashes, ’I’, and ’You’ on the paper seem to respond to your speech with a completely new and seemingly encrypted meaning, as if they are talking back to you. This can result in the loss of the speaker’s original intention and can also represent the ephemeral aspect of either the inner or outer voice.

Modulating Interoceptive Signals for Influencing the Conscious Experience

Abhinandan Jain, Kyung Yun Choi, Hiroshi Ishii, and Pattie Maes. 2023. Modulating Interoceptive Signals for Influencing the Conscious Experience. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA ’23), April 23–28, 2023, Hamburg, Germany. ACM, New York, NY, USA, 7 pages. https://doi.org/10.1145/3544549.3585791

DOI: https://doi.org/10.1145/3544549.3585791
Interoceptive signals play a key role in the genesis of our conscious experience. Modulation of interoceptive signals holds great potential for developing new human-computer interactions by creating dynamic experiences that engage users on an emotional level. We present the design of a wearable system for modulating the interoceptive sympathetic signal of heart rate through closed-loop feedback via pneumatic haptic stimulation of the carotid baroreceptors. Our preliminary results showcase the potency of the system to modulate sympathetic activity and, consequently, the user’s conscious experience. We discuss the potential of modulating interoceptive signals as a new paradigm for developing interactions and affective interventions that dynamically reshape the conscious experience.

VR Haptics at Home: Repurposing Everyday Objects and Environment for Casual and On-Demand VR Haptic Experiences

Cathy Mengying Fang, Ryo Suzuki, and Daniel Leithinger. 2023. VR Haptics at Home: Repurposing Everyday Objects and Environment for Casual and On-Demand VR Haptic Experiences. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems (CHI EA '23). Association for Computing Machinery, New York, NY, USA, Article 312, 1–7. https://doi.org/10.1145/3544549.3585871

DOI: https://doi.org/10.1145/3544549.3585871
This paper introduces VR Haptics at Home, a method of repurposing everyday objects in the home to provide casual and on-demand haptic experiences. Current VR haptic devices are often expensive, complex, and unreliable, which limits the opportunities for rich haptic experiences outside research labs. In contrast, we envision that, by repurposing everyday objects as passive haptic props, we can create engaging VR experiences for casual use with minimal cost and setup. To explore and evaluate this idea, we conducted an in-the-wild study with eight participants, in which they used our proof-of-concept system to turn surrounding objects such as chairs, tables, and pillows in their own homes into haptic props. The study results show that our method can be adapted to different homes and environments, enabling more engaging VR experiences without the need for a complex setup process. Based on our findings, we propose a possible design space to showcase the potential for future investigation.

Selective Patterning of Liquid Metal-Based Soft Electronics via Laser-Induced Graphene Residue

Liquid metal-based flexible electronics hold promise for wearable devices that can monitor human activities unobtrusively. However, existing fabrication methods pose challenges in scalability and durability, especially for devices designed for daily use. This study presents a novel approach to patterning liquid metal using laser-induced graphene residue as a non-wetting barrier. The proposed method can resolve liquid metal traces as narrow as 200 μm. The electrical and mechanical properties of the liquid metal traces enable these electronic devices to stretch up to 40% and bend with radii as small as 5 mm, ensuring reliable connectivity for wearable human-machine interfaces. Furthermore, by transferring optimized liquid metal patterns onto a PDMS substrate, we successfully demonstrate a range of soft electronic devices that enable accessible and conformal interfaces with the human body without mechanical limitations.

2022

(Dis)Appearables: A Concept and Method for Actuated Tangible UIs to Appear and Disappear based on Stages

Ken Nakagaki, Jordan L Tappa, Yi Zheng, Jack Forman, Joanne Leong, Sven Koenig, and Hiroshi Ishii. 2022. (Dis)Appearables: A Concept and Method for Actuated Tangible UIs to Appear and Disappear based on Stages. In CHI Conference on Human Factors in Computing Systems (CHI '22). Association for Computing Machinery, New York, NY, USA, Article 506, 1–13. https://doi.org/10.1145/3491102.3501906

DOI: https://doi.org/10.1145/3491102.3501906
(Dis)Appearables is an approach for actuated Tangible User Interfaces (TUIs) to appear and disappear. This technique is supported by Stages: physical platforms inspired by theatrical stages. Self-propelled TUIs autonomously move between the front and back stage, allowing them to dynamically appear in and disappear from users' attention. This platform opens up a novel interaction design space for expressive displays with dynamic physical affordances. We demonstrate and explore this approach with a proof-of-concept implementation using two-wheeled robots and multiple stage design examples. We have implemented a stage design pipeline which allows users to plan and design stages composed of front and back stages, and transition portals such as trap doors or lifts. The pipeline includes control of the robots, which guides them on and off stage. With this proof-of-concept prototype, we demonstrate a range of applications including interactive mobility simulation, self-reconfiguring desktops, remote hockey, and storytelling/gaming. Inspired by theatrical stage designs, this is a new take on 'controlling the existence of matter' for user experience design.

Pudica: A Framework For Designing Augmented Human-Flora Interaction

Olivia Seow, Cedric Honnet, Simon Perrault, and Hiroshi Ishii. 2022. Pudica: A Framework For Designing Augmented Human-Flora Interaction. In Proceedings of the Augmented Humans Conference (AHs '22). Association for Computing Machinery, New York, NY, USA. DOI: https://doi.org/10.1145/3519391.3519394

This project introduces and studies a framework for designing human-flora interaction in plant-based interfaces, which could play a prominent role in a world where HCI strives to be less pollutive, more power-saving, and more humane. It discusses critical considerations (e.g., maintenance, reproducibility) for such interfaces, supported by a user study based on an interactive prototype. The results of our user study show that users’ interest in plants varies significantly with past experience. Users may create a strong emotional bond with plants, suggesting that organic interfaces should be used for emotionally significant use cases, such as keeping in touch with loved ones or checking important data.

Design and Evaluation of a Clippable and Personalizable Pneumatic-haptic Feedback Device for Breathing Guidance

Kyung Yun Choi, Neska ElHaouij, Jinmo Lee, Rosalind W. Picard, and Hiroshi Ishii. 2022. Design and Evaluation of a Clippable and Personalizable Pneumatic-haptic Feedback Device for Breathing Guidance. Proc. ACM Interact. Mob. Wearable Ubiquitous Technol. 6, 1, Article 6 (March 2022), 36 pages. DOI: https://doi.org/10.1145/3517234

To assist people in practicing mindful breathing and regulating their perceived workload without disturbing the ongoing foreground task during daily routines, we developed a mobile, personalizable pneumatic-haptic feedback device that provides programmable, subtle tactile feedback. The device consists of three soft inflatable actuators embedded with DIY stretchable sensors. We introduce their simple and cost-effective fabrication method. We conducted a technical and user-based evaluation of the device. The user-based evaluation focused on personalizing the tactile feedback based on users' experience assessed during three pilot studies. Different personalization parameters were tested, such as two tactile patterns and different levels of intensity and frequency. We collected the participants' self-reports and physiological data. Our results show that the device has potential as a breathing guide under certain conditions. We provide the main findings and design insights from each study and suggest recommendations for developing an on-body, personalizable pneumatic-haptic feedback interface.

Cardiac Arrest: Evaluating the Role of Biosignals in Gameplay Strategies and Players' Physiological Synchrony in Social Deception Games

Cathy Mengying Fang, G.R. Marvez, Neska ElHaouij, and Rosalind Picard. 2022. Cardiac Arrest: Evaluating the Role of Biosignals in Gameplay Strategies and Players’ Physiological Synchrony in Social Deception Games. In Extended Abstracts of the 2022 CHI Conference on Human Factors in Computing Systems (CHI EA '22). Association for Computing Machinery, New York, NY, USA, Article 240, 1–7. DOI: https://doi.org/10.1145/3491101.3519670

Social deduction or deception games are games in which a player or team of players actively deceives other players, who are trying to discover hidden roles as part of the win condition. Included in this category are games like One Night Werewolf, Avalon, and Mafia. In this pilot study (N = 24), we examined how the addition of visual displays of heart rate (HR) signals affected players’ gameplay in a six-player version of Mafia in online and in-person settings. We also examined moments of synchrony in HR data during critical moments of gameplay. We find that seeing the signals affected players’ strategies and influenced their gameplay, and that there were moments of HR synchrony during vital game events. These results suggest that HR, when available, is used by players in making game decisions, and that players’ HRs can be a measure of like-minded player decisions. Future work can explore how other biosignals are utilized by players of social deception games, and how those signals may undergo unconscious synchrony.

VibroAware: vibroacoustic sensing for interaction with paper on a surface

Sanad Bushnaq, Joao Wilbert, Ken Nakagaki, Saman Farhangdoust, and Hiroshi Ishii, "VibroAware: vibroacoustic sensing for interaction with paper on a surface", Proc. SPIE 12049, NDE 4.0, Predictive Maintenance, and Communication and Energy Systems in a Globally Networked World, 1204909 (18 April 2022); https://doi.org/10.1117/12.2617330

DOI: https://doi.org/10.1117/12.2617330
Vibroacoustic sensing is a method to investigate and enable tangible interaction on surfaces. One of the main challenges in this field is making a sheet of paper on a surface interactive without either prefabrication or permanent instrumentation. This research presents VibroAware, a novel system that makes paper on a surface interactive as-is by leveraging vibrations. A sheet of paper becomes interactive when users attach it to four thin piezoelectric transducers. Users interact with the sheet of paper on the surface by touching or blowing on it, which produces vibrations captured by our system. In this research, an algorithm is developed to enable localization and adapt to environmental noise without requiring analysis of material properties. VibroAware offers users the ability to test and prototype faster, and to create interfaces using vibroacoustic sensing on a sheet of paper more intuitively. This research presents a future vision for using vibroacoustic sensing to enable interaction on a sheet of paper, opening opportunities to utilize inherent material properties, such as vibration.

Mechanical Sensing Towards 3D-Printed Wearables

Sonia F. Roberts, Jack Forman, Hiroshi Ishii, and Kristen L. Dorsey. 2022. Mechanical Sensing Towards 3D-Printed Wearables. To be presented at Hilton Head Workshop 2022: A Solid-State Sensors, Actuators and Microsystems Workshop, 5-9 June 2022.

2021

OmniFiber: Integrated Fluidic Fiber Actuators for Weaving Movement based Interactions into the ‘Fabric of Everyday Life’

Ozgun Kilic Afsar, Ali Shtarbanov, Hila Mor, Ken Nakagaki, Jack Forman, Karen Modrei, Seung Hee Jeong, Klas Hjort, Kristina Höök, and Hiroshi Ishii. 2021. OmniFiber: Integrated Fluidic Fiber Actuators for Weaving Movement based Interactions into the ‘Fabric of Everyday Life’. In The 34th Annual ACM Symposium on User Interface Software and Technology (UIST '21). Association for Computing Machinery, New York, NY, USA, 1010–1026. DOI: https://doi.org/10.1145/3472749.3474802

Fiber – a primitive yet ubiquitous form of material – intertwines with our bodies and surroundings, from constructing the fibrous muscles that enable our movement to forming the fabrics that intimately interface with our skin. In soft robotics and advanced materials science research, actuated fibers are gaining interest as thin, flexible materials that can morph in response to external stimuli. In this paper, we build on fluidic artificial-muscle research to develop OmniFiber, a soft, line-based material system for designing movement-based interactions. We devised actuated thin (ø_outer < 1.8 mm) fluidic fibers with integrated soft sensors that exhibit perceivably strong forces, up to 19 N at 0.5 MPa, and a high speed of linear actuation, peaking at 150 mm/s. These qualities allow the fibers to be flexibly woven into everyday tangible interactions, including on-body haptic devices for embodied learning, synchronized tangible interfaces for remote communication, and robotic crafting for expressivity. The design of such interactive capabilities is supported by OmniFiber’s design space, accessible fabrication pipeline, and fluidic I/O control system, bringing omni-functional fluidic fibers to the HCI toolbox of interactive morphing materials.

MetaSense: Integrating Sensing Capabilities into Mechanical Metamaterial

Jun Gong*, Olivia Seow*, Cedric Honnet*, Jack Forman, and Stefanie Mueller. 2021. MetaSense: Integrating Sensing Capabilities into Mechanical Metamaterial. In The 34th Annual ACM Symposium on User Interface Software and Technology (UIST '21). Association for Computing Machinery, New York, NY, USA, 1063–1073. DOI:https://doi.org/10.1145/3472749.3474806

In this paper, we present a method to integrate sensing capabilities into 3D printable metamaterial structures comprised of cells, which enables the creation of monolithic input devices for HCI. We accomplish this by converting select opposing cell walls within the metamaterial device into electrodes, thereby creating capacitive sensors. When a user interacts with the object and applies a force, the distance and overlapping area between opposing cell walls change, resulting in a measurable capacitance variation. To help designers create interactive metamaterial devices, we contribute a design and fabrication pipeline based on multi-material 3D printing. Our 3D editor automatically places conductive cells in locations that are most affected by deformation during interaction and thus are most suitable as sensors. On export, our editor creates two files, one for conductive and one for non-conductive cell walls, which designers can fabricate on a multi-material 3D printer. Our applications show that designers can create metamaterial devices that sense various interactions, including sensing acceleration, binary state, shear, and magnitude and direction of applied force.
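
Since each sensing cell is effectively a pair of opposing plates, a first-order intuition for the capacitance change can be sketched with the standard parallel-plate formula (a minimal illustration; the dimensions and function names below are assumptions, not values from the paper):

```python
# A minimal sketch of the parallel-plate model behind capacitive
# metamaterial sensing (dimensions below are assumed examples, not
# values from the MetaSense paper).
EPSILON_0 = 8.854e-12  # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """C = eps_0 * eps_r * A / d for two opposing conductive cell walls."""
    return EPSILON_0 * eps_r * area_m2 / gap_m

# A 10 mm x 10 mm cell-wall pair, 5 mm apart, compressed by 1 mm:
c_rest = plate_capacitance(1e-4, 5e-3)
c_pressed = plate_capacitance(1e-4, 4e-3)
print(c_rest, c_pressed, c_pressed / c_rest)  # capacitance rises as the gap closes
```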

Therms-Up!: DIY Inflatables and Interactive Materials by Upcycling Wasted Thermoplastic Bags

Kyung Yun Choi and Hiroshi Ishii. 2021. Therms-Up!: DIY Inflatables and Interactive Materials by Upcycling Wasted Thermoplastic Bags. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '21). Association for Computing Machinery, New York, NY, USA, Article 51, 1–8. DOI:https://doi.org/10.1145/3430524.3442457

We introduce a DIY method of creating inflatables and prototyping interactive materials from wasted thermoplastic bags that are easily found at home. We used an inexpensive FFF 3D printer, without any customization, to heat-seal and pattern different types of mono- and multilayered thermoplastic bags. We characterized eight different types of commonly used product-packaging plastic films, mostly made of polypropylene and polyethylene, and provide 3D printer settings for re-purposing each material. In addition to heat-sealing, we explored a new design space of using a 3D printer to create embossing, origami creases, and textures on thermoplastic bags, and we demonstrate examples of applying this technique to create various materials for rapid design and prototyping. To validate the durability of the inflatables, we evaluated the heat-sealed bonding strength of nine different thermoplastic air pouches. Lastly, we show use-case scenarios of prototyping products and interfaces, and creating playful experiences at home.

inDepth: Force-based Interaction with Objects beyond A Physical Barrier

Takatoshi Yoshida, Junichi Ogawa, Kyung Yun Choi, Sanad Bushnaq, Ken Nakagaki, and Hiroshi Ishii. 2021. InDepth: Force-based Interaction with Objects beyond A Physical Barrier. In Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '21). Association for Computing Machinery, New York, NY, USA, Article 42, 1–6. DOI:https://doi.org/10.1145/3430524.3442447

We propose inDepth, a novel system that enables force-based interaction with objects beyond a physical barrier by using scalable force sensor modules. inDepth transforms a physical barrier (e.g., a glass showcase or 3D display) into a tangible input interface that enables users to interact with objects out of reach by applying finger pressure on the barrier’s surface. To achieve this interaction, our system tracks the applied force as a directional vector using three force sensors installed underneath the barrier. Meanwhile, our force-to-depth conversion algorithm translates force intensity into a spatial position along its direction beyond the barrier. Finally, the system executes various operations on objects at that position based on the type of application. In this paper, we introduce the inDepth concept and its design space. We also demonstrate example applications, including selecting items in showcases and manipulating 3D rendered models.
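
As an illustration of the idea, here is a minimal sketch of a force-to-depth mapping of the kind described above (the sensor layout, gain, and threshold are hypothetical assumptions, not the authors' implementation):

```python
import numpy as np

# Hypothetical sketch of an inDepth-style force-to-depth conversion.
# Three force sensors at known positions under a rigid plate report
# normal loads; the contact point is the load-weighted centroid, and
# the total pressing force is mapped to a depth beyond the barrier.

SENSOR_POS = np.array([[0.0, 0.0], [0.3, 0.0], [0.15, 0.26]])  # meters (assumed)
DEPTH_GAIN = 0.02   # assumed meters of depth per newton
F_THRESHOLD = 0.5   # ignore loads below this noise floor, newtons

def force_to_depth(loads):
    """Map three load readings to (contact_xy, depth)."""
    loads = np.asarray(loads, dtype=float)
    total = loads.sum()
    if total < F_THRESHOLD:
        return None, 0.0
    contact_xy = (loads[:, None] * SENSOR_POS).sum(axis=0) / total
    depth = DEPTH_GAIN * total  # a stronger press reaches deeper
    return contact_xy, depth

xy, depth = force_to_depth([1.2, 0.8, 2.0])
print(xy, depth)
```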

aSpire: Clippable, Mobile Pneumatic-Haptic Device for Breathing Rate Regulation via Personalizable Tactile Feedback

Kyung Yun Choi, Jinmo Lee, Neska ElHaouij, Rosalind Picard, and Hiroshi Ishii. 2021. ASpire: Clippable, Mobile Pneumatic-Haptic Device for Breathing Rate Regulation via Personalizable Tactile Feedback. Extended Abstracts of the 2021 CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery, New York, NY, USA, Article 372, 1–8. DOI:https://doi.org/10.1145/3411763.3451602

We introduce aSpire, a clippable, mobile pneumatic-haptic device designed to help users regulate their breathing rate via subtle tactile feedback. aSpire can be easily clipped to a strap/belt and used to personalize tactile stimulation patterns, intensity, and frequency via its array of air-pouch actuators that inflate/deflate individually. To evaluate the effectiveness of aSpire’s different tactile stimulation patterns in guiding the breathing rate of people on the move in an out-of-lab environment, we conducted a user study with car passengers in a real-world commuting setting. The results show that engaging with aSpire does not evoke extra mental stress and helps participants reduce their average breathing rate while keeping their perceived pleasantness and energy level high.

2020

Venous Materials: Towards Interactive Fluidic Mechanisms

Hila Mor, Tianyu Yu, Ken Nakagaki, Benjamin Harvey Miller, Yichen Jia, and Hiroshi Ishii. 2020. Venous Materials: Towards Interactive Fluidic Mechanisms. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–14. DOI:https://doi.org/10.1145/3313831.3376129

Venous Materials is a novel concept and approach for interactive materials that utilize fluidic channels. We present a design method for fluidic mechanisms that respond to deformation by mechanical inputs from the user, such as pressure and bending. We designed a set of primitive venous structures that act as embedded analog fluidic sensors, displaying flow and color change. In this paper, we consider the fluid as the medium that drives tangible information triggered by deformation and, at the same time, functions as a responsive display of that information. To provide users with a simple way to create and validate designs of fluidic structures, we built a software platform and design-tool UI. This design tool allows users to quickly design the geometry and dynamically simulate the flow under the intended mechanical force. We present a range of applications that demonstrate how Venous Materials can be utilized to augment the interactivity of everyday physical objects.
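
For a rough sense of how channel geometry governs the displayed flow in such mechanisms, the classic Hagen-Poiseuille relation gives a back-of-envelope estimate (a stand-in model, not the paper's simulation; all dimensions below are assumptions):

```python
import math

# Back-of-envelope laminar flow in a circular fluidic channel
# (Hagen-Poiseuille; dimensions and pressure are assumed examples).
# Pressing a chamber raises pressure; flow rate scales with the
# fourth power of the channel radius.

MU = 1.0e-3  # dynamic viscosity of water, Pa*s

def flow_rate(delta_p, radius, length):
    """Volumetric flow rate (m^3/s) through a circular channel."""
    return delta_p * math.pi * radius**4 / (8.0 * MU * length)

# 0.5 mm radius, 50 mm long channel, gentle 2 kPa press:
q = flow_rate(2000.0, 0.5e-3, 50e-3)
print(q * 1e9, "uL/s")  # microliters per second
```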

HERMITS: Dynamically Reconfiguring the Interactivity of Self-propelled TUIs with Mechanical Shell Add-ons

Ken Nakagaki, Joanne Leong, Jordan L. Tappa, João Wilbert, and Hiroshi Ishii. 2020. HERMITS: Dynamically Reconfiguring the Interactivity of Self-propelled TUIs with Mechanical Shell Add-ons. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST '20). Association for Computing Machinery, New York, NY, USA, 882–896. DOI:https://doi.org/10.1145/3379337.3415831

We introduce HERMITS, a modular interaction architecture for self-propelled Tangible User Interfaces (TUIs) that incorporates physical add-ons, referred to as mechanical shells. The mechanical shell add-ons are intended to be dynamically reconfigured by utilizing the locomotion capability of self-propelled TUIs (e.g., wheeled TUIs, swarm UIs). We developed a proof-of-concept system that demonstrates this novel architecture using two-wheeled robots and a variety of mechanical shell examples. These mechanical shell add-ons are passive physical attachments that extend the primitive interactivities (e.g., shape, motion, and light) of the self-propelled robots. The paper proposes the architectural design and interactive functionality of HERMITS, as well as design primitives for mechanical shells. The paper also introduces the prototype implementation, which is based on an off-the-shelf robotic toy with a modified docking mechanism. A range of applications is demonstrated with the prototype to motivate the collective and dynamically reconfigurable capability of the modular architecture, such as an interactive mobility simulation, an adaptive home/desk environment, and a storytelling narrative. Lastly, we discuss future research opportunities for HERMITS to enrich the interactivity and adaptability of actuated and shape-changing TUIs.

Mechanical Shells: Physical Add-ons for Extending and Reconfiguring the Interactivity of Actuated TUIs

Ken Nakagaki. 2020. Mechanical Shells: Physical Add-ons for Extending and Reconfiguring the Interactivities of Actuated TUIs. In Adjunct Publication of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST '20 Adjunct). Association for Computing Machinery, New York, NY, USA, 151–156. DOI:https://doi.org/10.1145/3379350.3415801

In this paper, I introduce the concept of mechanical shells: physical add-ons that can adaptively augment, extend, and reconfigure the interactivities of self-actuated tangible user interfaces (TUIs). While a variety of research has explored actuated and shape-changing interfaces for providing dynamic physical affordances and tangible displays, the mechanical shell concept aims to overcome the constraints of existing generic actuated TUI hardware, thereby enabling greater versatility and expression. This paper overviews the mechanical shell concept, describes project examples, outlines a research framework, and suggests open space for future research.

DefeXtiles: 3D Printing Quasi-Woven Fabric via Under-Extrusion

Jack Forman, Mustafa Doga Dogan, Hamilton Forsythe, and Hiroshi Ishii. 2020. DefeXtiles: 3D Printing Quasi-Woven Fabric via Under-Extrusion. In Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology (UIST '20). Association for Computing Machinery, New York, NY, USA, 1222–1233. DOI:https://doi.org/10.1145/3379337.3415876

We present DefeXtiles, a rapid and low-cost technique to produce tulle-like fabrics on unmodified fused deposition modeling (FDM) printers. The under-extrusion of filament is a common cause of print failure, resulting in objects with periodic gap defects. In this paper, we demonstrate that these defects can be finely controlled to quickly print thinner, more flexible textiles than previous approaches allow. Our approach allows hierarchical control from micrometer structure to decameter form and is compatible with all common 3D printing materials. In this paper, we introduce the mechanism of DefeXtiles, establish the design space through a set of primitives with detailed workflows, and characterize the mechanical properties of DefeXtiles printed with multiple materials and parameters. Finally, we demonstrate the interactive features and new use cases of our approach through a variety of applications, such as fashion design prototyping, interactive objects, aesthetic patterning, and single-print actuators.

KI/OSK: Practice Study of Load Sensitive Board for Farmers Market

Koichi Yoshino, Takatoshi Yoshida, Yo Sasaki, Xiaoyan Shen, Ken Nakagaki, and Hiroshi Ishii. 2020. KI/OSK: Practice Study of Load Sensitive Board for Farmers Market. In Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems Extended Abstracts (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–8. DOI:https://doi.org/10.1145/3334480.3375208

In recent years, the retail industry has gained increasing interest in ICT systems for enriching customers' shopping experience, and such systems are now being deployed in some stores. Because the most common implementation methods are limited by high cost and large space consumption, it is challenging for smaller or temporary stores to install such services. In this paper, we explore the usage of a load-sensitive board to improve the retail shopping experience, specifically in smaller and temporary stores. As a case study, we develop and examine KI/OSK, an easy-to-install, modular table-top retail application for farmers markets, built using SCALE, a previously developed load-sensing toolkit. Our study uses iterative user research, including surveys with farmers market managers to assess design requirements, and testing and revising through a field study at a farmers market.

ambienBeat: Wrist-worn Mobile Tactile Biofeedback for Heart Rate Rhythmic Regulation

Kyung Yun Choi and Hiroshi Ishii. 2020. AmbienBeat: Wrist-worn Mobile Tactile Biofeedback for Heart Rate Rhythmic Regulation. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’20). Association for Computing Machinery, New York, NY, USA, 17–30. DOI:https://doi.org/10.1145/3374920.3374938

We present a wrist-worn mobile heart rate regulator, ambienBeat, which provides closed-loop biofeedback via tactile stimuli based on the user's heart rate (HR). We applied the principle of physiological synchronization via touch to achieve our goal of effortless regulation of HR, which is tightly coupled with mental stress levels. ambienBeat provides various patterns of tactile stimuli, mimicking the feeling of a heartbeat pulse, to guide the user's HR to resonate with its rhythmic tactile patterns. The strength and rhythm of the tactile stimulation are controlled to a level below the cognitive threshold of an individual's tactile sensitivity on their wrist so as to minimize task disturbance. Here we present an acoustically noiseless soft voice-coil actuator that renders the ambient tactile stimulus, along with the system and implementation process. We evaluated our system by comparing it to ambient auditory and visual guidance. Results from the user study show that the tactile stimulation was effective in guiding users' HR to resonate with ambienBeat, either calming or boosting the heart rate with minimal cognitive load.
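
One plausible way to express such closed-loop pacing is to keep the stimulus rate near the measured heart rate while nudging it toward a target, as in this minimal sketch (the gains and update rule are assumptions, not the authors' controller):

```python
# Hypothetical sketch of ambienBeat-style closed-loop pacing.
# The tactile pulse rate starts near the user's measured heart rate
# and is nudged toward a target rate so the heart can entrain to it.

def next_stimulus_rate(current_hr, stimulus_rate, target_hr,
                       follow_gain=0.3, lead_step=1.0):
    """Return the next tactile pulse rate in beats per minute."""
    # Stay close enough to the heart rate for entrainment...
    stimulus_rate += follow_gain * (current_hr - stimulus_rate)
    # ...then lead it slightly toward the target (calm or boost).
    if target_hr < current_hr:
        stimulus_rate -= lead_step
    elif target_hr > current_hr:
        stimulus_rate += lead_step
    return stimulus_rate

rate = 72.0  # start at the measured HR
for hr in [72, 71, 70, 69, 68]:  # simulated readings
    rate = next_stimulus_rate(hr, rate, target_hr=60)
    print(round(rate, 1))
```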

TRANS-DOCK: Expanding the Interactivity of Pin-based Shape Displays by Docking Mechanical Transducers

Ken Nakagaki, Yingda (Roger) Liu, Chloe Nelson-Arzuaga, and Hiroshi Ishii. 2020. TRANS-DOCK: Expanding the Interactivity of Pin-based Shape Displays by Docking Mechanical Transducers. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '20). ACM, New York, NY, USA.

DOI: https://doi.org/10.1145/3374920.3374933
This paper introduces TRANS-DOCK, a docking system for pin-based shape displays that enhances their interaction capabilities for both output and input. By simply interchanging transducer modules, composed of passive mechanical structures, docked on a shape display, users can selectively switch between different configurations, including display sizes, resolutions, and even motion modalities, allowing pins moving in a linear motion to rotate, bend, and inflate. We introduce a design space consisting of several mechanical elements and the interaction capabilities they enable. We then explain the implementation of the docking system and the transducer design components. We also provide the limitations and characteristics of each motion-transmission method as design guidelines. A number of transducer examples are then shown to demonstrate the range of interactivity and application space achieved with the TRANS-DOCK approach. Potential use cases that take advantage of the interchangeability of our approach are discussed. Through this paper, we intend to expand the expressibility, adaptability, and customizability of a single shape display for dynamic physical interaction. By converting arrays of linear motion to several types of dynamic motion in an adaptable and flexible manner, we advance shape displays to enable versatile embodied interactions.

WraPr: Spool-Based Fabrication for Object Creation and Modification

Joanne Leong, Jose Martinez, Florian Perteneder, Ken Nakagaki, and Hiroshi Ishii. 2020. WraPr: Spool-Based Fabrication for Object Creation and Modification. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’20). Association for Computing Machinery, New York, NY, USA, 581–588. DOI:https://doi.org/10.1145/3374920.3374990

We propose a novel fabrication method for 3D objects based on the principle of spooling. By wrapping off-the-shelf materials such as thread, ribbon, tape, or wire onto a core structure, new objects can be created and existing objects can be augmented with desired aesthetic and functional qualities. Our system, WraPr, enables gesture-based modelling and controlled thread deposition. We outline and explore the design space for this approach. Various examples are fabricated to demonstrate the possibility of attaining a range of physical and functional properties. The simplicity of the proposed method opens the ground for a lightweight fabrication approach for generating new structures and customizing existing objects using soft materials.

Prototyping Interactive Fluidic Mechanisms

Hila Mor, Ken Nakagaki, Tianyu Yu, Benjamin Harvey Miller, Yichen Jia, and Hiroshi Ishii. 2020. Prototyping Interactive Fluidic Mechanisms. In Proceedings of the Fourteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’20). Association for Computing Machinery, New York, NY, USA, 881–884. DOI: https://doi.org/10.1145/3374920.3374967

In this hands-on studio, we introduce a method of designing and prototyping fluidic mechanisms that utilize flow as both deformation sensor and display. A fabrication process and the featured materials will be provided to allow participants to design and prototype self-contained fluidic channels. These channels are designed to respond to mechanical inputs, such as deformation and pressure, with flow and color change. We will introduce a specialized software plugin for design and flow simulation that enables simple and rapid modelling with optimization of the fluidic mechanism. The goal of this studio is to provide researchers, designers, and makers with hands-on experience in designing fluidic mechanisms, coupling shape change (i.e., deformation input) with a displayed response. Our method allows participants to explore meaningful applications such as on-body wearable devices for augmenting motion and animating objects such as interactive books, lampshades, and packaging.

2019

SCALE: Enhancing Force-based Interaction by Processing Load Data from Load Sensitive Modules

Takatoshi Yoshida, Xiaoyan Shen, Koichi Yoshino, Ken Nakagaki, and Hiroshi Ishii. 2019. SCALE: Enhancing Force-based Interaction by Processing Load Data from Load Sensitive Modules. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST '19). ACM, New York, NY, USA, 901-911. DOI: https://doi.org/10.1145/3332165.3347935

SCALE provides a framework for processing load data from distributed load-sensitive modules to explore force-based interaction. Force conveys not only the force vector itself but also rich information about activities, including the way of touching, object location, and body motion. Our system captures these interactions in a single pipeline of load-data processing. Furthermore, we have expanded the interaction area from a flat 2D surface to a 3D volume by building a mathematical framework that enables us to capture the vertical height of a touch point. These technical inventions open up broad applications, including general shape capturing and motion recognition. We have packaged the framework into a physical prototyping kit and conducted a workshop with product designers to evaluate our system in practical scenarios.
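
One standard way to recover a 3D touch point from distributed load readings is via the net wrench (force plus torque), whose line of action passes through the contact. The following is a minimal sketch under that textbook model, not the paper's actual framework (variable names and the demo geometry are assumptions):

```python
import numpy as np

# Each module i at position p_i reports a 3-axis force f_i. The net
# force F and net torque tau about the origin define the line of
# action r(t) = r0 + t * F on which the contact point lies.

def touch_line(positions, forces):
    F = forces.sum(axis=0)                         # net force
    tau = np.cross(positions, forces).sum(axis=0)  # net torque about origin
    r0 = np.cross(F, tau) / np.dot(F, F)           # point on the line nearest the origin
    return r0, F / np.linalg.norm(F)

def touch_at_height(positions, forces, z):
    """Intersect the line of action with the horizontal plane at height z."""
    r0, d = touch_line(positions, forces)
    t = (z - r0[2]) / d[2]  # assumes the press has a vertical component
    return r0 + t * d

# Self-check: a single reading equal to the contact force applied at p
# must reproduce p when we intersect at p's height.
p = np.array([0.2, 0.1, 0.15])
F = np.array([0.5, 0.0, -5.0])
print(touch_at_height(p[None, :], F[None, :], z=0.15))  # ~[0.2, 0.1, 0.15]
```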

milliMorph - Fluid-Driven Thin Film Shape-Change Materials for Interaction Design

Qiuyu Lu, Jifei Ou, João Wilbert, André Haben, Haipeng Mi, and Hiroshi Ishii. 2019. milliMorph - Fluid-Driven Thin Film Shape-Change Materials for Interaction Design. In Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (UIST '19). ACM, New York, NY, USA, 663-672. DOI: https://doi.org/10.1145/3332165.3347956

This paper presents a design space, a fabrication system, and applications of creating fluidic chambers and channels at millimeter scale for tangible actuated interfaces. The ability to design and fabricate millifluidic chambers allows one to create high-frequency actuation, sequential control of flows, and high-resolution designs on thin-film materials. We propose a four-dimensional design space for creating these fluidic chambers, a novel heat-sealing system that enables easy and precise millifluidics fabrication, and application demonstrations of the fabricated materials for haptics, ambient devices, and robotics. As shape-change materials are increasingly integrated in designing novel interfaces, milliMorph enriches the library of fluid-driven shape-change materials and demonstrates new design opportunities that are unique to the millimeter scale for product and interaction design.

reSpire: Self-awareness and Interpersonal Connectedness through Shape-changing Fabric Display

Kyung Yun Choi, Valentina Sumini, and Hiroshi Ishii. 2019. ReSpire: Self-awareness and Interpersonal Connectedness through Shape-changing Fabric Display. In Proceedings of the 2019 on Creativity and Cognition (C&C ’19). Association for Computing Machinery, New York, NY, USA, 449–454. DOI: https://doi.org/10.1145/3325480.3329176

reSpire lets people bring tangibility to their invisible physiological state through shape-changing fabric deformed by airflow. We explore a way to support mental wellness by improving self-interaction and interpersonal connectedness. reSpire encourages people not only to focus on their connection to their inner body but also to interact with others through playful tangible interactions in the same location and develop empathy. We created a non-machine-like interface responsive to users' respiration patterns and hand gestures, using fabric and its deformation under airflow control. We also introduce a computational model to simulate the deformation of the fabric under varying airflow pressure and direction. Various interaction scenarios highlight its applications not only to health but also to interactive art installation.

SIGCHI Lifetime Research Award Talk: Making Digital Tangible

Hiroshi Ishii. 2019. SIGCHI Lifetime Research Award Talk: Making Digital Tangible. CHI ’19 Extended Abstracts, May 4-9, 2019, Glasgow, Scotland, UK. ACM ISBN 978-1-4503-5971-9/19/05. https://doi.org/10.1145/3290607.3313769

DOI: https://doi.org/10.1145/3290607.3313769
Today's mainstream Human-Computer Interaction (HCI) research primarily addresses functional concerns – the needs of users, practical applications, and usability evaluation. Tangible Bits and Radical Atoms are driven by vision and carried out with an artistic approach. While today's technologies will become obsolete in one year, and today's applications will be replaced in 10 years, true visions – we believe – can last longer than 100 years. Tangible Bits (3, 4) seeks to realize seamless interfaces between humans, digital information, and the physical environment by giving physical form to digital information and computation, making bits directly manipulatable and perceptible both in the foreground and background of our consciousness (peripheral awareness). Our goal is to invent new design media for artistic expression as well as for scientific analysis, taking advantage of the richness of human senses and skills we develop throughout our lifetime interacting with the physical world, as well as the computational reflection enabled by real-time sensing and digital feedback. Radical Atoms (5) leaps beyond Tangible Bits by assuming a hypothetical generation of materials that can change form and properties dynamically, becoming as reconfigurable as pixels on a screen. Radical Atoms is the future material that can transform its shape, conform to constraints, and inform the users of their affordances. Radical Atoms is a vision for the future of Human-Material Interaction, in which all digital information has a physical manifestation, thus enabling us to interact directly with it. I will present the trajectory of our vision-driven design research from Tangible Bits towards Radical Atoms, illustrated through a variety of interaction design projects that have been presented and exhibited in Media Arts, Design, and Science communities. These emphasize that the design for engaging and inspiring tangible interactions requires the rigor of both scientific and artistic review, encapsulated by my motto, “Be Artistic and Analytic. Be Poetic and Pragmatic.”

SociaBowl: A Dynamic Table Centerpiece to Mediate Group Conversations

Joanne Leong, Yuehan Wang, Romy Sayah, Stella Rossikopoulou Pappa, Florian Perteneder, and Hiroshi Ishii. 2019. SociaBowl: A Dynamic Table Centerpiece to Mediate Group Conversations. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA ’19). Association for Computing Machinery, New York, NY, USA, Paper LBW1114, 1–6. DOI:https://doi.org/10.1145/3290607.3312775

In this paper, we introduce SociaBowl, a dynamic table centerpiece to promote positive social dynamics in two-way cooperative conversations. A centerpiece such as a bowl of food, a decorative flower arrangement, or a container of writing tools is commonly placed on a table around which people have conversations. We explore the design space for an augmented table and centerpiece to influence how people may interact with one another. We present an initial functional prototype to explore different choices in the materiality of feedback, interaction styles, and animation and motion patterns. These aspects are discussed with respect to how they may impact people's awareness of their turn-taking dynamics as well as provide an additional channel for expression. Potential enhancements for future iterations of its design are then outlined based on these findings.

Bubble Talk: Open-source Interactive Art Toolkit for Metaphor of Modern Digital Chat

Kyung Yun Choi and Hiroshi Ishii. 2019. Bubble Talk: Open-source Interactive Art Toolkit for Metaphor of Modern Digital Chat. In Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ’19). Association for Computing Machinery, New York, NY, USA, 525–530. DOI:https://doi.org/10.1145/3294109.3301271

In this art project, the ephemeral and intangible aspects of human communication are represented by soap bubbles. Speech is shapeless, intangible, and insubstantial: once it leaves the speaker's mouth it disappears unless someone hears it immediately, and even when heard, the message is forgotten as time passes. Here, speech is transferred into a semi-tangible yet still fleeting bubble. The bubble machine that we created provides person-to-person and person-to-space interaction. The machine has an iris mechanism that varies its outlet size in reaction to the participant's speech pattern, as if it were trying to say something. Once the participant pauses, the machine blows out bubbles of various sizes. The floating bubble represents the subtle state of a message in interpersonal communication that lies between the real and digital worlds. It also creates a delay until it pops, a metaphor for how we often delay sending text messages through chat apps. We believe that anyone can be an artist. By open-sourcing the details of the fabrication process and materials, we want to encourage people to build the machine, interact with it anywhere, and use and modify it as an art tool for realizing their own ideas, whether for art or not.

inFORCE: Bi-directional 'Force' Shape Display For Haptic Interaction

Ken Nakagaki, Daniel Fitzgerald, Zhiyao (John) Ma, Luke Vink, Daniel Levine, and Hiroshi Ishii. 2019. inFORCE: Bi-directional 'Force' Shape Display For Haptic Interaction. Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '19). ACM, New York, NY, USA, 615-623. DOI: https://doi.org/10.1145/3294109.3295621

While previously proposed hardware for pin-based shape displays has improved various technical aspects, there has been a clear limitation on the haptic quality of variable 'force' feedback. In this paper, we explore a novel haptic interaction design space with a 'force'-controlled shape display. Utilizing high-performance linear actuators with current-reading functionality, we built a 10 × 5 'force' shape display, named inFORCE, that can both detect and exert variable force on individual pins. By integrating closed-loop force control, our system can provide real-time variable haptic feedback in response to the way users press the pins. Our haptic interaction design space includes volumetric haptic feedback, material emulation, layer snapping, and friction. Our proposed interaction methods, for example, enable people to "press through" computationally rendered dynamic shapes to understand the internal structure of 3D volumetric information. We also demonstrate a material property capturing functionality. Our technical evaluation and user study assess the hardware capability and haptic perception through interaction with inFORCE. We also discuss application spaces in which 'force' shape displays can be used.

SensorKnits: Architecting textile sensors with machine knitting

Jifei Ou, Daniel Oran, Don Derek Haddad, Joseph Paradiso, and Hiroshi Ishii. 2019. SensorKnits: Architecting Textile Sensors with Machine Knitting. 3D Printing and Additive Manufacturing 6, 1 (2019), 1-11.

DOI: https://doi.org/10.1089/3dp.2018.0122
This article presents three classes of textile sensors exploiting the resistive, piezoresistive, and capacitive properties of various textile structures enabled by machine knitting with conductive yarn. Digital machine knitting is a highly programmable manufacturing process that has been utilized to produce apparel, accessories, and footwear. By carefully designing the knit structure with conductive and dielectric yarns, we found that the resistance of knitted fabric can be programmatically controlled. We also present applications that demonstrate how knitted sensors can be used at home and in wearables. While e-textiles have been well explored in the field of interaction design, this work explores the correlation between the local knitted structure and the global electrical properties of a textile.
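
A resistive or piezoresistive knit is typically read out with a simple voltage divider; the following is a minimal sketch of that readout (the supply voltage, reference resistor, and ADC resolution are assumed values, not from the article):

```python
# Hypothetical readout sketch for a resistive knit sensor.
# The knit resistor forms a voltage divider with a known reference
# resistor; an ADC reading is converted back to sensor resistance.

V_CC = 3.3        # supply voltage, volts (assumed)
R_REF = 10_000.0  # reference resistor, ohms (assumed)
ADC_MAX = 4095    # 12-bit ADC (assumed)

def knit_resistance(adc_value):
    """Invert the divider: V_out = V_cc * R_knit / (R_knit + R_ref)."""
    v_out = V_CC * adc_value / ADC_MAX
    if v_out >= V_CC:
        return float("inf")
    return R_REF * v_out / (V_CC - v_out)

print(knit_resistance(2048))  # ~10 kOhm at mid-scale
```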

AUFLIP - An Auditory Feedback System Towards Implicit Learning of Advanced Motor Skills

Daniel Levine, Alan Cheng, David Olaleye, Kevin Leonardo, Matthew Shifrin, and Hiroshi Ishii. 2019. AUFLIP - An Auditory Feedback System Towards Implicit Learning of Advanced Motor Skills. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA ’19). Association for Computing Machinery, New York, NY, USA, Paper LBW0188, 1–6. DOI:https://doi.org/10.1145/3290607.3312804

How can people learn advanced motor skills, such as front flips and tennis swings, without starting from a young age? The answer, following the work of Masters et al., we believe, is implicitly. Implicit learning is associated with higher retention and knowledge transfer, but such knowledge cannot be explicitly articulated as a set of rules. Implicit learning is difficult to achieve, but it may be fostered using obscured feedback: feedback that provides little enough information that a user does not overfit a mental model of their target action. We have designed an auditory feedback system, AUFLIP, that describes high-level properties of an advanced movement using a simplified and validated physics model of the flip. We further detail the implementation of a wearable system, an optimized placement procedure, and a takeoff capture strategy to realize this model. With an audio cue pattern that conveys this high-level, obscured objective, the system is integrated into a gymnastics-training environment.
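
The kind of simplified flip physics the abstract mentions can be illustrated with standard projectile and rotation formulas (this parameterization is an illustrative assumption, not the authors' validated model):

```python
import math

# Flight time follows from takeoff vertical velocity; completing one
# full rotation requires an average angular velocity of
# 2 * pi / flight_time (standard physics, assumed example values).

G = 9.81  # gravitational acceleration, m/s^2

def flight_time(v_vertical):
    """Time in the air for a given takeoff vertical velocity (m/s)."""
    return 2.0 * v_vertical / G

def required_angular_velocity(v_vertical, rotations=1.0):
    """Average angular velocity (rad/s) to finish the flip before landing."""
    return rotations * 2.0 * math.pi / flight_time(v_vertical)

v = 3.0  # m/s takeoff vertical velocity (assumed example)
print(flight_time(v), required_angular_velocity(v))
```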

Auto-Inflatables: Chemical Inflation for Pop-Up Fabrication

Penelope Webb, Valentina Sumini, Amos Golan, and Hiroshi Ishii. 2019. Auto-Inflatables: Chemical Inflation for Pop-Up Fabrication. In Extended Abstracts of the 2019 CHI Conference on Human Factors in Computing Systems (CHI EA ’19). Association for Computing Machinery, New York, NY, USA, Paper LBW1411, 1–6. DOI:https://doi.org/10.1145/3290607.3312860

This research utilizes an output method for zero-energy pop-up fabrication, using chemical inflation as a technique for instant, hardware-free shape change. By applying state-changing techniques as a medium for material activation, we provide a framework for a two-part assembly process: it starts on the manufacturing side, where a rigid structural body is given its form, and continues on the user side, where the form potential of a soft structure is activated and the structure becomes complete. To demonstrate this technique, we created two use cases: first, a compression material for emergency response, and second, a self-inflating packaging system. This paper provides details on the auto-inflation process as well as the corresponding digital tool for the design of pneumatic materials. The results show the efficiency of using zero-energy auto-inflatable structures for both medical applications and packaging. The rapidly deployable inflatable kit starts from the assumption that every product can make its own contribution by responding in the best way to a specific application.

2018

reMi: Translating Ambient Sounds of Moment into Tangible and Shareable Memories through Animated Paper

Kyung Yun Choi, Darle Shinsato, Shane Zhang, Ken Nakagaki, and Hiroshi Ishii. 2018. reMi: Translating Ambient Sounds of Moment into Tangible and Shareable Memories through Animated Paper. In The 31st Annual ACM Symposium on User Interface Software and Technology Adjunct Proceedings (UIST '18 Adjunct). ACM, New York, NY, USA, 84-86. DOI: https://doi.org/10.1145/3266037.3266109

We present a tangible memory notebook, reMi, that records ambient sounds and translates them into a tangible and shareable memory using animated paper. The paper replays the recorded sounds and deforms its shape to generate motions synchronized with the sounds. Computer-mediated communication interfaces have allowed us to share, record, and recall memories easily through visual records. However, those digital visual cues trapped behind the device’s 2D screen are not the only means to recall a memory that we experienced with more than the sense of vision. To develop a new way to store, recall, and share a memory, we investigate how the tangible motion of a paper that represents sound can enhance "reminiscence".

AUFLIP: Teaching Front Flips with Auditory Feedback Towards a System for Learning Advanced Movement

AUFLIP describes an auditory feedback system approach for learning advanced movements, informed and motivated by established methods of implicit motor learning by analogy, our physiological constraints, and the state of the art in augmented motor learning by feedback. AUFLIP presents and validates a physics simplification of an advanced movement, the front flip, and details the implementation of a wearable system, an optimized placement procedure, and a takeoff capture strategy to realize this model. With an audio cue pattern that conveys this high-level objective, the system is integrated into a gymnastics training environment with professional coaches teaching novice adults how to perform front flips. A strategy, system, and application set building off AUFLIP for more general movements is further proposed. Lastly, this work performs a preliminary investigation into the notion of Audio-Movement Congruence – whether audio feedback for motor learning can be personally tailored to individuals’ contextual experiences and backgrounds – and explores future applications of the discussed systems and strategies.

KinetiX - Designing Auxetic-inspired Deformable Material Structures

This paper describes a group of auxetic-inspired material structures that can transform into various shapes upon compression. We developed four cellular-based material structure units composed of rigid plates and elastic/rotary hinges. Different compositions of these units lead to a variety of tunable shape-changing possibilities, such as uniform scaling, shearing, bending, and rotating. By tessellating those transformations together, we can create various higher-level transformations for product design. In the paper, we first give a geometrical and numerical description of the units’ configuration and their degrees of freedom. An interactive simulation tool is developed for users to input designed structures and preview the transformation. Finally, we present three application prototypes that utilize our proposed structures.
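
For intuition about a uniform-scaling unit, the classic rotating-squares auxetic relation is one plausible reference model (an assumption for illustration; the paper's own parameterization may differ):

```python
import math

# Rigid squares of side a, hinged at their corners, counter-rotate by a
# relative angle phi; the lattice expands equally in both directions,
# giving an effective Poisson's ratio of -1 (textbook auxetic model).

def scale_factor(phi):
    """Lattice scaling relative to the closed state (phi = 0 radians)."""
    return math.cos(phi / 2) + math.sin(phi / 2)

for deg in (0, 30, 60, 90):
    phi = math.radians(deg)
    print(deg, round(scale_factor(phi), 3))  # 1.0 up to sqrt(2) when fully open
```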

Programmable Droplets for Interaction

Udayan Umapathi, Patrick Shin, Ken Nakagaki, Daniel Leithinger, and Hiroshi Ishii. 2018. Programmable Droplets for Interaction. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA '18). ACM, New York, NY, USA, Paper VS15, 1 page. DOI: https://doi.org/10.1145/3170427.3186607

We present a design exploration of how water-based droplets in our everyday environment can become interactive elements. For this exploration, we use electrowetting-on-dielectric (EWOD) technology as the underlying mechanism to precisely control the motion of droplets. EWOD technology provides a means to precisely transport, merge, mix, and split water-based droplets and has been widely explored for automating biological experiments in industrial and research settings. More recently, it has been explored for DIY biology applications. In our exploration, we integrate EWOD devices into a range of everyday objects and scenarios to show how programmable water droplets can be used as information displays and as an interaction medium for painting and personal communication.

Scaling Electrowetting with Printed Circuit Boards for Large Area Droplet Manipulation

Udayan Umapathi, Samantha Chin, Patrick Shin, Dimitris Koutentakis, and Hiroshi Ishii. 2018. Scaling Electrowetting with Printed Circuit Boards for Large Area Droplet Manipulation. MRS Advances. © 2018 Materials Research Society. DOI: 10.1557/adv.2018.331

Droplet-based microfluidics (digital microfluidics) with electrowetting on dielectric (EWOD) has gained popularity with the promise of being the technology for a true lab-on-a-chip device, with applications spanning assays/library prep, next-gen sequencing, and point-of-care diagnostics. Most electrowetting device architectures are linear electrode arrays with a shared path for droplets, imposing serious limitations: cross-contamination and a limited number of parallel operations. Our work addresses these issues through large 2D grid arrays with direct addressability, providing flexible programmability. Scaling electrowetting to larger arrays remains a challenge due to the complex and expensive cleanroom fabrication of microfluidic devices. We take the approach of using inexpensive PCB manufacturing and investigate challenges and solutions for scaling electrowetting to large-area droplet manipulation. PCB-manufactured electrowetting arrays pose many challenges due to irregularities in the processes and materials used. These challenges generally relate to preparing the surface that interfaces with droplets: a dielectric material on the electrodes and the topmost hydrophobic coating that contacts the droplets. A requirement for robust droplet manipulation with EWOD is a thin (<10 µm) hydrophobic dielectric material that does not break down at droplet actuation voltages (AC/DC, 60 V to 200 V) and exhibits no droplet pinning. For this, we engineered materials specifically for large-area PCBs. Traditionally, digital microfluidic devices sandwich droplets between two plates and have focused on sub-microliter droplet volumes. In our approach, droplets sit on an open surface, which allows us to manipulate droplets in microliter and milliliter volumes. With the ability to manipulate milliliter droplets on our electrowetting device, we demonstrate “digital millifluidics”. Finally, we report the performance of our device and, to motivate the need for large open arrays, we show an example of running multiple parallel biological experiments.

Mediate: A Spatial Tangible Interface for Mixed Reality (paper)

Daniel Fitzgerald and Hiroshi Ishii. 2018. Mediate: A Spatial Tangible Interface for Mixed Reality. In Extended Abstracts of the 2018 CHI Conference on Human Factors in Computing Systems (CHI EA ’18). Association for Computing Machinery, New York, NY, USA, Paper LBW625, 1–6. DOI:https://doi.org/10.1145/3170427.3188472

Recent Virtual Reality (VR) systems render highly immersive visual experiences, yet they currently lack tactile feedback for feeling virtual objects with our hands and bodies. Shape displays offer solid tangible interaction but have not been integrated with VR or have been restricted to desktop-scale workspaces. This work fuses mobile robotics, haptic props, and shape-display technology with commercial Virtual Reality to overcome these limitations. We present Mediate, a semi-autonomous mobile shape display that locally renders 3D physical geometry co-located with room-sized virtual environments as a conceptual step towards large-scale tangible interaction in Virtual Reality. We compare this "dynamic just-in-time mockup" concept to other haptic paradigms and discuss future applications and interaction scenarios.

2017

Designing Line-Based Shape-Changing Interfaces

Ken Nakagaki, Sean Follmer, Artem Dementyev, Joseph A. Paradiso, and Hiroshi Ishii. "Designing Line-Based Shape-Changing Interfaces." IEEE Pervasive Computing 16, no. 4 (2017): 36-46.

DOI: https://doi.org/10.1109/MPRV.2017.3971127
This article starts with an overview of work on shape-changing line interfaces in the field of HCI, including the authors’ previous work on actuated-line interfaces, LineFORM and ChainFORM. Related research from other fields, such as robotics and material science, is also introduced. Then, several potential implementation methods are compared and discussed in depth with regard to their potential for future research and applications. The authors also investigate the interaction design space around actuated line interfaces, categorized into four groups: physical display, tangible interaction, constraints, and customization. Leveraging this design space, they present potential applications and demonstrate their use with the LineFORM and ChainFORM prototypes. Envisioning a future where shape-changing lines are woven into daily life, this article aims to explore and initiate a broad research space around line-based shape-changing interfaces and to encourage future researchers and designers to investigate these novel directions. This article is part of a special issue on shape-changing interfaces.

AnimaStage: Hands-on Animated Craft on Pin-based Shape Displays

Ken Nakagaki, Udayan Umapathi, Daniel Leithinger, and Hiroshi Ishii. 2017. AnimaStage: Hands-on Animated Craft Platform Using Shape Displays. In Proceedings of the 2017 DIS Conference on Designing Interactive Systems (DIS ‘17). ACM, New York, NY, USA, 1093-1097.

DOI: https://doi.org/10.1145/3064663.3064670
In this paper, we present AnimaStage: a hands-on animated craft platform based on an actuated stage. Utilizing a pin-based shape display, users can animate their crafts made from various materials. Through this system, we intend to lower the barrier for artists and designers to create actuated objects and to contribute to interaction design using shape changing interfaces for inter-material interactions. We introduce a three-phase design process for AnimaStage with examples of animated crafts. We implemented the system with several control modalities that allow users to manipulate the motion of the crafts so that they can easily explore their desired motion through an iterative process. To complement the animated crafts, dynamic landscapes can also be rendered. We conducted a user study to observe the process by which people make crafts using AnimaStage. We invited participants with different backgrounds to design and create crafts using multiple materials and craft techniques. A variety of outcomes and application spaces were found in this study.

Transformative Appetite: Shape-Changing Food Transforms from 2D to 3D by Water Interaction through Cooking

Wen Wang, Lining Yao, Teng Zhang, Chin-Yi Cheng, Daniel Levine, and Hiroshi Ishii. 2017. Transformative Appetite: Shape-Changing Food Transforms from 2D to 3D by Water Interaction through Cooking. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 6123-6132.

DOI: https://doi.org/10.1145/3025453.3026019
We developed a concept of transformative appetite, where edible 2D films made of common food materials (protein, cellulose or starch) can transform into 3D food during cooking. This transformation process is triggered by water adsorption, and it is strongly compatible with the 'flat packaging' concept for substantially reducing shipping costs and storage space. To develop these transformable foods, we performed material-based design, established a hybrid fabrication strategy, and conducted performance simulation. Users can customize food shape transformations through a pre-defined simulation platform, and then fabricate these designed patterns using additive manufacturing. Three application techniques are provided - 2D-to-3D folding, hydration-induced wrapping, and temperature-induced self-fragmentation - to present the shape, texture, and interaction with food materials. Based on this concept, several dishes were created in the kitchen, to demonstrate the futuristic dining experience through materials-based interaction design.

Harnessing the hygroscopic and biofluorescent behaviors of genetically tractable microbial cells to design biohybrid wearables

Wen Wang, Lining Yao, Chin-Yi Cheng, Teng Zhang, Hiroshi Atsumi, Luda Wang, Guanyun Wang, Oksana Anilionyte, Helene Steiner, Jifei Ou, Kang Zhou, Chris Wawrousek, Katherine Petrecca, Angela M. Belcher, Rohit Karnik, Xuanhe Zhao, Daniel I. C. Wang, and Hiroshi Ishii. 2017. Harnessing the hygroscopic and biofluorescent behaviors of genetically tractable microbial cells to design biohybrid wearables. Science Advances 3, e1601984.

DOI: https://doi.org/10.1126/sciadv.1601984
Cells’ biomechanical responses to external stimuli have been intensively studied but rarely implemented into devices that interact with the human body. We demonstrate that the hygroscopic and biofluorescent behaviors of living cells can be engineered to design biohybrid wearables, which give multifunctional responsiveness to human sweat. By depositing genetically tractable microbes on a humidity-inert material to form a heterogeneous multilayered structure, we obtained biohybrid films that can reversibly change shape and biofluorescence intensity within a few seconds in response to environmental humidity gradients. Experimental characterization and mechanical modeling of the film were performed to guide the design of a wearable running suit and a fluorescent shoe prototype with bio-flaps that dynamically modulate ventilation in synergy with the body’s need for cooling.

Printflatables: Printing Human-scale, Functional and Dynamic Inflatable Objects

Harpreet Sareen, Udayan Umapathi, Patrick Shin, Yasuaki Kakehi, Jifei Ou, Hiroshi Ishii, and Pattie Maes. 2017. Printflatables: Printing Human-Scale, Functional and Dynamic Inflatable Objects. In Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17). ACM, New York, NY, USA, 3669-3680. DOI: https://doi.org/10.1145/3025453.3025898

Printflatables is a design and fabrication system for human-scale, functional and dynamic inflatable objects. We use inextensible thermoplastic fabric as the raw material with the key principle of introducing folds and thermal sealing. Upon inflation, the sealed object takes the expected three-dimensional shape. The workflow begins with the user specifying an intended 3D model which is decomposed to two-dimensional fabrication geometry. This forms the input for a numerically controlled thermal contact iron that seals layers of thermoplastic fabric. In this paper, we discuss the system design in detail, the pneumatic primitives that this technique enables and the merits of being able to make large, functional and dynamic pneumatic artifacts. We demonstrate the design output through multiple objects which could motivate fabrication of inflatable media and pressure-based interfaces.

2016

ChainFORM: A Linear Integrated Modular Hardware System for Shape Changing Interfaces

Ken Nakagaki, Artem Dementyev, Sean Follmer, Joseph A. Paradiso, Hiroshi Ishii. 2016. ChainFORM: A Linear Integrated Modular Hardware System for Shape Changing Interfaces. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 87-96. DOI: http://dx.doi.org/10.1145/2984511.2984587

This paper presents a linear, modular, actuated hardware system as a novel type of shape changing interface. Using rich sensing and actuation capability, this system allows users to construct a wide range of interactive applications. Each module integrates sensing, actuation, and a display, and the user may customize how the modules are connected and configured. Our prototype ChainFORM comprises identical modules that connect in a chain. Modules are equipped with rich input and output capability: touch detection on multiple surfaces, angular detection, visual output, and motor actuation. Each module includes a geared motor wrapped with a flexible circuit board with an embedded microcontroller. Additionally, modules are small enough to be easily attached to other materials or the body to enable the development of handheld-scale shape changing interfaces. To demonstrate the capability of our system, we implemented a variety of applications, such as a dynamic input device, a reconfigurable display, a shape changing stylus, interactive actuated craft, and a body augmentation tool.
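
The abstract does not specify ChainFORM's wire protocol, so the following is a purely hypothetical sketch of how a daisy-chained module system of this kind can be enumerated and posed; the Module and Chain classes are our own invention:

    from dataclasses import dataclass

    @dataclass
    class Module:
        index: int = -1
        angle: float = 0.0          # servo target in degrees
        color: tuple = (0, 0, 0)    # per-module RGB display
        touched: bool = False       # touch electrodes (stubbed out here)

    class Chain:
        def __init__(self, n):
            self.modules = [Module() for _ in range(n)]
            for i, m in enumerate(self.modules):
                m.index = i         # enumeration pass: each unit learns its position

        def set_pose(self, angles):
            for m, a in zip(self.modules, angles):
                m.angle = a         # a real system would send packets down the chain

    chain = Chain(8)
    chain.set_pose([15 * i for i in range(8)])   # curl the chain into an arc
    print([m.angle for m in chain.modules])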

aeroMorph - Heat-sealing Inflatable Shape-change Materials for Interaction Design

Jifei Ou, Mélina Skouras, Nikolaos Vlavianos, Felix Heibeck, Chin-Yi Cheng, Jannik Peters, and Hiroshi Ishii. 2016. aeroMorph - Heat-sealing Inflatable Shape-change Materials for Interaction Design. In Proceedings of the 29th Annual Symposium on User Interface Software and Technology (UIST '16). ACM, New York, NY, USA, 121-132. DOI: http://dx.doi.org/10.1145/2984511.2984520

This paper presents a design, simulation, and fabrication pipeline for making transforming inflatables with various materials. We introduce a bending mechanism that creates multiple, programmable shape-changing behaviors with inextensible materials, including paper, plastics and fabrics. We developed a software tool that generates these bending mechanisms for a given geometry, simulates the transformation, and exports the compound geometry as digital fabrication files. We show a range of fabrication methods, from manual sealing to heat pressing with custom stencils, as well as a custom heat-sealing head that can be mounted on common 3-axis CNC machines to precisely fabricate the designed transforming material. Finally, we present three applications to show how this technology could be used for designing interactive wearables, toys, and furniture.

Andantino: Teaching Children Piano with Projected Animated Characters

Xiao Xiao, Pablo Puentes, Edith Ackermann and Hiroshi Ishii. 2016. Andantino: Teaching Children Piano with Projected Animated Characters. In Proceedings of the 15th International Conference on Interaction Design and Children (IDC '16). ACM, New York, NY, USA.

DOI: http://dx.doi.org/10.1145/2930674.2930689
This paper explores how multi-modal body-syntonic interactive systems may be used to teach children to play the piano beyond the typical focus on reading musical scores and “surface correctness”. Our work draws from Dalcroze Eurhythmics, a method of music pedagogy aimed at instilling an understanding of music rooted in the body. We present a Dalcrozian process of piano learning as a five-step iterative cycle of: listen, internalize, extend, analyze, and improvise. As a case study of how digital technologies may support this process, we present Andantino, a set of extensions of Andante, which projects musical lines as miniature light silhouettes that appear to walk on the keyboard of a player piano. We discuss features of Andantino based on each stage, or step, of the iterative framework and discuss directions for future research, based on two preliminary studies with children between the ages of 7 and 13.

Inspect, Embody, Invent: A Design Framework for Music Learning and Beyond

Xiao Xiao and Hiroshi Ishii. 2016. Inspect, Embody, Invent: A Design Framework for Music Learning and Beyond. In Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems (CHI ‘16). ACM, New York, NY, USA.

DOI: http://dx.doi.org/10.1145/2858036.12345271
This paper introduces a new framework to guide the design of interactive music learning systems, focusing on the piano. Taking a Reflective approach, we identify the implicit assumption behind most existing systems—that learning music is learning to play correctly according to the score—and offer an alternative approach. We argue that systems should help cultivate higher levels of musicianship beyond correctness alone for students of all levels. Drawing from both pedagogical literature and the personal experience of learning to play the piano, we identify three skills central to musicianship—listening, embodied understanding, and creative imagination—which we generalize to the Inspect, Embody, Invent framework. To demonstrate how this framework translates to design, we discuss two existing interfaces from our own research—MirrorFugue and Andante—both built on a digitally controlled player piano augmented by in-situ projection. Finally, we discuss the framework’s relevance toward bigger themes of embodied interactions and learning beyond the domain of music.

Cilllia - 3D Printed Micro-Pillar Structures for Surface Texture, Actuation and Sensing

Jifei Ou, Gershon Dublon, Chin-Yi Cheng, Felix Heibeck, Karl Willis, and Hiroshi Ishii. 2016. Cilllia: 3D Printed Micro-Pillar Structures for Surface Texture, Actuation and Sensing. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI '16). ACM, New York, NY, USA, 5753-5764.

DOI: https://doi.org/10.1145/2858036.2858257
In nature, hair has numerous functions: providing warmth, adhesion, locomotion, sensing, and a sense of touch, as well as its well-known aesthetic qualities. This work presents a method of 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and generate hair geometry smaller than 100 microns. We built a software platform to let one quickly define a hair's angle, thickness, density, and height. The ability to fabricate customized hair-like structures enables us to design alternative actuators and sensors. We also present several applications to show how the 3D-printed hair can be used for designing everyday interactive objects.
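
As a rough illustration of the parameters listed above (angle, thickness, density, height), here is a small generator for a flat patch; this is our own sketch, not the paper's software platform, and the units and defaults are assumptions:

    import math, random

    def generate_hairs(width_mm, depth_mm, density_per_mm2,
                       height_mm=2.0, angle_deg=30.0, thickness_mm=0.05):
        """Return (base, tip, thickness) segments for tilted micro-pillars."""
        random.seed(0)                      # deterministic layout for repeatability
        n = int(width_mm * depth_mm * density_per_mm2)
        tilt = math.radians(angle_deg)
        hairs = []
        for _ in range(n):
            x, y = random.uniform(0, width_mm), random.uniform(0, depth_mm)
            tip = (x + height_mm * math.sin(tilt), y, height_mm * math.cos(tilt))
            hairs.append(((x, y, 0.0), tip, thickness_mm))   # all hairs lean in +x
        return hairs

    print(len(generate_hairs(10, 10, 5)), "hairs generated")

Each segment would then be converted into printable slices at the printer's native resolution.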

xPrint: A Modularized Liquid Printer for Smart Materials Deposition

Guanyun Wang, Lining Yao, Wen Wang, Jifei Ou, Chin-Yi Cheng, Hiroshi Ishii. 2016. xPrint: A Modularized Liquid Printer for Smart Materials Deposition. In Proceedings of the 2016 CHI Conference on Human Factors in Computing Systems (CHI ‘16). ACM, New York, NY, USA, 5743-5752.

DOI: http://dx.doi.org/10.1145/2858036.2858281
To meet the increasing requirements of HCI researchers who are looking into using liquid-based materials (e.g., hydrogels) to create novel interfaces, we present a design strategy for HCI researchers to build and customize a liquid-based smart material printing platform with off-the-shelf or easy-to-machine parts. For the hardware, we suggest a magnetic assembly–based modular design. These modularized parts can be easily and precisely reconfigured to meet different processing requirements such as mechanical mixing, chemical reaction, light activation, and solution vaporization. In addition, xPrint supports an open-source, highly customizable software design and simulation platform, which is applicable for simulating and facilitating smart material constructions. Furthermore, compared to inkjet or pneumatic syringe–based printing systems, xPrint has a large range of printable materials, from synthesized polymers to living natural micro-organisms, with a printing resolution from 10 μm up to 5 mm (droplet size). In this paper, we introduce the system design in detail and three use cases to demonstrate the material variability and the customizability for users with different demands (e.g., designers, scientific researchers, or artists).

Haptic Edge Display for Mobile Tactile Interaction

Sungjune Jang, Lawrence H. Kim, Kesler Tanner, Hiroshi Ishii, and Sean Follmer. 2016. Haptic Edge Display for Mobile Tactile Interaction. In Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems (CHI ‘16). ACM, New York, NY, USA, xx-xx.

DOI: http://dx.doi.org/10.1145/2858036.2858264
Current mobile devices do not leverage the rich haptic channel of information that our hands can sense, and instead focus primarily on touch-based graphical interfaces. Our goal is to enrich the user experience of these devices through bidirectional haptic and tactile interactions (display and control) around the edge of hand-held devices. We propose a novel type of haptic interface, a Haptic Edge Display, consisting of actuated pins on the side of a display that form a linear array of tactile pixels (taxels). These taxels are implemented using small piezoelectric actuators, which can be made cheaply and have ideal characteristics for mobile devices. We developed two prototype Haptic Edge Displays, one with 24 actuated pins (3.75 mm in pitch) and a second with 40 pins (2.5 mm in pitch). This paper describes several novel haptic interactions for the Haptic Edge Display, including dynamic physical affordances, shape display, non-dominant hand interactions, and in-pocket “pull” style haptic notifications. In a laboratory experiment we investigated the limits of human perception for Haptic Edge Displays, measuring the just-noticeable difference for pin width and height changes for both in-hand and simulated in-pocket conditions.

SoundFORMS: Manipulating Sound Through Touch

Aubrey Colter, Patlapa Davivongsa, Donald Derek Haddad, Halla Moore, Brian Tice, and Hiroshi Ishii. 2016. SoundFORMS: Manipulating Sound Through Touch. In Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (CHI EA ’16). Association for Computing Machinery, New York, NY, USA, 2425–2430. DOI:https://doi.org/10.1145/2851581.2892414

SoundFORMS creates a new method for composers of electronic music to interact with their compositions. Through the use of a pin-based shape-shifting display, synthesized waveforms are projected in three dimensions in real time affording the ability to hear, visualize, and interact with the timbre of the notes. Two types of music composition are explored: generation of oscillator tones, and triggering of pre-recorded audio samples. The synthesized oscillating tones have three timbres: sine, sawtooth and square wave. The pre-recorded audio samples are drum tracks. Through the use of a gestural vocabulary, the user can directly touch and modify synthesized waveforms.
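
A minimal sketch of the display mapping described above: one period of the chosen oscillator is sampled across a row of pins and normalized to pin travel. The 24-pin width and 0..1 height range are assumptions, not the system's actual resolution:

    import math

    def waveform_heights(kind, n_pins=24):
        """Sample one period of an oscillator as normalized pin heights."""
        heights = []
        for i in range(n_pins):
            phase = i / n_pins
            if kind == "sine":
                v = math.sin(2 * math.pi * phase)
            elif kind == "sawtooth":
                v = 2 * phase - 1
            elif kind == "square":
                v = 1.0 if phase < 0.5 else -1.0
            else:
                raise ValueError(kind)
            heights.append((v + 1) / 2)     # map [-1, 1] to [0, 1] pin travel
        return heights

    print(" ".join(f"{h:.1f}" for h in waveform_heights("square")))

Touch input runs the same mapping in reverse: pressing pins re-shapes the sampled waveform, which can then be resynthesized.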

Materiable: Rendering Dynamic Material Properties in Response to Direct Physical Touch with Shape Changing Interfaces

Ken Nakagaki, Luke Vink, Jared Counts, Daniel Windham, Daniel Leithinger, Sean Follmer, and Hiroshi Ishii. 2016. Materiable: Rendering Dynamic Material Properties in Response to Direct Physical Touch with Shape Changing Interfaces. In Proceedings of the 34th Annual ACM Conference on Human Factors in Computing Systems (CHI ‘16). ACM, New York, NY, USA, xx-xx.

DOI: http://dx.doi.org/10.1145/2858036.2858104
Shape changing interfaces give physical shapes to digital data so that users can feel and manipulate data with their hands and bodies. However, physical objects in our daily life not only have shape but also various material properties. In this paper, we propose an interaction technique to represent material properties using shape changing interfaces. Specifically, by integrating multi-modal haptic sensation techniques, our approach builds a perceptive model for the properties of deformable materials in response to direct manipulation. As a proof-of-concept prototype, we developed preliminary physics algorithms running on pin-based shape displays. The system can create computationally variable properties of deformable materials that are visually and physically perceivable. In our experiments, users identify three deformable material properties (flexibility, elasticity and viscosity) through direct touch interaction with the shape display and its dynamic movements. In this paper, we describe the interaction techniques, our implementation, future applications, and an evaluation of how users differentiate between specific material properties rendered by our system. Our research shows that shape changing interfaces can go beyond simply displaying shape, allowing for rich embodied interaction and perception of rendered materials with the hands and body.
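
A toy version of the per-pin physics (our reading of the approach, with made-up constants): treat each pin as a mass-spring-damper, and tune stiffness and damping to make the surface read as elastic or viscous under a poke:

    def simulate_pin(k, c, press_depth=0.5, steps=200, dt=0.01, mass=0.05):
        """Semi-implicit Euler for one pin displaced by a user's press."""
        x, v = -press_depth, 0.0
        trace = []
        for _ in range(steps):
            a = (-k * x - c * v) / mass   # restoring force plus damping
            v += a * dt
            x += v * dt
            trace.append(x)
        return trace

    elastic = simulate_pin(k=20.0, c=0.2)   # springs back and oscillates
    viscous = simulate_pin(k=20.0, c=3.0)   # creeps back without overshoot
    print(f"elastic overshoot {max(elastic):.3f}, viscous overshoot {max(viscous):.3f}")

Coupling neighboring pins with additional springs extends the same idea from a single pin to surface-wide flexibility.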

Inflated Curiosity

Jifei Ou, Felix Heibeck, and Hiroshi Ishii. 2016. TEI 2016 Studio: Inflated Curiosity. In Proceedings of the Tenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '16). ACM, New York, NY, USA, 766-769.

DOI: https://doi.org/10.1145/2839462.2854119

HydroMorph: Shape Changing Water Membrane for Display and Interaction

Ken Nakagaki, Pasquale Totaro, Jim Peraino, Thariq Shihipar, Chantine Akiyama, Yin Shuang, and Hiroshi Ishii. 2016. HydroMorph: Shape Changing Water Membrane for Display and Interaction. In Proceedings of the Tenth International Conference on Tangible, Embedded, and Embodied Interaction (TEI ‘16). ACM, New York, NY, USA, 512-517.

DOI: http://dx.doi.org/10.1145/2807442.2807452
HydroMorph is an interactive display based on shapes formed by a stream of water. Inspired by the membrane formed when a water stream hits a smooth surface (e.g. a spoon), we developed a system that dynamically controls the shape of a water membrane. This paper describes the design and implementation of our system, explores a design space of interactions around water shapes, and proposes a set of user scenarios in applications across scales, from the faucet to the fountain. Through this work, we look to enrich our interaction with water, an everyday material, with the added dimension of transformation.

2015

bioPrint: A Liquid Deposition Printing System for Natural Actuators

Lining Yao, Jifei Ou, Guanyun Wang, Chin-Yi Cheng, Wen Wang, Helene Steiner, and Hiroshi Ishii. 2015. bioPrint: A Liquid Deposition Printing System for Natural Actuators. 3D Printing and Additive Manufacturing.

DOI: http://dx.doi.org/10.1089/3dp.2015.0033
This article presents a digital fabrication platform for depositing solution-based natural stimuli-responsive material on a thin flat substrate to create hygromorphic biohybrid films. Bacillus subtilis bacterial spores are deposited in the printing process. The hardware system consists of a progressive cavity pump fluidic dispenser, a numerical control gantry, a cooling fan, a heating bed, an agitation module, and a camera module. The software pipeline includes the design of print patterns, simulation of resulting material transformations, and communication with computer hardware. The hardware and software systems are highly modularized and can therefore be easily reconfigured by the user.

LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint

Ken Nakagaki, Sean Follmer, and Hiroshi Ishii. 2015. LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint. In Proceedings of the 28th annual ACM symposium on User interface software and technology (UIST '15). ACM, New York, NY, USA.

In this paper we explore the design space of actuated curve interfaces, a novel class of shape-changing interfaces. Physical curves have several interesting characteristics from the perspective of interaction design: they have a variety of inherent affordances; they can easily represent abstract data; and they can act as constraints, boundaries, or borderlines. By utilizing such aspects of lines and curves, together with the added capability of shape-change, new possibilities for display, interaction, and body constraint are possible. In order to investigate these possibilities we have implemented two actuated curve interfaces at different scales. LineFORM, our implementation, inspired by serpentine robotics, comprises a serial chain of 1-DOF servo motors with integrated sensors for direct manipulation. To motivate this work we present applications such as shape changing cords, mobiles, body constraints, and data manipulation tools.

uniMorph - Fabricating Thin Film Composites for Shape-Changing Interfaces

Felix Heibeck, Basheer Tome, Clark Della Silva, and Hiroshi Ishii. 2015. UniMorph - Fabricating Thin-Film Composite for Shape-Changing Interfaces. UIST'15.

Researchers have been investigating shape changing interfaces, however technologies for thin, reversible shape change remain complicated to fabricate. uniMorph is an enabling technology for rapid digital fabrication of customized thin-film shape-changing interfaces. By combining the thermoelectric characteristics of copper with the high thermal expansion rate of ultra-high molecular weight polyethylene, we are able to actuate the shape of flexible circuit composites directly. The shape-changing actuation is enabled by a temperature-driven mechanism and reduces the complexity of fabrication for thin shape-changing interfaces. In this paper we describe how to design and fabricate thin uniMorph composites. We present composites that are actuated by either environmental temperature changes or active heating of embedded structures and provide a systematic overview of shape-changing primitives. Finally, we present different sensing techniques that leverage the existing copper structures or can be seamlessly embedded into the uniMorph composite. To demonstrate the wide applicability of uniMorph, we present several applications in ubiquitous and mobile computing.
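
The copper/polyethylene bilayer described above can be reasoned about with the classic Timoshenko bimorph formula; the formula is standard, but the material values below are our ballpark assumptions rather than figures from the paper:

    def bimorph_curvature(t1, t2, E1, E2, a1, a2, dT):
        """Timoshenko curvature 1/rho (1/m) of a heated two-layer strip."""
        m, n, h = t1 / t2, E1 / E2, t1 + t2
        num = 6.0 * (a2 - a1) * dT * (1 + m) ** 2
        den = h * (3 * (1 + m) ** 2 + (1 + m * n) * (m ** 2 + 1.0 / (m * n)))
        return num / den

    kappa = bimorph_curvature(
        t1=18e-6, t2=100e-6,     # copper foil vs UHMW-PE film thickness (m), assumed
        E1=110e9, E2=0.7e9,      # Young's moduli (Pa), rough handbook values
        a1=17e-6, a2=200e-6,     # thermal expansion coefficients (1/K), rough values
        dT=30.0)                 # modest Joule heating
    print(f"curvature {kappa:.1f} 1/m, bend radius {1000 / kappa:.1f} mm")

The large expansion mismatch between polyethylene and copper is what lets a few tens of kelvin produce a visible curl.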

Grasping Information and Collaborating through Shape Displays

Daniel Leithinger. 2015. Grasping Information and Collaborating through Shape Displays. PhD dissertation, Massachusetts Institute of Technology.

DOI: https://dspace.mit.edu/handle/1721.1/101848
The vision to interact with computers through our whole body - to not only visually perceive information, but to engage with it through multiple senses - has inspired human computer interaction (HCI) research for decades. Shape displays address this challenge by rendering dynamic physical shapes through computer controlled, actuated surfaces that users can view from different angles and touch with their hands to experience digital models, express their ideas and collaborate with each other. Similar to kinetic sculptures, shape displays do not just occupy, rather they redefine the physical space around them. By dynamically transforming their surface geometry, they directly push against hands and objects, yet they also form a perceptual connection with the user's gestures and body movements at a distance. Based on this principle of spatial continuity, this thesis introduces a set of interaction techniques that move between touching the interface surface, to interacting with tangible objects on top, and to engaging through gestures in relation to it. These techniques are implemented on custom-built shape display systems that integrate physical rendering, synchronized visual display, shape sensing, and spatial tracking. On top of this hardware platform, applications for computer-aided design, urban planning, and volumetric data exploration allow users to manipulate data at different scales and modalities. To support remote collaboration, shared telepresence workspaces capture and remotely render the physical shapes of people and objects. Users can modify shared models and handle remote objects, while augmenting their capabilities through altered remote body representations. The insights gained from building these prototype workspaces and from gathering user feedback point towards a future in which computationally transforming materials will enable new types of bodily, spatial interaction with computers.

Shape Displays: Spatial Interaction with Dynamic Physical Form

Leithinger, Daniel, Sean Follmer, Alex Olwal, and Hiroshi Ishii. "Shape Displays: Spatial Interaction with Dynamic Physical Form." Computer Graphics and Applications, IEEE 35, no. 5 (2015): 5-11.

DOI: http://dx.doi.org/10.1109/MCG.2015.111
Shape displays are an emerging class of devices that emphasize actuation to enable rich physical interaction, complementing concepts in virtual and augmented reality. The ability to render form introduces new opportunities to touch, grasp, and manipulate dynamic physical content and tangible objects, in both nearby and remote environments. This article presents novel hardware, interaction techniques, and applications, which point to the potential for extending the ways that we traditionally interact with the physical world, empowered by digital computation.

Methods of 3D Printing Micro-pillar Structures on Surfaces

Jifei Ou, Chin-Yi Cheng, Liang Zhou, Gershon Dublon, and Hiroshi Ishii. 2015. Methods of 3D Printing Micro-pillar Structures on Surfaces. In Adjunct Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (UIST '15 Adjunct). ACM, New York, NY, USA, 59-60.

DOI: https://doi.org/10.1145/2815585.2817812
This work presents a method of 3D printing hair-like structures on both flat and curved surfaces. It allows a user to design and fabricate hair geometry smaller than 100 microns. We built a software platform to let one quickly define a hair's angle, thickness, density, and height. The ability to fabricate customized hair-like structures expands the library of 3D-printable shapes. We then present several applications to show how the 3D-printed hair can be used for designing toy objects.

Exoskin: Pneumatically Augmenting Inelastic Materials for Texture Changing Interfaces

Programmable materials have the power to bring inert materials in the world around us to life. Exoskin provides a way to embed a multitude of static, rigid materials into actuatable, elastic membranes, allowing the new semi-rigid composites to sense, react, and compute. In this thesis, we give an overview of our motivations, design space, and molding architecture that together answer the when, where, and how of Exoskin’s use. We then use Exowheel, an automotive steering wheel, as a case study illustrating the concrete benefits and uses of texture change as a multi-modal, bi-directional interface. By incorporating Exoskin, Exowheel is able to transform its surface dynamically to create a customized grip for each individual user, on-the-fly, as well as to adapt the grip during the drive, as the car moves from congested city driving to rougher rural roads. Finally, we introduce the idea of membrane-backed rigid materials as a broader, more versatile platform for introducing texture change and sensing into a variety of other products as well. By deeply embedding soft materials with more-static materials, we can break down the divide between rigid and soft, and animate and inanimate, providing inspiration for Human-Computer Interaction researchers to design more interfaces using physical materials around them, rather than just relying on intangible pixels and their limitations.

TRANSFORM: Embodiment of “Radical Atoms” at Milano Design Week

Hiroshi Ishii, Daniel Leithinger, Sean Follmer, Amit Zoran, Philipp Schoessler, and Jared Counts, "TRANSFORM: Embodiment of “Radical Atoms” at Milano Design Week," CHI'15 Extended Abstracts, April 18–23, 2015, Seoul, Republic of Korea.

DOI: http://dx.doi.org/10.1145/2702613.2702969
TRANSFORM fuses technology and design to celebrate the transformation from a piece of static furniture to a dynamic machine driven by streams of data and energy. TRANSFORM aims to inspire viewers with unexpected transformations, as well as the aesthetics of a complex machine in motion. This paper describes the concept, engine, product, and motion design of TRANSFORM, which was first exhibited at LEXUS DESIGN AMAZING 2014 MILAN in April 2014.

bioLogic: Natto Cells as Nanoactuators for Shape Changing Interfaces

Lining Yao, Jifei Ou, Chin-Yi Cheng, Helene Steiner, Wen Wang, Guanyun Wang, and Hiroshi Ishii. 2015. bioLogic: Natto Cells as Nanoactuators for Shape Changing Interfaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 1-10.

DOI: https://doi.org/10.1145/2702123.2702611
Through scientific research in collaboration with biologists, we found that natto cells can contract and expand with changes in relative humidity. In this paper, we first describe the scientific discovery of natto cells as a biological actuator. Next, we expand on the technological developments that enable the translation between the nanoscale actuators and the macroscale interface design: the development of the composite biofilm, the development of the responsive structures, the control setup for actuating biofilms, and a simulation and fabrication platform. Finally, we provide a variety of application designs, with and without computer control, to demonstrate the potential of our bioactuators. Through this paper, we intend to encourage the use of natto cells and our platform technologies for the design of shape changing interfaces, and more generally, the use and research of biological materials in HCI.

Kinetic Blocks - Actuated Constructive Assembly for Interaction and Display

Philipp Schoessler, Daniel Windham, Daniel Leithinger, Sean Follmer, and Hiroshi Ishii. 2015. Kinetic Blocks - Actuated Constructive Assembly for Interaction and Display. In Proceedings of the 28th annual ACM symposium on User interface software and technology (UIST '15). ACM, New York, NY, USA.

DOI: http://dx.doi.org/10.1145/2807442.2807453
Pin-based shape displays not only give physical form to digital information, they have the inherent ability to accurately move and manipulate objects placed on top of them. In this paper we focus on such object manipulation: we present ideas and techniques that use the underlying shape change to give kinetic ability to otherwise inanimate objects. First, we describe the shape display’s ability to assemble, disassemble, and reassemble structures from simple passive building blocks through stacking, scaffolding, and catapulting. A technical evaluation demonstrates the reliability of the presented techniques. Second, we introduce special kinematic blocks that are actuated and sensed through the underlying pins. These blocks translate vertical pin movements into other degrees of freedom like rotation or horizontal movement. This interplay of the shape display with objects on its surface allows us to render otherwise inaccessible forms, like overhangs, and enables richer input and output.

Dynamic Physical Affordances for Shape-Changing and Deformable User Interfaces

Sean Follmer. 2015. Dynamic Physical Affordances for Shape-Changing and Deformable User Interfaces. PhD dissertation, Massachusetts Institute of Technology.

The world is filled with tools and devices designed to fit specific needs and goals, and their physical form plays an important role in helping users understand their use. These physical affordances provide products and interfaces with many advantages: they contribute to good ergonomics, allow users to attend to other tasks visually, and take advantage of embodied and distributed cognition by allowing users to offload mental computation spatially. However, devices today include more and more functionality, with increasingly fewer physical affordances, losing many of the advantages in expressivity and dexterity that our hands can provide. My research examines how we can apply shape-changing and deformable interfaces to address the lack of physical affordances in today’s interactive products and enable richer physical interaction with general purpose computing interfaces. In this thesis, I introduce tangible interfaces that use their form to adapt to the functions and ways users want to interact with them. I explore two solutions: 1) creating Dynamic Physical Affordances through shape change and 2) user Improvised Physical Affordances through direct deformation and through appropriation of existing objects. Dynamic Physical Affordances can provide buttons and sliders on demand as an application changes, or even allow users to directly manipulate 3D models or data sets through physical handles which appear out of the data. Improvised Physical Affordances can allow users to squeeze, stretch, and deform input devices to fit their needs, creating the perfect game controller, or shaping a mobile phone around their wrist to form a bracelet. Novel technical solutions are needed to enable these new interaction techniques; this thesis describes techniques both for actuation and robust sensing for shape-changing and deformable interfaces. Finally, systems that utilize Dynamic Physical Affordances and Improvised Physical Affordances are evaluated to understand patterns of use and performance. My belief is that shape-changing UI will become increasingly available in the future, and this work begins to create a vocabulary and design space for more general-purpose interaction for shape-changing UI.

THAW: Tangible Interaction with See-Through Augmentation for Smartphones on Computer Screens

Sang-won Leigh, Philipp Schoessler, Felix Heibeck, Pattie Maes, and Hiroshi Ishii. 2015. THAW: Tangible Interaction with See-Through Augmentation for Smartphones on Computer Screens. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15). ACM, New York, NY, USA, 89-96.

DOI: http://dx.doi.org/10.1145/2677199.2680584
The huge influx of mobile display devices is transforming computing into multi-device interaction, demanding a fluid mechanism for using multiple devices in synergy. In this paper, we present a novel interaction system that allows a collocated large display and a small handheld device to work together. The smartphone acts as a physical interface for near-surface interactions on a computer screen. Our system enables accurate position tracking of a smartphone placed on or over any screen by displaying a 2D color pattern that is captured using the smartphone’s back-facing camera. As a result, the smartphone can directly interact with data displayed on the host computer, with precisely aligned visual feedback from both devices. The possible interactions are described and classified in a framework, which we exemplify on the basis of several implemented applications. Finally, we present a technical evaluation and describe how our system is unique compared to other existing near-surface interaction systems. The proposed technique can be implemented on existing devices without the need for additional hardware, promising immediate integration into existing systems.
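
The abstract leaves the pattern design open; the toy encoding below conveys the core idea only (the real THAW pattern is engineered for robustness to blur, perspective, and occlusion). Here a hypothetical host encodes screen X in the red channel and Y in the green channel, so a single sampled pixel localizes the phone:

    W, H = 1920, 1080    # assumed host resolution

    def host_pixel(x, y):
        """Color the host screen would draw at (x, y) in this toy scheme."""
        return (round(255 * x / (W - 1)), round(255 * y / (H - 1)), 0)

    def decode(rgb):
        """Phone side: recover (x, y) from one pixel seen by the back camera."""
        r, g, _ = rgb
        return (r / 255 * (W - 1), g / 255 * (H - 1))

    x, y = decode(host_pixel(960, 540))
    print(f"decoded position: ({x:.0f}, {y:.0f})")   # near (960, 540); 8-bit quantization

In practice a smooth global gradient like this is too coarse and too easily confused with screen content, which is why the actual system displays a richer 2D color pattern under the phone's footprint.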

Cord UIs: Controlling Devices with Augmented Cables

Philipp Schoessler, Sang-won Leigh, Krithika Jagannath, Patrick van Hoof, and Hiroshi Ishii. 2015. Cord UIs: Controlling Devices with Augmented Cables. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15). ACM, New York, NY, USA.

DOI: http://doi.acm.org/10.1145/2677199.2680601
Cord UIs are sensorially augmented cords that allow for simple, metaphor-rich interactions to interface with their connected devices. Cords offer a large, underexplored space for interactions, as well as unique properties and a diverse set of metaphors that make them potentially interesting tangible interfaces. We use cords as input devices and explore different interactions like tying knots, stretching, pinching and kinking to control the flow of data and/or electricity. We also look at ways to use objects in combination with augmented cords to manipulate data or certain properties of a device. For instance, placing a clamp on a cable can obstruct the audio signal to the headphones. To test and evaluate our ideas, we built five working prototypes that showcase some of the interactions described in this paper as well as special materials such as piezo copolymer cables and stretchable cords.

Sticky Actuator: Free-Form Planar Actuators for Animated Objects

Ryuma Niiyama, Xu Sun, Lining Yao, Hiroshi Ishii, Daniela Rus, and Sangbae Kim. 2015. Sticky Actuator: Free-Form Planar Actuators for Animated Objects. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15). ACM, New York, NY, USA, 77-84.

DOI: http://dx.doi.org/10.1145/2677199.2680600
We propose soft planar actuators enhanced by free-form fabrication that are suitable for making everyday objects move. The actuator consists of one or more inflatable pouches with an adhesive back. We have developed a machine for the fabrication of free-form pouches; squares, circles and ribbons are all possible. The deformation of the pouches can provide linear, rotational, and more complicated motion corresponding to the pouch’s geometry. We also provide both a manual and a programmable control system. In a user study, we organized a hands-on workshop of actuated origami for children. The results show that the combination of the actuator and classic materials can enhance rapid prototyping of animated objects.
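
One detail worth making explicit (our note, drawing on standard pouch-motor geometry rather than this paper's text): a flat pouch of width w inflates toward a circular cross-section, so its in-plane width shrinks toward 2w/pi, about a 36% contraction, and that contraction is what drives the attached surfaces:

    import math

    def inflated_width(flat_width_mm):
        """Width of a fully inflated pouch whose flat width is given."""
        return 2 * flat_width_mm / math.pi   # perimeter 2w closes into a circle

    w = 20.0
    print(f"{w:.0f} mm pouch -> {inflated_width(w):.1f} mm inflated "
          f"({(1 - 2 / math.pi) * 100:.0f}% contraction)")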

Social Textiles: Social Affordances and Icebreaking Interactions Through Wearable Social Messaging

Viirj Kan, Katsuya Fujii, Judith Amores, Chang Long Zhu Jin, Pattie Maes, and Hiroshi Ishii. 2015. Social Textiles: Social Affordances and Icebreaking Interactions Through Wearable Social Messaging. In Proceedings of the Ninth International Conference on Tangible, Embedded, and Embodied Interaction (TEI '15). ACM, New York, NY, USA, 619-624.

DOI: http://dx.doi.org/10.1145/2677199.2688816
Wearable commodities are able to extend beyond the temporal span of a particular community event, offering omnipresent vehicles for producing icebreaking interaction opportunities. We introduce a novel platform, which generates social affordances to facilitate community organizers in aggregating social interaction among unacquainted, collocated members beyond initial hosted gatherings. To support these efforts, we present functional work-in-progress prototypes for Social Textiles, wearable computing textiles which enable social messaging and peripheral social awareness on non-emissive digitally linked shirts. The shirts serve as catalysts for different social depths as they reveal common interests (mediated by community organizers), based on the physical proximity of users. We provide 3 key scenarios, which demonstrate the user experience envisioned with our system. We present a conceptual framework, which shows how different community organizers across domains such as universities, brand communities and digital self-organized communities can benefit from our technology.

TRANSFORM as Adaptive and Dynamic Furniture

Luke Vink, Viirj Kan, Ken Nakagaki, Daniel Leithinger, Sean Follmer, Philipp Schoessler, Amit Zoran and Hiroshi Ishii, "TRANSFORM as Adaptive and Dynamic Furniture," CHI’15 Extended Abstracts, April 18–23, 2015, Seoul, Republic of Korea.

DOI: http://dx.doi.org/10.1145/2702613.2732494
TRANSFORM is an exploration of how shape display technology can be integrated into our everyday lives as interactive, shape changing furniture. These interfaces not only serve as traditional computing devices, but also support a variety of physical activities. By creating shapes on demand or by moving objects around, TRANSFORM changes the ergonomics, functionality and aesthetic dimensions of furniture. The video depicts a story with various scenarios of how TRANSFORM shape shifts to support a variety of use cases in the home and in the work environment: It holds and moves objects like fruits, game tokens, office supplies and tablets; creates dividers on demand; and generates interactive sculptures to convey messages and audio.

Linked-Stick: Conveying a Physical Experience using a Shape-Shifting Stick

Ken Nakagaki, Chikara Inamura, Pasquale Totaro, Thariq Shihipar, Chantine Akiyama, Yin Shuang and Hiroshi Ishii, "Linked-Stick: Conveying a Physical Experience using a Shape-Shifting Stick," CHI’15 Extended Abstracts, April 18–23, 2015, Seoul, Republic of Korea.

DOI: http://dx.doi.org/10.1145/2702613.2732712
We use sticks as tools for a variety of activities, everything from conducting music to playing sports or even engaging in combat. However, these experiences are inherently physical and are poorly conveyed through traditional digital media such as video. Linked-Stick is a shape-changing stick that can mirror the movements of another person’s stick-shaped tool. We explore how this can be used to experience and learn music, sports and fiction in a more authentic manner. Our work attempts to expand the ways in which we interact with and learn to use tools.

bioLogic: Natto Cells as Nanoactuators for Shape Changing Interfaces

Lining Yao, Jifei Ou, Chin-Yi Cheng, Helene Steiner, Wen Wang, Guanyun Wang, and Hiroshi Ishii. 2015. bioLogic: Natto Cells as Nanoactuators for Shape Changing Interfaces. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (CHI '15). ACM, New York, NY, USA, 1-10.

DOI: https://doi.org/10.1145/2702123.2702611
Nature has engineered its own actuators, as well as the efficient material composition, geometry and structure to utilize its actuators and achieve functional transformation. Based on the natural phenomenon of cells' hygromorphic transformation, we introduce the living Bacillus subtilis natto cell as a humidity-sensitive nanoactuator. In this paper, we unfold the process of exploring and comparing cell types that are appropriate for HCI use, the development of the composite biofilm, the development of the responsive structures, the control setup for actuating biofilms, and a simulation and fabrication platform. Finally, we provide a variety of application designs, with and without computer control, to demonstrate the potential of our bioactuators. Through this paper, we intend to enable the use of natto cells and our platform technologies for HCI researchers, designers and bio-hackers. More generally, we try to encourage the research and use of biological responsive materials and interdisciplinary research in HCI.

2014

Physical Telepresence: Shape Capture and Display for Embodied, Computer-mediated Remote Collaboration

Daniel Leithinger, Sean Follmer, Alex Olwal, and Hiroshi Ishii. 2014. Physical telepresence: shape capture and display for embodied, computer-mediated remote collaboration. In Proceedings of the 27th annual ACM symposium on User interface software and technology (UIST '14). ACM, New York, NY, USA, 461-470.

DOI: https://doi.org/10.1145/2642918.2647377
We propose a new approach to Physical Telepresence, based on shared workspaces with the ability to capture and remotely render the shapes of people and objects. In this paper, we describe the concept of shape transmission, and propose interaction techniques to manipulate remote physical objects and physical renderings of shared digital content. We investigate how the representation of users' body parts can be altered to amplify their capabilities for teleoperation. We also describe the details of building and testing prototype Physical Telepresence workspaces based on shape displays. A preliminary evaluation shows how users are able to manipulate remote objects, and we report on our observations of several different manipulation techniques that highlight the expressive nature of our system.
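
A minimal sketch of the capture-to-render path: downsample a depth image of the remote workspace to the pin grid and convert depth to pin height. The grid size and depth range are assumptions for illustration, not the system's specification:

    def depth_to_pins(depth, grid_w=32, grid_h=24, z_near=0.4, z_far=0.9):
        """depth: 2D list of metres from an overhead depth camera."""
        rows, cols = len(depth), len(depth[0])
        pins = [[0.0] * grid_w for _ in range(grid_h)]
        for gy in range(grid_h):
            for gx in range(grid_w):
                d = depth[gy * rows // grid_h][gx * cols // grid_w]  # nearest sample
                t = (z_far - min(max(d, z_near), z_far)) / (z_far - z_near)
                pins[gy][gx] = t      # 0 = retracted, 1 = fully raised
        return pins

    flat_table = [[0.9] * 320 for _ in range(240)]   # nothing above the surface
    print(depth_to_pins(flat_table)[0][:4])          # all pins stay down

Altering the body representation, as described above, then amounts to transforming this height field (scaling, offsetting, or duplicating the hand region) before it is sent to the remote display.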

AnnoScape: Remote Collaborative Review Using Live Video Overlay in Shared 3D Virtual Workspace

Austin Lee, Hiroshi Chigira, Sheng Kai Tang, Kojo Acquah, and Hiroshi Ishii. 2014. AnnoScape: remote collaborative review using live video overlay in shared 3D virtual workspace. In Proceedings of the 2nd ACM symposium on Spatial user interaction (SUI '14). ACM, New York, NY, USA, 26-29.

DOI: https://doi.org/10.1145/2659766.2659776
We introduce AnnoScape, a remote collaboration system that allows users to overlay live video of the physical desktop image on a shared 3D virtual workspace to support individual and collaborative review of 2D and 3D content using hand gestures and real ink. The AnnoScape system enables distributed users to visually navigate the shared 3D virtual workspace individually or jointly by moving tangible handles; simultaneously snap into a shared viewpoint and generate a live video overlay of freehand annotations from the desktop surface onto the system's virtual viewports which can be placed spatially in the 3D data space. Finally, we present results of our preliminary user study and discuss design issues and AnnoScape's potential to facilitate effective communication during remote 3D data reviews.

T(ether): Spatially-Aware Handhelds, Gestures and Proprioception for Multi-User 3D Modeling and Animation

David Lakatos, Matthew Blackshaw, Alex Olwal, Zachary Barryte, Ken Perlin, and Hiroshi Ishii. 2014. T(ether): spatially-aware handhelds, gestures and proprioception for multi-user 3D modeling and animation. In Proceedings of the 2nd ACM symposium on Spatial user interaction (SUI '14). ACM, New York, NY, USA, 90-93.

DOI: https://doi.org/10.1145/2659766.2659785
T(ether) is a spatially-aware display system for multi-user, collaborative manipulation and animation of virtual 3D objects. The handheld display acts as a window into virtual reality, providing users with a perspective view of 3D data. T(ether) tracks users' heads, hands, fingers and pinching, in addition to a handheld touch screen, to enable rich interaction with the virtual scene. We introduce gestural interaction techniques that exploit proprioception to adapt the UI based on the hand's position above, behind or on the surface of the display. These spatial interactions use a tangible frame of reference to help users manipulate and animate the model in addition to controlling environment properties. We report on initial user observations from an experiment for 3D modeling, which indicate T(ether)'s potential for embodied viewport control and 3D modeling interactions.

bioPrint: An automatic deposition system for Bacteria Spore Actuators

Jifei Ou, Lining Yao, Clark Della Silva, Wen Wang, and Hiroshi Ishii. 2014. bioPrint: an automatic deposition system for bacteria spore actuators. In Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology (UIST'14 Adjunct). ACM, New York, NY, USA, 121-122.

DOI: https://doi.org/10.1145/2658779.2658806
We propose an automatic deposition method for bacteria spores, which deform thin soft materials under environmental humidity change. We describe the process of two-dimensionally printing the spore solution as well as a design application. This research intends to contribute to the understanding of how to control and pre-program the transformation of future interfaces.

THAW: Tangible Interaction with See-Through Augmentation for Smartphones on Computer Screens

Sang-won Leigh, Philipp Schoessler, Felix Heibeck, Pattie Maes, and Hiroshi Ishii. 2014. THAW: tangible interaction with see-through augmentation for smartphones on computer screens. In Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology (UIST'14 Adjunct). ACM, New York, NY, USA, 55-56.

DOI: https://doi.org/10.1145/2658779.2659111
In this paper, we present a novel interaction system that allows a collocated large display and small handheld devices to seamlessly work together. The smartphone acts both as a physical interface and as an additional graphics layer for near-surface interaction on a computer screen. Our system enables accurate position tracking of a smartphone placed on or over any screen by displaying a 2D color pattern that is captured using the smartphone’s back-facing camera. The proposed technique can be implemented on existing devices without the need for additional hardware.

Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion

Xiao Xiao, Basheer Tome, and Hiroshi Ishii. 2014. Andante: Walking Figures on the Piano Keyboard to Visualize Musical Motion. In Proceedings of the 14th International Conference on New Interfaces for Musical Expression (NIME ‘14). Goldsmiths University of London. London, UK.

We present Andante, a representation of music as animated characters walking along the piano keyboard that appear to play the physical keys with each step. Based on a view of music pedagogy that emphasizes expressive, full-body communication early in the learning process, Andante promotes an understanding of music rooted in the body, taking advantage of walking as one of the most fundamental human rhythms. We describe three example visualizations on a preliminary prototype as well as applications extending our examples for practice feedback, improvisation and composition. Through our project, we reflect on some high level considerations for the NIME community.

jamSheets: Thin Interfaces with Tunable Stiffness Enabled by Layer Jamming

Jifei Ou, Lining Yao, Daniel Tauber, Jürgen Steimle, Ryuma Niiyama, and Hiroshi Ishii. 2014. jamSheets: thin interfaces with tunable stiffness enabled by layer jamming. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI '14). ACM, New York, NY, USA, 65-72.

DOI: https://doi.org/10.1145/2540930.2540971
This work introduces layer jamming as an enabling technology for designing deformable, stiffness-tunable, thin sheet interfaces. Interfaces that exhibit tunable stiffness properties can yield dynamic haptic feedback and shape deformation capabilities. In comparison to particle jamming, layer jamming allows for thin and lightweight interface form factors. We propose five layer structure designs and an approach that composites multiple materials to control the deformability of the interfaces. We also present methods to embed different types of sensing and pneumatic actuation layers on the layer-jamming unit. Through three application prototypes we demonstrate the benefits of using layer jamming in interface design. Finally, we provide a survey of materials that have proven successful for layer jamming.
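
A back-of-envelope account of why jamming changes stiffness so sharply (our reasoning, not a derivation from the paper): n unbonded layers bend independently, so flexural rigidity grows like n*t^3, while vacuum-jammed layers act as one beam of thickness n*t, i.e. (n*t)^3:

    def rigidity_ratio(n_layers):
        """Jammed vs unjammed bending rigidity for n identical layers."""
        unjammed = n_layers * 1 ** 3      # sum of individual layer rigidities
        jammed = (n_layers * 1) ** 3      # one monolithic beam of total thickness
        return jammed / unjammed          # simplifies to n_layers ** 2

    for n in (2, 5, 10):
        print(f"{n:2d} layers -> {rigidity_ratio(n):.0f}x stiffer when jammed")

This n-squared scaling is why even a thin stack of sheets can switch convincingly between floppy and rigid.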

Weight and Volume Changing Device with Liquid Metal Transfer

Ryuma Niiyama, Lining Yao, and Hiroshi Ishii. 2014. Weight and volume changing device with liquid metal transfer. In Proceedings of the 8th International Conference on Tangible, Embedded and Embodied Interaction (TEI '14). ACM, New York, NY, USA, 49-52.

DOI: https://doi.org/10.1145/2540930.2540953
This paper presents a weight-changing device based on the transfer of mass. We chose liquid metal (Ga-In-Tin eutectic) and a bi-directional pump to control the mass that is injected into or removed from a target object. The liquid metal has a density of 6.44 g/cm3, about six times that of water, and is thus suitable for effective mass transfer. We also combine the device with a dynamic volume-changing function to achieve programmable mass and volume at the same time. We explore three potential applications enabled by weight-changing devices: density simulation of different materials, miniature representation of planets with scaled size and mass, and motion control by changing the gravity force. This technique opens up a new design space in human-computer interactions.
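
A worked example using the paper's own density figure shows why liquid metal is attractive for mass transfer; the 100 g target below is our arbitrary choice:

    RHO_GALINSTAN = 6.44   # g/cm^3, Ga-In-Tin eutectic density from the paper
    RHO_WATER = 1.00

    def volume_for_mass(grams, rho=RHO_GALINSTAN):
        """Volume in cm^3 (= mL) to pump for a given mass change."""
        return grams / rho

    print(f"density ratio vs water: {RHO_GALINSTAN / RHO_WATER:.1f}x")
    print(f"adding 100 g requires pumping {volume_for_mass(100):.1f} mL")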

Integrating Optical Waveguides for Display and Sensing on Pneumatic Soft Shape Changing Interfaces

Lining Yao, Jifei Ou, Daniel Tauber, and Hiroshi Ishii. 2014. Integrating optical waveguides for display and sensing on pneumatic soft shape changing interfaces. In Proceedings of the adjunct publication of the 27th annual ACM symposium on User interface software and technology (UIST'14 Adjunct). ACM, New York, NY, USA, 117-118.

DOI: http://doi.acm.org/10.1145/2658779.2658804
We introduce the design and fabrication process of integrating optical fiber into pneumatically driven soft composite shape-changing interfaces. Embedded optical waveguides can provide both sensing and illumination, and add one more building block to the design of soft pneumatic shape-changing interfaces.

2013

inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation

Sean Follmer, Daniel Leithinger, Alex Olwal, Akimitsu Hogge, and Hiroshi Ishii. 2013. inFORM: Dynamic Physical Affordances and Constraints through Shape and Object Actuation. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST '13). ACM, New York, NY, USA.

DOI: http://doi.acm.org/10.1145/2501988.2502032
Past research on shape displays has primarily focused on rendering content and user interface elements through shape output, with less emphasis on dynamically changing UIs. We propose utilizing shape displays in three different ways to mediate interaction: to facilitate by providing dynamic physical affordances through shape change, to restrict by guiding users with dynamic physical constraints, and to manipulate by actuating physical objects. We outline potential interaction techniques and introduce Dynamic Physical Affordances and Constraints with our inFORM system, built on top of a state-of-the-art shape display, which provides for variable stiffness rendering and real-time user input through direct touch and tangible interaction. A set of motivating examples demonstrates how dynamic affordances, constraints and object actuation can create novel interaction possibilities.

FocalSpace: Multimodal Activity Tracking, Synthetic Blur and Adaptive Presentation for Video Conferencing

Lining Yao, Anthony DeVincenzi, Anna Pereira, and Hiroshi Ishii. 2013. FocalSpace: multimodal activity tracking, synthetic blur and adaptive presentation for video conferencing. In Proceedings of the 1st symposium on Spatial user interaction (SUI '13). ACM, New York, NY, USA, 73-76.

DOI: http://doi.acm.org/10.1145/2491367.2491377
We introduce FocalSpace, a video conferencing system that dynamically recognizes relevant activities and objects through depth sensing and hybrid tracking of multimodal cues, such as voice, gesture, and proximity to surfaces. FocalSpace uses this information to enhance users’ focus by diminishing the background through synthetic blur effects. We present scenarios that support the suppression of visual distraction, provide contextual augmentation, and enable privacy in dynamic mobile environments. Our user evaluation indicates increased memory accuracy and user preference for FocalSpace techniques compared to traditional video conferencing.
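The core effect is easy to sketch: blur everything whose depth falls outside a "focal" range. A minimal version assuming an OpenCV/NumPy pipeline and a depth map registered to the color frame (this is an illustration, not the authors' implementation):

```python
# Minimal sketch: depth-gated synthetic blur in the spirit of FocalSpace.
import cv2
import numpy as np

def focal_blur(frame_bgr, depth_m, near=0.5, far=1.5, ksize=31):
    """Composite a sharp foreground over a blurred background.

    frame_bgr: HxWx3 uint8 color image
    depth_m:   HxW float32 depth in meters (e.g., from a depth camera)
    """
    blurred = cv2.GaussianBlur(frame_bgr, (ksize, ksize), 0)
    in_focus = ((depth_m >= near) & (depth_m <= far)).astype(np.uint8)
    # Median filter cleans up speckle noise in the depth mask.
    mask = cv2.medianBlur(in_focus * 255, 5)[:, :, None] / 255.0
    return (frame_bgr * mask + blurred * (1 - mask)).astype(np.uint8)
```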

Sublimate: State-Changing Virtual and Physical Rendering to Augment Interaction with Shape Displays

Daniel Leithinger, Sean Follmer, Alex Olwal, Samuel Luescher, Akimitsu Hogge, Jinha Lee, and Hiroshi Ishii. 2013. Sublimate: state-changing virtual and physical rendering to augment interaction with shape displays. In Proceedings of the 2013 ACM annual conference on Human factors in computing systems (CHI '13). ACM, New York, NY, USA, 1441-1450.

DOI: http://dx.doi.org/10.1145/2470654.2466191
Recent research in 3D user interfaces pushes towards immersive graphics and actuated shape displays. Our work explores the hybrid of these directions, and we introduce sublimation and deposition, as metaphors for the transitions between physical and virtual states. We discuss how digital models, handles and controls can be interacted with as virtual 3D graphics or dynamic physical shapes, and how user interfaces can rapidly and fluidly switch between those representations. To explore this space, we developed two systems that integrate actuated shape displays and augmented reality (AR) for co-located physical shapes and 3D graphics. Our spatial optical see-through display provides a single user with head-tracked stereoscopic augmentation, whereas our handheld devices enable multi-user interaction through video see-through AR. We describe interaction techniques and applications that explore 3D interaction for these new modalities. We conclude by discussing the results from a user study that show how freehand interaction with physical shape displays and co-located graphics can outperform wand-based interaction with virtual 3D graphics.

Beyond Visualization – Designing Interfaces to Contextualize Geospatial Data

Samuel Luescher

The growing sensor data collections about our environment have the potential to drastically change our perception of the fragile world we live in. To make sense of such data, we commonly use visualization techniques, enabling public discourse and analysis. This thesis describes the design and implementation of a series of interactive systems that integrate geospatial sensor data visualization and terrain models with various user interface modalities in an educational context to support data analysis and knowledge building using part-digital, part-physical rendering. The main contribution of this thesis is a concrete application scenario and initial prototype of a “Designed Environment” where we can explore the relationship between the surface of Japan’s islands, the tension that originates in the fault lines along the seafloor beneath its east coast, and the resulting natural disasters. The system is able to import geospatial data from a multitude of sources on the “Spatial Web”, bringing us one step closer to a tangible “dashboard of the Earth.”

synchroLight: Three-dimensional Pointing System for Remote Video Communication

Jifei Ou, Sheng Kai Tang, and Hiroshi Ishii. 2013. synchroLight: three-dimensional pointing system for remote video communication. In CHI '13 Extended Abstracts on Human Factors in Computing Systems (CHI EA '13). ACM, New York, NY, USA, 169-174.

DOI: http://doi.acm.org/10.1145/2468356.2468387
Although the image quality and transmission speed of current remote video communication systems have vastly improved in recent years, their interactions remain detached from the physical world. This causes frustration and lowers working efficiency, especially when both sides are referencing physical objects and space. In this paper, we propose a remote pointing system named synchroLight that allows users to point at remote physical objects with synthetic light. The system extends the interaction of existing remote pointing systems from two-dimensional surfaces to three-dimensional space. The goal of this project is to approach a seamless experience in video communication.

exTouch: Spatially-Aware Embodied Manipulation of Actuated Objects Mediated by Augmented Reality

Shunichi Kasahara, Ryuma Niiyama, Valentin Heun, and Hiroshi Ishii. 2013. exTouch: Spatially-Aware Embodied Manipulation of Actuated Objects Mediated by Augmented Reality. In Proceedings of the seventh international conference on Tangible, embedded, and embodied interaction (TEI '13). ACM, New York, NY, USA.

DOI: http://doi.acm.org/10.1145/2460625.2460661
As domestic robots and smart appliances become increasingly common, they require a simple, universal interface to control their motion. Such an interface must support simple selection of a connected device, highlight its capabilities, and allow for intuitive manipulation. We propose "exTouch", an embodied spatially-aware approach to touch and control devices through an augmented reality mediated mobile interface. The "exTouch" system extends the user's touchscreen interactions into the real world by enabling spatial control over the actuated object. When users touch a device shown in live video on the screen, they can change its position and orientation through multi-touch gestures or by physically moving the screen in relation to the controlled object. We demonstrate that the system can be used for applications such as an omnidirectional vehicle, a drone, and moving furniture for a reconfigurable room.

MirrorFugue III: Conjuring the Recorded Pianist

Xiao Xiao, Anna Pereira and Hiroshi Ishii. 2013. MirrorFugue III: Conjuring the Recorded Pianist. In Proceedings of 13th conference on New Interfaces for Musical Expression (NIME '13). KAIST. Daejeon, South Korea.

The body channels rich layers of information when playing music, from intricate manipulations of the instrument to vivid personifications of expression. But when music is captured and replayed across distance and time, the performer’s body is too often trapped behind a small screen or absent entirely. This paper introduces MirrorFugue III, an interface to conjure the recorded performer by combining the moving keys of a player piano with life-sized projection of the pianist’s hands and upper body. Inspired by reflections on a lacquered grand piano, our interface evokes the sense that the virtual pianist is playing the physically moving keys. Through MirrorFugue III, we explore the question of how to viscerally simulate a performer’s presence to create immersive experiences. We discuss design choices, outline a space of usage scenarios and report reactions from users.

PneUI: Pneumatically Actuated Soft Composite Materials for Shape Changing Interfaces

Lining Yao, Ryuma Niiyama, Jifei Ou, Sean Follmer, Clark Della Silva, and Hiroshi Ishii. 2013. PneUI: pneumatically actuated soft composite materials for shape changing interfaces. In Proceedings of the 26th annual ACM symposium on User interface software and technology (UIST '13). ACM, New York, NY, USA, 13-22.

DOI: http://doi.acm.org/10.1145/2501988.2502037
This paper presents PneUI, an enabling technology to build shape-changing interfaces through pneumatically actuated soft composite materials. The composite materials integrate the capabilities of both input sensing and active shape output. This is enabled by the composites' multi-layer structures with different mechanical or electrical properties. The shape-changing states are computationally controllable through pneumatics and pre-defined structure. We explore the design space of PneUI through four applications: height-changing tangible phicons, a shape-changing mobile, a transformable tablet case, and a shape-shifting lamp.

2012

Towards Radical Atoms – Form-giving to Transformable Materials

Dávid Lakatos and Hiroshi Ishii. 2012. Towards Radical Atoms — Form-giving to transformable materials. In Proceedings of the 2012 IEEE 3rd International Conference on Cognitive Infocommunications (CogInfoCom), Kosice, Slovakia.

DOI: 10.1109/CogInfoCom.2012.6422023
Form, as the externalization of an idea, has been present in our civilization for several millennia. Humans have used their hands and tools to directly manipulate and alter/deform the shape of physical materials. Concurrently, we have been inventing tools in the digital domain that allow us to freely manipulate digital information. The next step in the evolution of form-giving is toward shape-changing materials, with tight coupling between their shape and an underlying digital model. In this paper we compare approaches for interaction design of these shape-shifting entities that we call Radical Atoms. We use three projects to elaborate on appropriate interaction techniques for both the physical and the virtual domains.

Second surface: multi-user spatial collaboration system based on augmented reality

Shunichi Kasahara, Valentin Heun, Austin S. Lee, and Hiroshi Ishii. 2012. Second surface: multi-user spatial collaboration system based on augmented reality. In SIGGRAPH Asia 2012 Emerging Technologies (SA '12). ACM, New York, NY, USA, Article 20, 4 pages.

DOI: http://doi.acm.org/10.1145/2407707.2407727
An environment for creative collaboration is significant for enhancing human communication and expressive activities, and many researchers have explored different collaborative spatial interaction technologies. However, most of these systems require special equipment and cannot adapt to everyday environments. We introduce Second Surface, a novel multi-user augmented reality system that fosters real-time interaction with user-generated content on top of the physical environment. This interaction takes place in the physical surroundings of everyday objects such as trees or houses. Our system allows users to place three-dimensional drawings, texts, and photos relative to such objects and share this expression with any other person who uses the same software at the same spot. Second Surface explores a vision that integrates collaborative virtual spaces into the physical space. Our system can provide an alternate reality that generates a playful and natural interaction in an everyday setup.

Jamming User Interfaces: Programmable Particle Stiffness and Sensing for Malleable and Shape-Changing Devices.

Sean Follmer, Daniel Leithinger, Alex Olwal, Nadia Cheng, and Hiroshi Ishii. 2012. Jamming user interfaces: programmable particle stiffness and sensing for malleable and shape-changing devices. In Proceedings of the 25th annual ACM symposium on User interface software and technology (UIST '12). ACM, New York, NY, USA, 519-528.

DOI: http://dx.doi.org/10.1145/2380116.2380181
Malleable and organic user interfaces have the potential to enable radically new forms of interactions and expressiveness through flexible, free-form and computationally controlled shapes and displays. This work specifically focuses on particle jamming as a simple, effective method for flexible, shape-changing user interfaces where programmatic control of material stiffness enables haptic feedback, deformation, tunable affordances and control gain. We introduce a compact, low-power pneumatic jamming system suitable for mobile devices, and a new hydraulic-based technique with fast, silent actuation and optical shape sensing. We enable jamming structures to sense input and function as interaction devices through two contributed methods for high-resolution shape sensing using: 1) index-matched particles and fluids, and 2) capacitive and electric field sensing. We explore the design space of malleable and organic user interfaces enabled by jamming through four motivational prototypes that highlight jamming's potential in HCI, including applications for tabletops, tablets and for portable shape-changing mobile devices.

Point and share: from paper to whiteboard

Misha Sra, Austin Lee, Sheng-Ying Pao, Gonglue Jiang, and Hiroshi Ishii. 2012. Point and share: from paper to whiteboard. In Adjunct proceedings of the 25th annual ACM symposium on User interface software and technology (UIST Adjunct Proceedings '12). ACM, New York, NY, USA, 23-24.

DOI: http://doi.acm.org/10.1145/2380296.2380309
Traditional writing instruments have the potential to enable new forms of interaction and collaboration through digital enhancement. This work specifically enables the user to utilize pen and paper as input mechanisms for content to be displayed on a shared interactive whiteboard. We introduce a pen cap with an infrared LED, an actuator and a switch. Pointing the pen cap at the whiteboard allows users to select and position a "canvas" on the whiteboard to display handwritten text, while the actuator enables resizing the canvas and the text. It is conceivable that anything one can write on paper anywhere could be displayed on an interactive whiteboard.

Amphorm

Shape-shifting materials have been part of sci-fi literature for decades. But if tomorrow we invent them, how are we going to communicate to them what shape we want them to morph into? If we look at our history, for thousands of years humans have used the dexterity of their hands as the primary means to alter the topology of their surroundings. While direct manipulation, as a primary method for form giving, allows for high-precision deformation, the scope of interaction is limited to the scale of the hand. In order to extend the scope of manipulation beyond the hand scale, tools were invented to reach further and to augment the capabilities of our hands. In this thesis, I propose "Amphorm", a perceptually equivalent example of Radical Atoms, our vision of interaction techniques for future, highly malleable, shape-shifting materials. "Amphorm" is a cylindrical kinetic sculpture that resembles a vase. Since "Amphorm" is a dual citizen between the digital and the physical world, its shape can be altered in both worlds. I describe novel interaction techniques for rapid shape deformation, both in the physical world through free-hand gestures and in the digital world through a graphical user interface. Additionally, I explore how the physical world could be synchronized with the digital world and how tools from both worlds can jointly alter dual citizens.

rainBottles: gathering raindrops of data from the cloud

Jinha Lee, Greg Vargas, Mason Tang, and Hiroshi Ishii. 2012. rainbottles: gathering raindrops of data from the cloud. In Proceedings of the 2012 ACM annual conference extended abstracts on Human Factors in Computing Systems Extended Abstracts (CHI EA '12). ACM, New York, NY, USA, 1901-1906.

DOI: http://doi.acm.org/10.1145/2212776.2223726
This paper introduces a design for a new way of managing the flow of information in the age of overflow. The device, rainBottles, collects virtual data and converts it into a virtual liquid that fills up specially designed glass bottles. The bottles then serve as an ambient interface displaying the quantity of information in a queue as well as a tangible controller for opening the applications associated with the data in the bottles. With customizable data relevance metrics, the bottles can also serve as filters by letting less relevant data overflow out of the bottle.
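The fill-and-overflow behavior is simple to express; a minimal sketch (class, capacity, and relevance scales are ours, and the real system renders the level as virtual liquid in physical bottles):

```python
# Minimal sketch: a bottle that fills with queued items ranked by
# relevance; once at capacity, the least relevant items overflow out.
class RainBottle:
    def __init__(self, capacity=10):
        self.capacity = capacity
        self.items = []  # (relevance, payload) pairs

    def add(self, relevance, payload):
        """Queue an item; return whatever overflows."""
        self.items.append((relevance, payload))
        self.items.sort(key=lambda it: it[0], reverse=True)
        overflow = self.items[self.capacity:]  # least relevant spill out
        self.items = self.items[:self.capacity]
        return overflow

    @property
    def fill_level(self):
        """0..1 value that would drive the liquid-level display."""
        return len(self.items) / self.capacity
```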

Point-and-Shoot Data

Stephanie Lin, Samuel Luescher, Travis Rich, Shaun Salzberg, and Hiroshi Ishii. 2012. Point-and-shoot data. In Proceedings of the 2012 ACM annual conference extended abstracts on Human Factors in Computing Systems Extended Abstracts (CHI EA '12). ACM, New York, NY, USA, 2027-2032.

DOI: http://doi.acm.org/10.1145/2223656.2223747
We explore the use of visible light as a wireless communication medium for mobile devices. We discuss the advantages of a human-perceptible communication medium with regard to user experience and create tools for direct manipulation of the communication channel.

KidCAD: digitally remixing toys through tangible tools

Sean Follmer and Hiroshi Ishii. 2012. KidCAD: digitally remixing toys through tangible tools. In Proceedings of the 2012 ACM annual conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 2401-2410.

DOI: http://doi.acm.org/10.1145/2208276.2208403
Children have great facility in the physical world, and can skillfully model in clay and draw expressive illustrations. Traditional digital modeling tools have focused on mouse, keyboard and stylus input. These tools can be complicated, making it difficult for young users to easily and quickly create exciting designs. We seek to bring physical interaction to digital modeling, to allow users to use existing physical objects as tangible building blocks for new designs. We introduce KidCAD, a digital clay interface for children to remix toys. KidCAD allows children to imprint 2.5D shapes from physical objects into their digital models by deforming a malleable gel input device, deForm. Users can mash up existing objects, edit and sculpt or draw new designs on a 2.5D canvas using physical objects, hands and tools as well as 2D touch gestures. We report on a preliminary user study with 13 children, ages 7 to 10, which provides feedback for our design and helps guide future work in tangible modeling for children.
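The imprinting operation reads naturally as heightfield math; a minimal sketch under the assumption that both the canvas and the captured imprint are 2.5D heightmaps (the function name and representation are ours):

```python
# Minimal sketch: "stamp" a captured 2.5D imprint into a digital clay
# canvas by taking the per-pixel maximum of the two reliefs.
import numpy as np

def stamp(canvas: np.ndarray, imprint: np.ndarray, x: int, y: int) -> np.ndarray:
    """Imprint a small heightmap into the canvas at offset (x, y)."""
    region = canvas[y:y + imprint.shape[0], x:x + imprint.shape[1]]
    clipped = imprint[:region.shape[0], :region.shape[1]]  # stay in bounds
    np.maximum(region, clipped, out=region)  # in-place union of reliefs
    return canvas
```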

People in books: using a FlashCam to become part of an interactive book for connected reading

Sean Follmer, Rafael (Tico) Ballagas, Hayes Raffle, Mirjana Spasojevic, and Hiroshi Ishii. 2012. People in books: using a FlashCam to become part of an interactive book for connected reading. In Proceedings of the ACM 2012 conference on Computer Supported Cooperative Work (CSCW '12). ACM, New York, NY, USA, 685-694.

DOI: http://doi.acm.org/10.1145/2145204.2145309
We introduce People in Books with FlashCam technology, a system that enables children and long-distance family members to act as characters in children's storybooks while they read stories together over a distance. By segmenting the video chat streams of the child and remote family member from their background surroundings, we create the illusion that the child and adult reader are immersed among the storybook illustrations. The illusion of inhabiting a shared story environment helps remote family members feel a sense of togetherness and encourages active reading behaviors for children ages three to five. People in Books is designed to fit into families' traditional reading practices, such as reading ebooks on couches or in bed via netbook or tablet computers. To accommodate this goal we implemented FlashCam, a computationally cost-effective and physically small background subtraction system for mobile devices that allows users to move locations and change lighting conditions while they engage in background-subtracted video communications. A lab evaluation compared People in Books with a conventional remote reading application. Results show that People in Books motivates parents and children to be more performative readers and encourages open-ended play beyond the story, while creating a strong sense of togetherness.

Radical Atoms: Beyond Tangible Bits, Toward Transformable Materials

Hiroshi Ishii, Dávid Lakatos, Leonardo Bonanni, and Jean-Baptiste Labrune. 2012. Radical atoms: beyond tangible bits, toward transformable materials. interactions 19, 1 (January 2012), 38-51.

DOI: http://doi.acm.org/10.1145/2065327.2065337
“Radical Atoms” is our vision of human interaction with future dynamic materials that are computationally reconfigurable. “Radical Atoms” was created to overcome the fundamental limitations of its precursor, the “Tangible Bits” vision. Tangible Bits - the physical embodiment of digital information and computation - was constrained by the rigidity of “atoms” in comparison with the fluidity of bits. This makes it difficult to represent fluid digital information in traditionally rigid physical objects, and inhibits dynamic tangible interfaces from being able to control or represent computational inputs and outputs. In order to augment the vocabulary of Tangible User Interfaces, or TUIs, we use dynamic representations such as co-located projections or “digital shadows”. However, the physical objects on the tabletop stay static and rigid. To overcome these limitations, we began to experiment with a variety of actuated and kinetic tangibles, which can transform their physical positions or shapes as an additional output modality beyond the traditional manual input mode of TUIs. Our vision of “Radical Atoms” is based on hypothetical, extremely malleable and reconfigurable materials that can be described by real-time digital models so that dynamic changes in digital information can be reflected by a dynamic change in physical state and vice-versa. Bidirectional synchronization is key to making Radical Atoms a tangible but dynamic representation and control of digital information, and enabling new forms of human-computer interaction. In this article, we review the original vision and limitations of Tangible Bits and introduce an array of actuated/kinetic tangibles that emerged in the past 10 years of Tangible Media Group’s research to overcome the issue of atoms’ rigidity. Then we illustrate our vision of interactions with Radical Atoms, which do not exist today but may be invented in the next 100 years by atom hackers (material scientists, self-organizing nano-robot engineers, etc.), and speculate on new interaction techniques and applications that would be enabled by Radical Atoms.

MirrorFugue2: Embodied Representation of Recorded Piano Performances

Xiao Xiao and Hiroshi Ishii. 2012. MirrorFugue2: Embodied Representation of Recorded Piano Performances. In Extended Abstracts of the 2012 international conference on Interactive Tabletops and Surfaces (ITS '12). ACM, New York, NY, USA.

We present MirrorFugue2, an interface for viewing recorded piano playing where video of the hands and upper body of a performer is projected on the surface of the instrument at full scale. Rooted in the idea that a performer's body plays a key role in channeling musical expression, we introduce an upper-body display, extending a previous prototype that demonstrated the benefits of a full-scale hands display for pedagogy. We describe two prototypes of MirrorFugue2 and discuss how the interface can benefit pedagogy, watching performances and collaborative playing.

2011

PingPong++: Community Customization in Games and Entertainment

Xiao Xiao, Michael S. Bernstein, Lining Yao, David Lakatos, Lauren Gust, Kojo Acquah, and Hiroshi Ishii. 2011. PingPong++: community customization in games and entertainment. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology (ACE '11), Teresa Romão, Nuno Correia, Masahiko Inami, Hirokasu Kato, Rui Prada, Tsutomu Terada, Eduardo Dias, and Teresa Chambel (Eds.). ACM, New York, NY, USA, Article 24, 6 pages.

DOI: http://doi.acm.org/10.1145/2071423.2071453
In this paper, we introduce PingPong++, an augmented ping pong table that applies Do-It-Yourself (DIY) and community contribution principles to the world of physical sports and play. PingPong++ includes an API for creating new visualizations, easily recreatable hardware, an end-user interface for those without programming experience, and a crowd data API for replaying and remixing past games. We discuss a range of contribution domains for PingPong++ and share the design, usage, feedback, and lessons for each domain. We then reflect on our process and outline a design space for community-contributed sports.

ZeroN: Mid-air Tangible Interaction Enabled by Computer-Controlled Magnetic Levitation

Jinha Lee, Rehmi Post, and Hiroshi Ishii. 2011. ZeroN: mid-air tangible interaction enabled by computer controlled magnetic levitation. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST '11). ACM, New York, NY, USA, 327-336.

DOI: http://doi.acm.org/10.1145/2047196.2047239
This paper presents ZeroN, a new tangible interface element that can be levitated and moved freely by computer in a three-dimensional space. ZeroN serves as a tangible representation of a 3D coordinate of the virtual world through which users can see, feel, and control computation. To accomplish this we developed a magnetic control system that can levitate and actuate a permanent magnet in a predefined 3D volume. This is combined with an optical tracking and display system that projects images on the levitating object. We present applications that explore this new interaction modality. Users are invited to place or move the ZeroN object just as they can place objects on surfaces. For example, users can place the sun above physical objects to cast digital shadows, or place a planet that will start revolving based on simulated physical conditions. We describe the technology, interaction scenarios and challenges, discuss initial observations, and outline future development.
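Magnetic levitation of this kind typically closes a fast feedback loop between position sensing and coil current. A minimal sketch of such a loop (the gains, the 1 kHz rate, and the 1-D simplification are our assumptions for illustration, not ZeroN's published controller):

```python
# Minimal sketch: 1-D PID control of coil current to hold a levitating
# magnet at a target height from a position sensor reading.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measured):
        """One control tick: return the commanded coil current."""
        error = setpoint - measured
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = PID(kp=120.0, ki=8.0, kd=15.0, dt=0.001)  # 1 kHz loop (assumed)
# each tick: coil_current = pid.update(target_height_m, sensed_height_m)
```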

Rope Revolution: Tangible and Gestural Rope Interface for Collaborative Play

Lining Yao, Sayamindu Dasgupta, Nadia Cheng, Jason Spingarn-Koff, Ostap Rudakevych, and Hiroshi Ishii. 2011. Rope Revolution: tangible and gestural rope interface for collaborative play. In Proceedings of the 8th International Conference on Advances in Computer Entertainment Technology (ACE '11), Teresa Romão, Nuno Correia, Masahiko Inami, Hirokasu Kato, Rui Prada, Tsutomu Terada, Eduardo Dias, and Teresa Chambel (Eds.). ACM, New York, NY, USA, Article 11, 8 pages.

DOI: http://doi.acm.org/10.1145/2071423.2071437
In this paper we describe Rope Revolution, a rope-based gaming system for collaborative play. After identifying popular rope games and activities around the world, we developed a generalized tangible rope interface that includes a compact motion-sensing and force-feedback module that can be used for a variety of rope-based games. Rope Revolution is designed to foster both co-located and remote collaborative experiences by using actual rope to connect players in physical activities across virtual spaces. Results from this study suggest that a tangible user interface with rich metaphors and physical feedback helps enhance the gaming experience, in addition to helping remote players feel connected across distances. We use this design as an example to motivate discussion on how to take advantage of the various physical affordances of common objects to build a generalized tangible interface for remote play.

Sourcemap: eco-design, sustainable supply chains, and radical transparency

Leo Bonanni. 2011. Sourcemap: eco-design, sustainable supply chains, and radical transparency. XRDS 17, 4 (June 2011), 22-26.

DOI: http://doi.acm.org/10.1145/1961678.1961681
Industry and consumers need tools to help make decisions that are good for communities and for the environment.

Duet for Solo Piano: MirrorFugue for Single User Playing with Recorded Performances

Xiao Xiao and Hiroshi Ishii. 2011. Duet for solo piano: MirrorFugue for single user playing with recorded performances. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA '11). ACM, New York, NY, USA, 1285-1290.

DOI: http://doi.acm.org/10.1145/1979742.1979762
MirrorFugue is an interface that supports symmetric, real-time collaboration on the piano using spatial metaphors to communicate the hand gesture of collaborators. In this paper, we present an extension of MirrorFugue to support single-user interactions with recorded material and outline usage scenarios focusing on practicing and self-reflection. Based on interviews with expert musicians, we discuss how single-user interactions on MirrorFugue relate to larger themes in music learning and suggest directions for future research.

RopePlus: Bridging Distances with Social and Kinesthetic Rope Games

Lining Yao, Sayamindu Dasgupta, Nadia Cheng, Jason Spingarn-Koff, Ostap Rudakevych, and Hiroshi Ishii. 2011. RopePlus: bridging distances with social and kinesthetic rope games. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA '11). ACM, New York, NY, USA, 1729-1734.

DOI: http://doi.acm.org/10.1145/1979742.1979836
Rope-based games such as jump rope, tug-of-war, and kite-flying promote physical activity and social interaction among people of all ages, and especially in children during the development of their coordination skills and physical fitness. Our RopePlus system builds on those traditional games by enabling players to participate remotely through interacting with ropes that connect physical and virtual spaces. The RopePlus platform is centered around the rope as a tangible interface with various hardware extensions to allow for multiple playing modes. In this paper, we present two games that have been implemented in detail: a kite-flying game called Multi-Fly and a jump-rope game called Multi-Jump. Our work aims to expand tangible interface gaming to real-time social playing environments.

Multi-Jump: Jump Roping Over Distances

Lining Yao, Sayamindu Dasgupta, Nadia Cheng, Jason Spingarn-Koff, Ostap Rudakevych, and Hiroshi Ishii. 2011. Multi-jump: jump roping over distances. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA '11). ACM, New York, NY, USA, 1729-1734.

DOI: http://doi.acm.org/10.1145/1979742.1979836
Jump roping, a game in which one or more people twirl a rope while others jump over the rope, promotes social interaction among children while developing their coordination skills and physical fitness. However, the traditional game requires that players be in the same physical location. Our ‘Multi-Jump’ jump-roping game platform builds on the traditional game by allowing players to participate remotely by employing an augmented rope system. The game involves full-body motion in a shared game space and is enhanced with live video feeds, player rewards and music. Our work aims to expand exertion interface gaming, or games that deliberately require intense physical effort, with genuine tangible interfaces connected to real-time shared social gaming environments.

Direct and Gestural Interaction with Relief: A 2.5D Shape Display

Daniel Leithinger, David Lakatos, Anthony DeVincenzi, Matthew Blackshaw, and Hiroshi Ishii. 2011. Direct and gestural interaction with relief: a 2.5D shape display. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST '11). ACM, New York, NY, USA, 541-548.

DOI: http://doi.acm.org/10.1145/2047196.2047268
Actuated shape output provides novel opportunities for experiencing, creating and manipulating 3D content in the physical world. While various shape displays have been proposed, a common approach utilizes an array of linear actuators to form 2.5D surfaces. Through identifying a set of common interactions for viewing and manipulating content on shape displays, we argue why input modalities beyond direct touch are required. The combination of free hand gestures and direct touch provides additional degrees of freedom and resolves input ambiguities, while keeping the locus of interaction on the shape output. To demonstrate the proposed combination of input modalities and explore applications for 2.5D shape displays, two example scenarios are implemented on a prototype system.

Kinected Conference: Augmenting Video Imaging with Calibrated Depth and Audio

Anthony DeVincenzi, Lining Yao, Hiroshi Ishii, and Ramesh Raskar. 2011. Kinected conference: augmenting video imaging with calibrated depth and audio. In Proceedings of the ACM 2011 conference on Computer supported cooperative work (CSCW '11). ACM, New York, NY, USA, 621-624.

DOI: http://doi.acm.org/10.1145/1958824.1958929
The proliferation of broadband and high-speed Internet access has, in general, democratized the ability to engage in videoconferencing. However, current video systems do not meet their full potential, as they are restricted to a simple display of unintelligent 2D pixels. In this paper we present a system for enhancing distance-based communication by augmenting the traditional video conferencing system with additional attributes beyond two-dimensional video. We explore how expanding a system's understanding of spatially calibrated depth and audio alongside a live video stream can generate semantically rich three-dimensional pixels containing information regarding their material properties and location. We discuss specific scenarios that explore features such as synthetic refocusing, gesture-activated privacy, and spatiotemporal graphic augmentation.

Shape-changing interfaces.

Marcelo Coelho and Jamie Zigelbaum. 2011. Shape-changing interfaces. Personal Ubiquitous Comput. 15, 2 (February 2011), 161-173.

DOI: http://dx.doi.org/10.1007/s00779-010-0311-y
The design of physical interfaces has been constrained by the relative akinesis of the material world. Current advances in materials science promise to change this. In this paper, we present a foundation for the design of shape-changing surfaces in human-computer interaction. We provide a survey of shape-changing materials and their primary dynamic properties, define the concept of soft mechanics within an HCI context, and describe a soft mechanical alphabet that provides the kinetic foundation for the design of four design probes: Surflex, SpeakCup, Sprout I/O, and Shutters. These probes explore how individual soft mechanical elements can be combined to create large-scale transformable surfaces, which can alter their topology, texture, and permeability. We conclude by providing application themes for shape-changing materials in HCI and directions for future work.

MirrorFugue: Communicating Hand Gesture in Remote Piano Collaboration

Xiao Xiao and Hiroshi Ishii. 2011. MirrorFugue: communicating hand gesture in remote piano collaboration. In Proceedings of the fifth international conference on Tangible, embedded, and embodied interaction (TEI '11). ACM, New York, NY, USA, 13-20.

DOI: http://doi.acm.org/10.1145/1935701.1935705
Playing a musical instrument involves a complex set of continuous gestures, both to play the notes and to convey expression. To learn an instrument, a student must learn not only the music itself but also how to perform these bodily gestures. We present MirrorFugue, a set of three interfaces on a piano keyboard designed to visualize the hand gesture of a remote collaborator. Based on their spatial configurations, we call our interfaces Shadow, Reflection, and Organ. We describe the configurations and detail studies of our designs on synchronous, remote collaboration, focusing specifically on remote lessons for beginners. Based on our evaluations, we conclude that displaying the to-scale hand gestures of a teacher at the locus of interaction can improve remote piano learning for novices.

Recompose: Direct and Gestural Interaction with an Actuated Surface

Matthew Blackshaw, Anthony DeVincenzi, David Lakatos, Daniel Leithinger, and Hiroshi Ishii. 2011. Recompose: direct and gestural interaction with an actuated surface. In Proceedings of the 2011 annual conference extended abstracts on Human factors in computing systems (CHI EA '11). ACM, New York, NY, USA, 1237-1242.

DOI: http://dx.doi.org/10.1145/1979742.1979754
In this paper we present Recompose, a new system for manipulation of an actuated surface. By collectively utilizing the body as a tool for direct manipulation alongside gestural input for functional manipulation, we show how a user is afforded unprecedented control over an actuated surface. We describe a number of interaction techniques exploring the shared space of direct and gestural input, demonstrating how their combined use can greatly enhance creation and manipulation beyond unaided human capability.

deFORM: An Interactive Malleable Surface For Capturing 2.5D Arbitrary Objects, Tools and Touch

Sean Follmer, Micah Johnson, Edward Adelson, and Hiroshi Ishii. 2011. deForm: an interactive malleable surface for capturing 2.5D arbitrary objects, tools and touch. In Proceedings of the 24th annual ACM symposium on User interface software and technology (UIST '11). ACM, New York, NY, USA, 527-536.

DOI: http://doi.acm.org/10.1145/2047196.2047265
We introduce a novel input device, deForm, that supports 2.5D touch gestures, tangible tools, and arbitrary objects through real-time structured light scanning of a malleable surface of interaction. DeForm captures high-resolution surface deformations and 2D grey-scale textures of a gel surface through a three-phase structured light 3D scanner. This technique can be combined with IR projection to allow for invisible capture, providing the opportunity for co-located visual feedback on the deformable surface. We describe methods for tracking fingers, whole hand gestures, and arbitrary tangible tools. We outline a method for physically encoding fiducial marker information in the height map of tangible tools. In addition, we describe a novel method for distinguishing between human touch and tangible tools, through capacitive sensing on top of the input surface. Finally we motivate our device through a number of sample applications.
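The three-phase decoding step has a standard closed form; a minimal sketch of it (this is textbook phase-shifting profilometry stated for three 120-degree shifts, not necessarily the authors' exact decoder):

```python
# Minimal sketch: recover the wrapped phase of a projected sinusoid from
# three captures shifted by -120/0/+120 degrees. Surface height is then
# proportional to the (unwrapped) phase deviation from a flat reference.
import numpy as np

def wrapped_phase(i1: np.ndarray, i2: np.ndarray, i3: np.ndarray) -> np.ndarray:
    """i1..i3: float intensity images under the three phase shifts."""
    return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)
```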

2010

OnObject: gestural play with tagged everyday objects

Keywon Chung, Michael Shilman, Chris Merrill, and Hiroshi Ishii. 2010. OnObject: gestural play with tagged everyday objects. In Adjunct proceedings of the 23rd annual ACM symposium on User interface software and technology (UIST '10). ACM, New York, NY, USA, 379-380. DOI: https://doi.org/10.1145/1866218.1866229

CopyCAD: remixing physical objects with copy and paste from the real world

Sean Follmer, David Carr, Emily Lovell, and Hiroshi Ishii. 2010. CopyCAD: remixing physical objects with copy and paste from the real world. In Adjunct proceedings of the 23rd annual ACM symposium on User interface software and technology (UIST '10). ACM, New York, NY, USA, 381-382. DOI: https://doi.org/10.1145/1866218.1866230

Beyond Transparency: Collective Engagement in Sustainable Design

Leonardo Bonanni. Beyond Transparency: Collective Engagement in Sustainable Design. Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010.

For a timely answer to the question of sustainability, or how to provide for future generations, there needs to be shared accounting of our social and physical resources. Supply chain transparency makes it possible to map resource flows and ensure dependable production while avoiding social and environmental problems. Open channels of communication can support a collective effort to account for the impacts of supply chains and engage more people in the invention of long-term solutions, or sustainable design. This thesis proposes a crowd-sourced approach to resource accounting through the democratization of sustainable design. A web-based social network called Sourcemap was built to link diverse stakeholders through an open forum for supply chain transparency and environmental assessment. The scalable system points the way towards comprehensive sustainability accounting through the distributed verification of industrial practices. Sourcemap was developed over a two-year period in partnership with regional organizations, large businesses and SMEs. Small business case studies show that an open social media platform can motivate sustainable practices at an enterprise level and on a regional scale. The public-facing supply chain publishing platform actively engages communities of producers, experts, consumers and oversight groups. Thousands of user-generated contributions point towards the need to improve the quality of transparency to form a broadly accessible resource for sustainability accounting.

Virtual Guilds: Collective Intelligence and the Future of Craft

Leo Bonanni and Amanda Parkes. 2010. Virtual Guilds: Collective Intelligence and the Future of Craft. The Journal of Modern Craft 3, 2 (July 2010), 179-190.

DOI: http://dx.doi.org/10.2752/174967810X12774789403564
In its most basic definition, craft refers to the skilled practice of making things, which is shaped as much by technological advancements as by cultural practices. Richard Sennett discusses the nature of craftsmanship as an enduring, basic human impulse, the desire to do a job well for its own sake [1]. This encompasses a much broader context than skilled labor and promotes an objective standard of excellence which incorporates shapers of culture, policy, and technology as craftsmen. The emerging nature of craft is transdisciplinary in its formation and must consider how emerging materials, processes and cultures influence the objects we make, and how the processes of design and production can be used to reflect new social values and to change cultural practices. In order to re-think the kind of objects we make, it is necessary to rethink the way we craft our objects. Digital technologies and media are defining a new sort of craft, seamlessly blending technology, design, and production into the post-industrial landscape. As an early pioneer in redefining craftsmanship to include digital processes, Malcolm McCullough explored the computer as a craft medium inviting interpretation and subtleties, with the combined skill sets of the machine and the human (both mind and hands) providing a structured system of transformations resulting in a crafted object [2]. The nature of digital technologies also allows craft to evolve into a form which is decentralized and distributed, and can give rise to excellence through a collective desire and a combined multiplicity of knowledge through community.

Craft is inherently a social activity, shaped by communal resources and motivations. The collective approach of craft communities, or guilds, is characterized by the master-apprentice model, where practitioners devote significant time passing on their skills to the next generation. The open source software movement embodies the communal character and the highly skilled practices of craft guilds. Until recently, skilled handicraft relied on hands-on teaching and access to local physical resources. Mass media and the internet make it possible to transmit skills and resources to isolated individuals, making possible entirely new kinds of distributed craft communities. These "Virtual Guilds" form at the margins of established domains, extending the reach of specialized knowledge and technology. Virtual Guilds benefit from the free exchange of expert information to bring about innovation in sometimes neglected domains. The growth of open-source software projects provides the model by which dispersed, collective innovation becomes possible in other domains. Shared resources maintained by a socially motivated community form the backbone of these non-commercial efforts. Digital channels of communication can extend this free exchange of information to the domain of craft, so that specialized designs and processes can be shared among a wide audience. Online distribution provides access to rare materials and tools and provides a market for craft products. Several Virtual Guilds exist today, and they are contributing important inventions and new domains to often neglected markets. These communities of skilled practitioners are characterized by their marginal nature, where the free and open exchange of ideas is carried forward for collective benefit. At the same time, the popularity of Virtual Guilds and the commercial success of their inventions endanger the free exchange of information on which they are built.

The survival of collective craft communities is important to under-served groups and for technological innovation, so it is essential that more practitioners engage in collective action. The new generation of digital design and fabrication tools lays the groundwork for more skilled craftspeople to collectively expand on their practice.

Construction by replacement: a new approach to simulation modeling

James Hines, Thomas Malone, Paulo Gonçalves, George Herman, John Quimby, Mary Murphy-Hoye, James Rice, James Patten, and Hiroshi Ishii. 2010. Construction by replacement: a new approach to simulation modeling. System Dynamics Review (July 2010).

DOI: http://dx.doi.org/10.1002/sdr.437
Simulation modeling can be valuable in many areas of management science, but it is often costly, time consuming, and difficult to do. To reduce these problems, system dynamics researchers have previously developed standard pieces of model structure, called molecules, that can be reused in different models. However, the models assembled from these molecules often lacked feedback loops and generated few, if any, insights. This paper describes a new and more promising approach to using molecules in system dynamics modeling. The heart of the approach is a systematically organized library (or taxonomy) of predefined model components, or molecules, and a set of software tools for replacing one molecule with another. Users start with a simple generic model and progressively replace parts of the model with more specialized molecules from a systematically organized library of predefined components. These substitutions either create a new running model automatically or request further manual changes from the user. The paper describes our exploration using this approach to construct system dynamics models of supply chain processes in a large manufacturing company. The experiment included developing an innovative “tangible user interface” and a comprehensive catalog of system dynamics molecules. The paper concludes with a discussion of the benefits and limitations of this approach.

Small Business Applications of Sourcemap: A Web Tool for Sustainable Design and Supply Chain Transparency

Bonanni, L., Hockenberry, M., Zwarg, D., Csikszentmihalyi, C., and Ishii, H. 2010. Small business applications of sourcemap: a web tool for sustainable design and supply chain transparency. In Proceedings of the 28th international Conference on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010). CHI '10. ACM, New York, NY, 937-946.

DOI: http://doi.acm.org/10.1145/1753326.1753465
This paper introduces sustainable design applications for small businesses through the Life Cycle Assessment and supply chain publishing platform Sourcemap.org. This web-based tool was developed through a year-long participatory design process with five small businesses in Scotland and in New England. Sourcemap was used as a diagnostic tool for carbon accounting, design and supply chain management. It offers a number of ways to market sustainable practices through embedded and printed visualizations. Our experiences confirm the potential of web sustainability tools and social media to expand the discourse and to negotiate the diverse goals inherent in social and environmental sustainability.
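To make the carbon-accounting idea concrete, here is a minimal sketch of a supply chain footprint roll-up (the structure and emission factors are illustrative placeholders of ours, not Sourcemap's actual model or data):

```python
# Minimal sketch: a footprint as the sum of each part's embodied
# emissions plus transport emissions proportional to mass and distance.
def footprint_kg_co2(parts, transport_factor=0.0001):
    """parts: dicts with mass_kg, embodied_kg_co2_per_kg, km shipped.
    transport_factor: kg CO2 per kg-km (assumed placeholder value)."""
    total = 0.0
    for p in parts:
        total += p["mass_kg"] * p["embodied_kg_co2_per_kg"]   # production
        total += p["mass_kg"] * p["km"] * transport_factor    # shipping
    return total

bicycle = [
    {"mass_kg": 2.0, "embodied_kg_co2_per_kg": 12.0, "km": 8000},  # frame
    {"mass_kg": 0.5, "embodied_kg_co2_per_kg": 2.0,  "km": 1200},  # tires
]
print(footprint_kg_co2(bicycle))  # 24 + 1.6 + 1.0 + 0.06 = 26.66 kg CO2
```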

Beyond: collapsible tools and gestures for computational design

Jinha Lee and Hiroshi Ishii. 2010. Beyond: collapsible tools and gestures for computational design. In Proceedings of the 28th of the international conference extended abstracts on Human factors in computing systems (CHI EA '10). ACM, New York, NY, USA, 3931-3936.

DOI: http://doi.acm.org/10.1145/1753846.1754081
Since the invention of the personal computer, digital media has remained separate from the physical world, blocked by a rigid screen. In this paper, we present Beyond, an interface for 3-D design where users can directly manipulate digital media with physically retractable tools and hand gestures. When pushed onto the screen, these tools physically collapse and project themselves onto the screen, letting users feel as if they were inserting the tools into the digital space beyond the screen. The aim of Beyond is to make the digital 3-D design process straightforward and more accessible to general users by extending physical affordances to the digital space beyond the computer screen.

Tangible Interfaces for Art Restoration

Bonanni, L., Seracini, M., Xiao, X., Hockenberry, M., Costanzo, B.C., Shum, A., Teil, R., Speranza, A., and Ishii, H. 2010. Tangible Interfaces for Art Restoration. International Journal of Creative Interfaces and Computer Graphics 1, 54-66. DOI: 10.4018/jcicg.2010010105

Few people experience art the way a restorer does: as a tactile, multi-dimensional and ever-changing object. The authors investigate a set of tools for the distributed analysis of artworks in physical and digital realms. Their work is based on observation of professional art restoration practice and rich data available through multi-spectral imaging. The article presents a multidisciplinary approach to develop interfaces usable by restorers, students and amateurs. Several interaction techniques were built using physical metaphors to navigate the layers of information revealed by multi-spectral imaging, prototyped using single- and multi-touch displays. The authors built modular systems to accommodate the technical needs and resources of various institutions and individuals, with the aim to make high-quality art diagnostics possible on different hardware platforms, as well as rich diagnostic and historic information about art available for education and research through a cohesive set of web-based tools instantiated in physical interfaces and public installations.

Relief: A Scalable Actuated Shape Display

Leithinger, D. and Ishii, H. 2010. Relief: a scalable actuated shape display. In Proceedings of the Fourth international Conference on Tangible, Embedded, and Embodied interaction (Cambridge, Massachusetts, USA, January 24 - 27, 2010). TEI '10. ACM, New York, NY, 221-222.

DOI: http://doi.acm.org/10.1145/1709886.1709928
Relief is an actuated tabletop display, which is able to render and animate three-dimensional shapes with a malleable surface. It allows users to experience and form digital models like geographical terrain in an intuitive manner. The tabletop surface is actuated by an array of 120 motorized pins, which are controlled with a low-cost, scalable platform built upon open-source hardware and software tools. Each pin can be addressed individually and senses user input like pulling and pushing.
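The per-pin architecture maps naturally onto an array abstraction; a minimal sketch of a driver layer (the grid arrangement, normalization, and API names are our assumptions, not Relief's actual firmware):

```python
# Minimal sketch: a 120-pin actuated surface that latches target heights
# and treats deviation between target and sensed position as user input.
import numpy as np

ROWS, COLS = 12, 10  # 120 pins; the arrangement here is assumed

class PinDisplay:
    def __init__(self):
        self.target = np.zeros((ROWS, COLS))  # normalized heights 0..1
        self.sensed = np.zeros((ROWS, COLS))  # updated from pin encoders

    def render(self, heightmap: np.ndarray) -> None:
        """Clamp and latch a heightmap; a real driver would stream
        per-pin setpoints to the motor controllers here."""
        self.target = np.clip(heightmap, 0.0, 1.0)

    def user_input(self, threshold: float = 0.05) -> np.ndarray:
        """Pins displaced from their setpoint beyond a threshold are
        treated as being pushed or pulled by the user."""
        return np.abs(self.sensed - self.target) > threshold
```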

g-stalt: a chirocentric, spatiotemporal, and telekinetic gestural interface.

Jamie Zigelbaum, Alan Browning, Daniel Leithinger, Olivier Bau, and Hiroshi Ishii. 2010. g-stalt: a chirocentric, spatiotemporal, and telekinetic gestural interface. In Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (TEI '10). ACM, New York, NY, USA, 261-264.

DOI: http://dl.acm.org/citation.cfm?id=1709939
In this paper we present g-stalt, a gestural interface for interacting with video. g-stalt is built upon the g-speak spatial operating environment (SOE) from Oblong Industries. The version of g-stalt presented here is realized as a three-dimensional graphical space filled with over 60 cartoons. These cartoons can be viewed and rearranged along with their metadata using a specialized gesture set. g-stalt is designed to be chirocentric, spatiotemporal, and telekinetic.

Play it by eye, frame it by hand! Gesture Object Interfaces to enable a world of multiple projections.

Cati Vaucelle. Play it by eye, frame it by hand! Gesture Object Interfaces to enable a world of multiple projections. Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010.

DOI: http://hdl.handle.net/1721.1/61936
Tangible Media as an area has not explored how the tangible handle is more than a marker or place-holder for digital data. Tangible Media can do more. It has the power to materialize and redefine our conception of space and content during the creative process. It can vary from an abstract token that represents a movie to an anthropomorphic plush that reflects the behavior of a sibling during play. My work begins by extending tangible concepts of representation and token-based interactions into movie editing and play scenarios. Through several design iterations and research studies, I establish tangible technologies to drive visual and oral perspectives along with finalized creative works, all during a child's play and exploration.

I define the framework, Gesture Object Interfaces, expanding on the fields of Tangible User Interaction and Gesture Recognition. Gesture is a mechanism that can reinforce or create the anthropomorphism of an object. It can give the object life. A Gesture Object is an object in hand while doing anthropomorphized gestures. Gesture Object Interfaces engender new visual and narrative perspectives as part of automatic film assembly during children's play. I generated a suite of automatic film assembly tools accessible to diverse users. The tools that I designed allow for capture, editing and performing to be completely indistinguishable from one another. Gestures integrated with objects become a coherent interface on top of natural play. I built a distributed, modular camera environment and gesture interaction to control that environment. The goal of these new technologies is to motivate children to take new visual and narrative perspectives.

In this dissertation I present four tangible platforms that I created as alternatives to the usual fragmented and sequential capturing, editing and performing of narratives available to users of current storytelling tools. I developed Play it by Eye, Frame it by Hand, a new generation of narrative tools that shift the frame of reference from the eye to the hand, from the viewpoint (where the eye is) to the standpoint (where the hand is). In Play it by Eye, Frame it by Hand environments, children discover atypical perspectives through the lens of everyday objects. When using Picture This!, children imagine how an object would appear relative to the viewpoint of the toy. They iterate between trying and correcting in a world of multiple perspectives. The results are entirely new genres of child-created films, where children finally capture the cherished visual idioms of action and drama. I report my design process over the course of four tangible research projects that I evaluate during qualitative observations with over one hundred 4- to 14-year-old users. Based on these research findings, I propose a class of moviemaking tools that transform the way users interpret the world visually, and through storytelling.

WOW pod

Vaucelle, C., Shada, S., and Jahn, M. 2010. WOW pod. In Proceedings of the 28th of the international Conference Extended Abstracts on Human Factors in Computing Systems (Atlanta, Georgia, USA, April 10 - 15, 2010). CHI EA '10. ACM, New York, NY, 4813-4816.

DOI: http://doi.acm.org/10.1145/1753846.1754237
WOW Pod is an immersive architectural solution for the advanced massively multiplayer online role-playing gamer that provides for and anticipates all of life's needs. Inside, the player finds him/herself comfortably seated in front of the computer screen with easy-to-reach water, pre-packaged food, and a toilet conveniently placed underneath a built-in throne.

OnObject: programming of physical objects for gestural interaction

Keywon Chung. OnObject: programming of physical objects for gestural interaction. Thesis (M.S.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2010.

DOI: http://hdl.handle.net/1721.1/61943
Tangible User Interfaces (TUIs) have fueled our imagination about the future of computational user experience by coupling physical objects and activities with digital information. Despite their conceptual popularity, TUIs are still difficult and time-consuming to construct, requiring custom hardware assembly and software programming by skilled individuals. This limitation makes it impossible for end users and designers to interactively build TUIs that suit their context or embody their creative expression. OnObject enables novice end users to turn everyday objects into gestural interfaces through the simple act of tagging. Wearing a sensing device, a user adds a behavior to a tagged object by grabbing the object, demonstrating a trigger gesture, and specifying a desired response. Following this simple Tag-Gesture-Response programming grammar, novice end users are able to transform mundane objects into gestural interfaces in 30 seconds or less. Instead of being exposed to low-level development tasks, users can focus on creating an enjoyable mapping between gestures and media responses. The design of OnObject introduces a novel class of Human-Computer Interaction (HCI): gestural programming of situated physical objects. This thesis first outlines the research challenge and the proposed solution. It then surveys related work to identify the inspirations and differentiations from existing HCI and design research. Next, it describes the sensing and programming hardware and gesture event server architecture. Finally, it introduces a set of applications created with OnObject and reports observations from user sessions.
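
The Tag-Gesture-Response grammar described above maps a (tagged object, demonstrated gesture) pair to a media response. As a rough illustration of that programming model only (all names are hypothetical; the thesis does not publish OnObject's actual API), a minimal sketch in Python:

```python
# Minimal sketch of a Tag-Gesture-Response binding table.
# Illustrative assumptions only, not OnObject's real implementation.

class GestureInterface:
    def __init__(self):
        self.bindings = {}  # (tag_id, gesture) -> response callable

    def program(self, tag_id, gesture, response):
        """Tag: grab a tagged object; Gesture: demonstrate a trigger;
        Response: specify what should happen when it recurs."""
        self.bindings[(tag_id, gesture)] = response

    def on_gesture(self, tag_id, gesture):
        """Invoked by the wearable sensing device when it recognizes
        a gesture performed with a grabbed, tagged object."""
        action = self.bindings.get((tag_id, gesture))
        if action is not None:
            action()

# Example: turn a plush toy into a sound toy in a few seconds.
ui = GestureInterface()
ui.program("plush_toy", "shake", lambda: print("play giggle.wav"))
ui.on_gesture("plush_toy", "shake")  # -> play giggle.wav
```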

Bosu: a physical programmable design tool for transformability with soft mechanics

Amanda Parkes and Hiroshi Ishii. 2010. Bosu: a physical programmable design tool for transformability with soft mechanics. In Proceedings of the 8th ACM Conference on Designing Interactive Systems (DIS '10). ACM, New York, NY, USA, 189-198.

DOI: http://dx.doi.org/10.1145/1858171.1858205

Tangible Interfaces for Art Restoration

Few people experience art the way a restorer does: as a tactile, multi-dimensional and ever-changing object. The authors investigate a set of tools for the distributed analysis of artworks in physical and digital realms. Their work is based on observation of professional art restoration practice and the rich data available through multi-spectral imaging. The article presents a multidisciplinary approach to developing interfaces usable by restorers, students and amateurs. Several interaction techniques were built using physical metaphors to navigate the layers of information revealed by multi-spectral imaging, prototyped using single- and multi-touch displays. The authors built modular systems to accommodate the technical needs and resources of various institutions and individuals, with the aim of making high-quality art diagnostics possible on different hardware platforms, as well as making rich diagnostic and historic information about art available for education and research through a cohesive set of web-based tools instantiated in physical interfaces and public installations.

2009

Trackmate: Large-Scale Accessibility of Tangible User Interfaces

Adam Kumpf. Trackmate: Large-Scale Accessibility of Tangible User Interfaces. Thesis (M.S.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2009.

There is a long history of Tangible User Interfaces (TUI) in the community of human-computer interaction, but surprisingly few of these interfaces have made it beyond lab and gallery spaces. This thesis explores how the research community may begin to remedy the disconnect between modern TUIs and the everyday computing experience via the creation and dissemination of Trackmate, an accessible (both ubiquitous and enabling) tabletop tangible user interface that scales to a large number of users with minimal hardware and configuration overhead. Trackmate is entirely open source and designed: to be community-centric; to leverage common objects and infrastructure; to provide a low floor, high ceiling, and wide walls for development; to allow user modifications and improvisation; to be shared easily via the web; and to work alongside a broad range of existing applications and new research interface prototypes.

Ubiquitous Computing in Chaos and Tangible Bits (in Japanese) 混迷するユビキタスの未来とタンジブル・ビット

Hiroshi Ishii, Ubiquitous Computing in Chaos and Tangible Bits (in Japanese), Next Generation Service Forum, Focus Column, June 19, 2009.

The Confused Future of Ubiquitous Computing and Tangible Bits

■ "Ubiquitous" tainted by marketing

The discourse around "ubiquitous" is now in deep confusion. The dictionary meaning, "present everywhere," has drifted, and in the Japanese media the word seems to be used to mean multifunctional mobile computing with network access "anytime, anywhere." The vision of a ubiquitous era in which each person uses many computers is certainly effective advertising copy for promoting the sale of small information and communication devices. But "anytime, anywhere" is a worn-out slogan that surfaced again and again when the "advanced information society" of the 1980s and the "multimedia society" of the 1990s were being celebrated. Will "ubiquitous," like those slogans, turn out to be a passing fashion, soon forgotten? Does ubiquitous computing have no future? The answer, I believe, depends on understanding where it began.

■ The origins of ubiquitous computing

It was in 1991 that the late Mark Weiser published the concept of ubiquitous computing in Scientific American under the title "The Computer for the 21st Century." "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it."(*1) This opening passage expresses the spirit and philosophy of ubiquitous computing most clearly. The vision was for computers to dissolve completely into the environment and disappear; it sketched an ideal of the interface precisely from the standpoint of how people perceive a world permeated by technology. Unfortunately, the concrete prototypes introduced in the paper (information and communication terminals with screens of various sizes) never reached a level that adequately conveyed his ideal of the "invisible computer." The word "ubiquitous" that he used in the paper did not clearly express the vision he truly wanted to realize.

■ An email from Mark Weiser

In fact, Weiser himself seems to have been quite concerned about misunderstandings of ubiquitous computing. Here is part of an email(*3) I received from him on January 26, 1997, after he had read the "Tangible Bits" paper(*2):

"I have one favor to ask. As a former tenured professor, I understand very well the need to distinguish one's own work from all the research that came before. And I greatly appreciate your dedication to me. Thank you! My favor is this: I would like your help in keeping the misunderstandings that have grown out of the name 'ubiquitous computing' from spreading. Ubiquitous computing was never just about making 'computers' ubiquitous. It was about dissolving the computer into the environment as a medium, exactly as in your work. [...] Because it invites misunderstanding, I have tried to stop saying 'ubiquitous computing.' But the term kept weighing on me, so I took to using it as an umbrella for all the different work I am involved in, including Things That Think. I used 'augmented reality' for a while, but that too came to carry a different meaning. I have begun using 'calm technology' as a theme, which suits the goal better than a research project. 'Tangible Bits' is wonderful and might make a good umbrella term, but then it would no longer name your research project! Wouldn't it benefit us all to raise one common banner and define our differences beneath it? Though we would struggle over what to call it. In any case, this is terrific work. I look forward to visiting MIT soon and talking with you. - Mark"

■ The importance of naming

I felt a jolt of intellectual excitement that Weiser resonated with my "Tangible Bits" vision, and that he so clearly pointed out how it was connected to his philosophy of ubiquitous computing through a deep underground stream. As he wrote in his email, "ubiquitous" was an ill-fitting label for his ideas. Poorly understood and overused in marketing contexts, the label "ubiquitous" seems fated, regrettably, to the same passing-fashion destiny as "new media" and "multimedia" before it. Had Weiser lived, he would surely have abandoned the label "ubiquitous computing" and built a new vision around the "invisible computer" as its core concept.

How many computers there are per person, and whether they are distributed or centralized, portable or embedded in the environment: these questions were, properly speaking, irrelevant to his ultimate interface ideal. Weiser later tried to emphasize ambient interfaces with the term "Calm Technology," but ubiquitous computing, misread as "computers everywhere," had already taken on a life of its own.

■ What Tangible Bits is

The aim of the "Tangible User Interface" (TUI) that I advocate is to seamlessly fuse interaction with the digital information inside computers with the way people interact with objects in the physical world. The basic definition and characteristic of a TUI is the design of an interface that can be directly touched and manipulated, and since 1996 I have presented a wide variety of TUI prototypes. The purest form, the input/output-unified TUI, integrates input and output completely: using force-display technology that presents haptic information to people, it employs actual physical motion as its representational output. There is no "intangible" representation that cannot be directly touched; everything consists solely of tangible physical media, and that is what makes it a "pure TUI." This opens a path to making the physical world itself the interface to the digital world, and makes it possible to render our window onto today's complex, opaque digital world cognitively "transparent."

Conceptually, these TUIs are close to the abacus. The abacus physically represents information, decimal numbers, in the positions of its beads. With ten fingers one touches the beads directly to manipulate and compute that information. No boundary exists there between input and output. The clarity and directness of a physical interface in which the representation of information and the means of manipulating it are tightly coupled: that is the hallmark of the abacus. The TUI takes these qualities of the abacus, adds the force display as a means of presenting digital information, and wraps bits (digital information) in the clothing of atoms (the physical world).

■ The pursuit of transparent interfaces

"musicBottles," which I presented in 1999, is a representative TUI that gives form to my own interpretation of the transparent interface at the core of Weiser's philosophy. It was my gift to Mark Weiser.

musicBottles makes the concept of the transparent interface visible and touchable, deliberately pursuing "minimal design" while attending not only to the metaphors objects carry but also to their emotional and aesthetic value.

Glass bottles have served humankind for thousands of years. By extending their metaphor and affordances into the digital world, the musicBottles project pursues interface transparency: glass bottles serve as containers and controllers of digital information, and the simple act of opening and closing a lid gives access to digital content.

The origin of the project was an idea I had long been nurturing as a gift for my mother: a "weather-forecast bottle." While she cooked in the kitchen, opening the soy-sauce bottle released the scent of soy sauce; building on this physical-world model she knew so well, I wanted to design a small blue bottle for accessing a piece of digital information, the weather forecast. You wake in the morning and open the small blue bottle at your bedside: birdsong means fair skies; the sound of rain means rain.

But at the end of the summer of 1998 my mother died after a long illness, and the chance to give her the weather-forecast bottle was lost forever. At the end of that year the idea of musicBottles emerged from discussions with Rich Fletcher, then a doctoral student, and I decided to start the project partly in her memory. On April 27 of the following year, 1999, Mark Weiser, whom I deeply admired, died suddenly, and the words he had left me gave the project a further layer of personal meaning.

By filling glass bottles, ubiquitous in everyday human life long before the PC or the mobile phone, with digital content, we can realize a minimal and universal interface to information. The possibility is not limited to music. Beyond the weather-forecast bottle, one can imagine perfume bottles holding poems, wine bottles holding stories, and many other applications. For practical uses, picture a shelf lined with medicine bottles: services that prompt a patient to take medication according to the prescribed pattern, or that send this information to the hospital, are easy to imagine. Precisely because glass bottles permeate our lives so deeply, a great many uses open up for the bottle as interface.

The bottles arranged on the designed table, the feel of the glass as you open one, the LED light scattering inside the bottle in time with the music that flows out: together these create a distinctive emotional and aesthetic experience. Such aesthetic pleasure can never be had from a plain switch or a mouse click. The experience also brings the pleasure of imagining the content that any glass bottle might hold, and it blurs the boundary between interactive art and interface design.

Alongside a "transparent interface" that melts into people's everyday lives, the pursuit of aesthetic value, distinct from conventional interface design centered on function and performance, is the essential message of the musicBottles project.

Hiroshi Ishii (Associate Director, MIT Media Lab)

*1. Original text: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." (Weiser, M. The Computer for the 21st Century. Scientific American, 1991, 265(3), pp. 94-104.)
*2. Ishii, H. and Ullmer, B., Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms, in Proceedings of Conference on Human Factors in Computing Systems (CHI '97), (Atlanta, March 1997), ACM Press, pp. 234-241.

MIT: Creation, Collaboration and Competition (in Japanese) 米国MITの独創・協創・競創の風土

Ishii, H. 2009. MIT: Creation, Collaboration and Competition (in Japanese). In the Journal of the IEICE (Institute of Electronics, Information, and Communication Engineers), Vol. 92, No. 5, pp. 327-331.

Headhunted by MIT in 1994, I left NTT's laboratories for the MIT Media Lab in 1995. Hired as an associate professor on the condition that I abandon my previous research and start over with a new theme, I began a battle of competitive creation for survival. The strategy of relentlessly pursuing originality while at the same time creating major impact, the all-out sprint under the enormous pressure of earning tenure, and, after tenure, the pressure of raising research funding for the lab as a whole: my past fourteen years at MIT have been a microcosm of America's competitive, creative society. In this article I distill these experiences for readers who aim to reach the world through original research.

Wetpaint: Scraping Through Multi-Layered Images

Bonanni, L., Xiao, X., Hockenberry, M., Subramani, P., Ishii, H., Seracini, M., and Schulze, J. 2009. Wetpaint: scraping through multi-layered images. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI '09. ACM, New York, NY, 571-574.

DOI: http://doi.acm.org/10.1145/1518701.1518789
A work of art rarely reveals the history of creation and interpretation that has given it meaning and value. Wetpaint is a gallery interface based on a large touch screen that allows curators and museumgoers to investigate the hidden layers of a painting, and in the process contribute to the pluralistic interpretation of the piece, both locally and online. Inspired by traditional restoration and curatorial methods, we have designed a touch-based user interface for exhibition spaces that allows "virtual restoration" by scraping through the multi-spectral scans of a painting, and "collaborative curation" by leaving voice annotations within the artwork. The system functions through an online social image network for flexibility and to support rich and collaborative commentary for local and remote visitors.
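
The "virtual restoration" interaction above amounts to compositing a stack of co-registered multi-spectral scans, with accumulated touch strokes exposing deeper layers. A minimal sketch of that compositing step, assuming numpy arrays and hypothetical names (the paper does not publish Wetpaint's implementation):

```python
import numpy as np

def scrape(layers, mask):
    """Composite a stack of co-registered scans, revealing deeper
    layers where the user has 'scraped' more.

    layers: (n_layers, H, W, 3) array; index 0 = surface, -1 = deepest.
    mask:   (H, W) float array in [0, n_layers - 1], accumulated from
            touch strokes; larger values expose deeper layers.
    """
    depth = np.clip(mask, 0, layers.shape[0] - 1)
    lower = np.floor(depth).astype(int)              # layer just above the cut
    upper = np.minimum(lower + 1, layers.shape[0] - 1)
    frac = (depth - lower)[..., None]                # blend factor between layers
    h, w = depth.shape
    rows, cols = np.mgrid[0:h, 0:w]
    # Per-pixel linear blend between the two adjacent layers.
    return (1 - frac) * layers[lower, rows, cols] + frac * layers[upper, rows, cols]
```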

Burn Your Memory Away: One-time Use Video Capture and Storage Device to Encourage Memory Appreciation

Chi, P., Xiao, X., Chung, K., and Chiu, C. 2009. Burn your memory away: one-time use video capture and storage device to encourage memory appreciation. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 2397-2406.

DOI: http://doi.acm.org/10.1145/1520340.1520342
Although modern ease of access to technology enables many of us to obsessively document our lives, much of the captured digital content is often disregarded and forgotten on storage devices, with no concerns of cost or decay. Can we design technology that helps people better appreciate captured memories? What would people do if they only had one more chance to relive past memories? In this paper, we present a prototype design, PY-ROM, a matchstick-like video recording and storage device that burns itself away after being used. This encourages designers to consider lifecycles and human-computer relationships by integrating physical properties into digitally augmented everyday objects.

Stress OutSourced: A Haptic Social Network via Crowdsourcing

Chung, K., Chiu, C., Xiao, X., and Chi, P. 2009. Stress outsourced: a haptic social network via crowdsourcing. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI EA '09. ACM, New York, NY, 2439-2448.

DOI: http://doi.acm.org/10.1145/1520340.1520346
Stress OutSourced (SOS) is a peer-to-peer network that allows anonymous users to send each other therapeutic massages to relieve stress. By applying the emerging concept of crowdsourcing to haptic therapy, SOS brings physical and affective dimensions to our already networked lifestyle while preserving the privacy of its members. This paper first describes the system and its three distinctive design choices: its privacy model, its combination of mobility and scalability, and its affective communication for an impersonal crowd, contrasting each with other efforts in its respective area. Finally, this paper describes future work and opportunities in the area of haptic social networks.

Some Challenges for Designers of Shape Changing Interfaces

Zigelbaum, J., Labrune, J.B. Some Challenges for Designers of Shape Changing Interfaces. CHI 2009 Workshop on Transitive Materials (2009).

In this paper we describe some challenges we find in the design of shape-changing user interfaces, drawing on our own work and our view of the current state of the art in HCI. Because shape-changing materials open such a large space of possibilities, designers face an overly large constraint system. Without a good understanding, and the beginnings of a standardization or physical language for shape change, it will be hard to design interactions that make sense beyond very limited, one-off applications. We are excited by the challenge this poses to researchers and look forward to understanding how to use programmable and shape-changing materials in the future.

Fusing Computation into Mega-Affordance Objects

Chung, K., Ishii, H., Fusing computation into mega-affordance objects. CHI 2009 Workshop on Transitive Materials (2009).

In this paper, I present the concept of "Mega-Affordance Objects" (MAOs). An MAO is a common object with a primitive form factor that exhibits multiple affordances and can perform numerous improvised functions in addition to its original one. In order to broaden the reach of Tangible User Interfaces (TUIs) and create compelling everyday applications, I propose applying computational power to Mega-Affordance Objects that are highly adaptable and frequently used. This approach will leverage the capabilities of smart materials and contribute to the principles of Organic User Interface (OUI) design.

Spime Builder: A Tangible Interface for Designing Hyperlinked Objects

Bonanni, L., Vargas, G., Chao, N., Pueblo, S., and Ishii, H. 2009. Spime builder: a tangible interface for designing hyperlinked objects. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 - 18, 2009). TEI '09. ACM, New York, NY, 263-266.

DOI: http://doi.acm.org/10.1145/1517664.1517719
Ubiquitous computing is fostering an explosion of physical artifacts that are coupled to digital information – so-called Spimes. We introduce a tangible workbench that allows for the placement of hyperlinks within physical models to couple physical artifacts with located interactive digital media. A computer vision system allows users to model three-dimensional objects and environments in real-time using physical materials and to place hyperlinks in specific areas using laser pointer gestures. We present a working system for real-time physical/digital exhibit design, and propose the means for expanding the system to assist Design for the Environment strategies in product design.

Proverbial Wallet: Tangible Interface for Financial Awareness

Kestner, J., Leithinger, D., Jung, J., and Petersen, M. 2009. Proverbial wallet: tangible interface for financial awareness. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 - 18, 2009). TEI '09. ACM, New York, NY, 55-56.

DOI: http://doi.acm.org/10.1145/1517664.1517683
We propose a tangible interface concept for communicating personal financial information in an ambient and relevant manner. The concept is embodied in a set of wallets that provide the user with haptic feedback about personal financial metrics. We describe how such feedback can inform purchasing decisions and improve general financial awareness.

Stop-Motion Prototyping for Tangible Interfaces

Bonanni, L. and Ishii, H. 2009. Stop-motion prototyping for tangible interfaces. In Proceedings of the 3rd international Conference on Tangible and Embedded interaction (Cambridge, United Kingdom, February 16 - 18, 2009). TEI '09. ACM, New York, NY, 315-316.

DOI: http://doi.acm.org/10.1145/1517664.1517729
Stop-motion animation brings the constraints of the body, space and materials into video production. Building on the tradition of video prototyping for interaction design, stop motion is an effective technique for concept development in the design of Tangible User Interfaces. This paper presents a framework for stop-motion prototyping and the results of two workshops based on stop-motion techniques including pixillation, claymation and time-lapse photography. The process of stop-motion prototyping fosters collaboration, legibility and rapid iterative design in a physical context that can be useful to the early stages of tangible interaction design.

Piezing: a garment harvesting energy from the natural motion of the human body

Amanda Parkes, Adam Kumpf, and Hiroshi Ishii. 2009. Piezing: a garment harvesting energy from the natural motion of the human body. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (TEI '09). ACM, New York, NY, USA, 23-24.

DOI: https://doi.org/10.1145/1517664.1517674
Piezing is a garment which harnesses energy from the natural gestures of the human body in motion. Around the joints of the elbows and hips, the garment is embedded with piezoelectric material elements which generate an electric potential in response to applied mechanical stress. The electric potential is then stored as voltage in a centralized small battery and later can be discharged into a device. As a concept, Piezing explores a decentralized and self-reliant energy model for embedded interaction, pushing forward possibilities for mobility.

Kinetic sketchup: motion prototyping in the tangible design process

Amanda Parkes and Hiroshi Ishii. 2009. Kinetic sketchup: motion prototyping in the tangible design process. In Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (TEI '09). ACM, New York, NY, USA, 367-372.

DOI: https://doi.org/10.1145/1517664.1517738

Design of Haptic Interfaces for Psychotherapy

Vaucelle, C., Bonanni, L., and Ishii, H. 2009. Design of haptic interfaces for therapy. In Proceedings of the 27th international Conference on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI '09. ACM, New York, NY, 467-470.

DOI: http://doi.acm.org/10.1145/1518701.1518776
Touch is fundamental to our emotional well-being. Medical science is starting to understand and develop touch-based therapies for autism spectrum, mood, anxiety and borderline disorders. Based on the most promising touch therapy protocols, we present the first devices that simulate touch through haptics to bring relief and assist clinical therapy for mental health. We present several haptic systems that enable medical professionals to facilitate collaboration between patients and doctors, potentially paving the way for a new form of non-invasive treatment that could be adapted from use in care-giving facilities to public use. We developed these prototypes working closely with a team of mental health professionals.

Play-it-by-eye! Collect Movies and Improvise Perspectives with Tangible Video Objects.

Vaucelle, C. and Ishii, H. 2009. Play-it-by-eye! Collect movies and improvise perspectives with tangible video objects. In Artificial Intelligence for Engineering Design, Analysis and Manufacturing (2009), Special Issue: Tangible Interaction, 23, 305–316. Cambridge University Press.

DOI: https://doi.org/10.1017/S0890060409000262
We present an alternative video-making framework for children with tools that integrate video capture with movie production. We propose different forms of interaction with physical artifacts to capture storytelling. Play interactions as input to video editing systems assuage the interface complexities of film construction in commercial software. We aim to motivate young users in telling their stories, extracting meaning from their experiences by capturing supporting video to accompany their stories, and driving reflection on the outcomes of their movies. We report on our design process over the course of four research projects that span from a graphical user interface to a physical instantiation of video. We interface the digital and physical realms using tangible metaphors for digital data, providing a spontaneous and collaborative approach to video composition. We evaluate our systems during observations with 4- to 14-year-old users and analyze their different approaches to capturing, collecting, editing, and performing visual and sound clips.

Cost-effective Wearable Sensor to Detect EMF

Vaucelle, C., Ishii, H., and Paradiso, J. A. 2009. Cost-effective wearable sensor to detect EMF. In Proceedings of the 27th international Conference Extended Abstracts on Human Factors in Computing Systems (Boston, MA, USA, April 04 - 09, 2009). CHI'09. ACM, New York, NY, 4309-4314.

DOI: http://doi.acm.org/10.1145/1520340.1520658
In this paper we present the design of a cost-effective wearable sensor to detect and indicate the strength and other characteristics of the electric field emanating from a laptop display. Our Electromagnetic Field Detector Bracelet can provide an immediate awareness of electric fields radiated from an object used frequently. Our technology thus supports awareness of ambient background emanation beyond human perception. We discuss how detection of such radiation might help to “fingerprint” devices and aid in applications that require determination of indoor location.

2008

Sculpting Behavior: A Tangible Language for Hands-On Play and Learning

Hayes Raffle. Sculpting Behavior: A Tangible Language for Hands-On Play and Learning. Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2008.

DOI: http://hdl.handle.net/1721.1/44912
For over a century, educators and constructivist theorists have argued that children learn by actively forming and testing – constructing – theories about how the world works. Recent efforts in the design of "tangible user interfaces" (TUIs) for learning have sought to bring together interaction models like direct manipulation and pedagogical frameworks like constructivism to make new, often complex, ideas salient for young children. Tangible interfaces attempt to eliminate the distance between the computational and physical world by making behavior directly manipulable with one's hands. In the past, systems for children to model behavior have been either intuitive-but-simple (e.g. curlybot) or complex-but-abstract (e.g. LEGO Mindstorms). In order to develop a system that supports a user's transition from intuitive-but-simple constructions to constructions that are complex-but-abstract, I draw upon constructivist educational theories, particularly Bruner's theories of how learning progresses through enactive then iconic and then symbolic representations. This thesis presents an example system and set of design guidelines to create a class of tools that helps people transition from simple-but-intuitive exploration to abstract-and-flexible exploration. The Topobo system is designed to facilitate mental transitions between different representations of ideas, and between different tools. A modular approach, with an inherent grammar, helps people make such transitions. With Topobo, children use enactive knowledge, e.g. knowing how to walk, as the intellectual basis to understand a scientific domain, e.g. engineering and robot locomotion. Queens, backpacks, Remix and Robo add various abstractions to the system, and extend the tangible interface. Children use Topobo to transition from hands-on knowledge to theories that can be tested and reformulated, employing a combination of enactive, iconic and symbolic representations of ideas.

SpeakCup: Simplicity, BABL, and Shape Change

Zigelbaum, J., Chang, A., Gouldstone, J., Monzen, J. J., and Ishii, H. 2008. SpeakCup: simplicity, BABL, and shape change. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 - 20, 2008). TEI '08. ACM, New York, NY, 145-146.

DOI: http://doi.acm.org/10.1145/1347390.1347422
In this paper we present SpeakCup, a simple tangible interface that uses shape change to convey meaning in its interaction design. SpeakCup is a voice recorder in the form of a soft silicone disk with embedded sensors and actuators. Advances in sensor technology and material science have provided new ways for users to interact with computational devices. Rather than issuing commands to a system via abstract and multi-purpose buttons, the door is open for more nuanced and application-specific approaches. Here we explore the coupling of shape and action in an interface designed for simplicity while discussing some questions that we have encountered along the way.

Picture This! Film assembly using toy gestures

Vaucelle, C. and Ishii, H. 2008. Picture this!: film assembly using toy gestures. In Proceedings of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 - 24, 2008). UbiComp '08, vol. 344. ACM, New York, NY, 350-359.

DOI: http://doi.acm.org/10.1145/1409635.1409683
We present Picture This!, a new input device embedded in children's toys for video composition. It introduces a new form of interaction for children's capture of storytelling with physical artifacts. It functions as a video and storytelling performance system in which children craft videos with and about character toys as the system analyzes their gestures and play patterns. Children's favorite props alternate between characters and cameramen in a film. As they play with the toys to act out a story, they conduct film assembly. We position our work as ubiquitous computing that supports children's tangible interaction with digital materials. During user testing, we observed children ages 4 to 10 playing with Picture This!. We assess to what extent gesture interaction with objects for video editing allows children to explore visual perspectives in storytelling. A new genre of Gesture Object Interfaces, as exemplified by Picture This!, relies on the analysis of gestures coupled with objects to represent bits.

From Touch Sensitive to Aerial Jewelry

Cati Vaucelle. From Touch Sensitive to Aerial Jewelry (Book Chapter). In Fashionable Technology, The intersection of Design, Fashion, Science, and Technology. Editor Seymour, S., Springer-Verlag Wien New York, 2008

Now that we constantly travel by plane and use GIS, Google Maps, and satellite imagery, our vision has expanded. Our everyday objects have a language that adapts itself to these influences. Just as the car once influenced painting and the representation of space and movement, we want to show how the use of new technologies can change the way we design personal objects, as exemplified by Aerial Jewelry.

Handsaw: Tangible Exploration of Volumetric Data by Direct Cut-Plane Projection

Bonanni, L., Alonso, J., Chao, N., Vargas, G., and Ishii, H. 2008. Handsaw: tangible exploration of volumetric data by direct cut-plane projection. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 251-254.

DOI: http://doi.acm.org/10.1145/1357054.1357098
Tangible User Interfaces are well-suited to handling three-dimensional data sets by direct manipulation of real objects in space, but current interfaces can make it difficult to look inside dense volumes of information. This paper presents the SoftSaw, a system that detects a virtual cut-plane projected by an outstretched hand or laser-line directly on an object or space and reveals sectional data on an adjacent display. By leaving the hands free and using a remote display, these techniques can be shared between multiple users and integrated into everyday practice. The SoftSaw has been prototyped for scientific visualizations in medicine, engineering and urban design. User evaluations suggest that using a hand is more intuitive while projected light is more precise than keyboard and mouse control, and the SoftSaw system has the potential to be used more effectively by novices and in groups.

Renaissance Panel: The Roles of Creative Synthesis in Innovation

Hockenberry, M. and Bonanni, L. 2008. Renaissance panel: the roles of creative synthesis in innovation. In CHI '08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 2237-2240.

DOI: http://doi.acm.org/10.1145/1358628.1358658
The Renaissance ideal can be expressed as a creative synthesis between cultural disciplines, standing in stark contrast to our traditional focus on scientific specialization. This panel presents a number of experts who approach the synthesis of art and science as the modus operandi for their work, using it as a tool for creativity, research, and practice. Understanding these approaches allows us to identify the roles of synthesis in successful innovation and improve the implementation of interdisciplinary synthesis in research and practice.

Future Craft: How Digital Media is Transforming Product Design

Bonanni, L., Parkes, A., and Ishii, H. 2008. Future craft: how digital media is transforming product design. In CHI '08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 2553-2564.

DOI: http://doi.acm.org/10.1145/1358628.1358712
The open and collective traditions of the interaction community have created new opportunities for product designers to engage in the social issues around industrial production. This paper introduces Future Craft, a design methodology which applies emerging digital tools and processes to product design toward new objects that are socially and environmentally sustainable. We present the results of teaching the Future Craft curriculum at the MIT Media Lab including principal themes of public, local and personal design, resources, assignments and student work. Novel ethnographic methods are discussed with relevance to informing the design of physical products. We aim to create a dialogue around these themes for the product design and HCI communities.

Slurp: Tangibility, Spatiality, and an Eyedropper

Zigelbaum, J., Kumpf, A., Vazquez, A., and Ishii, H. 2008. Slurp: tangibility, spatiality, and an eyedropper. In CHI '08 Extended Abstracts on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 2565-2574.

DOI: http://doi.acm.org/10.1145/1358628.1358713
The value of tangibility for ubiquitous computing is in its simplicity: when faced with the question of how to grasp a digital object, why not just pick it up? But this is problematic; digital media is powerful due to its extreme mutability and is therefore resistant to the constraints of static physical form. We present Slurp, a tangible interface for locative media interactions in a ubiquitous computing environment. Based on the affordances of an eyedropper, Slurp provides haptic and visual feedback while extracting and injecting pointers to digital media between physical objects and displays.

Reality-Based Interaction: A Framework for Post-WIMP Interfaces

Jacob, R. J., Girouard, A., Hirshfield, L. M., Horn, M. S., Shaer, O., Solovey, E. T., and Zigelbaum, J. 2008. Reality-based interaction: a framework for post-WIMP interfaces. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 201-210.

DOI: http://doi.acm.org/10.1145/1357054.1357089
We are in the midst of an explosion of emerging human-computer interaction techniques that redefine our understanding of both computers and interaction. We propose the notion of Reality-Based Interaction (RBI) as a unifying concept that ties together a large subset of these emerging interaction styles. Based on this concept of RBI, we provide a framework that can be used to understand, compare, and relate current paths of recent HCI research as well as to analyze specific interaction designs. We believe that viewing interaction through the lens of RBI provides insights for design and uncovers gaps or opportunities for future research.

AR-Jig: A Handheld Tangible User Interface for 3D Digital Modeling (in Japanese)

Anabuki, M., Ishii, H. 2008. AR-Jig: A Handheld Tangible User Interface for 3D Digital Modeling. Transactions of the Virtual Reality Society of Japan, Special Issue on Mixed Reality 4 (Japanese Edition), Vol.13, No.2, 2008

We introduce AR-Jig, a new handheld tangible user interface for 3D digital modeling in Augmented Reality space. AR-Jig has a pin array that displays a 2D physical curve coincident with a contour of a digitally-displayed 3D form. It supports physical interaction with a portion of a 3D digital representation, allowing 3D forms to be directly touched and modified. This project leaves the majority of the data in the digital domain but gives physicality to any portion of the larger digital dataset via a handheld tool. Through informal evaluations, we demonstrate that AR-Jig would be useful for design domains where manual modeling skills are critical. Keywords: actuated interface, augmented reality, handheld tool, pin array display

Tangible Bits: Beyond Pixels

Ishii, H. 2008. Tangible bits: beyond pixels. In Proceedings of the 2nd international Conference on Tangible and Embedded interaction (Bonn, Germany, February 18 - 20, 2008). TEI '08. ACM, New York, NY, xv-xxv.

DOI: http://doi.acm.org/10.1145/1347390.1347392
Tangible user interfaces (TUIs) provide physical form to digital information and computation, facilitating the direct manipulation of bits. Our goal in TUI development is to empower collaboration, learning, and design by using digital technology and at the same time taking advantage of human abilities to grasp and manipulate physical objects and materials. This paper discusses a model of TUI, key properties, genres, applications, and summarizes the contributions made by the Tangible Media Group and other researchers since the publication of the first Tangible Bits paper at CHI 1997. http://tangible.media.mit.edu/

Topobo in the Wild: Longitudinal Evaluations of Educators Appropriating a Tangible Interface

Parkes, A., Raffle, H., and Ishii, H. 2008. Topobo in the wild: longitudinal evaluations of educators appropriating a tangible interface. In Proceeding of the Twenty-Sixth Annual SIGCHI Conference on Human Factors in Computing Systems (Florence, Italy, April 05 - 10, 2008). CHI '08. ACM, New York, NY, 1129-1138.

DOI: http://doi.acm.org/10.1145/1357054.1357232
What issues arise when designing and deploying tangibles for learning in long term evaluations? This paper reports on a series of studies in which the Topobo system, a 3D tangible construction kit with the ability to record and playback motion, was provided to educators and designers to use over extended periods of time in the context of their day-to-day work. Tangibles for learning - like all educational materials - must be evaluated in relation both to the student and the teacher, but most studies of tangibles for learning focus on the student as user. Here, we focus on the conception of the educator, and their use of the tangible interface in the absence of an inventor or HCI researcher. The results of this study identify design and pedagogical issues that arise in response to distribution of a tangible for learning in different educational environments.

The Everyday Collector

Vaucelle, C. The Everyday Collector. In Extended Abstracts of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 - 24, 2008). UbiComp '08, vol. 344. ACM, New York, NY.

This paper presents the conceptualization of the Everyday Collector as a bridge between the traditional physical collection and the growing digital one. This work supports a reflection on the collection impulse and the impact that digital technologies have on the physical act of collection.

Electromagnetic Field Detector Bracelet

Vaucelle, C., Ishii, H., and Paradiso, J. Electromagnetic Field Detector Bracelet. In Extended Abstracts of the 10th international Conference on Ubiquitous Computing (Seoul, Korea, September 21 - 24, 2008). UbiComp '08, vol. 344. ACM, New York, NY.

We present the design of a cost-effective wearable sensor to detect and indicate the strength and other characteristics of the electric field emanating from a laptop display. Our bracelet can provide an immediate awareness of electric fields radiated from an object used frequently. Our technology thus supports awareness of ambient background emanation beyond human perception. We discuss how detection of such radiation might help to “fingerprint” devices and aid in applications that require determination of indoor location.

2007

TILTle: exploring dynamic balance

Modlitba, P., Offenhuber, D., Ting, M., Tsigaridi, D., and Ishii, H. 2007. TILTle: exploring dynamic balance. In Proceedings of the 2007 Conference on Designing Pleasurable Products and interfaces (Helsinki, Finland, August 22 - 25, 2007). DPPI '07. ACM, New York, NY, 466-472.

DOI: http://doi.acm.org/10.1145/1314161.1314207
In this paper we introduce a novel interface for exploring dynamic equilibria using the metaphor of a traditional balance scale. Rather than comparing and identifying physical weight, our scale can be used for contrasting digital data in different domains. We do this by assigning virtual weight to objects, which physically affects the scale. Our goal is to make complex comparison mechanisms more visible and graspable.

The Sound of Touch

David Merrill and Hayes Raffle. 2007. The sound of touch. In ACM SIGGRAPH 2007 posters (SIGGRAPH '07). ACM, New York, NY, USA, Article 138.

DOI: http://doi.acm.org/10.1145/1280720.1280871
All people have experienced hearing sounds produced when they touch and manipulate different materials. We know what it will sound like to bang our fist against a wooden door, or to crumple a piece of newspaper. We can imagine what a coffee mug will sound like if it is dropped onto a concrete floor. But our wealth of experience handling physical materials does not typically produce much intuition for operating a new electronic instrument, given the inherently arbitrary mapping from gesture to sound.

SP3X: a six-degree of freedom device for natural model creation

Richard Whitney. SP3X: a six-degree of freedom device for natural model creation. Thesis (M.S.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2007.

DOI: http://hdl.handle.net/1721.1/38641
This thesis presents a novel input device, called SP3X, for the creation of digital models in a semi-immersive environment. The goal of SP3X is to enable novice users to construct geometrically complex three-dimensional objects without extensive training or difficulty. SP3X extends the ideas of mixed reality and partial physical instantiation while building on the foundation of tangible interfaces. The design of the device reflects attention to human physiologic capabilities in manual precision, binocular vision, and reach. The design also considers cost and manufacturability. This thesis presents prior and contributing research from industry, biology, and interfaces in academia. A study investigates the usability of the device and finds that it is functional and easily learned, and identifies several areas for improvement. Finally, a Future Work section is provided to guide researchers pursuing this or similar interfaces. The SP3X project is a result of extensive collaboration with Mahoro Anabuki, a visiting scientist from Canon Development Americas, and could not have been completed without his software or his insight.

The Sound of Touch: Physical Manipulation of Digital Sound.

David Merrill, Hayes Raffle, and Roberto Aimi. 2008. The sound of touch: physical manipulation of digital sound. In Proceedings of the twenty-sixth annual SIGCHI conference on Human factors in computing systems (CHI '08). ACM, New York, NY, USA, 739-742.

DOI: http://doi.acm.org/10.1145/1357054.1357171
The Sound of Touch is a new tool for real-time capture and sensitive physical stimulation of sound samples using digital convolution. Our hand-held wand can be used to (1) record sound, then (2) play back the recording by brushing, scraping, striking or otherwise physically manipulating the wand against physical objects. During playback, the recorded sound is continuously filtered by the acoustic interaction of the wand and the material being touched. The Sound of Touch enables a physical and continuous sculpting of sound that is typical of acoustic musical instruments and interactions with natural objects and materials, but not available in GUI-based tools or most electronic music instruments. This paper reports the design of the system and observations of thousands of users interacting with it in an exhibition format. Preliminary user feedback suggests future applications to foley, professional sound design, and musical performance.
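
The signal path described here, continuously filtering a recorded sample by the acoustic interaction of wand and material, is digital convolution of the stored recording with the live contact-microphone signal. As a rough sketch of that core operation only (block-based FFT convolution with numpy; the paper does not disclose the actual DSP implementation):

```python
import numpy as np

def make_convolver(sample, block_size):
    """Return a process(block) callback that convolves incoming audio
    blocks with a pre-recorded sample via FFT overlap-add.

    A minimal sketch under stated assumptions, not the authors' code.
    """
    n_fft = 1
    while n_fft < len(sample) + block_size - 1:
        n_fft *= 2                                # power-of-two FFT size
    sample_spec = np.fft.rfft(sample, n_fft)      # spectrum of the recording

    tail = np.zeros(n_fft - block_size)           # overlap carried across blocks

    def process(block):
        nonlocal tail
        y = np.fft.irfft(np.fft.rfft(block, n_fft) * sample_spec, n_fft)
        y[: n_fft - block_size] += tail           # add overlap from earlier blocks
        tail = y[block_size:].copy()              # carry the new overlap forward
        return y[:block_size]                     # filtered output for this block

    return process

# Usage: conv = make_convolver(recorded_sample, 512), then call
# conv(mic_block) on each 512-sample contact-microphone block.
```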

The Sound of Touch

David Merrill and Hayes Raffle. 2007. The sound of touch. In CHI '07 extended abstracts on Human factors in computing systems (CHI EA '07). ACM, New York, NY, USA, 2807-2812.

DOI: http://doi.acm.org/10.1145/1240866.1241044
In this paper we describe the Sound of Touch, a new instrument for real-time capture and sensitive physical stimulation of sound samples using digital convolution. Our hand-held wand can be used to (1) record sound, then (2) play back the recording by brushing, scraping, striking or otherwise physically manipulating the wand against physical objects. During playback, the recorded sound is continuously filtered by the acoustic interaction of the wand and the material being touched. Our texture kit allows for convenient acoustic exploration of a range of materials. An acoustic instrument's resonance is typically determined by the materials from which it is built. With the Sound of Touch, resonant materials can be chosen during the performance itself, allowing performers to shape the acoustics of digital sounds by leveraging their intuitions for the acoustics of physical objects. The Sound of Touch permits real-time exploitation of the sonic properties of a physical environment, to achieve a rich and expressive control of digital sound that is not typically possible in electronic sound synthesis and control systems.

Simplicity in Interaction Design

Chang, A., Gouldstone, J., Zigelbaum, J., and Ishii, H. 2007. Simplicity in interaction design. In Proceedings of the 1st international Conference on Tangible and Embedded interaction (Baton Rouge, Louisiana, February 15 - 17, 2007). TEI '07. ACM, New York, NY, 135-138.

DOI: http://doi.acm.org/10.1145/1226969.1226997
Attaining simplicity is a key challenge in interaction design. Our approach relies on a minimalist design exercise to explore the communication capacity of interaction components. This approach results in expressive design solutions, useful perspectives on interaction design, and new interaction techniques.

Zstretch: A Stretchy Fabric Music Controller

Chang, A. and Ishii, H. 2007. Zstretch: a stretchy fabric music controller. In Proceedings of the 7th international Conference on New interfaces For Musical Expression (New York, New York, June 06 - 10, 2007). NIME '07. ACM, New York, NY, 46-49.

DOI: http://doi.acm.org/10.1145/1279740.1279746
We present Zstretch, a textile music controller that supports expressive haptic interactions. The musical controller takes advantage of the fabric's topological constraints to enable proportional control of musical parameters. This novel interface explores ways in which one might treat music as a sheet of cloth. This paper proposes an approach to engage simple technologies for supporting ordinary hand interactions. We show that this combination of basic technology with general tactile movements can result in an expressive musical interface.

AR-Jig: A Handheld Tangible User Interface for Modification of 3D Digital Form via 2D Physical Curve

Anabuki, M. and Ishii, H. 2007. AR-Jig: A Handheld Tangible User Interface for Modification of 3D Digital Form via 2D Physical Curve. In Proceedings of the 6th IEEE and ACM International Symposium on Mixed and Augmented Reality (ISMAR 2007), pp. 55-66, 13-16 Nov. 2007.

DOI: https://doi.ieeecomputersociety.org/10.1109/ISMAR.2007.4538826
We introduce AR-Jig, a new handheld tangible user interface for 3D digital modeling in augmented reality (AR) space. AR-Jig has a pin array that displays a 2D physical curve coincident with a contour of a digitally displayed 3D form. It supports physical interaction with a portion of a 3D digital representation, allowing 3D forms to be directly touched and modified. Traditional tangible user interfaces physically embody all the data; in contrast, this project leaves the majority of the data in the digital domain but gives physicality to any portion of the larger digital dataset via a handheld tool. This tangible intersection enables the flexible manipulation of digital artifacts, both physically and virtually. Through an informal test by end-users and interviews with professionals, we confirmed the potential of the AR-Jig concept while identifying the improvements necessary to make AR-Jig a practical tool for 3D digital design.

Interfacing Video Capture, Editing and Publication in a Tangible Environment

Vaucelle, C. and Ishii, H. 2007. Interfacing Video Capture, Editing and Publication in a Tangible Environment. In Baranauskas et al. (Eds.): INTERACT 2007, LNCS 4663, Lecture Notes in Computer Science, Part II, pp. 1-14, 2007. Springer Berlin / Heidelberg.

DOI: http://dx.doi.org/10.1007/978-3-540-74800-7_1
The paper presents a novel approach to collecting, editing and performing visual and sound clips in real time. The cumbersome process of capturing and editing becomes fluid in the improvisation of a story, and accessible as a way to create a final movie. It is shown how a graphical interface created for video production informs the design of a tangible environment that provides a spontaneous and collaborative approach to video creation, selection and sequencing. Iterative design process, participatory design sessions and workshop observations with 10-12 year old users from Sweden and Ireland are discussed. The limitations of interfacing video capture, editing and publication in a self-contained platform are addressed. Keywords: Tangible User Interface - Video - Authorship - Mobile Technology - Digital Media - Video Jockey - Learning - Children - Collaboration

Tug n Talk: A Belt Buckle for Tangible Tugging Communication

Adcock, M., Harry, D., Boch, M., Poblano, R.-D. and Harden, V. Tug n' Talk: A Belt Buckle for Tangible Tugging Communication. Presented at alt.chi 2007

Tug and Talk is a prototype communication system with which you can send a "tug" to another person. The Tug and Talk device sits on your belt and connects to your shirt. Another Tug and Talk user can tug on the chain coming out of their matching belt, and their tugging pattern is replicated as a tug on your own shirt. Tugs can express many different ideas, depending on context. A tug could be brief and small to see if someone is interruptible, or large, fast, and long to get someone's attention in an urgent situation. We think this sort of tangible social channel between people is a powerful idea, and we implemented two prototype devices to explore its potential.

Touch·Sensitive Apparel

Vaucelle, C. and Abbas, Y. 2007. Touch: sensitive apparel. In CHI '07 Extended Abstracts on Human Factors in Computing Systems (San Jose, CA, USA, April 28 - May 03, 2007). CHI '07. ACM, New York, NY, 2723-2728.

DOI: http://doi.acm.org/10.1145/1240866.1241069
Touch·Sensitive is a haptic apparel that allows massage therapy to be diffused, customized and controlled by people while on the move. It provides individuals with a sensory cocoon. Made of modular garments, Touch·Sensitive applies personalized stimuli. We present the design process and a series of low fidelity prototypes that lead us to the Touch·Sensitive Apparel.

Jabberstamp: Embedding sound and voice in traditional drawings

Raffle, H., Vaucelle, C., Wang, R., and Ishii, H. 2007. Jabberstamp: embedding sound and voice in traditional drawings. In Proceedings of the 6th international Conference on interaction Design and Children (Aalborg, Denmark, June 06 - 08, 2007). IDC '07. ACM, New York, NY, 137-144.

DOI: http://doi.acm.org/10.1145/1297277.1297306
We introduce Jabberstamp, the first tool that allows children to synthesize their drawings and voices. To use Jabberstamp, children create drawings, collages or paintings on normal paper. They press a special rubber stamp onto the page to record sounds into their drawings. When children touch the marks of the stamp with a small trumpet, they can hear the sounds play back, retelling the stories they created. We describe our design process and analyze the relationship between the act of drawing and the act of telling, defining interdependencies between the two activities. In a series of studies, children ages 4-8 use Jabberstamp to convey meaning in their drawings. The system allows collaboration among peers at different developmental levels. Jabberstamp compositions reveal children's narrative styles and their planning strategies. In guided activities, children develop stories by situating sound recordings in their drawings, which suggests future opportunities for hybrid voice-visual tools to support children's emergent literacy.

Remix and Robo: Improvisational performance and competition with modular robotic building toys

Raffle, H., Yip, L., and Ishii, H. 2007. Remix and robo: sampling, sequencing and real-time control of a tangible robotic construction system. In ACM SIGGRAPH 2007 Educators Program (San Diego, California, August 05 - 09, 2007). SIGGRAPH '07. ACM, New York, NY, 35.

DOI: http://doi.acm.org/10.1145/1282040.1282077
We present Remix and Robo, new composition- and performance-based tools for robotics control. Remix is a tangible interface used to sample, organize and manipulate gesturally-recorded robotic motions. Robo is a modified game controller used to capture robotic motions, adjust global motion parameters and execute motion recordings in real-time. Children use Remix and Robo to engage in (1) character design and (2) competitive endeavors with Topobo, a constructive assembly system with kinetic memory. Our objective is to provide new entry paths into robotics learning. This paper overviews our design process and reports how users from age 7 to adult use Remix and Robo to engage in different kinds of performative activities. Whereas robotic design is typically rooted in engineering paradigms, with Remix and Robo users pursue cooperative and competitive social performances. Activities like character design and robot competitions introduce a social context that motivates learners to focus and reflect upon their understanding of the robotic manipulative itself.

Mechanical Constraints as Computational Constraints in Tabletop Tangible Interfaces

Patten, J. and Ishii, H. 2007. Mechanical constraints as computational constraints in tabletop tangible interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 - May 03, 2007). CHI '07. ACM, New York, NY, 809-818.

DOI: http://doi.acm.org/10.1145/1240624.1240746
This paper presents a new type of human-computer interface called Pico (Physical Intervention in Computational Optimization) based on mechanical constraints that combines some of the tactile feedback and affordances of mechanical systems with the abstract computational power of modern computers. The interface is based on a tabletop interaction surface that can sense and move small objects on top of it. The positions of these physical objects represent and control parameters inside a software application, such as a system for optimizing the configuration of radio towers in a cellular telephone network. The computer autonomously attempts to optimize the network, moving the objects on the table as it changes their corresponding parameters in software. As these objects move, the user can constrain their motion with his or her hands, or many other kinds of physical objects. The interface provides ample opportunities for improvisation by allowing the user to employ a rich variety of everyday physical objects as mechanical constraints. This approach leverages the user's mechanical intuition for how objects respond to physical forces. As well, it allows the user to balance the numerical optimization performed by the computer with other goals that are difficult to quantify. Subjects in an evaluation were more effective at solving a complex spatial layout problem using this system than with either of two alternative interfaces that did not feature actuation.
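
The interaction loop described above, a software optimizer proposing moves that the physical world may veto or truncate, can be sketched abstractly. A minimal hill-climbing illustration under assumed names (the paper does not specify Pico's actual optimization algorithm):

```python
import random

def optimize_with_physical_constraints(positions, cost, constrain,
                                       steps=1000, step_size=5.0):
    """Hill-climbing sketch of optimization under physical intervention.

    positions: list of (x, y) puck locations on the table.
    cost:      scores a layout (e.g., cellular coverage); lower is better.
    constrain: models the physical world. It receives the position the
               software wants a puck to reach and returns the position
               the puck actually reaches (e.g., after meeting a hand).

    All names here are illustrative assumptions.
    """
    current = list(positions)
    for _ in range(steps):
        i = random.randrange(len(current))
        x, y = current[i]
        proposed = (x + random.uniform(-step_size, step_size),
                    y + random.uniform(-step_size, step_size))
        candidate = list(current)
        candidate[i] = constrain(i, proposed)   # physics may truncate the move
        if cost(candidate) <= cost(current):    # keep moves that do not hurt
            current = candidate
    return current
```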

Senspectra: A Computationally Augmented Physical Modeling Toolkit for Sensing and Visualization of Structural Strain

LeClerc, V., Parkes, A., and Ishii, H. 2007. Senspectra: a computationally augmented physical modeling toolkit for sensing and visualization of structural strain. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (San Jose, California, USA, April 28 - May 03, 2007). CHI '07. ACM, New York, NY, 801-804.

DOI: http://doi.acm.org/10.1145/1240624.1240744
We present Senspectra, a computationally augmented physical modeling toolkit designed for sensing and visualization of structural strain. Senspectra seeks to explore a new direction in computational materiality, incorporating the material quality of malleable elements of an interface into its digital control structure. The system functions as a decentralized sensor network consisting of nodes, embedded with computational capabilities and a full spectrum LED, and flexible joints. Each joint functions as an omnidirectional bend sensing mechanism to sense and communicate mechanical strain between neighboring nodes. Using Senspectra, a user incrementally assembles and refines a physical 3D model of discrete elements with a real-time visualization of structural strain. While the Senspectra infrastructure provides a flexible modular sensor network platform, its primary application derives from the need to couple physical modeling techniques utilized in architecture and design disciplines with systems for structural engineering analysis. This offers direct manipulation augmented with visual feedback for an intuitive approach to physical real-time finite element analysis, particularly for organic forms.
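
The node-level behavior described here, sensing joint strain and showing it on a full-spectrum LED, can be sketched as a simple strain-to-color mapping. The green-to-red gradient below is an assumption for illustration, not the published calibration:

    def strain_to_rgb(strain, max_strain=1.0):
        # Map normalized strain onto a green-to-red gradient for the node LED.
        s = max(0.0, min(strain / max_strain, 1.0))
        return (int(255 * s), int(255 * (1 - s)), 0)

    def node_color(joint_strains):
        # Each flexible joint reports a bend estimate; display the worst case.
        return strain_to_rgb(max(joint_strains))

    print(node_color([0.1, 0.7, 0.3]))   # leaning red: high strain at one joint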

Reflecting on Tangible User Interfaces: Three Issues Concerning Domestic Technology

Zigelbaum, J., and Csikszentmihályi, C. Reflecting on Tangible User Interfaces: Three Issues Concerning Domestic Technology. CHI 2007 Workshop on Tangible User Interfaces in Context and Theory (2007).

As tangible interface design continues to gain currency within the mainstream HCI community and further manifests within the space of consumer electronics, how will its impact be realized, and how, as designers of new technologies, can we shape that impact? In this paper we examine the question of choice in technology design from the perspective of the social sciences and then reflect on ways that TUI designers could use these insights within their own practices. Of particular interest to this work is the repurposing and transplantation of current technologies into the domestic environment. The home has been a focus for much of the new work in HCI and in the near future we will see a continuation and increase in the development of domestic technologies. Much of the current work developing connected homes and ubiquitous systems for domestic use is compelling, though it seems to run directly counter to insights gained from the social sciences and philosophy of technology. In particular, computer scientists, designers, anthropologists, and historians all offer very different points of departure concerning commercialization of domestic space and privacy versus data sharing. These differences may indicate a fertile area for research. We've identified three issues for domestic technology design: 1) context and the differentiation of constraints, 2) the privatization of space, and 3) the perception of control. These issues are not original to this work, nor are they exhaustive. Our work here is to discuss them within the context of tangible interface and domestic technology design as a means for critical reflection.

Jabberstamp: embedding sound and voice in traditional drawings

Raffle, H., Vaucelle, C., Wang, R., and Ishii, H. 2007. Jabberstamp: embedding sound and voice in traditional drawings. In ACM SIGGRAPH 2007 Educators Program (San Diego, California, August 05 - 09, 2007). SIGGRAPH '07. ACM, New York, NY, 32.

DOI: http://doi.acm.org/10.1145/1282040.1282074
Children in our culture are accustomed to creating people and things and places - with implied context - in their drawings. From the first days they draw, parents will ask "Who is that? Where are they? What are they doing?" From early on, children learn through drawing to provide the information necessary for an audience to understand the story that is going on in their drawing. Conversely, learning how to contextualize an oral or written story in the absence of images is a much slower process for children, and children's ability to use language to communicate when and where their story takes place is considered a milestone in literacy development. Jabberstamp is the first tool that allows children to synthesize their drawings and voices. To use Jabberstamp, children create drawings, collages or paintings on normal paper. They press a special rubber stamp onto the page to record sounds into their drawings. When children touch the marks of the stamp with a small trumpet, they can hear the sounds play back, retelling the stories they created. In a series of studies, children ages 4-8 use Jabberstamp to convey meaning in their drawings. The system allows collaboration among peers at different developmental levels. Jabberstamp compositions reveal children's narrative styles and their planning strategies. In guided activities, children develop stories by situating sound recordings in their drawings, which suggests future opportunities for hybrid voice-visual tools to support children's emergent literacy.
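
The stamping-and-playback interaction reduces to associating audio clips with page coordinates and doing a nearest-mark lookup when the trumpet touches the page. A minimal sketch under that assumption (all names are hypothetical; the real system senses physical paper):

    import math

    marks = []  # (x, y, clip) triples recorded by stamping

    def stamp(x, y, clip):
        # Record a sound clip at the stamped page position.
        marks.append((x, y, clip))

    def touch(x, y, radius=1.0):
        # Return the clip of the nearest mark within `radius`, if any.
        best = min(marks, key=lambda m: math.hypot(m[0] - x, m[1] - y), default=None)
        if best and math.hypot(best[0] - x, best[1] - y) <= radius:
            return best[2]
        return None

    stamp(3.0, 4.0, "dog_bark.wav")
    stamp(8.0, 2.0, "my_story.wav")
    print(touch(3.2, 4.1))   # -> dog_bark.wav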

Remix and Robo: sampling, sequencing and real-time control of a tangible robotic construction system

Hayes Raffle, Hiroshi Ishii, and Laura Yip. 2007. Remix and Robo: sampling, sequencing and real-time control of a tangible robotic construction system. In Proceedings of the 6th international conference on Interaction design and children (IDC '07). ACM, New York, NY, USA, 89-96.

DOI: http://dx.doi.org/10.1145/1297277.1297295
We present Remix and Robo, new composition- and performance-based tools for robotics control. Remix is a tangible interface used to sample, organize and manipulate gesturally-recorded robotic motions. Robo is a modified game controller used to capture robotic motions, adjust global motion parameters and execute motion recordings in real-time. Children use Remix and Robo to engage in (1) character design and (2) competitive endeavors with Topobo, a constructive assembly system with kinetic memory. Our objective is to provide new entry paths into robotics learning. This paper overviews our design process and reports how users from age 7 to adult use Remix and Robo to engage in different kinds of performative activities. Whereas robotic design is typically rooted in engineering paradigms, with Remix and Robo users pursue cooperative and competitive social performances. Activities like character design and robot competitions introduce a social context that motivates learners to focus and reflect upon their understanding of the robotic manipulative itself.

2006

Glume: Exploring Materiality in a Soft Augmented Modular Modeling System

Parkes, A., LeClerc, V., and Ishii, H. 2006. Glume: exploring materiality in a soft augmented modular modeling system. In CHI '06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 - 27, 2006). CHI '06. ACM, New York, NY, 1211-1216.

DOI: http://doi.acm.org/10.1145/1125451.1125678
This paper presents Glume, a system of modular primitives – six silicone bulbs, embedded with sculptable gel and a full-spectrum LED – attached to a central processing “nucleus.” The nodes communicate capacitively with their neighbors to determine a network topology, taking advantage of the novel conductive characteristics of hair gel. As a modular, scalable platform, Glume provides a system with a discrete internal structure coupled with a soft organic form – much as a skeleton defines the structure of a body – to provide a means for expression and investigation of structures and processes not possible with existing systems.

BodyBeats: Whole-Body, Musical Interfaces for Children

Zigelbaum, J., Millner, A., Desai, B., and Ishii, H. 2006. BodyBeats: whole-body, musical interfaces for children. In CHI '06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 - 27, 2006). CHI '06. ACM, New York, NY, 1595-1600.

DOI: http://doi.acm.org/10.1145/1125451.1125742
This work in progress presents the BodyBeats Suite— three prototypes built to explore the interaction between children and computational musical instruments by using sound and music patterns. Our goals in developing the BodyBeats prototypes are (1) to help children engage their whole bodies while interacting with computers, (2) foster collaboration and pattern learning, and (3) provide a playful interaction for creating sound and music. We posit that electronic instruments for children that incorporate whole-body movement can provide active ways for children to play and learn with technology (while challenging a growing rate of childhood obesity). We describe how we implemented our current BodyBeats prototypes and discuss how users interact with them. We then highlight our plans for future work in the fields of whole-body interaction design, education, and music.

3D and Sequential Representations of Spatial Relationships among Photos

Anabuki, M. and Ishii, H. 2006. 3D and sequential representations of spatial relationships among photos. In CHI '06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 - 27, 2006). CHI '06. ACM, New York, NY, 472-477.

DOI: http://doi.acm.org/10.1145/1125451.1125555
This paper proposes automatic representations of spatial relationships among photos for structure analysis and review of a photographic subject. Based on camera tracking, photos are shown in a 3D virtual reality space to represent global spatial relationships. At the same time, the spatial relationships between two of the photos are represented in slide show sequences. This proposal allows people to organize photos quickly in spatial representations with qualitative meaning.

Mechanical Constraints as Common Ground between People and Computers

Patten, J. M. 2006. Mechanical Constraints as Common Ground between People and Computers. Doctoral Thesis. UMI Order Number: AAI0808956. Massachusetts Institute of Technology.

This thesis presents a new type of human-computer interface based on mechanical constraints that combines some of the tactile feedback and affordances of mechanical systems with the abstract computational power of modern computers. The interface is based on a tabletop interaction surface that can sense and move small objects on top of it. Computation is merged with dynamic physical processes on the tabletop that are exposed to and modified by the user in order to accomplish his or her task. The system places mechanical constraints and mathematical constraints on the same level, allowing users to guide simulations and optimization processes by constraining the motion of physical objects on the interaction surface. The interface provides ample opportunities for improvisation by allowing the user to employ a rich variety of everyday physical objects as interface elements. Subjects in an evaluation were more effective at solving a complex spatial layout problem using this system than with either of two alternative interfaces that did not feature actuation.

SENSPECTRA: An Elastic, Strain-Aware Physical Modeling Interface

Vincent Leclerc. SENSPECTRA: An Elastic, Strain-Aware Physical Modeling Interface. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006.

Senspectra is a computationally augmented physical modeling toolkit designed for sensing and visualization of structural strain. The system functions as a distributed sensor network consisting of nodes, embedded with computational capabilities and a full spectrum LED, which communicate to neighbor nodes to determine a network topology through a system of flexible joints. Each joint, while serving as a data and power bus between nodes, also integrates an omnidirectional bend sensing mechanism, which uses a simple optical occlusion technique to sense and communicate mechanical strain between neighboring nodes. Using Senspectra, a user incrementally assembles and refines a physical 3D model of discrete elements with a real-time visualization of structural strain. While the Senspectra infrastructure provides a flexible modular sensor network platform, its primary application derives from the need to couple physical modeling techniques utilized in the architecture and industrial design disciplines with systems for structural engineering analysis, offering an intuitive approach for physical real-time finite element analysis. Utilizing direct manipulation augmented with visual feedback, the system gives users valuable insights on the global behavior of a constructed system defined as a network of discrete elements.

The Texture of Light

Vaucelle, C. 2006. The texture of light. In ACM SIGGRAPH 2006 Research Posters (Boston, Massachusetts, July 30 - August 03, 2006). SIGGRAPH '06. ACM, New York, NY, 27.

DOI: http://doi.acm.org/10.1145/1179622.1179651
The Texture of Light is research on lighting principles and the exploration of live-feed video metamorphosis in the public space using the reflection of light on transparent materials. The Texture of Light is an attempt to fight the boredom of everyday life. This project employs the simple use of chemistry, Plexiglas, and plastic patterns to form a reconstruction of reality, giving it a texture and an expressive form. The transformation of the live video feed comes from physical, plastic circles that act as different masks of reality. These masks can be moved around and swapped by the public, enabling collective expression. This metamorphosis of the public space is presented in real time as a moving painting and is projected on city walls. The public can record video clips of their 'moving painting' and project them back onto different city locations.

Affective TouchCasting

Bonanni, L. and Vaucelle, C. 2006. Affective TouchCasting. In ACM SIGGRAPH 2006 Sketches (Boston, Massachusetts, July 30 - August 03, 2006). SIGGRAPH '06. ACM, New York, NY, 35.

DOI: http://doi.acm.org/10.1145/1179849.1179893
The sense of touch is not only informative: certain kinds of touch are directly related to emotions. Haptics can enrich the experience of broadcast media through tactile stimulus that is mapped to emotional response and distributed over the body. This sketch applies affective touch research to haptic broadcast in a wearable device that can record, distribute and play back touch information. TouchCasting augments broadcast media with affective haptics that can be experienced in public as a new form of art.

Collaborative Simulation Interface for Planning Disaster Measures

Kobayashi, K., Narita, A., Hirano, M., Kase, I., Tsuchida, S., Omi, T., Kakizaki, T., and Hosokawa, T. 2006. Collaborative simulation interface for planning disaster measures. In CHI '06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 - 27, 2006). CHI '06. ACM, New York, NY, 977-982.

DOI: http://doi.acm.org/10.1145/1125451.1125639

PlayPals: Tangible Interfaces for Remote Communication and Play

Bonanni, L., Vaucelle, C., Lieberman, J., and Zuckerman, O. 2006. PlayPals: tangible interfaces for remote communication and play. In CHI '06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 - 27, 2006). CHI '06. ACM, New York, NY, 574-579.

DOI: http://doi.acm.org/10.1145/1125451.1125572
PlayPals is a set of wireless figurines and electronic accessories that provides children with a playful way to communicate between remote locations. PlayPals is designed for children aged 5-8 to share multimedia experiences and virtual co-presence. We learned from our pilot study that embedding digital communication into existing play patterns enhances both remote play and communication.

TapTap: A Haptic Wearable for Asynchronous Distributed Touch Therapy

Bonanni, L., Vaucelle, C., Lieberman, J., and Zuckerman, O. 2006. TapTap: a haptic wearable for asynchronous distributed touch therapy. In CHI '06 Extended Abstracts on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 - 27, 2006). CHI '06. ACM, New York, NY, 580-585.

DOI: http://doi.acm.org/10.1145/1125451.1125573
TapTap is a wearable haptic system that allows nurturing human touch to be recorded, broadcast and played back for emotional therapy. Haptic input/output modules in a convenient modular scarf provide affectionate touch that can be personalized. We present a working prototype informed by a pilot study.

Beyond Record and Play: Backpacks: Tangible Modulators for Kinetic Behavior

Raffle, H., Parkes, A., Ishii, H., and Lifton, J. 2006. Beyond record and play: backpacks: tangible modulators for kinetic behavior. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Montréal, Québec, Canada, April 22 - 27, 2006). R. Grinter, T. Rodden, P. Aoki, E. Cutrell, R. Jeffries, and G. Olson, Eds. CHI '06. ACM, New York, NY, 681-690.

DOI: http://doi.acm.org/10.1145/1124772.1124874
Digital Manipulatives embed computation in familiar children's toys and provide means for children to design behavior. Some systems use "record and play" as a form of programming by demonstration that is intuitive and easy to learn. With others, children write symbolic programs with a GUI and download them into a toy, an approach that is conceptually extensible, but is inconsistent with the physicality of educational manipulatives. The challenge we address is to create a tangible interface that can retain the immediacy and emotional engagement of "record and play" and incorporate a mechanism for real-time and direct modulation of behavior during program execution. We introduce the Backpacks, modular physical components that children can incorporate into robotic creations to modulate frequency, amplitude, phase and orientation of motion recordings. Using Backpacks, children can investigate basic kinematic principles that underlie why their specific creations exhibit the specific behaviors they observe. We demonstrate that Backpacks make tangible some of the benefits of symbolic abstraction, and introduce sensors, feedback and behavior modulation to the record and play paradigm. Through our review of user studies with children ages 6-15, we argue that Backpacks extend the conceptual limits of record and play with an interface that is consistent with both the physicality of educational manipulatives and the local-global systems dynamics that are characteristic of complex robots.
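
Modulating the frequency, amplitude, and phase of a motion recording can be pictured as simple transforms on a sampled joint-angle trace. The following is a hedged sketch, not the paper's implementation; the parameter names and wrap-around resampling are assumptions:

    import math

    # A recorded motion: joint angle sampled at regular intervals (degrees).
    recording = [45 * math.sin(2 * math.pi * t / 20) for t in range(20)]

    def modulate(samples, amplitude=1.0, frequency=1.0, phase=0):
        # Replay a motion recording with scaled amplitude, re-timed
        # frequency, and shifted phase (indices wrap around the loop).
        n = len(samples)
        out = []
        for i in range(n):
            src = int(i * frequency + phase) % n   # frequency/phase remapping
            out.append(amplitude * samples[src])   # amplitude scaling
        return out

    faster_smaller = modulate(recording, amplitude=0.5, frequency=2.0, phase=5)
    print([round(a, 1) for a in faster_smaller])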

Interaction Techniques for Musical Performance with Tabletop Tangible Interfaces

Patten, J., Recht, B., and Ishii, H. 2006. Interaction techniques for musical performance with tabletop tangible interfaces. In Proceedings of the 2006 ACM SIGCHI international Conference on Advances in Computer Entertainment Technology (Hollywood, California, June 14 - 16, 2006). ACE '06, vol. 266. ACM, New York, NY, 27.

DOI: http://doi.acm.org/10.1145/1178823.1178856
We present a set of interaction techniques for electronic musical performance using a tabletop tangible interface. Our system, the Audiopad, tracks the positions of objects on a tabletop surface and translates their motions into commands for a musical synthesizer. We developed and refined these interaction techniques through an iterative design process, in which new interaction techniques were periodically evaluated through performances and gallery installations. Based on our experience refining the design of this system, we conclude that tabletop interfaces intended for collaborative use should use interaction techniques designed to be legible to onlookers. We also conclude that these interfaces should allow users to spatially reconfigure the objects in the interface in ways that are personally meaningful.

2005

Designing the "World as your Palette"

Ryokai, K., Marti, S., and Ishii, H. 2005. Designing the world as your palette. In CHI '05 Extended Abstracts on Human Factors in Computing Systems (Portland, OR, USA, April 02 - 07, 2005). CHI '05. ACM, New York, NY, 1037-1049.

DOI: http://doi.acm.org/10.1145/1056808.1056816
"The World as your Palette" is our ongoing effort to design and develop tools to allow artists to create visual art projects with elements (specifically, the color, texture, and moving patterns) extracted directly from their personal objects and their immediate environment. Our tool called "I/O Brush" looks like a regular physical paintbrush, but contains a video camera, lights, and touch sensors. Outside of the drawing canvas, the brush can pick up colors, textures, and movements of a brushed surface. On the canvas, artists can draw with the special "ink" they just picked up from their immediate environment. We describe the evolution and development of our system, from kindergarten classrooms to an art museum, as well as the reactions of our users to the growing expressive capabilities of our brush, as an iterative design process.

The world as a palette : painting with attributes of the environment

Kimiko Ryokai. The world as a palette: painting with attributes of the environment. Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005.

2004

Bottles: A Transparent Interface as a Tribute to Mark Weiser

Hiroshi Ishii, "Bottles: A Transparent Interface as a Tribute to Mark Weiser." IEICE TRANSACTIONS on Information and Systems Vol.E87-D No.6 pp.1299-1311

This paper first discusses the misinterpretation of the concept of "ubiquitous computing" that Mark Weiser originally proposed in 1991. Weiser's main message was not the ubiquity of computers, but the transparency of the interface, which determines users' perception of digital technologies embedded seamlessly in our physical environment. To explore Weiser's philosophy of transparency in interfaces, this paper presents the design of an interface that uses glass bottles as "containers" and "controls" for digital information. The metaphor is a perfume bottle: instead of scent, the bottles have been filled with music -- classical, jazz, and techno music. Opening each bottle releases the sound of a specific instrument accompanied by dynamic colored light. Physical manipulation of the bottles -- opening and closing -- is the primary mode of interaction for controlling their musical contents. The bottles illustrate Mark Weiser's vision of the transparent (or invisible) interface that weaves itself into the fabric of everyday life. The bottles also exploit the emotional aspects of glass bottles that are tangible and visual, and evoke the smell of perfume and the taste of exotic beverages. This paper describes the design goals of the bottle interface, the arrangement of musical content, the implementation of the wireless electromagnetic tag technology, and the feedback from users who have played with the system.

Topobo: A 3-D Constructive Assembly System with Kinetic Memory

Hayes Solos Raffle. Topobo: A 3-D Constructive Assembly System with Kinetic Memory. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004.

DOI: http://hdl.handle.net/1721.1/26920

Tangible User Interfaces (TUIs): A Novel Paradigm for GIS

Ratti, C., Wang, Y., Ishii, H., Piper, B., and Frenchman, D., "Tangible User Interfaces (TUIs): A Novel Paradigm for GIS," Trans. GIS, vol. 8, no. 4, 2004, pp. 407–421.

DOI: http://dx.doi.org/10.1111/j.1467-9671.2004.00193.x
In recent years, an increasing amount of effort has gone into the design of GIS user interfaces. On the one hand, Graphical User Interfaces (GUIs) with a high degree of sophistication have replaced line-driven commands of first-generation GIS. On the other hand, a number of alternative approaches have been suggested, most notably those based on Virtual Environments (VEs). In this paper we discuss a novel interface for GIS, which springs from recent work carried out in the field of Tangible User Interfaces (TUIs). The philosophy behind TUIs is to allow people to interact with computers via familiar tangible objects, therefore taking advantage of the richness of the tactile world combined with the power of numerical simulations. Two experimental systems, named Illuminating Clay and SandScape, are described here and their applications to GIS are examined. Conclusions suggest that these interfaces might streamline the landscape design process and result in a more effective use of GIS, especially when distributed decision-making and discussion with non-experts are involved.

egaku: Enhancing the Sketching Process

Yoon, J., Ryokai, K., Dyner, C., Alonso, J., and Ishii, H. 2004. egaku: enhancing the sketching process. In ACM SIGGRAPH 2004 Posters (Los Angeles, California, August 08 - 12, 2004). R. Barzel, Ed. SIGGRAPH '04. ACM, New York, NY, 42.

DOI: http://doi.acm.org/10.1145/1186415.1186464
egaku is a tabletop user interface designed to enhance the ideation process with seamless image management tools. Designers sketch ideas as the system captures high-resolution images of the sketches and organizes them in a transparent image management structure. The system's ability to determine and recognize layer associations allows users to quickly and intuitively visualize, retrieve, navigate through, and switch between layers of information without the hassle of traversing through multiple sheets of paper. With its strong emphasis on maintaining and enhancing the natural affordances of physical tracing paper, egaku allows users to overlay multiple digital translucent images to compose and compare different designs.

Phoxel-Space: an Interface for Exploring Volumetric Data with Physical Voxels

Ratti, C., Wang, Y., Piper, B., Ishii, H., and Biderman, A. 2004. PHOXEL-SPACE: an interface for exploring volumetric data with physical voxels. In Proceedings of the 5th Conference on Designing interactive Systems: Processes, Practices, Methods, and Techniques (Cambridge, MA, USA, August 01 - 04, 2004). DIS '04. ACM, New York, NY, 289-296.

DOI: http://doi.acm.org/10.1145/1013115.1013156
Phoxel-Space is an interface to enable the exploration of voxel data through the use of physical models and materials. Our goal is to improve the means to intuitively navigate and understand complex 3-dimensional datasets. The system works by allowing the user to define a free-form geometry that can be utilized as a cutting surface with which to intersect a voxel dataset. The intersected voxel values are projected back onto the surface of the physical material. The paper describes how the interface approach builds on previous graphical, virtual and tangible interface approaches and how Phoxel-Space can be used as a representational aid in the example application domains of biomedicine, geophysics and fluid dynamics simulation.
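
The intersection operation described here can be illustrated as sampling a 3D array at a captured surface height for every tabletop pixel. A minimal numpy sketch; the grid sizes and nearest-neighbor sampling are assumptions for illustration, not the system's actual pipeline:

    import numpy as np

    # Hypothetical 3D dataset, e.g. a scalar field on a 64^3 voxel grid.
    voxels = np.random.rand(64, 64, 64)

    # Captured geometry: a height value z(x, y) for each tabletop pixel,
    # already scaled to voxel indices.
    height = (32 + 10 * np.random.rand(64, 64)).astype(int)

    # Intersect: for every (x, y), read the voxel at the surface's height.
    xs, ys = np.meshgrid(np.arange(64), np.arange(64), indexing="ij")
    cut = voxels[xs, ys, np.clip(height, 0, 63)]

    # `cut` is the image that would be projected back onto the material.
    print(cut.shape)   # (64, 64)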

I/O Brush: Drawing with Everyday Objects as Ink

Ryokai, K., Marti, S., and Ishii, H. 2004. I/O brush: drawing with everyday objects as ink. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 - 29, 2004). CHI '04. ACM, New York, NY, 303-310.

DOI: http://doi.acm.org/10.1145/985692.985731
We introduce I/O Brush, a new drawing tool aimed at young children, ages four and up, to explore colors, textures, and movements found in everyday materials by “picking up” and drawing with them. I/O Brush looks like a regular physical paintbrush but has a small video camera with lights and touch sensors embedded inside. Outside of the drawing canvas, the brush can pick up color, texture, and movement of a brushed surface. On the canvas, children can draw with the special “ink” they just picked up from their immediate environment. In our study with kindergarteners, we found that children not only produced complex works of art using I/O Brush, but they also engaged in explicit talk about patterns and features available in their environment. I/O Brush invites children to explore the transformation from concrete and familiar raw material into abstract concepts about patterns of colors, textures and movements.
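
The "pick up, then paint" cycle can be sketched as capturing a small camera patch at the brush tip and stamping it onto a canvas wherever the brush touches. A hypothetical numpy illustration; the patch capture and stamping policy are assumptions:

    import numpy as np

    canvas = np.zeros((100, 100, 3))          # RGB drawing canvas

    def pick_up(camera_frame, cx, cy, size=8):
        # Grab the patch under the brush tip as the new "ink".
        return camera_frame[cx:cx + size, cy:cy + size].copy()

    def draw(ink, x, y):
        # Stamp the picked-up patch wherever the brush touches the canvas.
        h, w, _ = ink.shape
        canvas[x:x + h, y:y + w] = ink

    frame = np.random.rand(240, 320, 3)       # stand-in for a camera image
    ink = pick_up(frame, 120, 160)            # brush touches a real surface
    draw(ink, 50, 50)                         # then paints with that texture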

Topobo: A Constructive Assembly System with Kinetic Memory

Raffle, H. S., Parkes, A. J., and Ishii, H. 2004. Topobo: a constructive assembly system with kinetic memory. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Vienna, Austria, April 24 - 29, 2004). CHI '04. ACM, New York, NY, 647-654.

DOI: http://doi.acm.org/10.1145/985692.985774
We introduce Topobo, a 3D constructive assembly system embedded with kinetic memory, the ability to record and playback physical motion. Unique among modeling systems is Topobo's coincident physical input and output behaviors. By snapping together a combination of Passive (static) and Active (motorized) components, people can quickly assemble dynamic biomorphic forms like animals and skeletons with Topobo, animate those forms by pushing, pulling, and twisting them, and observe the system repeatedly play back those motions. For example, a dog can be constructed and then taught to gesture and walk by twisting its body and legs. The dog will then repeat those movements and walk repeatedly.
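
Record-and-play with kinetic memory can be modeled as time-stamped servo angles captured during manipulation and replayed in a loop with the same timing. The sketch below is a toy software model under that assumption; the ActiveComponent class and set_servo callback are hypothetical, since real Topobo parts do this in embedded hardware:

    import time

    class ActiveComponent:
        """Toy model of a motorized Topobo piece with kinetic memory."""
        def __init__(self):
            self.recording = []            # (seconds_since_start, angle) pairs

        def record(self, start, angle):
            self.recording.append((time.time() - start, angle))

        def play(self, set_servo, loops=2):
            # Replay the captured gesture repeatedly, preserving its timing.
            for _ in range(loops):
                t0 = time.time()
                for t, angle in self.recording:
                    time.sleep(max(0.0, t - (time.time() - t0)))
                    set_servo(angle)

    leg = ActiveComponent()
    start = time.time()
    for angle in (0, 30, 60, 30, 0):           # a child twists the leg
        leg.record(start, angle)
        time.sleep(0.05)
    leg.play(lambda a: print("servo ->", a))   # the creation repeats the walk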

Bringing clay and sand into digital design — continuous tangible user interfaces

Ishii, H., Ratti, C., Piper, B., Wang, Y., Biderman, A., and Ben-Joseph, E. 2004. Bringing Clay and Sand into Digital Design — Continuous Tangible user Interfaces. BT Technology Journal 22, 4 (Oct. 2004), 287-299.

DOI: http://dx.doi.org/10.1023/B:BTTJ.0000047607.16164.16
Tangible user interfaces (TUIs) provide physical form to digital information and computation, facilitating the direct manipulation of bits. Our goal in TUI development is to empower collaboration, learning, and decision-making by using digital technology and at the same time taking advantage of human abilities to grasp and manipulate physical objects and materials. This paper presents a new generation of TUIs that enable dynamic sculpting and computational analysis using digitally augmented continuous physical materials. These new types of TUI, which we have termed ‘Continuous TUIs’, offer rapid form giving in combination with computational feedback. Two experimental systems and their applications in the domain of landscape architecture are discussed here, namely ‘Illuminating Clay’ and ‘SandScape’. Our results suggest that by exploiting the physical properties of continuous soft materials such as clay and sand, it is possible to bridge the division between physical and digital forms and potentially to revolutionise the current design process.

Super Cilia Skin: A Textural Interface

Raffle, H., Tichenor, J., Ishii, H. 2004. Super Cilia Skin: A Textural Interface. Textile, Volume 2, Issue 3, pp. 1–19

Super Cilia Skin is a literal membrane separating a computer from its environment. Like our skin, it is a haptic I/O membrane that can sense and simulate movement and wind flow. Our intention is to have it be universally applied to sheathe any surface. As a display, it can mimic another person’s gesture over a distance via a form of tangible telepresence. A hand-sized interface covered with Super Cilia Skin would produce subtle changes in surface texture that feel much like a telepresent "butterfly kiss."

Topobo: A Gestural Design Tool with Kinetic Memory

Amanda Parkes. Topobo: A Gestural Design Tool with Kinetic Memory. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2004.

DOI: http://hdl.handle.net/1721.1/28768
The modeling of kinetic systems, both in physical materials and virtual simulations, provides a methodology to better understand and explore the forces and dynamics of our physical environment. The need to experiment, prototype and model with programmable kinetic forms is becoming increasingly important as digital technology becomes more readily embedded in physical structures and gives real-time variable data the capacity to transform the structures themselves. This thesis introduces Topobo, a gestural design tool embedded with kinetic memory--the ability to record, playback, and transform physical motion in three dimensional space. As a set of kinetic building blocks, Topobo records and repeats the body's gesture while the system's peer-to-peer networking scheme provides the capability to pass and transform a gesture. This creates a means to represent and understand algorithmic simulations in a physical material, providing a physical demonstration of how a simple set of rules can lead to complex form and behavior. Topobo takes advantage of the editability of computer data combined with the physical immediacy of a tangible model to provide a means for expression and investigation of kinetic patterns and processes not possible with existing materials.

PINS : a haptic computer interface system.

Bradley Carter Kaanta. PINS : a haptic computer interface system. Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 2004.

DOI: http://hdl.handle.net/1721.1/28419

2003

Tangible Query Interfaces: Physically Constrained Tokens for Manipulating Database Queries

Ullmer B, Ishii H, Jacob R.J.K (2003) Tangible query interfaces: physically constrained tokens for manipulating database queries. In: Proceedings of the 9th IFIP international conference on human-computer interaction (INTERACT 2003), Zurich, Switzerland, September 2003.

We present a new approach for using physically constrained tokens to express, manipulate, and visualize parameterized database queries. This method extends tangible interfaces to enable interaction with large aggregates of information. We describe two interface prototypes that use physical tokens to represent database parameters. These tokens are manipulated upon physical constraints, which map compositions of tokens onto interpretations including database queries, views, and Boolean operations. We propose a framework for “token + constraint” interfaces, and compare one of our prototypes with a comparable graphical interface in a preliminary user study.
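
The "token + constraint" idea can be pictured as a direct mapping from physical token state to a parameterized query: each token binds a field, its position on a rack binds a value range, and co-placement composes clauses. The SQL rendering below is an illustrative assumption, not the prototypes' actual query engine:

    # Each token stands for one database parameter; its position along a
    # slider-like physical constraint selects a value range.
    class Token:
        def __init__(self, field, lo, hi):
            self.field, self.lo, self.hi = field, lo, hi

        def clause(self):
            return f"{self.field} BETWEEN {self.lo} AND {self.hi}"

    def query(tokens, table="homes"):
        # Tokens placed on the same rack are ANDed together.
        where = " AND ".join(t.clause() for t in tokens) or "1=1"
        return f"SELECT * FROM {table} WHERE {where}"

    rack = [Token("price", 100000, 250000), Token("bedrooms", 2, 4)]
    print(query(rack))
    # SELECT * FROM homes WHERE price BETWEEN 100000 AND 250000
    #                       AND bedrooms BETWEEN 2 AND 4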

Super Cilia Skin: An Interactive Membrane

Raffle, H., Joachim, M. W., and Tichenor, J. 2003. Super cilia skin: an interactive membrane. In CHI '03 Extended Abstracts on Human Factors in Computing Systems (Ft. Lauderdale, Florida, USA, April 05 - 10, 2003). CHI '03. ACM, New York, NY, 808-809.

DOI: http://doi.acm.org/10.1145/765891.766004
Super Cilia Skin is a literal membrane separating a computer from its environment. Like our skin, it is a haptic I/O membrane that can sense and simulate movement and wind flow. Our intention is to have it be universally applied to sheathe any surface. As a display, it can mimic another person’s gesture over a distance via a form of tangible telepresence. A hand-sized interface covered with Super Cilia Skin would produce subtle changes in surface texture that feel much like a telepresent "butterfly kiss."

Applications of Computer-Controlled Actuation in Workbench Tangible User Interfaces

Daniel Maynes-Aminzade. Applications of Computer-Controlled Actuation in Workbench Tangible User Interfaces. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003.

The actuated workbench : 2D actuation in tabletop tangible interfaces

Gian Antonio Pangaro. The actuated workbench : 2D actuation in tabletop tangible interfaces. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2003.

DOI: http://hdl.handle.net/1721.1/17620
The Actuated Workbench is a new actuation mechanism that uses magnetic forces to control the two-dimensional movement of physical objects on flat surfaces. This mechanism is intended for use with existing tabletop Tangible User Interfaces, providing computer-controlled movement of the physical objects on the table, and creating an additional feedback layer for Human Computer Interaction (HCI). Use of this actuation technique makes possible new kinds of physical interactions with tabletop interfaces, and allows the computer to maintain consistency between the physical and digital states of data objects in the interface. This thesis focuses on the design and implementation of the actuation mechanism as an enabling technology, introduces new techniques for motion control, and discusses practical and theoretical implications of computer-controlled movement of physical objects in tabletop tangible interfaces.

2002

The Actuated Workbench: Computer-Controlled Actuation in Tabletop Tangible Interfaces

Pangaro, G., Maynes-Aminzade, D., Ishii, H. 2003. The Actuated Workbench: Computer-Controlled Actuation in Tabletop Interfaces. ACM Trans. Graph. 22, 3 (Jul. 2003), 699-699.

DOI: http://doi.acm.org/10.1145/882262.882330
The Actuated Workbench is a device that uses magnetic forces to move objects on a table in two dimensions. It is intended for use with existing tabletop tangible interfaces, providing an additional feedback loop for computer output, and helping to resolve inconsistencies that otherwise arise from the computer's inability to move objects on the table.
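
Abstracting away the electromagnet array, the feedback loop amounts to a 2D position controller that nudges each object toward its computed target on every tick. A toy proportional-control sketch; the gain and tick structure are assumptions, not the device's control scheme:

    def step_toward(pos, target, gain=0.2):
        # One control tick: move the object a fraction of the remaining error.
        return (pos[0] + gain * (target[0] - pos[0]),
                pos[1] + gain * (target[1] - pos[1]))

    pos, target = (0.0, 0.0), (10.0, 5.0)
    for _ in range(20):
        pos = step_toward(pos, target)   # in hardware: energize nearby magnets
    print(pos)                           # converges close to (10.0, 5.0)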

ComTouch: A Vibrotactile Communication Device

Angela Chang. ComTouch: A Vibrotactile Mobile Communication Device. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

Tangible Interfaces for Manipulating Aggregates of Digital Information

Brygg Ullmer. Tangible Interfaces for Manipulating Aggregates of Digital Information. Thesis (Ph. D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

The Illuminated Design Environment: a 3D Tangible Interface for Landscape Analysis

Ben Piper. The Illuminated Design Environment: a 3D Tangible Interface for Landscape Analysis. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

Hover: Conveying Remote Presence

Maynes-Aminzade, D., Tan, B., Goulding, K., and Vaucelle, C. 2002. Hover: conveying remote presence. In ACM SIGGRAPH 2002 Conference Abstracts and Applications (San Antonio, Texas, July 21 - 26, 2002). SIGGRAPH '02. ACM, New York, NY, 194-194.

DOI: http://doi.acm.org/10.1145/1242073.1242207
This sketch presents Hover, a device that enhances remote telecommunication by providing a sense of the activity and presence of remote users. The motion of a remote persona is manifested as the playful movements of a ball floating in midair. Hover is both a communication medium and an aesthetic object.

Audiopad: A Tag Based Interface for Musical Performance

Patten, J., Recht, B., and Ishii, H. 2002. Audiopad: a tag-based interface for musical performance. In Proceedings of the 2002 Conference on New interfaces For Musical Expression (Dublin, Ireland, May 24 - 26, 2002). E. Brazil, Ed. New Interfaces For Musical Expression. National University of Singapore, Singapore, 1-6.

We present Audiopad, an interface for musical performance that aims to combine the modularity of knob based controllers with the expressive character of multidimensional tracking interfaces. The performer's manipulations of physical pucks on a tabletop control a real-time synthesis process. The pucks are embedded with LC tags that the system tracks in two dimensions with a series of specially shaped antennae. The system projects graphical information on and around the pucks to give the performer sophisticated control over the synthesis process.

Illuminating Clay: A Tangible Interface with potential GRASS applications

Piper B., Ratti C., Ishii H., 2002, Illuminating Clay: a tangible interface with potential GRASS applications. Proceedings of the open-source GIS - GRASS users conference, Trento, Italy, September 2002.

This paper introduces Illuminating Clay, an alternative interface for manipulating and navigating landscape representations that has been designed according to the specific needs of the landscape analyst.

ComTouch: A Vibrotactile Communication Device

Chang, A., O'Modhrain, S., Jacob, R., Gunther, E., and Ishii, H. 2002. ComTouch: design of a vibrotactile communication device. In Proceedings of the 4th Conference on Designing interactive Systems: Processes, Practices, Methods, and Techniques (London, England, June 25 - 28, 2002). DIS '02. ACM, New York, NY, 312-320.

DOI: http://doi.acm.org/10.1145/778712.778755
We describe the design of ComTouch, a device that augments remote voice communication with touch, by converting hand pressure into vibrational intensity between users in real-time. The goal of this work is to enrich inter-personal communication by complementing voice with a tactile channel. We present preliminary user studies performed on 24 people to observe possible uses of the tactile channel when used in conjunction with audio. By recording and examining both audio and tactile data, we found strong relationships between the two communication channels. Our studies show that users developed an encoding system similar to that of Morse code, as well as three original uses: emphasis, mimicry, and turn-taking. We demonstrate the potential of the tactile channel to enhance the existing voice communication channel.
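
At its core the described mapping is monotonic: a pressure reading on one handset becomes a vibration amplitude on the other. A hedged sketch, with the sensor range and a linear transfer curve assumed for illustration:

    def pressure_to_vibration(pressure, p_max=1023, v_max=255):
        # Map a raw pressure reading to a vibration motor amplitude.
        # A simple linear mapping; the real device's curve may differ.
        pressure = max(0, min(pressure, p_max))
        return int(pressure / p_max * v_max)

    # One side of the link: read local pressure, drive the remote motor.
    def relay(read_pressure, send, loops=3):
        for _ in range(loops):
            send(pressure_to_vibration(read_pressure()))

    readings = iter([0, 512, 1023])
    relay(lambda: next(readings), lambda v: print("remote motor amplitude:", v))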

PegBlocks: a Learning Aid for the Elementary Classroom

Ben Piper and Hiroshi Ishii. 2002. PegBlocks: a learning aid for the elementary classroom. In CHI ’02 Extended Abstracts on Human Factors in Computing Systems (CHI EA ’02). Association for Computing Machinery, New York, NY, USA, 686–687.

DOI: https://doi.org/10.1145/506443.506546

In this paper we describe the implementation of PegBlocks - an educational toy that can be used to illustrate some basic physics principles to elementary students.

Illuminating Clay: A 3-D Tangible Interface for Landscape Analysis

Piper, B., Ratti, C., and Ishii, H. 2002. Illuminating clay: a 3-D tangible interface for landscape analysis. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves (Minneapolis, Minnesota, USA, April 20 - 25, 2002). CHI '02. ACM, New York, NY, 355-362.

DOI: http://doi.acm.org/10.1145/503376.503439
This paper describes a novel system for the real-time computational analysis of landscape models. Users of the system - called Illuminating Clay - alter the topography of a clay landscape model while the changing geometry is captured in real-time by a ceiling-mounted laser scanner. A depth image of the model serves as an input to a library of landscape analysis functions. The results of this analysis are projected back into the workspace and registered with the surfaces of the model. We describe a scenario for which this kind of tool has been developed and we review past work that has taken a similar approach. We describe our system architecture and highlight specific technical issues in its implementation. We conclude with a discussion of the benefits of the system in combining the tangible immediacy of physical models with the dynamic capabilities of computational simulations.
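
The scan-analyze-project loop can be illustrated for a single analysis function such as slope, computed from the captured depth image. A minimal numpy sketch; the grid size and the choice of slope formula are assumptions:

    import numpy as np

    # Depth image of the clay model: elevation z at each scanned grid cell.
    elevation = np.random.rand(128, 128) * 10.0

    def slope(z, cell_size=1.0):
        # One landscape-analysis function: slope magnitude in degrees.
        dz_dy, dz_dx = np.gradient(z, cell_size)
        return np.degrees(np.arctan(np.hypot(dz_dx, dz_dy)))

    # The resulting image would be projected back onto the clay surface,
    # registered with the model's geometry.
    projection = slope(elevation)
    print(projection.min(), projection.max())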

A Tangible Interface for Organizing Information Using a Grid

Jacob, R. J., Ishii, H., Pangaro, G., and Patten, J. 2002. A tangible interface for organizing information using a grid. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: Changing Our World, Changing Ourselves (Minneapolis, Minnesota, USA, April 20 - 25, 2002). CHI '02. ACM, New York, NY, 339-346.

DOI: http://doi.acm.org/10.1145/503376.503437
The task of organizing information is typically performed either by physically manipulating note cards or sticky notes or by arranging icons on a computer with a graphical user interface. We present a new tangible interface platform for manipulating discrete pieces of abstract information, which attempts to combine the benefits of each of these two alternatives into a single system. We developed interaction techniques and an example application for organizing conference papers. We assessed the effectiveness of our system by experimentally comparing it to both graphical and paper interfaces. The results suggest that our tangible interface can provide a more effective means of organizing, grouping, and manipulating data than either physical operations or graphical computer interaction alone.

Dolltalk: a computational toy to enhance children's creativity

Vaucelle, C. and Jehan, T. 2002. Dolltalk: a computational toy to enhance children's creativity. In CHI '02 Extended Abstracts on Human Factors in Computing Systems (Minneapolis, Minnesota, USA, April 20 - 25, 2002). CHI '02. ACM, New York, NY, 776-777.

DOI: http://doi.acm.org/10.1145/506443.506592
This paper presents a novel approach and interface for encouraging children to tell and act out original stories. Dolltalk is a toy that simulates speech recognition by capturing the gestures and speech of a child. The toy then plays back a child's pretend-play speech in altered voices representing the characters of the child's story. Dolltalk's tangible interface and ability to retell a child's story may enhance a child's creativity in narrative elaboration.

Dolltalk: A computational toy to enhance narrative perspective-taking

Cati Vaucelle. Dolltalk: A computational toy to enhance narrative perspective-taking. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation

Hiroshi Ishii, Eran Ben-Joseph, John Underkoffler, Luke Yeung, Dan Chak, Zahra Kanji, and Ben Piper. 2002. Augmented Urban Planning Workbench: Overlaying Drawings, Physical Models and Digital Simulation. In Proceedings of the 1st International Symposium on Mixed and Augmented Reality (ISMAR '02). IEEE Computer Society, Washington, DC, USA, 203-.

DOI: http://doi.ieeecomputersociety.org/10.1109/ISMAR.2002.1115090
There is a problem in the spatial and temporal separation between the varying forms of representation used in urban design. Sketches, physical models, and more recently computational simulation, while each serving a useful purpose, tend to be incompatible forms of representation. The contemporary designer is required to assimilate these divergent media into a single mental construct and in so doing is distracted from the central process of design. We propose an augmented reality workbench called "Luminous Table" that attempts to address this issue by integrating multiple forms of physical and digital representations. 2D drawings, 3D physical models, and digital simulation are overlaid into a single information space in order to support the urban design process. We describe how the system was used in a graduate design course and discuss how the simultaneous use of physical and digital media allowed for a more holistic design approach. We also discuss the need for future technical improvements.

2001

Urban Simulation and the Luminous Planning Table

Ben-Joseph, E., Ishii, H., Underkoffler, J., Piper, B. & Yeung, L. 2001. Urban Simulation and the Luminous Planning Table: Bridging the Gap between the Digital and the Tangible , Journal of Planning Education and Research, 21, 195-202

DOI: http://dx.doi.org/10.1177/0739456X0102100207
Multi-layered manipulative platforms that integrate digital and physical representations will have a significant impact on urban design and planning processes in the future. The usefulness of these platforms will be in their ability to combine and update digital and tangible data in seamless ways to enhance the design process of the professional and the communication process with the public. The Luminous Planning Table is one of the first prototypes that use a tangible computerized interface. The use of this system is unique in the design and presentation process in which, at the moment, the activity of viewing physical models and the viewing of animation and computerized simulations are separate. This ability to engage and provide an integrated medium for information delivery and understanding is promising in its pedagogical, professional, and public engagement outcomes.

Pinwheels: Visualizing Information Flow in an Architectural Space

Hiroshi Ishii, Sandia Ren, and Phil Frei. 2001. Pinwheels: visualizing information flow in an architectural space. In CHI '01 extended abstracts on Human factors in computing systems (CHI EA '01). ACM, New York, NY, USA, 111-112.

DOI: http://doi.acm.org/10.1145/634067.634135
We envision that the architectural spaces we inhabit will become an interface between humans and online digital information. We have been designing ambient information displays to explore the use of kinetic physical objects to present information at the periphery of human perception. This paper reports the design of a large-scale Pinwheels installation made of 40 computer-controlled pinwheel units in a museum context. The Pinwheels spin in a “wind of bits” that blows from cyberspace. The array of spinning pinwheels presents information within an architectural space through subtle changes in movement and sound. We describe the iterative design and implementation of the Pinwheels, and discuss design issues.
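
One way to picture the "wind of bits" is as a mapping from a measured data rate to per-unit spin speeds across the 40-unit array. A toy sketch; the capacity constant and spatial attenuation are invented for illustration:

    def spin_speeds(bit_rate, units=40, max_rpm=120.0, capacity=1e6):
        # Map an information flow (bits/s) to per-unit pinwheel speeds.
        # A gentle gradient across the array stands in for spatial variation.
        base = min(bit_rate / capacity, 1.0) * max_rpm
        return [round(base * (0.5 + 0.5 * (i / (units - 1))), 1)
                for i in range(units)]

    print(spin_speeds(250_000)[:5])   # light traffic: a gentle breeze of bits
    print(spin_speeds(900_000)[:5])   # heavy traffic: faster spin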

Bottles as a minimal interface to access digital information

Hiroshi Ishii, Ali Mazalek, and Jay Lee. 2001. Bottles as a minimal interface to access digital information. In CHI '01 extended abstracts on Human factors in computing systems (CHI EA '01). ACM, New York, NY, USA, 187-188.

DOI: http://dx.doi.org/10.1145/634067.634180
We present the design of a minimal interface to access digital information using glass bottles as "containers" and "controls". The project illustrates our attempt to explore the transparency of an interface that weaves itself into the fabric of everyday life, and exploits the emotional aspects of glass bottles that are both tangible and visual. This paper describes the design of the bottle interface, and the implementation of the musicBottles installation, in which the opening of each bottle releases the sound of a specific instrument.

Sensetable: A Wireless Object tracking platform for tangible user interfaces.

James Patten. Sensetable: A Wireless Object tracking platform for tangible user interfaces. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

Tangible Interfaces for Interactive Point-of-View Narratives

Alexandra Mazalek. Tangible Interfaces for Interactive Point-of-View Narratives. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

Telling Tales: A new way to encourage written literacy through oral language

Mike Ananny. Telling Tales: A new way to encourage written literacy through oral language. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2002.

genieBottles: An Interactive Narrative in Bottles

A. Mazalek, A. Wood, and H. Ishii. Geniebottles: An interactive narrative in bottles. In Conference Abstracts and Applications SIGGRAPH 2001, page 189, Los Angeles, California USA, August 2001.

LumiTouch: An Emotional Communication Device

Chang, A., Resner, B., Koerner, B., Wang, X., and Ishii, H. 2001. LumiTouch: an emotional communication device. In CHI '01 Extended Abstracts on Human Factors in Computing Systems (Seattle, Washington, March 31 - April 05, 2001). CHI '01. ACM, New York, NY, 313-314.

DOI: http://doi.acm.org/10.1145/634067.634252
We present the LumiTouch system, consisting of a pair of interactive picture frames. When one user touches her picture frame, the other picture frame lights up. This touch is translated to light over an Internet connection. We introduce a semi-ambient display that can transition seamlessly from periphery to foreground, in addition to communicating emotional content. In addition to enhancing the communication between loved ones, people can use LumiTouch to develop a personal emotional language. Based upon prior work on telepresence and tangible interfaces, LumiTouch explores emotional communication in tangible form. This paper describes the components, interactions, implementation and design approach of the LumiTouch system.

The HomeBox: A Web Content Creation Tool for The Developing World

Piper, B. and Hwang, R. E. 2001. The HomeBox: a web content creation tool for the developing world. In CHI '01 Extended Abstracts on Human Factors in Computing Systems (Seattle, Washington, March 31 - April 05, 2001). CHI '01. ACM, New York, NY, 145-146.

DOI: http://doi.acm.org/10.1145/634067.634156
This paper describes the implementation and testing of the HomeBox, a prototype that seeks to provide a cost-effective and scalable means for allowing users in the developing world to publish on the Web. It identifies the key requirements for such a design by drawing lessons from a variety of sources, including two studies of networked community projects in Africa and South America. It ends with a discussion of possible design developments and plans for field trials in the Dominican Republic.

Strata/ICC: Physical Models as Computational Interfaces

Ullmer, B., Kim, E., Kilian, A., Gray, S., and Ishii, H. 2001. Strata/ICC: physical models as computational interfaces. In CHI '01 Extended Abstracts on Human Factors in Computing Systems (Seattle, Washington, March 31 - April 05, 2001). CHI '01. ACM, New York, NY, 373-374.

DOI: http://doi.acm.org/10.1145/634067.634287
We present Strata/ICC: a computationally-augmented physical model of a 54-story skyscraper that serves as an interactive display of electricity consumption, water consumption, network utilization, and other kinds of infrastructure. Our approach pushes information visualizations into the physical world, with a vision of transforming large-scale physical models into new kinds of interaction workspaces.

Designing Touch-based Communication Devices

Chang, A., Kanji, Z., Ishii, H. 2001. Designing Touch-based Communication Devices. CHI 2001 Workshop: Universal design: Towards universal access in the Information Society

DataTiles: A Modular Platform for Mixed Physical and Graphical Interactions

Rekimoto, J., Ullmer, B., and Oba, H. 2001. DataTiles: a modular platform for mixed physical and graphical interactions. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington, United States). CHI '01. ACM, New York, NY, 269-276.

DOI: http://doi.acm.org/10.1145/365024.365115
The DataTiles system integrates the benefits of two major interaction paradigms: graphical and physical user interfaces. Tagged transparent tiles are used as modular construction units. These tiles are augmented by dynamic graphical information when they are placed on a sensor-enhanced flat panel display. They can be used independently or can be combined into more complex configurations, similar to the way language can express complex concepts through a sequence of simple words. In this paper, we discuss our design principles for mixing physical and graphical interface techniques, and describe the system architecture and example applications of the DataTiles system.

Sensetable: A Wireless Object Tracking Platform for Tangible User Interfaces

Patten, J., Ishii, H., Hines, J., and Pangaro, G. 2001. Sensetable: a wireless object tracking platform for tangible user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Seattle, Washington, United States). CHI '01. ACM, New York, NY, 253-260.

DOI: http://doi.acm.org/10.1145/365024.365112
In this paper we present a system that electromagnetically tracks the positions and orientations of multiple wireless objects on a tabletop display surface. The system offers two types of improvements over existing tracking approaches such as computer vision. First, the system tracks objects quickly and accurately without susceptibility to occlusion or changes in lighting conditions. Second, the tracked objects have state that can be modified by attaching physical dials and modifiers. The system can detect these changes in real-time. We present several new interaction techniques developed in the context of this system. Finally, we present two applications of the system: chemistry and system dynamics simulation.
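
As a reading aid, here is a small Python model of the event flow the abstract implies: tagged pucks report position, orientation, and attached-dial state on each sensing cycle. The API below is a hypothetical illustration, not the Sensetable software.

    from dataclasses import dataclass, field

    @dataclass
    class Puck:
        tag_id: int
        x: float = 0.0          # tabletop position, cm
        y: float = 0.0
        theta: float = 0.0      # orientation, degrees
        dials: dict = field(default_factory=dict)   # attached physical modifiers

    def on_update(puck):
        # An application would re-bind the puck's digital state here, e.g.
        # scaling a simulation parameter by an attached dial's value.
        gain = puck.dials.get("gain", 1.0)
        print(f"puck {puck.tag_id} at ({puck.x:.1f}, {puck.y:.1f}), gain={gain}")

    pucks = {7: Puck(tag_id=7)}
    pucks[7].x, pucks[7].y = 12.5, 30.0
    pucks[7].dials["gain"] = 0.5
    on_update(pucks[7])   # the tracker would call this every sensing cycle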

GeoSCAPE: designing a reconstructive tool for field archaeological excavation

Jay Lee, Hiroshi Ishii, Blair Dunn, Victor Su, and Sandia Ren. 2001. GeoSCAPE: designing a reconstructive tool for field archaeological excavation. In CHI '01 extended abstracts on Human factors in computing systems (CHI EA '01). ACM, New York, NY, USA, 35-36.

DOI: http://dx.doi.org/10.1145/634067.634093
We introduce GeoSCAPE, a "reconstructive" tool for capturing measurement data in field archaeology and facilitating a 3D visualization of an excavation rendered in computer graphics. The project extends HandSCAPE, a recently developed orientation-aware digital measuring tape that has been shown to bridge measuring and modeling efficiently for on-site application areas [2]. In this paper, we present the GeoSCAPE system, which uses the same digital tape measure to drive archaeology-specific 3D visualizations. The goal is to provide visual reconstruction methods by acquiring accurate field measurements and visualizing the complex work of an archaeologist during the course of an on-site excavation.

2000

Emerging Frameworks for Tangible User Interfaces

Ullmer, B. and Ishii, H. 2000. Emerging frameworks for tangible user interfaces. IBM Syst. J. 39, 3-4 (Jul. 2000), 915-931.

We present steps toward a conceptual framework for tangible user interfaces. We introduce the MCRpd interaction model for tangible interfaces, which relates the role of physical and digital representations, physical control, and underlying digital models. This model serves as a foundation for identifying and discussing several key characteristics of tangible user interfaces. We identify a number of systems exhibiting these characteristics, and situate these within 12 application domains. Finally, we discuss tangible interfaces in the context of related research themes, both within and outside of the human-computer interaction domain.

A Comparison of Spatial Organization Strategies in Graphical and Tangible User Interfaces

Patten, J. and Ishii, H. 2000. A comparison of spatial organization strategies in graphical and tangible user interfaces. In Proceedings of DARE 2000 on Designing Augmented Reality Environments (Elsinore, Denmark). DARE '00. ACM, New York, NY, 41-50.

DOI: http://doi.acm.org/10.1145/354666.354671
We present a study comparing how people use space in a Tangible User Interface (TUI) and in a Graphical User Interface (GUI). We asked subjects to read ten summaries of recent news articles and to think about the relationships between them. In our TUI condition, we bound each of the summaries to one of ten visually identical wooden blocks. In our GUI condition, each summary was represented by an icon on the screen. We asked subjects to indicate the location of each summary by pointing to the corresponding icon or wooden block. Afterward, we interviewed them about the strategies they used to position the blocks or icons during the task. We observed that TUI subjects performed better at the location recall task than GUI subjects. In addition, some TUI subjects used the spatial relationship between specific blocks and parts of the environment to help them remember the content of those blocks, while GUI subjects did not do this. Those TUI subjects who reported encoding information using this strategy tended to perform better at the recall task than those who did not.

curlybot: Designing a New Class of Computational Toys

Frei, P., Su, V., Mikhak, B., and Ishii, H. 2000. curlybot: designing a new class of computational toys. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (The Hague, The Netherlands, April 01 - 06, 2000). CHI '00. ACM, New York, NY, 129-136.

DOI: http://doi.acm.org/10.1145/332040.332416
We introduce an educational toy, called curlybot, as the basis for a new class of toys aimed at children in their early stages of development – ages four and up. curlybot is an autonomous two-wheeled vehicle with embedded electronics that can record how it has been moved on any flat surface and then play back that motion accurately and repeatedly. Children can use curlybot to develop intuitions for advanced mathematical and computational concepts, like differential geometry, through play away from a traditional computer.
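
The record-then-replay behavior can be pictured as a simple sampling loop. The Python sketch below uses stand-in functions for curlybot's motors and encoders, which the paper does not expose as an API; it illustrates the idea rather than the toy's firmware.

    import time

    def record(read_wheel_speeds, duration_s=5.0, dt=0.02):
        # Sample (left, right) wheel speeds at a fixed rate while the toy
        # is moved by hand; the list of samples is the recorded gesture.
        samples = []
        for _ in range(int(duration_s / dt)):
            samples.append(read_wheel_speeds())
            time.sleep(dt)
        return samples

    def replay(samples, set_wheel_speeds, dt=0.02):
        # Drive the motors through the same samples, reproducing the gesture.
        for left, right in samples:
            set_wheel_speeds(left, right)
            time.sleep(dt)

    # Stand-ins for the embedded encoder and motor interfaces:
    trajectory = record(lambda: (0.10, 0.12), duration_s=0.1)
    replay(trajectory, lambda l, r: print(f"L={l} R={r}"))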

HandSCAPE: a vectorizing tape measure for on-site measuring applications

Jay Lee, Victor Su, Sandia Ren, and Hiroshi Ishii. 2000. HandSCAPE: a vectorizing tape measure for on-site measuring applications. In Proceedings of the SIGCHI conference on Human factors in computing systems (CHI '00). ACM, New York, NY, USA, 137-144.

DOI: http://dx.doi.org/10.1145/332040.332417
We introduce HandSCAPE, an orientation-aware digital tape measure, as an input device for digitizing field measurements and visualizing the volume of the resulting vectors with computer graphics. Using embedded orientation-sensing hardware, HandSCAPE captures a relevant vector for each linear measurement and transmits this data wirelessly to a remote computer in real-time. To guide us in design, we have closely studied the intended users, their tasks, and the physical workplaces to extract real-world needs. In this paper, we first describe the potential utility of HandSCAPE for three on-site application areas: archeological surveys, interior design, and storage space allocation. We then describe the overall system, which includes orientation sensing, vector calculation, and primitive modeling. We conclude with exploratory usage results, interface design issues, and future developments.
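
The vectorizing step amounts to combining the tape's length reading with orientation angles. A minimal Python sketch, assuming a pitch/heading convention that the paper does not spell out:

    import math

    def measurement_to_vector(length_cm, pitch_deg, heading_deg):
        # Spherical-to-Cartesian conversion: pitch as elevation above the
        # horizontal plane, heading as the compass direction of the tape.
        pitch = math.radians(pitch_deg)
        heading = math.radians(heading_deg)
        dx = length_cm * math.cos(pitch) * math.cos(heading)
        dy = length_cm * math.cos(pitch) * math.sin(heading)
        dz = length_cm * math.sin(pitch)
        return (dx, dy, dz)

    # A 100 cm pull at 30 degrees elevation, heading 90 degrees:
    print(measurement_to_vector(100.0, 30.0, 90.0))   # ~(0.0, 86.6, 50.0)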

1999

musicBottles

musicBottles introduces a tangible interface that deploys bottles as containers and controls for digital information. The system consists of a specially designed table and three corked bottles that "contain" the sounds of the violin, the cello, and the piano in Édouard Lalo's Piano Trio in C Minor, Op. 7. Custom-designed electromagnetic tags embedded in the bottles enable each one to be wirelessly identified. When a bottle is placed onto the stage area of the table and the cork is removed, the corresponding instrument becomes audible. A pattern of colored light is rear-projected onto the table's translucent surface to reflect changes in pitch and volume. The interface allows users to structure the experience of the musical composition by physically manipulating the different sound tracks.

TouchCounters: designing interactive electronic labels for physical containers

Paul Yarin and Hiroshi Ishii. 1999. TouchCounters: designing interactive electronic labels for physical containers. In Proceedings of the SIGCHI conference on Human Factors in Computing Systems (CHI '99). ACM, New York, NY, USA, 362-369.

DOI: http://dx.doi.org/10.1145/302979.303110
We present TouchCounters, an integrated system of electronic modules, physical storage containers, and shelving surfaces for the support of collaborative physical work. Through physical sensors and local displays, TouchCounters record and display usage history information upon physical storage containers, thus allowing access to this information during the performance of real-world tasks. A distributed communications network allows this data to be exchanged with a server, such that users can access this information from remote locations as well. Based upon prior work in ubiquitous computing and tangible interfaces, TouchCounters incorporate new techniques, including usage history tracking for physical objects and multi-display visualization. This paper describes the components, interactions, implementation, and conceptual approach of the TouchCounters system.

The Design of Personal Ambient Displays

Craig Wisneski. The Design of Personal Ambient Displays. Thesis (S.M.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1999.

The Design and Implementation of inTouch: A Distributed, Haptic Communication System

Victor Su. The Design and Implementation of inTouch: A Distributed, Haptic Communication System. Thesis (M. Eng. and S.B.)--Massachusetts Institute of Technology, Dept. of Electrical Engineering and Computer Science, 1999.

The I/O Bulb and the Luminous Room

John Underkoffler, The I/O Bulb and the Luminous Room, Thesis (Ph.D.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts & Sciences, 1999.

Emancipated Pixels: Real-World Graphics in the Luminous Room

Underkoffler, J., Ullmer, B., and Ishii, H. 1999. Emancipated pixels: real-world graphics in the luminous room. In Proceedings of the 26th Annual Conference on Computer Graphics and Interactive Techniques (SIGGRAPH '99). ACM Press/Addison-Wesley Publishing Co., New York, NY, 385-392.

DOI: http://doi.acm.org/10.1145/311535.311593
We describe a conceptual infrastructure, the Luminous Room, for providing graphical display and interaction at each of an interior architectural space's various surfaces, arguing that pervasive environmental output and input is one natural heir to today's rather more limited notion of spatially-confined, output-only display (the CRT). We discuss the requirements of such real-world graphics, including computational & networking demands; schemes for spatially omnipresent capture and display; and issues of design and interaction that emerge under these new circumstances. These discussions are both illustrated and motivated by five particular applications that have been built for a real, experimental Luminous Room space, and by details of the current technical approach to its construction (involving a two-way optical transducer called an I/O Bulb that projects and captures pixels).

Urp: A Luminous-Tangible Workbench for Urban Planning and Design

Underkoffler, J. and Ishii, H. 1999. Urp: a luminous-tangible workbench for urban planning and design. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems: the CHI Is the Limit (Pittsburgh, Pennsylvania, United States, May 15 - 20, 1999). CHI '99. ACM, New York, NY, 386-393.

DOI: http://doi.acm.org/10.1145/302979.303114
We introduce a system for urban planning, called Urp, that integrates functions addressing a broad range of the field's concerns into a single, physically based workbench setting. The I/O Bulb infrastructure on which the application is based allows physical architectural models placed on an ordinary table surface to cast shadows accurate for arbitrary times of day; to throw reflections off glass facade surfaces; to affect a real-time and visually coincident simulation of pedestrian-level windflow; and so on. We then use comparisons among Urp and several earlier I/O Bulb applications as the basis for an understanding of luminous-tangible interactions, which result whenever an interface distributes meaning and functionality between physical objects and visual information projectively coupled to those objects. Finally, we briefly discuss two issues common to all such systems, offering them as informal thought-tools for the design and analysis of luminous-tangible interfaces.
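
The time-of-day shadow behavior reduces to basic solar geometry on flat ground. A worked example in Python, as a simplification for illustration rather than Urp's actual rendering pipeline:

    import math

    def shadow_length(building_height_m, sun_elevation_deg):
        # Length of the shadow a building of the given height casts on
        # flat ground when the sun sits at the given elevation angle.
        return building_height_m / math.tan(math.radians(sun_elevation_deg))

    # A 40 m building in late afternoon, sun 15 degrees above the horizon:
    print(f"{shadow_length(40.0, 15.0):.1f} m")   # ~149.3 m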

Towards the Distributed Visualization of Usage History

Paul Yarin. Towards the Distributed Visualization of Usage History. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1999.

Curlybot

Phil Frei, Victor Su, and Hiroshi Ishii. 1999. Curlybot. In ACM SIGGRAPH 99 Conference abstracts and applications (SIGGRAPH '99). ACM, New York, NY, USA, 173.

DOI: http://dx.doi.org/10.1145/311625.311972

PingPongPlus: design of an athletic-tangible interface for computer-supported cooperative play

Hiroshi Ishii, Craig Wisneski, Julian Orbanes, Ben Chun, and Joe Paradiso. 1999. PingPongPlus: design of an athletic-tangible interface for computer-supported cooperative play. In Proceedings of the SIGCHI conference on Human factors in computing systems: the CHI is the limit (CHI '99). ACM, New York, NY, USA, 394-401.

DOI: http://doi.acm.org/10.1145/302979.303115

1998

The Last Farewell: Traces of Physical Presence

Ishii, H. 1998. Reflections: “The last farewell”: traces of physical presence. interactions 5, 4 (Jul. 1998), 56-ff.

DOI: http://doi.acm.org/10.1145/278465.278474
In the Spring of 1995, I was finally able to realize a dream that I'd held for quite a number of years; I was able to visit Hanamaki village, the home of the famous author Miyazawa Kenji. Before leaving Japan, I had wanted to see Kenji's "World of Efertobe" once with my own eyes.

Designing Kinetic Objects for Digital Information Display

Andy Dahley. Designing Kinetic Objects for Digital Information Display. Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 1998.

We have access to more and more information from computer networks. However, the means of monitoring this changing information is limited by its access through the narrow window of a computer screen. The interactions between people and digital information are now almost entirely confined to the conventional GUI (Graphical User Interface) comprised of a keyboard, monitor, and mouse, largely ignoring the richness of the physical world. As a critical step in moving beyond current interface limitations, this research attempts to use many parts of our environment to convey information in a variety of ways. Rather than adding more video terminals into an environment, this thesis examines how to move information off the screen into our physical environment, where it is manifested in a more physical and kinetic manner. The thesis explores how these kinetic objects can be used to display information on a more visceral cognitive level than afforded by the interfaces of generalized information appliances like the computer. The approach in this thesis is through several exploratory design studies. A geography of the design space of kinetic objects as digital information displays was developed through this series of design studies so that it can be used in the development of future kinetic displays.

Beyond Input Devices: A New Conceptual Framework for the Design of Physical-Digital Objects

Matthew Gorbet. Beyond Input Devices: A New Conceptual Framework for the Design of Physical-Digital Objects. Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998.

DOI: http://hdl.handle.net/1721.1/29138
This work introduces the concept of physical-digital objects: physical objects which allow people to interact with digital information as though it were tangible. I treat the design of physical-digital objects as a new field, and establish a conceptual framework within which to approach this design task. My design strategy treats objects as having both a physical and a digital identity, related to one another by three design principles: coupling, transparency, and mapping. With these principles as a guide, designers can take advantage of emerging digital technologies to create entirely new physical-digital objects. This new design perspective encourages a conceptual shift away from discrete input and output devices as gateways to a digital world, and towards a more seamless interaction with information, enabled by our knowledge and understanding of the physical world. I illustrate this by introducing and discussing seven actual physical-digital object systems, including two which I developed: Bottles and Triangles.

Tangible Interfaces for Remote Communication and Collaboration

Scott Brave. Tangible Interfaces for Remote Communication and Collaboration. Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1998.

This thesis presents inTouch, a new device enabling long distance communication through touch. inTouch is based on a concept called Synchronized Distributed Physical Objects, which employs telemanipulation technology to create the illusion that distant users are interacting with a shared physical object. I discuss the design and prototype implementation of inTouch, along with control strategies for extending the physical link over an arbitrary distance. User reactions to the prototype system suggest many similarities to direct touch interactions while, at the same time, point to new possibilities for object-mediated touch communication. I also present two initial experiments that begin to explore more objective properties of the haptic communication channel provided by inTouch and develop analysis techniques for future investigations.

mediaBlocks: Physical Containers,Transports, and Controls for Online Media

Ullmer, B., Ishii, H., and Glas, D. 1998. mediaBlocks: physical containers, transports, and controls for online media. In Proceedings of the 25th Annual Conference on Computer Graphics and interactive Techniques SIGGRAPH '98. ACM, New York, NY, 379-386.

DOI: http://doi.acm.org/10.1145/280814.280940
We present a tangible user interface based upon mediaBlocks: small, electronically tagged wooden blocks that serve as physical icons ("phicons") for the containment, transport, and manipulation of online media. MediaBlocks interface with media input and output devices such as video cameras and projectors, allowing digital media to be rapidly "copied" from a media source and "pasted" into a media display. MediaBlocks are also compatible with traditional GUIs, providing seamless gateways between tangible and graphical interfaces. Finally, mediaBlocks act as physical "controls" in tangible interfaces for tasks such as sequencing collections of media elements.
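
The copy-and-paste behavior is easiest to see as a registry keyed by block tags: the block carries only an identity, while the media stays online. A hypothetical Python sketch of that data model, with all names invented for illustration:

    # media registry: block tag -> list of media URLs; the wooden block
    # itself carries only the tag, never the bits.
    media_store = {}

    def copy_to_block(block_tag, source_media):
        # "Copying" into a block binds the media list to the block's tag.
        media_store[block_tag] = list(source_media)

    def paste_from_block(block_tag):
        # "Pasting" at a display device retrieves the media bound to the tag.
        return media_store.get(block_tag, [])

    copy_to_block("block-07", ["http://example.org/clip1.mov"])  # at the camera
    print(paste_from_block("block-07"))                          # at the projector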

Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information

Wisneski, C., Ishii, H., Dahley, A., Gorbet, M., Brave, S., Ullmer, B., Yarin, P. Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information. CoBuild 1998.

Tangible Interfaces for Remote Collaboration and Communication

Scott Brave, Hiroshi Ishii, and Andrew Dahley. 1998. Tangible interfaces for remote collaboration and communication. In Proceedings of the 1998 ACM conference on Computer supported cooperative work (CSCW '98). ACM, New York, NY, USA, 169-178.

DOI: http://dl.acm.org/citation.cfm?doid=289444.289491
Current systems for real-time distributed CSCW are largely rooted in traditional GUI-based groupware and voice/video conferencing methodologies. In these approaches, interactions are limited to visual and auditory media, and shared environments are confined to the digital world. This paper presents a new approach to enhance remote collaboration and communication, based on the idea of Tangible Interfaces, which places a greater emphasis on touch and physicality. The approach is grounded in a concept called Synchronized Distributed Physical Objects, which employs telemanipulation technology to create the illusion that distant users are interacting with shared physical objects. We describe two applications of this approach: PSyBench, a physical shared workspace, and inTouch, a device for haptic interpersonal communication.

ambientROOM: Integrating Ambient Media with Architectural Space

Ishii, H., Wisneski, C., Brave, S., Dahley, A., Gorbet, M., Ullmer, B., and Yarin, P. 1998. ambientROOM: integrating ambient media with architectural space. In CHI 98 Conference Summary on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 - 23, 1998). CHI '98. ACM, New York, NY, 173-174.

DOI: http://doi.acm.org/10.1145/286498.286652
We envision that the physical architectural space we inhabit will be a new form of interface between humans and digital information. This paper and video present the design of the ambientROOM, an interface to information for processing in the background of awareness. This information is displayed through various subtle displays of light, sound, and movement. Physical objects are also employed as controls for these "ambient media".

Illuminating Light: An Optical Design Tool with a Luminous-Tangible Interface

Underkoffler, J. and Ishii, H. 1998. Illuminating light: an optical design tool with a luminous-tangible interface. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 - 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 542-549.

DOI: http://doi.acm.org/10.1145/274644.274717
We describe a novel system for rapid prototyping of laser-based optical and holographic layouts. Users of this optical prototyping tool - called the Illuminating Light system - move physical representations of various optical elements about a workspace, while the system tracks these components and projects back onto the workspace surface the simulated propagation of laser light through the evolving layout. This application is built atop the Luminous Room infrastructure, an aggregate of interlinked, computer-controlled projector-camera units called I/O Bulbs. Philosophically, the work embodies the emerging ideas of the Luminous Room and builds on the notions of 'graspable media'. We briefly introduce the I/O Bulb and Luminous Room concepts and discuss their current implementations. After an overview of the optical domain that the Illuminating Light system is designed to address, we present the overall system design and implementation, including that of an intermediary toolkit called voodoo which provides a general facility for object identification and tracking.

Triangles: Tangible Interface for Manipulation and Exploration of Digital Information Topography

Gorbet, M. G., Orth, M., and Ishii, H. 1998. Triangles: tangible interface for manipulation and exploration of digital information topography. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Los Angeles, California, United States, April 18 - 23, 1998). C. Karat, A. Lund, J. Coutaz, and J. Karat, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 49-56.

DOI: http://doi.acm.org/10.1145/274644.274652
This paper presents a system for interacting with digital information, called Triangles. The Triangles system is a physical/digital construction kit, which allows users to use two hands to grasp and manipulate complex digital information. The kit consists of a set of identical flat, plastic triangles, each with a microprocessor inside and magnetic edge connectors. The connectors enable the Triangles to be physically connected to each other and provide tactile feedback of these connections. The connectors also pass electricity, allowing the Triangles to communicate digital information to each other and to a desktop computer. When the pieces contact one another, specific connection information is sent back to a computer that keeps track of the configuration of the system.
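
One plausible way to hold the configuration the host computer receives is an adjacency structure keyed by tile and edge. The Python sketch below illustrates that idea; it is not the Triangles firmware or host software.

    from collections import defaultdict

    # connection graph: tile id -> set of (own edge, other tile, other edge);
    # each triangular tile has edges numbered 0, 1, 2.
    connections = defaultdict(set)

    def on_connect(tile_a, edge_a, tile_b, edge_b):
        connections[tile_a].add((edge_a, tile_b, edge_b))
        connections[tile_b].add((edge_b, tile_a, edge_a))

    on_connect(1, 0, 2, 2)    # tile 1, edge 0 snapped onto tile 2, edge 2
    on_connect(2, 0, 3, 1)
    print(dict(connections))  # the host now knows the exact configuration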

1997

Email from Dr. Mark Weiser in January 1997

[A treasured letter] In January 1997, I received an email from the late Dr. Mark Weiser. That email became my greatest treasure in life. Dr. Mark Weiser and Prof. Jim Hollan reviewed my and Brygg Ullmer's Tangible Bits paper submitted to CHI '97. Our CHI '97 paper was rescued from the brink of rejection by Mark and Jim. I cannot thank Mark and Jim enough, since Tangible Bits / Tangible UI (TUI) and I myself could not exist in the HCI community today without their encouragement in 1997. https://bit.ly/2GssFuu http://tangible.media.mit.edu/vision/

The metaDESK: Models and Prototypes for Tangible User Interfaces

Ullmer, B. and Ishii, H. 1997. The metaDESK: models and prototypes for tangible user interfaces. In Proceedings of the 10th Annual ACM Symposium on User interface Software and Technology (Banff, Alberta, Canada, October 14 - 17, 1997). UIST '97. ACM, New York, NY, 223-232.

DOI: http://doi.acm.org/10.1145/263407.263551
The metaDESK is our first platform for exploring the design of tangible user interfaces. The metaDESK integrates multiple 2D and 3D graphic displays with an assortment of physical objects and instruments, sensed by an array of optical, mechanical, and electromagnetic field sensors. The metaDESK "brings to life" these physical objects and instruments as tangible interfaces to a range of graphically-intensive applications. Using the metaDESK platform, we are studying issues such as a) the physical embodiment of GUI (graphical user interface) widgets such as icons, handles, and windows; b) the coupling of everyday physical objects with the digital information that pertains to them.

Models and Mechanisms for Tangible User Interfaces

Brygg Ullmer. Models and Mechanisms for Tangible User Interfaces. Thesis (M.S.)--Massachusetts Institute of Technology, Program in Media Arts & Sciences, 1997.

Current human-computer interface design is dominated by the graphical user interface approach, where users interact with graphical abstractions of virtual interface devices through a few general-purpose input “peripherals.” The thesis develops models and mechanisms for “tangible user interfaces” – user interfaces which use physical objects, instruments, surfaces, and spaces as physical interfaces to digital information. Prototype applications on three platforms – the metaDESK, transBOARD, and ambientROOM – are introduced as examples of this approach. These instances are used to generalize the “GUI widgetry,” “optical,” and “containers and conduits” interface metaphors. The thesis also develops engineering mechanisms called proxy-distributed or “proxdist” computation, which provide a layered approach for integrating physical objects with diverse sensing, display, communication, and computation capabilities into coherent interface implementations. The combined research provides a vehicle for moving beyond the keyboard, monitor, and pointer of current computer interfaces towards use of the physical world itself as a kind of computationally-augmented interface.

Triangles: Design of a Physical/Digital Construction Kit

Gorbet, M. G. and Orth, M. 1997. Triangles: design of a physical/digital construction kit. In Proceedings of the 2nd Conference on Designing interactive Systems: Processes, Practices, Methods, and Techniques (Amsterdam, The Netherlands, August 18 - 20, 1997). S. Coles, Ed. DIS '97. ACM, New York, NY, 125-128.

DOI: http://doi.acm.org/10.1145/263552.263592
This paper describes the design process and philosophy behind Triangles, a new physical computer interface in the form of a construction kit of identical, flat, plastic triangles. The triangles connect together both mechanically and electrically with magnetic, conducting connectors. When the pieces contact one another, information about the specific connection is passed through the conducting connectors to the computer. In this way, users can create both two- and three-dimensional objects whose exact configuration is known by the computer. The physical connection of any two Triangles can also trigger specific events in the computer, creating a simple but powerful means for physically interacting with digital information. This paper will describe the Triangles system, its advantages and applications. It will also highlight the importance of collaborative and multidisciplinary design teams in the creation of new digital objects that bridge electrical engineering, industrial design, and software design, such as the Triangles.

inTouch: A Medium for Haptic Interpersonal Communication

Brave, S. and Dahley, A. 1997. inTouch: a medium for haptic interpersonal communication. In CHI '97 Extended Abstracts on Human Factors in Computing Systems: Looking To the Future (Atlanta, Georgia, March 22 - 27, 1997). CHI '97. ACM, New York, NY, 363-364.

DOI: http://doi.acm.org/10.1145/1120212.1120435
In this paper, we introduce a new approach for applying haptic feedback technology to interpersonal communication. We present the design of our prototype inTouch system which provides a physical link between users separated by distance.

Tangible Bits: Towards Seamless Interfaces between People, Bits, and Atoms

Ishii, H. and Ullmer, B. 1997. Tangible bits: towards seamless interfaces between people, bits and atoms. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Atlanta, Georgia, United States, March 22 - 27, 1997). S. Pemberton, Ed. CHI '97. ACM, New York, NY, 234-241.

DOI: http://doi.acm.org/10.1145/258549.258715
This paper presents our vision of Human Computer Interaction (HCI): "Tangible Bits." Tangible Bits allows users to "grasp & manipulate" bits in the center of users’ attention by coupling the bits with everyday physical objects and architectural surfaces. Tangible Bits also enables users to be aware of background bits at the periphery of human perception using ambient display media such as light, sound, airflow, and water movement in an augmented space. The goal of Tangible Bits is to bridge the gaps between both cyberspace and the physical environment, as well as the foreground and background of human activities. This paper describes three key concepts of Tangible Bits: interactive surfaces; the coupling of bits with graspable physical objects; and ambient media for background awareness. We illustrate these concepts with three prototype systems – the metaDESK, transBOARD and ambientROOM – to identify underlying research issues.

1995

Bricks: Laying the Foundations for Graspable User Interfaces

Fitzmaurice, G. W., Ishii, H., and Buxton, W. A. 1995. Bricks: laying the foundations for graspable user interfaces. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Denver, Colorado, United States, May 07 - 11, 1995). I. R. Katz, R. Mack, L. Marks, M. B. Rosson, and J. Nielsen, Eds. Conference on Human Factors in Computing Systems. ACM Press/Addison-Wesley Publishing Co., New York, NY, 442-449

DOI: http://doi.acm.org/10.1145/223904.223964
We introduce the concept of Graspable User Interfaces, which allow direct control of electronic or virtual objects through physical handles. These physical artifacts, which we call "bricks," are essentially new input devices that can be tightly coupled or "attached" to virtual objects for manipulation or for expressing action (e.g., to set parameters or to initiate processes). Our bricks operate on top of a large horizontal display surface known as the "ActiveDesk." We present four stages in the development of Graspable UIs: (1) a series of exploratory studies on hand gestures and grasping; (2) interaction simulations using mock-ups and rapid prototyping tools; (3) a working prototype and sample application called GraspDraw; and (4) the initial integration of the Graspable UI concepts into a commercial application. Finally, we conclude by presenting a design space for bricks which lays the foundation for further exploring and developing Graspable User Interfaces.
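
The two-handed manipulation that bricks afford has a compact geometric core: two tracked points before and after a move determine the translation, rotation, and scale applied to the attached virtual object. A small Python sketch of that standard geometry, not code from the paper:

    import math

    def two_point_transform(p1, p2, q1, q2):
        # Given a brick pair before (p1, p2) and after (q1, q2) a move,
        # return the implied (translation, rotation in degrees, scale).
        def delta(a, b):
            return (b[0] - a[0], b[1] - a[1])
        before, after = delta(p1, p2), delta(q1, q2)
        scale = math.hypot(*after) / math.hypot(*before)
        rotation = math.degrees(math.atan2(after[1], after[0])
                                - math.atan2(before[1], before[0]))
        translation = delta(p1, q1)
        return translation, rotation, scale

    # The bricks move apart and turn: the object doubles in size and rotates.
    print(two_point_transform((0, 0), (1, 0), (0, 0), (0, 2)))
    # -> ((0, 0), 90.0, 2.0)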

1994

Iterative Design of Seamless Collaboration Media

Hiroshi Ishii, Minoru Kobayashi, and Kazuho Arita. 1994. Iterative design of seamless collaboration media. Commun. ACM 37, 8 (August 1994), 83-97.

DOI: http://doi.acm.org/10.1145/179606.179687

1993

Integration of interpersonal space and shared workspace: ClearBoard design and experiments

Ishii, H., Kobayashi, M., and Grudin, J. 1993. Integration of interpersonal space and shared workspace: ClearBoard design and experiments. ACM Trans. Inf. Syst. 11, 4 (Oct. 1993), 349-375.

DOI: http://doi.acm.org/10.1145/159764.159762

1992

ClearBoard: a seamless medium for shared drawing and conversation with eye contact

Ishii, H. and Kobayashi, M. 1992. ClearBoard: a seamless medium for shared drawing and conversation with eye contact. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (Monterey, California, United States, May 03 - 07, 1992). P. Bauersfeld, J. Bennett, and G. Lynch, Eds. CHI '92. ACM, New York, NY, 525-532.

DOI: http://doi.acm.org/10.1145/142750.142977
This paper introduces a novel shared drawing medium called ClearBoard. It realizes (1) a seamless shared drawing space and (2) eye contact to support realtime and remote collaboration by two users. We devised the key metaphor: "talking through and drawing on a transparent glass window" to design ClearBoard. A prototype of ClearBoard is implemented based on the "Drafter-Mirror" architecture. This paper first reviews previous work on shared drawing support to clarify the design goals. We then examine three metaphors that fulfill these goals. The design requirements and the two possible system architectures of ClearBoard are described. Finally, some findings gained through the experimental use of the prototype, including the feature of "gaze awareness", are discussed.

Integration of inter-personal space and shared workspace: ClearBoard design and experiments

Ishii, H., Kobayashi, M., and Grudin, J. 1992. Integration of inter-personal space and shared workspace: ClearBoard design and experiments. In Proceedings of the 1992 ACM Conference on Computer-Supported Cooperative Work (Toronto, Ontario, Canada, November 01 - 04, 1992). CSCW '92. ACM, New York, NY, 33-42.

DOI: http://doi.acm.org/10.1145/143457.143459
This paper describes the evolution of a novel shared drawing medium that permits co-workers in two different locations to draw with color markers or with electronic pens and software tools while maintaining direct eye contact and the ability to employ natural gestures. We describe the evolution from ClearBoard-1 (based on a video drawing technique) to ClearBoard-2 (which incorporates TeamPaint, a multi-user paint editor). Initial observations based on use and experimentation are reported. Further experiments are conducted with ClearBoard-0 (a simple mockup), with ClearBoard-1, and with an actual desktop as a control. These experiments verify the increase of eye contact and awareness of collaborator's gaze direction in ClearBoard environments where workspace and co-worker images compete for attention.

1991

Toward An Open Shared Workspace: Computer and Video Fusion Approach of TeamWorkStation

Hiroshi Ishii and Naomi Miyake. 1991. Toward an open shared workspace: computer and video fusion approach of TeamWorkStation. Commun. ACM 34, 12 (December 1991), 37-50.

DOI: http://doi.acm.org/10.1145/125319.125321
Groupware is intended to create a shared workspace that supports dynamic collaboration in a work group across space and time constraints. To gain the collective benefits of groupware use, the groupware must be accepted by a majority of workgroup members as a common tool. Groupware must overcome the hurdle of critical mass.

1990

TeamWorkStation: towards a seamless shared workspace

H. Ishii. 1990. TeamWorkStation: towards a seamless shared workspace. In Proceedings of the 1990 ACM conference on Computer-supported cooperative work (CSCW '90). ACM, New York, NY, USA, 13-26.

DOI: http://doi.acm.org/10.1145/99332.99337
This paper introduces TeamWorkStation (TWS), a new desktop real-time shared workspace characterized by reduced cognitive seams. TWS integrates two existing kinds of individual workspaces, computers and desktops, to create a virtual shared workspace. The key ideas are the overlay of individual workspace images in a virtual shared workspace and the creation of a shared drawing surface. Because each co-worker can continue to use his/her favorite application programs or manual tools in the virtual shared workspace, the cognitive discontinuity (seam) between the individual and shared workspaces is greatly reduced, and users can shuttle smoothly between these two workspaces. This paper discusses where the seams exist in the current CSCW environment to clarify the issue of shared workspace design. The new technique of fusing individual workspaces is introduced. The application of TWS to the remote teaching of calligraphy is presented to show its potential. The prototype system is described and compared with other comparable approaches.
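
The overlay technique at the heart of TWS is translucent image mixing: both co-workers' workspace images remain visible in one shared view. The original system fused video signals in hardware; as a purely digital illustration of the same operation, a per-pixel blend can be sketched with NumPy:

    import numpy as np

    def overlay(desk_frame, screen_frame, alpha=0.5):
        # Blend two equal-sized RGB frames so marks on both stay visible.
        return (alpha * desk_frame + (1 - alpha) * screen_frame).astype(np.uint8)

    desk = np.zeros((480, 640, 3), dtype=np.uint8)       # camera view of the desktop
    comp = np.full((480, 640, 3), 255, dtype=np.uint8)   # computer screen image
    shared = overlay(desk, comp)                          # both layers at 50%
    print(shared[0, 0])                                   # -> [127 127 127]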