Research Interests

“The ultimate display would be a room within which the computer can control the existence of matter” – Ivan E. Sutherland (1965)

Vision: Programmable Matter

The cinema and entertainment industries often present speculative visions of future technologies, serving as a bridge between scientific discourse and the collective imagination. For instance, many films and television series explore the concept of transformative materials and environments capable of dynamic reconfiguration, as illustrated in Figure 1.

(a) In X-Men (2000), the surface of a round table morphs to form a miniature city model, aiding in the visualization of a collective battle strategy (gif).
(b) In The Matrix Revolutions (2003), an artificial intelligence manifests itself by assembling a swarm of robots into a large humanoid face during its interaction with the protagonist (gif).
(c) In Big Hero 6 (2014), the protagonist controls a swarm of microbots that assemble into various shapes and structures through thought alone (gif).
(d) In Avengers: Infinity War (2018), one of the protagonist’s suits, composed of nanobots, dynamically reconfigures to adapt to different combat scenarios (gif).
(e) In Man of Steel (2013), the interior of an alien ship materializes a physical key that the protagonist can subsequently remove and use (gif).
(f) In The Suicide Squad (2021), a character’s firearm physically reconfigures as components are attached or detached, enabling or disabling specific functions (gif).
Figure 1: Interpretations of Programmable Matter in Science-Fiction

In research, this speculative idea corresponds to what is known as Programmable Matter — a technological paradigm representing the convergence of computation and physical materiality, where algorithms are embedded within matter itself, enabling it to sense, compute, and reconfigure its own structure or properties. The conceptual roots of this vision can be traced back to 1965, when Sutherland envisioned a computer capable of “controlling the existence of matter” [31].

Today, this vision unites a range of research domains as shown in Figure 2 (a), including self-reconfiguring modular robotics within Robotics (e.g., [36]), smart materials and metamaterials in Materials Science (e.g., [21]), and shape-changing interfaces within Human–Computer Interaction (e.g., [24]).

(a) Venn Diagram
(b) Robotics: Roombots is a self-reconfiguring modular robot [28].
(c) Human-Computer Interaction: Elevate is a shape-changing interface [15].
(d) Materials Science: Auxetic Materials are metamaterials [4].
Figure 2: Multiple Research Fields Contributing Towards Programmable Matter

Research Field: Shape-Changing Interfaces

Nowadays, Graphical User Interfaces (GUIs) are ubiquitous in our societies, offering flexibility and interactivity through graphical elements within a limited two-dimensional (2D) space (e.g., phones, tablets, laptops). While GUIs provide a high degree of malleability, allowing users to manipulate digital content with relative ease, they are constrained by the abstraction of the 2D screen. In contrast, Tangible User Interfaces (TUIs) [13, 14] leverage physical objects as interactive elements, enabling more natural and embodied interactions. TUIs support collaborative work, facilitate embodied thinking, sustain external memory, and allow parallel actions through spatially distributed interactions [27]. By grounding interactions in the physical 3D world, TUIs leverage human cognitive, motor, and collaborative abilities that have evolved over millennia through hands-on experience with real-world objects.

However, despite these advantages, TUIs are inherently limited in terms of malleability. Unlike GUIs, the physical properties of tangible objects – such as position, orientation, shape, color, and transparency – cannot be easily or dynamically modified through software, which restricts their adaptability and versatility across interaction scenarios. Without malleability, TUIs will remain perpetually outpaced by GUIs. A key design challenge therefore emerges: developing interfaces that combine the flexibility of GUIs with the embodied richness of TUIs.

Aligned with Programmable Matter, Shape-changing interfaces (SCIs) aim to blur the boundary between physical and virtual objects by combining the physicality of TUIs with the malleability of GUIs [12, 27, 30] (see Figure 3): they are interactive devices that perform physical transformations in response to user input or system events, enabling them to convey information, meaning, or affect [1].

Figure 3: Radical Atoms [12]:
Shape-Changing Interfaces in the landscape of human-computer interaction

Purposes and Benefits

To help researchers and target user groups see clear benefits in the development of shape-changing interfaces, Alexander et al. [1] surveyed the purposes and benefits reported in the literature. They identified five fundamental purposes for SCIs, each illustrated in Figure 4:

  • Communicate Information – conveying information through combinations of visual, haptic, and shape animation (see Figure 4 (a)).
  • Simulate Objects – physically simulating real or imaginary objects (see Figure 4 (b)).
  • Adapt Interfaces – adapting shape to specific interaction contexts (see Figure 4 (c)).
  • Augment Users – augmenting elements of or within the user’s body (see Figure 4 (d)).
  • Support Arts – supporting aesthetic, sensorial, or entertainment goals (see Figure 4 (e)).
(a) Communicate Information: dynamic physical bar chart [34]
(b) Simulate Objects: physical rendering of a 3D car model [10]
(c) Adapt Interfaces: a reconfigurable game controller [26]
(d) Augment Users: a sixth finger [22]
(e) Support Arts: kinetic garments [3]
Figure 4: Purposes of Shape-Changing Interfaces

Interaction Types

Rasmussen et al. [25] identify three primary interaction types for shape-changing interfaces (see Figure 5 (a)):

  • No interaction – shape change is used solely as system output (see Figure 5 (b) and Figure 5 (c)).

  • Indirect interaction – shape change occurs in response to implicit input (see Figure 5 (d) and Figure 5 (e)).

  • Direct interaction – shape change functions as both input and output. Within direct interaction, they distinguish two subtypes:

    • Action and reaction – users physically manipulate the interface’s shape, and the system responds with its own physical transformation (see Figure 5 (f) and Figure 5 (g)).
    • Input and output – both the user and the system simultaneously manipulate the interface’s shape (see Figure 5 (h) and Figure 5 (i)).
(a) Types of Interaction for Shape-Changing Interfaces
(b) No interaction: 10,000 drones physically render a 3D model of Shenzhen (China) in a show by Damoda, September 26th, 2024 (video).
(c) No interaction: 3D robotic billboard by The Coca-Cola Company (USA) at Times Square, New York, August 8th, 2017 (video).
(d) Indirect interaction: an actuated water faucet with creature-like motion that bends when users’ daily water consumption exceeds a threshold [35].
(e) Indirect interaction: an actuated foldable table that reshapes itself to accommodate the number of nearby users [11] (video).
(f) Direct interaction (action and reaction): a shape-changing widget combining a knob and slider, allowing mode switching via a central button [17] (video).
(g) Direct interaction (action and reaction): a kinetic toy that records user-manipulated shape transformations and plays back corresponding movements [23] (video).
(h) Direct interaction (input and output): users push against a pin-based shape display that simulates chest resistance during CPR training [19] (video).
(i) Direct interaction (input and output): an auxetic surface actuated by a small number of motors simulates varying 3D curvatures as users move it across a table [29] (video).
Figure 5: Examples Illustrating Each Interaction Type

Shape-Change Properties

Rasmussen et al. [25] identify eight primary types of shape transformations: Orientation, Form, Volume, Texture, Viscosity, Spatiality, Adding/Subtracting, and Permeability. See Figure 6 for illustrations of these properties.

Figure 6: Types of shape transformations for Shape-Changing Interfaces

Architectures

To realize interfaces with physically reconfigurable geometry, researchers integrate combinations of sensors, hard or soft actuators, and smart materials into surfaces or volumes. These reconfigurable geometries can function as both input and output and are computationally controlled [1]. Various actuated architectures for SCIs have emerged in recent years as illustrated in Figure 7. Each of these architectures enables the exploration of one or more types of shape transformations. This diversity is necessary, as no single interface can realize all possible transformations, highlighting the current absence of fully programmable matter.

(a) Layers (e.g., [37])
(b) Pin Array (e.g., [34])
(c) Sparse Dots (e.g., [18])
(d) Sparse Lines (e.g., [33])
(e) Single Line (e.g., [20])
(f) Voxels (e.g., [32])
(g) Surfaces (e.g., [2])
(h) Hubs and Struts (e.g., [16])
(i) Ring Stack (e.g., [5])
(j) String Array (e.g., [9])
Figure 7: Actuated Architectures for Shape-Changing Interfaces
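
To make the idea that these architectures are computationally controlled, serving as both input and output, more concrete, the following is a minimal sketch assuming a hypothetical pin-array architecture whose pins accept target heights and report the force a user applies to them. The class and method names are illustrative, not an existing toolkit API.

```python
# Minimal sketch of computational control of a pin-array shape display.
# Assumption: a hypothetical driver in which each pin accepts a target height (mm)
# and reports back the force a user applies to it. Names are illustrative.

import math


class SimulatedPin:
    """Stand-in for one actuated pin: stores a target height and a sensed force."""

    def __init__(self):
        self.height_mm = 0.0   # current commanded height (output)
        self.force_n = 0.0     # force sensed on the pin tip (input)

    def move_to(self, height_mm: float) -> None:
        self.height_mm = max(0.0, min(100.0, height_mm))  # clamp to travel range


class PinArrayDisplay:
    """A grid of pins that renders a height map (output) and exposes touch forces (input)."""

    def __init__(self, rows: int, cols: int):
        self.rows, self.cols = rows, cols
        self.pins = [[SimulatedPin() for _ in range(cols)] for _ in range(rows)]

    def render(self, height_map) -> None:
        """Drive every pin toward the height given by a function f(row, col) -> mm."""
        for r in range(self.rows):
            for c in range(self.cols):
                self.pins[r][c].move_to(height_map(r, c))

    def touched_pins(self, threshold_n: float = 0.5):
        """Return coordinates of pins currently pressed harder than the threshold."""
        return [(r, c)
                for r in range(self.rows)
                for c in range(self.cols)
                if self.pins[r][c].force_n > threshold_n]


if __name__ == "__main__":
    display = PinArrayDisplay(rows=16, cols=16)
    # Output: render a radial "bump" centred on the display.
    display.render(lambda r, c: 40.0 * math.exp(-((r - 8) ** 2 + (c - 8) ** 2) / 20.0))
    # Input: poll for user presses, e.g., to trigger an application event.
    print(display.touched_pins())
```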

Research Agenda: Technology, User Behavior, Design, and Policy, Ethics, and Sustainability

To advance SCIs, Alexander et al. [1] identified twelve grand challenges across four domains: technology, user behavior, design, and policy, ethics, and sustainability. Figure 8 illustrates these grand challenges.

Figure 8: Grand challenges for Shape-Changing Interfaces [1]

Technology

Toolkits for Prototyping of Shape-changing Hardware. Building shape-changing interfaces requires knowledge of complex electronics and mechanical engineering, so there is a need for toolkits that facilitate their prototyping. This research aims to lower the implementation barrier by creating a standard platform for hardware prototyping, a cross-platform software layer, and tools for end-user programming, ultimately reducing implementation effort by at least a factor of 10 in time and cost.
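
As an illustration of what such a cross-platform software layer and end-user programming tools could offer, here is a minimal sketch of a hardware-agnostic shape animation that any actuator backend could replay. The Keyframe and ShapeAnimation names are hypothetical, not part of an existing toolkit.

```python
# Minimal sketch of an end-user-facing toolkit layer: a declarative shape-change
# description that a cross-platform backend would translate to actuator commands.
# Entirely hypothetical; not an existing library.

from dataclasses import dataclass


@dataclass
class Keyframe:
    time_s: float          # when this pose should be reached
    heights_mm: list       # one target height per actuator


class ShapeAnimation:
    """A hardware-agnostic animation: a list of keyframes plus linear interpolation."""

    def __init__(self, keyframes):
        self.keyframes = sorted(keyframes, key=lambda k: k.time_s)

    def pose_at(self, t: float):
        """Interpolate actuator targets at time t, so any backend can replay them."""
        ks = self.keyframes
        if t <= ks[0].time_s:
            return ks[0].heights_mm
        for a, b in zip(ks, ks[1:]):
            if a.time_s <= t <= b.time_s:
                w = (t - a.time_s) / (b.time_s - a.time_s)
                return [ha + w * (hb - ha) for ha, hb in zip(a.heights_mm, b.heights_mm)]
        return ks[-1].heights_mm


# Usage: describe a 2-second rise of four actuators once, replay on any backend.
wave = ShapeAnimation([Keyframe(0.0, [0, 0, 0, 0]), Keyframe(2.0, [10, 20, 30, 40])])
print(wave.pose_at(1.0))  # -> [5.0, 10.0, 15.0, 20.0]
```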

Miniaturized Device Form Factors and High Resolution. A significant challenge in shape-changing interfaces is the need for small, lightweight, and high-resolution actuators that can transition from stationary to mobile and wearable forms. Current electromechanical actuators often result in bulky setups, while user expectations demand high-resolution outputs. Addressing this challenge involves leveraging advancements in soft robotics and smart materials to create slimmer, more responsive actuation systems that can support a higher resolution of shape change.

Integration of Additional I/O Modalities. Today’s shape-changing interfaces need to integrate additional input and output modalities, such as high-resolution touch sensing and adjustable material properties, to realize their full potential. This can be achieved by incorporating off-the-shelf sensors and displays, as well as advances in flexible technologies that allow for conformal integration with actuators.

Non-functional Requirements. Energy consumption is a significant challenge in actuated systems, particularly in mobile or wearable solutions that must be self-contained and therefore rely on large batteries or suffer from short battery life. To address this, there is potential in system designs that reduce power consumption by offering (bi-)stable states, along with environmental energy harvesting and self-charging capabilities, to ensure usability for at least a full day without recharging.

User Behavior

Understanding the User Experience of Shape-change. We should seek to understand the user experience (UX) of shape-change, as evaluating its effectiveness presents unique challenges. These include isolating the effects of combined modalities and assessing the robustness of current systems for in-depth evaluations. By characterizing the value of shape-changing interfaces, we can identify beneficial domains and tasks, supporting their design and construction. This involves conducting comparative studies and isolating factors to deepen our understanding of user interactions with shape-change.

Shape-Change Theory Building. Behavioural sciences construct theories to integrate empirical results and make predictions, yet shape-change research lacks such theorizing. Developing theoretical statements that articulate how shape-change affects interaction could help predict user reactions and enhance our understanding of its usefulness.

Design

Designing for Temporality. Shape-changing interfaces require temporal design, presenting challenges in translating behavioral sketches into actual designs due to the unique experiences users have with dynamic forms. While traditional design methods struggle to articulate these properties, inspiration from disciplines like dance and music can help develop techniques for designing and comparing temporal forms.

Integrating Artefact and Interaction. Designers of shape-changing interfaces must create devices that harmonize usability and aesthetics, engaging both the body and mind. This requires an understanding of theory, heuristics, and dynamic affordances, alongside the development of tools and methods that integrate artefact and interaction in the design process.

Application and Content Design. The design and implementation of applications and content for shape-changing interfaces present significant challenges, which can be categorized into four parts: when to apply shape-change, what shape-changes to implement, what applications to build, and how to design the content for those applications. It is crucial to develop frameworks and design principles that clarify the appropriate contexts for shape-change and establish consensus on shape-change semantics to avoid user confusion. Additionally, careful consideration of end-user content is essential, particularly for interfaces that utilize dynamic displays, necessitating the development of toolkits for content design across various formats.

Policy, Ethics, and Sustainability

Policy and Ethics. Policy-makers face the challenge of creating legislation that ensures the safe and ethical operation of shape-changing interfaces without stifling innovation. Key issues include safety, security, content appropriateness, ownership responsibility, and the implications of non-permanency in transformations.

Sustainability and the Environment. Shape-changing interfaces pose sustainability challenges due to their demand for natural resources and recycling difficulties, but their morphing ability could reduce the need for multiple devices, ultimately lowering resource requirements. Researchers should focus on developing long-lasting, modular interfaces that support upgrades and interoperability to enhance sustainability.

My Main Contributions

So far, I have contributed four research projects in the field of Shape-Changing Interfaces that address some of these grand challenges.

Expandable Illuminated Ring (2018)

In [6], I presented the iterative design process and implementation of an expandable and stackable illuminated ring (Figure 9), the building block of a ring-based shape display. Ring-based shape displays are a type of shape-changing interface whose deformation or movement is achieved through a system of rings; these rings can be mechanically actuated to create different shapes, surfaces, or structures in a controlled manner.

This project contributed to the grand challenges of Technology (toolkits for prototyping of shape-changing hardware, miniaturized device form factors and high resolution).

(a) Version 1
(b) Version 2
(c) Version 3
(d) Final Version
Figure 9: Designing an Expandable Illuminated Ring to Build an Actuated Ring Chart

CairnFORM (2019)

In [5], we introduced CairnFORM, (1) a display made of a stack of expandable illuminated rings that can change its axisymmetric shape (e.g., cone, double cone, cylinder). We show that axisymmetric shape-change can be used (2) to inform users around the display through data physicalization and (3) to unobtrusively notify users around the display through peripheral interaction.
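
To illustrate how an axisymmetric profile can be mapped onto a stack of rings, here is a minimal sketch that computes one target diameter per ring for a few profiles. The ring count, dimensions, and profile names are illustrative assumptions, not CairnFORM’s actual parameters or firmware.

```python
# Minimal sketch: target diameters for a stack of expandable rings so that the
# stack approximates an axisymmetric profile (cylinder, cone, double cone).
# Ring count and diameter range are illustrative, not CairnFORM's actual values.

def ring_diameters(profile: str, n_rings: int = 10,
                   d_min: float = 20.0, d_max: float = 40.0):
    """Return one target diameter (cm) per ring, from bottom to top."""
    diameters = []
    for i in range(n_rings):
        t = i / (n_rings - 1)          # 0.0 at the bottom ring, 1.0 at the top ring
        if profile == "cylinder":
            d = d_max                                  # constant diameter
        elif profile == "cone":
            d = d_max - t * (d_max - d_min)            # wide base, narrow top
        elif profile == "double cone":
            d = d_min + (1.0 - abs(2.0 * t - 1.0)) * (d_max - d_min)  # widest in the middle
        else:
            raise ValueError(f"unknown profile: {profile}")
        diameters.append(round(d, 1))
    return diameters


# A data value per ring (e.g., hourly energy availability) could further scale each diameter.
print(ring_diameters("cone"))
print(ring_diameters("double cone"))
```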

This project contributed to the grand challenges of Technology (miniaturized device form factors and high resolution) and Design (integrating artefact and interaction, application and content design).

(a) A stack of expandable illuminated rings for display
(b) Factors of the first study on data physicalization
(c) Factors of the second study on peripheral interaction
Figure 10: CairnFORM: a Shape-Changing Ring Chart Notifying Renewable Energy Availability in Peripheral Locations

CairnFORM² (2021)

In [8], we extend the understanding of the potential utility and usability of axisymmetric shape-change for display. We present (1) 16 new use cases for CairnFORM (Figure 11 (a)) and (2) a two-month comparative field study with CairnFORM in the workplace (Figure 11 (b)). Early results show that cylindrical shape-change animations remain more attractive over time than flat-screen animations.

This project contributed to the grand challenges of User Behavior (understanding the user experience of shape-change) and Design (application and content design).

(a) Sixteen use cases for ring-based shape displays
(b) Two-month field study comparing ring-based shape displays and flat displays
Figure 11: Exploring Axisymmetric Shape-Change’s Purposes and Allure for Ambient Display: 16 Potential Use Cases and a Two-Month Preliminary Study on Daily Notifications

ShyPins (2025)

In [7], we introduced ShyPins, a pin-based shape display that implements a safety strategy to protect users’ physical and psychological well-being by regulating pin motion based on user proximity (Figure 12). Pin-based shape displays can cause discomfort or psychological harm when pins move forcefully against the user’s body; ShyPins addresses this by modulating actuation behavior as users approach. A user study showed that gradually reducing pin speed as users move closer is perceived as safer than abruptly stopping motion, while the perceived safety of motion pauses depends on user preferences in user-triggered actuation. Another study found that projecting user-centered stop zones improves perceived safety compared to pin-centered zones, and that users feel less safe when approaching moving pins than when pins move toward them.
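
As a sketch of the kind of proximity-based regulation described above (pins slowing down gradually as a user approaches, then stopping), consider the following minimal example. The distance thresholds and speed values are illustrative assumptions, not those evaluated in the paper.

```python
# Minimal sketch of proximity-based speed regulation, in the spirit of ShyPins:
# pins slow down gradually as the user's hand gets closer and stop inside a
# protective zone. Thresholds and speeds are illustrative, not from the paper.

def pin_speed(distance_cm: float,
              stop_zone_cm: float = 5.0,
              slow_zone_cm: float = 25.0,
              max_speed_mm_s: float = 30.0) -> float:
    """Return the allowed pin speed (mm/s) given the user's distance to the pin."""
    if distance_cm <= stop_zone_cm:
        return 0.0                      # inside the stop zone: freeze motion
    if distance_cm >= slow_zone_cm:
        return max_speed_mm_s           # far away: full speed
    # In between: ramp the speed linearly from 0 to max, i.e., a gradual slowdown
    # rather than an abrupt stop.
    ratio = (distance_cm - stop_zone_cm) / (slow_zone_cm - stop_zone_cm)
    return max_speed_mm_s * ratio


for d in (40, 25, 15, 5, 2):
    print(f"hand at {d:2d} cm -> pin speed {pin_speed(d):5.1f} mm/s")
```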

This project contributed to the grand challenges of Technology (integration of additional I/O modalities), User Behavior (understanding the user experience of shape-change), and Policy, Ethics, and Sustainability (policy and ethics).

(a) ShyPins regulates pin motion to prevent the forcible (dis)appearance of tangible controls, mitigating psychological harm to users during scenarios like reconfiguring a tangible cockpit (top) and transitioning a physical bar chart (bottom).
(b) Factors of the first study on perceived safety: the number of pin speed zones (a), the level of user body engagement (b), and the trigger of pin actuation (c).
(c) Factors of the second study on perceived safety: the stop zone visualization (a), the pin matrix actuation density (b), and the shape-change interruption event (c).
Figure 12: ShyPins: Safeguarding User Mental Safety From The Forcible (Dis)Appearance Of Pin-based Controls By Using Speed Zones

My Personal Perspectives

Research

My research perspectives are quite straightforward. I believe that Shape-Changing Interfaces hold great potential to transform the field of Human-Computer Interaction, particularly in light of the long-term vision of Programmable Matter. At the same time, I recognize that many significant challenges remain before this vision can be fully realized in human societies. These grand challenges serve as guiding principles for my research agenda. I am particularly interested in addressing the technological and behavioral aspects of these challenges, while also remaining open to contributing to the design, policy, ethical, and sustainability dimensions of the field.

My next career goal is to train PhD students in Shape-Changing Interfaces and to pursue a Habilitation à Diriger des Recherches (HDR), which will allow me to supervise doctoral candidates and lead independent research programs. I intend to build upon my current body of work on Shape-Changing Interfaces as the foundation of my HDR thesis, using the grand challenges outlined above as guiding principles, and I am eager to contribute further to the advancement and maturation of this emerging research area.

Teaching

Beyond research supervision, I am deeply passionate about educating the next generation of computer scientists at the university level, from undergraduate (Licence) to graduate (Master) programs. My teaching interests span foundational Computer Science topics including algorithms and data structures, procedural programming, object-oriented programming, event-driven programming, and embedded systems programming.

Ultimately, I aspire to develop and teach a dedicated course on Human-Computer Interaction, with a particular focus on Shape-Changing Interfaces. While I have not yet had the opportunity to create such a course, I am eager to design a curriculum that would introduce students to this emerging field, bridging theoretical foundations with hands-on prototyping experiences. I believe that educating future researchers and practitioners in Shape-Changing Interfaces will be crucial for advancing the field and realizing the long-term vision of Programmable Matter.

References

[1]
Alexander, J. et al. 2018. Grand Challenges in Shape-Changing Interface Research. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (Montreal QC, Canada, Apr. 2018), 1–14.
[2]
Belke, C.H. and Paik, J. 2017. Mori: A Modular Origami Robot. IEEE/ASME Transactions on Mechatronics. 22, 5 (Oct. 2017), 2153–2164. https://doi.org/10.1109/TMECH.2017.2697310.
[3]
Berzowska, J. and Coelho, M. 2005. Kukkia and Vilkas: Kinetic Electronic Garments. Ninth IEEE International Symposium on Wearable Computers (ISWC’05) (Osaka, Japan, 2005), 82–85.
[4]
Chen, T. et al. 2021. Bistable auxetic surface structures. ACM Trans. Graph. 40, 4 (Jul. 2021). https://doi.org/10.1145/3450626.3459940.
[5]
Daniel, M. et al. 2019. CairnFORM: A Shape-Changing Ring Chart Notifying Renewable Energy Availability in Peripheral Locations. Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction - TEI ’19 (Tempe, Arizona, USA, 2019), 275–286.
[6]
Daniel, M. et al. 2018. Designing an expandable illuminated ring to build an actuated ring chart. Proceedings of the twelfth international conference on tangible, embedded, and embodied interaction (New York, NY, USA, 2018), 140–147.
[7]
Daniel, M. and Delamare, W. 2025. ShyPins: Safeguarding user mental safety from the forcible (dis)appearance of pin-based controls by using speed zones. Proceedings of the nineteenth international conference on tangible, embedded, and embodied interaction (New York, NY, USA, 2025).
[8]
Daniel, M. and Rivière, G. 2021. Exploring Axisymmetric Shape-Change’s Purposes and Allure for Ambient Display: 16 Potential Use Cases and a Two-Month Preliminary Study on Daily Notifications. Proceedings of the Fifteenth International Conference on Tangible, Embedded, and Embodied Interaction (Salzburg Austria, Feb. 2021), 1–6.
[9]
Engert, S. et al. 2022. STRAIDE: A Research Platform for Shape-Changing Spatial Displays based on Actuated Strings. CHI Conference on Human Factors in Computing Systems (New Orleans LA USA, Apr. 2022), 1–16.
[10]
Follmer, S. et al. 2013. inFORM: Dynamic Physical Affordances and Constraints Through Shape and Object Actuation. Proceedings of the 26th Annual ACM Symposium on User Interface Software and Technology (New York, NY, USA, 2013), 417–426.
[11]
Grønbæk, J.E. et al. 2020. KirigamiTable: Designing for Proxemic Transitions with a Shape-Changing Tabletop. Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu HI USA, Apr. 2020), 1–15.
[12]
Ishii, H. et al. 2012. Radical Atoms: Beyond Tangible Bits, Toward Transformable Materials. interactions. 19, 1 (Jan. 2012), 38–51. https://doi.org/10.1145/2065327.2065337.
[13]
Ishii, H. 2008. Tangible Bits: Beyond Pixels. Proceedings of the 2nd International Conference on Tangible and Embedded Interaction (New York, NY, USA, 2008), xv–xxv.
[14]
Ishii, H. and Ullmer, B. 1997. Tangible Bits: Towards Seamless Interfaces Between People, Bits and Atoms. Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, 1997), 234–241.
[15]
Je, S. et al. 2021. Elevate: A walkable pin-array for large shape-changing terrains. Proceedings of the 2021 CHI conference on human factors in computing systems (New York, NY, USA, 2021).
[16]
Katsumoto, Y. 2020. Inside Out. ACM SIGGRAPH 2020 Art Gallery (New York, NY, USA, 2020), 467.
[17]
Kim, H. et al. 2018. KnobSlider: Design of a Shape-Changing UI for Parameter Control. Proceedings of the 2018 CHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2018), 1–13.
[18]
Le Goc, M. et al. 2016. Zooids: Building Blocks for Swarm User Interfaces. Proceedings of the 29th Annual Symposium on User Interface Software and Technology - UIST ’16 (Tokyo, Japan, 2016), 97–109.
[19]
Nakagaki, K. et al. 2019. inFORCE: Bi-directional Force Shape Display for Haptic Interaction. Proceedings of the Thirteenth International Conference on Tangible, Embedded, and Embodied Interaction (Tempe Arizona USA, Mar. 2019), 615–623.
[20]
Nakagaki, K. et al. 2015. LineFORM: Actuated Curve Interfaces for Display, Interaction, and Constraint. Proceedings of the 28th Annual ACM Symposium on User Interface Software & Technology (Charlotte NC USA, Nov. 2015), 333–339.
[21]
Pishvar, M. and Harne, R.L. 2020. Foundations for soft, smart matter by active mechanical metamaterials. Advanced science. 7, 18 (2020), 2001384. https://doi.org/10.1002/advs.202001384.
[22]
Prattichizzo, D. et al. 2014. The Sixth-Finger: A modular extra-finger to enhance human hand capabilities. The 23rd IEEE International Symposium on Robot and Human Interactive Communication (Edinburgh, UK, Aug. 2014), 993–998.
[23]
Raffle, H.S. et al. 2004. Topobo: A constructive assembly system with kinetic memory. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2004), 647–654.
[24]
Rasmussen, M.K. et al. 2012. Shape-changing interfaces: A review of the design space and open research questions. Proceedings of the SIGCHI conference on human factors in computing systems (New York, NY, USA, 2012), 735–744.
[25]
Rasmussen, M.K. et al. 2012. Shape-changing Interfaces: A Review of the Design Space and Open Research Questions. Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (New York, NY, USA, 2012), 735–744.
[26]
Roudaut, A. et al. 2016. Cubimorph: Designing modular interactive devices. 2016 IEEE International Conference on Robotics and Automation (ICRA) (Stockholm, Sweden, May 2016), 3339–3345.
[27]
Shaer, O. and Hornecker, E. 2010. Tangible User Interfaces: Past, Present, and Future Directions. Found. Trends Hum.-Comput. Interact. 3, 1&2 (Jan. 2010), 1–137. https://doi.org/10.1561/1100000026.
[28]
Spröwitz, A. et al. 2010. Roombots: Reconfigurable robots for adaptive furniture. IEEE Computational Intelligence Magazine. 5, 3 (2010), 20–32. https://doi.org/10.1109/MCI.2010.937320.
[29]
Steed, A. et al. 2021. A mechatronic shape display based on auxetic materials. Nature Communications. 12, 1 (Dec. 2021), 4758. https://doi.org/10.1038/s41467-021-24974-0.
[30]
Strohmeier, P. et al. 2016. Sharing Perspectives on the Design of Shape-Changing Interfaces. Proceedings of the 2016 CHI Conference Extended Abstracts on Human Factors in Computing Systems (San Jose California USA, May 2016), 3492–3499.
[31]
Sutherland, I.E. 1965. The ultimate display. Proceedings of the IFIP Congress (1965), 506–508.
[32]
Suzuki, R. et al. 2018. Dynablock: Dynamic 3D Printing for Instant and Reconstructable Shape Formation. Proceedings of the 31st Annual ACM Symposium on User Interface Software and Technology (Berlin Germany, Oct. 2018), 99–111.
[33]
Suzuki, R. et al. 2019. ShapeBots: Shape-changing Swarm Robots. Proceedings of the 32nd Annual ACM Symposium on User Interface Software and Technology (New Orleans LA USA, Oct. 2019), 493–505. https://doi.org/10.1145/3332165.3347911.
[34]
Taher, F. et al. 2015. Exploring Interactions with Physically Dynamic Bar Charts. Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems (New York, NY, USA, 2015), 3237–3246.
[35]
Togler, J. et al. 2009. Living Interfaces: The Thrifty Faucet. Proceedings of the 3rd International Conference on Tangible and Embedded Interaction (New York, NY, USA, 2009), 43–44.
[36]
Yim, M. et al. 2007. Modular self-reconfigurable robot systems [grand challenges of robotics]. IEEE Robotics & Automation Magazine. 14, 1 (2007), 43–52. https://doi.org/10.1109/MRA.2007.339623.
[37]
Yim, S. et al. 2018. Animatronic soft robots by additive folding. The International Journal of Robotics Research. 37, 6 (May 2018), 611–628. https://doi.org/10.1177/0278364918772023.