The Augmented Materiality Lab (AML) brings together scientists and artists with a maker mindset, interested in the technological continuum running from matter/object-centric research on one side to human/society-centric research on the other.

Central to this is the creative process of growing elemental computational agency into matter and “introspective” capacity into everyday objects – the kind found in our own body organs. Technology exploiting this substance can become more than a tool at the service of humans: a way to extend our bodies and selves, and ultimately to reconnect us with the physical world.

Augmented Materiality is real augmented reality: it does not overlay the digital over the real or augment our senses, but makes objects capable of producing this information and expressing it naturally. In this sense, Augmented Materiality extends the Internet of Things both downwards (programmable matter) and upwards (augmentation of bodies and minds), striving to open a communication channel across different levels of organizational complexity.

With this mission, the AML naturally engages in interdisciplinary research: from Smart Materials, Programmable Matter and the Internet of Things to Robotics, Responsive Architectures and Smart Cities; from Human-Computer Interfaces and Accessibility Interfaces to research on devices that alter perception through Augmented or Virtual Reality; from Machine Learning to embedded Deep Neural Network hardware. These subjects are explored both in a traditional academic way and in a more speculative manner.

The main axes of research are:

  • Smart materials combining passive bulk physical properties with “smart impurities” capable of modulating light or sound for use in new interfaces, architecture, engineering, design, and art.
  • New 3D printing and display technology (2D or volumetric) integrating these augmented/aware materials.
  • Light-field / sound-field sensing and projection techniques for large-scale, ubiquitous spatial augmented reality.
  • Multimodal human-computer interfaces relying on these systems in tandem with more conventional technology based on wearables, biosensing, speech analysis, and AI.

We want to study the opportunities of what is in essence a new medium (relying on ubiquitous sensing and projection, as well as fine-grained distributed computer processing) from different angles, including HCI, telepresence, cognitive sciences, and of course New Media research.