Multidisciplinary Column: An Interview with Suranga Nanayakkara

Author/Interviewee: Suranga Nanayakkara
Affiliation: Singapore University of Technology and Design (SUTD)
Editors/Interviewers: Cynthia C. S. Liem, Jochen Huber


Could you tell us a bit about your background, and what the road to your current position was?

I was born and raised in Sri Lanka. My mother is an electrical engineer by profession, and it always fascinated me to watch her tinkering with the TV, the radio and other such things. At the age of 19, I moved to Singapore to pursue my Bachelor's degree in electronics and computer engineering at the National University of Singapore (NUS). I then wanted to go into a field of research that would help me apply my skills to creating a meaningful solution. As such, for my PhD I started exploring ways of providing the most satisfying musical experience to profoundly deaf children.

That gave me the inspiration to design something that provides a full-body haptic sense. We researched various structures and materials, and did lots of user studies. The final design, which we call the Haptic Chair, was a wooden chair with contact speakers embedded in it. When you play music through this chair, the whole chair vibrates, and a person sitting on it gets a full-body vibration in tune with the music being played.

I was lucky to form a collaboration with one of the deaf schools in Sri Lanka, Savan Sahana Sewa, a college in Rawatawatte, Moratuwa. They gave me the opportunity to install the Haptic Chair on site, where there were about 90 hearing-impaired kids. I conducted user studies with these kids over a year and a half, trying to figure out whether the chair was really providing a satisfying musical experience. The Haptic Chair has been in use for more than 8 years now and has provided a platform for deaf students and their hearing teachers to connect and communicate via vibrations generated from sound.

After my PhD, I met Professor Pattie Maes, who directs the Fluid Interfaces Group at the MIT Media Lab. After talking to her about my research and future plans, she offered me a postdoctoral position in her group. The 1.5 years at the MIT Media Lab were a game changer in my research career, where I was able to form my research philosophy: the emphasis is on “enabling” rather than “fixing”. The technologies I developed there, for example FingerReader, demonstrated this idea and have a potentially much broader range of applications.

At that time, the Singapore government was setting up a new public university, the Singapore University of Technology and Design (SUTD), in collaboration with MIT. I then moved to SUTD, where I work as an Assistant Professor and direct the Augmented Human Lab (www.ahlab.org).

Your general agenda is towards humanizing technology. Can you tell us a bit about this mission and how it impacts your research?

When I started my bachelor’s degree at the National University of Singapore in 2001, I spoke no English and had never used a computer. My own “disability” in interacting with computers made me realize that there is a lot of opportunity to create an impact with assistive human-computer interfaces.

This inspired me to establish the Augmented Human Lab with a broader vision of creating interfaces that enable people, connecting different user communities through technology and empowering them to go beyond what they think they can do. Our work has use cases for everyone, regardless of where you stand on the continuum of sensorial ability and disability.

In a short period of 6 years, our work has resulted in over 11 million SGD in research funding, more than 60 publications, 12 patents, more than 20 live demonstrations and, most importantly, real-world deployments that created a social impact.

How does multidisciplinary work play a role in your research?

My research focuses on the design and development of new sensory-substitution systems, user interfaces and interactions to enhance the sensorial and cognitive capabilities of humans. This work really is multidisciplinary in nature: it includes developing new hardware technologies and software algorithms, understanding users and practical behavioral issues, and understanding the real-life contexts in which technologies function.

Can you tell us about your work on interactive installations, e.g. for Singapore’s 50th birthday? What are lessons learnt from working across disciplines?

I’ve always enjoyed working with people from different domains. Together with an interdisciplinary team, we designed an interactive light installation, iSwarm (http://ahlab.org/project/iswarm), for iLight Marina Bay, a light festival in Singapore. iSwarm consisted of 1600 addressable LEDs submerged in a bay area near the Singapore city center. iSwarm reacted to the presence of visitors with a modulation of its pattern and color. This made a significant impact, as more than 685,000 visitors came to see it (http://www.ura.gov.sg/uol/media-room/news/2014/apr/pr14-27.aspx). Subsequently, the curators of the Wellington LUX festival invited us to feature a version of iSwarm (nZwarm) at their 2014 festival. We were also invited to create an interactive installation, SonicSG (http://ahlab.org/project/sonicsg), for Singapore’s 50th anniversary. SonicSG aimed to foster a holistic understanding of the ways in which technology is changing our thinking about design in high-density contexts such as Singapore, and of how its creative use can reflect a sense of place. The project was a large-scale interactive light installation of 1,800 floating LED lights in the Singapore River, arranged in the shape of the island nation.

Could you name a grand research challenge in your current field of work?

One grand challenge lies in the idea of ‘universal design’, which sometimes amounts to creating mainstream technology and adding a little ‘patch’ so it can be labelled universal. Take the VoiceOver feature, for example: it is better than nothing, but not really the ideal solution. This is why, despite these efforts and the great variety of wearable assistive devices available, user acceptance is still quite low. For example, the blind community still depends heavily on the low-tech white cane.

The grand challenge really is to develop assistive interfaces that feel like a natural extension of the body (i.e., are seamless to use), are socially acceptable, work reliably in the complex, messy world of real situations, and support independent and portable interaction.

When would you consider yourself successful in reaching your overall mission of humanizing technology?

We want to create assistive devices that set the de facto standard for the people we work with, especially the blind and deaf communities. We would like to be known as a team that “provides a ray of light to the blind and a rhythm to the lives of the deaf”.

How and in what form do you feel we as academics can be most impactful?

For me, it is very important to understand where our academic work can not only be exciting or novel, but also have a meaningful impact on the way people live. The connection we have with the communities in which we live and with whom we work is a quality that will ensure our research always has real relevance.


Bios

 

Suranga Nanayakkara:

Before joining SUTD, Suranga was a Postdoctoral Associate in the Fluid Interfaces group at the MIT Media Lab. He received his PhD in 2010 and his BEng in 2005 from the National University of Singapore. In 2011, he founded the Augmented Human Lab (www.ahlab.org) to explore ways of creating ‘enabling’ human-computer interfaces that enhance the sensory and cognitive abilities of humans. With publications in prestigious conferences, demonstrations, patents, media coverage and real-world deployments, Suranga has demonstrated the potential of advancing the state of the art in assistive human-computer interfaces. For the totality and breadth of his achievements, Suranga has been recognized with many awards, including a young innovator under 35 (TR35) award for the Asia Pacific region from MIT Technology Review, Ten Outstanding Young Professionals (TOYP) from JCI Sri Lanka, and an INK Fellowship in 2016.

Editor Biographies

Dr. Cynthia C. S. Liem is an Assistant Professor in the Multimedia Computing Group of Delft University of Technology, The Netherlands, and pianist of the Magma Duo. She initiated and co-coordinated the European research project PHENICX (2013-2016), focusing on technological enrichment of symphonic concert recordings with partners such as the Royal Concertgebouw Orchestra. Her research interests consider music and multimedia search and recommendation, and increasingly shift towards making people discover new interests and content that would not trivially be retrieved. Beyond her academic activities, Cynthia gained industrial experience at Bell Labs Netherlands, Philips Research and Google. She was a recipient of the Lucent Global Science and Google Anita Borg Europe Memorial scholarships, the Google European Doctoral Fellowship 2010 in Multimedia, and a finalist of the New Scientist Science Talent Award 2016 for young scientists committed to public outreach.

Dr. Jochen Huber is a Senior User Experience Researcher at Synaptics. Previously, he was an SUTD-MIT postdoctoral fellow in the Fluid Interfaces Group at MIT Media Lab and the Augmented Human Lab at Singapore University of Technology and Design. He holds a Ph.D. in Computer Science and degrees in both Mathematics (Dipl.-Math.) and Computer Science (Dipl.-Inform.), all from Technische Universität Darmstadt, Germany. Jochen’s work is situated at the intersection of Human-Computer Interaction and Human Augmentation. He designs, implements and studies novel input technology in the areas of mobile, tangible & non-visual interaction, automotive UX and assistive augmentation. He has co-authored over 60 academic publications and regularly serves as a program committee member in premier HCI and multimedia conferences. He was program co-chair of ACM TVX 2016 and Augmented Human 2015 and chaired tracks of ACM Multimedia, ACM Creativity and Cognition and the ACM International Conference on Interactive Surfaces and Spaces, as well as numerous workshops at ACM CHI and IUI. Further information can be found on his personal homepage: http://jochenhuber.com
