In the tech world, there's always a 'next big thing' on the horizon. Right now, that horizon is dominated by conversations about artificial intelligence, mainly ChatGPT, Google’s Bard, and Generative AI tools like Midjourney and DALL-E 2. Some professions are embracing Generative AI, while others fear it.
Included in that conversation is a lot of talk about chatbots. But chatbots are not the future; they are our present. In many ways, chat interfaces are the foundation of more conversational and natural interactions with systems and computers, but really they are just the start of something much bigger.
We’ve been anticipating this future for a while, but it hasn’t really been possible until the last couple of years when the right ingredients and technology have converged. It’s an exciting time.
Ubiquitous Computing and Intelligent Interfaces
What happens when we move beyond our screens and unleash computing power, the internet, and AI into the real world? It starts to get interesting. Welcome to the age of ubiquitous computing — the future of intelligent systems.
Ubiquitous computing, also known as pervasive computing or ambient intelligence, is a concept first proposed by Mark Weiser, Chief Technologist at Xerox PARC, in the late 1980s. Weiser expanded on the topic in his 1991 Scientific American paper, "The Computer for the 21st Century". The idea behind ubiquitous computing was to create an environment where computers were embedded seamlessly into the physical world and where human-computer interaction was natural and effortless, where we forget about the underlying technology. Weiser envisioned a world where computing would be "invisible," and users would be surrounded by an "information fabric" that would provide them with relevant information and services.
Ubiquitous computing has been somewhat realized in the Internet of Things (IoT), the network of interconnected devices and sensors embedded in everyday objects, from smart homes to self-driving cars. But the original vision of ubiquitous computing as a truly seamless and integrated experience has yet to be fully realized.
Then there are intelligent interfaces, which represent a paradigm shift in human-computer interaction. Unlike traditional GUIs (Graphical User Interfaces) that require users to learn the system's language, intelligent interfaces aim to make the system understand the user's language. Instead of humans adapting to the system, the system adapts to them. Intelligent interfaces leverage human instincts and behaviors to create a more intuitive experience. These systems integrate artificial intelligence (AI) and can leverage everything from touch interfaces, voice recognition systems, and gesture-based controls to brain-computer interfaces — they’re multimodal.
These intelligent systems can make our interactions with technology more intuitive, natural, and efficient. Think Siri, but on steroids. So as we stand on the brink of a new era in human-computer interaction, it's worth exploring how these interfaces have evolved, where they're headed, and how they will revolutionize the way we interact with devices and the world around us.
The journey has been gradual, with momentum building in the last five years. It started with command-line interfaces, moved to graphical user interfaces, and then to touch interfaces with the advent of smartphones and tablets.
Today, we're already seeing the beginnings of intelligent interfaces. Technology is getting smarter, from Alexa managing our smart homes to AI algorithms recommending our next Netflix binge. They're in our phones, our cars, even our refrigerators. And they're changing the way we interact with technology. We’re now using much more natural and multimodal interaction in everyday objects. We're seeing the rise of voice interfaces like Apple's Siri and Google Assistant, gesture-based controls in gaming systems like the Nintendo Switch, and even early versions of brain-computer interfaces (BCIs) from companies like Neuralink.
As consumers, we’ve become accustomed to using biometrics, like facial recognition and fingerprints, to authenticate and unlock our devices. Even BMW is starting to use gestural interaction and gaze for hands-free interaction in their cars. Augmented Reality (AR) and Virtual Reality (VR) interfaces are becoming more popular, providing immersive experiences that blend the virtual and real worlds. This is seen in devices like the Meta Quest and Microsoft HoloLens.
Interaction patterns of the future
The chat interfaces we’re seeing today with ChatGPT and others are setting the foundation for multimodal systems of the future. The back-and-forth interaction volley will lay the groundwork for how systems interact with us, anticipating our needs, confirming our requests, and acting on our behalf.
We’ve seen glimpses of ubiquitous computing and intelligent interfaces in TV shows and movies, but it has always seemed far off in the future. Now, however, we’re on the cusp of an explosion of product innovation that advances in AI, computing, and hardware will finally enable. It’s exciting that what used to be science fiction and innovation concepts are now becoming reality. Movies and TV shows have played a large part in inspiring us to push further.
One of the most famous examples is the user interface in Minority Report. The movie has been highly influential in shaping public perceptions of interfaces and has inspired real-world applications of gesture-based interfaces and other emerging technologies. Most remember the gestural interface used by Tom Cruise’s character; what’s less widely known is the real computer system behind it. John Underkoffler of Oblong Industries created the g-speak Spatial Operating Environment, which uses natural gestures — no keyboard, mouse, or command line. Underkoffler also worked on the gestural holographic interfaces in Iron Man (2008).
The gesture-based interfaces in Minority Report and Iron Man gave us a glimpse of what's possible, while the voice interface in the movie Her showed us a future where our devices understand us on a deeply personal level. Her explores the idea of natural UI, the potential for human-like interactions with digital assistants, and the potential implications of a future where technology becomes more integrated into our personal lives.
Another great example is Westworld, which explores the concept of ephemeral interfaces and artificial intelligence, where android hosts adapt and respond to the guests' actions and preferences in real-time, creating a highly personalized and immersive experience in a technologically advanced amusement park. It gets even more interesting when the androids return to the “real world.”
Blade Runner 2049 offers a glimpse into a future where projection mapping and augmented reality create immersive and dynamic environments. In the movie, projection mapping creates large-scale holographic displays that interact with the physical environment, creating an immersive and surreal atmosphere.
Iron Man’s J.A.R.V.I.S. AI assistant and the virtual assistant FRIDAY in Captain America: Civil War are great examples of natural UI. These interfaces demonstrate the potential for natural language processing and machine learning to create sophisticated and responsive digital assistants to help us navigate our increasingly complex and interconnected world.
Intelligent Interfaces promise to make our interactions with technology more natural and effortless. By leveraging our behaviors and instincts, they can also reduce the learning curve associated with new technologies, making them more accessible to a broader range of users.
Emerging technology shaping the future
The interface of the future is not chat — it’s multimodal and ubiquitous.
The future is not pages and pages of UI flows or detecting whether you’re on desktop or mobile. Our future interfaces will be intelligent, contextual, and ephemeral. Just enough interface compiled in real-time, based on context and relevance.
UI that appears when it’s needed and hidden when it’s not.
The ability to interact by voice, touch, or typing, easily switching modalities based on what’s natural for the user. Where interfaces are fluid, and sound and haptics enhance calm, ambient interactions. A proactive concierge that provides what’s needed based on understanding who you are and gets better the more you interact with it.
Systems that adapt to humans instead of the other way around.
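To make the idea of ephemeral, modality-switching interfaces concrete, here is a deliberately simplified sketch. Everything in it is invented for illustration — the `Context` fields, the noise threshold, and the modality names don't correspond to any real product's API — but it shows the core pattern: the system reads the user's situation and chooses the least intrusive way to respond, rather than always rendering the same screen.

```python
from dataclasses import dataclass

@dataclass
class Context:
    """A snapshot of the user's situation, gathered from hypothetical sensors."""
    hands_free: bool       # e.g. the user is driving or cooking
    ambient_noise_db: int  # a microphone reading
    screen_nearby: bool    # a display is currently available

def choose_modality(ctx: Context) -> str:
    """Pick the least intrusive way to surface information right now."""
    if ctx.hands_free and ctx.ambient_noise_db < 70:
        return "voice"            # speak the response aloud
    if ctx.screen_nearby:
        return "glanceable-card"  # minimal UI, shown only while relevant
    return "haptic"               # fall back to a subtle vibration cue

# A driver in a quiet car gets a spoken response; someone at a desk in a
# noisy open office gets a small on-screen card instead.
driving = Context(hands_free=True, ambient_noise_db=45, screen_nearby=True)
at_desk = Context(hands_free=False, ambient_noise_db=80, screen_nearby=True)
```

The point isn't the specific rules — a real system would learn them — but that the interface is assembled per moment from context, instead of being a fixed layout the user must adapt to.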
Some companies are already working towards this vision of the future. At this year’s TED Conference, Humane’s Imran Chaudhri provided a preview of their unreleased tech, a system where AI, computer vision, and projection come together to create an assistant that’s with you throughout your day without using a phone — where the device disappears. Early views of this type of system are reminiscent of Pranav Mistry’s 2009 SixthSense demo at TEDIndia, a wearable gestural interface that began as his MIT Media Lab thesis project. Pattie Maes, who runs the Media Lab's Fluid Interfaces research group, created a huge buzz introducing the project on the TED main stage that year.
Another team pushing the boundaries is new.computer, founded by former Apple designer Jason Yuan and Sam Whitmore, who have just received funding for their mission to “create a future where computers intuitively adapt to humans, forging relationships as essential as the tools we use today.” Yuan’s name may be familiar as the creator of Mercury OS, a minimal, fluid reimagining of the traditional operating system focused on the user’s intention instead of apps and folders.
And some of the most experimental art may push the boundaries and help shape how we interact with future systems. Refik Anadol uses projection mapping and machine learning to create immersive AI data sculptures and interactive art installations. Anadol's work blurs the line between the physical and digital worlds, creating beautiful and thought-provoking environments.
Advancements in AI, machine learning, natural language processing, and human-computer interaction will likely drive the evolution of intelligent interfaces over the next decade. As we think about the types of new interactions and experiences that will evolve, here are some of the trends and developments that will influence that future:
- Multimodal Interfaces: Future interfaces will likely combine text, voice, visual, and even tactile inputs and outputs. This will allow users to interact with AI in whatever way is convenient or intuitive for them at any given moment. For example, you might speak a command to your AI assistant, then receive a visual response on your smart glasses.
- Context-Aware Interfaces: AI will become better at understanding the context of user interactions. This means that the AI will understand what you're saying, where you are, what you're doing, and what you might need in that specific situation. This could involve integrating data from various sensors and sources to provide more relevant and personalized responses.
- Emotionally Intelligent Interfaces: AI will become more adept at recognizing and responding to human emotions. This could involve analyzing voice tones, facial expressions, or even physiological signals to understand the user's emotional state and adjust its responses accordingly.
- Proactive Interfaces: Instead of waiting for commands, AI interfaces will become more proactive, anticipating user needs based on patterns, habits, and preferences. For example, your AI assistant might suggest leaving early for a meeting if it knows there's heavy traffic on your usual route.
- Immersive Interfaces: With advancements in AR, VR, and projection mapping technologies like those demonstrated by Humane, we can expect more immersive AI interfaces. These technologies could allow for more natural and intuitive interactions with digital content.
- Collaborative Interfaces: AI will become more capable of collaborative problem-solving, working alongside humans to tackle complex tasks. This will involve understanding and contributing to human-like conversations, including recognizing when to take initiative and when to ask for clarification.
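The proactive pattern above can be sketched in a few lines. This is a toy illustration, not a real assistant: the function names and the 10-minute threshold are invented, and in practice the traffic delay would come from a live maps API rather than a parameter.

```python
from datetime import datetime, timedelta

def suggest_departure(meeting_start: datetime,
                      usual_travel_min: int,
                      traffic_delay_min: int,
                      buffer_min: int = 5) -> str:
    """Proactively suggest when to leave, factoring in current traffic.

    In a real assistant, traffic_delay_min would be fetched from a routing
    service; here it is passed in to keep the sketch self-contained.
    """
    leave_at = meeting_start - timedelta(
        minutes=usual_travel_min + traffic_delay_min + buffer_min)
    if traffic_delay_min > 10:
        return (f"Heavy traffic on your usual route; leave by "
                f"{leave_at:%H:%M} to make your {meeting_start:%H:%M} meeting.")
    return f"Traffic looks normal; leaving by {leave_at:%H:%M} is fine."
```

For a 14:00 meeting with a 30-minute usual drive and a 15-minute traffic delay, the assistant would volunteer "leave by 13:10" before the user ever asks — the interface initiates the interaction, not the human.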
In the future, we can expect more personalized and immersive interfaces. AI and machine learning will continue to make interfaces smarter and more adaptive. They'll learn from our habits and preferences, making our interactions with technology more efficient and enjoyable. AR and VR will continue to evolve, creating more immersive and interactive experiences, blurring the lines between the physical and digital worlds, and creating new possibilities for interaction. Though still in their infancy, Brain-Computer Interfaces (BCIs) hold the promise of a future where we can interact with technology using our thoughts alone.
We'll see a move towards more continuous, personalized, ambient interfaces. These interfaces will be seamlessly integrated into our daily lives, allowing us to interact with AI in a more natural and intuitive way, similar to how we interact with other humans. Consider a combination of voice, gesture, gaze, and even thought-based interfaces in the future enabled by technological advancements like BCIs.
The latest research in the field is fascinating. Scientists are exploring everything from AI algorithms to brain-computer interfaces, and their findings could revolutionize how we interact with technology. Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) are developing systems that can understand and respond to human emotions, potentially making our interactions with technology more empathetic and engaging. Meanwhile, researchers at the University of California, San Francisco, are making strides in BCI technology, recently developing a system that can translate brain signals into complete sentences.
Intelligent Interfaces represent the next frontier in human-computer interaction. They hold the promise of making our interactions with technology more natural, intuitive, and engaging. While significant challenges and ethical considerations exist, the potential benefits are immense. One thing is clear — how we interact with technology is about to change significantly.
This is a denser approach than my usual posts. As always, send me your feedback. If the community is very interested in this topic, I’ll write part two, where I’ll dive into intelligent interfaces' challenges, ethical considerations, and how the role of designers will evolve. I’ve been trying out ChatGPT as my research assistant; how did it do?