Multi-modality Creates More Inclusive Product Experiences
Product creators have the opportunity to leverage genUI to create bespoke solutions that are customized for every user
A new frontier with generative UI
I recently spoke at PUSH Conference in Munich about how generative user interfaces can adapt to users where they are - whether they need additional support navigating an interface or are adjusting to their environment (such as a bumpy train ride or a loud space).
With multi-modality and generative UI, interfaces can adapt to these types of environments for users. Multi-modality is the ability to interact with interfaces using many different types of ‘inputs’ and ‘outputs’, such as voice, adding images through drag and drop, selecting text, and many more. What’s exciting about these interaction patterns is that interfaces can move beyond text and typing or selecting with a keyboard, mouse, or finger, which are limiting for many users.
With these new interaction patterns, or ways of engaging with different types of interfaces, users with different needs will have more flexibility. For example, if they require a low-sensory interface or want a touch-free experience, UIs can adapt to them rather than the other way around.
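As a rough illustration, here is a minimal TypeScript sketch of how an app might funnel several input modalities into a single intent path. The types and handler names are hypothetical and not tied to any particular framework.

```ts
// Hypothetical sketch: different input modalities funnel into one intent type,
// so the interface can respond the same way regardless of how the user engaged.
type UserIntent =
  | { kind: 'text'; value: string }
  | { kind: 'voice'; transcript: string }
  | { kind: 'image'; file: File }
  | { kind: 'selection'; selectedText: string };

function handleIntent(intent: UserIntent): void {
  switch (intent.kind) {
    case 'text':
      console.log('Typed input:', intent.value);
      break;
    case 'voice':
      console.log('Spoken input:', intent.transcript);
      break;
    case 'image':
      console.log('Dropped image:', intent.file.name);
      break;
    case 'selection':
      console.log('Selected text:', intent.selectedText);
      break;
  }
}

// Example: a drag-and-dropped image reaches the same handler a typed request would.
document.addEventListener('drop', (event: DragEvent) => {
  event.preventDefault();
  const file = event.dataTransfer?.files[0];
  if (file) handleIntent({ kind: 'image', file });
});
```

The point of the sketch is that the interface reasons about one normalized intent, not about which device or modality produced it.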
With traditional UIs, we get one option for interaction
With traditional UIs, there are only a few ways to adapt the interface. Accessibility tools and dynamic dark/light mode are a couple of options. But what about the spectrum of users in between? Or what about users who need a particular kind of interface in some environments, but not all the time?
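To make this concrete, here is a small sketch in TypeScript against standard browser APIs showing the kind of fixed adaptation traditional UIs offer: the page can only react to a handful of coarse, OS-level signals.

```ts
// Traditional adaptation: react to a few fixed, system-level preferences.
const prefersDark = window.matchMedia('(prefers-color-scheme: dark)');
const prefersReducedMotion = window.matchMedia('(prefers-reduced-motion: reduce)');

function applyStaticPreferences(): void {
  // Toggle a dark theme and disable animations based on OS settings only;
  // there is no notion of "in between" or of a temporary, situational need.
  document.body.classList.toggle('dark-theme', prefersDark.matches);
  document.body.classList.toggle('reduced-motion', prefersReducedMotion.matches);
}

prefersDark.addEventListener('change', applyStaticPreferences);
prefersReducedMotion.addEventListener('change', applyStaticPreferences);
applyStaticPreferences();
```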
More diverse outcomes
It is difficult for users to understand the spectrum of possible outputs in user interfaces. When entering a user intent (through text, voice, or another interaction pattern), it can be challenging to know the range of outcomes. As product creators, we can mitigate this by providing users with more granular controls.
Generative models can create diverse outcomes, but we should give users more granular controls
Providing better multi-modal controls can improve product inclusion and equity outcomes.
Touch isn’t always the best option
Demands full attention
Requires fine motor skills or use of one or more hands
Assumes readability or literacy of the users
As product creators, we can generate bespoke interfaces tailored to the unique needs (or dimensions of identity) of each user.[1]
Empower users with the tools that best fit them through multi-modality (touch, voice, etc.), so they can use devices in many contexts
Anticipate user intent and create multiple options for generating bespoke interfaces
Making experiences on devices more inclusive and adaptable for everyone
Types of interaction patterns with genUI for product inclusion and equity
“Explain it to me like a 3rd grader” or “Make this more dramatic”
“Make this higher contrast so that I can see it better”
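As one way to picture these patterns, here is a purely hypothetical TypeScript sketch of routing a natural-language request to presentation settings; `generateUiSettings` stands in for a call to a generative model and is not a real API.

```ts
// Hypothetical settings a generative model might return for requests like
// "Make this higher contrast" or "Explain it to me like a 3rd grader".
interface UiSettings {
  contrast: 'default' | 'high';
  readingLevel: 'default' | 'simplified';
}

// Stand-in for a generative model call; hard-coded so the sketch stays
// self-contained and runnable.
async function generateUiSettings(request: string): Promise<UiSettings> {
  if (/contrast/i.test(request)) {
    return { contrast: 'high', readingLevel: 'default' };
  }
  if (/3rd grader|simpler/i.test(request)) {
    return { contrast: 'default', readingLevel: 'simplified' };
  }
  return { contrast: 'default', readingLevel: 'default' };
}

async function adaptInterface(request: string): Promise<void> {
  const settings = await generateUiSettings(request);
  document.body.classList.toggle('high-contrast', settings.contrast === 'high');
  document.body.dataset.readingLevel = settings.readingLevel;
}

// Example intents from the patterns above:
void adaptInterface('Make this higher contrast so that I can see it better');
void adaptInterface('Explain it to me like a 3rd grader');
```

In a real product the stubbed function would be replaced by a model call, and the returned settings would drive whichever presentation layer the product uses.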
A path forward for genUI and product inclusion and equity
Designing for everyone starts from the beginning
Explore multi-modality outside of touch
Consider providing UI options through genUI
Remember you are the product creator!
As always, I would love to hear from you! What would you like to learn more about?
[1] For more information about the dimensions of identity, see Google’s product inclusion and equity resources: https://about.google/belonging/product-inclusion-and-equity/