What is Gen UI and does it matter?
Understanding the fundamentals of Gen UI and the conditions for its success.
Name
- Ross Tulloch
Date
- 9th May 2024
What is Gen UI?
Nielsen Norman recently penned a definitional piece on Generative UI; while informative, it left us wanting to expand on the topic.
Generative UI (Gen UI) is an emerging technology that, in future, could use artificial intelligence to automatically create and adjust the layout and design of digital interfaces like websites or apps. It changes interfaces based on what it learns about each user's preferences and needs, making each user's experience more personalised and responsive.
This technology, a combination of AI and machine learning, enables interfaces to evolve in real-time, reacting not only to broad user trends but also to individual interactions. For example, if the system notices that a user prefers larger text sizes or frequently uses certain features, it can automatically adjust the display to accommodate those preferences and remember them for the future. This level of customisation aims to make digital platforms easier and more enjoyable to use, enhancing the overall user experience.
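To make that loop concrete, here is a deliberately small TypeScript sketch of the pattern described above: noticing repeated behaviour, inferring a preference and remembering it for next time. Every name in it (InteractionEvent, PreferenceStore, inferTextSize) is hypothetical, and a real system would persist preferences per user rather than in memory.

```typescript
// Illustrative sketch only: infer and remember a text-size preference
// once a user keeps zooming in. All identifiers here are hypothetical.

type InteractionEvent = { type: "zoom-in" | "zoom-out" | "click"; timestamp: number };

class PreferenceStore {
  private prefs = new Map<string, string>();

  set(key: string, value: string): void {
    this.prefs.set(key, value); // in practice this would persist per user
  }

  get(key: string): string | undefined {
    return this.prefs.get(key);
  }
}

// If a user zooms in repeatedly, infer a preference for larger text
// and remember it for future sessions.
function inferTextSize(events: InteractionEvent[], store: PreferenceStore): void {
  const zoomIns = events.filter((e) => e.type === "zoom-in").length;
  if (zoomIns >= 3) {
    store.set("textSize", "large");
  }
}

const store = new PreferenceStore();
inferTextSize(
  [
    { type: "zoom-in", timestamp: 1 },
    { type: "zoom-in", timestamp: 2 },
    { type: "zoom-in", timestamp: 3 },
  ],
  store,
);
console.log(store.get("textSize")); // "large"
```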
In the future, generative UI will dynamically create customised user interfaces in real time. This shift will force an outcome-oriented design approach, where designers prioritise user goals and define constraints for AI to operate within, rather than designing discrete interface elements.
What are the conditions for its success?
With definitions out of the way, we believe the successful implementation and operation of Gen UI hinges on organisations having robust foundations in content, information and technical architecture, supported by a well-articulated design system where design and code align seamlessly. These elements are essential for Gen UI systems to function effectively, ensuring that user interfaces are dynamic, responsive and tailored to individual user needs. That said, it's worth noting that most current descriptions of how Gen UI will work rely on businesses and technologies, rather than people themselves, defining, controlling and articulating their preferences for augmentation.
What are the key features of Gen UI?
Gen UI could enable scalability and consistency across different platforms, thanks to its ability to make smart design decisions based on what it learns from user interactions. This technology would create interfaces that are not only intuitive and responsive but also focused on what users need. Without further ado, let's look at some of the key features of Gen UI:
Dynamic personalisation Gen UI changes the user interface for each person based on their actions, preferences and specific situations, making it more user-friendly and relevant for everyone, unlike today’s ‘one size fits all’ approach to experience.
Real-time responsiveness It reacts to what users do, changing things like the visuals or how the interface and calls to action are laid out as soon as the user interacts with the interface.
Data-driven design Gen UI uses data to make design decisions. By analysing in real time how users interact with the interface, it spots trends and preferences that help it make smarter design choices.
Consistency Automation allows Gen UI to maintain a consistent look, feel and functionality across different devices and platforms, making the user experience uniform everywhere.
Learning and adaptation Gen UI systems learn from each user interaction and constantly get better at tailoring the interface to meet user needs over time, as the simple sketch below illustrates.
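As a rough illustration of learning and adaptation, the TypeScript sketch below counts which features a person actually reaches for and reorders the interface accordingly. It is a toy, not how any particular Gen UI system works, and the names (FeatureUsage, recordUse, buildLayout) are ours.

```typescript
// Illustrative sketch: count feature usage per person and surface the
// most-used features first. Identifiers are hypothetical.

type FeatureUsage = Record<string, number>;

// Record each interaction so the interface can keep adapting over time.
function recordUse(usage: FeatureUsage, feature: string): FeatureUsage {
  return { ...usage, [feature]: (usage[feature] ?? 0) + 1 };
}

// Order features by observed use: the most-used ones come first, giving
// each person a layout tailored to their own behaviour.
function buildLayout(usage: FeatureUsage): string[] {
  return Object.entries(usage)
    .sort(([, a], [, b]) => b - a)
    .map(([feature]) => feature);
}

let usage: FeatureUsage = {};
usage = recordUse(usage, "search");
usage = recordUse(usage, "payments");
usage = recordUse(usage, "payments");
console.log(buildLayout(usage)); // ["payments", "search"]
```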
Gen UI: the benefits
Real-time responsiveness makes Gen UI an invaluable area to explore for businesses looking to stay competitive in a digital-first world, ensuring that every user interaction is effective and useful at an individual level.
Let’s look at some of the potential benefits of Gen UI:
Scalability Gen UI could make it easier to expand and adjust user interfaces across devices and platforms, like smartphones, tablets and computers, without having to redesign for each one. This means that no matter what device you're using, the experience will feel familiar and consistent while still adapting to each user's behaviour on a 1:1 basis.
Cost reduction Gen UI, coupled with AI-assisted design, could reduce the time spent designing user interfaces, allowing design effort to be concentrated on information, technical and content architecture, and on delight through powerful visual design.
Data-driven decisions The data from users interacting with generative user interfaces, if underpinned by strong data architecture, could feed back into, and inform, future design, content and architecture decisions, helping to map future iterations to use cases more rapidly.
Enhanced accessibility It could automatically adjust interfaces for users with disabilities, both situational and long-term, offering features like high-contrast modes and voice navigation to suit individual needs. However, it should complement dedicated accessibility work, not replace it (see the sketch after this list).
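One way this could look in practice, sketched below in browser-side TypeScript: the standard prefers-contrast and prefers-reduced-motion media queries are respected first, and any learned preference is only layered on top, never overriding an explicit accessibility need. The merge logic and class name are illustrative assumptions.

```typescript
// Illustrative sketch: respect existing accessibility signals (queried via
// the real window.matchMedia API) before applying learned preferences.

type AccessibilityPrefs = { highContrast: boolean; reducedMotion: boolean };

function readSystemPrefs(): AccessibilityPrefs {
  return {
    highContrast: window.matchMedia("(prefers-contrast: more)").matches,
    reducedMotion: window.matchMedia("(prefers-reduced-motion: reduce)").matches,
  };
}

// Learned preferences never override an explicit accessibility need.
function mergePrefs(system: AccessibilityPrefs, learnedHighContrast: boolean): AccessibilityPrefs {
  return {
    highContrast: system.highContrast || learnedHighContrast,
    reducedMotion: system.reducedMotion,
  };
}

// Hypothetical class hook for a high-contrast theme.
document.body.classList.toggle("high-contrast", mergePrefs(readSystemPrefs(), false).highContrast);
```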
Gen UI in the real world
Here are three ways companies across industries may be exploring Gen UI:
Automotive In cars, Gen UI could tailor the dashboard displays and controls to match the driver's habits and preferences. It could even adjust based on different driving conditions or times of day. With major car manufacturers like Tesla and BMW already integrating more digital technology into their vehicles, the move to utilise Gen UI to create adaptable user interfaces as part of their car dashboards and infotainment systems seems like the next logical step.
Financial services Banks could use Gen UI to personalise banking apps and websites, displaying financial information or suggested actions based on a user's past behaviour and current financial situation. This could mean surfacing the most-accessed information more prominently, or bringing help options or the next most likely action forward when previous behaviour suggests someone is struggling to find something.
Media and entertainment Streaming services could employ Gen UI to adapt their interfaces, offering content recommendations based on what the user has watched, when they watch and what device they use. Netflix, Spotify and Hulu, which prioritise personalised content delivery, might use Gen UI to dynamically adjust their interfaces based on user preferences and viewing or listening habits in real time.
But wait, how did we get to the dawn of Gen UI?
We have a different narrative about its emergence than we do about the failings of accessibility. As digital has grown, more and more people have been served online, and with this rise personalisation has come to shape experiences ever more strongly. However, personalisation has sometimes felt too creepy, annoying, not accurate enough to be useful, or too aligned to marketing. It also hasn't often involved deep modulation of interfaces based on real-time and aggregate data, or any memory of those changes. At the same time, we've seen the rise of systematised design and the escape from static pattern libraries and brands to robustly designed and engineered systems that run across a brand's entire digital ecosystem. A build-up of usage data over time, better tools and a closer relationship between data and design have created the foundations for models to be trained on specific human behaviours and interactions.
Underpinning this are front-end frameworks, technical and content architecture, and tools that enable UI to adjust in real time and serve interfaces that best align with a person's interaction patterns.
Of course, without advances in computing power and cloud infrastructure, none of this would be possible.
Pop the hood: how does Gen UI work in theory?
Step 1. Data collection and pre-processing Gen UI systems begin by gathering extensive user interaction data, which may include clicks, scroll behaviour, time spent on a particular part of an interface and input data. This data is pre-processed to normalise and structure it, making it suitable for analysis.
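A minimal sketch of that pre-processing step, assuming a simple event shape (RawEvent and NormalisedEvent are illustrative names, not a real schema): raw interaction records are cleaned and given consistent fields before analysis.

```typescript
// Illustrative pre-processing sketch: normalise raw interaction events
// into a consistent structure suitable for analysis.

type RawEvent = { kind: string; target?: string; ms?: number };

type NormalisedEvent = {
  kind: string;       // e.g. "click", "scroll", "dwell"
  target: string;     // which part of the interface was touched
  durationMs: number; // time spent, defaulting to 0 when not captured
};

function normalise(events: RawEvent[]): NormalisedEvent[] {
  return events.map((e) => ({
    kind: e.kind.toLowerCase().trim(),
    target: e.target ?? "unknown",
    durationMs: e.ms ?? 0,
  }));
}

console.log(normalise([{ kind: " Click ", target: "nav-menu" }, { kind: "scroll", ms: 1200 }]));
```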
Step 2. Machine learning and model training The pre-processed data is used to train machine learning models that recognise patterns in user behaviour and preferences. Common approaches include supervised techniques such as decision trees and neural networks, alongside unsupervised clustering, which together can classify user behaviours and predict preferences. A toy illustration of this train-validate-predict loop follows the points below.
Feature engineering extracts the key data points that are most indicative of user preferences and behaviours.
Model training uses historical data so that the models get better at understanding how those data points relate to user preferences.
Validation and testing The models are validated and tested against a separate set of data to ensure they perform well in predicting user interface preferences. At this stage, testing with humans is essential, particularly with people who exhibit atypical browsing patterns or who use assistive technologies.
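To make the train-validate-predict shape concrete without pretending to be a real training pipeline, here is a deliberately tiny, frequency-based stand-in for the models described above. Everything in it (the feature strings, the dense/spacious labels, the accuracy helper) is a hypothetical example.

```typescript
// Not a production pipeline: a toy frequency model standing in for the
// supervised techniques described above.

type Sample = { features: string[]; label: "dense" | "spacious" };

type Model = Map<string, { dense: number; spacious: number }>;

// "Training": count how often each engineered feature co-occurs with a label.
function train(samples: Sample[]): Model {
  const model: Model = new Map();
  for (const { features, label } of samples) {
    for (const f of features) {
      const counts = model.get(f) ?? { dense: 0, spacious: 0 };
      counts[label] += 1;
      model.set(f, counts);
    }
  }
  return model;
}

// Predict by summing the evidence for each label across a user's features.
function predict(model: Model, features: string[]): "dense" | "spacious" {
  let dense = 0;
  let spacious = 0;
  for (const f of features) {
    const counts = model.get(f);
    if (counts) {
      dense += counts.dense;
      spacious += counts.spacious;
    }
  }
  return dense >= spacious ? "dense" : "spacious";
}

// Validation against held-out samples, as described in the step above.
function accuracy(model: Model, holdout: Sample[]): number {
  const correct = holdout.filter((s) => predict(model, s.features) === s.label).length;
  return holdout.length ? correct / holdout.length : 0;
}

const model = train([
  { features: ["fast-scroll", "keyboard-nav"], label: "dense" },
  { features: ["slow-scroll", "large-text"], label: "spacious" },
]);
console.log(predict(model, ["keyboard-nav"]));                                    // "dense"
console.log(accuracy(model, [{ features: ["large-text"], label: "spacious" }])); // 1
```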
Step 3. Dynamic UI generation With trained models, Gen UI systems can begin dynamically generating UI components and layouts based on the design library and user preferences. This process uses two ingredients, sketched in code after the points below:
Generative algorithms These algorithms use the outputs from the machine learning models to generate UI elements that are predicted to meet the user's needs and preferences. They create design variations by altering elements like layout, colour, and typography.
Rule-based systems To ensure usability and adhere to design standards, rule-based systems are used alongside generative algorithms. These rules define acceptable design parameters, ensuring that the generated UIs are not only personalised but also practical, accessible and in line with the brand and the design system.
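The sketch below shows the two ingredients working together in the simplest possible form: a stand-in "generative" step that enumerates layout variants around a predicted preference, and rule-based guardrails that filter out anything outside design-system bounds. The specific rules and thresholds are invented for illustration.

```typescript
// Illustrative sketch: generate candidate UI variants, then keep only
// those that pass rule-based design-system constraints.

type UiVariant = { fontSizePx: number; columns: number; theme: "light" | "dark" | "high-contrast" };

// A stand-in "generative" step: enumerate variants around the predicted preference.
function generateVariants(preferredFontSizePx: number): UiVariant[] {
  const variants: UiVariant[] = [];
  for (const delta of [-2, 0, 2]) {
    for (const columns of [1, 2, 3]) {
      variants.push({ fontSizePx: preferredFontSizePx + delta, columns, theme: "light" });
    }
  }
  return variants;
}

// Rule-based guardrails: keep typography and layout inside design-system bounds.
const rules: Array<(v: UiVariant) => boolean> = [
  (v) => v.fontSizePx >= 14 && v.fontSizePx <= 24, // readable type scale
  (v) => v.columns <= 2,                           // layout density cap
];

function applyRules(variants: UiVariant[]): UiVariant[] {
  return variants.filter((v) => rules.every((rule) => rule(v)));
}

console.log(applyRules(generateVariants(22)).length); // 6 of 9 variants survive the rules
```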
Step 4. Real-time adaptation As users interact with the generated UI, the system collects real-time feedback and continuously adjusts the interface over time; a minimal loop is sketched after the points below:
Feedback loops User interactions with the newly generated UI components are monitored to gather feedback.
Adaptive learning The system adapts and refines its algorithms based on this feedback, allowing for improvements to the UI.
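A minimal feedback loop might look like the sketch below: each interaction nudges a component's weight up or down, and only components above a threshold make it into the next generated interface. The learning rate, threshold and component ids are hypothetical.

```typescript
// Illustrative feedback loop: reinforce components users engage with,
// decay the ones they ignore, and re-select for the next render.

type Feedback = { componentId: string; engaged: boolean };

type Weights = Record<string, number>;

// Adaptive learning step: nudge each weight towards engaged (1) or ignored (0).
function updateWeights(weights: Weights, feedback: Feedback[], learningRate = 0.1): Weights {
  const next = { ...weights };
  for (const { componentId, engaged } of feedback) {
    const current = next[componentId] ?? 0.5;
    next[componentId] = current + learningRate * ((engaged ? 1 : 0) - current);
  }
  return next;
}

// Only components above a threshold make it into the next generated UI.
function selectComponents(weights: Weights, threshold = 0.5): string[] {
  return Object.entries(weights)
    .filter(([, w]) => w >= threshold)
    .map(([id]) => id);
}

let weights: Weights = { "quick-pay": 0.5, "promo-banner": 0.5 };
weights = updateWeights(weights, [
  { componentId: "quick-pay", engaged: true },
  { componentId: "promo-banner", engaged: false },
]);
console.log(selectComponents(weights)); // ["quick-pay"]
```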
Step 5. Deployment and integration Finally, generated interfaces have to be delivered into existing stacks; a minimal service sketch follows the points below:
APIs and microservices Gen UI components are typically deployed through APIs and microservices, supporting integration with existing software architectures, provided those architectures have been prepared to receive this input.
Cross-platform compatibility Ensuring that generative UI components are compatible across different platforms and devices, particularly those with assistive technology, is important. This requires using responsive design principles and automated testing on multiple device types.
Security and privacy considerations Given that Gen UI systems rely heavily on user data, implementing robust security measures to protect this data is crucial. This includes encryption, secure data storage and compliance with privacy regulations like GDPR, as well as clearly communicating how data is used to evolve experiences, and designing with trust in mind.
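Tying the deployment points together, here is a small Node/TypeScript sketch of a layout service: it exposes a single endpoint, keys responses off a pseudonymous identifier rather than raw behavioural data, and returns a platform-agnostic layout spec for any client to render. The endpoint, payload shape and stubbed model lookup are assumptions, not a real product API.

```typescript
// Deployment sketch only: serve a generated layout spec over a tiny API.
import { createServer } from "node:http";

type LayoutSpec = { theme: string; fontSizePx: number; order: string[] };

// In a real system this would call the model service; here it is stubbed.
function layoutFor(userId: string): LayoutSpec {
  const prefersContrast = userId.endsWith("7"); // placeholder for a model lookup
  return {
    theme: prefersContrast ? "high-contrast" : "light",
    fontSizePx: 16,
    order: ["search", "recent-activity", "help"],
  };
}

const server = createServer((req, res) => {
  // e.g. GET /layout?user=anon-1237 (pseudonymous id, no raw behavioural data)
  const url = new URL(req.url ?? "/", "http://localhost");
  if (url.pathname === "/layout") {
    const userId = url.searchParams.get("user") ?? "anonymous";
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(layoutFor(userId)));
    return;
  }
  res.writeHead(404).end();
});

server.listen(3000); // clients on any platform consume the same spec
```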
Gen UI: where to now?
It isn't here just yet. But there's no denying the power of this shift: the realisation of personalisation in its truest sense, not dominated by marketing but aligned to user interaction at a minute level.
With all that possibility in mind, there's a lot to get right, from design systems to architecture across content, design, technology and data, to support the kind of continuous learning that will feed the right interface, at the right time, to a person based on their interactions. Questions around direct user control, as opposed to inference based on interaction, also remain important, as does how brand and the power of creativity will be built into generative user interfaces.