Lately, I've been uneasy about where our profession is heading. With AI taking over more of our main tasks, I find myself wondering what sets us apart. In this essay, I want to share these concerns and ask if our field is really fading away or just changing.
What I discovered wasn't the end of our work, but a need for real change. I hope you'll see that our value comes from leading ethical design as AI grows, not from holding onto old ways. We need to move from User Experience to Human Experience Design, helping technology better connect with people.
Let's explore how we can make our work meaningful again and ensure technology really helps people.
Introduction: The Reckoning
Our field faces a pivotal crisis. Some claim it's finished, while others view recent shifts as a natural market correction. Many attribute this to AI, arguing it reduces the need for human designers. Underneath all these opinions lies a central debate: what is truly valuable in UX as technology transforms our profession?
I've been struggling with these questions too. In this essay, I want to sort through the changes in technology, the crowded job market, and a culture that values newness over experience. This reflection made me ask: Did our focus on user-centric design bring us to an unexpected turning point? By perfecting our skills, we created systems—especially AI—that now threaten our own field, turning human experience into data and creative decisions into algorithms.
When I say "UX is Dead," I am not writing an obituary for human-centred design. I am sounding the alarm for its current, commoditised incarnation. We are witnessing a metamorphosis. The "caterpillar" of traditional UX—the practice narrowly defined by crafting efficient interfaces, A/B testing for clicks, and removing friction from predetermined business goals—is being consumed by the very technologies it helped create.
What is dying is not our value, but a constrained definition of our practice. What is being born is the "butterfly": the evolution of UX into a broader, more critical discipline focused on Human Experience. This is not a rejection of our past but an elevation of our core principles—empathy, advocacy, and systems thinking—to meet the ethical challenges of the AI era. It demands we shift our primary question from "How can we make this easier to use?" to "How can we make this worthy of a human's trust, time, and data?"
This is more than just another market shift; it's a fundamental transformation. To move forward, we need to rethink our purpose, shifting from asking how we do things to asking why we do them. With this in mind, I've organised my thoughts about what comes next. Let's examine the forces shaping our current predicament.
1. Diagnosis: The Perfect Storm Converging on UX
"You either die a hero, or you live long enough to see yourself become the villain."
We're not facing just one problem; we're dealing with three at the same time. Technology, the job market, and workplace culture are all changing fast, creating a perfect storm. It's like what Clayton Christensen described in The Innovator's Dilemma, where new tools and business models disrupt old ways of working (Christensen, 1997).
1.1. The Technological Force: AI and the Automation of Empathy
AI now does more than just simple tasks; it handles analysis that used to need experts. This creates a cycle: as AI takes over entry-level jobs, fewer people get the experience they need. In our rush to use new technology, we often overlook the importance of real judgment, empathy, ethics, and human connection.
The tools we use are making us question our purpose. If we focus only on what we can measure—like clicks, engagement, and conversions—we risk forgetting what really matters: meaning. Losing this sense of purpose has real effects. Economists like Daron Acemoglu warn that when AI takes over the creative and thinking parts of our jobs, it can make careers less meaningful and less stable, leading to job loss and uncertainty (Acemoglu & Restrepo, 2019).
1.2. The Human Force: Market Saturation and the Hollowing Out of Experience
AI's impact gets worse as more people enter the field while demand drops. The pandemic temporarily boosted demand, hiding the fact that AI was replacing entry-level jobs.
We saw a similar trend in UX over the past decade. Good pay and popular industries drew people from many backgrounds—writers, researchers, psychologists, artists—many without formal design training. Bootcamps and online certificates made it easier to join the field, but also reduced the value of deep expertise, favouring speed over depth. At first, this diversity was a real strength, bringing new ideas and improving design thinking.
But this growth also had downsides. As more people joined the field, the value of skills and credentials dropped—a classic case of "credential inflation" (Collins, 1979). The pandemic and low interest rates led to a shaky tech hiring boom, as The Economist pointed out; easy money caused unsustainable growth. Now, as the economy shifts, the market is shrinking and demand is falling, made worse by the fast spread of AI.
This has created a tough situation: there's more competition for jobs while AI takes over many roles. Entry-level jobs that once helped people gain experience are disappearing. Both new and experienced designers now face "digital ageism"—the belief that older professionals aren't as skilled or creative (Loos & Romano, 2021).
1.3. The Cultural Force: Ageism and the Devaluation of Judgment
We're partly responsible for these problems. By always chasing what's new, we encouraged AI adoption and made experience seem less important in a fast-changing world. We built a system that values what's easy to measure and scale, instead of the careful judgment that comes from real experience.
We see this problem in real situations. When an experienced designer warns about dark patterns, they're often ignored if a product manager points to an A/B test showing a quick boost in conversions. Researchers who want to dig deeper are told to justify their work instead of relying on simple analytics. We end up focusing on what we can measure instead of asking why it matters.
Ageism makes experienced professionals seem outdated or unable to keep up. This ignores their value as guides through tough problems. Strangely, we often promote them to leadership roles but keep them away from hands-on work, where their judgment is most needed. By leaving experienced professionals to only high-level strategy, we end up proving our own doubts right. If they're only setting the vision and not involved in daily product work, their skills become disconnected from the real situations where ethical choices and user empathy matter most. As Diginomica points out, ageism in tech is a significant problem that makes it harder to retain talent and slows innovation, underscoring the need for a more inclusive approach (Bennett, 2021).
Believing that only young people can innovate overlooks the valuable knowledge that comes with experience. In the end, this limits both creativity and critical thinking.
Conclusion: The Culmination of the Storm
The real problem is that three forces are converging: AI is eroding the value of expertise, market saturation is devaluing unique skills, and a culture of ageism is dismissing hard-won experience. To treat this as a simple market correction is to miss the critical flaw: we are sacrificing meaningful, sustainable UX at the altar of efficiency and hype. A true correction wouldn't devalue experienced judgment or swap ethical thinking for the pursuit of metrics. It wouldn't treat deep experience as a relic.
What's happening now is more than just a normal disruption. We're not just becoming more efficient; by following these trends, UX could become unnecessary and lose its power to question how and why we build technology. If we helped create these problems, we can also help fix them. It's up to us to lead by example, redefine what matters, and make sure our future is as humane as we want it to be.
2. Root Cause Analysis: The Flaw in Our Own Foundation
"We have met the enemy and he is us."
These problems didn't just appear out of nowhere. Looking back, I see that we all played a part in creating them. We built a field that focused on what we could measure, and in the process, we pushed aside the intuition, empathy, and human wisdom that made our work meaningful. Now we have to ask ourselves: did we earn our place by truly helping users, or by turning them into data points? In chasing metrics, did we lose the humanity we wanted to protect?
We had good intentions, but we built a weakness into our own field—a weakness that shows up whenever we expect numbers alone to explain complex human needs.
2.1. The Quantitative Bias: Mistaking Behaviour for Need
It began subtly. We started to value numbers and data more than real understanding. We became caught up in big data—A/B tests, analytics, and engagement numbers—while ignoring the deep insights from interviews and field research.
We became champions of "validation," but narrowed its definition to only what could be quantified and A/B tested in the short term. We began validating by data rather than by the user. Or worse, we reduced their complex humanity to a mere number—a percentage, a completion rate—mistaking the metric for the meaning behind it.
The philosopher Martin Heidegger would have recognised this shift instantly. His concept of "Enframing" (Gestell) describes how technology reduces the world to a resource to be optimised (Heidegger, 1954). We weaponised it, reframing users as datasets to suit our hunger for scalable certainty. We stopped seeing people as subjects with stories and began treating them as a resource for extraction—a practice critics call "dataism," where data speaks for itself and the human context is abandoned (van Dijck, 2014).
Ironically, this focus led to designs that a human designer might see as 'bad' but that still got lots of clicks and conversions. That's because, unlike people who use ethics and empathy, AI just follows the numbers it's given and optimises for them without hesitation.
We didn't just start using a new tool; we changed our whole approach. Philosopher Albert Borgmann said we moved from working with 'things' to using 'devices' (Borgmann, 1984). A 'thing' like a musical instrument takes skill and gives deep satisfaction. A 'device' like a music app is easy to use but doesn't require real involvement. In trying to make everything efficient, UX has mostly built 'devices' that reduce real engagement. We chose convenience over depth, trading meaningful understanding for easy data and losing the expertise that once made us valuable.
2.2. The Paradox of Success: How Our Victory Led to Our Vulnerability
Here's the bitter paradox: we gave up depth for popularity. Our push for user-centric design was so convincing that companies focused only on what they could measure. I'm not sure we even noticed when influence began to replace integrity, when our principles quietly faded in the name of practicality. By showing our value so well, we gave the systems we built the tools to make us less needed. This "dataism" leads to what Erika Hall describes as knowing "exactly how many people clicked a button but having no idea why"—abandoning the 'why' for the 'what' (Hall, 2013). This quantitative bias perfectly primed the industry for AI's arrival, as AI excels at analysing these very same behavioural datasets, further entrenching the fallacy that all that matters is what is measurable.
If these methods are so flawed, why are the products using them so successful? The answer is that we've been measuring success the wrong way. We conflate engagement with value and revenue with righteousness. A product can be highly "successful" by narrow metrics—addictive, viral, profitable—while being ethically barren or socially corrosive. We optimised for financial and engagement metrics that benefit the business in the short term, while de-prioritising human metrics like well-being, trust, and long-term satisfaction—the very values we once swore to protect. We won the battle for attention, but we are losing the war for human dignity and trust.
2.3. The Philosophical Precedent: A History of Optimising Humanity Out
Focusing only on measurement isn't new; it's a digital version of an old problem. Long before algorithms, Mary Shelley's Frankenstein exposed the archetype we're now living—the creator horrified by his creation (Shelley, 1818). Her warning was about hubris: the danger of pursuing progress without interrogating the human cost. We knew this story, yet we planted the same seed, repeating Shelley's tragedy in spreadsheets and sprint reviews.
Philosophers Theodor Adorno and Max Horkheimer warned that systems of control and conformity can make people stop thinking critically (Adorno & Horkheimer, 1947).
This shift is best understood through Marshall McLuhan's foundational idea: "the medium is the message" (McLuhan, 1964). The nature of a medium itself shapes society and human behaviour more than any specific content it delivers.
Now, AI has become the ultimate embodiment of this principle: it is not just a tool but a medium that actively reframes how we see the world. It prioritises scale, speed, and optimisation—the very values philosopher Byung-Chul Han identifies as the engine of the 'smooth' society (Han, 2015).
In this AI-shaped world, we see McLuhan's theory realised: a medium that rewards frictionless interaction and quantifiable outcomes, fundamentally reshaping behaviour away from depth, meaning, and critical thought. Han calls this the tyranny of the 'smooth'—the removal of all friction, and with it, the space for authenticity, empathy, and real human connection (Han, 2015).
2.4. The Forgotten Foundation
We did not arrive at this crisis by accident. The quantitative bias we now endure is a historical aberration, a wrong turn taken in a rush for influence and scale. To understand the path forward, we must remember the qualitative foundation upon which our field was built.
User experience did not emerge from a vacuum of data sheets and analytics dashboards. It was born from the rich, humanistic traditions of human-computer interaction (HCI) and participatory design. Its pioneers were cognitive psychologists and ethnographers, not growth hackers.
Don Norman, who popularised the term "user experience," framed it in terms of cognitive science—how humans perceive, learn, and emotionally engage with the world (Norman, 1988). His work was about affordances and signifiers—deeply qualitative concepts about how design communicates meaning and suggests action.
Concurrently, the Scandinavian school of participatory design, led by figures like Pelle Ehn, insisted that users must be active co-designers in the process, not mere subjects to be observed (Ehn, 1988). This was a fundamentally qualitative, democratic ethos that valued the wisdom of lived experience over the cold analysis of behaviour.
Our foundational methodologies were manuals on qualitative understanding. Contextual Design, pioneered by Beyer and Holtzblatt, was built on the principle of "going to the gemba"—the place where work actually happens—to observe and interview users in their natural habitat, uncovering their unarticulated needs and cultural dynamics (Beyer & Holtzblatt, 1998). The goal was to build a shared understanding, a narrative, not to mine data points.
Somewhere in our quest for a seat at the table, we made a Faustian bargain. We traded this deep, empathic understanding for the currency of scalable data. We abandoned the rich, complex narrative of the human experience for the thin, comforting certainty of the dashboard. In championing what we could measure, we forgot how to measure what really counts.
This is not just a professional misstep; it is a philosophical betrayal of our roots. The quantitative bias is a system doing exactly what we built it to do. The way out is not to invent a new future from scratch, but to return to our forgotten foundation and build anew from there.
Conclusion: The Architecture of Our Own Obsolescence
We did not set out to undermine our profession, but neither were we bystanders. How many times did we celebrate 'earning a seat at the table' without asking what we'd smuggled under it—compromises disguised as collaboration, metrics masquerading as insight? Every time we let a KPI dashboard stand in for a user's story, we lay another brick in the architecture of our obsolescence.
This isn't just a normal market change. It's a deeper problem about our values. Psychologist Abraham Maslow warned, "if the only tool you have is a hammer, you treat everything as a nail" (Maslow, 1966). For us, if the only thing we value is measurable growth, we start to see every human experience as just another data point to optimise. We became mechanics of interaction when we were meant to be architects of experience.
AI was never the real problem; it is the ultimate expression of the system we built. It is a mirror reflecting our own quantitative bias back at us with perfect, unblinking efficiency. The system is doing exactly what we designed it to do.
Now we face a choice: continue to be the mechanics of this system, or remember our foundation and become its architects once more. The path forward is not to destroy the tools, but to redesign the blueprint. We must ask if we're willing to change not just our tools, but what we think success means—to build systems that measure trust as carefully as clicks, and dignity as closely as conversions. If we don't, we'll keep getting people's attention but lose what really matters. The next chapter is not about survival; it is about redemption.
3. The Pivot: From User Experience to Human Experience Design (HXD)
"We cannot solve our problems with the same thinking we used when we created them."
With these challenges, trying to be as efficient as AI isn't the answer. The only way forward is to rethink our purpose. We need to move from simply improving interfaces to protecting human dignity.
This is a shift from focusing only on skill to focusing on conscience. We need to change our main question from "How can we make this easier to use?" to "How can we make this truly improve people's lives?" That means putting human well-being first, not just engagement, and designing for independence instead of dependence.
This is what Human Experience Design is all about—a term inspired by Frank Chimero's idea to design for "Humans, not Users," but now with even more meaning: making human dignity, independence, and well-being our top priorities (Chimero, 2013). This new focus builds on our skills; we're not just designing for screens anymore, but for how people interact with their lives.
3.1. The New Responsibilities: From Designer to Architect
This change in purpose means we also need to change how we see ourselves and how we work. We're not just designers focused on details anymore; we need to become architects who think about the bigger picture. Our main skills as designers—like empathy, prototyping, and testing—are still important. But as architects, our job is bigger: we help set the foundation for how products are built. We take responsibility for human dignity, making sure we address issues that used to be seen as someone else's job.
Our new responsibilities include:
- Orchestrating Transparency: It's not enough for an AI to be transparent; we must define what needs to be explained, to whom, and why it matters in a human context. We translate technical explainability (XAI) into genuine user understanding and trust, moving toward what anthropologist Kate Crawford calls "actionable accountability" (Crawford, 2021).
- Championing Data Minimisation: We must establish the ethical principle that data should not be collected in the first place unless absolutely necessary. We set the boundary within which AI operates, championing the principle of "privacy by design" as a foundational ethical requirement, first articulated by Ann Cavoukian (Cavoukian, 2009).
- Directing the Fight Against Bias: AI can identify statistical biases in datasets, but it cannot define fairness. It is our role to ask, "Fairness for which group? Under what conditions?" and make the nuanced value judgments that no algorithm can. This requires moving beyond technical "debiasing" to embrace a practice of algorithmic auditing that centres equity and justice, as pioneered by researchers like Inioluwa Deborah Raji (Raji et al., 2020).
- Leading Sustainable and Inclusive Design: AI can optimise a server farm for efficiency, but it cannot decide that user well-being is a higher priority than endless growth. We must set the north star—sustainability, inclusivity, well-being—ensuring technology supports human dignity and autonomy as paramount values, a core tenet of human-centred design reaffirmed by Don Norman (Norman, 2013).
This aligns with the principles of Value-Sensitive Design (VSD), which insists that ethical considerations must be woven into the fabric of the design process from the very beginning (Friedman & Hendry, 2019).
3.2. A Practitioner's Reflection: Ideas for the Transition
With tight deadlines and busy schedules, big ideas can feel out of reach. But even small steps toward ethical design reveal the bigger problems built into our systems.
It's encouraging to see I'm not alone in this. Erika Hall, in Just Enough Research, argues that design must move beyond a narrow focus on metrics and toward deeper human understanding (Hall, 2013). And Cennydd Bowles, in Future Ethics, insists that ethical design isn't a constraint but a new frontier for innovation, a way to build trust and lasting value (Bowles, 2018).
A few notions for our practice—and the tensions they reveal:
- Pre-Mortem for Trust: In a kickoff, dedicate 30 minutes to: "If this product eroded user trust a year from now, what systemic choice would have caused it?" Using a tool like the Ethics Kit's Tarot Cards of Tech can surface risks early, but it often clashes with cultures that reward speed over scrutiny.
- Mandatory Harm Audit: Add a single field to design briefs: "What systemic harm could this design perpetuate?" This modest step risks tokenisation unless leadership treats ethics as a KPI, not a checkbox. Projects like the Dark Patterns Tip Line show the power of naming manipulative design, but prevention requires structural change.
- The "Consent & Clarity" Sprint: Before development, run a dedicated sprint focused solely on the user's understanding of the product. Prototype and test not just the UI, but the privacy policy summaries, data usage explanations, and permission dialogues with the same rigour applied to core features. This shifts transparency from a legal afterthought to a core user-experience challenge.
- Bias Discovery Workshop: Facilitate a one-hour workshop with a diverse group (engineers, PMs, marketers) using a simple framework: 1. Who are we potentially excluding? 2. What assumptions are we making about our users? 3. How could this functionality be misused? This leverages collective intuition to identify blind spots before a single line of code is written.
- Ethical Prototyping: We prototype for usability; we should also prototype consent flows and privacy settings. We should intentionally design and test "friction for integrity"—deliberate pauses that protect user autonomy. This challenges the dogma that "smooth" equals "better" and demands a radical rethinking of success metrics.
- A/B Tests for Integrity: Redefine A/B tests to start with well-being metrics like "sense of control" or "perceived fairness"—not as secondary to conversions, but as the precondition for their legitimacy. This aligns with a broader shift toward holistic metrics but highlights the tension between business goals and human outcomes.
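To make the last notion concrete, here is a minimal sketch of what an integrity-gated A/B readout could look like. Everything in it is assumed for illustration: the 1–5 "sense of control" survey, the variant numbers, and the "integrity_verdict" threshold are hypothetical rather than a prescribed method. The point is simply that the well-being metric acts as a precondition, not a tie-breaker.

```python
import math
from dataclasses import dataclass


@dataclass
class VariantStats:
    users: int
    conversions: int
    control_scores: list[float]  # hypothetical post-task "sense of control" survey, 1-5


def conversion_rate(v: VariantStats) -> float:
    return v.conversions / v.users


def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)


def two_proportion_z(a: VariantStats, b: VariantStats) -> float:
    """Approximate z-score for the difference in conversion rates."""
    p_pool = (a.conversions + b.conversions) / (a.users + b.users)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / a.users + 1 / b.users))
    return (conversion_rate(b) - conversion_rate(a)) / se


def integrity_verdict(control: VariantStats, treatment: VariantStats,
                      max_wellbeing_drop: float = 0.1) -> str:
    """Treat the well-being metric as a precondition, not a tie-breaker:
    the treatment only 'wins' if conversion improves AND the reported
    sense of control does not drop by more than the agreed threshold."""
    wellbeing_delta = mean(treatment.control_scores) - mean(control.control_scores)
    if wellbeing_delta < -max_wellbeing_drop:
        return "reject: the conversion gain costs users their sense of control"
    if two_proportion_z(control, treatment) > 1.96:  # roughly 95% confidence
        return "ship: converts better without degrading sense of control"
    return "inconclusive: keep testing"


# Hypothetical numbers, for illustration only.
control = VariantStats(users=4000, conversions=380, control_scores=[4.1, 3.9, 4.2, 4.0])
treatment = VariantStats(users=4000, conversions=450, control_scores=[3.2, 3.4, 3.1, 3.3])
print(integrity_verdict(control, treatment))
```

In this invented example the treatment converts better yet is rejected, because the gain comes at the cost of perceived control; that trade-off is exactly what the metric exists to expose.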
On winning hearts and minds:
- Reframe the Value: To navigate short-termism, show that ethical rigour drives sustainable engagement. When we advocate for "brand integrity" or "risk mitigation," we're translating ethics into the language of business. Trust isn't a line item; it's the ledger itself.
- Pilot Partnerships: Find a product manager or engineer who also worries about long-term brand health. Propose a small experiment: test a dark pattern against a transparent alternative. Measure not just clicks, but trust. Apple's privacy stance proves that ethics can be a unique selling proposition, exposing the false choice between ethics and profit.
In the end, it's about changing how we think. We need to care not just about the user at the screen, but also about their future, privacy, and well-being throughout the whole system.
Conclusion: The Ally in the Machine
I believe this new role, as an architect of human dignity rather than a designer of interfaces, is not a retreat from technology but the only way to truly guide it. Writing this essay proved that to me. To cover such a big topic, I used AI not as a replacement but as a kind of digital ghostwriter—a fast, helpful partner that helped me organise my drafts, my reading, and my scattered thoughts into a clear whole.
But the narrative arc, the core argument, and the ethical framing? That is and must remain the author's work. This process changed my perspective. It confirmed my belief that AI, our most disruptive force, doesn't have to be a threat. Instead, it can be our strongest ally when we understand its true role: to carry out the vision we set. The most important question now isn't if we'll use it, but how we can use it well.
4. The New Symbiosis: From Quantitative Tools to Qualitative Partnerships
"It is only with the heart that one can see rightly; what is essential is invisible to the eye."
I've been thinking about that question a lot. My own experiment in writing, using AI as a kind of digital ghostwriter, wasn't about using its power for more speed or scale. It was an attempt to reclaim space for deeper thinking. I found that by letting AI handle the heavy lifting—like sorting through lots of text and managing information—it actually freed me to do the slower, more human work: listening for meaning, shaping the story, and considering the ethical weight of each word.
This chapter explores that possibility for our whole field. This partnership is a practical way out of the quantitative trap we created. It's how we put the shift from UX to Human Experience into practice.
The framework here is based on a simple idea: we should use our quantitative tools to support a deeper, qualitative understanding. AI can handle scale and data, but our unique role is to seek depth and meaning.
This isn't a new set of tasks. It's a return to our core principles: empathy, context, and story, now with tools that can handle today's complexity. The models that follow aren't just about using AI—they're about reclaiming our humanity as designers.
4.1. The Great Inversion: Tools for Understanding, Not Optimisation
For a long time, our relationship with technology has been shaped by one question: How can this make us faster? We adopted every new tool to optimise, becoming masters of efficiency. In doing so, we hollowed out our own work, reducing the rich, qualitative human experience to a set of trackable, quantifiable metrics.
My own experimentation led me to a different, much quieter question: What if this tool could make us understand more deeply?
This is the big shift at the heart of this new partnership. It's a conscious choice to use AI not just for speed and scale, but for what we really need: to reclaim our time and ability to go deeper.
This is where our unique expertise matters most. Anyone can ask an AI for an answer. But it takes a designer's mindset—our training in empathy, systems thinking, and advocacy—to use that power for real human understanding. Our role has never been just about doing the work, but about guiding the purpose behind it.
Imagine a different workflow: Instead of using AI to generate 100 versions of a landing page to see which one converts best, we use it to analyse 10,000 support tickets to find the main source of user frustration that no one has had time to fix. The tool is the same. The output is data. But the intent is different. We're no longer optimising for extraction; we're building for understanding. This shift in intent—from quantitative optimisation to qualitative insight—is our unique value. That's why this is our work to lead.
This is the new way of working together. It's not about humans versus machines. It's about using the power of data to support deeper, more meaningful insights.
- Let AI handle the what and the how much: Let it find patterns in vast datasets, audit for statistical bias, and simulate the potential consequences of a design choice at scale.
- We must fiercely claim the why and the what for: Our role is to interpret those patterns, to investigate the human story behind the bias, and to make the ethical judgment call on which consequences are acceptable. This is our domain. This is why we are the right ones for the job.
This shift is our way out of the quantitative trap. It's how we stop being mechanics of the interface and become architects of the human experience. We're not giving up our role; we're finally returning to it.
4.2. Five Models for Qualitative Depth: The New Workflow
Shifting from quantitative optimisation to qualitative understanding is a mindset. But how does it actually change our daily work? In my own practice, and in watching others, I've started to see new patterns of collaboration emerge. These aren't strict rules, but patterns of practice: starting points for new conversations with our tools and within our teams.
Here are five models that illustrate this shift, turning the abstract principle into tangible action.
4.2.1. The Empathy Amplifier
I've often felt trapped by the limitations of data—knowing that users were frustrated, but struggling to understand the depth of why. This model uses AI's scale to finally give us the qualitative clarity we've been missing. It's about turning big data into deep understanding.
- The Old Way (Quantitative Optimisation): A/B testing button colours to incrementally improve a conversion rate.
- The New Symbiosis (Qualitative Understanding): AI analyses thousands of support tickets and forum posts to cluster themes of deep-seated frustration.
- The Human Shift: We move from optimising a metric to investigating the root cause of human emotion. We design solutions that address pain, not just symptoms.
Our Human Work: The AI identifies the scale of the pain, a modern extension of the foundational principles of contextual inquiry (Beyer & Holtzblatt, 1998). Our irreplaceable role is to then do a deep, qualitative dive into the most critical cluster. We call users, sit with the nuance of their frustration, and understand the story behind the data point, much like the empathetic approaches described by Steve Portigal (Portigal, 2013). We then design from that empathetic understanding. Tools like Canny.io are beginning to operationalise this symbiosis, using AI to cluster feedback, but they ultimately serve to direct our human attention to what matters most.
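As a sketch of the machine half of this model: the snippet below clusters raw ticket text so a researcher can choose where to dig deeper. It assumes the tickets are already exported as plain strings (the "load_tickets" helper in the comment is invented), and the TF-IDF-plus-k-means pipeline simply stands in for whatever clustering a production tool would use. The qualitative follow-up it points to remains human work.

```python
from collections import Counter

from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer


def cluster_frustration_themes(tickets: list[str], n_themes: int = 8) -> list[int]:
    """Group raw support tickets into rough themes so a researcher can decide
    which cluster deserves a deep, qualitative follow-up."""
    vectoriser = TfidfVectorizer(stop_words="english", max_features=5000)
    X = vectoriser.fit_transform(tickets)
    model = KMeans(n_clusters=n_themes, n_init=10, random_state=42)
    labels = model.fit_predict(X)

    # Rank clusters by volume: scale shows where the pain is concentrated;
    # it says nothing about why people are in pain.
    terms = vectoriser.get_feature_names_out()
    for cluster_id, size in Counter(labels).most_common():
        centre = model.cluster_centers_[cluster_id]
        top_terms = [terms[i] for i in centre.argsort()[-5:][::-1]]
        print(f"theme {cluster_id}: {size} tickets, keywords: {', '.join(top_terms)}")
    return list(labels)


# Hypothetical usage: 'load_tickets' is an invented helper standing in for
# however your team exports support tickets as plain text.
# themes = cluster_frustration_themes(load_tickets("tickets_export.csv"))
```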
4.2.2. The Bias Hunter
It's tempting to see algorithms as neutral, but they often reinforce our blind spots. This partnership is about moving from assuming objectivity to actively seeking fairness, a practice known as algorithmic auditing (Raji et al., 2020).
- The Old Way (Quantitative Optimisation): Trusting that a data-driven algorithm is inherently objective.
- The New Symbiosis (Qualitative Understanding): AI runs fairness audits to surface statistical disparities across demographics.
- The Human Shift: We move from accepting output to investigating the root cause of bias and leading the redesign for equity.
Our Human Work: The AI, using toolkits like IBM's AI Fairness 360, finds the statistical disparity. We then lead the cross-functional effort to diagnose the why—was it flawed data, a biased feature?—as uncovered in studies like "Gender Shades" (Buolamwini & Gebru, 2018). We champion a more equitable design, taking full ethical responsibility for the system's outcomes.
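The machine half of the Bias Hunter can start as simply as a selection-rate comparison. The sketch below computes a disparate-impact ratio over an invented decision log; a real audit would use a toolkit like AI Fairness 360 and far richer data, but the division of labour is the same: the ratio locates the disparity, and diagnosing why it exists remains our work.

```python
import pandas as pd


def selection_rates(decisions: pd.DataFrame, group_col: str, outcome_col: str) -> pd.Series:
    """Share of positive outcomes (approved, recommended, shown) per group."""
    return decisions.groupby(group_col)[outcome_col].mean()


def disparate_impact(decisions: pd.DataFrame, group_col: str, outcome_col: str,
                     privileged: str) -> pd.Series:
    """Ratio of each group's selection rate to the privileged group's rate.
    The '80% rule' of thumb flags ratios below 0.8 for investigation; the
    number locates a disparity but cannot explain it."""
    rates = selection_rates(decisions, group_col, outcome_col)
    return rates / rates[privileged]


# Invented decision log, one row per automated decision (illustration only).
log = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   1,   0],
})
print(disparate_impact(log, "group", "approved", privileged="A"))
# A flagged ratio is where the human work starts: was the training data skewed,
# is a proxy feature encoding group membership, or is the "ground truth" itself unjust?
```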
4.2.3. The Consent Prototyper
Privacy is often treated as a legal afterthought. This model is about designing for trust and clarity from the start, making "Privacy by Design" (Cavoukian, 2009) a core part of the user experience.
- The Old Way (Quantitative Optimisation): Treating privacy settings as a compliance checkbox, often buried in legalese.
- The New Symbiosis (Qualitative Understanding): AI generates plain-language explanations for data use and simulates different consent flow variations.
- The Human Shift: We move from legal compliance to actively building trust as a measurable experience.
Our Human Work: We take the AI's simulations and prototype consent flows with the same rigour we apply to any core feature. We user-test for clarity and understanding, ensuring the experience of consent feels respectful and empowering, navigating the complex challenges of big data consent (Strandburg, 2014).
4.2.4. The Friction Curator
The push for a "frictionless" experience has often come at a hidden cost to user well-being. This is about making intentional, ethical choices, as Cennydd Bowles suggests (Bowles, 2018)—sometimes adding friction to protect people.
- The Old Way (Quantitative Optimisation): Relentlessly removing all friction to maximise short-term engagement.
- The New Symbiosis (Qualitative Understanding): AI models the potential negative consequences of a frictionless design, like addiction or misuse.
- The Human Shift: We move from valuing smoothness to valuing well-being, making ethical choices about the pace of interaction.
Our Human Work: The AI shows us the potential risks, echoing critiques of how technology hijacks our minds (Harris, 2016). We make the final call, often designing deliberate pauses, confirmations, or breaks to protect users from their own impulses or from malicious patterns. We become curators of a healthy rhythm.
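To show what a deliberate pause can mean in practice, here is a minimal, intentionally low-tech sketch of "friction for integrity" before a high-stakes action. The flow, wording, and five-second delay are assumptions for illustration, not a recommended pattern; in a real product this would be a designed, user-tested moment rather than a console prompt.

```python
import time


def confirm_with_pause(action_description: str, pause_seconds: int = 5) -> bool:
    """'Friction for integrity': a deliberate pause that protects user autonomy
    by asking, waiting, and asking once more before an irreversible action."""
    first = input(f"{action_description} - continue? (yes/no): ").strip().lower()
    if first != "yes":
        return False
    print(f"No rush. Waiting {pause_seconds} seconds before asking again...")
    time.sleep(pause_seconds)
    second = input("Still sure? (yes/no): ").strip().lower()
    return second == "yes"


# Hypothetical high-stakes action, for illustration only.
if confirm_with_pause("Permanently delete your account and all journals"):
    print("Deletion scheduled, with a window to change your mind.")
else:
    print("Nothing was deleted.")
```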
4.2.5. The Context Synthesiser
We often design for a perfect, linear journey. But real life is messy. This model helps us accept that complexity, thinking in terms of systems (Meadows, 2008) and living with complexity (Norman, 2011).
- The Old Way (Quantitative Optimisation): Designing for an idealised, "average" user journey.
- The New Symbiosis (Qualitative Understanding): AI analyses a multitude of real, anonymised user journeys to identify edge cases and unexpected pathways.
- The Human Shift: We move from designing for a persona to designing for the full, complex spectrum of human behaviour.
Our Human Work: The AI reveals the chaos of real use. We synthesise these patterns into a coherent, compassionate system that is resilient and adaptable. We ensure the design works not just in theory, but for everyone, in the messy context of their lives.
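As a sketch of the Context Synthesiser's first pass, the snippet below tallies anonymised event sequences so that rare, messy journeys become visible next to the designed happy path. The event names and journeys are invented for illustration; a real pipeline would read logged sessions and honour the consent boundaries argued for above.

```python
from collections import Counter


def journey_patterns(journeys: list[list[str]]) -> tuple[Counter, Counter]:
    """Tally full paths and single transitions across anonymised journeys so the
    long tail of 'unexpected' routes is visible next to the idealised happy path."""
    paths = Counter(" > ".join(j) for j in journeys)
    transitions = Counter((a, b) for j in journeys for a, b in zip(j, j[1:]))
    return paths, transitions


# Invented, anonymised event sequences (illustration only).
journeys = [
    ["home", "search", "product", "checkout"],
    ["home", "search", "product", "checkout"],
    ["home", "help", "search", "product", "help", "cancel"],
    ["push_notification", "product", "home", "exit"],
]
paths, transitions = journey_patterns(journeys)
for path, count in paths.most_common():
    print(count, path)
# The rarest transitions are often where real life diverges from the persona.
print("rarest transitions:", transitions.most_common()[:-4:-1])
```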
These models are just a starting point. They're less about the AI itself and more about us—our intentions, our questions, and our willingness to bring back the depth that makes design meaningful.
4.3. The Unautomatable: Guarding the Space for Human Judgment
As we use these models, one important question remains: what must we, as humans, never give away to machines? This isn't about technical skill, but about awareness and real human experience.
Philosopher Hubert Dreyfus argued that human expertise transcends rule-based logic, relying on an intuitive, embodied understanding of context that machines cannot replicate (Dreyfus, 1972). Similarly, Michael Polanyi's concept of "tacit knowledge"—the idea that "we can know more than we can tell"—perfectly captures the ineffable nature of human judgment and skill (Polanyi, 1966).
AI can simulate empathy by recognising patterns, but it cannot feel the moral weight of a user's distress. It can optimise for a fairness metric, but it cannot debate the philosophical nuances of what "fairness" truly means in a specific human context—a capacity that moral philosopher Martha Nussbaum ties to narrative imagination (Nussbaum, 1990).
Our most critical role in this symbiosis is to be the stewards of this unautomatable domain. This is our sacred charge:
- The "Why": AI excels at the how and the what. Our unique capacity is to interrogate the why. Why are we building this? For whom? And at what potential cost? These are questions of purpose and ethics, not efficiency.
- Moral Imagination: We must envision the potential unintended consequences of a system and make value judgments that no cost-benefit analysis can resolve.
- Vulnerability: True empathy requires vulnerability—the willingness to be wrong, to sit with ambiguity, and to be genuinely changed by another person's experience. This is a profoundly human act.
- The Right to Silence: We hold the right to say "no." To decide that something, even if it is feasible and profitable, simply should not be built. This ultimate ethical veto is the purest expression of our responsibility.
The goal of this partnership isn't to make ourselves unnecessary. It's to raise our work so we can focus on the tasks only humans can do. By letting AI handle the numbers, we're freed up to do the deeper work of making sure technology serves people.
This is our greatest responsibility: to protect the space for human judgment. We must make sure that, even as machines work efficiently, the quiet voice of wisdom is not just kept alive, but made stronger.
Conclusion: The Infrastructure of Care
This new symbiosis is not a surrender of our craft to the machine. It is its redemption.
We began this chapter by seeking an answer to how we can possibly use AI well. The answer, it turns out, is not found in a new tool, but in a renewed purpose. The models we've explored are more than workflows; they are the beginnings of a new infrastructure of care—a system designed to channel the power of quantification back into the service of qualitative depth and human dignity.
This isn't about us versus AI. It's about us using AI for our own benefit.
We are building a practice where the machine handles the temporary things—the trends, the data points, the endless variations—so we can focus on what matters: meaning, context, and the moral consequences of what we create. We're not just designing better products; we're designing a better way to be designers.
The work ahead is to codify this infrastructure of care, to weave it into the fabric of our teams and our culture. It is the most important system we will ever design. The next chapter explores how we make this infrastructure not just an ethical choice, but a strategic one—proving that care is not a cost, but the most valuable asset we have.
5. The Pragmatic Defence: Ethical Design as a Strategic Imperative
"If you think compliance is expensive, try non-compliance."
When I started out, "user research" was widely treated as a nice-to-have, not a must. That attitude pushed me to find ways to build user understanding into design on a minimal budget, and I eventually completed a Master's degree focused on low-cost usability. Later, I saw the same thing happen with "ethics." Teams would say, "We'll focus on it once we grow," confusing being busy with real progress—until big fines, failed products, and lost user trust made them rethink.
5.1. The Legal Mandate: From Ethics to Compliance
Regulation is forcing the issue from a theoretical debate into a concrete operational reality. The EU's Digital Services Act (DSA) explicitly bans "dark patterns" that "subvert or impair user autonomy, decision-making, or choice" (European Parliament, 2022). Meanwhile, consistent litigation has established that website accessibility is a requirement under the Americans with Disabilities Act (ADA), with courts ruling that digital platforms are "places of public accommodation" (U.S. Court of Appeals, 2021).
Why This Matters: Compliance isn't just about avoiding fines; it's about putting dignity into practice. Brazil's data authority (ANPD) fined online stores for using manipulative countdown timers (ANPD, 2023), and Australia's ACCC sued Meta for confusing consent flows (ACCC, 2024). The message is clear: dark patterns are now a legal risk. Lawyers set the minimum standard, but it's up to ethical designers to turn those rules into user-friendly, humane experiences. This isn't optional anymore—it's the price of doing business.
5.2. The Risk Mitigation Mandate: The Price of Cutting Corners
Unethical design can cost a company much more than just fines—it can ruin trust, damage reputations, and even threaten the company's future.
5.2.1. Direct Costs:
Since 2015, major companies have paid over $54 billion in fines and settlements for ethical design failures—a staggering figure that underscores the profound financial consequences of overlooking human values in technology.
Volkswagen's "Dieselgate" scandal alone cost the company $34 billion in fines, buybacks, and legal fees after engineers deliberately programmed cars to cheat emissions tests, eroding public trust on a monumental scale. Similarly, Meta's Cambridge Analytica scandal—enabled by poorly designed privacy settings—led to a historic $725 million settlement, the largest data privacy penalty in U.S. history at the time.
Not to be overlooked, Google faced €8.25 billion in EU antitrust fines for anti-competitive design choices, such as illegally prioritising its own shopping ads. These cases illustrate a clear pattern: what might once have been dismissed as "mere" interface or system design choices can carry existential financial, legal, and reputational risks. Ethical failures in design are no longer abstract concerns—they are quantifiable liabilities.
5.2.2. Indirect Costs:
The fallout often hits harder than the fines. When Meta's Cambridge Analytica scandal broke, the company lost $119 billion in market value in two days—a historic drop that saw a significant portion of that value never recover due to eroded investor trust (CNBC, 2018; WSJ, 2018). Boeing's 737 MAX crashes, rooted in unethical safety oversight and deceptive pilot training interfaces, wiped out $60 billion in market cap, with a substantial portion of the losses never recovered (Reuters, 2019).
5.3. The Regulatory Mandate: Evolving Enforcement and Anticipated Scrutiny
Regulators are no longer waiting for harm to occur—they're scrutinising design choices themselves as sources of risk.
5.3.1. Current Enforcement: Direct Targeting of UI Patterns
- Case Study: TikTok Lite's "Task and Reward" (2024): The EU invoked the DSA, arguing TikTok's "watch videos, earn points" system—with its addictive loops and obscured opt-out flows—qualified as a "dark pattern." The program was suspended preemptively (European Commission, 2024a). Lesson: Even hypothetical harm can now trigger penalties.
- Case Study: Microsoft's AI "Recall" Backlash (2024): Microsoft's AI-powered feature recorded user activity by default and buried privacy controls. Within days of public outrage, the company overhauled the design to make it opt-in (Warren, 2024). Lesson: Privacy-by-default is no longer optional—it's expected.
- Case Study: Meta's "Pay or Consent" Model (2024): Meta's binary choice—pay for an ad-free experience or surrender personal data—is under investigation by the EU. Regulators are targeting the choice architecture itself as a potential violation of the Digital Markets Act (DMA) (European Commission, 2024b). Lesson: Coercion dressed as choice is still coercion.
5.3.2. Anticipated Scrutiny: New Frontlines in Design Ethics
- Trend 1: FTC's War on Addictive Design: Regulators are moving beyond data privacy to target engagement-driven UI patterns. FTC Chair Lina Khan has stated the intent to scrutinise "design practices that loot our attention and monetise our time" (Khan, 2023). Implication: Features that drive "time spent" could soon drive lawsuits.
- Trend 2: The AI Consent Crisis: As AI systems hunger for training data, regulators are scrutinising how companies obtain user consent. The EU's Data Act strengthens requirements for clear, informed consent for data use, including for AI training (European Parliament, 2023). Implication: Clear, affirmative consent is the minimum viable product for AI development.
5.4. The Economic Mandate: The Hidden Costs of Unethical Design
Beyond fines and lawsuits, unethical design inflicts harder-to-measure—but equally devastating—harm.
- Loss of Trust & Brand Damage: Acquiring a new customer costs 5–25 times as much as retaining one. After Cambridge Analytica, Meta's teen user base shrank by 18% (Pew Research Center, 2023), prompting billions of dollars in rebranding campaigns. Lesson: Once trust fractures, rebuilding it demands resources that could've fueled innovation.
- Compliance Overhead: The global cost of GDPR compliance is immense. Large multinational corporations spent an average of over $1 million on initial implementation alone (Society for Corporate Governance, 2020). Lesson: Ethical design isn't free, but cutting corners costs more.
- Cybersecurity Fallout: Poor design choices often create security holes. The immense global cybercrime industry thrives on flaws like confusing privacy settings and deceptive permissions (McAfee & CSIS, 2023). Lesson: Unethical UI isn't just exploitative—it's exploitable.
The growing costs show a clear truth: hiring experienced UX professionals early to focus on ethical design saves much more money than always reacting to problems later. And if you think hiring a designer is expensive, just look at what a legal team costs these days.
Conclusion: The Bottom Line on Values
The evidence is no longer anecdotal; it is financial, legal, and operational. The cost of unethical design has been quantified in the most unambiguous terms possible: billions in fines, trillions in lost market value, and immeasurable erosion of public trust.
This transforms ethical practice from a philosophical nice-to-have into a non-negotiable pillar of modern business strategy. The question is no longer if companies can afford to invest in ethical design, but how they can possibly afford not to. The return on investment is no longer just a healthier society—it is a healthier balance sheet. The pragmatic defence is now the only defence.
Synthesis: The Indispensable Human Role
The challenges are already here. AI, an overcrowded field, and a focus on quick results have all come together in our work. But as we've seen, this isn't just an outside problem—it's the result of a flawed approach we helped create, one that put numbers ahead of real understanding and efficiency ahead of ethics.
We can't just avoid these challenges. Instead, we need to rethink and broaden our role. We should move from being just UX designers to becoming ethical designers. This isn't a step back—it gives us a clearer purpose. It means asking not just "how," but also "why" and "should we?"
This new role isn't just an extra cost—it's a real source of value. As the Meta and Volkswagen cases show, unethical design can cost billions and ruin a brand's reputation. Ethical designers are the best way to catch big risks early, before they become lawsuits or major issues.
As Apple has shown, ethical design is now essential for staying competitive. When features are easy to copy, it's trust, transparency, and respect that set you apart. These qualities build loyalty that competitors can't easily match.
We can't stop AI from growing, but we can help guide it in a better direction. Our special advantage is that we helped build these systems, so we understand them well enough to make sure they serve people better.
And to any executive reading this who still sees design as a cost:
Hiring an ethical designer is cheaper than facing the legal and reputational problems that come with AI systems built solely on data and unchecked algorithms. My job was never just an expense. It's your protection against big risks and your way to build real user trust. The market now expects companies to have a conscience. That's what I'm here for.
The responsibility is immense, but it is ours to claim. Let's work together to build a future that's not just usable, but truly worthy of people.
The age of User Experience is over. The age of Human Experience Design has begun. It is time we built for it.