AI for Humans

Or, Why do AI assistants need so much assistance?

Last week, TIME magazine unveiled a new AI-powered feature for their Person of the Year articles–a chatbot that can summarize, read, and answer questions about the article in order to enable and encourage “meaningful, focused conversations about a single aspect of TIME’s journalism.” I was excited to test-drive the new feature because I think that goal is incredibly important as we continue down a path of bite-sized, low-cognitive-lift media. But I was also intrigued for an admittedly selfish reason–I released a similar prototype last month, and wanted to see how my version stacked up.

The TIME experience was underwhelming. It was also familiar: a chatbot that needed me to guide it in order to get the response I wanted. A helpful assistant that needed helpful assistance. A Wikipedia article in the guise of a conversation.

Too often, we pin user satisfaction on the quality of what an AI produces: was the answer comprehensive? Was the image realistic? Did the email sound professional enough? We assume advanced capabilities and high-quality outputs will necessarily produce fulfilling experiences. 

But that’s not quite right.

The status quo is too needy.

TIME’s chatbot needs more context in order to answer a question—even when the question is about the first sentence of the article.

Today’s AI software systems constrain the utility of the AI models that power them. It’s why we see dramatically less uptake of AI tools for school than the zeitgeist would have you expect. These tools limit the potential richness of human-AI interaction because they impose a set of hidden requirements on every user, every time they log in. Specifically, users often need to fulfill at least two of the following requirements:

  • the user needs to know what they need (e.g., “a memo”)

  • the user needs to know how to get it (e.g., the right prompt and context)

  • the user needs to have the time to perfect it (e.g., continually refining output in chat)

Think about the last time you had a frustrating experience with a chatbot—it was likely because you couldn’t satisfy at least two of the above points.

Systems are designed this way because designers consider the final output to be the “table stakes”. But in this new age of unimaginably flexible computing, we have the ability to focus on experience. I mean this in a deeper sense than the common understanding of UI/UX: I’m not talking about delighting users with confetti, as fun as that may be. We now have the capacity to rethink entire ways of doing, because we no longer need to be constrained by the conventions of the “static” technology of the past three decades of computing.

Novel user experiences require novel design principles.

I’ve been thinking about this incessantly during my Fellowship with Teaching Lab Studio, and over the last six months, I’ve developed a set of design principles for creating rich AI-powered encounters.

These principles are my attempt at solving the issues mentioned above and shifting the paradigm from output to experience.

Cast Human Fingerprints

Growing up, I loved everything Aardman Animations created. Creature Comforts and Wallace and Gromit were charming claymation films, but what made them truly magical to me was the fact that you could see the animators’ fingerprints press and bump and jump across the moving clay figures. You knew that someone was responsible for this magic. Experiences are more compelling, more “sticky”, when we feel that another human had a part in their conception, creation, or execution: it’s why people flock to the Louvre to see the same smile they’ve seen on countless coffee mugs. It’s what NFTs tried to capitalize on (and ironically, the reason they failed). When designing AI experiences, we should also cast human fingerprints: intentionally show users the marks of humanity that molded the encounter.

Complete the Circuit

Retrieval-Augmented Generation, or RAG, is a technique that lets a large language model (LLM) access a large database of information and use the pieces relevant to the user’s query to generate a response: a little like an open-book test. But a RAG-powered chatbot is just as inert as a regular chatbot. It has access to deep knowledge, but only uses that knowledge when the user asks (in a specific way!). Instead, when designing AI experiences, we should complete the circuit: use the information available to the LLM and proactively do something with it.
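
To make that concrete, here is a minimal sketch of what “completing the circuit” could look like in code. The `embed` and `generate` functions are stand-ins I’ve invented for illustration (in practice they would be a real embedding model and an LLM API call); the point is that retrieval is triggered by the passage the reader is currently looking at, and the system volunteers something, rather than waiting for a perfectly phrased question.

```python
from dataclasses import dataclass


@dataclass
class Passage:
    text: str
    notes: str  # background the author attached to this passage


def embed(text: str) -> set[str]:
    # Placeholder "embedding": a bag of lowercase words.
    # Swap in a real embedding model in practice.
    return set(text.lower().split())


def similarity(a: set[str], b: set[str]) -> float:
    # Jaccard overlap between two bags of words.
    return len(a & b) / max(len(a | b), 1)


def retrieve(query: str, corpus: list[Passage], k: int = 2) -> list[Passage]:
    # Standard RAG retrieval step: rank passages by relevance to the query.
    q = embed(query)
    return sorted(corpus, key=lambda p: similarity(q, embed(p.text)), reverse=True)[:k]


def generate(prompt: str) -> str:
    # Placeholder for an LLM call (any chat-completion API would go here).
    return f"[LLM response grounded in]\n{prompt}"


def proactive_turn(current_passage: str, corpus: list[Passage]) -> str:
    # "Complete the circuit": retrieval is keyed to what the reader is looking
    # at right now, not to a perfectly phrased question, and the system
    # volunteers a grounded observation plus an invitation to dig deeper.
    context = retrieve(current_passage, corpus)
    prompt = (
        "The reader just finished this passage:\n"
        f"{current_passage}\n\n"
        "Relevant background:\n"
        + "\n".join(f"- {p.text} (author notes: {p.notes})" for p in context)
        + "\n\nOffer one insight the reader probably hasn't considered, "
        "then invite a question about it."
    )
    return generate(prompt)
```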

Find the Right Path

The concept of “desire paths” is common in human-centered design. It refers to the phenomenon where pedestrians veer off paved walkways and cut through a lawn to get to their destination. When thousands of people do this, they wear a “desire path” into the dirt, a physical record of what pedestrians “actually want”. The design lesson is to figure out a user’s desire path and make a product they “actually want”. This lesson makes sense, but the desire path itself is a product of constraints. If I’m about to miss the bus, I might cut through the lawn to get to the bus stop faster. But if I’m on a walk on a sunny day listening to my favorite podcast, I might choose an even longer route to the bus stop. The desire path depends on my constraints (catching the bus on time, an indirect sidewalk); remove those constraints, and my path changes again. I call these constraint-free routes inspire paths.

Inspire paths are the routes we choose to take because we enjoy the path itself. It’s why some people still make physical art when digital tools are readily available, why you might read a book instead of watching the movie version, why most people try to learn chess instead of using a chessbot to cheat against online players. Whether someone chooses a desire path or an inspire path depends on the constraints that person is under. We should make both available. When designing AI experiences, we should find the right path: figure out whether users need constraint-imposed efficiency, want to engage more deeply in a process, or want the option of both.

“I’m here to help you with information about Taylor Swift.”

If you watched the video at the start of this post, you might already see how powerful these principles can be, but I think it’s worth discussing here. Both TIME's implementation and my own experimental essay aim to create interactive, AI-enhanced reading experiences, but take different paths to get there.

TIME's implementation offers standard AI features: summarization, chat, and audio. The interaction remains relatively traditional – question and answer, with the AI serving primarily as an information retrieval tool. When I tested both systems with the same interaction pattern (listening to a paragraph, then asking a contextual question), the differences became clear. Where TIME's AI responds with requests for clarification, a more intentionally designed system can anticipate questions and provide nuanced, contextual responses that feel more like a conversation with someone who deeply understands the material.

In my implementation, I tried to commit to the principles above in several ways:

  1. Cast Human Fingerprints: I wrote the essay myself while giving the AI clone access to unreleased drafts, notes, and inchoate ideas from my Notion account. The result is that users really feel like they are interacting with a deeply thought-out idea and can interrogate the progenitor of that idea.

  2. Complete the Circuit: I "prepped" my AI clone with not just the content but also example answers to questions I anticipated users might have. Paired with an essay written to elicit questions, this results in an experience that poses a provocative idea, anticipates both that the user will respond and what that response will be, and then answers it the way I might. (A rough sketch of this prep step follows the list below.)

  3. Find the Right Path: The essay allows for multiple ways to engage: maybe you want to just read the essay. Maybe you want to listen to it like a regular podcast. Maybe you have the time and space to interact with it. Maybe that changes from moment to moment. All paths are available at any moment.
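
For the curious, here is a rough sketch of what the prep step in item 2 might look like. None of these names reflect my actual implementation; they are illustrative stand-ins for the idea of bundling the essay, unreleased notes, and anticipated question-and-answer pairs into the clone’s system prompt before any user ever shows up.

```python
# Illustrative sketch only: the names and structure here are hypothetical,
# not the actual implementation behind the interactive essay.

# Anticipated Q&A pairs: questions the essay was written to provoke,
# each paired with the answer the author would give.
ANTICIPATED_QA = [
    {
        "question": "Isn't this just a fancier chatbot?",
        "answer": "The difference is initiative: the essay anticipates the objection and meets it head-on.",
    },
    # ...one entry per question the essay is designed to elicit
]


def build_clone_prompt(essay: str, notes: list[str], qa: list[dict]) -> str:
    """Assemble a system prompt that 'preps' the AI clone before any user arrives."""
    qa_block = "\n\n".join(
        f"Q: {item['question']}\nA (answer in this spirit): {item['answer']}"
        for item in qa
    )
    return (
        "You are a clone of the essay's author. Answer as they would.\n\n"
        f"ESSAY:\n{essay}\n\n"
        "UNRELEASED NOTES AND DRAFTS:\n"
        + "\n".join(f"- {n}" for n in notes)
        + "\n\nANTICIPATED QUESTIONS AND HOW THE AUTHOR WOULD ANSWER THEM:\n"
        + qa_block
    )
```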

Use the principles yourself.

I should say here that this is all experimental. My way is not the only way, and is likely not the best way. But it’s clear to me that there is a better way than what we interact with right now. And it will take continuous experimentation by as many AI developers and designers as possible to find the next better way.

The promise of AI has been conflated with the promise of productivity. But to me, the real promise of AI is that we will be able to more deeply engage with each other, with ideas, and with the process of creation. We can do that—I can see that future clearly—but only if we commit. Commit to shifting paradigms, commit to deep engagement, commit to human experience. 

I hope, now, that you can see that future a little more clearly too.

Want to experience these principles in action? Check out my interactive essay Sentient AI Is Closer Than You Think.