
Trust is built by aligning expectations with reality


Trust in AI Isn’t Magic. It’s Just Expectation Alignment.


Ask ten people how they feel about AI and you’ll get everything from “this is AGI, we’re cooked” to “it’s all hype, my grandfather does this better.”


Underneath all of that noise, there’s one simple idea that determines whether people trust what you’re building:

Trust = expectations and reality are in sync.

At this week's Stage at our Co-working Friday, Jeremy Soo, founder of the emotional AI startup Curve and a longtime builder across crypto, enterprise AI, and media, shared his insights on:

  • How trust actually works (or doesn't) in AI products

  • Why so much of it “breaks” at the UX layer, and

  • What to do about it if you’re building anything for real humans


Read on if you're looking for a framework you can apply tomorrow to guide the tactical decisions you're making in your own products.


The Trust Equation: Four Levers You Actually Control


Jeremy framed trust as a simple equation with four levers you actually control:

  • Predictability

    Does your product behave consistently over time? Same input, roughly the same type of output? Or does it feel like a slot machine?

  • Legibility

    Can users form a mental model of why it did what it did? They don’t need full chain-of-thought. They just need to feel like the behavior makes sense.

  • Calibration

    How well do your promises match reality? Are you under‑promising and over‑delivering, or the other way around?

  • Stakes / Cost of Failure

    When things go wrong, how bad is it? Is it “rewrite a paragraph” bad or “oops, lost a bunch of money” bad?


When trust collapses, it’s almost always because one or more of them breaks down.


Crypto vs AI vs E‑Commerce: Same Pattern, Different Stakes


Jeremy applied this framework to three worlds:


1. E‑commerce: High Trust, Low Drama


Buying something online is boring in the best way:

  • Add to cart → it goes into the cart; remove from cart → it disappears.

  • Price, title, reviews are clear.

  • Most purchases are low stakes (a few dollars, not your life savings).


In this case, predictability is high. Legibility is high. Stakes are usually low.


As a result, people come to trust the flow over time, even if the UI is ugly.


2. Crypto Wallets: Vision High, UX in the Basement


Crypto promised programmable money, sovereignty, and faster everything. What most people got:

  • Flaky infrastructure where “something went wrong” is a literal default error message.

  • Networks and protocols that break for reasons even the builders can’t fully explain.

  • UX that often feels worse than a clunky banking app.


In this case, expectations were sky-high, stakes were extremely high, and almost no one could fully understand what was going on in the backend.


Therefore, trust broke down.


3. AI Today: Somewhere in the Middle


AI feels magical for a lot of people, but:

  • Predictability varies wildly by model and interface.

  • Errors are “understood” (wait 15 minutes, retry, rate limits, etc.), but still frustrating.

  • Marketing and hype create massive over‑promising.


Customers were promised a co‑pilot, but got one that occasionally flies into a wall. If you don’t design deliberately, that expectation gap can start to eat your product alive.


The Uncanny Valley as a Trust Failure


To Jeremy, the uncanny valley effect, the “too human, it’s creepy” reaction, is a specific case of the trust equation breaking down. The uncanny valley shows up when:

  • Capabilities look high – the agent writes, talks, or emotes like a human.

  • Confidence is high – the model answers decisively.

  • Legibility drops – people cannot tell why it’s doing what it’s doing.

  • Predictability drops – every now and then it outputs pure gibberish or weirdness.


That mismatch between “this feels like a person” and “I have no idea why it’s acting like this” is what triggers the discomfort.


A real example from Curve: a decoding bug once caused their emotional AI bot, Emma, to spit out nonsense text. Because users had no idea what was going on in the backend, it felt deeply unsettling. Jeremy's friends literally messaged him to say they had blocked the bot.


A bug in a UI that people already expect to be robotic? Annoying.

Same bug in a UI that people expect to be “human”? Creepy.

If you’re pushing realism, you don’t get to be careless about failure modes.



The UX Superpower for Trust Building: Memory


Jeremy's claim: if you can choose only one thing to focus on when building trustworthy AI experiences, it's this:

“Memory is actually all you need.”

If your system can:

  • Remember past interactions,

  • Surface the right memory at the right moment (“you asked about this two weeks ago…”),

  • And use that to shape its responses,

... trust skyrockets.


Why does this work?


Humans don't assess trust using complete information. Instead, we piece together moments and memories that come to mind, inferring intent from these sporadic snippets of recollection.

  • “You remembered my birthday, so you must care.”

  • “You recalled that event we talked about, so you must be paying attention.”


For the design of AI experiences, that means:

  • A robust memory system beats a fake personality. 

  • Cultural nuance, sarcasm, and acronyms make you more relatable, and you don’t need a giant rules table to handle them. You just need a history of misfires and corrections that the system can retrieve and learn from (a minimal sketch of that loop follows below).
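
To make the memory idea concrete, here is a minimal sketch in Python of what such a remember-recall-respond loop could look like. It is purely illustrative, not Curve's actual implementation: the class names, the word-overlap scoring (a crude stand-in for real embedding search), and the top-k cutoff are all assumptions.

    # Minimal, illustrative memory loop for a conversational agent.
    # Class names, the word-overlap scoring (a stand-in for embedding search),
    # and the top-k cutoff are assumptions, not Curve's actual implementation.
    from dataclasses import dataclass, field
    from datetime import datetime


    @dataclass
    class Memory:
        text: str                                          # what was said or learned
        timestamp: datetime = field(default_factory=datetime.now)


    class MemoryStore:
        def __init__(self) -> None:
            self.memories: list[Memory] = []

        def remember(self, text: str) -> None:
            """Store a past interaction."""
            self.memories.append(Memory(text))

        def recall(self, query: str, top_k: int = 3) -> list[Memory]:
            """Surface the memories most relevant to the current message."""
            query_words = set(query.lower().split())

            def overlap(m: Memory) -> int:
                return len(query_words & set(m.text.lower().split()))

            ranked = sorted(self.memories, key=overlap, reverse=True)
            return [m for m in ranked[:top_k] if overlap(m) > 0]


    def build_prompt(store: MemoryStore, user_message: str) -> str:
        """Shape the response by prepending recalled memories to the prompt."""
        recalled = store.recall(user_message)
        context = "\n".join(f"- Earlier, the user mentioned: {m.text}" for m in recalled)
        return f"{context}\n\nUser: {user_message}" if context else f"User: {user_message}"


    store = MemoryStore()
    store.remember("my birthday is on March 3rd")
    store.remember("I asked about the refund policy two weeks ago")
    print(build_prompt(store, "any update about that refund question"))

The retrieval method matters less than the behavior it produces: a recalled detail (“you asked about this two weeks ago…”) gets woven into the response, and that is what makes the system feel attentive.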


So You Want to Build a Trustworthy AI UX. Where Do You Start?


A few practical takeaways from the session you can apply directly:

  1. State your intentions up front.

    “This agent is here to do X, from Y perspective.”

    For example, people trust AI characters more when they know the backstory and lens it’s using.

  2. Calibrate the promise to the reality.

    Don’t pitch an all‑knowing co‑founder if what you actually have is a decent assistant that occasionally trips. In other words: if it doesn't know, it should say so.

  3. Design failure modes deliberately.

    Avoid a generic “something went wrong” in high‑stakes flows. Tell people what kind of failure it is and what they can do about it (see the sketch after this list).

  4. Invest in memory before personality.

    Accurate, contextually‑surfaced recall is more powerful for trust than another layer of “quirky” tone.

  5. Use explanation where it matters, not everywhere.

    For some users in some domains (fintech, law, health), a bit of extra transparency reduces fear. For others, over‑explanation is cognitive overload. Pick your battles.

  6. Take the stakes at play into account.

    Sloppy UX around a social post? Annoying. Sloppy UX around wealth, health, or legal advice? Career‑limiting.
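
As an illustration of point 3 above, here is a rough sketch, again in Python, of deliberate failure messaging. The failure categories and the copy are hypothetical; the point is to name the kind of failure and the next step instead of falling back to a generic error.

    # Illustrative only: the failure categories and copy below are hypothetical,
    # showing "name the failure and the next step" instead of a generic fallback.
    FAILURE_MESSAGES = {
        "rate_limited": (
            "We're getting more requests than usual. "
            "Your draft is saved; try again in a few minutes."
        ),
        "model_unavailable": (
            "The AI service is temporarily down. "
            "Nothing on your account was changed; we'll retry automatically."
        ),
        "low_confidence": (
            "We're not confident in this answer. "
            "Please review it before acting on it, or ask a human advisor."
        ),
    }


    def user_facing_error(kind: str) -> str:
        """Map an internal failure type to a specific, actionable message."""
        return FAILURE_MESSAGES.get(
            kind,
            "Something unexpected happened. Your data is safe; please retry or contact support.",
        )


    print(user_facing_error("rate_limited"))

Even the fallback message says what is safe and what to do next, which keeps the cost of failure legible.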


Your Turn: Build Products People Can Actually Trust

If you’re building AI tools for non‑technical people, you’re not just shipping prompts and APIs. You’re also designing expectations.


The guiding question for designing UX for trust should be: “What expectations am I setting, and how do I keep them aligned with reality over time?”


Start with the trust equation:

  • Make behavior predictable where it counts.

  • Make the system legible enough for your users.

  • Calibrate your promises ruthlessly.

  • Be honest about the stakes and design for safe failure.


Then add memory and emotional awareness on top.


That’s how you avoid the uncanny valley, and build AI experiences people actually want to come back to.

Come join us at SQ Collective's Co-working Friday.

Bring your questions, your half-baked prototypes, and your worries about “AI slop.”

We’ll be in the room figuring it out together. Every Friday.


