Please Don’t Optimize This Screen
Sacred interfaces and designing AI for moments too fragile to monetize.
Hey guardians of fragile moments—
Imagine this.
It’s 23:41. Your chest feels tight. You finally got access to your medical portal and the test results you’ve been waiting for are there, staring at you from a screen.
You click.
The language is cold, technical, coded in acronyms. Somewhere between the numbers and the Latin, a notification slides in from the side of the interface:
“Patients who viewed this also booked a premium tele-consultation. 15% off if you confirm in the next 10 minutes.”
At that exact second, the system isn’t treating you as a patient. It’s treating you as a conversion opportunity.
That’s the line I want to talk about. Because not every interface should be a marketplace.
What makes an interface “sacred”?
Let’s strip it of the poetry for a second.
There are moments in which you're simply not a "user". You're the person who hasn't slept for three nights, the parent who just got an email from the school, the worker who opens the banking app hoping there is still enough in the account.
When I think about sacred interfaces, I don’t think of icons or crosses. I think of three ingredients that tend to show up together:
- you don’t really understand how the system works, but it understands a lot about you;
- you’re not calm, rested, curious – you’re scared, exhausted, ashamed or overwhelmed;
- if something goes wrong here, you don’t just lose a bit of money, you lose health, rights, years of your life.
Medicine, debt, social services, school for your kids: these are not just “verticals”. They are places where people come with their guard down.
In those places, an interface is sacred not because it is holy, but because it touches something that can’t be easily repaired.
Let me put it in concrete scenes instead of theory.
I’ve seen hospital portals where, next to a lab result in bright red, there is a little panel suggesting a paid second opinion. The layout doesn’t scream “ad”, it whispers “people like you also booked this”. Technically it’s smart. Humanly it’s disgusting.
I’ve scrolled through wellness and mental health apps that talk about anxiety and burnout using the same mechanics as a mobile game: streaks, daily rewards, gentle guilt if you miss a day. Now add a recommendation engine that learns exactly when you’re most vulnerable and which push notification will bring you back.
I’ve watched bank interfaces and credit platforms do something similar with money problems. You confess to a chatbot that you’re in trouble, and the next screen is not a breathing space, but a menu of products you can add to the pile. The more desperate you are, the more noise the algorithm hears – and the more confident it feels in predicting how far it can stretch you.
None of this is science fiction.
It’s just the logic of conversion metrics applied to the wrong piece of human life.
How AI could actually care instead of convert
I don’t have any nostalgia for a pre-digital world. Paper forms and waiting rooms are not romantic. A well-designed AI system could genuinely make these experiences less cruel.
Imagine an AI that sits between you and a PDF full of acronyms and quietly rewrites it in a language your grandmother would understand. Or one that watches over the queue of messages arriving at a small clinic and makes sure the truly urgent ones don’t drown under administrative noise. Or a system that helps a social worker navigate rules and subsidies fast enough to actually have time left for a conversation.
The problem, again, is not the tool. It’s the intention behind it.
You can point an AI at the same data and ask two completely different questions:
- “How do we extract more revenue from people in this situation?”
- “How do we reduce confusion, delay and humiliation for people in this situation?”
The code may look similar. The interface will not.
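To make that contrast concrete, here is a minimal sketch, not a real product, in which everything (the item list, the `expected_revenue` and `clarity_gain` fields, the `rank` helper) is invented for illustration. The data is identical; only the objective passed to the ranker changes, and that one line decides what the person in distress sees first.

```python
from dataclasses import dataclass

@dataclass
class Item:
    name: str
    expected_revenue: float   # what a conversion-driven ranker maximizes
    clarity_gain: float       # how much confusion this screen removes

# The same catalogue of things the portal could show next to a lab result.
ITEMS = [
    Item("Premium tele-consultation upsell", expected_revenue=40.0, clarity_gain=0.1),
    Item("Plain-language explanation of the results", expected_revenue=0.0, clarity_gain=0.9),
    Item("Questions to ask your doctor", expected_revenue=0.0, clarity_gain=0.7),
]

def rank(items, objective):
    """Order items by whatever the business decided to optimize."""
    return sorted(items, key=objective, reverse=True)

# Question 1: "How do we extract more revenue from people in this situation?"
by_revenue = rank(ITEMS, lambda i: i.expected_revenue)

# Question 2: "How do we reduce confusion for people in this situation?"
by_care = rank(ITEMS, lambda i: i.clarity_gain)

print(by_revenue[0].name)  # the upsell comes first
print(by_care[0].name)     # the explanation comes first
```

Same data, same ranking function, two different top results. The objective is the design decision; everything downstream just executes it.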
In a sacred interface, AI should behave more like a translator and a bodyguard than like a salesperson: it protects your attention, explains what it can, and steps aside when it’s time for a human to show up.
An AI Care Charter for sacred interfaces
If you build products, you don’t have infinite time for philosophy. At some point you’re in a meeting, there’s a roadmap, someone shares a dashboard and says: “we could probably improve this flow by adding a nudge here”.
In that moment you need something brutally simple in your head.
For sacred interfaces, I would start with three questions:
- Would I be comfortable watching someone I love use this screen in their worst week of the year?
- Am I okay with an AI learning from this person’s fear in order to sell more?
- If this interaction goes wrong, can it be fixed with a refund – or does it leave a deeper mark?
If you can’t answer those questions without swallowing hard, you don’t need a longer framework. You need to pull the brake.
Call it an AI Care Charter if you like. In practice it’s just this: deciding that in certain spaces you don’t play with people’s vulnerability, even if the conversion graph begs you to.
What this means for people who build products
If you work in design, product, marketing, strategy, AI… you will increasingly face choices like these:
- Do we let the model optimize the wording of reminders in a mental health app purely on retention, or do we impose constraints?
- Do we allow upsell banners in a cancer diagnosis portal, because “it converts”, or do we draw a red line?
- Do we design the bank’s AI assistant to propose more credit by default, or to first show options for debt reduction and financial education?
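One way to make "we draw a red line" enforceable rather than aspirational is to encode it as a hard constraint the model cannot optimize around. The sketch below is hypothetical: the context names, nudge types, and `filter_nudges` function are all invented for illustration, assuming a system where an AI assistant proposes nudges and a policy layer filters them before rendering.

```python
# Contexts the team has declared sacred: no selling, only protecting.
SACRED_CONTEXTS = {"diagnosis_results", "debt_counselling", "crisis_chat"}

# The only nudge types permitted on a sacred screen.
ALLOWED_IN_SACRED = {"explain", "triage", "handoff_to_human"}

def filter_nudges(context: str, proposed: list) -> list:
    """Drop sales-oriented nudges when the screen is a sacred interface."""
    if context not in SACRED_CONTEXTS:
        return proposed
    return [n for n in proposed if n["type"] in ALLOWED_IN_SACRED]

nudges = [
    {"type": "upsell", "text": "15% off a premium consultation"},
    {"type": "explain", "text": "What this result means, in plain language"},
]

# On a diagnosis screen, the upsell never reaches the user,
# no matter how well it converts elsewhere.
print(filter_nudges("diagnosis_results", nudges))
```

The point of putting the line in code is that an A/B test can no longer quietly cross it: the constraint wins over the dashboard by construction.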
The hard part is that these decisions will rarely come with a red blinking light saying:
“Ethical dilemma here.”
They’ll arrive as Jira tickets, quarterly OKRs, A/B test reports, roadmap discussions.
You won’t be asked:
- “Are you OK with monetizing pain?”
You’ll be asked:
- “Can you help us improve the conversion rate in this flow?”
Same question. Different packaging.
This is why the concept of sacred interfaces matters. Not as a poetic metaphor, but as a practical filter:
“Is this one of those spaces where we stop selling and start protecting?”
If the answer is yes, then the brief for AI changes. Completely.
Drawing the line before someone else draws it for you
Regulators will eventually move. Some are already moving.
But if the only reason you respect vulnerable people is fear of regulation, you’ve already lost the plot.
As AI spreads into every interface, the most interesting companies won’t be the ones that squeeze the most money out of every touchpoint. They’ll be the ones that consciously decide:
- “Here we optimize for profit.”
- “Here we optimize for care.”
And they will defend that line, even when the dashboard says:
“You’re leaving money on the table.”
Because sometimes leaving money on the table is exactly what proves you’re human.
Until next time, stay humane.
Alex
If you’re working on products in healthcare, education, finance or any space where people are vulnerable, you don’t just need “better funnels”.
You need better boundaries.
At Kredo Marketing, we help teams design strategies where AI, UX and business incentives are aligned with something bigger than conversion: trust.
If you want to stress‑test one of your critical flows together and see where AI might be quietly selling instead of caring, let’s talk.