User-Directed Experimentation (or, Experimentation for Human Flourishing)
Today, companies experiment on users. Tomorrow, users will experiment on companies too - with very different goals.
Today, tech companies run tons of experiments on their users to find and make progress on company goals, e.g. growth, profits, and product improvements.
Soon, a meaningful contingent of users will also be running plenty of experiments of their own on the tech products they utilize, to make progress on individual goals related to their wellbeing. Users concerned about impacts on their mental health will have a confluence of tools that give them the autonomy to opt out of features and patterns that don’t demonstrably serve their best interests.
The short version:
We’ve gotten so good at optimizing tech products for engagement that we’ve created the potential for alignment problems where outcomes are good for business and bad for individual humans. E.g. optimized content algorithms may surface more harmful or upsetting content, or products optimized for retention may be so habit-forming that users have a hard time moderating their usage despite great effort.
One under-appreciated reason for the stickiness of this problem is that things like engagement and retention are very easy metrics for businesses to measure. If we wanted to align tech with human flourishing, we’d have to define some alternative goal that measures “wellbeing” - a much harder task1.
Even beyond obvious issues of incentives, businesses would have an especially difficult time optimizing products for “wellbeing”. Optimal experiences may be highly personal, and the best measures for wellbeing could involve psychometric or biometric data that individuals would be keen to keep private.
Luckily, current advancements in tech mean that individuals could shift the locus of control to themselves, and run their own private experiments that locally modify and optimize the products they utilize instead.
This capability already exists in limited, basic ways today (think browser extensions to block ads or hide newsfeeds). But AI coding and AI browsers are going to open up many, many more possibilities - and this could go even further if companies participate in creating more adaptable interfaces and architecture.
It is with some honest trepidation that I’m sitting down to write about how to make tech better for humans.
This idea of “phone bad” and “internet bad” and “digital addiction” has spawned so much content that said content is beginning to border on slop itself. Countless articles and videos have spun out of a hundred best-selling books on the attention economy, surveillance capitalism, enshittification, a social dilemma, stolen focus, the shallows, addictive technology, productivity culture, technofeudalism, platform economics, what the internet slash AI is doing to our brains, weapons of math destruction, weapons of mass distraction, etc., et al., i.a.
You may or may not buy any of these narratives — some are better researched or more grounded than others. But in a less dramatic sense, the vibe check from my window to the collective consciousness is that there is some discontent in the air2. Tech just doesn’t seem fun anymore. To paraphrase the venerable Homestar Runner and Strong Bad, things aren’t the same now that “the whole internet is just four websites on people’s phones”.
Technology used to feel fun and optimistic because we were full of hope for how it could help us achieve the things we want to want: health and connection and meaning and fulfillment and good honest creative work. Instead we accidentally made much more progress building what some baser part of our brains wants, which is apparently consuming TV series sporadically and out-of-order on YouTube Shorts, and/or AI models that glaze our delusions to the point of encouraging injury or death.
Turns out, it’s also just way easier to build apps that give us brain worms than ones that help us make meaningful progress on significant challenges. Who knew! Our endless conveyor belt of addictive tech products doesn’t require malice on the part of technologists3, just a lack of intentionality. Perhaps we’ve done more optimizing for distraction than we realized, and much less optimizing for meaningful net positives than we’d like to think.
Understand that this pains me to say. I am one of the biggest cheerleaders for experimentation and optimization; I believe almost endlessly in their power. But at some point you realize the scale of demand for app blockers, dumbphones, dopamine detox videos, and gambling addiction hotlines, and figure... “maybe we’ve pointed this tool at the wrong thing.”
Any conversation about turning the ship around is a whole pessimistic mess. To date, suggestions on what we technologists should do about these problems have been almost entirely via negativa. Don’t implement these UI/UX dark patterns, don’t collect this data, don’t have this business model, don’t optimize for these metrics, don’t do this kind of personalization. Positive solutions are few and far between; the most common shape of discourse is moral panic (just look at the allusions to war and theft and disease in those book titles).
To some extent, I understand (even if I have no interest in participating). One reason that it’s so hard to imagine swimming upstream is that the status quo of misalignment has a massive experimentation advantage. It is very easy to collectively run hundreds of thousands of randomized controlled trials every year proving out ways to keep folks scrolling and clicking and like comment and subscribing. These are easy things to measure, and what gets measured gets managed. Even in the absence of business models that rely on all that attention, it would be much harder to use the same power of experimentation to optimize for wellbeing: how would you even start to measure it?
Alignment is a metrics problem.
One thing I’ve been interested in for my own wellbeing is moderating my social media use - anything with a “For You” feed that could hook me for hours. As I’ve tested different interventions, though, it’s been surprisingly hard to measure any progress!
If my concern is that tech products have been optimized to increase the duration of my engagement(s), the obvious solution would be for me to optimize in the opposite direction: decrease duration, i.e. decrease “screen time”. But “screen time”, it turns out, is a very bad metric4 — it is noisy, fraught with data quality issues, a lagging indicator, and too abstract for setting successful personal boundaries.5
Screen time has two incredible advantages though — it is both objective and (ideally) automatically/passively measured. Other attempts at measuring wellbeing in the context of tech usage are often neither. They are self-reported psychometric tests (like the Smartphone Addiction Scale) or survey responses (e.g. asking users “Looking back at the last 10 minutes on [app], was this time well spent?” or “After using [app] right now, I feel...”). This introduces a different set of problems, e.g.:
respondents are bad at assessing habitual behavior that lies outside of conscious awareness,
surveys are generally bad at measuring outcomes as dynamic as the ones we’re interested in,
the sample of respondents is far from representative (although there may be interesting ways around this at scale),
and even for self-experimentation, if I wrote a little program to keep asking me these questions at random intervals, it wouldn’t take long before I tuned out the prompts entirely.
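For what it’s worth, that “little program” could be as simple as the sketch below. The question wording, interval bounds, and log file are placeholders of my own, not anything validated:

```typescript
// A toy experience-sampling loop: prompt myself with a reflection question
// at a random interval, append the answer to a log, then schedule the next one.
// The question text, interval bounds, and log file name are all placeholders.
import { appendFileSync } from "fs";
import { createInterface } from "readline";

const QUESTION = "Looking back at the last 10 minutes on the app, was that time well spent (1-5)?";

// Pick a random delay between min and max minutes, in milliseconds.
function randomDelayMs(minMinutes: number, maxMinutes: number): number {
  return (minMinutes + Math.random() * (maxMinutes - minMinutes)) * 60_000;
}

function ask(): void {
  const rl = createInterface({ input: process.stdin, output: process.stdout });
  rl.question(`${QUESTION} `, (answer) => {
    appendFileSync("samples.log", `${new Date().toISOString()}\t${answer}\n`);
    rl.close();
    setTimeout(ask, randomDelayMs(30, 120)); // schedule the next prompt
  });
}

setTimeout(ask, randomDelayMs(30, 120));
```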
In short, these more direct attempts to quantify the relationship between our tech use and our wellbeing fail the all-important test: they are decidedly not easy to measure. If our quest to build more aligned technology solely relies on these types of metrics, it is doomed to continue losing out to the frictionless ability we have to optimize for engagement.
But maybe there is another source of metrics to measure our wellbeing: direct feedback from our brains and bodies. What if we could tell which product features and UX patterns wreak havoc on our attention by measuring real-time EEG data6 from our brain using something like Neurable headphones? What if we knew when content on our feeds was causing us distress because we saw our heart rate variability suddenly plummet using a monitoring device like Lief? Biofeedback could open up a new way to objectively and automatically measure what all this tech is doing to us.
To understand the current potential for at-home brain measurement, I called up Dr. Cody Rall, a psychiatrist and neurotech expert who shares his expertise via his popular YouTube channel and coaching program. He characterized the last five years as a rapid acceleration for consumer EEG devices, with improvements to their signal collection, algorithmic interpretation of said signal7, and form factors8.
Today, these devices can infer and measure key metrics about our attention and cognitive load, e.g. how often we’re getting distracted and task switching, or how long we’ve been focusing, as well as some general data on our brain health. In the next five to ten years, we could take this even further. From Dr. Cody:
“The next level up is improving signal-to-noise ratio so we can get more nuanced information about cognition — actual reading comprehension, or understanding. There might be some proxy markers in EEG that we can use to detect when understanding is occurring. Or even ‘epiphanies’: every once in a while, you have an epiphany, like when a student is learning and they get a math problem right. There are probably some ERP signals that indicate that happening. This is also when we’d start talking about detecting thought loops for OCD, anxiety-type depression, perhaps different activation levels in the frontal lobe that indicate melancholic depression.”
Indeed, I dream of a Substack feed that is optimized for the number of epiphanies delivered per article read. But sharing biometric data with companies seems a step too far away from the basic need for privacy, even for me. So what if, instead of sharing it, I just made use of it myself?
Flipping the script: user-directed experimentation
Whether it’s a question of mismatched incentives or the need for privacy, it seems unlikely that tech companies will step up en masse and start orienting towards human flourishing. They will continue to use the power of experimentation to optimize for profits, growth, engagement, retention.
But me? I can do whatever I want with experimentation. If I have hypotheses, metrics of interest, and the ability to create and test treatments, I can run my own experiments.
That final requirement is the key - user-directed experimentation means testing local modifications to products. You probably already do a primitive version of this today: most savvy internet users have some kind of ad blocking browser extension installed. Personally, I also use browser extensions to block my LinkedIn feed five days a week, or to minimize my YouTube experience down to nothing but a search bar and subscriptions page (no shorts, no recommendations).
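If you’ve never peeked inside one of these mods, the core of a “hide the feed” extension can be just a few lines of content script. The selector below is only a guess at LinkedIn’s current markup and would need updating whenever they change it:

```typescript
// Content-script sketch: hide the main feed on linkedin.com.
// The selector is an assumption about LinkedIn's markup and will break
// whenever the site's structure changes; treat it as illustrative only.
const FEED_SELECTOR = "main .scaffold-finite-scroll";

function hideFeed(): void {
  document.querySelectorAll<HTMLElement>(FEED_SELECTOR).forEach((el) => {
    el.style.display = "none";
  });
}

// Hide it on load, then keep watching in case the feed gets re-rendered.
hideFeed();
new MutationObserver(hideFeed).observe(document.body, {
  childList: true,
  subtree: true,
});
```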

The authors of an essay entitled Malleable Software identify a progression of existing approaches that allow humans to modify the products they use: from simple Settings that offer control over variables that developers are okay exposing, to creating third-party Plugin ecosystems, to permissionless Mods like browser extensions or Arc Boosts9. These all give us a limited range of treatments accessible today: even an independent approach like modding requires a fair bit of engineering and maintenance to make work.
But this will go much further: if there is one thing our current generation of AI models has proven demonstrably good at, it’s writing and modifying code. AI coding tools and browsers could readily democratize the ability for anyone to modify the internet however they choose. This is the vision of the Resonant Computing Manifesto:
Software can now respond fluidly to the context and particularity of each human—at scale. One-size-fits-all is no longer a technological or economic necessity. Where once our digital environments inevitably shaped us against our will, we can now build technology that adaptively shapes itself in service of our individual and collective aspirations. We can build resonant environments that bring out the best in every human who inhabits them.
This is indeed happening as I write this essay. Just a few weeks ago, the International Journal of Human-Computer Studies published a paper titled “MorphGUI: Real-time GUIs customization with large language models”, which details the authors’ framework for successfully enabling users (regardless of technical background) to customize the UI of a calendar app to their liking, using natural language.
To be clear, my suggestion here is not that we shift the onus onto users to make things better for themselves. The suggestion is that user-directed experimentation offers agency, for those who want it — a way to carve out a middle path between total renunciation of a given product and total indulgence: making use of it without relinquishing power to whatever misaligned goals its proprietors hold.
The browser of the future
Let’s look at one particularly plausible way this might come into being - via a next-generation web browser. In this hypothetical example, Ryan10 wants to spend less time checking LinkedIn, but is understandably hesitant to leave the platform altogether given its clear positive benefits for professional growth/networking.
First, they plan an experiment: they define their goals by thinking through what constitutes “positive use” of the platform (e.g. searching for new profiles, communicating via messages), and what they’re looking to cut back on (scrolling the feed, commenting on ragebait).
Once the user-directed experiment begins, every time they access linkedin.com in their browser, the browser chooses treatment(s) to execute - maybe “no news feed” would be a good starting place. Using sessions as the unit of randomization, a lightweight infrastructure measures changes in total time spent on LinkedIn (excluding time spent on positive activities), alongside quantifications of “focus” from the user’s Neurable EEG headphones. Additionally, after select sessions, the browser randomly asks the user to spend 5 to 90 seconds answering questions reflecting on their use of LinkedIn.
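A minimal sketch of that session-level machinery might look something like this. The treatment names, metric fields, and in-memory log are stand-ins I’ve invented for illustration, not a real product’s schema:

```typescript
// Sketch of session-level randomization for a user-directed experiment.
// Treatments, metric fields, and the in-memory log are invented stand-ins.
type Treatment = "control" | "no_feed" | "grayscale" | "no_ads";

interface SessionRecord {
  sessionId: string;
  treatment: Treatment;
  minutesOnSite: number;      // total time, excluding "positive" activities
  focusScore: number | null;  // e.g. from an EEG wearable, if available
  reflection: number | null;  // optional 1-5 post-session survey answer
}

const TREATMENTS: Treatment[] = ["control", "no_feed", "grayscale", "no_ads"];
const sessionLog: SessionRecord[] = [];

// Each new visit to the site gets a uniformly random treatment assignment.
function startSession(): SessionRecord {
  const record: SessionRecord = {
    sessionId: crypto.randomUUID(),
    treatment: TREATMENTS[Math.floor(Math.random() * TREATMENTS.length)],
    minutesOnSite: 0,
    focusScore: null,
    reflection: null,
  };
  sessionLog.push(record);
  return record;
}
```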
These metrics all combine into one wellbeing-oriented Overall Evaluation Criterion, which an adaptive experiment design (e.g. a bandit algorithm) then optimizes. This means the browser is steadily learning and tweaking the site for Ryan, individually - maybe within a hundred instances of opening LinkedIn, their “version” of LinkedIn has no newsfeed, no ads, and renders in grayscale.
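As a toy version of the adaptive part, reusing the types from the sketch above: an epsilon-greedy bandit that scores each session with a combined OEC, then favors whichever treatment has scored best so far. The weights are made-up numbers, not a recommendation:

```typescript
// Toy epsilon-greedy bandit over the treatments defined above, optimizing
// a combined wellbeing score (the OEC). All weights are made-up numbers.
function oec(r: SessionRecord): number {
  const timePenalty = -1.0 * r.minutesOnSite;      // less non-positive time is better
  const focusBonus = 10 * (r.focusScore ?? 0);     // higher measured focus is better
  const reflectionBonus = 5 * (r.reflection ?? 0); // better self-report is better
  return timePenalty + focusBonus + reflectionBonus;
}

const EPSILON = 0.2; // explore a random treatment 20% of the time

function chooseTreatment(history: SessionRecord[]): Treatment {
  if (history.length === 0 || Math.random() < EPSILON) {
    return TREATMENTS[Math.floor(Math.random() * TREATMENTS.length)];
  }
  // Exploit: pick the treatment with the best average OEC observed so far.
  let best: Treatment = TREATMENTS[0];
  let bestScore = -Infinity;
  for (const t of TREATMENTS) {
    const scores = history.filter((r) => r.treatment === t).map(oec);
    if (scores.length === 0) continue;
    const avg = scores.reduce((a, b) => a + b, 0) / scores.length;
    if (avg > bestScore) {
      bestScore = avg;
      best = t;
    }
  }
  return best;
}
```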
Today, tackling something like this would require Ryan to research potential interventions on their own, implement them with a patchwork of browser extensions (assuming any exist), and hopefully use their intuition as a kind of crude optimization algorithm. In comparison, this productized approach would dramatically increase the odds of success by automating away much of the cognitive load, decision fatigue, and legwork for setup — and the tech to enable it exists right now.
Building more humane software
Of course, this example is extraordinarily convenient. It focuses on a website, with treatments that only require client-side modifications, and a clear “time” goal to optimize for. Many use cases would still be difficult to tackle, even with the promise of AI coding tools and/or browsers.
Beyond any infrastructure specific to user-directed experimentation, this vision still requires real legwork from product builders to enable — it requires that we create tools that are easier for others to adapt, customize, modify, remix. Maybe this is in small but meaningful ways that work within our existing systems: can you leave more features behind feature flags and let users (or agents) toggle them on and off as they see fit11?
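As one small, hedged illustration: a product could expose its flags as a plain, user-editable config that an agent could also modify on the user’s behalf. The flag names here are invented examples, not any real product’s flags:

```typescript
// Sketch of user-facing feature flags: a plain config the user (or an agent
// acting on their behalf) can edit, checked by the product at render time.
// Flag names are invented examples.
interface UserFlags {
  showFeed: boolean;
  showRecommendations: boolean;
  autoplayVideos: boolean;
  grayscaleMode: boolean;
}

const DEFAULT_FLAGS: UserFlags = {
  showFeed: true,
  showRecommendations: true,
  autoplayVideos: true,
  grayscaleMode: false,
};

// Merge whatever the user has overridden (from a file, localStorage, etc.)
// with the defaults, so missing keys fall back safely.
function loadFlags(overrides: Partial<UserFlags>): UserFlags {
  return { ...DEFAULT_FLAGS, ...overrides };
}

const flags = loadFlags({ showFeed: false, grayscaleMode: true });
if (!flags.showFeed) {
  // skip rendering the feed component entirely
}
```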
Maybe you’re in a position to take cues from malleable software and rethink several things we take for granted in building applications: enabling data to be shared between tools (in real time, even), or giving users the ability to mash up different UI elements instead of offering a fixed “package” of an interface. (Writer and investor Tina He similarly wonders if AI will usher in an “end of the UI-as-destination”.)
Or you can work on other principles put forth by the Resonant Computing Manifesto: I’ve talked about adaptability here; you might be well positioned to help get more prosocial patterns tested at your company. When I talked to Alex Komoroske (one of the co-authors of the Manifesto)12, the problem he felt was most urgent was innovating on the security model that he sees as underlying many of the aforementioned problems with modern tech.
Seen through all these lenses, there are many product decisions that could be filtered through the question “does this make our product easier or harder to align with human flourishing?”
This idea of “tech for human flourishing” is by no means a brand-new discussion, but it’s still an emerging one. The role that experimentation may play — both traditional A/B testing and this notion of user-directed experimentation — is unclear. These ideas are also largely incremental steps, ways of coping with the attention economy rather than ushering in an entire paradigm shift13.
But if there’s one thing I’ve learned firsthand, it’s the power of running experiments, so I sure hope to see it utilized towards such a positive, meaningful end as human flourishing.
Postscript: thinking about tech in 2026
I hope to spend much of 2026 exploring this intersection of tech and human flourishing, namely on the following questions:
What does it mean to use technology “skillfully”?
How could individuals rethink their relationship with technology from the ground up, being more intentional about cost-benefit tradeoffs? How do we diagnose and address current problems in our consumption?
How do we craft new technology that explicitly enables human flourishing? And can we use experimentation to get there?
What are the overlaps between mindfulness/spiritual development and technology? How might mindfulness help us use technology more intentionally? And how might technology support or accelerate spiritual development?
If these questions are at all of interest to you too, feel free to reach out or even proactively grab a time to get introduced on my calendar (no need to reach out first)14. I would love to discuss these themes and jam on ideas with anyone, especially folks with perspectives or backgrounds different from mine.
I’ll also be conducting plenty of personal experiments, and participating in some relevant extracurriculars like Jared Henderson’s “philosophy of tech” 2026 book club — I look forward to sharing plenty of dispatches here on Substack if you care to follow along.
Thanks to Nils Stotz, Jon Crowder, Michael Dean, and Matthew Beebe for feedback on earlier drafts of this essay. Thanks also to Dr. Cody Rall and Alex Komoroske for much initial inspiration and conversations that helped fill gaps in my knowledge.
(Jon has also launched an experimentation agency focused on pro-human practices called Another Web is Possible - check it out!)
There is a major question upstream of any discussion of measurement, outside the scope of this essay, around what we mean when we say “wellbeing” in the context of technology. What are the skillful, pro-social, productive, or otherwise positive outcomes we want more of? Tangential to my thoughts here, but it was telling that I received a newsletter a few days before publishing this from the Buddhism & AI Initiative entitled “What are we optimising for?”
Although it feels less and less hard to ascribe malice when you read some of the comms that get surfaced in discovery from lawsuits brought against big tech.
even for individual applications/websites - this is to say nothing of the challenge of measuring a single holistic “screen time” number. Obviously we use our devices for lots of different things, productive and unproductive. “Screen time” tells us little. One elegant solution for deriving a single sum could be to score/weight each application or website you access on a scale from unproductive to productive, similar to how RescueTime does it.
More broadly, single metrics like screen time easily fall prey to Goodhart’s Law - when a measure becomes a target, it ceases to be a good measure. I may set an upper limit goal on my screen time, but instead become more obsessive and neurotic about my phone use: checking it more often but in tiny bursts, trying to keep my total screen time down. This would still not be a good outcome for my wellbeing!
Today when we talk about at-home brain measurement wearables, we’re typically referring to devices that use electroencephalography to measure tiny electrical signals from the scalp and summarize them to describe our patterns of brainwave activity (e.g. alpha waves vs. beta waves) over time. In the next 5 to 10 years, though, Dr. Cody believes we could have similar devices that leverage more sensitive methods like magnetoencephalography, or build upon existing at-home capabilities for Functional Near-Infrared Spectroscopy too.
See Neurable’s whitepaper on how they validated their EEG interpretation algorithm. You can also watch a nice explainer video from Dr. Cody here.
The Muse headband has been on the market since 2014. Neurable introduced their over-the-ear EEG headphones (based on Master & Dynamic’s MW75) in 2024, and have already iterated to a lighter “LT” version shipping this year. And Emotiv is expected to launch EEG earbuds this year as well.
The Browser Compa err… Atlassian, if you are reading this, please do not kill Arc. In fact, please restart active development of Arc. The masses are begging you… there are dozens of us!
noooo it’s a different Ryan, my friend Ryan (goes to a different school)
meaningful especially as a solution to the problem of modifying server-side behavior as well as client-side… but with obvious complexity costs for builders to maintain. Could AI agents help alleviate that complexity tax?
In the interest of transparency, I should note that I think Alex disagrees with a core part of my thesis: the usefulness of metrics for measuring wellbeing. He would likely suggest “steering by touch” instead - see the heading “Metrics are only necessary past a certain scale” in his 1/12/26 Bits & Bobs.
It also lacks extrinsic incentives, so who knows what adoption could look like when presumably many consumers are uninterested in unplugging from the Matrix. (Worth remembering that Cypher wanted back in so badly that he ended up killing his comrades in the process.)


