[Post-launch article — to be completed with real Yuko usage data]
Editorial note: This article is designed to be written after Yuko has accumulated sufficient usage data to support original research claims. The structure below outlines the intended argument and data journalism format. Placeholder sections should be replaced with real findings before publication.
For the first several months of Yuko's existence, we have been collecting something that most productivity apps discard: behavioral signal. Not just what tasks users set — but when they completed them, which nudges preceded completion, how many times a task was deferred before action, what time of day action occurred, and what the nudge looked like that finally moved the needle.
After [X] months of operation and [10,000+] completed tasks across [X] users, we have enough signal to ask a question that the productivity industry has rarely been able to answer with real data: what actually predicts whether someone will follow through?
The findings are not what we expected.
What We Expected to Find
Before running the analysis, we had strong hypotheses grounded in the behavioral science that informed Yuko's design. We expected to find that nudge variability would be the primary predictor of completion — that tasks receiving varied timing and framing would be completed at higher rates than those receiving uniform reminders. We expected context to matter significantly. We expected channel diversity to have a meaningful effect.
We were right about all of these. But the magnitude of some effects, and the direction of others, surprised us.
Finding 1: [Replace with real finding]
[Data journalism format: lead with the finding, explain the mechanism, show the data, discuss implications]
Finding 2: [Replace with real finding]
[Data journalism format: lead with the finding, explain the mechanism, show the data, discuss implications]
Finding 3: [Replace with real finding]
[Data journalism format: lead with the finding, explain the mechanism, show the data, discuss implications]
Finding 4: The Timing Effect Was Bigger Than Expected
[Replace with real data — placeholder argument below]
We expected timing to matter. We did not expect it to matter this much.
Tasks nudged during what we identified as each user's primary productivity window — determined by analyzing patterns in their prior completed tasks — were completed at [X]% higher rates than identical tasks nudged outside that window. The difference held even when the in-window nudge arrived later in the day than the out-of-window nudge.
This finding has a practical implication that most reminder apps ignore: the question of when to send a reminder is not best answered by the user's stated preference ("I prefer mornings") or by a generalized research finding about human chronobiology. It is best answered by each individual's revealed behavioral patterns — when do they actually do things, as demonstrated by what they have actually done.
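To make the idea concrete, here is a minimal sketch of how a "primary productivity window" could be inferred from revealed behavior alone. The function and data shapes are illustrative, not Yuko's actual pipeline: it simply finds the contiguous block of hours that covers the most historical completions.

```python
from collections import Counter
from datetime import datetime

def primary_window(completion_times, window_hours=3):
    """Return the start hour of the window_hours-long block of the day
    that covers the most historical task completions, wrapping past midnight."""
    hour_counts = Counter(t.hour for t in completion_times)
    return max(
        range(24),
        key=lambda h: sum(hour_counts[(h + i) % 24] for i in range(window_hours)),
    )

# Illustrative data: a user who mostly completes tasks mid-morning.
times = [datetime(2024, 5, d, h) for d in range(1, 11) for h in (9, 10, 11)]
print(primary_window(times))  # -> 9, i.e. the 9:00-12:00 window
```

A production version would weight recent completions more heavily and distinguish weekdays from weekends, but the core move is the same: schedule nudges by what the user has done, not what they say they prefer.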
Finding 5: The Context Effect Was the Strongest Signal
[Replace with real data — placeholder argument below]
The single strongest predictor of task completion in our dataset was the presence of motivational context in the nudge — a specific, personalized reason why the task mattered connected to the user's own stated goals.
Tasks nudged with contextual framing were completed at [X]% higher rates than tasks nudged without it. This effect was consistent across task types, user segments, and time periods. It was larger than the timing effect, larger than the channel effect, and larger than the tone effect.
This is consistent with the implementation intention research and with our own 14-day single-subject experiment that preceded Yuko's development. But the consistency and magnitude of the effect across thousands of tasks and hundreds of users gives it considerably more weight than any single study could.
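For readers who want to see the shape of the comparison behind a finding like this, the following sketch computes the completion-rate lift for nudges carrying a given feature. The field names (`has_context`, `completed`) and the toy data are hypothetical stand-ins for the real dataset.

```python
def completion_rate_lift(tasks, feature):
    """Ratio of the completion rate for tasks whose nudge had `feature`
    to the rate for tasks whose nudge did not."""
    with_f = [t for t in tasks if t[feature]]
    without_f = [t for t in tasks if not t[feature]]
    rate = lambda group: sum(t["completed"] for t in group) / len(group)
    return rate(with_f) / rate(without_f)

# Hypothetical records: two of three contextual nudges completed,
# one of three non-contextual nudges completed.
tasks = [
    {"has_context": True,  "completed": True},
    {"has_context": True,  "completed": True},
    {"has_context": True,  "completed": False},
    {"has_context": False, "completed": True},
    {"has_context": False, "completed": False},
    {"has_context": False, "completed": False},
]
print(completion_rate_lift(tasks, "has_context"))  # -> 2.0
```

The same function applied per task type, per user segment, and per time period is what lets us say the effect is consistent across slices rather than driven by one of them.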
What We Still Don't Know
Honest data journalism acknowledges its limitations. Ours are real.
We cannot establish causality from observational data alone. The correlation between contextual nudges and completion may partly reflect that users who provide richer goal context to the system are also more motivated to begin with. We have tried to control for this, but the confound cannot be fully eliminated without a controlled experiment.
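One way we try to blunt this confound is to estimate the effect within each user, so that differences between users (some being more motivated overall) cannot drive the comparison. The sketch below is illustrative of that approach, with hypothetical field names, and only counts users who received both kinds of nudges.

```python
from collections import defaultdict

def within_user_lift(tasks):
    """Average, across users, of each user's own contextual-vs-plain
    completion-rate difference. Users with only one nudge type are skipped."""
    by_user = defaultdict(list)
    for t in tasks:
        by_user[t["user"]].append(t)
    diffs = []
    for user_tasks in by_user.values():
        ctx = [t for t in user_tasks if t["has_context"]]
        plain = [t for t in user_tasks if not t["has_context"]]
        if not ctx or not plain:
            continue  # can't compare within this user
        rate = lambda g: sum(t["completed"] for t in g) / len(g)
        diffs.append(rate(ctx) - rate(plain))
    return sum(diffs) / len(diffs)

# Hypothetical data: user A completes 2/2 contextual, 0/1 plain (diff 1.0);
# user B completes 1/2 contextual, 0/2 plain (diff 0.5).
tasks = [
    {"user": "A", "has_context": True,  "completed": True},
    {"user": "A", "has_context": True,  "completed": True},
    {"user": "A", "has_context": False, "completed": False},
    {"user": "B", "has_context": True,  "completed": True},
    {"user": "B", "has_context": True,  "completed": False},
    {"user": "B", "has_context": False, "completed": False},
    {"user": "B", "has_context": False, "completed": False},
]
print(within_user_lift(tasks))  # -> 0.75
```

This controls for stable user-level motivation, but not for within-user selection, such as users adding richer context to the tasks they already care most about, which is why a controlled experiment remains necessary.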
[Additional limitations to be added based on actual data and methodology]
The Practical Upshot
For the people who have come to this article looking for actionable findings rather than academic hedging, the data converges on a few clear patterns:
The timing of a nudge matters more than most people think, and the best timing is individually determined rather than universally prescribed. The framing of a nudge — specifically, the presence of a genuine, personalized motivational reason — is the single strongest lever for follow-through. And variability — in timing, channel, and phrasing — prevents the habituation that ultimately defeats even well-intentioned reminder systems.
None of these findings are surprising in light of the behavioral science. What the data adds is scale: these effects are real, measurable, and consistent across the full diversity of tasks and people represented in our user base.
We will continue publishing these findings as our dataset grows. The productivity industry has been building on intuition and small studies for a long time. We think it deserves better data.
Yuko is the AI nudge engine informed by this research, and your usage helps make it smarter. Learn more at yuko.ai