The Platform Deal Nobody Agreed To: What Big Tech Takes From Us, What It Leaves Behind, and What We Can Do About It
- John Pope

Published by midagent | March 2026

There is a transaction happening in your home right now that you never consciously agreed to. It happens every time your child opens Instagram. Every time a teenager swipes TikTok at 1 a.m. instead of sleeping. Every time a family sits in the same room, phones out, saying nothing to each other.
The terms of this transaction were never disclosed to you. No one asked for your signature. There was no box to check that said: "I consent to my child's attention being systematically harvested, her self-image being algorithmically eroded, and her developing brain being rewired for compulsive consumption — in exchange for free access to a photo-sharing app."
And yet that is, with increasing precision, what the evidence shows has happened. Not as an unfortunate side effect. As the business model.
This is not a post about being anti-technology. It is a post about reading a contract that has already been signed on your behalf — and deciding whether Canada, and the nations that share our values, are willing to keep honouring it.
The Algorithm Has One Job
To understand what social media platforms have done to society, you first have to understand what they were designed to do — and for whom.
The answer is not complicated. These platforms were designed to maximize attention.
Not connection. Not community. Not wellbeing. Attention. Specifically, the kind of sustained, compulsive, emotionally activated attention that can be packaged and sold to advertisers at the highest possible price.
This is not conjecture. It is the operating logic of surveillance capitalism, the business model that underpins Meta, TikTok, YouTube, and X. The product is not the platform. The product is you — your behaviour, your preferences, your fears, your desires — translated into a behavioural data profile and auctioned to the highest advertising bidder. Every scroll, every pause, every reaction feeds the model. The model's only optimization target is keeping you on the platform longer.
The algorithm does not care if the content making you stay is true. It does not care if it is making you anxious, angry, or ashamed. In fact, the research is now unambiguous on this point: content that provokes anxiety, outrage, and social comparison retains attention more effectively than content that informs, inspires, or connects. So the algorithm serves you more of it. Not because anyone decided to harm you. Because harm is, structurally, the most profitable output.
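To make the structural claim concrete, here is a deliberately oversimplified sketch, in Python, of what an engagement-only ranking objective looks like. Every field name and weight in it is invented for illustration; no platform publishes its ranking code, and this is a caricature of the incentive, not a reproduction of any company's system.

```python
# Toy illustration of an engagement-only ranking objective.
# All fields and weights are invented for this example; real recommender
# systems are vastly more complex, but the optimization target
# (predicted time-on-platform) is the structural point.

from dataclasses import dataclass

@dataclass
class Post:
    predicted_watch_seconds: float  # model's estimate of how long you will stay
    predicted_outrage: float        # 0..1, likelihood of an angry or anxious reaction
    predicted_accuracy: float       # 0..1, likelihood the content is true

def engagement_score(post: Post) -> float:
    """Rank purely by expected retention. Outrage boosts the score; accuracy never enters it."""
    return post.predicted_watch_seconds * (1.0 + 0.5 * post.predicted_outrage)

def rank_feed(posts: list[Post]) -> list[Post]:
    # The feed the user sees is simply the posts sorted by this single number.
    return sorted(posts, key=engagement_score, reverse=True)
```

Notice what the scoring function never consults: whether the content is true, or what it does to the person consuming it.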
Do not take my word for any of this. Listen to what US Senator Bernie Sanders and Anthropic's Claude have to say on the subject in the short video below:
[Embedded video]
What the Internal Documents Said
Here is what makes this not merely a tragedy, but a moral failure of historic proportion: they knew.
Internal Facebook research, exposed in the 2021 whistleblower disclosures by Frances Haugen and subsequently cited in congressional testimony and ongoing litigation, showed that the company's own researchers had identified the harm their platform was causing to teenage girls with remarkable specificity. They knew that Instagram made body image issues worse for one in three teenage girls. They knew that the platform was a significant contributor to eating disorders, anxiety, and suicidal ideation. They knew that their recommendation algorithms were pushing vulnerable young users deeper into harmful content spirals, not pulling them out.
They did not fix it. The internal documents showed that proposed design changes to reduce harm were repeatedly deprioritized because they were projected to reduce engagement — and engagement reduction meant advertising revenue reduction. The fiduciary duty to shareholders, the legal obligation to maximize returns, was treated as the terminal value against which every other consideration — including the documented psychological destruction of children — was weighed and found wanting.
This is not a story about bad people. It is a story about a system that produces bad outcomes as reliably as a factory produces goods, and that is designed to resist correction for the same reason: correction costs money.
What the Research Now Shows
The evidence has moved well beyond whistleblowers and anecdote. We are now in the era of longitudinal data, and the findings are severe enough that they deserve to be stated plainly, without euphemism.
Social psychologist Jonathan Haidt, in his landmark work The Anxious Generation, identifies the years between 2010 and 2015 as the "Great Rewiring of Childhood" — the period when smartphones and algorithmic social media displaced the unsupervised, physical, play-based childhood that human development requires. The consequences followed within years and have compounded ever since.
In the decade that followed:
- Rates of major depressive episodes among teenage girls rose by nearly 300%
- Emergency room visits for self-harm among teenage girls rose 188%
- Suicide rates among younger girls rose 167%
- Among boys, the patterns were different but equally alarming: withdrawal from real-world social development into addictive gaming and pornography ecosystems, producing what researchers describe as a generation failing to develop the social competence required for adult life
These are not marginal statistical fluctuations. These are civilizational numbers. And they track, with almost perfect temporal precision, the mass adoption of algorithmically curated social media feeds — not smartphones per se, but the specific architecture of infinite scroll, variable reward loops, and social comparison engines that the major US platforms deployed and then optimised relentlessly.
The cognitive damage extends beyond mood. Research on what is now termed "TikTok brain" — a colloquial label for the neurological effects of sustained short-form video consumption — shows measurable impairment in working memory, inhibitory control, and sustained attention. Neuroimaging studies have identified abnormal white matter patterns in the brains of heavy users, in regions linked to behavioural control. Students who grew up with algorithmic feeds find long-form reading increasingly aversive, not because they lack intelligence, but because the reward circuitry of their brains has been recalibrated by systems designed to make depth feel boring.
We are not talking about kids spending too much time on their phones. We are talking about a generation of young people whose capacity for the kind of sustained, focused, independent thought that democracy, science, and civic life depend upon has been measurably, neurologically compromised — as a direct consequence of decisions made in board rooms to maximize advertising revenue.
The Political Infection
The harm does not stop at the individual. It scales.
A breakthrough study published in Science in late 2024 provided the clearest causal evidence yet that social media algorithms directly reshape political attitudes.
Researchers modified the algorithmic feeds of users on X, reducing their exposure to content promoting partisan animosity and anti-democratic attitudes. The result: measurably improved attitudes toward the opposing political party. The more striking finding: 74% of participants did not notice the change. Their political reality had been constructed for them, below the threshold of conscious awareness, by an algorithm optimising for outrage.
This is the architecture of radicalization: not the dramatic, sudden conversion of fringe manifestos, but the slow, invisible drift of the information environment toward division, suspicion, and tribal hostility. It works because outrage retains attention. Moderate content, nuanced analysis, good-faith engagement with complexity — these do not. So the algorithm surfaces the extreme, normalises it through repetition, and exploits the "illusory truth effect" — the psychological phenomenon by which repeated exposure to a claim, however false or distorted, makes it feel more credible.
For democratic societies, this is not a social media problem. It is an infrastructure problem. The information environment is the substrate of democratic deliberation, and that substrate has been handed over to systems whose only objective is to keep citizens emotionally activated for commercial purposes.
The Trade We Are Actually Making
Now step back from the individual harms — the depressed teenagers, the fractured attention spans, the radicalized feeds — and look at the transaction from a national perspective. Because when a Canadian opens Instagram, TikTok, or Facebook, something specific is happening at the macroeconomic level that goes almost entirely unremarked upon.
Canada is exporting its national wealth and importing a cascade of social damage. And it is doing so, largely, for free.
The data generated by Canadian users — their behavioural profiles, their purchasing intent, their emotional states, their social graphs — flows to servers in the United States, where it is processed, packaged, and monetized. The advertising revenue generated from Canadian eyeballs flows to Menlo Park and Seattle. The tax revenue that should result from this economic activity is minimized through structures specifically designed to shift profits away from the jurisdictions where the value was created.
Meanwhile, what flows back into Canada? The negative externalities documented above. The mental health crisis in our teenagers. The cognitive fragmentation in our young adults. The political polarization accelerating through our civic discourse. The sleep deprivation. The eating disorders. The social isolation. The "TikTok brain."
The platforms take the data — which is wealth. They take the advertising revenue — which is wealth. They take the economic intelligence that would otherwise train Canadian AI models — which is future wealth. And they leave behind the psychological wreckage and the social repair bill, which is paid by Canadian families, Canadian schools, Canadian healthcare systems, and Canadian governments.
This is not a metaphor for an unfair trade. It is an unfair trade, precisely described.
The Attention Economy and the Amazon Tax: Two Sides of the Same Coin
It is worth pausing here to note that this extractive dynamic is not unique to social media. It is the defining logic of the entire Big Tech business model in its mature phase.
The same structural analysis applies to Amazon's marketplace fees — where Canadian merchants surrender 30–50% of their revenue to a foreign intermediary that extracts the commercial value of Canadian economic activity while leaving the inflationary burden on Canadian consumers and the productivity trap on Canadian businesses.
Google's advertising auction does the same to Canadian marketing budgets: Canadian firms pay rising cost-per-click rates to reach Canadian consumers on a platform that profits from the transaction while contributing nothing to the underlying economic relationship.
The social media attention economy and the platform commerce economy are the same interdependent Big Tech organism viewed from different angles. They are designed to reinforce each other and to feed off wealth created by others; in biology, organisms that live that way are called parasites. Both are built on the same commercial logic: insert a foreign intermediary between Canadians, or any other country's citizens, and their own economic and social lives; capture the value of every interaction; export the revenue back to America; and socialize the costs in the home country.
The costs, in the case of social media, are not merely economic. They are social. They are psychological. They are developmental. They are democratic. But the mechanism is identical, and the response — if there is to be a genuine response — must address both dimensions simultaneously.
Anything less, and the problems persist.
A Different Architecture Is Possible
This is where it becomes important to be precise, because the argument here is not that technology is the enemy. It is that the current commercial architecture of dominant US tech platforms is extractive by design, and that the design can be changed.
midagent's model is, at its core, a demonstration of that proposition in the commerce layer: that a platform can connect buyers and sellers, generate real value for all parties, and sustain itself financially without treating the participants as raw material to be monetized. A flat utility-based pricing model is not merely a price point. It is a statement about what a platform's relationship to its users should look like — one where the platform earns a fair fee for the service it provides, rather than extracting rent from captive participants.
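To put the difference in rough dollar terms, here is a back-of-the-envelope comparison. Every number in it is a placeholder: the 35% take rate is a stand-in for the 30–50% range described above, and the flat fee is an illustrative figure, not midagent's published pricing.

```python
# Illustrative comparison of a percentage take-rate versus a flat utility fee.
# All numbers are placeholders chosen for the example, not actual pricing.

annual_revenue = 200_000      # hypothetical Canadian merchant's yearly sales (CAD)
take_rate = 0.35              # stand-in for the 30-50% range cited above
flat_monthly_fee = 99         # hypothetical flat utility-style fee (CAD/month)

rent_model_cost = annual_revenue * take_rate   # 70,000 CAD to the intermediary
utility_model_cost = flat_monthly_fee * 12     # 1,188 CAD for the same year

print(f"Take-rate model: {rent_model_cost:,.0f} CAD/year")
print(f"Flat-fee model:  {utility_model_cost:,.0f} CAD/year")
print(f"Kept in the Canadian economy: {rent_model_cost - utility_model_cost:,.0f} CAD")
```

The specific figures do not matter; the shape of the relationship does. One model scales its extraction with the merchant's success, the other charges for the service actually rendered.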
The same architectural logic applies to the social and informational layer. An attention economy that optimises for engagement above wellbeing is not a law of nature. It is a design choice, made in the service of a specific commercial model, and it can be unmade. Platforms can be designed to optimize for connection rather than addiction, for accuracy rather than outrage, for human flourishing rather than shareholder return — if the commercial incentives are restructured to reward those outcomes.
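Returning to the toy ranking sketch from earlier, and reusing its hypothetical Post class, here is what a restructured objective could look like. The weights are arbitrary; the only point being made is that the optimization target is a choice.

```python
def wellbeing_adjusted_score(post: Post) -> float:
    """Same toy model, different objective: reward likely-accurate content,
    penalize outrage bait. The weights are arbitrary; the target is the point."""
    return (
        post.predicted_watch_seconds
        + 30.0 * post.predicted_accuracy   # reward content likely to be true
        - 45.0 * post.predicted_outrage    # penalize content that provokes outrage
    )

def rank_feed_for_wellbeing(posts: list[Post]) -> list[Post]:
    # Identical machinery, different definition of "best": the change is commercial, not technical.
    return sorted(posts, key=wellbeing_adjusted_score, reverse=True)
```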
This is the deeper argument behind the midagent and Project Sovereign Nexus model: not merely that Canadian commerce should run on Canadian infrastructure, but that Canadian digital life — commerce, information, social connection, civic discourse — should operate on platforms whose architecture serves Canadian society rather than exploiting it.
What does that look like in practice? It looks like sovereign data infrastructure where Canadian economic intelligence trains Canadian AI models rather than enriching Silicon Valley. It looks like open-standards commerce protocols that cannot be gamed by algorithmic rent-seeking. It looks like a digital economy where the profits generated by Canadian activity circulate in the Canadian economy rather than being extracted to foreign balance sheets.
And it looks like a serious national conversation about whether the social media architecture we have imported from the United States — the one that has driven a 300% increase in teenage depression, a 167% increase in youth suicide, and the measurable erosion of the cognitive capacity of an entire generation — is an architecture that serves Canadian values and Canadian interests.
What Governments Can Do — And What They Must
For parents reading this, the immediate actions are clear enough, even if they are not easy: phones out of bedrooms, phone-free schools, delayed social media access, more unsupervised outdoor time. Haidt and the researchers who have documented this crisis have produced an actionable checklist, and it is worth following.
But the parental response alone is insufficient, because these platforms are not merely lifestyle choices. They are infrastructure. And the decision about what kind of infrastructure serves a society should not be left entirely to the market — especially when the market in question is dominated by a handful of foreign corporations whose fiduciary obligations run to their shareholders, not to the citizens of the countries in which they operate.
What governments can do — what the Canadian government in particular is positioned to do right now — is make a different set of infrastructure choices. Choices about which platforms receive public procurement. Choices about how Canadian data is governed. Choices about which commerce protocols become the standard for Canadian digital trade. Choices about whether the next generation of Canadian digital infrastructure is designed to serve Canadian society or to extract value from it.
These are not radical choices. They are the choices that every previous generation of Canadians made about physical infrastructure: roads, railways, telecommunications, broadcasting. The principle that critical infrastructure should serve the national interest is not foreign to Canada. It built the country. The question is whether we are willing to apply it to digital infrastructure before the damage becomes irreversible.
The Deal on the Table
The platforms will not change themselves. They cannot. Their commercial logic prevents it. An algorithm optimised for advertising revenue will always tend toward engagement over wellbeing, outrage over nuance, addiction over health — because those tendencies are profitable and their opposites are less so. The fiduciary duty is real, and as long as the business model remains intact, the internal researchers who document the harm will continue to be overruled by the revenue managers who quantify the cost of fixing it.
So the question is not whether US Big Tech will reform. The question is whether Canada — and the nations that share our commitment to democratic governance, individual dignity, and collective wellbeing — will choose a different architecture.
Not an architecture that walls Canada off from the world. Not a digital fortress that retreats into autarky. An architecture that is genuinely interoperable with the global digital economy, built on open standards, but designed to serve the interests of the people who use it rather than the interests of the shareholders who own it.
That is the Third Path that midagent and Project Sovereign Nexus represent. Not anti-American. Not protectionist. Simply sovereign. Simply honest about what the current arrangement costs us. Simply determined to build something better.
Do Canada and its allies in the West really want to continue this kind of relationship with US tech?
Or do we want superior alternatives?
midagent is a Canadian-built decentralized commerce protocol designed to replace extractive platform monopolies with utility-grade pricing and sovereign digital infrastructure. To learn more about how we are building a digital economy that serves Canadians — not foreign shareholders — visit midagent.ca or reach us at hello@midagent.ca.



