About These Socratic Seminar Reading Series
Are We Teaching Kids to Think Critically — and Are We Giving Them Anything Real to Think About?
Here's a question worth sitting with:
A medication costs $4.73 to manufacture. The company charges $935.77 for it.
Not $50. Not $200. Nine hundred and thirty-five dollars.
That's not speculation. That's from a peer-reviewed study published in JAMA Network Open by researchers at Yale University and Doctors Without Borders. The drug is Ozempic. The company is Novo Nordisk. The profit margin is real, documented, and — here's the part that should bother all of us — completely legal.
This is the kind of fact that doesn't fit neatly into a textbook. It doesn't have a clean takeaway. It demands that students — and adults — sit with a genuinely uncomfortable question: At what point does profit become extraction? And who decides?
That's the question at the heart of two free reading series I've developed for high school Socratic seminars.
What These Reading Series Are
Greed Nation and Poisoned for Profit are collections of close-reading passages built entirely on verified, primary-source facts — peer-reviewed research, Senate investigation reports, internal corporate documents released through litigation, and investigative journalism from major news organizations.
All facts in this series are drawn from peer-reviewed research, official government reports, and reporting from major news organizations. Each passage includes primary sources students can verify independently. Students and teachers are encouraged to check every claim, seek out counterarguments, and form their own conclusions. Critical thinking — not agreement with any particular viewpoint — is the goal.
SERIES 1 — GREED NATION: When Profit Becomes More Important Than People
Four reading passages for high school Socratic seminar. Subjects: corporate greed, pharmaceutical pricing, housing exploitation, economic bubbles, and regulatory capture. Each passage includes 6 Socratic questions, 4 Food for Thought prompts, and verified primary sources.
1. "The $936 Pill" — When Medicine Becomes a Luxury. A detailed examination of Ozempic's production cost ($0.89–$4.73/month) vs. its U.S. retail price ($935.77/month), using the 2024 Yale/JAMA Network Open study. Covers pharmaceutical patents as monopolies, the U.S.–Europe price gap (13x), Novo Nordisk's Senate testimony, and who goes without. Ideal for economics, health science, civics, and ethics units.
2. "Rusted Wheels for $2,500 a Month" — Vanlords and the Housing Crisis. Documents the California vanlord phenomenon: property owners renting inoperable, unregistered RVs to unhoused residents for hundreds to thousands of dollars per month with zero tenant protections. Covers Bay Area housing economics, the regulatory gap that makes vanlording legal, policy capture in housing regulation, and the limits of market solutions to human need. Strong for civics, economics, and social justice.
3. "Boom, Bust, Bailout, Repeat" — Economic Bubbles and Who Pays. Traces the shared anatomy of two U.S. economic collapses — the dot-com crash (2000, ~$6.2T in household wealth destroyed) and the 2008 housing bubble ($700B bailout, 9 million jobs lost) — plus the early indicators of the AI investment surge. Explains mortgage-backed securities, regulatory capture by the financial industry, and the asymmetric distribution of profits and losses. Strong for economics, history, and financial literacy.
4. "Greed Nation: Connecting the Dots" — Synthesis. A synthesis passage identifying the five-step pattern of corporate extraction visible across all three preceding passages: monopoly position → maximum extraction → policy capture → crisis → public pays. Addresses complicity, systemic thinking, and what changes would be required to alter the pattern. Designed as a culminating discussion for the full series, or as a standalone critical thinking exercise.
SERIES 2 — POISONED FOR PROFIT: How Greed Is Destroying Our Environment and Our Health
Five reading passages for high school Socratic seminar. Subjects: EPA dismantling, PFAS contamination, pesticide regulatory capture, fossil fuel climate denial, and environmental justice. Each passage includes 6 Socratic questions, 4 Food for Thought prompts, and verified primary sources.
1. "Dismantling the EPA" — When the Watchdog Is Defunded. Documents the 2025 EPA budget and staffing cuts (proposed 55% budget reduction; 1,274 positions eliminated; all Environmental Justice programs ended), the reorganization of the Office of Research and Development, and the effort to reverse the 2009 Endangerment Finding that provides the legal basis for all U.S. climate regulation. Connects each cut to the industries that lobbied for it. Strong for civics, environmental science, and political science.
2. "The Chemicals That Never Leave" — PFAS and Corporate Cover-Up. Examines the PFAS "forever chemical" crisis: 12,000+ synthetic compounds, detectable in ~45% of U.S. tap water and in virtually every American's blood. Covers the 2025 NIH-funded USC study estimating 6,864 annual cancer cases from PFAS in drinking water; the 3M and DuPont internal document evidence of decades-long health risk concealment; and the 2025 partial rollback of the EPA's 2024 drinking water standard. Strong for environmental science, health science, and ethics.
3. "Banned Everywhere Else" — Pesticide Industry and Regulatory Capture. Examines atrazine (banned in the EU, still used on 60M+ U.S. acres; classified a probable carcinogen by WHO in November 2025) and glyphosate/Roundup (Bayer facing 170,000+ lawsuits; $10B+ in settlements). Documents Monsanto's ghostwriting of academic papers, the 50+ private EPA–Syngenta meetings during the atrazine review, the revolving door between the American Chemistry Council and EPA leadership, and the 322M lb annual U.S. use of EU-banned pesticides. Strong for science, ethics, and media literacy.
4. "The Longest Lie" — Fossil Fuels and 50 Years of Climate Denial. Traces Exxon's 1977 internal climate research (which accurately predicted observed warming), the subsequent decades-long industry disinformation campaign, the IMF's estimate of $757B in annual U.S. fossil fuel subsidies, and the 2025 EPA rollbacks of vehicle and power plant emissions standards. Addresses environmental justice: air pollution's disproportionate impact on low-income communities and communities of color. Strong for environmental science, history, political science, and ethics.
5. "Connecting the Poison" — Synthesis and the System. Identifies the six-step corporate deception playbook visible across all four preceding passages; examines how the costs of environmental harm are consistently shifted to communities with the least political power; challenges students to evaluate whether democracy can reliably produce protective environmental policy when the regulated industries have disproportionate access to regulators, policymakers, and media. Culminating discussion passage for the full series.
GREED NATION
A Socratic Seminar Reading Series for High School Students
When Profit Becomes More Important Than People
This reading series examines one of the
most persistent and powerful forces shaping American life: the drive for profit
at the expense of human need. From the price of a life-saving medication to the
collapse of the global economy, the readings that follow trace how unchecked
greed has shaped—and repeatedly broken—the world we live in.
These are not opinions. They are documented
facts, supported by studies published in major scientific journals,
investigations by the United States Senate, and reporting by some of the most
respected news organizations in the world. Read carefully. Think critically. Be
willing to be uncomfortable. The questions raised here do not have easy
answers—but they are questions every generation must face.
How to Use This Series
Each passage includes: (1) a reading built
on verified facts, (2) Socratic Seminar discussion questions designed to push
your thinking, (3) Food for Thought — deeper provocations to take the
conversation further, and (4) Sources to Verify — so you can check every claim
yourself.
PASSAGE 1 | The $936 Pill
When Medicine Becomes a Luxury: The Cost of Staying Alive in America
The Numbers Don't Lie
In March 2024, researchers from Yale
University, King's College Hospital in London, and the nonprofit Doctors
Without Borders published a study in the prestigious medical journal JAMA
Network Open. Their finding was stunning, and it was not disputed: the diabetes
and weight-loss medication Ozempic — made by the Danish pharmaceutical company
Novo Nordisk — can be manufactured for as little as 89 cents per month. Even
including a profit margin and all production costs — the chemical ingredient
semaglutide, the disposable pen device, the filling and packaging — the most
the drug should cost to make is $4.73 for a one-month supply.
Novo Nordisk charges Americans $935.77 per
month for the same drug, before insurance and rebates.
To put that in plain math: for every dollar it costs to make Ozempic, Novo Nordisk charges approximately $200 to $1,000. That is not a business model. That is a ransom.
The active ingredient — semaglutide — costs
about 29 cents per monthly dose. The injection pen costs $2.83. Filling and
packaging costs 20 cents. Other chemicals cost 15 cents. Add it all up: you get
under $5. The company charges nearly $1,000. The difference — nearly $930 per
patient per month — is pure profit, extracted from people who need this
medication to manage life-threatening diseases including Type 2 diabetes and
obesity-related heart disease.
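For classes that want to verify the passage's arithmetic directly, the figures above can be checked in a few lines of Python (a sketch: the dollar amounts are the passage's, the variable names and script are ours):

```python
# Per-month production cost components for Ozempic, as reported in the
# Yale / JAMA Network Open study cited in this passage (US dollars).
semaglutide = 0.29      # active ingredient
injection_pen = 2.83    # disposable pen device
fill_and_pack = 0.20    # filling and packaging
other_chemicals = 0.15  # remaining inputs

production_cost = semaglutide + injection_pen + fill_and_pack + other_chemicals
list_price = 935.77     # U.S. retail price per month, before insurance and rebates

markup = list_price - production_cost
ratio_low = list_price / 4.73    # against the study's highest cost estimate
ratio_high = list_price / 0.89   # against its lowest cost estimate

print(round(production_cost, 2))            # component total: "under $5"
print(round(markup))                        # "nearly $930 per patient per month"
print(round(ratio_low), round(ratio_high))  # "$200 to $1,000" per dollar of cost
```

Running it reproduces each claim in the text: a component total of $3.47, a markup of roughly $932, and a price of roughly 198 to 1,051 times the estimated production cost.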
A Drug the World Needs, at a Price It Cannot Afford
Ozempic is not an obscure medication. As of
2023, an estimated 2% of all American adults who visited a doctor had been
prescribed Ozempic or its sister drug Wegovy. More than 10 million Americans
are currently taking GLP-1 medications. For many people with Type 2 diabetes or
severe obesity, these drugs are not optional lifestyle choices — they are
medical necessities. People who cannot afford them face amputations, blindness,
kidney failure, and death.
The company's own CEO, Lars Fruergaard
Jørgensen, was called before the U.S. Senate Health Committee, chaired by
Senator Bernie Sanders, to explain why Americans pay so much. The answer became
clear when committee investigators compared prices around the world: Americans
pay $935 per month for Ozempic while patients in Germany pay the equivalent of
about $140, patients in Denmark (Novo Nordisk's home country) pay $186, and
patients in the United Kingdom pay $92. The same drug. The same company. A price difference of more than 1,300 percent, with Americans paying more than patients in any other wealthy nation.
Novo Nordisk earned more than $10 billion in just the third quarter of 2023 — more than half of its revenue from these two drugs alone. In 2023, it became the most valuable publicly traded company in Europe, surpassing the luxury goods giant LVMH.
The Patent as a Weapon
Novo Nordisk holds a U.S. patent on
semaglutide that does not expire until 2032. This patent — granted by the U.S.
government — gives the company the legal right to be the only seller of Ozempic
in America for the next seven or more years. No generic manufacturer can
produce a cheaper version. No competition is permitted. Meanwhile, Senator
Sanders revealed that executives from generic drug companies told him they
could produce and sell an identical drug for less than $100 per month — less
than 10 percent of what Novo Nordisk currently charges — and still make a
profit.
This is not a failure of the market. It is
a designed absence of one. The company lobbies for patent protections. The
government grants them. The patient — often a lower-income diabetic who cannot
afford $12,000 per year — is the one left without options.
Novo Nordisk's Defense
Novo Nordisk argues that its high U.S.
prices subsidize the billions it spends on research and development — nearly $5
billion in 2023 alone. The company also notes that 75% of its gross revenue
goes to rebates and discounts to insurance companies and pharmacy benefit
managers, and that patients with private insurance can access the drug for as
little as $25 per month through the company's savings card.
Critics, including the Yale researchers,
push back: the savings card does not help uninsured patients, Medicare
recipients, or Medicaid patients. The billions in R&D spending — while real
— cannot fully explain why the same drug costs thirteen times more in America
than in other rich countries. And the profits accumulated by Novo Nordisk
during this period reached levels that even the company's own CEO admitted were
unexpected.
Socratic Seminar Questions

1. Is a company morally obligated to make a life-saving drug affordable, or does it have the right to charge whatever the market allows? Who decides?

2. Novo Nordisk argues it needs high prices to fund research for future drugs. Is this a valid argument? How would you verify it?

3. The U.S. government grants pharmaceutical patents that block competition. Should the government also have the power to cap prices? Why or why not?

4. Americans pay 13 times more than British patients for the same drug. What does this tell us about how the American healthcare system works — and for whom?

5. If you were a Type 2 diabetic without insurance, what options would you have? What does your answer tell you about the system?

6. Can a business practice be both legal and wrong? Give an example from this passage or from your own experience.
Food for Thought

• The researchers who published this study said their goal was to "have receipts" — to make the math transparent so no one could deny it. Why might powerful corporations prefer that this kind of math stay hidden?

• The drug industry claims it needs patents and high prices to fund innovation. But many of the original scientific breakthroughs behind GLP-1 drugs were funded by public universities and government research grants — meaning taxpayers already paid once. Should they have to pay again?

• Novo Nordisk is a Danish company. Denmark has universal healthcare and negotiates drug prices centrally. Its own citizens pay a fraction of what Americans pay for its drug. Is this ironic? Is it wrong?

• Martin Shkreli became infamous for raising the price of a lifesaving drug 5,000% overnight. Public fury was enormous. But Ozempic's markup over production cost is comparable. Why do you think the public reaction has been different?
Verify the Facts — Sources to Check

Barber, M. et al. (2024). "Estimated Sustainable Cost-Based Prices for Diabetes Medicines." JAMA Network Open. — The Yale/Doctors Without Borders study on Ozempic production cost.

CNBC (March 27, 2024): "Novo Nordisk's $1,000 diabetes drug Ozempic can be made for less than $5 a month, study suggests." cnbc.com

NBC News (Sept. 18, 2024): "Bernie Sanders says Ozempic can be produced for less than $100 a month." nbcnews.com

Fortune (March 28, 2024): "Ozempic maker Novo Nordisk facing pressure as study finds $1,000 appetite suppressant can be made for just $5." fortune.com

U.S. Senate HELP Committee Report (2024) — Investigation into Novo Nordisk pricing. Available via senate.gov
PASSAGE 2 | Vanlords
Rusted Wheels for $2,500 a Month: The Profiteering of Desperation
When Your Only Home Is Someone Else's Broken-Down Van
In the wealthiest county in the United
States — Santa Clara County, California, home to Apple, Google, and eight of
America's fifty most expensive zip codes — people are living in broken-down
recreational vehicles parked on public streets. They are not homeless in the
traditional sense. They have a roof over their heads. But that roof belongs to
someone else, and they pay for it every month.
These landlords — called
"vanlords" by housing advocates and city officials — purchase
dilapidated, often unregistered, uninsured, and inoperable RVs, then rent them
to people with nowhere else to go. The monthly fees vary, but reports from across
California's Bay Area confirm that renters often pay hundreds of dollars per
month — sometimes well over a thousand — for a vehicle that cannot legally be
driven, often lacks working plumbing, and sits on a public street with no
tenant protections whatsoever.
In Santa Clara County, the share of homeless individuals sleeping in vehicles has more than doubled since the pandemic, jumping from 18% in 2019 to 37% in 2025. Countywide, approximately 11,500 people are estimated to live in 6,800 RVs.
There are no leases. There are no
inspections. There are no eviction processes — because legally, the renters
have no rights to the vehicles they sleep in. City officials across California,
including Los Angeles, San Jose, and San Francisco, have described the
conditions in these RVs as hazardous: fire risks, biohazards, no sanitation,
broken locks. One San Jose city councilmember described vanlords as individuals
who "manipulate our unsheltered residents to rent these places in unsafe
conditions."
The Economics of No Other Option
Why would anyone pay for this? Because in
California — and increasingly across the United States — it is the only thing
they can afford. California accounts for nearly one quarter of America's total
homeless population despite having only 12% of its total residents. In the Bay
Area, rents for a modest one-bedroom apartment routinely exceed $3,000 per
month. Workers earning $25 per hour — which is above California's minimum wage
— still cannot afford this.
Many of the people living in vanlord RVs
are employed. Some are immigrants recently arrived in the country. Some are
elderly. Some are families with children. They are not living on the street by
choice; they are living in someone else's broken vehicle because it is one step
above the street — and someone has found a way to profit from that one step.
An investigation by CNBC in February 2025 spoke directly with a vanlord and multiple tenants. One tenant reported paying several hundred dollars per month for a vehicle on a public street — an amount that, annualized, rivals what a decent apartment would cost in many parts of the United States.
The Policy Capture Problem: Who Protects the Renter?
Landlords in California must comply with
detailed habitability requirements — working heat, plumbing, pest control,
smoke detectors, and more. Vanlords comply with none of these requirements,
because the vehicles are not legally classified as housing. This regulatory gap
is not an accident; it is the result of what economists call "policy
capture" — when the rules are written in ways that benefit those with
money and power, while leaving vulnerable people without protection.
In 2023, the Los Angeles City Council voted
12-0 to study a crackdown on vanlords. L.A. County supervisors followed. San
Jose proposed a ban on RV rentals to the homeless. Advocates for the unhoused
pushed back — not to defend vanlords, but because they argued that removing
even this deeply flawed housing option, without replacing it with anything
better, would simply push people onto the bare pavement.
One resident at a San Jose RV encampment
said it directly: "By restricting the sale, rental or even transfer of
RVs, you're targeting one of the last remaining options for shelter that
unhoused individuals can afford. If safety is the concern, then provide
sanitation services in designated areas with proper oversight — not further
displacement."
As of 2025, California has invested
billions of dollars in homeless response with mixed results. The average time
to move someone from an encampment to interim housing is three months. Many
never make it. Meanwhile, the housing market continues to price people out, and
vanlords continue to fill the gap — at whatever price desperation will bear.
Socratic Seminar Questions

1. Is a vanlord doing something wrong, or are they simply responding to a market need? Does your answer change if the RV is unsafe and uninhabitable?

2. Housing advocates say that banning vanlords without providing alternatives just moves people from one dangerous situation to another. Is this a valid argument against the ban? What would a better solution look like?

3. California has some of the highest minimum wages in the country and some of the highest rents. What does this tell us about what minimum wage laws can and cannot fix?

4. What is "policy capture," and can you find other examples of it beyond housing? (Think about other industries where regulations seem to protect corporations more than consumers.)

5. One councilmember said the city "does not want people living on the street, in an RV, or in a tent — but does not want to build enough housing to replace what is lost." If this is true, what is the city's actual goal?

6. Some of the people living in vanlord RVs are full-time workers. What does it mean for a society when working full time is no longer enough to afford a home?
Food for Thought

• In 19th-century England, factory owners housed workers in company tenements and charged them rent — effectively taking back a portion of every paycheck. Historians call this the "truck system." Is vanlording a modern version of the same idea?

• Silicon Valley has produced more billionaires per square mile than almost anywhere on earth. It also has one of the worst housing crises in the developed world. Can both of these things be true at the same time without one causing the other?

• The government often describes homelessness as a personal failing — a result of addiction, mental illness, or bad decisions. The data from California suggests it is increasingly a result of economics: people simply cannot afford housing. Which explanation do the facts support? Does it matter which one politicians believe?

• Cities fine people for sleeping on sidewalks and ban RV rentals. They do not fine corporations for keeping wages low or fine landlords for leaving apartments empty as investments. What do these policy choices reveal about whose interests the government is protecting?
Verify the Facts — Sources to Check

CNBC (February 20, 2026): "From 'vanlords' to safe parking sites: How RVs became Silicon Valley's housing safety net." cnbc.com

LAist / KPCC (October 2023): "LA County Supervisors Approve 'Vanlord' Resolution." laist.com

California City News (September 2023): "Los Angeles to Crack Down on So-Called 'Vanlords.'" californiacitynews.org

San José Spotlight (March 2025): "San Jose to crack down on RV rentals to homeless." sanjosespotlight.com

Greater Los Angeles Homeless Count (2023) — Published annual data on vehicle homelessness. lahomeless.org
PASSAGE 3 | The Cycle of Collapse
Boom, Bust, Bailout, Repeat: How American Greed Has Repeatedly Crashed the Economy
A Nation That Does Not Learn Its Lessons
The United States economy has experienced two devastating bubble collapses in the last 25 years: the dot-com crash of 2000 and the housing collapse of 2008, along with the global stock market crash it triggered. In each case, the pattern was identical: extraordinary profits for a small group of insiders, ordinary Americans left holding the wreckage, and the same forces that caused the crash being allowed to regroup and do it again.
This is not a conspiracy theory. It is
documented history, laid out in detail by Nobel Prize-winning economists, the
U.S. Senate, and the Financial Crisis Inquiry Commission — the official
government body created to investigate the 2008 crash. Their conclusion: the
financial crisis was "avoidable" and was caused by "widespread
failures in financial regulation and supervision" and "dramatic
failures of corporate governance and risk management." In plain language:
people knew what was happening, made enormous amounts of money from it, and
when it collapsed, ordinary Americans paid the price.
The Dot-Com Bubble: When Hype Became a Business Model
In the late 1990s, the invention of the
internet created a genuine technological revolution. But the excitement around
that revolution outpaced its reality. Investors poured money into internet
companies — called "dot-coms" — many of which had no profits, no
sustainable business models, and in some cases, no actual product beyond a
website. Stock prices soared on pure speculation. Companies like Pets.com
raised hundreds of millions of dollars in investor money before going bankrupt
within a year of their initial public offering.
When the bubble burst in 2000, the Nasdaq
stock index — where most tech stocks traded — fell approximately 78% from its
peak. An estimated $6.2 trillion in household wealth was destroyed over two
years. Small investors who had put their retirement savings into tech stocks
lost everything. The executives and investment bankers who had hyped these
stocks and collected massive fees? Most of them kept their money.
Adjusted for inflation, the dot-com crash led to losses of approximately $9 trillion in total wealth — close to the entire annual output of the U.S. economy at that time.
The Housing Bubble: The Crash That Shook the World
The dot-com crash led the Federal Reserve
to slash interest rates, which made borrowing cheap. That cheap money flowed
into the housing market. Between 1998 and 2006, the price of the average
American home increased by 124%. Banks and mortgage lenders, sensing enormous
profits, began issuing mortgages to people who could not realistically repay
them — a practice known as "predatory lending." These loans were
called "subprime mortgages."
What happened next is crucial to
understand. The banks did not hold these risky loans. They bundled them
together into complex financial products called "mortgage-backed
securities" and sold them to investors around the world. Rating agencies —
companies paid by the banks to assess the risk of these products — gave them
top safety ratings, often with little real analysis. Why? Because the banks
were paying them, and the banks wanted their products to look safe.
This is one of the purest examples of what economists call a "conflict of interest" — a situation where the person responsible for giving you an honest assessment is being paid by the person who benefits from a dishonest one.
In 2004, the Securities and Exchange
Commission loosened rules that limited how much banks could borrow against
their assets. Investment banks including Bear Stearns, Lehman Brothers, and
Merrill Lynch began borrowing at ratios of 30-to-1 — meaning for every $1 of their own capital, they controlled roughly $30 in assets, $29 of it borrowed. When housing prices
began to fall in 2006 and 2007, these institutions had nowhere to hide.
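The danger in that ratio is simple arithmetic: at 30-to-1 leverage, a fall in asset values of just one-thirtieth, about 3.3 percent, wipes out a firm's entire capital. A minimal sketch (the 30-to-1 ratio is the passage's; the illustrative dollar amounts are ours):

```python
equity = 1.0                  # the bank's own capital, in dollars
leverage = 30                 # dollars of assets per dollar of equity
assets = equity * leverage    # $30 of assets...
debt = assets - equity        # ...$29 of it borrowed

decline = 1 / leverage        # a roughly 3.3% drop in asset values
assets_after = assets * (1 - decline)
equity_after = assets_after - debt  # what is left after the debt is repaid

print(f"a {decline:.1%} decline leaves equity of ${equity_after:.2f}")
```

Any decline larger than that leaves the firm owing more than it owns, which is why falling housing prices in 2006 and 2007 left these institutions "nowhere to hide."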
Lehman Brothers — once the fourth-largest
investment bank in the United States — collapsed in September 2008, triggering
the largest stock market crash since the Great Depression. The Dow Jones
Industrial Average fell more than 50% from its peak to its lowest point in
early 2009. Nearly 9 million Americans lost their jobs. Up to 10 million lost
their homes. U.S. housing prices fell nearly 30% on average.
The government response: a $700 billion
bailout of the financial sector, known as the Troubled Asset Relief Program
(TARP). The banks were saved. The homeowners who had been given predatory loans
were not. One analyst summarized it clearly: "Wall Street came out much
better than Main Street."
Policy Capture: When the Regulators Work for the Regulated
A key factor in both collapses was what
researchers call "regulatory capture" — the process by which the
agencies created to oversee an industry are gradually taken over, ideologically
or politically, by that industry's interests. In the years before 2008, the
Federal Reserve, despite having the legal authority to regulate mortgage
lending, declined to do so. The SEC loosened leverage rules at exactly the
moment when leverage was becoming most dangerous. The rating agencies gave safe
ratings to products they knew were risky.
How does this happen? Through lobbying.
Through campaign contributions. Through the revolving door between government
and industry, where regulators retire to the industries they once oversaw — and
where industry executives take government positions that allow them to weaken
the rules that once constrained them.
From 1998 to 2008, the financial industry
spent more than $5 billion on lobbying and campaign contributions in
Washington. The result: less regulation, fewer protections for consumers, and
eventually, a global economic catastrophe whose costs were borne by the people
least responsible for it.
What Comes Next? The AI Bubble
History does not repeat itself exactly, but
it rhymes. As of 2025-2026, trillions of dollars are being invested in
artificial intelligence companies, many of which have not yet demonstrated
sustainable profits. Nvidia's stock at various points in 2024 commanded a
price-to-earnings ratio — the measure of how much investors are paying for each
dollar of actual profit — of well over 60, meaning investors were paying more than $60 for every $1 the company actually earned. By comparison, the broader stock market's
historical average is closer to 15 to 20. The hype around AI is real and the
technology is powerful. But the same things were said about the internet in
1999.
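One way to make the price-to-earnings comparison concrete is to invert it into an "earnings yield": the profit currently earned per dollar invested. A short sketch (the P/E figures of 60 and 15-20 are the passage's; the yield framing is ours):

```python
def earnings_yield(pe_ratio: float) -> float:
    """Current profit per dollar invested: the inverse of the P/E ratio."""
    return 1 / pe_ratio

# At a P/E of 60, each invested dollar buys under 1.7 cents of current profit;
# at the market's historical average of 15 to 20, it buys 5 to 6.7 cents.
print(f"{earnings_yield(60):.1%}")   # P/E 60
print(f"{earnings_yield(15):.1%}")   # P/E 15
print(f"{earnings_yield(20):.1%}")   # P/E 20
```

Seen this way, the question for investors is whether future profit growth can close a gap of three to four times the historical norm.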
Whether the current AI investment surge
becomes a bubble or a justified transformation remains to be seen. But the
pattern — extraordinary profits for early insiders, valuations disconnected
from current reality, and ordinary Americans' retirement funds and savings
exposed to the risk — is familiar. Students studying this period of history
should ask: who profits if the predictions come true? And who pays if they
don't?
Socratic Seminar Questions

1. The dot-com bubble and the housing bubble both followed a similar pattern: excitement, overvaluation, collapse. Why do you think these patterns repeat, even when the previous crash is still in living memory?

2. Rating agencies gave toxic mortgage-backed securities top safety ratings because the banks were paying them. Can you think of other situations where the entity paying for an assessment has an incentive to receive a positive one? (Think: college rankings, restaurant reviews, product safety testing.)

3. In 2008, the government bailed out the banks but not the homeowners. Was this the right choice? What would have happened if the government had done the opposite — helped homeowners but let the banks fail?

4. What is a "conflict of interest"? Can you find three examples of conflicts of interest in the financial crisis story? Can you find one in your own school or community?

5. Some economists argue that bubbles are an inevitable part of capitalism — that human psychology naturally leads to cycles of boom and bust. Others argue that better regulation can prevent them. Which side does the evidence from this passage support?

6. Do you think the AI investment surge we are seeing today is a bubble, a justified revolution, or both? What evidence would you need to know for sure?
Food for Thought

• The people who designed and sold the toxic mortgage products that caused the 2008 crash were mostly not prosecuted. Not a single senior executive from a major Wall Street bank went to prison. Does the absence of legal consequences change how you think about what happened — or about who is likely to do it again?

• The U.S. government printed and borrowed trillions of dollars to save the financial system in 2008. This money was ultimately paid for by taxpayers over years and decades. Who received the benefit? Who paid the cost? Is this a form of redistribution?

• Economists call the cycle of stability leading to risk-taking leading to collapse the "Minsky moment," after economist Hyman Minsky. His argument was simple: the longer things are stable, the more confident people become, and the more recklessly they behave — until they can't. If this is true, is the next crash not a matter of if, but when?

• The same financial firms that caused the 2008 crash — Goldman Sachs, JPMorgan, Citigroup — are today among the largest investors in AI. Does knowing this change how you think about the AI investment boom?
|
Verify the Facts —
Sources to Check |
|
Financial Crisis Inquiry Commission (2011). "The Financial
Crisis Inquiry Report." fcic.law.stanford.edu — The official U.S.
government investigation. Freely available online. |
|
Wikipedia: "2008 Financial Crisis" — Extensively
sourced summary with original references.
wikipedia.org/wiki/2008_financial_crisis |
|
FiveThirtyEight (2014): "Why the Housing Bubble Tanked the
Economy and the Tech Bubble Didn't." fivethirtyeight.com |
|
Council on Foreign Relations: "The U.S. Financial
Crisis" — Timeline of the crisis. cfr.org |
|
Retro Report (2025): "The 2008 Financial Crisis Explained:
Housing Bubble to Bailout." retroreport.org |
PASSAGE 4
| The Pattern
Greed Nation: Connecting the Dots
What Do a Diabetes Drug, a Broken-Down RV, and a Global Recession Have in
Common?
At first glance, Ozempic, vanlords, and the
2008 financial crisis may seem like unrelated stories. But they share a common
structure — a pattern that has appeared again and again in American economic
history, and which, if students learn to recognize it, becomes visible in
nearly every major public debate about money, power, and who benefits.
The pattern works like this:
Step 1: A company or industry identifies a human need —
health, shelter, retirement security, communication.
Step 2: The company gains a dominant or monopoly position
through patents, market power, or the elimination of competition.
Step 3: It uses that position to charge the maximum possible
price — not the fair price, not the price that reflects actual cost, but the
highest price it can extract before people either give up or die.
Step 4: It deploys money — through lobbying, campaign
contributions, and the revolving door between industry and government — to
prevent regulation that might limit its power.
Step 5: When the extraction causes a crisis — a health
disaster, a housing emergency, an economic collapse — the cost is transferred
to the public, while the private profits remain private.
|
Novo
Nordisk extracts profit from diabetics. Vanlords extract profit from the
unhoused. Wall Street extracted profit from homeowners and pension funds —
and when the entire system collapsed, it extracted $700 billion more from
taxpayers. The mechanism is the same. Only the industry changes. |
The Question of Complicity
One of the hardest parts of studying this
pattern is confronting the question of complicity. These systems do not run
themselves. They require lawyers who draft the patent filings. They require
accountants who structure the profit margins. They require politicians who
accept the campaign contributions and decline to act. They require regulators
who choose not to regulate. They require economists who write papers justifying
the status quo. They require journalists who don't run the story, and voters
who don't demand accountability.
None of these people think of themselves as
villains. Most of them are doing their jobs, following the incentives they have
been given, operating within systems they did not create. This does not make
the outcomes less real. It raises a harder question: How do you change a system
when almost everyone inside it is benefiting from it in some way, or simply has
too much to lose by challenging it?
What Students Can Do
This series is not designed to make you
feel hopeless. It is designed to make you see clearly. The first step in
changing any system is understanding how it actually works — not how it claims
to work, not the version taught in a textbook designed to make existing
institutions look neutral and rational, but the actual mechanics of power and
profit.
You will graduate into an economy shaped by
these forces. You will pay for insurance, take medications, apply for housing,
invest in retirement, and vote for people who make decisions about all of these
things. The more clearly you understand what is happening and why, the less
easily you can be manipulated by the language that is used to justify it —
words like "innovation," "market efficiency,"
"property rights," and "free enterprise." These words are
not wrong. But they are often used as shields behind which very specific
interests are protected.
The question
is not whether greed exists. Of course it does. The question is whether we
build systems that harness it productively, limit its worst excesses, and
distribute its benefits broadly — or whether we build systems that allow it to
feed on human need without limit or accountability.
That question is not settled. It is being
answered, right now, by the choices of the generation you belong to.
|
Socratic Seminar Questions |
|
1. In your own words, describe the five-step pattern identified
in this passage. Can you find a current news story that fits this pattern? |
|
2. The passage says that the people who sustain these systems
often don't think of themselves as villains. Do you find this convincing?
Does intent matter when the outcome causes harm? |
|
3. The passage lists several groups as "complicit" in
maintaining systems of exploitation: lawyers, accountants, politicians,
regulators, economists, journalists, voters. Is it fair to include all of
these groups? Are any of them more responsible than others? |
|
4. What is the difference between "free enterprise" as
a concept and the actual market conditions described in these passages? Can
you have free enterprise without competition? |
|
5. If you were designing an economic system from scratch, what
rules would you put in place to prevent the patterns described in this
series? What problems might your rules create? |
|
6. The series ends with the claim that your generation will
answer the question of what kind of economy we build. Do you believe this?
What would it actually take to change the systems described here? |
|
Food for Thought |
|
• Economist Milton Friedman famously argued that a corporation's
only responsibility is to maximize profits for its shareholders. Economist
Joseph Stiglitz and others have argued that this idea, taken to its logical
conclusion, produces exactly the outcomes described in this series. Who do
you think is right — and what evidence from these passages supports your
view? |
|
• The United States spends more on healthcare per person than any
other wealthy nation, and yet has worse health outcomes on most major
measures. It has some of the highest housing costs among developed nations.
It has the largest wealth gap of any G7 country. Are these facts connected to
the patterns described in this series, or are they coincidences? |
|
• History shows that major reforms — child labor laws, the
40-hour workweek, Social Security, civil rights legislation — happened not
because powerful interests willingly gave up power, but because ordinary
people organized, demanded change, and sometimes paid a price for it. What
would it take for a similar shift to happen now? What stands in the way? |
|
• We use the word 'greed' as though it is a character flaw —
something that bad people have. But economist Adam Smith, who is considered
the father of modern capitalism, argued that self-interest, properly
channeled, produces good outcomes for society. At what point does
self-interest become greed? And who decides? |
|
SECTION 3:
What Is a Socratic Seminar? |
An Overview for Educators and
Students, Based on Socratic Seminars International
The Idea
Behind the Method
A Socratic
seminar is a structured intellectual discussion in which a group of
participants — students, adults, or anyone seeking understanding — explore a
shared text through carefully prepared open-ended questions. The method draws
its name from the Greek philosopher Socrates, who taught not by lecturing but
by questioning: asking his students to examine their assumptions, define their
terms, and follow an argument to its logical conclusion, even when that
conclusion was uncomfortable.
Socratic
Seminars International, founded by educator Oscar Graybill, formalized and
expanded the method for use in modern classrooms. Their core definition: a
Socratic seminar is "a collaborative, intellectual dialogue facilitated
with open-ended questions about a text." The emphasis is on dialogue — not
debate. In a debate, participants defend fixed positions and try to win. In a
Socratic seminar, participants explore multiple perspectives and try to
understand.
|
The
goal of a Socratic seminar is not agreement. It is not the defeat of one
argument by another. It is a deeper, more nuanced understanding of difficult
ideas — the kind of understanding that only emerges when people are honest
about what they don't know and genuinely curious about what they might be
missing. |
How It
Differs from a Typical Classroom Discussion
|
TYPICAL CLASS DISCUSSION
• Teacher asks questions; students answer
• Conversation flows through the teacher
• Right answers are the goal
• Students respond to earn participation credit
• Teacher evaluates accuracy of responses
• Passive listening between turns |
SOCRATIC SEMINAR
• Students ask and answer each other's questions
• Conversation flows directly between students
• Better questions are the goal
• Students respond because they are genuinely curious
• Participants assess quality of the dialogue collectively
• Active listening is a core skill being practiced |
Three
Types of Questions
Socratic
Seminars International identifies three levels of questions, each serving a
distinct purpose in deepening understanding. These correspond roughly to
Benjamin Bloom's taxonomy of learning, and a well-run seminar moves through all
three:
|
1 |
Opening / Grounding
Questions These connect the text to the student's own experience. They
are designed to give everyone an accessible entry point and establish that
the conversation is rooted in real life. Example: 'Have you or has someone
you know ever been unable to afford a medication? How did that feel?' These
questions open the seminar and ensure no one starts from silence. |
|
2 |
Core Analytical Questions These push students into the text itself — asking them to
analyze claims, evaluate evidence, identify assumptions, and examine the
author's reasoning. Example: 'The passage states that Novo Nordisk's drug can
be made for $4.73 per month but costs nearly $1,000. What assumption must be
true for this pricing to be considered legitimate?' These are the heart of
the seminar. |
|
3 |
Closing / Synthesis
Questions These expand the conversation beyond the specific text to
larger principles, patterns, or implications. Example: 'If the pattern
described in this series repeats across tobacco, lead, PFAS, and pesticides,
what does this tell us about the reliability of corporate self-regulation?'
Closing questions send students out of the room still thinking. |
The
Recipe for Success (Socratic Seminars International)
Oscar Graybill
and Socratic Seminars International describe what they call a "Recipe for
Success" for facilitating effective seminars. The key elements:
|
ELEMENT — WHAT MAKES A
SOCRATIC SEMINAR WORK |
|
A rich,
challenging text — not too long to read closely, not so simple it yields no
complexity |
|
Preparation —
students must have read the text before arriving; annotation is strongly
encouraged |
|
Open-ended
questions — questions whose value lies in their exploration, not their answer |
|
A circle
arrangement — everyone must be able to see everyone else; hierarchy is
physically reduced |
|
Ground rules
established in advance — respect, evidence, no personal attacks, listening
before responding |
|
A
facilitator, not a teacher — the leader asks opening questions and steps
back; students drive |
|
Reflection —
after every seminar, participants evaluate the quality of the conversation,
not just its content |
|
No bad
seminar if you reflect — even a seminar that struggles teaches the group
something about dialogue |
The
Fishbowl Format (Recommended for These Readings)
For classes of
20 or more students, Socratic Seminars International and most classroom
practitioners recommend the fishbowl format:
|
1 |
Set Up Two Circles Arrange chairs in an inner circle (speakers) and an outer
circle (observers). The inner circle holds the active discussion. The outer
circle observes, takes notes, tracks participation, and prepares to swap in. |
|
2 |
Inner Circle Discusses The inner circle (typically 8–12 students) discusses the text
using prepared questions. The teacher/facilitator asks one opening question
and then steps back. Students call on each other, build on each other's
ideas, and refer back to the text. |
|
3 |
Outer Circle Observes Outer circle students use a structured observation form:
tracking which students speak, noting strong arguments, recording unanswered
questions, and preparing their own contributions for when they rotate in. |
|
4 |
Rotate and Continue After 15–20 minutes, circles switch. New inner circle students
bring fresh questions informed by what they observed. Repeat. |
|
5 |
Full Group Debrief Close with 10 minutes of whole-group reflection: What was the
strongest argument made? What question still hasn't been answered? What would
you want to say now that you didn't say during the seminar? |
Ground
Rules for Students
Post these or
distribute them before every seminar. Adapted from Socratic Seminars
International and NCTE guidelines:
|
SOCRATIC SEMINAR GROUND RULES |
|
Refer to the
text — support every major claim with evidence from the reading |
|
Listen before
responding — do not begin formulating your reply while someone else is still
speaking |
|
Build, don't
just react — acknowledge what the previous speaker said before adding your
own idea |
|
Ask questions
more than you make statements — curiosity is more valuable than certainty
here |
|
Invite
quieter voices — if someone has not spoken, invite their perspective |
|
No put-downs
— challenge ideas, never people; disagree with reasoning, not with the person |
|
Be willing to
change your mind — the best seminars end with people thinking differently
than when they began |
|
Stay in the
text — personal anecdotes are welcome as illustrations, not as substitutes
for evidence |
Assessing
Socratic Seminars
Socratic
Seminars International is clear: the most important measure of success is not
whether students reached correct conclusions, but whether the dialogue itself
was substantive, text-grounded, and collegial. Reflection is described as the
key to improvement.
Suggested
assessment approaches for these reading series:
|
ASSESSMENT APPROACH — OPTIONS
FOR EDUCATORS |
|
Self-assessment:
Students rate their own participation on dimensions of listening, evidence
use, and question quality |
|
Peer
assessment (outer circle): Observers track speakers and note strongest
contributions using a structured rubric |
|
Pre/post
writing: Students write a 200-word response to the opening question before
the seminar; revise it after |
|
Exit ticket:
One question they still have; one argument they found most persuasive; one
idea they changed their mind about |
|
Quality of
questions: Evaluate student-written questions on whether they are open-ended,
text-grounded, and generative |
|
Process over
content: Grade on demonstrated listening (did students build on each other?)
rather than correctness of views |
|
SECTION 4:
How to Read Closely — The Mortimer J. Adler Method |
Based on How
to Read a Book by Mortimer J. Adler and Charles Van Doren (1940; revised 1972,
Simon & Schuster)
Why
Adler Matters for These Readings
Mortimer J.
Adler was an American philosopher and educator who spent his career arguing
that reading is a skill most people are never fully taught — and that genuine
understanding requires far more than decoding words on a page. His 1940 book
How to Read a Book, revised in 1972 with Charles Van Doren, remains one of the
most practical and influential guides to active, critical reading ever written.
The passages in
Greed Nation and Poisoned for Profit are designed to be read, not skimmed. They
contain layered arguments, verifiable claims, and deliberate tensions. Applying
Adler's method to these passages will help students get far more from each reading
— and will prepare them to participate in Socratic seminars with the kind of
specific, text-grounded understanding that makes those seminars valuable.
|
Adler's
core argument: Most people read to be entertained or to collect information.
Very few people read to understand. Understanding — the kind that changes how
you think — requires active effort, good questions, and a willingness to
argue back at the page. |
The Four
Levels of Reading
Adler
identifies four levels of reading, each building on the previous. Think of them
as reading to decode, reading to survey, reading to understand, and reading to
master.
|
Level 1: Elementary Reading
Core question: What does this sentence say?
What you do: Decode the words. Understand basic grammar and vocabulary.
Identify unfamiliar terms.

Level 2: Inspectional Reading
Core question: What is this text about? What is its structure?
What you do: Skim quickly. Read titles, headings, callout boxes. Read first
and last paragraphs. Form a mental map before reading carefully.

Level 3: Analytical Reading
Core question: What does this text mean? Is it true?
What you do: Read deeply and completely. Identify the author's argument, key
claims, and evidence. Evaluate logic and evidence quality. Argue back at the
text.

Level 4: Syntopical Reading
Core question: How does this compare to other texts on the same subject?
What you do: Read multiple texts on the same topic. Compare claims, identify
agreements and conflicts, develop your own position through synthesis. |
For these
reading series, students should be working at Level 3 (Analytical Reading) for
each passage, and by the final synthesis passage of each series, they should be
working at Level 4 — comparing arguments across all the passages they have
read.
Adler's
Four Core Questions for Every Text
Adler argues
that every serious reader must ask four questions of every text they read
analytically. These are not optional. They are the work:
|
1 |
What is this text about, as
a whole? Not a summary of every paragraph — a single sentence or two
capturing the central argument or main idea. If you cannot state it in one
sentence, you do not yet understand the text well enough to discuss it.
Before your Socratic seminar, write this sentence in the margin or at the top
of your notes. |
|
2 |
What is being said, in
detail? What are the key claims and how are they supported? Identify the three to five most important specific claims in
the text. For each one, ask: What is the evidence? Is the evidence
sufficient? Could the evidence be interpreted differently? Mark these
passages. They are your ammunition for the seminar. |
|
3 |
Is this text true, in whole
or in part? This is the hardest question and the one most readers skip. Do
not simply accept what you have read. Ask: Does this argument follow
logically? Are the sources reliable? Does the evidence actually support the
conclusion? Are there facts that seem designed to provoke rather than to
inform? Even a passage you largely agree with deserves this scrutiny. |
|
4 |
What of it? Why does this
matter? What are the implications of this text if it is true? What
would have to change — in policy, in behavior, in how you think about the
world — if everything in this passage is accurate? This is the question the
Socratic seminar's 'Food for Thought' sections are designed to push you
toward. |
Adler's
Rules for Active Reading
Adler is
explicit: a book (or passage) you have not written in is one you have not
really read. He recommends treating annotation as a conversation with the
author — a way of arguing back, asking questions, and making the ideas your
own. For these reading series, students should annotate every passage before
the seminar.
|
ADLER'S ANNOTATION SYSTEM —
ADAPT THESE FOR EACH PASSAGE |
|
Underline
main ideas and key arguments — one main idea per paragraph maximum |
|
Circle
unfamiliar or significant words and look them up before the seminar |
|
Write a
question mark (?) next to anything you don't understand or don't believe |
|
Write an
exclamation mark (!) next to anything that surprises or disturbs you |
|
Star (*) the
one sentence per passage that you think is most important |
|
Write a brief
summary (2–3 words) in the margin next to each major section |
|
At the end:
write your answer to 'What is this passage about, as a whole?' in one
sentence |
|
Write your
answer to 'Is this true?' — note at least one point of agreement and one
point of doubt |
Applying
Adler to These Specific Passages
For Greed
Nation and Poisoned for Profit, here is how to apply Adler's method before your
Socratic seminar:
|
1 |
Before You Read:
Inspectional Reading (5 minutes) Read the passage title, all section headings, the stat boxes
and callout quotes, and the Socratic seminar questions at the end — before
reading the body text. This gives your brain a framework. You will understand
the detailed argument far better if you know where it is going. |
|
2 |
First Read: Analytical
Reading — Don't Stop (15–20 minutes) Read the entire passage through without stopping to look
things up. Mark confusing passages with a question mark. Underline sentences
that feel important. Keep moving. Adler is emphatic: the first read should be
complete, even if parts are unclear. |
|
3 |
Second Read: Active
Annotation (10–15 minutes) Re-read more slowly, applying the annotation system above. For
each major section, write a two-word summary in the margin. Circle the one
claim per section you most want to question in the seminar. Write your
question in the margin. |
|
4 |
Verify the Sources (10
minutes, shared with class) Each passage includes a 'Verify the Facts' source box. Look up
at least one source before the seminar. Does the source say what the passage
claims it says? Is there context the passage left out? Finding a discrepancy
— or confirming accuracy — gives you powerful material for discussion. |
|
5 |
Write Your Opening
Contribution (5 minutes) Before the seminar, write one sentence you intend to say in
the first five minutes — either a question you want to raise, a claim you
want to challenge, or an observation you want to test against the group.
Having something prepared prevents the paralysis of the blank page and
ensures you enter the conversation with purpose. |
Syntopical
Reading Across the Series
By the time
students reach the synthesis passages at the end of each series, Adler would
say they are ready for Level 4: syntopical reading — reading multiple texts on
the same subject in order to develop their own position through comparison and
synthesis.
After
completing both series, students should be asked:
|
Across
all nine passages (four in Greed Nation, five in Poisoned for Profit), what
is the single most important pattern you see? Where do the series agree?
Where do they diverge? If everything in both series is accurate, what is the
most urgent implication for your generation — and what is one concrete thing
that would need to change for that implication to be addressed? |
This is the
question Adler's method is designed to make answerable — not because there is
one right answer, but because close reading of multiple sources prepares the
mind to form an honest, well-reasoned position of its own. That is, as Adler
would say, what it means to become a truly competent reader.
|
QUICK REFERENCE: Combining
Adler + Socratic Seminar for These Readings |
Print or post
this for students to use as a preparation checklist.
|
THE COMPLETE PREPARATION
CHECKLIST (STUDENT VERSION) |
|
BEFORE YOU
READ: Skim titles, headings, callout boxes, and the Socratic questions at the
end |
|
FIRST READ:
Read all the way through. Mark confusing parts with '?'. Don't stop. |
|
SECOND READ:
Annotate. Underline main ideas. Circle key words. Star the most important
sentence. |
|
ANSWER
ADLER'S QUESTION 1: Write one sentence: 'This passage is about
________________.' |
|
ANSWER
ADLER'S QUESTION 2: List 3 specific claims and their evidence. Do you believe
the evidence? |
|
ANSWER
ADLER'S QUESTION 3: Write one thing you believe is true and one thing you
want to challenge. |
|
ANSWER
ADLER'S QUESTION 4: If all of this is accurate, what should change? Why does
it matter? |
|
VERIFY A
SOURCE: Look up at least one source from the 'Verify the Facts' box. Does it
check out? |
|
WRITE YOUR
OPENING CONTRIBUTION: One sentence or question you will raise in the first 5
minutes. |
|
DURING THE
SEMINAR: Listen before speaking. Build on what others say. Refer back to the
text. |
|
AFTER THE
SEMINAR: Write your exit ticket — one question you still have; one idea you
revised. |
References
and Sources for This Package
Adler, Mortimer
J. and Van Doren, Charles. How to Read a Book: The Classic Guide to Intelligent
Reading. Revised edition. Simon & Schuster, 1972.
Socratic
Seminars International. socraticseminars.com — Graybill's "Recipe for
Success" and ground rules.
Israel, Elfie.
"Examining Multiple Perspectives through Socratic Seminars."
ReadWriteThink, readwritethink.org.
National
Council of Teachers of English. "Crafting and Conducting a Successful
Socratic Seminar." NCTE.org, 2017.
Facing History
& Ourselves. "Socratic Seminar Teaching Strategy."
facinghistory.org.
Spencer
Education. "Designing Socratic Seminars to Ensure That All Students Can
Participate." spencereducation.com, 2026.
THE CHAOS CODE
A Socratic Seminar Reading Series for High School Students
How Social Media, Big Tech, and AI Are Engineering Fear,
Addiction, and Profit at the Cost of Your Mind
You did not
choose to be addicted to your phone. The addiction was built for you —
engineered by teams of behavioral scientists, neuroscientists, and machine
learning systems whose sole job was to figure out what keeps you scrolling one
second longer. That extra second is worth money. Billions of seconds, from
billions of users, add up to billions of dollars in advertising revenue. And
what extracts those seconds is not entertainment or
connection. It is fear. It is anger. It is the particular dread that something
important is happening right now, and you are missing it.
This reading
series examines four interlocking forces that are reshaping how young people —
and everyone else — experience reality: the algorithms engineered to keep you
in a state of emotional agitation, the collapse of online information into
AI-generated garbage designed to harvest clicks rather than inform, the
conversion of every digital tool into a subscription that extracts money
indefinitely, and the concentrated power of a handful of billionaires who built
these systems and who are now using them in ways that go far beyond
entertainment.
These are not
opinions. They are documented engineering decisions, published research
findings, internal documents released through litigation and whistleblower
testimony, and Senate hearing records. The sources are listed. Check them.
These facts belong to you.
PASSAGE 1
| The Algorithm of Outrage
Engineering Anger: How Social Media Platforms Are Designed to
Keep You Afraid and Scrolling
The
Casino in Your Pocket
In 1953,
psychologist B.F. Skinner discovered something that would later become the
foundation of the most profitable business model in human history. He found
that laboratory rats pressed a lever most compulsively — not when they received
a food reward every time, but when rewards came randomly and unpredictably. He
called this "intermittent reinforcement." It is the same mechanism
that makes slot machines the most profitable gambling device ever invented. It
is also the core design principle behind every major social media notification
system on earth.
The engineers
who built Facebook's "Like" button understood this. Former Facebook
President Sean Parker — one of the company's earliest executives — has stated
publicly that the platform was built to exploit "a vulnerability in human
psychology." He said the design question they asked themselves was:
"How do we consume as much of your time and conscious attention as
possible?" Their answer: give people "a little dopamine hit every
once in a while" by showing them likes and comments on their posts, which
creates "a social validation feedback loop" — exactly like a slot
machine. Parker made these comments in 2017, after leaving Facebook, noting
that "God only knows what it's doing to our children's brains."
|
Neta
Alexander, an assistant professor at Yale who co-teaches a seminar called
"Media Anxieties," describes social media platforms as
"designed to be addictive by using intermittent rewards and trying to
invoke negative emotional responses such as rage, anxiety and jealousy, which
are known to prolong our engagement and deepen our attachment to our
devices." This is not a side effect of the design. It is the goal. |
Why Fear
and Anger Are the Algorithm's Favorite Emotions
Not all
emotions are equal in the attention economy. Research published in
Psychological Science in 2024 found that constant exposure to emotionally
charged social media content heightens stress, anxiety, and feelings of
paranoia — and that content eliciting extreme emotional responses, particularly
fear and anger, keeps users scrolling the longest. Platforms discovered this
not through academic research but through their own internal data: angry,
anxious users click more, comment more, share more, and return more frequently
than calm, satisfied users.
A 2021 internal
Facebook study — leaked through whistleblower Frances Haugen and reported by
the Wall Street Journal — found that the company knew its algorithm was
amplifying "anger and outrage" and that this was a direct result of a
2018 change to its ranking system designed to boost "meaningful social
interaction." When the company studied the change, it found that it was
increasing exposure to divisive, inflammatory content. An internal researcher
wrote that the change "may be one of the most powerful drivers of
misinformation." Facebook was shown this research. It proceeded anyway.
|
145 min |
The average time per day that humans worldwide now spend on
social media. Over a year, that is 145 × 365 ≈ 52,900 minutes (roughly 880
hours, or nearly 37 full days) spent in an engagement-optimization machine. |
The
neuroscience underneath this is increasingly well-documented. A 2025 paper
indexed in PubMed Central (PMC), the National Institutes of Health's
open-access archive, reported that frequent social media engagement alters
dopamine pathways in ways "analogous to substance addiction." Brain
imaging studies show changes in the prefrontal cortex — the part of the brain
responsible for impulse control and rational decision-making — and in the
amygdala, which processes threat and emotional response. Heavy social media
users show increased emotional sensitivity and compromised decision-making.
These are not metaphors. They are measurable changes in brain structure and
function produced by an app on a phone.
The
Infinite Scroll: Designed to Have No End
The infinite
scroll — the feature that causes your social media feed to load new content
automatically, with no natural stopping point — was invented in 2006 by Aza
Raskin, then a young interface designer. He has since publicly expressed
regret about the invention. In a 2018 interview, Raskin estimated that infinite
scroll causes users to spend an additional 200,000 hours on social media every
day — across all users — compared to a design that had natural stopping points.
He has said: "It's as if they took behavioral cocaine and sprinkled it on
the screen." He now works on technology ethics and has said the industry
has caused "a race to the bottom of the brain stem."
Infinite scroll
is not alone. "Autoplay" — the feature that automatically plays the next
video on YouTube and other platforms — has been identified by researchers as a
significant driver of radicalization, as recommendation algorithms
progressively serve more extreme content in search of higher engagement. The notification system — the red badge
on your app icon, the vibration of your phone, the sound that signals someone
has responded to you — is designed to create what behavioral scientists call
"checking behavior": compulsive, semi-involuntary rechecking of the
app at intervals throughout the day. In a Yale class experiment, one in ten
students could not stay off social media for 24 hours even after voluntarily
committing to do so. One student touched the Instagram icon involuntarily — as
a reflexive motor habit — before catching themselves.
|
According
to Pew Research Center, 59 percent of U.S. teenagers report feeling pressure
to look good or appear successful on social media. A two-week digital detox —
reducing social media to 30 minutes per day — significantly reduced anxiety,
depression, loneliness, and FOMO scores in clinical studies. The platforms
know this. A 30-minute default limit is not a feature any of them offers. |
Who Is
Responsible?
The companies
that built these systems — Meta (Facebook, Instagram, WhatsApp), Alphabet
(YouTube), TikTok, Snapchat, and X (formerly Twitter) — have, in every
public-facing statement, described their platforms as tools for connection,
community, and free expression. In testimony before Congress, their executives
have said they take the mental health of young users seriously. In internal
documents, emails, and studies made public through litigation and whistleblower
testimony, a different story appears: companies knowingly used design
mechanisms that exploited psychological vulnerabilities, observed the mental
health harms, and continued.
In January
2024, the CEOs of Meta, TikTok, X, Discord, and Snapchat were called before the
United States Senate Judiciary Committee. Parents of children who had died —
from suicide linked to Instagram use, from online extortion, from exposure to
harmful content — sat behind the executives. Senator Lindsey Graham said
directly to Mark Zuckerberg: "You have blood on your hands."
Zuckerberg turned to the gallery of grieving parents and said he was sorry. No
laws were passed. No executives faced personal legal liability. The platforms
continued operating under the same algorithmic structures that produced the
outcomes described in those parents' testimony.
|
Socratic Seminar Questions |
|
1. Sean Parker said
Facebook was designed to exploit "a vulnerability in human
psychology." He helped build it anyway. At what point does building
something you know is harmful become a moral failure — or a legal one? |
|
2. The Facebook internal
study showed the algorithm amplified misinformation and divisiveness. The
company chose not to change it. Is this different from a car company knowing
its brakes are defective and choosing not to issue a recall? Why or why not? |
|
3. Aza Raskin invented the
infinite scroll and now campaigns against it. He knew it was psychologically
manipulative when he built it. How much responsibility does an individual
engineer bear for the large-scale consequences of a design decision they were
paid to make? |
|
4. Research shows that
limiting social media to 30 minutes per day significantly improves mental
health outcomes. Platforms know this and do not offer it as a default. Why
not? What does the answer tell you about whose interests the platform serves? |
|
5. The Senate hearing with
tech CEOs produced no legislation and no personal liability. What does this
tell us about whether our existing political and legal systems are equipped
to address harms caused by technology companies? |
|
6. If you discovered that
a drug company had internal research showing its product caused addiction and
depression, and chose to continue selling it, you would probably describe
that as criminal. Is the social media situation meaningfully different? If
so, how? |
|
Food for Thought |
|
• The behavioral
mechanisms used by social media platforms — intermittent reinforcement,
variable reward, FOMO engineering — were first developed to understand and
treat addiction. The same science that helps therapists treat gambling
disorder was later used to maximize engagement on Facebook. What does it mean
that the tools of healing and the tools of exploitation are identical? |
|
• B.F. Skinner's rats did
not know they were in a Skinner box. Do you? The argument for platform
regulation is partly that users cannot give truly informed consent to systems
they do not understand. The argument against is that adults have the right to
make their own choices. Which side does the evidence support — and does your
answer change when the users are 13 years old? |
|
• Social media platforms
are free to use. This is their core value proposition. But the actual cost —
paid in attention, data, and mental health — is not disclosed in any terms of
service. If the full cost of using Instagram were printed on the login screen
the way cigarette warning labels are printed on packs, what would it say?
Write it. |
|
• Every major social media
company says its algorithm shows you what you want to see. But internal
research shows the algorithm shows you what keeps you engaged the longest —
which is not the same thing. If you could design the algorithm, what would
you optimize for instead of engagement? What would be lost? What would be
gained? |
|
Verify the Facts — Sources to Check |
|
Parker, Sean (November 2017). Public remarks at Axios event:
Facebook was built to exploit 'a vulnerability in human psychology.' Reported
by The Guardian: theguardian.com |
|
De et al. (2025). 'Social Media Algorithms and Teen Addiction:
Neurophysiological Impact.' PMC / National Institutes of Health.
pmc.ncbi.nlm.nih.gov/articles/PMC11804976/ |
|
Wall Street Journal (September 2021). 'The Facebook Files' —
Frances Haugen whistleblower documents. wsj.com |
|
Pew Research Center (2023). Teen social media and technology
survey — 59% of teens feel pressure to look good online. pewresearch.org |
|
Alexander, Neta. Yale Daily News (November 2024): 'Algorithmic
Manipulation: How Social Media Platforms Exploit Student Vulnerabilities.'
yaledailynews.com |
|
Raskin, Aza (2018). Interview on infinite scroll design regrets.
Reported by the BBC and multiple outlets. Raskin co-founded the Center for
Humane Technology. |
PASSAGE 2
| The Slop Economy
AI Slop, Misinformation Farms, and the Death of Truth Online
The 2025
Word of the Year: Slop
In January
2025, the American Dialect Society selected "AI slop" as its Word of
the Year. The choice was both a linguistic recognition and a cultural verdict.
AI slop is the term for content generated by artificial intelligence — images,
videos, articles, posts — that is produced in bulk for the sole purpose of
generating engagement and advertising revenue, with no regard for accuracy,
quality, or the effect on the people who consume it. It is not a minor or
fringe phenomenon. It is currently one of the dominant forces shaping what
people see, believe, and share online.
The mechanism
is straightforward and, once understood, impossible to unsee. Social media
platforms — Facebook, TikTok, Instagram, YouTube, and X — pay creators through
engagement-linked advertising programs. Content that gets more clicks, more
shares, more comments, and more time-on-screen generates more revenue. AI tools
now make it possible to produce this content at industrial scale, at near-zero
cost. The result: content farms staffed by operators in countries around the
world produce hundreds of thousands of AI-generated posts per day, optimized
algorithmically to trigger the exact emotional responses — outrage, grief,
sentimentality, shock — that the engagement algorithms reward.
|
15B |
The number of 'higher-risk' scam ads that Meta's own internal
documents estimated its users were exposed to per day in 2024, according to
reporting by Reuters. Advertising for scams and banned goods was projected to
bring in 10% of Meta's total annual ad revenue that year. |
The slop
economy has a human face. A Facebook account that regularly posts AI-generated
videos of an elderly man describing incontinent misadventures reportedly
generates its operator upward of $5,000 per month through Meta's creator
monetization program. "Shrimp Jesus" — an AI-generated image of a
half-human, half-crustacean figure that went viral on Facebook in 2024 — became
a symbol of the crisis: bizarre, meaningless, human-free content that the
algorithm amplifies because it generates reactions. Facebook's feed, as of
2024-25, is estimated to be significantly composed of AI-generated or
AI-assisted content, alongside a significant volume of outright scams. The
company earned revenue from all of it.
|
The
term 'enshittification' was coined by author and technologist Cory Doctorow
to describe the lifecycle of digital platforms: they begin by being genuinely
good for users to build a large audience; then they begin exploiting users to
serve advertisers; then, once users have nowhere better to go, they degrade
the product further to extract maximum profit. The word was added to major
dictionaries in 2024. It describes exactly what has happened to Facebook,
Google Search, and Amazon. |
When AI
Slop Becomes Dangerous
The
consequences of an information environment flooded with AI-generated content
range from embarrassing to genuinely deadly. In 2024, in the aftermath of
Hurricane Helene, Republican influencer Laura Loomer shared an AI-generated
image of a young girl clutching a puppy in floodwaters as supposed evidence of
government failure. The image was fake, and Loomer herself acknowledged it was
not real. It was still shared hundreds of thousands of times. On a smaller but
multiplied scale, this dynamic — emotionally compelling fake content amplified
by algorithms and shared by real people before being fact-checked — is now a
routine feature of every major breaking news event.
The
consequences are not limited to politics. During the COVID-19 pandemic,
AI-assisted misinformation about vaccines spread on social media platforms
faster than official public health guidance. A study published in The Lancet
found that social media misinformation was directly associated with lower
vaccination rates. On mental health: algorithms that track users who engage
with content about depression or self-harm have been shown to progressively
serve those users more of the same content, deepening isolation rather than
offering support. Instagram's own internal research, made public through
Frances Haugen's 2021 disclosure, found that the platform made body image
issues worse for one in three teenage girls — and that the company chose not to
act on this finding.
|
1 in 3 |
Teen girls whose body image problems were made worse by
Instagram use, according to Facebook's own internal research — research the
company chose not to act on, according to documents disclosed by
whistleblower Frances Haugen in 2021. |
The
Political Weaponization of Slop
AI slop has
moved from content farms into the political mainstream. In 2025, Wired magazine
described Donald Trump as "the first AI slop president" — documenting
his use of AI-generated images on official government social media accounts
depicting himself as a pope, a fighter pilot, and a muscular action hero
wielding a lightsaber. In August 2024, Trump shared AI-generated images on
Truth Social showing Taylor Swift fans in "Swifties for Trump"
T-shirts and appearing to endorse his campaign. Swift had not endorsed Trump.
The images were fabricated. They were shared by Trump's social media accounts
to millions of followers.
The initial
version of the Make Our Children Healthy Again Assessment — the health report
issued by Robert F. Kennedy Jr.'s commission in 2025 — reportedly cited
nonexistent references generated by AI. The AI had fabricated sources that did
not exist. Those fabricated citations were in a document released by a
cabinet-level U.S. government commission. More than half of the entries in
Grok's AI-powered encyclopedia (Grokipedia, created by Elon Musk's xAI) were
found to be at least partly copied from Wikipedia, with entries on
controversial topics significantly rewritten to reflect a specific narrative —
without disclosing to readers that the rewriting had taken place.
This is the
endpoint of the slop economy: when the tools designed to generate
engagement-optimized fake content are deployed not by anonymous content farms
but by heads of state and cabinet officials, the informational infrastructure
that democracy requires to function is directly threatened. A citizenry that
cannot distinguish verified fact from AI-generated fabrication cannot make
informed political decisions. That is not a side effect. For some of the people
deploying these tools, it appears to be the goal.
|
Socratic Seminar Questions |
|
1. Meta earns revenue from
scam ads and AI slop content alongside legitimate advertising. Is this
meaningfully different from a newspaper that prints a fraudulent
advertisement? Who bears the moral and legal responsibility for the harm
caused? |
|
2. The platform algorithm
that amplifies AI slop is the same algorithm that amplifies misinformation
about vaccines, disasters, and elections. Is this a technology problem —
something that better AI could fix — or a business model problem that better
AI would only make worse? |
|
3. Instagram's internal
research showed its platform worsened body image problems for one in three
teenage girls. The company did not act on this finding. Using what you know
about corporate decision-making from the previous reading series, why might a
company choose not to act on evidence that its product is causing harm? |
|
4. Donald Trump shared
AI-generated images of Taylor Swift endorsing his candidacy. His cabinet's
health report cited sources that did not exist, apparently generated by AI.
What should the legal consequences of this be — if any? What makes it
difficult to hold public officials legally accountable for AI-generated
misinformation? |
|
5. The word
'enshittification' describes platforms that start good, then progressively
degrade their product to extract maximum profit. Can you think of other
products or services — beyond digital platforms — that follow this same
lifecycle? What does it suggest about what markets optimize for? |
|
6. You personally consume
information online every day. What specific practices could you use, starting
tomorrow, to identify whether content you are seeing is AI-generated,
algorithmically amplified misinformation, or genuine? What is making that
harder, and what is making it easier? |
|
Food for Thought |
|
• In the 1930s, Nazi
Germany used film, radio, and posters to flood the information environment
with a specific narrative — not primarily by making people believe lies, but
by making people so confused and overwhelmed by competing claims that they
stopped believing anything was true. Political theorist Hannah Arendt called
this the goal of totalitarian propaganda: not the triumph of the lie, but the
destruction of the category of truth itself. Does the current AI slop environment
serve a similar function, intentionally or not? |
|
• Facebook earns
approximately 10% of its total annual ad revenue from scam and fraudulent
advertising, according to its own internal estimates. YouTube pays creators
whose content generates engagement, regardless of whether that content is
real or fabricated. The business model, in other words, is not accidental to
the slop problem — it is its cause. If you wanted to fix the slop problem,
would you fix the technology, or would you change the business model? Why? |
|
• A researcher who studies
algorithmic radicalization on YouTube described the process as follows: the
algorithm doesn't start showing you extremist content. It starts showing you
content that's slightly more engaging than what you just watched. Over
hundreds of sessions, 'slightly more engaging' leads to 'dramatically more
extreme.' Can you think of any personal experience — with media, with friend
groups, with any gradual change — that follows the same logic? |
|
• The American Dialect
Society's 2025 Word of the Year is 'AI slop.' Every year, the words a culture
chooses to name — or refuses to name — reveal something about what that
culture is processing and what it is afraid of. What do you think the 2026
Word of the Year might be? What does your answer reveal about what you think
is coming next? |
|
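The compounding logic in the third prompt above can be sketched numerically. This is a toy model, not a description of any real recommendation algorithm; the 1 percent per-session step is an assumption chosen purely for illustration:

```python
# Toy model of gradual drift: each session serves content 1% more "intense"
# (on an arbitrary scale) than the last. Tiny steps compound dramatically.
intensity = 1.0
STEP = 1.01                     # assumed 1% increase per session (illustrative)
SESSIONS = 300                  # "over hundreds of sessions"

for _ in range(SESSIONS):
    intensity *= STEP

print(round(intensity, 1))      # ~19.8x the starting intensity
```

No single step looks extreme; the drift shows up only in the aggregate, which is what makes it hard to notice from the inside.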
Verify the Facts — Sources to Check |
|
American Dialect Society (January 2025). 'AI Slop' selected as
2025 Word of the Year. americandialect.org |
|
Reuters (2024). Meta exposed users to approximately 15 billion
higher-risk scam ads per day, per internal company documents. reuters.com |
|
Haugen, Frances (October 2021). U.S. Senate testimony and
document disclosures — Facebook internal research on teen mental health.
Available via Senate Commerce Committee records. |
|
Wikipedia: 'AI Slop' — extensively sourced entry covering
political uses, content farms, and platform responses.
en.wikipedia.org/wiki/AI_slop |
|
Doctorow, Cory. Enshittification: Why Everything Suddenly Got
Worse and What to Do About It (2025). — Also see promarket.org review,
December 2025. |
|
RMIT Information Integrity Hub: 'How the Internet Drowned Itself
in Slop' (December 2025). rmit.edu.au |
|
Wired (2025). 'Trump: The First AI Slop President.' wired.com |
PASSAGE 3
| The Subscription Trap
Pay Forever: How Big Tech Turned Everything into a Subscription
and Why You Can Never Stop
The
Great Unbundling
There is a
business strategy so effective, so profitable, and so quietly inescapable that
it has fundamentally restructured how Americans spend money on technology — and
on almost everything else. It is called the subscription model, and over the
past decade it has transformed from a convenience offered by a handful of
streaming services into the dominant architecture of the digital economy.
Software that you used to buy once now bills you monthly. Tools you used to own
are now rented. Features that were included in a product are separated out and
placed behind paywalls. And because everything is connected to your data, your
workflow, and your creative work, the cost of leaving is often higher than the
cost of staying.
Adobe — the
company behind Photoshop, the industry standard in image editing used by
photographers, designers, and artists worldwide — sold Photoshop as a one-time
purchase until 2013. The price was approximately $700 for a permanent license.
In 2013, Adobe switched to a subscription model: Creative Cloud, which now
costs $55 to $85 per month depending on the plan. A user who bought Photoshop
in 1995 and used it for 30 years would have paid, in total, the original
purchase price plus a few one-time upgrades. A user who subscribes to Creative
Cloud for 30 years will pay between $19,800 and $30,600. If the user cancels,
they lose all access to
the software entirely — including files created in formats only readable by
that software.
|
$19,800+ |
The minimum 30-year cost of Adobe Creative Cloud at its lowest
monthly tier, compared to the one-time purchase model it replaced. Files
created in proprietary Adobe formats become inaccessible if the subscription
lapses. |
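The comparison above is simple arithmetic. A sketch using the passage's own figures (upgrade costs for the perpetual license are ignored, so the one-time side is understated):

```python
# 30-year cost of Creative Cloud at the passage's low and high monthly tiers,
# versus the ~$700 one-time perpetual license it replaced (upgrades excluded).
YEARS = 30
LOW_MONTHLY, HIGH_MONTHLY = 55, 85
ONE_TIME = 700

low_total = LOW_MONTHLY * 12 * YEARS     # $19,800
high_total = HIGH_MONTHLY * 12 * YEARS   # $30,600

print(low_total, high_total)             # 19800 30600
print(round(low_total / ONE_TIME))       # cheapest plan costs ~28x one license
```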
Microsoft made
the same transition. Microsoft Office — Word, Excel, PowerPoint — was for
decades sold as a one-time license. Microsoft 365 now bills users $100 per year
for personal use or $130 for families. The features themselves have not
dramatically changed. The business model has. Microsoft's annual recurring
revenue from Microsoft 365 is in the tens of billions of dollars. What changed
is not the product. What changed is how long Microsoft can extract payment from
the same customer.
The
Lock-In Architecture
The
subscription model's most powerful feature is not its monthly fee. It is what
economists call "switching costs" — the real and perceived cost of
leaving the platform for an alternative. Tech companies design these switching
costs deliberately. They store your data in proprietary formats that are
difficult to export. They build workflows, integrations, and habits around
their specific tools. They make their products the default that other products
connect to, so that leaving one means disrupting many. And they use AI to make
their tools progressively more personalized — learning your habits, your
preferences, your professional needs — so that after two years, no competing
product can replicate your specific configuration.
|
Spotify,
Netflix, Amazon Prime, iCloud, YouTube Premium, Adobe Creative Cloud,
Microsoft 365, Google One, Apple One, Hulu, Disney+, Peacock, Paramount+,
Apple Music, Audible, LinkedIn Premium, Duolingo Plus, and Headspace. Add
them up. The average American household now spends between $900 and $1,200
per year on digital subscriptions — a figure that has more than doubled since
2018, and which many households do not accurately track because individual
charges are small and easy to overlook. |
A 2022 study by
C+R Research found that Americans spend an average of $219 per month on
subscriptions — more than $2,600 per year — and underestimate their actual
subscription spending by approximately 80%. The design of subscription billing
is engineered to minimize the psychological salience of cost: small monthly
charges feel much less significant than equivalent annual or one-time payments,
even when the total spent is identical. Companies know this. Some companies
have been found to deliberately make cancellation difficult — burying
cancellation options in menus, requiring phone calls during limited hours, or
making users navigate multiple confirmation screens — in order to reduce churn.
The Federal Trade Commission in 2024 adopted new rules requiring that
cancellation be as easy as sign-up, after widespread complaints about these
practices.
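The annual figure follows directly from the monthly one. A one-line check of the study's numbers as quoted in the passage:

```python
# C+R Research (2022), as cited above: $219/month in subscription spending.
monthly = 219
annual = monthly * 12
print(annual)   # 2628 dollars per year
```

The same total, framed monthly, is exactly the low-salience billing the paragraph describes.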
AI as
the Ultimate Subscription Product
Artificial
intelligence tools represent the most significant new chapter in subscription
capitalism. OpenAI's ChatGPT charges $20 per month for its premium tier.
Anthropic's Claude Pro costs $20 per month. Google's Gemini Advanced costs $20
per month. Microsoft's Copilot is bundled into Microsoft 365 at higher tiers.
These tools are not standalone products — they are portals to AI systems that
learn from their users' data, making each individual's version of the tool
increasingly customized over time, which increases switching costs. The more
you use an AI tool, the harder it is to switch to an equivalent alternative,
because the alternative starts from zero with no knowledge of your preferences,
writing style, or workflows.
The scale of
capital being invested in AI — estimated at $650 billion committed by
Microsoft, Alphabet, Amazon, and Meta through 2026 — requires enormous revenue
to justify. The subscription model is how that revenue gets extracted. An
analyst report published in October 2025 estimated that big tech needs
approximately $2 trillion in AI-related revenue by 2030 to recoup the
infrastructure investment being made between 2024 and 2026. The pressure to
reach that revenue target will be passed directly to users through subscription
pricing, feature paywalling, and the progressive introduction of advertising
into tools that currently charge for an ad-free experience.
The trajectory
is already visible. When Netflix added an ad-supported tier in 2022 — after
years of insisting it would never run ads — it was widely described as a broken
promise. It was also an inevitable consequence of a business model that must
continuously justify its valuation. When Spotify began running ads on its paid
tier in 2023 and 2024, users reacted with outrage. The company had not promised
a strictly ad-free experience at all tiers, but the precedent was clear: a
subscription no longer guarantees an ad-free experience. Increasingly, users
pay and see ads.
|
Author
and technologist Cory Doctorow has noted that the subscription model creates a
permanent extraction relationship — what he calls "the right to keep
reaching into your pocket in perpetuity." The product is never finished.
The cost never ends. And because your data, your files, and your workflows
live inside the platform, leaving has costs that go beyond the monthly fee. |
Who Owns
Your Tools?
The deeper
question underneath the subscription economy is one of ownership and
dependency. When you buy a hammer, you own it. When you subscribe to a tool,
you rent it. The distinction matters because it determines who has power in the
relationship. A renter can be evicted. A tool can be taken away, changed, or
made worse. A subscription can be price-increased — and because your data lives
inside it, you cannot leave without losing your work.
In 2023, Adobe
attempted to add contract terms to its creative software subscription that
would give Adobe the right to access, analyze, and use any content created by
subscribers to train its AI systems. The backlash was significant and the terms
were partially revised. But the episode revealed a key tension in the
subscription model: the company providing the tool and the person using it have
fundamentally different interests, and the company — which controls the terms
of service, the pricing, the feature set, and the data — holds most of the
power.
|
Socratic Seminar Questions |
|
1. Adobe's shift from a
$700 one-time purchase to a subscription model that costs up to $30,600 over
30 years represents the same software. What justifies the difference in total
cost? What does the scale of that difference tell you about how companies
think about long-term customer relationships? |
|
2. Americans spend an
average of $219 per month on subscriptions but underestimate that spending by
80%. Subscription companies deliberately make charges psychologically small
and cancellation deliberately difficult. Is this manipulation? Is it legal?
Should it be? |
|
3. The FTC adopted new
rules in 2024 requiring that cancellation be as easy as sign-up, because
companies were using confusing cancellation processes to retain customers.
Why do you think it took government action to require this, rather than
market competition? What does this tell you about whether the tech market is
actually competitive? |
|
4. AI tools like ChatGPT,
Claude, and Gemini charge subscription fees and train on the data users
generate through their use. When you use an AI tool, who owns what you
create? Who owns the interaction? Who benefits from the training data your
use generates? |
|
5. Big tech companies need
an estimated $2 trillion in AI revenue by 2030 to justify their current
capital investment. That revenue will come from users. What mechanisms do you
think they will use to extract it? What choices will users have about whether
to participate? |
|
6. If every digital tool
you depend on for school, work, or creative expression were taken away
tomorrow because you couldn't pay a subscription fee, what would you lose?
What does that vulnerability tell you about the relationship between digital
access and power? |
|
Food for Thought |
|
• The feudal system in
medieval Europe was based on the idea that land — the fundamental productive
resource of the era — was not owned by the people who worked it, but rented
from lords who could raise rents, change terms, and evict tenants at will.
Some economists and technologists have begun using the term 'digital
feudalism' to describe a world where the fundamental productive tools of the
information economy are not owned by the workers who use them, but subscribed
to from corporations that can raise prices, change terms, and cut off access
at will. Is this comparison fair? What would need to be true for it to be
accurate? |
|
• Adobe claimed the
ability to access subscribers' files to train its AI. Your phone company
stores your location data. Your email provider scans your messages for
advertising targeting. Your streaming service tracks every second you watch
and every second you pause. You have consented to all of this in terms of
service that no one reads. If this data were being collected by a government
agency rather than a private corporation, it would be described as mass
surveillance. What is the meaningful difference between these two scenarios —
if any? |
|
• The subscription model
depends on inertia and switching costs. Its power comes partly from the fact
that leaving is hard. What would need to be true for competition to actually
discipline subscription pricing? What barriers exist to the kind of
competition that would force Adobe, Microsoft, or Spotify to genuinely
compete on price and user-friendliness rather than on switching costs? |
|
• One alternative to
subscription software is open-source software — programs that are free, whose
code is publicly available, and that anyone can modify or improve. GIMP is a
free, open-source alternative to Photoshop. LibreOffice is a free alternative
to Microsoft Office. Why do you think most users continue to pay for
subscription software rather than switching to free alternatives? What does
your answer tell you about the nature of the lock-in? |
|
Verify the Facts — Sources to Check |
|
Adobe Creative Cloud pricing history and comparison to perpetual
licenses — documented across multiple reviews including PCMag and The Verge. |
|
C+R Research (2022): 'Subscription Service Study' — Americans
spend $219/month on average on subscriptions and underestimate spending by
~80%. crresearch.com |
|
Federal Trade Commission (2024): 'Click to Cancel' rule —
requiring cancellation to be as easy as sign-up. ftc.gov |
|
Campaign US (February 2026): 'Big Tech's AI Spend in 2026:
Following the Money' — $650B committed; $2T revenue target. campaignlive.com |
|
Doctorow, Cory. Enshittification: Why Everything Suddenly Got
Worse and What to Do About It (2025). — Also see promarket.org, December
2025. |
|
The Verge and Reuters (2023): Coverage of Adobe's AI training
data terms controversy. theverge.com |
PASSAGE 4
| The Billionaire Code
Five Men, Five Platforms, Billions of Minds: The Concentration
of Informational Power in the Digital Age
The Most
Powerful Media Owners in History
At no point in
recorded history has so much of human communication, information, and political
discourse been controlled by so few people. Five men — Mark Zuckerberg (Meta),
Elon Musk (X, formerly Twitter), Sundar Pichai (Alphabet/Google/YouTube), Shou
Zi Chew (TikTok/ByteDance), and Sam Altman (OpenAI) — control the platforms
through which the majority of Americans now receive news, discuss politics,
form social connections, and understand the world. They did not run for office.
They were not appointed. They are accountable to shareholders, not to the
public. And the decisions they make — about what content is amplified, what
content is suppressed, what speech is permitted, what information is labeled as
misinformation — affect the political and social reality of billions of people.
The scale of
this concentration is without historical precedent. When William Randolph
Hearst controlled dozens of newspapers and magazines in the early 20th century,
his reach was considered dangerous and led to widespread debate about media
concentration. Hearst reached perhaps 20 million readers. Meta reaches more
than 3 billion monthly active users across Facebook, Instagram, and WhatsApp.
YouTube reaches 2.5 billion users. TikTok has approximately 1 billion users
globally. These are not media companies in the traditional sense — they are the
infrastructure through which media flows, which gives their owners a
qualitatively different kind of power than any previous media baron possessed.
|
3.07B |
Monthly active users across Meta's platforms (Facebook,
Instagram, WhatsApp) as of Q4 2024 — roughly 38 percent of all people alive on
Earth, reached through platforms controlled by a single private company and
its founder. |
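A quick check of the share in the stat box. The world-population figure of roughly 8.1 billion for 2024 is an assumption, not from the passage:

```python
# Meta's 3.07B monthly active users as a share of world population (~8.1B, 2024).
meta_mau = 3.07e9
world_population = 8.1e9       # assumed mid-2024 estimate

share = meta_mau / world_population
print(f"{share:.0%}")          # 38%
```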
The Elon
Musk Experiment: What Happens When One Man Buys the Public Square
When Elon Musk
purchased Twitter for $44 billion in October 2022 and renamed it X, the
transaction was described by many observers as the most consequential private
acquisition of a communications platform in history. Twitter, despite having
far fewer users than Facebook or YouTube (approximately 350 million monthly
active users at the time of purchase), held an outsized role in public
discourse: it was the platform used by journalists, politicians, scientists,
public health officials, and world leaders to communicate information and set
the agenda for mainstream media coverage.
Musk's
subsequent decisions as owner of X have been extensively documented: he fired
approximately 80% of the company's workforce, including most of its trust and
safety team, its misinformation researchers, and its content moderation staff.
He reinstated accounts that had been permanently banned for violating platform
policies — including accounts associated with white nationalist content. He
eliminated the verified checkmark system that distinguished public figures from
impersonators and replaced it with a paid subscription (Twitter Blue/X
Premium), so that any account paying $8 per month could display a verification
badge regardless of who they were. He changed the algorithm to boost his own
posts to the top of every user's feed. He used the platform to directly
advocate for political candidates in the United States and in multiple European
countries, including posting more than 100 times in support of far-right
parties in Germany's 2025 elections — reaching hundreds of millions of users.
Musk's Grok AI assistant, built into X, was found to generate responses that aligned with specific political narratives rather than established facts, particularly on topics where Musk holds strong political views. Grokipedia — Grok's AI encyclopedia — was found by researchers to have copied more than half its entries from Wikipedia and to have significantly rewritten entries on controversial topics 'to highlight a specific narrative,' without disclosing this to readers.
Zuckerberg's Pivot: When the Fact-Checkers Were Fired
In January
2025, Meta CEO Mark Zuckerberg announced that Meta would end its third-party
fact-checking program on Facebook and Instagram in the United States, replacing
it with a "community notes" system modeled on X's approach. He
announced the company would relax its hate speech policies and end its
diversity, equity, and inclusion programs. He said the fact-checking program
had resulted in "too much censorship." He announced these changes in
a video posted to Facebook shortly after the presidential inauguration.
The
fact-checking program, which Zuckerberg was now ending, had been implemented in
2016 after evidence that false information spread on Facebook had influenced
the U.S. presidential election. Its effectiveness had been debated, but
multiple studies found that labels on false content significantly reduced its
spread. The timing of the announcement — immediately following a presidential
transition that both Trump and Musk had been actively involved in — led many
observers to describe it as a political accommodation rather than a policy
evolution. Arthur Caplan, a leading bioethicist at NYU, wrote that the change
was "a capitulation to political pressure from the incoming
administration, not a principled free speech decision."
The
consequences of removing fact-checking infrastructure from platforms used by
billions of people are not abstract. During the COVID-19 pandemic, social media
misinformation about vaccines contributed to lower vaccination rates and higher
mortality. During election cycles, false information about voting procedures
has caused people to miss voting deadlines or show up at incorrect locations.
During natural disasters, misinformation about government response has caused
people to refuse evacuation orders. The fact-checking program was imperfect.
Its absence will not be.
The AI Race and the Democratic Deficit
The four major companies racing to dominate artificial intelligence — OpenAI (backed by Microsoft), Google/DeepMind, Meta, and xAI (Musk's AI company) — have committed combined investments that will exceed $650 billion by 2026. This investment is in
the infrastructure of AI: the data centers, the energy, the computing chips,
and the talent required to build and run large language models. The AI systems
produced by this investment will, in their most ambitious forms, be used to
write news articles, moderate political speech, generate educational content,
advise on medical and legal decisions, and serve as the primary interface
through which people access information.
There is no
democratic process governing who builds these systems, what values they encode,
what information they present as true, or what perspectives they exclude. The
decisions are being made by Sam Altman, Sundar Pichai, Mark Zuckerberg, and
Elon Musk — four individuals whose combined net worth exceeds $400 billion, who
were not elected, who are accountable primarily to their shareholders, and
whose personal political views and business interests will inevitably shape the
AI systems they build.
A survey of software developers in Silicon Valley, published in a peer-reviewed journal in January 2026, found that most developers recognize their products have the power to influence civil liberties and political discourse — and that they frequently face ethical dilemmas and 'top-down pressures that can lead to design choices undermining democratic ideals.' They understand what they are building. They are building it anyway, because that is what they are paid to do.
The
concentration of AI development in a small number of privately owned companies,
controlled by a handful of individuals, represents what some researchers are
calling a "new digital divide" — not a divide between those who have
internet access and those who don't, but between those who control the
information systems of the future and those who simply consume whatever those
systems produce. The people consuming are billions. The people controlling are
a handful. No democratic mechanism currently exists to give the billions any
meaningful voice in how those systems are built, what they optimize for, or
whose interests they ultimately serve.
Socratic Seminar Questions

1. William Randolph Hearst's newspaper empire, which reached approximately 20 million readers, was considered dangerous enough to inspire major debates about media concentration. Meta reaches 3 billion users. Why do we have no equivalent contemporary debate — and what would that debate need to look like to be productive?

2. Elon Musk fired 80% of Twitter's workforce — including most of its trust and safety team — and then used the platform to advocate for political candidates in multiple countries. He was not elected. He faces no regulatory accountability for the political consequences of his platform decisions. Is this a problem for democracy? What, if anything, should be done about it?

3. Zuckerberg ended Meta's fact-checking program immediately after a presidential transition, in changes that critics described as political accommodation. He describes it as a free speech decision. How would you evaluate these two explanations? What evidence would help you decide which is more accurate?

4. A survey of Silicon Valley developers found that most recognize their work can undermine democratic ideals, and that they face top-down pressure to build things that do. If you were a software engineer at one of these companies, what would you do when faced with a design decision you believed was harmful? What are the realistic options? What would you lose by choosing each one?

5. There is no democratic process governing who builds AI systems, what values they encode, or what they define as true. Should there be? What would democratic AI governance look like? What would the tech industry's objections be — and how would you answer them?

6. After reading all four passages in this series, describe the system you see connecting them. Who holds power? Who profits? Who pays the cost? And — the hardest question — what would it actually take to change it?
Food for Thought

• The First Amendment to the U.S. Constitution protects free speech from government censorship. It does not apply to private companies. This means that Facebook, YouTube, and X can legally suppress any speech they choose — or allow any speech they choose — without constitutional constraint. Some argue this means we need new laws specifically covering platform speech. Others argue that government regulation of platform speech is more dangerous than private control. Where do you come down — and what principles guide your position?

• Historians and political scientists who study propaganda and authoritarianism have noted that one of the most effective features of authoritarian information control is not the suppression of all information, but the flooding of the information space with so much noise — misinformation, spectacle, outrage, contradiction — that citizens cannot form coherent political judgments. How does the current digital information environment compare to this model? Is it authoritarian propaganda? Is it something different? Does the distinction matter?

• The people building the most powerful AI systems in history — and making the decisions about what values those systems embody — are overwhelmingly young, overwhelmingly male, overwhelmingly wealthy, and overwhelmingly from a narrow band of elite universities in a single geographic region. They are building tools for billions of people who had no voice in their design and no representation in the decision-making process. Is this a democracy problem? A market problem? A culture problem? Or is it an unavoidable feature of where breakthrough technology comes from?

• This series began with the algorithm of outrage, moved through the slop economy, examined the subscription trap, and ended here — with the question of who controls the most powerful information systems ever built, and to whom they are accountable. If you were writing the conclusion of this series yourself, what would be the single most important thing you would want students to understand — and what would you want them to do with that understanding?
Verify the Facts — Sources to Check

Groundy (February 2026): 'Facebook Is Cooked: Inside Meta's Platform Decay' — Meta revenue $164.5B in 2024; AI slop dynamics; platform decay thesis. groundy.com

Tandfonline (January 2026): 'A New Digital Divide? Coder Worldviews, the Slop Economy, and Democracy in the Age of AI.' Peer-reviewed. tandfonline.com/doi/full/10.1080/1369118X.2025.2566814

Wikipedia: 'Elon Musk and Twitter/X' — documented changes to platform policies, workforce reductions, and political interventions. en.wikipedia.org

Meta (January 2025): Zuckerberg announcement on ending fact-checking program. Available via Meta Newsroom. about.fb.com

Campaign US (February 2026): Big Tech AI spend — $650B committed; revenue projections. campaignlive.com

ProMarket (December 2025): Review of Doctorow's Enshittification and Wu's The Age of Extraction. promarket.org

U.S. Senate Judiciary Committee (January 2024): Hearing with CEOs of Meta, TikTok, X, Discord, Snapchat. Available via senate.gov
A Note on Using This Series
The Chaos Code
is designed to be read alongside Greed Nation and Poisoned for Profit, as part
of a larger critical literacy curriculum. Together, the three series trace a
single pattern: the extraction of value from human need, human attention, and
human vulnerability — and the systematic weakening of the oversight systems
that might constrain it.
Students who
work through all three series in sequence should be asked a final syntopical
question: Across twelve passages and three topic areas, what is the one most
important structural change — in law, in technology, in education, or in
culture — that would most significantly disrupt the patterns described? There
is no right answer. The value is in the reasoning required to develop one.
Every claim in
this series is sourced. Every source is listed. Check them. The ability to
verify information independently — to go beyond the feed, beyond the headline,
beyond the algorithm — is not just a classroom skill. It is one of the most
important capacities a person can develop in the world these passages describe.
