Being cut off from health insurance at a time when you feel most vulnerable, when someone you love is in debilitating pain, leaves you feeling cornered and desperate.
In his famous novel 1984, George Orwell got one thing wrong. Big Brother is not watching you, he’s watching us. Most people are targeted for digital scrutiny as members of social groups, not as individuals. People of color, migrants, unpopular religious groups, sexual minorities, the poor, and other oppressed and exploited populations bear a much higher burden of monitoring and tracking than advantaged groups. Marginalized groups face higher levels of data collection when they access public benefits, walk through highly policed neighborhoods, enter the health-care system, or cross national borders. That data acts to reinforce their marginality when it is used to target them for suspicion and extra scrutiny. Those groups seen as undeserving are singled out for punitive public policy and more intense surveillance, and the cycle begins again. It is a kind of collective red-flagging, a feedback loop of injustice.
In 2014, Maine's Republican governor, Paul LePage, attacked families in his state receiving meager cash benefits from Temporary Assistance for Needy Families (TANF). These benefits are loaded onto electronic benefits transfer (EBT) cards that leave a digital record of when and where cash is withdrawn. LePage’s administration mined data collected by federal and state agencies to compile a list of 3,650 transactions in which TANF recipients withdrew cash from ATMs in smoke shops, liquor stores, and out-of-state locations. The data was then released to the public via Google Docs. The transactions that LePage found suspicious represented only about 0.3 percent of the 1.1 million cash withdrawals completed during the time period, and the data showed only where cash was withdrawn, not how it was spent. But the governor used the public data disclosure to suggest that TANF families were defrauding taxpayers by buying liquor, lottery tickets, and cigarettes with their benefits. Lawmakers and the professional middle-class public eagerly embraced the misleading tale he spun from a tenuous thread of data. The Maine legislature introduced a bill that would require TANF families to retain all cash receipts for 12 months to facilitate state audits of their spending. Democratic legislators urged the state’s attorney general to use LePage’s list to investigate and prosecute fraud. The governor introduced a bill to ban TANF recipients from using out-of-state ATMs. The proposed laws were impossible to obey, patently unconstitutional, and unenforceable, but that’s not the point. This is performative politics. The legislation was not intended to work; it was intended to heap stigma on social programs and reinforce the cultural narrative that those who access public assistance are criminal, lazy, spendthrift addicts.
Poor women are the test subjects for surveillance technology, Dorothy told me. Then she added, “You should pay attention to what happens to us. You’re next.”
Digital tracking and automated decision-making hide poverty from the professional middle-class public and give the nation the ethical distance it needs to make inhuman choices: who gets food and who starves, who has housing and who remains homeless, and which families are broken up by the state. The digital poorhouse is part of a long American tradition. We manage the individual poor in order to escape our shared responsibility for eradicating poverty.
For all their high-tech polish, our modern systems of poverty management—automated decision-making, data mining, and predictive analytics—retain a remarkable kinship with the poorhouses of the past. Our new digital tools spring from punitive, moralistic views of poverty and create a system of high-tech containment and investigation. The digital poorhouse deters the poor from accessing public resources; polices their labor, spending, sexuality, and parenting; tries to predict their future behavior; and punishes and criminalizes those who do not comply with its dictates. In the process, it creates ever-finer moral distinctions between the “deserving” and “undeserving” poor.
Widespread reproductive restrictions were perhaps the inevitable destination for scientific charity and eugenics. In the Buck v. Bell case that legalized involuntary sterilization, Supreme Court Justice Oliver Wendell Holmes famously wrote, “It is better for all the world if, instead of waiting to execute degenerate offspring for crime or to let them starve for their imbecility, society can prevent those who are manifestly unfit from continuing their kind. The principle that sustains compulsory vaccination is broad enough to cover cutting the Fallopian tubes.” 11 Though the practice fell out of favor in light of Nazi atrocities during World War II, eugenics resulted in more than 60,000 compulsory sterilizations of poor and working-class people in the United States.
In response to threats that southern states would not support the Social Security Act, both agricultural and domestic workers were explicitly excluded from its employment protections. The “southern compromise” left the great majority of African American workers—and a not-insignificant number of poor white tenant farmers, sharecroppers, and domestics—with no minimum wage, unemployment protection, old-age insurance, or right to collective bargaining.
Excluded workers, single mothers, the elderly poor, the ill, and the disabled were forced to rely on what welfare historian Premilla Nadasen calls “mop-up” public assistance programs. 13 The distinctions between the unemployed and the poor, men’s poverty and women’s poverty, northern white male industrial laborers and everyone else created a two-tiered welfare state: social insurance versus public assistance. Public assistance programs were less generous because benefit levels were set by states and municipalities, not the federal government. They were more punitive because local and state welfare authorities wrote eligibility rules and had financial incentive to keep enrollments low. They were more intrusive because income limits and means-testing rationalized all manner of surveillance and policing of applicants and beneficiaries. In distinguishing between social insurance and public assistance, New Deal Democrats planted the seeds of today’s economic inequality, capitulated to white supremacy, sowed conflict between the poor and the working class, and devalued women’s work.
Goldberg v. Kelly (1970) enshrined the principle that public assistance recipients have a right to due process, and that benefits cannot be terminated without a fair hearing.
AFDC became so embattled that President Richard Nixon proposed a guaranteed annual income program, the Family Assistance Program (FAP), to replace it in 1969. The program would guarantee a minimum income of $1,600 a year for a family of four. It would provide benefits to two-parent families earning low wages, who were excluded from AFDC. It would do away with the 100 percent penalty on earned income, allowing welfare beneficiaries to retain the first $720 of their yearly earnings without reducing benefits. But the minimum income Nixon proposed would have still kept a family of four well below the poverty line. The NWRO proposed a competing Adequate Income Act that set the base income for a family of four at $5,500. Nixon’s program also included built-in work requirements; this was a sticking point for single mothers with small children. Unpopular with both conservatives and progressives, the FAP failed, and pressure on AFDC continued to mount.
As backlash against welfare rights grew, news coverage of poverty became increasingly critical. “As news stories about the poor became less sympathetic,” writes political scientist Martin Gilens, “the images of poor blacks in the news swelled.” 17 Stories about welfare fraud and abuse were most likely to contain images of Black faces. African American poverty decreased dramatically during the 1960s and the African American share of AFDC caseloads declined. But the percentage of African Americans represented in news magazine stories about poverty jumped from 27 to 72 percent between 1964 and 1967.
In 1943, Louisiana had been the first state to establish an “employable mother” rule that blocked most African American women from receiving ADC. Thirty-one years later, Louisiana became the first state to launch a computerized wage matching system. The program checked the self-reported income of welfare applicants against electronic files of employment agencies and unemployment compensation benefit data.
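The mechanics of such a wage-matching check are simple to sketch. The snippet below is a hypothetical illustration of the cross-check described above, not Louisiana's actual system: the applicant IDs, the dollar tolerance, and the flat-dictionary data structures are all invented for the example.

```python
# Hypothetical sketch of a wage-matching cross-check: self-reported
# income is compared against employer/unemployment wage records, and
# mismatches beyond a tolerance are flagged for caseworker review.

def flag_discrepancies(self_reported, wage_records, tolerance=100):
    """Return applicant IDs whose self-reported income differs from
    matched wage records by more than `tolerance` dollars."""
    flagged = []
    for applicant_id, reported in self_reported.items():
        # Applicants with no matching wage record are treated as
        # having recorded income of zero in this simplified sketch.
        recorded = wage_records.get(applicant_id, 0)
        if abs(recorded - reported) > tolerance:
            flagged.append(applicant_id)
    return flagged

self_reported = {"A1": 0, "A2": 9500, "A3": 12000}
wage_records = {"A1": 4200, "A2": 9480}   # A3 has no wage record
print(flag_discrepancies(self_reported, wage_records))  # → ['A1', 'A3']
```

Even this toy version hints at the stakes: a missing or stale record in either file produces a "discrepancy" that the system treats as suspect, which is why such matches generate investigations rather than determinations.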
The Personal Responsibility and Work Opportunity Reconciliation Act (PRWORA) of 1996 is often held responsible for the demise of welfare. The PRWORA replaced AFDC with Temporary Assistance for Needy Families (TANF) and enforced work outside the home at any cost. TANF limited lifetime eligibility for public assistance to 60 months with few exceptions, introduced strict work requirements, ended support for four-year college education, and put into effect a wide array of sanctions to penalize noncompliance. Sanctions are imposed, for example, for being late to an appointment, missing a volunteer work assignment, not attending job training, not completing drug testing, not attending mental health counseling, or ignoring any other therapeutic or job-training activity prescribed by a caseworker. Each sanction can result in a time-limited or permanent loss of benefits. It is true that the PRWORA achieved striking contractions in public assistance. Almost 8.5 million people were removed from the welfare rolls between 1996 and 2006. In 2014, fewer adults were being served by cash assistance than in 1962. In 1973, four of five poor children were receiving benefits from AFDC. Today, TANF serves fewer than one in five of them.
The advocates of automated and algorithmic approaches to public services often describe the new generation of digital tools as “disruptive.” They tell us that big data shakes up hidebound bureaucracies, stimulates innovative solutions, and increases transparency. But when we focus on programs specifically targeted at poor and working-class people, the new regime of data analytics is more evolution than revolution. It is simply an expansion and continuation of moralistic and punitive poverty management strategies that have been with us since the 1820s. The story of the poorhouse and scientific charity demonstrates that poverty relief becomes more punitive and stigmatized during times of economic crisis. Poor and working-class people resist restrictions of their rights, dismantle discriminatory institutions, and join together for survival and mutual aid. But time and again they face middle-class backlash. Social assistance is recast as charity, mutual aid is reconstructed as dependency, and new techniques to turn back the progress of the poor proliferate.
Performance metrics designed to speed eligibility determinations created perverse incentives for call center workers to close cases prematurely. Timeliness could be improved by denying applications and then advising applicants to reapply, which required that they wait an additional 30 or 60 days for a new determination.
“Before modernization, they had someone to call up and say, ‘Listen, I received this notice. What do I need to do?’” recalled ACLU attorney Gavin Rose. “And the answer was ‘Run it down to me, fax it over right now. I’ll make sure it gets in your file and we’ll take care of this.’” Before the automation, “failure to cooperate” had been a last-ditch punishment caseworkers used against a few clients who actively refused to participate in the eligibility process. After the automation, the phrase became a chain saw that clearcut the welfare rolls, no matter the collateral damage.
Goldberg v. Kelly. This landmark case found that all welfare recipients have a right to an evidentiary hearing—a process that includes timely and adequate notice, disclosure of opposing evidence, an impartial decision-maker, cross-examination of witnesses, and the right to retain legal representation—before their benefits can be terminated. By successfully reframing public benefits as property rather than charity, the welfare rights movement established that public assistance recipients must be provided due process under the Fourteenth Amendment of the Constitution. The case hinged on the understanding, expressed by Justice William Brennan, that abrupt termination of aid deprives poor people of both their means of survival and their ability to mount an adequate challenge to government decisions. “From its founding, the Nation’s basic commitment has been to foster the dignity and well-being of all persons within its borders,” Brennan wrote. “Public assistance, then, is not mere charity, but a means to ‘promote the general Welfare, and secure the Blessings of Liberty to ourselves and our Posterity.’”
Her experience during the automation makes Lindsay hesitant to apply for benefits again. “They make it so difficult. If I applied now I could probably get it, but that experience with being denied … I mean, I cried. I did everything that they asked me to do. I don’t even know if it’s worth the stress.”
Reducing casework to a task-based system is dehumanizing, she suggests, for both worker and client. “If I wanted to work in a factory, I would have worked in a factory.… You were expected to produce, and you couldn’t do that if you listened to the client’s story.”
“We became slaves to the task system,” said Fred Gilbert, a 30-year FSSA employee specializing in refugee assistance. “Like any other private call center, it’s ‘just the facts.’ But the welfare system is very complicated. That’s the job of caseworkers, to help people wade through the mess.”
The goals of the project were consistent throughout the automation experiment: maximize efficiency and eliminate fraud by shifting to a task-based system and severing caseworker-to-client bonds. They were clearly reflected in contract metrics: response time in the call centers was a key performance indicator; determination accuracy was not. Efficiency and savings were built into the contract; transparency and due process were not.
Automated eligibility was based on the assumption that it is better for ten eligible applicants to be denied public benefits than for one ineligible person to receive them. “They had an opportunity to make a system that was responsive and effective, and ensure people who qualified for benefits received those benefits,” Holly said. “My gut feeling is that they did not respect the people who needed their help.”
Coordinated entry is based on two philosophies that represent a paradigm shift in the provision of homeless services: prioritization and housing first. Prioritization builds on research by Dennis Culhane from the University of Pennsylvania, which differentiates between two kinds of homelessness: crisis and chronic. Those facing crisis homelessness tend to be experiencing “short-term emergencies [such as] eviction, domestic violence, sudden illness, or job loss, or reentering communities after incarceration.” 4 The crisis homeless, Culhane argues, often self-correct: after a short stay in a shelter, they identify family members they can stay with, access new resources, or move away. A small, time-limited investment can offer them “a hand up to avoid the downward spiral” into chronic homelessness. Those experiencing chronic homelessness, on the other hand, tend to be homeless frequently and for longer stretches. Chronically homeless adults, according to Culhane’s research, “have higher rates of behavioral health problems and disabilities, and more complex social support needs.” 5 For them, permanent supportive housing is an appropriate and effective solution. The shift to prioritization in Los Angeles acknowledged that the status quo was not serving the chronic homeless. There was a mismatch between needs and resources: the crisis homeless got resources most appropriate for the chronically homeless; the chronically homeless got nothing at all.

The other conceptual shift in coordinated entry is its housing first philosophy. Until very recently, most homeless services operated on a “housing readiness” model that moved individuals through different program steps before they could be housed. Someone who had been sleeping on the street or in their car might first enter an emergency shelter, then shift to a transitional housing program, and finally attain independent housing.
At each stage, a set of behavioral requirements—sobriety, treatment compliance, employment—were gateways that controlled access to the next step. The housing first approach emerges instead from the understanding that it is difficult to attend to other challenges if you are not stably housed. Housing first puts individuals and families into their own apartments as quickly as possible, and then offers voluntary supportive and treatment services where appropriate.
There’s three ways to go if you don’t get housed: jail, institutions, or death.
Before the Supreme Court found racially restrictive covenants unconstitutional in 1948, 80 percent of property in Los Angeles carried covenants barring Black families. To the east of Alameda Street were working-class white suburbs. To the west were South Central and Watts, two of the few areas where African American families were able to live.
Ko pointed out that coordinated entry allowed members of the CES network to arrive at city council and board of supervisors meetings with impeccable regional numbers showing exactly what kinds of resources were needed in each community. But the real driver behind Angelenos’ decision to take collective responsibility for the housing crisis was not better data. It was the spread of tent cities.
CES eased the way to some kind of housing resource for 17 percent of the overall homeless population at a cost of approximately $1,140 per person. It is easy to argue that this is money well spent.
There is a long history of social services and the police collaborating to criminalize the poor in the United States. The most direct parallel is Operation Talon, a joint effort of the Office of Inspector General and local welfare offices that mined food stamp data to identify those with outstanding warrants, and then lured them to appointments regarding their benefits. When targeted recipients arrived at the welfare office, they were arrested. According to Kaaryn Gustafson’s 2009 article “The Criminalization of Poverty,” before the 1996 welfare reforms, public assistance records were only available to law enforcement through legal channels. But today, she writes, “Welfare records are available to law enforcement officers simply upon request—without probable cause, suspicion, or judicial process of any kind.” 10 Operation Talon and other initiatives like it use administrative data to turn social service offices into extensions of the criminal justice system.
The pattern of increased data collection, sharing, and surveillance reinforces the criminalization of the unhoused, if only because so many of the basic conditions of being homeless—having nowhere to sleep, nowhere to put your stuff, and nowhere to go to the bathroom—are also officially crimes. If sleeping in a public park, leaving your possessions on the sidewalk, or urinating in a stairwell are met with a ticket, the great majority of the unhoused have no way to pay the resulting fines. The tickets turn into warrants, and then law enforcement has further reason to search the databases to find “fugitives.” Thus, data collection, storage, and sharing in homeless service programs are often starting points in a process that criminalizes the poor.
“Homelessness is not a systems engineering problem. It’s a carpentry problem.”
Cherna built the data warehouse in 1999. Today, it lives on two servers in DHS headquarters and holds more than one billion electronic records, an average of 800 records for every person in Allegheny County.
The data warehouse, managed primarily through a contract with the multinational consulting firm Deloitte Touche Tohmatsu Ltd., costs more than $15 million a year, about 2 percent of DHS’s annual budget.
We might call this poverty profiling. Like racial profiling, poverty profiling targets individuals for extra scrutiny based not on their behavior but rather on a personal characteristic: living in poverty. Because the model confuses parenting while poor with poor parenting, the AFST views parents who reach out to public programs as risks to their children.
They’d rather have an imperfect person making decisions about their families than a flawless computer. “You can teach people how you want to be treated,” said Pamela Simmons, staffing the voter registration table across the street from the Kentucky Fried Chicken in Wilkinsburg. “They come with their own opinions but sometimes you can change their opinion. There’s opportunity to fix it with a person. You can’t fix that number.”
Poverty in America is not invisible. We see it, and then we look away. Our denial runs deep. It is the only way to explain a basic fact about the United States: in the world’s largest economy, the majority of us will experience poverty. According to Mark Rank’s groundbreaking life-course research, 51 percent of Americans will spend at least a year below the poverty line between the ages of 20 and 65. Two-thirds of them will access a means-tested public benefit: TANF, General Assistance, Supplemental Security Income, Housing Assistance, SNAP, or Medicaid. 1 And yet we pretend that poverty is a puzzling aberration that happens only to a tiny minority of pathological people.
Cultural denial is the process that allows us to know about cruelty, discrimination, and repression, but never openly acknowledge it. It is how we come to know what not to know. Cultural denial is not simply a personal or psychological attribute of individuals; it is a social process organized and supported by schooling, government, religion, media, and other institutions. When we passed the anguished man near the Los Angeles Public Library and did not ask him if he needed help, it was because we have collectively convinced ourselves that there is nothing we can do for him. When we failed to meet each other’s eyes as we passed, we signaled that, deep down, we know better. We could not make eye contact because we were enacting a cultural ritual of not-seeing, a semiconscious renunciation of our responsibility to each other. Our guilt, kindled because we perceived suffering and yet did nothing about it, made us look away. That is what the denial of poverty does to us as a nation. We avoid not only the man on the corner, but each other.
The AFST is run on every member of a household, not only on the parent or child reported to the hotline. Under the new regime of prediction, you are impacted not only by your own actions, but by the actions of your lovers, housemates, relatives, and neighbors. Prediction, unlike classification, is intergenerational. Angel and Patrick’s actions will affect Harriette’s future AFST score. Their use of public resources drives Harriette’s score up. Patrick’s run-ins with CYF when Tabatha was a child will raise Harriette’s score as an adult. Angel and Patrick’s actions today may limit Harriette’s future, and her children’s future. The impacts of predictive models are thus exponential. Because prediction relies on networks and spans generations, its harm has the potential to spread like a contagion, from the initial point of contact to relatives and friends, to friends’ networks, rushing through whole communities like a virus.
New technologies develop momentum as they are integrated into institutions. As they mature, they become increasingly difficult to challenge, redirect, or uproot.
Think of the digital poorhouse as an invisible spider web woven of fiber optic strands. Each strand functions as a microphone, a camera, a fingerprint scanner, a GPS tracker, an alarm trip wire, and a crystal ball. Some of the strands are sticky. They are interconnected, creating a network that moves petabytes of data. Our movements vibrate the web, disclosing our location and direction. Each of these filaments can be switched on or off. They reach back into history and forward into the future. They connect us in networks of association to those we know and love. As you go down the socioeconomic scale, the strands are woven more densely and more of them are switched on. Together, we spun the digital poorhouse. We are all entangled in it. But many of us in the professional middle class only brush against it briefly, up where the holes in the web are wider and fewer of the strands are activated. We may have to pause a moment to extricate ourselves from its gummy grasp, but its impacts don’t linger. When my family was red-flagged for a health-care fraud investigation, we only had to wrestle one strand at a time. We weren’t also tangled in threads emerging from the criminal justice system, Medicaid, and child protective services. We weren’t knotted up in the histories of our parents or the patterns of our neighbors.
Data mining creates statistical social groupings, and then policy-makers create customized interventions for each precise segment of society. Bespoke, individualized governance will likely harden social divisions rather than promote inclusion. Customized government might serve some individuals very well, but it will increase intergroup hostility as perceptions of special treatment proliferate.
It would stand us all in good stead to remember that infatuation with high-tech social sorting emerges most aggressively in countries riven by severe inequality and governed by totalitarians. As Edwin Black reports in IBM and the Holocaust, thousands of Hollerith punch card systems—an early data-processing technology—allowed the Nazi regime to more efficiently identify, track, and exploit Jews and other targeted populations. The appalling reality is that the serial numbers tattooed onto the forearms of inmates at Auschwitz began as punch card identification numbers. The passbook system that controlled the movements, work opportunities, health care, and housing of 25 million Black South Africans was made possible by data mining the country’s 1951 census to create a centralized population register assigning every person to one of four racial categories. In an amicus brief filed in 2015 on behalf of Black South Africans attempting to sue IBM for aiding and abetting apartheid, Cindy Cohn of the Electronic Frontier Foundation wrote, “The technological backbone for the South African national identification system … enabled the apartheid regime to efficiently implement ‘denationalization’ of the country’s black population: the identification, forced segregation, and ultimate oppression of South African blacks by the white-run government.”
It will take more than high-tech tweaks to bring down the institutions we have built to profile, police, and punish the poor. It will take profound changes to culture, politics, and personal ethics. The most important step in dismantling the digital poorhouse is changing how we think, talk, and feel about poverty. As counterintuitive as it may sound, the best cure for the misuse of big data is telling better stories.
If you lack even one of the economic rights promised by the 1948 Universal Declaration of Human Rights—including health care, housing, a living-wage job, and quality education—PPEHRC counts you among the poor. The redefinition is tactical, an attempt to help poor and working-class people see themselves reflected in each other’s experiences.
Do a quick “gut check” by answering two questions: Does the tool increase the self-determination and agency of the poor? Would the tool be tolerated if it were targeted at non-poor people?
Oath of Non-Harm for an Age of Big Data

I swear to fulfill, to the best of my ability, the following covenant:

I will respect all people for their integrity and wisdom, understanding that they are experts in their own lives, and will gladly share with them all the benefits of my knowledge.

I will use my skills and resources to create bridges for human potential, not barriers. I will create tools that remove obstacles between resources and the people who need them.

I will not use my technical knowledge to compound the disadvantage created by historic patterns of racism, classism, able-ism, sexism, homophobia, xenophobia, transphobia, religious intolerance, and other forms of oppression.

I will design with history in mind. To ignore a four-century-long pattern of punishing the poor is to be complicit in the “unintended” but terribly predictable consequences that arise when equity and good intentions are assumed as initial conditions.

I will integrate systems for the needs of people, not data. I will choose system integration as a mechanism to attain human needs, not to facilitate ubiquitous surveillance.

I will not collect data for data’s sake, nor keep it just because I can.

When informed consent and design convenience come into conflict, informed consent will always prevail.

I will design no data-based system that overturns an established legal right of the poor.

I will remember that the technologies I design are not aimed at data points, probabilities, or patterns, but at human beings.