

Why Republicans Might Be Forced To Oppose Tax Cuts

Welcome to TrumpBeat, FiveThirtyEight’s weekly feature on the latest policy developments in Washington and beyond. Want to get TrumpBeat in your inbox each week? Sign up for our newsletter. Comments, criticism or suggestions for future columns? Email us or drop a note in the comments.

As Senate Republicans look for a way to save their struggling health care bill, some of them are floating a once-unthinkable possibility: keeping some of the taxes imposed by the Affordable Care Act. It may not happen, but that it’s even on the table helps illustrate why broader tax reform is going to be so tricky for the GOP.

Democrats paid for their big expansion of the health care system via a series of new taxes on medical devices, health insurers and, especially, wealthy people. When Republicans came to power this year, they pledged to abolish most of those taxes as part of their plan to repeal and replace Obamacare. Both the bill that was passed by the House and the one that is now being considered by the Senate would cut taxes by billions of dollars.

But getting rid of the Obamacare taxes poses two big problems for Republicans. The first is political: The cuts would go overwhelmingly to the richest Americans. The Tax Policy Center, a think tank that leans to the left but whose analyses are generally respected by both sides, estimates that nearly 45 percent of the Senate bill’s tax cuts would go to the top 1 percent of households by earnings. One tax that the GOP wants to repeal, the “net investment income tax,” is even more skewed: 90 percent of its revenue comes from the top 1 percent, and 62 percent from the top 0.1 percent. That has made it easy for Democrats (including former President Barack Obama himself) to tar the Republican plan as a tax cut for the rich.
The tax cuts also create a math problem for Republicans: The more they give up in tax revenue, the more they have to cut spending on health care programs. That could make it harder to appease moderate Republicans who want more money to fight the opioid epidemic, smaller cuts to Medicaid and more generous subsidies for low-income Americans to buy insurance.

Now some Republican senators are suggesting that they keep at least the investment tax, which is expected to generate $172 billion in revenue over the next 10 years, according to the Congressional Budget Office. “It’s not an acceptable proposition to have a bill that increases the burden on lower-income citizens and lessens the burden on wealthy citizens,” Republican Sen. Bob Corker of Tennessee told reporters. Two other GOP senators, Susan Collins of Maine and Mike Rounds of South Dakota, have expressed similar concerns.

Some Senate conservatives are likely to oppose any effort to keep the investment tax, and, as with everything in health care right now, it’s unclear how things will shake out in the end. But whatever happens, the debate provides a preview of the coming fight over tax reform, which Republicans have vowed to tackle once the health care process wraps up. President Trump hasn’t yet released a detailed tax plan, but the outline he has provided suggests that the plan will, like the health care tax cuts so far, disproportionately benefit the rich and lead to big reductions in government revenue (making it harder to pay for spending on infrastructure, the military and Trump’s border wall, among other priorities).

The parallels aren’t perfect. It’s unlikely that any GOP tax reform proposal will favor the rich quite to the degree that repealing the Obamacare taxes would. And a stand-alone tax bill won’t have the same one-to-one tradeoff of revenue and spending as the health care overhaul. But broad tax reform is also more complicated than simply repealing the Obamacare taxes.
There are dueling interests and competing priorities, even among Republicans, that could prove difficult to resolve. If Republicans are having trouble repealing taxes that they all agree they want to get rid of, it’s a safe bet that real tax reform isn’t going to be any easier.

Health care: Rural struggles

Beyond the question of tax cuts, much of the discussion of the Senate health care bill has focused on insurance coverage, in particular the 22 million additional people that the CBO estimates would go without health insurance under the plan. What has been less talked about is the impact the bill would have on the broader health system, particularly in rural areas.

Research suggests that the Republican approaches proposed in the House and Senate would have a large and negative impact on rural hospitals. An analysis from the Chartis Center for Rural Health released this week estimates that in states that expanded Medicaid under Obamacare, hospitals would lose more than $470,000 a year in revenue. In states that didn’t expand, the amount would be less, around $240,000. Those losses would largely be due to cuts to Medicaid included in the proposed Senate health care bill, cuts that would total $772 billion. The bill includes additional payments to some rural hospitals, but it’s not nearly enough to offset the expected losses in insurance coverage, Chartis estimates. Those cuts would likely cause 140 to 150 more rural hospitals around the country to operate at a loss. In all, 48 percent of rural hospitals could end up in the red, compared to 41 percent today.

Financial struggles have already led to nearly 80 hospital closings since 2010, and these closings can be a major hit to rural communities. Earlier this week we profiled Greene County, Alabama, home to one of the country’s struggling rural hospitals. With about 200 employees, the county health system is essentially tied with a box manufacturer as the largest employer in the county.
“You close the hospital here and now you’re talking about jobs, and you’re never gonna get an industry because there won’t be a hospital in a 30-mile radius,” hospital CEO Elmore Patterson III said, adding that the closure would also mean the area would lose professionals who are a key part of the area’s tax base. Greene County isn’t an outlier; in many rural counties, the local hospital is among the biggest employers in the area.

The strain on hospitals also puts a strain on rural communities, which are already disproportionately burdened when it comes to health care. People in rural areas tend to be older and are more likely to be veterans, two high-needs groups when it comes to health care (and also two groups that supported Trump for president). Rural areas also experience higher rates of childhood poverty and higher rates of premature death. The rushed Senate process has made it hard to see beyond the basics of the bill, but its impact would be felt throughout the health system.

Immigration: The ban is back

The Supreme Court this week reinstated a limited version of Trump’s ban on travel from six predominantly Muslim countries: Libya, Sudan, Syria, Iran, Yemen and Somalia. The revived ban took effect on Thursday, nearly six months after Trump’s initial executive order. The court ruled that travelers can be barred entry if they don’t have a “credible claim of a bona fide relationship” with a person or entity in the country, but didn’t specify what qualified as a bona fide relationship or who would be affected. According to guidelines issued by the State Department before the order became effective, a bona fide relationship with parents, children and in-laws in the U.S. is enough to gain entry, but not grandparents or cousins. (Late on Thursday, the State Department revised its guidelines to include fiancés.)
But even with the guidelines, which have already drawn a legal challenge, there remains significant uncertainty about how many people will be affected by the ban. Data on travel from past years can provide some guidance. In fiscal year 2016, the State Department issued over 81,000 total visas to people traveling from the six affected countries. Of those, more than 28,000 were immigrant visas, issued to people looking to move to the U.S. permanently. Most of those people wouldn’t have been affected by the Supreme Court’s version of the ban, according to the State Department — about 80 percent of them had a family or employment connection. But more than 5,000 of them were so-called diversity visas, which are given to people from countries with historically low rates of immigration to the U.S. and are issued through a lottery program. Some of those applicants would likely now be blocked by the travel ban.

The ban will likely have a much bigger impact on non-immigrants, people looking to travel to the U.S. temporarily. In fiscal year 2016, the State Department issued more than 53,000 non-immigrant visas for travelers from the six countries. Some of them were issued to students, travelers coming in through work exchange or government officials who aren’t likely to be affected by the order. But around three-quarters of them were visas for tourism, business or medical purposes. Applicants for those visas would only be allowed in if they could show they are visiting a close family member or have some other bona fide relationship.

These rules are only temporary. In October, the Supreme Court will review challenges to the travel ban, which could eventually pave the way for rules that are stricter (if the court upholds Trump’s ban) or more lenient (if the justices rule large parts of the ban unconstitutional). But in the meantime, immigration lawyers are bracing for legal battles over the relationship guidelines.

Environmental regulation: The feds vs. the states

The Environmental Protection Agency is the most politically polarizing agency in the U.S. government. And for its mostly conservative discontents, the EPA has become synonymous with capital-“B” Big Government. But despite that reputation, state control is at the heart of how the EPA was designed. The federal agency sets standards to meet the congressional mandates in legislation, such as the Clean Water Act, but enforcement, monitoring and other practical details are largely left up to the states. As of February 2016, 96 percent of the powers that could be in the hands of the states were. A report published in June by the Environmental Council of the States, a nonpartisan association of state environmental agencies, described states as “the primary implementers” of environmental statutes.

But you wouldn’t know that from the speeches of EPA Administrator Scott Pruitt, who has focused heavily on a need to return control to the states — rhetoric that popped up again this week in a proposal to rescind an Obama-era regulatory rule. Together with the Army Corps of Engineers, the EPA is proposing to repeal the “Waters of the United States,” a 2015 rule meant to broaden the scope of the Clean Water Act by stretching it beyond navigable waterways to the streams, wetlands and seasonal creeks that feed them. The rule was never implemented because of legal challenges. Now, like the Clean Power Plan before it, it likely never will be.

Rhetoric about correcting federal overreach is a major part of the justification for current EPA efforts to cut budgets and eliminate regulations. But how do we make sense of a policy argument that seems contradictory to the power structure as it exists on paper? According to Alexandra Dunn, executive director of the Environmental Council of the States, the answer is tied to a fundamental disagreement about quality control.
The federal EPA retains veto authority over most of what states do in order to make sure that enforcement is carried out the same way everywhere. And while EPA officials defer to the states in general, the agency does step in if there’s a documented history of failure to make progress. The trouble, Dunn said, is that the states and the EPA don’t interpret that language the same way. States tend to think the EPA should step in only rarely; the feds have tended to be more aggressive. Dunn compared the states to people on an exercise regimen who aren’t losing weight. With all the sweat, they might feel like they’re working hard. But their trainer (the EPA) might look at the scale and feel like they aren’t. “It’s in the eye of the beholder,” she said. But the clash strains the limits of trust and, she believes, pushes some states to reject attempts to expand the scope of what they (and the EPA) would have to enforce. That’s why — despite high favorability ratings overall and general American support for its goals — the EPA is facing a major shift in priorities and funding. Pruitt has the perspective of the guy on the treadmill, not the one with the clipboard.


Data On Drug Use Is Disappearing Just When We Need It Most

It’s no secret that heroin has become an epidemic in the United States. Heroin overdose deaths have risen more than sixfold in less than a decade and a half. Yet according to one of the most widely cited sources of data on drug use, the number of Americans using heroin has risen far more slowly, roughly doubling during the same time period. Most major researchers believe that source, the National Survey on Drug Use and Health, vastly understates the increase in heroin use. But many rely on the survey anyway for a simple reason: It’s the best data they have.

Several other sources that researchers once relied on are no longer being updated or have become more difficult to access. The lack of data means researchers, policymakers and public health workers are facing the worst U.S. drug epidemic in a generation without essential information about the nature of the problem or its scale. “We’re simply flying blind when it comes to data collection, and it’s costing lives,” said John Carnevale, a drug policy expert who served at the federal Office of National Drug Control Policy under both Republican and Democratic administrations. There is anecdotal evidence of how patterns of drug use are changing, Carnevale said, and special studies conducted in various localities are identifying populations of drug users. “But the national data sets we have in place now really don’t give us the answers that we need,” he said.

Among the key questions that researchers are struggling to answer: Is the recent spike in deaths primarily the result of increased heroin use, or is it also due to the increased potency of the drug, perhaps because of the addition of fentanyl, a synthetic opioid that can kill in small doses? “Everyone thinks they know the answer” to whether fentanyl is behind the increase in deaths among heroin users, said Daniel Ciccarone, who’s a professor at the University of California, San Francisco’s medical school and studies heroin use. “Well, show me the data.
… When you don’t have data that leads to rational analysis, then what you’re left with is confusion, and confusion leads to fear, and that will lead to irrational consequences.”

Researching illicit drug use has always posed challenges. One ever-present problem: Reliable information on illegal behavior is, almost by definition, difficult to collect. But researchers point to particular limitations in data sources that would help shed light on the heroin epidemic. And they say the problems are getting worse: Some data systems that were once used by government agencies to gather information on users, consumption and illegal markets have disappeared over the past several years. Other sources that are still available are becoming more difficult to access or don’t provide a clear picture of the problem.

The National Survey on Drug Use and Health, an annual household survey, is sponsored by the Substance Abuse and Mental Health Services Administration, a division of the federal Department of Health and Human Services. Through roughly 70,000 interviews, the survey collects information nationwide on the use of tobacco, alcohol and illegal drugs, as well as Americans’ mental health. Experts who study illicit drugs say the survey is an important source for estimating the number of those who use alcohol, tobacco and, increasingly, cannabis (because of the normalization of marijuana use). But many consider it inadequate for calculating the number of users of harder drugs such as heroin, cocaine and methamphetamine, which carry a greater stigma. Moreover, the survey excludes people without a fixed address, meaning people who are homeless or transient — a category that includes many of the heaviest drug users. These factors lead experts to believe the survey significantly underreports the number of users of hard drugs in the U.S.
(A spokesman for SAMHSA said the survey doesn’t capture certain populations, including the homeless, and acknowledged that it faces other “limitations inherent in surveys.”)

Because of the well-known shortcomings of the National Survey on Drug Use and Health and other surveys that rely on self-reporting, experts often try to combine different sources to reach a more reliable estimate of total drug use. By looking at drug production and seizures, for example, they can estimate the supply of drugs that are reaching users. Fluctuations in the street price of drugs can give hints about changes in supply and demand. Each of those estimates carries its own challenges and caveats, but in theory, putting them together should give researchers a more complete picture of drug use — and the drug market more generally — than any one data point alone. “If you want to understand something about drug popularity, drug consumption, how much of a drug is going to be consumed in a given year, then you look at the economics,” Ciccarone said.

These more comprehensive efforts tend to yield estimates of drug use that are much higher than those based on user surveys. A 2014 analysis by researchers from the RAND Corp. that was conducted for the Office of National Drug Control Policy, for example, estimated that there were roughly 1 million daily or near-daily heroin users in the U.S. in 2010, more than 15 times the 60,000 chronic users reported in the National Survey on Drug Use and Health for the same year. In a related report to the White House, RAND provided information on how much money heroin users were spending on the drug, estimating that expenditures were roughly $27 billion in 2010, with much of the spending driven by daily and near-daily users. That kind of analysis has become more difficult to conduct in recent years, however.
RAND relied in large part on the Arrestee Drug Abuse Monitoring system, a federal program that conducted interviews with male arrestees within 48 hours of their arrest to collect information on drug use, treatment and market activity, among other topics, and then validated usage with urinalysis testing for 10 substances. The early version of ADAM, from the early 2000s, covered 35 jurisdictions. But the program was cut for a couple of years in the mid-2000s and then was revived on a smaller scale in 2007 before being eliminated in 2013. The Office of National Drug Control Policy, which took responsibility for ADAM in 2007, declined to comment on the record for this story.

Experts who used ADAM said that interviewing and testing arrestees provided insights not available through other sources. ADAM was the only national data source on individual users’ expenditures and consumption, two important metrics for understanding drug markets. The loss of the program left a particular gap in researchers’ understanding of heroin use because it had provided data on a crucial and hard-to-study group: heavy users, who often drive the overall market. Beau Kilmer, co-director of the Drug Policy Research Center at RAND, said information collected on heavy users is important for evaluating enforcement efforts and making decisions about treatment availability. “In order to generate these estimates for heavy users, there isn’t one data source you can use,” Kilmer said. “We think about how do we combine the insights for multiple data sets, and they have their flaws, but by far the most important data set we used was ADAM.”

Another source of data that is no longer being updated is the Drug Abuse Warning Network, which was once a project of SAMHSA. It monitored drug-related hospital emergency room visits and provided insight on drug use in metropolitan areas.
Rosalie Liccardo Pacula, another co-director of RAND’s Drug Policy Research Center, said a crucial advantage of DAWN was that it provided data at the local level, allowing researchers to understand drug-use patterns in different parts of the country. In an email, a SAMHSA spokesman said the agency is working with the CDC to develop a new system for collecting data from emergency rooms that will replace DAWN and combine other data sets. President Trump’s proposed 2018 budget, however, doesn’t provide funding for the new program; the spokesman said the agency will continue to “explore the viability of this approach to collect these data in the future.”

In the meantime, researchers say the loss of DAWN has left a big hole in their understanding of the opioid problem. “We know the importance of emergency rooms right now with the opioid epidemic because that’s where people are showing up,” Carnevale said. “Right now, there’s no systematic collection by the federal government for emergency room data — again, another major loss in our knowledge base.”

Researchers pointed to a third government database that could help them examine drug trends — if they had better access to the information. The Drug Enforcement Administration collects information on the drugs it obtains through seizures and undercover buys by its agents in a database known as the System to Retrieve Information from Drug Evidence. The database isn’t designed for researchers — it is an internal inventory and can therefore be influenced by which drugs law-enforcement authorities are focused on at the time — but the data has nonetheless been valuable to researchers because it contains information on the price, purity and composition of seized drugs. In some cases, Ciccarone and other researchers say, the information has become more difficult to obtain.
Pacula, who has worked with the STRIDE database and DEA data extensively, said the agency a few years ago became more restrictive with data requests after becoming concerned that researchers were improperly sharing raw data. Pacula said researchers can still access the data they need as long as they are careful about their requests; others, however, were more critical. “The DEA has basically substantially reduced access to their data, which taxpayers are paying for that collection of data,” Carnevale said. When asked about the access issues that the researchers described, a DEA spokeswoman wrote in an email that data entry into STRIDE had stopped in 2014 and was replaced with the National Seizure System, which isn’t available to the public and requires a formal request under the Freedom of Information Act to access. She declined to comment further on the researchers’ concerns.

Experts argue that with the heroin epidemic showing no signs of slowing down, the government should bring back data collection systems like ADAM and DAWN and ideally expand them to include more areas and broader populations. (ADAM, for example, sampled only male arrestees in urban areas.) Researchers say the cost of these programs is relatively minor, pointing to ADAM’s peak annual cost of about $10 million. That’s about a fifth of what it costs to fund the National Survey on Drug Use and Health each year.

The lack of reliable national data is hindering efforts to tackle the spread of heroin, experts say. Some big cities such as New York have embarked on their own data-collection efforts, something the many smaller cities and towns ravaged by heroin overdoses likely can’t afford to do. Researchers say they need the federal government to help fund and coordinate efforts to collect data from local coroner’s offices, emergency rooms and crime labs so that local officials know where and how to direct their efforts.
And, more broadly, they say they need a wide range of groups — researchers, public health workers, law enforcement officers, as well as federal, state and local governments — to work together to understand the heroin epidemic and to figure out how to stop it. “We need to have public safety, public health partnerships,” Ciccarone said. “We need the government to be forthright. We need it to think that researchers and public health officials are on the same side as the people who also want to stop the drugs.”


The Tangled Story Behind Trump’s False Claims Of Voter Fraud

Three-thousand Wisconsinites were chanting Donald Trump’s name. It was Oct. 17, 2016, just after the candidate’s now-infamous “locker-room” chat with Billy Bush became public knowledge. But the crowd was unfazed. They were happy. And they were rowdy, cheering for Trump, cheering for the USA, cheering for Hillary Clinton to see the inside of a jail cell.

The extended applause lines meant it took Trump a good 20 minutes to get through the basics — thanks for having me, you are wonderful, my opponent is bad — and on to a rhetorical point that was quickly becoming a signature of his campaign: If we lose in November, Trump told the supporters in Green Bay, it’ll be because the election is rigged by millions of fraudulent voters — many of them illegal immigrants.

That night wasn’t the first time Trump had made this accusation, but now he had statistics to support it. His campaign had recently begun to send the same data to reporters, as well. In both cases, one of the chief pieces of evidence was a peer-reviewed research paper published in 2014 by political scientists at Virginia’s Old Dominion University. The research showed that 14 percent of noncitizens were registered to vote, Trump told the crowd in Green Bay, enough of a margin to give the Democrats control of the Senate. Enough, he claimed, to have given North Carolina to Barack Obama in 2008. “You don’t read about this, right? Your politicians don’t tell you about this when they tell you how legitimate all of those elections are. They don’t want to tell you about this,” Trump said. The crowd cried out in shock and anger.

But that’s not what the research showed. The 14 percent figure quoted by Trump was actually the upper end of the paper’s confidence interval — there’s a 97.5 percent chance that the true percentage of noncitizens registered to vote is lower than that. And, just as he got the data wrong, Trump also failed to tell his audience the full story behind the study.
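The statistical point here — that the top of a confidence interval can sit well above the estimate itself — is easy to illustrate. Below is a minimal sketch using the standard normal-approximation (Wald) interval for a proportion; the sample numbers are made up for illustration and are not the Old Dominion paper’s actual data, and `proportion_ci` is our own helper name:

```python
from math import sqrt

def proportion_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Normal-approximation (Wald) 95 percent confidence interval for a proportion."""
    p = successes / n
    margin = z * sqrt(p * (1 - p) / n)
    # Clamp to [0, 1] since a proportion can't leave that range.
    return max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical: 30 of 350 surveyed noncitizens report being registered to vote.
low, high = proportion_ci(30, 350)
print(f"point estimate: {30 / 350:.1%}, 95% CI: [{low:.1%}, {high:.1%}]")
# point estimate: 8.6%, 95% CI: [5.6%, 11.5%]
```

Quoting the 11.5 percent endpoint as "the" rate would overstate the point estimate by about a third — the same move that turned the paper's interval into Trump's "14 percent."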
By the time it got into his hands, that Old Dominion paper had already been heavily critiqued in the scientific community. The analysis hinged on a single, easy-to-make data error that can completely upend attempts to understand the behavior of minority groups, such as noncitizen residents of the United States. And even the paper’s authors say Trump misinterpreted their research. A couple of days after the Green Bay speech, one of them wrote up a blog post that countered much of what Trump had said, but it was a whisper in a roaring stadium.

Months later — and despite having won the ostensibly rigged election — the Trump administration is still citing that paper as proof that fraudulent voting (especially by noncitizen immigrants) is widespread and alters electoral outcomes. On Thursday, the administration confirmed to several news outlets that it would be establishing a commission to investigate this fraud.

The back and forth on this single study has been seen as a liberal-vs.-conservative smackdown where both sides — Voter fraud is a myth! Voter fraud is rampant! — claim to have the backing of absolute scientific fact. But that wasn’t what it was meant to be about. Instead, the paper that is probably Trump’s most celebrated evidence of undocumented immigrants voting began as the work of an undergraduate Pakistani immigrant, who just wanted to know why people in her community who could vote, didn’t. Neither she, nor her adviser and co-author, ever expected their work to end up in the mouth of the president. The political scientists who have rebutted the paper didn’t foresee how easily data on voter behavior could lead research astray. And everyone involved says their work and words have been misinterpreted and misused, twisted toward political ends they don’t support. It’s a glaring example of just how easy it is, in a polarized political climate, for scientists to lose control of scientific results.
The Author

Gulshan Chattha would end up with a precocious scientific publication that went viral through the popular media, her words morphed into talking points that screamed from headlines across the country. But, before any of that happened, her research on immigrants and voting was just about her dad, herself and the threads of love and civic responsibility that tied them to each other.

Her father, Mohammad Afzal Chattha, had grown up in Pakistan, where he dropped out of elementary school to take a job that helped support his parents and siblings and, later, his wife and their children. In 1993, the family moved to the United States. Gulshan was just a year old. Her father opened a gas station and would eventually send his three kids to college. Through duty and perseverance, he created an entirely different existence from the one he was born into.

And part of Mohammad Chattha’s American dream was voting. He became a U.S. citizen the year he immigrated, and, his daughter said, he has spent the 24 years since then defending his role in the electoral process — against the statistics that show most naturalized citizens don’t exercise their right to vote, against the doubts of family members who think voting makes no real difference to political outcomes, against forgetfulness and complacency and the long list of bureaucratic and administrative hurdles that make it easier to just stay home. “He said, ‘You know what, you became a U.S. citizen. It’s your duty,’” Gulshan Chattha told me.

So when she got the chance to study the voting habits of naturalized citizens, the younger Chattha jumped at the opportunity. In 2013, she was an undergraduate in political science at Old Dominion. She wanted to know why some people came to this country with a mindset like her father’s while many others felt voting was optional. But her research ended up leading her in a different direction.
“Do non-citizens vote in US elections?,” the paper Chattha published in 2014 with her professor, Jesse Richman, and David Earnest, a dean of research at Old Dominion, focused on evidence from a massive data set of voter surveys that, to Richman and Chattha, suggested noncitizen immigrants might be voting at a higher rate than most experts thought. The path from parental tribute to Trump talking point began with that pivotal shift in Chattha’s research focus — and put her on a path toward an understandable, but crucial, data analysis error.

The problem starts with the sample population. About 7 percent of the people who live in the United States are noncitizens, roughly half of whom are undocumented. They come here on student visas, they come as refugees. They teach, they heal the sick, they build houses and care for children. They commit crimes — though, research suggests, at a rate lower than that for citizens. Some are immigrants who just haven’t quite yet gotten around to the process of naturalization. Others travel as migrants, working to support families thousands of miles away. None of them are supposed to vote in our elections. But we know, on rare occasions, a few do. The question is, really, how rare is “rare” — and nobody knows for sure. That’s something you need to understand up front. It may seem like this should be easily solvable, but one thing this paper does, if it does nothing else, is demonstrate how quickly apparently straightforward answers can fall apart.

The Error

Noncitizens who vote represent a tiny subpopulation of both noncitizens in general and of the larger community of American voters. Studying them means zeroing in on a very small percentage of a much larger sample. That massive imbalance in sample size makes it easier for something called measurement error to contaminate the data. Measurement error is simple: It’s what happens when people answer a survey or a poll incorrectly.
If you’ve ever checked the wrong box on a form, you know how easy it can be to screw this stuff up. Scientists are certainly aware this happens. And they know that, most of the time, those errors aren’t big enough to have much impact on the outcome of a study. But what constitutes “big enough” will change when you’re focusing on a small segment of a bigger group. Suddenly, a few wrongly placed check marks that would otherwise be no big deal can matter a lot.

This is an issue that can affect all kinds of studies. Say you have a 3,000-person presidential election survey from a state where 3 percent of the population is black. If your survey is exactly representative of reality, you’d end up with 90 black people out of that 3,000. Then you ask them who they plan to vote for (for our purposes, we’re assuming they’re all voting). History suggests the vast majority will go with the Democrat. Over the last five presidential elections, Republicans have earned an average of only 7 percent of the black vote nationwide. However, your survey comes back with 19.5 percent of black voters leaning Republican. Now, that’s the sort of unexpected result that’s likely to draw the attention of a social scientist (or a curious journalist). But it should also make them suspicious. That’s because when you’re focusing on a tiny population like the black voters of a state with few black citizens, even a measurement error rate of 1 percent can produce an outcome that’s wildly different from reality. That error could come from white voters who clicked the wrong box and misidentified their race. It could come from black voters who meant to say they were voting Democratic. In any event, the combination of an imbalanced sample ratio and measurement error can be deadly to attempts at deriving meaning from numbers — a grand piano dangling from a rope above a crenulated, four-tiered wedding cake. Just a handful of miscategorized people and — crash!
— your beautiful, fascinating insight collapses into a messy disaster. Try playing around with this hypothetical in our interactive. There are a few things you should notice. First, the difference between reality and the survey results gets bigger the larger your measurement error is — that’s obvious enough. (We’re assuming here that 60 percent of the voters miscategorized as black will vote Republican.) But you can make that error rate matter less by making your sample population a larger share of the overall poll. If you’re studying a group that makes up 30 percent of the survey, instead of 3 percent, then that same 1 percent rate of measurement error barely registers. Finally, if your sample population stays at a minuscule 3 percent of the total, you can’t make the reality and the poll results match just by surveying more people. That’s a really important point. Scientists are conditioned to think of larger polls as better polls. But if what you’re studying is a handful of outliers, even an 80,000-person survey can’t save your results from the risks of measurement error. And if the error rate is big enough — which could be, relatively speaking, still very small — you can end up saying something with a lot of statistical confidence … and still be wrong. That’s what makes the risks posed by a skewed sample ratio and measurement error hard to spot, even for scientists who work with data every day. The combination distorts the statistical realities researchers are used to dealing with and the mistakes they’re used to preventing. Numerous people in political science — professionals with tenure, not just students such as Chattha — told me that few of their peers are really aware enough of the baby grand swinging over their heads. Chattha’s paper has been misinterpreted by Trump and his surrogates, and it’s been wielded as a political weapon. That blowback changed Chattha’s understanding of how her fellow Americans see her and her community.
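The survey hypothetical above can be checked with a few lines of arithmetic. This sketch uses the rates given in the passage (3 percent subgroup, 7 percent true Republican share, 1 percent error rate, and the stated assumption that 60 percent of miscategorized voters lean Republican); the function name is mine.

```python
# Toy model of the article's hypothetical: what Republican share do we
# *observe* among respondents recorded as black, once a small fraction
# of the much larger non-black group is miscategorized into the sample?

def observed_republican_share(n_survey, true_share, true_rep_rate,
                              error_rate, error_rep_rate):
    """Expected Republican share among respondents *recorded* as black."""
    n_true = n_survey * true_share                       # correctly recorded
    n_error = n_survey * (1 - true_share) * error_rate   # miscategorized
    republicans = n_true * true_rep_rate + n_error * error_rep_rate
    return republicans / (n_true + n_error)

share = observed_republican_share(3000, 0.03, 0.07, 0.01, 0.60)
print(f"{share:.1%}")  # prints 19.9%, right around the article's 19.5 percent

# A bigger survey doesn't help: the distortion is a ratio, so the survey
# size cancels out. An 80,000-person poll gives the exact same share.
big = observed_republican_share(80_000, 0.03, 0.07, 0.01, 0.60)
assert abs(share - big) < 1e-12
```

Changing `true_share` to 0.30 in the same function brings the observed share back down to roughly 8 percent, which is the "barely registers" case the passage describes.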
But at the heart of this story is a mistake — a mistake that almost no one in the political science world was watching out for.

The Survey

To study the voting behavior of naturalized citizens, Chattha began working with data from the Cooperative Congressional Election Study, a national survey that has been administered online every year since 2006 to tens of thousands of people who live in America. The CCES asks people for their basic demographic information, their political opinions and their voting habits. Its size is what makes it special to political scientists. Historically, those researchers have had to draw on surveys that had something on the order of 1,000 respondents. In those voter surveys, noncitizens were unlikely to show up at all or, if they did, they were present in numbers so small that researchers couldn’t draw any statistically significant conclusions. The larger CCES data set changed that. Suddenly, the group of noncitizens swept up in a survey of voters was large enough to be useful to research. And it was large enough that Chattha was able to spot that even smaller subpopulation that she didn’t expect to find: respondents who reported not being citizens but who also reported voting. That seemed unbelievable to her. She knew that when there isn’t a presidency on the line, most people who can vote, don’t. And that even in presidential election years, only about half of the voting-age population casts a ballot. “I wonder if they even know,” she remembered thinking. “I wonder if they know they don’t have citizenship.” That thought was grounded in personal experience. Chattha came to the U.S. as a baby.
She can’t remember a time when her father wasn’t a citizen. Her younger brother was born into his citizenship. It wasn’t until she started taking civics classes that Chattha realized that her father and brother were members of a club she didn’t belong to. She was in seventh grade and got her citizenship that same year. Her story isn’t unique; some other people are equally confused about the details of their immigration status and what it means for voting rights — though few get all the way to the ballot box. In 2012, a group of college student journalists reviewed the previous 12 years’ worth of voting fraud cases. Out of 2,068 incidents, they found 56 that involved noncitizen voters, and they were told by their sources that confusion about status was a major factor. You could see that confusion in action in February, when a 37-year-old Texas woman, who had legal permanent residency status, was convicted of illegally voting in two elections. She was sentenced to eight years in prison. Her lawyer told reporters that she had a sixth-grade education and didn’t know she wasn’t supposed to vote. Of the 32,800 people surveyed by CCES in 2008 and the 55,400 surveyed in 2010, 339 people and 489 people, respectively, identified themselves as noncitizens. Of those, Chattha found 38 people in 2008 who either reported voting or who could be verified through other sources as having voted. In 2010, there were just 13 of these people, all self-reported. It was a very small sample within a much, much larger one. If some of those people were misclassified, the results would run into trouble fast. Chattha and Richman tried to account for the measurement error on its own, but, like the rest of their field, they weren’t prepared for the way imbalanced sample ratios could make those errors more powerful. 
Stephen Ansolabehere and Brian Schaffner, the Harvard and University of Massachusetts Amherst professors who manage the CCES, would later tell me that Chattha and Richman underestimated the importance of measurement error — and that mistake would challenge the validity of the paper. But that was yet to come. In 2013, Chattha and Richman concluded that, in some states and in some tight races (and if noncitizens all voted the same way), this rate of noncitizen voting could be enough to change the outcome of an election. Chattha was (and still is) confused about why these people would risk jail and deportation in order to vote. But she isn’t a professional scientist, and she told me in February that she was uncomfortable trying to parse the details of statistical analysis she last worked on three years earlier. She referred questions about the data to Richman, who, for his part, had seen the results as an opportunity: He hoped that this data would depolarize the debate about fraudulent voting and voter ID laws. Hindsight might make that sound a touch naive, but Richman knew something a lot of talking heads don’t seem to — it’s really difficult to study voter fraud. There isn’t much data on it in the first place. Richman knew the study wasn’t perfect, that it involved some assumptions and extrapolations that might be disproved by future research. But he reasoned that having some numbers, any numbers — even if they came with some big caveats — might help move the discussion from ideology to fact. Neither he nor Chattha intended the paper to be seen as definitive proof of voter fraud. Neither even expected many other people to read it.

The Media

Research that an undergrad does, even with a professor, doesn’t tend to get published or to get this much attention when it does. But Chattha’s paper, which was published in the December 2014 issue of the journal Electoral Studies, has been referenced by nine other papers in just over two years, according to Google Scholar.
That’s more than is typical for that journal, and for political science as a whole. For better or worse, Chattha’s paper has been influential. And that fact is probably tied to Richman’s decision to write an essay about it for The Washington Post’s Monkey Cage blog. Multiple people I interviewed for this story, including Ansolabehere and Schaffner — who, along with their co-author Samantha Luks of the survey firm YouGov, went on to publish their own paper in Electoral Studies critiquing Chattha and Richman’s work — said they probably never would have heard of the paper without that essay. Published in October 2014, after Chattha had graduated and two years before Trump’s rally in Green Bay, Richman’s essay was titled “Could non-citizens decide the November election?” Not only did the article turn the paper into a media sensation, it also helped to create a series of misconceptions Richman would later struggle to correct. “That title misled people to a degree,” Richman told me. “The title suggested a ‘yes’ answer, where our ultimate conclusion was really one more that they probably wouldn’t. Maybe if there was a really, really close race they might, but otherwise [they] probably wouldn’t have much effect on the outcome of the elections.” John Sides, the editor of the Monkey Cage blog and a political scientist, said that the headline was based on the opening lines of Richman’s essay (“Could control of the Senate in 2014 be decided by illegal votes cast by non-citizens?”) and that he wasn’t aware Richman thought it was misleading. Either way, the headline, the misconceptions and the data mistake all came together to create a perfect political storm. This isn’t the only time a single problematic research paper has had this kind of public afterlife, shambling about the internet and political talk shows long after its authors have tried to correct a public misinterpretation and its critics would have preferred it peacefully buried altogether. 
Even retracted papers — research effectively unpublished because of egregious mistakes, misconduct or major inaccuracies — sometimes continue to spread through the public consciousness, creating believers who use them to influence others and drive political discussion, said Daren Brabham, a professor of journalism at the University of Southern California who studies the interactions between online communities, media and policymaking. “It’s something scientists know,” he said, “but we don’t really talk about.” These papers — I think of them as “zombie research” — can lead people to believe things that aren’t true, or, at least, that don’t line up with the preponderance of scientific evidence. When that happens — either because someone stumbled across a paper that felt deeply true and created a belief, or because someone went looking for a paper that would back up beliefs they already had — the undead are hard to kill. In Chattha’s case, her data came directly from an established, respected source, and the mistake that undermined it was so easy to make that even the peer reviewers who critiqued the paper before publication didn’t catch it. The misinterpretation that followed was a flood Chattha could do little more than watch as it rose around her. But the classic example of zombie research is far less defensible. The idea that vaccines cause autism is a story that began with a single research paper that has since been retracted because of fraud and conflicts of interest, its author stripped of his medical license. Although it has been discredited in the scientific community, believers keep on believing. Social media keeps on sharing. 
The stickiness of erroneous beliefs such as a connection between autism and vaccines is often cited as proof of a growing mistrust of science, as an institution, in American culture, but that’s probably not the most useful framing, said Dominique Brossard, professor of science and technology studies at the University of Wisconsin-Madison. Overall, Americans don’t trust science and scientists any less than they did 40 years ago — around 40 percent of us report “a great deal of confidence” in science, according to the National Science Foundation’s science and engineering indicators. That’s enough to make science the second-most trusted institution in America, after the military. Add in the people who have at least “some confidence” in science, and you get 90 percent of Americans — a group that probably shouldn’t be framed as being at war with science. Instead, Brossard said, these cases of people believing incorrect things have more to do with factors separate from their trust in science. “Political ideology,” she suggested, “religiosity.” Those nonscientific beliefs then get entangled with how they consume media. For instance, say you have a political belief system that leads you to be skeptical of government-mandated vaccination. If social media then handed you a paper showing those vaccination programs to be hazardous, then that science — a trusted source of information — would ring particularly true, especially devoid of context about other studies that show the opposite. Your existing sense of risk would make the paper urgent and would make you more likely to share it. The more you believe in the risk and the more your friends believe it, the more suspicious you’re likely to be of any attempts to downplay that risk. What you get, Brossard said, is a perfect social machine for amplifying an erroneous interpretation of an idea. 
This is essentially what happened with Chattha’s paper, and the problem is compounded, scientists said, by the fact that most Americans don’t understand what’s going on when scientists critique each other. No single scientific research paper, no matter how well done, is supposed to serve as absolute proof of anything. Chattha was horrified when advocates of a state voter ID law contacted her in the hope that she would testify as proof of its necessity. You shouldn’t create a law based on one study, she said. “Science is this never-ending process. I don’t think science is about getting the answer right now. It’s about getting closer and closer to the answer.” In that sense, what has happened with Chattha’s paper in the academic world is an example of how science should function. Chattha and Richman published a paper they said presented evidence for a hypothesis. Ansolabehere, Luks and Schaffner read it, saw flaws and published a critique. Richman is working on a rebuttal to their paper. Over time, this back-and-forth pushes both sides to examine their work, defend it and inch science closer to the capital-T Truth. But there are all kinds of incentives that can interfere with that, including scientists’ need to publish their own, novel work rather than spend time critiquing someone else’s. As a result, the real-world process of science usually doesn’t happen in such a textbook way as it did with Chattha’s paper, Brabham and Brossard said. “This occurrence is rare, and it’s really kind of beautiful,” Brabham said of the exchange. “But nobody [in the public] understands what is happening, so they just see ‘I’m right.’ ‘No, I’m right.’” And you can see this in the way the media and political partisans have gone on to misinterpret the rebuttal paper Ansolabehere, Luks and Schaffner wrote, just like they misinterpreted Chattha and Richman’s. 
The other three researchers simply wanted to make it clear that the original paper is flawed by the combination of skewed sample ratio and measurement error, and that it can’t be used to prove that noncitizens are voting. But that isn’t the same thing as saying illegal voting never happens, Ansolabehere said. The number of fraudulent voters isn’t zero. And he’s frustrated by people using his paper to promote that idea. “We have evidence from pieces of the electoral system that [fraudulent voting] is very small. But we don’t know how small,” he said. Other people have tried to figure this out, and they failed, too. In the bigger context of the scientific process, Chattha’s study is part of an ongoing and still unsuccessful effort to illuminate a dark hole in our knowledge.

The Scientists

And Chattha’s paper could still end up being very important — not for what she and Richman published but because of the mistake they made on the way there and what it means for scientists who study voter behavior. The data set Chattha and Richman used — that 50,000-person survey — was Ansolabehere, Luks and Schaffner’s baby. Their large survey gave Chattha and Richman the power to illuminate small subgroups that would otherwise languish in the unanalyzed darkness. But it also set up a statistical Piano of Doom-style risk. And an entire research field — with the exception of those political scientists who specialize in data methodology — was generally unprepared to deal with that risk, which is becoming more and more relevant as more large data sets make it possible to pull out statistically significant, but also itsy bitsy, subgroups for close inspection. Only when the problem was staring them in the face from the website of The Washington Post were they able to see it clearly.
Remember that when you have a situation like this, where you’re studying a tiny subpopulation within a much larger group, even an extremely small rate of measurement error can alter your results. So it becomes crucial to know the error rate in your data. How often did people accidentally click “noncitizen” when they meant “citizen”? At what rate did noncitizens unintentionally report that they’d voted when they hadn’t? Nobody knows what the rate of measurement error was for the 2008 and 2010 CCES data Chattha and Richman analyzed. The pair tried to account for it by comparing other kinds of data collected by other surveys of noncitizens — including race, state of residence and how people answered other survey questions about immigration issues — with the noncitizens in the CCES data. Based on that, they decided the measurement error was too small to matter. But Luks, Schaffner and Ansolabehere found evidence that, in this case, small was still significant. In particular, they noted multiple cases of people who marked themselves as citizens in 2010 but, on the 2012 edition of the survey, marked themselves as noncitizens, and vice versa. Moreover, this rate of error that we do know exists between 2010 and 2012 — just 0.1 percent — turned out, by itself, to be enough to account for all the noncitizen voters in Richman and Chattha’s 2010 sample. In other words, there might not have been any noncitizen voters that year. And the actual error rate could be even higher. Ansolabehere, Luks and Schaffner’s paper doesn’t determine the exact rate of measurement error, but it does show that Richman’s assumptions about it were deeply wrong. Their analysis has also demonstrated how easy it is to make those wrong assumptions. So easy, it seems, that political scientists aren’t sure what to do with research that’s flawed in this way. 
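A back-of-envelope calculation with the figures quoted in this section shows why that 0.1 percent flip rate is enough. All the numbers below come from the article (55,400 respondents in 2010, 489 self-identified noncitizens, 13 reported noncitizen voters, a 0.1 percent citizen-to-noncitizen flip rate); the turnout framing at the end is my own.

```python
# How many actual citizens would a 0.1 percent misclassification rate
# sweep into the 2010 "noncitizen" group, and what fraction of them
# would need to have voted to account for every reported noncitizen voter?

respondents = 55_400
self_reported_noncitizens = 489
reported_noncitizen_voters = 13
flip_rate = 0.001  # citizens who mistakenly mark themselves noncitizens

citizens = respondents - self_reported_noncitizens
misclassified_citizens = citizens * flip_rate
print(f"misclassified citizens: {misclassified_citizens:.0f}")  # prints 55

needed_turnout = reported_noncitizen_voters / misclassified_citizens
print(f"turnout needed: {needed_turnout:.0%}")  # prints 24%
```

Since people who sit through a long political survey vote at rates well above 24 percent, misclassified citizens alone can plausibly account for all 13 "noncitizen voters" in the 2010 sample, which is the critique paper's point.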
To Schaffner, the answer is relatively simple: Someone, either the journal or a peer reviewer, should have seen what a weird result this was and contacted him or Ansolabehere. Maybe then the article could have been corrected or simply never published. But Harold Clarke, the editor of Electoral Studies, said it wasn’t normal practice to contact the creator of data that a study was based on. To him, Chattha and Richman’s paper was “just a very standard sort of thing.” Controversial, yes, but still worthy of publication. Today, while he defends his and Chattha’s findings, Richman really does buy his critics’ concern that, in some cases, measurement error can become more powerful than most scientists give it credit for. “Where I come down on this is that measurement error needs to be taken very seriously,” he said in an email. Chattha and Richman’s paper represents one of the first big examples of political scientists dealing with the statistical Piano of Doom in a very public way. And everybody involved agrees that the issue hasn’t gotten enough attention. Schaffner is certain there are other published papers whose results are marred by the combination of imbalanced sample ratios and measurement error — it’s just that nobody has caught them yet. He and Ansolabehere are adding warnings to the user guides for the CCES, and they’re writing a paper that they hope will also draw attention to the issue. The realization of how important measurement error can be has shifted Richman’s research focus, as well; he’s now working on a paper about the risks of spurious correlations in analyses of big data. Who knows how long they all would have gone on not noticing how dire this particular mistake could be, Schaffner said, if not for Chattha’s curiosity. Chattha found some weird statistical results that didn’t match up with her lived experience or with what researchers know about American voting habits and prosecuted cases of voter fraud.
She published those results, and they got caught up in a media-driven amplification of fears about the integrity of the electoral process. The research of an immigrant — who has spent the last year realizing, to her dismay, that many of her fellow citizens don’t think of her as a “real” American — was seized as a clobber text by people who want to make sure only “real” Americans vote. And all of that had to happen to make political scientists aware of how easily their data could fool them. “Politics ain’t beanbag,” Richman told me. “It’s not always about the truth, and, ideally, the scientific enterprise is about that.” There’s a fundamental disconnect here that means Gulshan Chattha is not going to be the last person to watch helplessly as politicians squeeze and stretch her findings until they take on a shape that’s no longer recognizable. In fact, the researchers I spoke to said that this was almost a natural consequence of the drive to produce science that is relevant to the real world and available to people outside the ivory tower. But those same people also said that scientists generally aren’t prepared for this eventuality and don’t know how to deal with the fact that scientific research is a baby bird that will, at some point, hop to the edge of the nest and jump out. Or, rather, everyone is ready to see the fledgling soar. They aren’t prepared for the times when it crashes to the pavement instead. Ansolabehere and Schaffner said they weren’t prepared for their data to inspire other researchers toward investigations that carry a big risk of failure. Meanwhile, Richman told me, he wasn’t prepared for his essay, about a topic most people would consider politically incendiary, to burst into flame. If he had it to do over again, he told me, he probably wouldn’t have written that Washington Post article. And because he wasn’t prepared for the explosion, he couldn’t protect his student. 
Writing the paper initially gave Chattha a big high — it was exciting to discover that she could play a part in the scientific community, publishing and sharing ideas with other researchers. After the initial project, she took a detour from her law school plans to get a master’s degree in political science — maybe, she thought at the time, she wanted to be a scientist instead. But everything that came afterward — from the initial reaction to Richman’s essay, to Donald Trump’s fiery speeches about her data — changed her mind. Today, she’s finishing up a law degree. “I didn’t have any control,” Chattha told me, and her feelings were echoed by her professor and by her critics. Both sides in an academic debate, burdened by the same sense that, somehow, the sharp knife of science shouldn’t have lost a gunfight with politics. But, if there’s anything we can learn from this story, Brossard said, it’s that being a part of the social world and the political world is part of a scientist’s job description — their work doesn’t exist separate from its interpretation. Or, as Chattha put it, “Once something is published, there’s no taking it back. It’s no longer yours.” UPDATE (May 11, 10:30 p.m.): This article has been updated to include all three authors in all references to “The perils of cherry picking low frequency events in large sample surveys” by Stephen Ansolabehere, Samantha Luks and Brian Schaffner. CORRECTION (May 12, 1:40 p.m.): A previous version of this article implied that David Earnest is the sole dean of research at Old Dominion. He is one of several. Interactive graphic by Matthew Conlen and Andrei Scheinkman


How Do We Know When A Hunk Of Rock Is Actually A Stone Tool?

The first time archaeologist John Shea looked at what might be the oldest stone tools ever found, he almost blew them off. “Are you kidding me?” he remembers asking Sonia Harmand, his colleague at Stony Brook University who found the tools in 2011 along the shores of Kenya’s Lake Turkana, at a site now known as Lomekwi 3. Harmand’s analysis suggested that the tools were 3.3 million years old — 700,000 years older than the previously known “oldest” tools. And they were huge. The mean weight of the pointy flakes — the cutting tools that are “flaked” off larger rock cores — was 2 pounds. In comparison, the next-largest group of ancient hominin tool flakes have a mean weight of 0.06 pounds. It seemed like a lot for the hands of a primate that was probably half the size of the average modern human. Maybe, this time, a rock was just a rock. But then Shea looked at the Lomekwi tools more closely, and he saw multiple fractures, all running in the same direction — a telltale sign that the flakes weren’t just the lucky product of one rock bumping into another as it tumbled off a cliff or rolled through a stream. Something had pounded one rock against another in the same place over and over and over … until a sharp piece broke off.

The archaeologists who study lithic technologies — “stone tools” to us lay primates — are used to making these kinds of determinations. For generations they’ve been sorting out what is a tool, what isn’t, and what those tools mean, and they’ve been doing it largely by sight, by context of the objects and the site where they were found, and by experience. They learn the signs. They use dating techniques and look for evidence of how tools were being used. They test hypotheses by making their own stone tools and using them to see, for instance, how an ax might wear down after it’s hacked through a few deer femurs. “But here’s the problem,” Shea said. “Archaeologists don’t have measurements to answer these questions.
The decision rests on visual assessment. I’ve made stone tools for 45 years, started as an 11-year-old Boy Scout. So is this an artifact or not? Truthfully, I have no idea. I can’t tell you based on measurements, and my opinion shouldn’t count for science.”

Archaeological analysis always involves some amount of informed interpretation, and not all archaeologists are this skeptical of the traditional methodology. Harmand, for one, thinks that visual assessment is crucial. But only during the past decade or so has there been any real alternative. Called geometric morphometrics, it’s a suite of digital tools that can turn one of Lomekwi’s 4-inch-wide flakes of volcanic basalt into a 2-D or 3-D map of its curves, cracks, shape and size. These maps could be put into databases that could, eventually, allow scientists to compare the minute variations among tools from different periods and different locations. This technology is still in the early stages of adoption. Many tools have been scanned; far more have not. It’s primarily used to give archaeologists around the world access to artifacts they would otherwise have to travel long distances to see. People such as Shea hope that it will, one day, allow archaeologists to more definitively answer questions that today are addressed using circumstantial evidence and logical conjecture.

The central issue is not necessarily whether scientists can tell a tool from a naturally broken rock. The more complex question is how to tell the difference between tools intentionally created (probably by humans or our ancient ancestors) and those that another primate made by accident. The difference matters. Once upon a time, we considered tool use the factor that separated animals and humans. As scientists discovered tool use among more and more animals — including chimpanzees and octopuses — the goal post shifted.
Today, as far as anyone yet knows, humans and our ancestors are the only animals that create sharp-edged tools specifically for cutting things. Humans have taught captive bonobos to make stone cutting tools, but we have no evidence of any living primates intentionally doing this in the wild.

But the line is awfully fuzzy. Especially when you’re looking at really ancient tools such as those at Lomekwi. Somebody somewhere had to invent stone tool making. And “probably the first tools ever made by a hominin were made by accident,” Harmand said. Imagine your less-upright ancestor going to town on a nut with a rock, trying to get at the good stuff inside. “Of course, you will sometimes hit the stone with your other stone and then detach something,” she said. And although hominins seem to be the only primate to think, “Hey, I’ll take that bit of stone and go butcher me a hog,” there’s evidence that other primates are still accidentally making “tools” — and that those tools are very difficult to distinguish from the ones archaeologists attribute to our predecessors.

Last month, Oxford archaeologist Tomos Proffitt published a paper that documents wild capuchin monkeys in Brazil cheerfully screwing with a whole scientific field by intentionally picking up rocks and banging them against other rocks over and over and over in the same place … until sharp pieces broke off. There’s no evidence that the monkeys were trying to make tools, and they generally ignored the flakes. Their behavior could be an aggressive display or possibly a means of getting at lichens or trace minerals in quartz dust. Either way, the accidental “tools” they produce from intentional rock-banging are virtually indistinguishable from what you might find in 3-million-year-old East African soil. “If you brought this to me [and said it came from] East Africa, I would have told you, ‘You’ve got a new Oldowan. Where’d you find it?’” Harmand said.
“I’m convinced many of my colleagues would confirm.”

The context in which the tools or “tools” are found is really the only way to tell the difference. Where did they come from? How old are they? Were the sharp flakes being used for anything, like butchering, which often leaves cut marks on nearby bones?

Geometric morphometrics could shed some light on this. For instance, what would Shea do about the capuchin monkey “tools” and their similarity to ancient hominin tools? “One of the claims is that these objects are indistinguishable from artifacts manufactured by humans. Well, let’s see. You could scan them, the ones we’ve watched be modified by capuchins. Then scan early human stone tools, and compare them. Do they differ in specific ways, and is that statistically significant?” he said. “We can argue about what it looks like, or we can measure it.”

Proffitt, the scientist who wrote the capuchin paper, is one of the archaeologists who have used geometric morphometrics for research. One paper he worked on investigated whether different methods of making stone tools might leave different signatures in the rock. The freehand technique is similar to what the capuchin monkeys do: Hold a rock in each hand and mash them together. In what’s called bipolar knapping, on the other hand, you put your stone tool core on a stationary anvil rock on the ground and then hit the core with another rock. That may not seem like a big difference, but, theoretically, bipolar knapping gives you more control. “It speaks to intelligence and cognitive processes going on,” Proffitt said. “If you have only freehand knapping for a million years, and then you have bipolar knapping, you [now] have more varied behavior.”

The team digitized hundreds of flakes and examined their 2-D outlines — and found no statistically significant differences between the ones made with the freehand technique and those made with bipolar knapping. It’s possible that a 3-D model would show something more.
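The comparison Shea proposes — scan both groups, measure, then ask whether the difference is statistically significant — boils down to a two-sample test on shape descriptors. Here is a minimal sketch using made-up "elongation" scores and a simple permutation test; the actual study used its own descriptors and statistics:

```python
import numpy as np

rng = np.random.default_rng(0)

def permutation_test(a: np.ndarray, b: np.ndarray, n_perm: int = 5000) -> float:
    """Two-sided permutation test on the difference of group means.

    Returns the fraction of random relabelings whose mean difference
    is at least as extreme as the one actually observed (the p-value).
    """
    observed = abs(a.mean() - b.mean())
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # randomly reassign flakes to the two groups
        diff = abs(pooled[: len(a)].mean() - pooled[len(a):].mean())
        if diff >= observed:
            count += 1
    return count / n_perm

# Hypothetical descriptor scores for flakes from two knapping techniques.
# The bipolar group is deliberately shifted here, so the test reports a
# tiny p-value; for the real flakes in the study, the two groups came
# out statistically indistinguishable.
freehand = rng.normal(0.60, 0.05, size=50)
bipolar = rng.normal(0.75, 0.05, size=50)
print(permutation_test(freehand, bipolar) < 0.05)  # True: clear difference
```

"No statistically significant difference," as in the published result, would correspond to a p-value above the chosen threshold — the relabeled groups differ about as much as the real ones do.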
But this paper highlights that you need more than just the tool — whether you’re eyeballing it or digitizing it — to know how it was made. That’s not a totally new conclusion, but it’s important because it helps to clarify what morphometrics can and can’t do.

Some of the most detailed work in morphometrics has been done with younger North American tools. Here, the questions are less about whether something is a tool — with 12,000-year-old artifacts, their human-madeness is obvious — and more about what differences in the styles of tools can tell us about the people who made them.

No two tools are identical, said Andy White, an archaeologist at the University of South Carolina. There wasn’t an Old Spear-Getting Factory churning out duplicate objects. But what we don’t know is how much of the difference between them can be accounted for by the vagaries of artisan work and how much represents real, cultural distinctions between groups of craftspeople. “You can take measurements in a computer that I can’t imagine how you’d ever take them with a pair of calipers. [You can] fit curves to things. Quantify things,” he said. “[I use] computer modeling and simulation to try and give myself some kind of credible avenue to interpretation.” White is compiling a database of projectile points, dating from 8,800 to 6,600 B.C., from across what is now the eastern United States.

Ultimately, though, Shea thinks geometric morphometrics hasn’t really had its first major test yet. It won’t become widely used or relied upon for interpretation until it answers some kind of flashpoint, high-stakes question in archaeology, he said. And that could be Lomekwi. Some researchers are still skeptical that those are actual, intentional tools. If there were another site from around the same time with tools that were either more clearly hominin-made or more clearly monkey accident, then digitization might be able to reveal, for certain, where on that spectrum Lomekwi falls.
“They have this one site,” Shea said. “Somebody needs to find something almost, or slightly, older, and they need to go at each other.”
