If you look up “autism rate” or any variation of that phrase online, you will find an answer: 1 in 68. This estimate, produced by the Centers for Disease Control and Prevention, can be found splayed across nearly any article or website that has anything to do with autism. Often, it is accompanied by frantic laments over a perceived increase in autism rates, creating an atmosphere in which people like Stephanie Seneff, a computer scientist with no expertise in autism, go so far as to claim that 1 in 2 children will be autistic by 2025.
Autism isn’t even scary, but if it were, there would still be no reason to panic. Because of problems with how it was calculated, the 1 in 68 figure may not be accurate.
The CDC obtains its data by choosing certain sites to study and examining the school records of eight-year-olds within each area. If “descriptions of behaviors consistent with an [autism spectrum disorder] diagnosis” are present in a child’s school records, the child is selected for further review: experts examine the records more closely to determine whether they meet the criteria for an autism spectrum condition, and if so, the CDC counts the child as autistic in its survey data.
The problem is that school records are unreliable. Obtaining a medical autism diagnosis involves a long series of one-on-one tests and interviews; it requires direct, in-person observation, which secondhand notes in a school file cannot replace.
As psychologists Luc Lecavalier and David Mandell pointed out in a 2014 editorial in the journal Autism, the prevalence of autism varies significantly across the sites the CDC surveys. In Alabama, the CDC found that 1 in 175 children had autism, whereas in New Jersey, the prevalence was 1 in 46. This difference, they explain, likely exists because in areas with higher awareness, teachers and clinicians are more likely to record potential signs of autism. If this is true, they warn, then “these studies are not measuring true prevalence. They instead measure the extent to which clinicians and educators test for and document the symptoms of autism, regardless of whether the practitioner ultimately assigns that diagnosis.”
In a truly accurate study of autism prevalence, Lecavalier and Mandell explain, researchers would pick a random sample of people and evaluate them in person to determine whether they were autistic. In 2001, the CDC did just that: researchers went door-to-door in Brick Township, New Jersey, examined children directly, and found that 1 in 150 participants were autistic.
In 2012, a review led by Mayada Elsabbagh, which examined over 600 studies, found that the median prevalence was 1 in 160, not far off from the 2001 CDC findings. It also lines up with a 2015 review led by A.J. Baxter, which found that after adjusting for changes in diagnostic criteria, the autism rate in 2010 was 1 in 132 and had remained unchanged since 1990.
These results differ considerably from the CDC’s data, and yet the 1 in 68 statistic is displayed prominently across a number of “awareness” materials on every side of autism’s political spectrum. Even Ari Ne’eman, the president of the Autistic Self Advocacy Network, hails this figure as a sign that “we are gradually improving diagnosis and identification of autistic people.”
Why is the 1 in 68 result the one that seemingly everybody cites? For one thing, it’s convenient. It’s easy to look at the first page of Google search results and assume the information is accurate. It’s also easier to read and understand the CDC’s website than it is to parse multiple dense, lengthy academic papers, especially if they are paywalled.
However, large, powerful advocacy groups like Autism Speaks have no excuse. In 2014, the most recent year for which reports are available, Autism Speaks took in over $122 million in revenue. They’ve funded and worked with a number of scientists. They even funded the very review that found the 1 in 160 rate.
Autism Speaks itself is not known for its statistical rigor. Michelle Dawson, an autism researcher in Quebec, noted in a blog post that its statements about autism prevalence frequently contradict one another. In February 2006, for example, they stated in a press release that “a decade ago . . . 1 in 10,000 children were diagnosed,” whereas in March 2006, they claimed the rate was “1 in 10,000 just 13 years ago.” A year later, in March 2007, they again claimed “it was 13 years ago” when 1 in 10,000 people were autistic.
Some might find this distinction pedantic, but regardless of whether the 1 in 10,000 figure comes from 1997, 1996, or 1993, it blatantly contradicts their April 2007 statement that “our best estimate of the prevalence of autistic disorder prior to 1998 from non-U.S. studies was ~ 1 per 1,000 and for ASD, ~ 2 per 1,000.”
In 2008, Autism Speaks said “as many as 1 in 150 children are autistic,” but during the same year, they also said that autism “affects 1.5 million children in the United States.” If both these figures were true, Dawson explains, then “the total number of children in the U.S. would have to be 225 million.” To put this into perspective, the U.S. had a total population of 304.1 million in 2008, which would mean that if Autism Speaks were correct, about 74% of U.S. residents would be children.
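Dawson’s arithmetic is easy to verify for yourself. A minimal sketch, using only the figures quoted above:

```python
# Sanity check of Dawson's arithmetic (figures as quoted above):
# if 1 in 150 children are autistic, and 1.5 million U.S. children
# are autistic, how many children would the U.S. need to have?

autistic_children = 1_500_000  # Autism Speaks' 2008 count
rate_denominator = 150         # "as many as 1 in 150 children are autistic"

implied_children = autistic_children * rate_denominator
print(f"Implied number of U.S. children: {implied_children:,}")  # 225,000,000

us_population_2008 = 304_100_000  # total U.S. population in 2008
share = implied_children / us_population_2008
print(f"Share of the population that would be children: {share:.0%}")  # 74%
```

Both of Autism Speaks’ numbers cannot be right at once; the two claims imply a country with far more children than it has people.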
Autism Speaks, and all the organizations like it, profit from the idea that autism is an “epidemic.” Maybe they genuinely believe that the CDC’s methodology is the best available, or are somehow unaware of the other studies they themselves funded. Or perhaps they chose this figure primarily for its political expedience.
None of these scenarios is good for autistics. As Dawson puts it, “Another universal of autism advocacy is a gross disregard for accuracy and ethics,” and this is no exception. The least that any group or advocate who cites the 1 in 68 figure can do is examine the other estimates and explain why they believe this one is the most accurate.