We found more than seven million predictions PredPol sent to dozens of police departments and left unsecured on the web.
We analyzed predictions for 38 U.S. policing agencies between 2018 and 2021—and discovered a pattern.
In Portage, Mich., the neighborhoods most targeted by the software have nine times the proportion of Black residents as the city average.
In Birmingham, Ala., the least-targeted areas are overwhelmingly White in a city that’s half Black.
There, the most-targeted areas have at least twice the proportion of Latino residents as the city as a whole.
In Los Angeles, the most targeted areas are disproportionately low-income and more Latino than the city as a whole.
Even when predictions seemed to target majority White neighborhoods, they clustered on blocks that are nearly 100% Latino.
The fewer White residents in an area—and the more Black and Latino residents there—the more likely the neighborhood would be the subject of a PredPol crime prediction, our investigation found.
We found similar patterns regarding wealth and poverty: Middle-class and wealthier neighborhoods were targeted the least.
Neighborhoods where more households qualify for the national free and reduced-price school lunch program were targeted the most.
These communities weren’t just targeted more—in some cases they were targeted relentlessly.
Crimes were predicted every day, sometimes multiple times a day, sometimes in multiple locations in the same neighborhood.
PredPol, which recently changed its name to @Geolitica_PS, said our analysis is flawed because it’s based on reports “found on the internet” but also confirmed that they “appear to be legitimate.”
When we asked the company’s CEO if he was concerned about the disparities between those communities that are targeted and those that are avoided, he said that because the software uses “crime data as reported to the police by victims themselves,” its predictions are unbiased.
But research by two of PredPol’s founders in 2018 concluded that the software would target Black and Latino neighborhoods in Indianapolis 400% more than White neighborhoods.
PredPol didn’t change the algorithm.
It’s unclear how police respond to crime predictions. Criminal defense attorneys say they aren’t told when predictions result in arrests and that this makes it harder to provide a good defense.
This #GivingNewsDay, here’s a look at the power of a nonprofit newsroom in 2021. 🧵
In February, @darakerr found Postmates couriers were the targets of a phishing scam that drained multiple workers of their earnings.
After publication, the workers we interviewed were reimbursed for their lost earnings. themarkup.org/working-for-an…
In March, an investigation from @ToddFeathers revealed that school-advising software was using race as a predictor of how likely students were to succeed.
Texas A&M stopped using the risk scoring feature of the software following our reporting. themarkup.org/news/2021/03/3…
Our colleagues at Germany’s @SZ used Citizen Browser data to uncover the messaging that made its way into voters’ news feeds during the country’s recent election cycle. getrevue.co/profile/citize…
They found voters of the far-right AfD party were more likely to see posts from their party leaders attacking issues like climate change, migration, and COVID-19.
Meanwhile, voters from other parties were generally served coverage on those topics from established media outlets.
We’re thrilled that our tools are being used to reveal how polarization on Facebook is playing out beyond the United States.
This is a huge milestone for our small nonprofit newsroom—mind if we indulge in a quick recap of our recent work? ⬇️
This story from @darakerr was one of @ToddFeathers’ favorite pieces of journalism this year.
“An example of investigating an industry that tries to turn people into data and turning it around by using data to show the tragedies that attitude can create.” themarkup.org/working-for-an…
“This story investigates a system that upholds segregation through arbitrary and inconsistent rules. I especially appreciated students’ perspectives.” themarkup.org/news/2021/05/2…
Geofence warrants are a fairly new investigative tool that relies mostly on location data held by Google.
Privacy advocates say they violate civil liberties. For example, the @ACLU found that law enforcement was using geofence data to track Black Lives Matter protesters in 2016. themarkup.org/ask-the-markup…
California is one of the few states where law enforcement agencies must report geofence warrants to a state dataset.
We looked at that dataset—as well as a geofence transparency report from Google—and found the numbers didn’t add up.
NEW: Amazon placed items from its house brands and exclusives ahead of competitors with better customer ratings and more sales, @adrjeffries and @leonyin found after examining the results of nearly 3,500 popular product searches. themarkup.org/amazons-advant…
Take Amazon’s Happy Belly Cinnamon Crunch cereal, for example.
It had four stars and 1,010 reviews, but Amazon gave it the number one search result spot, ahead of Cap’n Crunch, which had five stars and 14,069 reviews.
We found that knowing just one thing about a product, whether it was an Amazon brand or exclusive, was enough to predict in seven out of 10 cases whether the company would rank the item first in search results.