Since some people have been asking questions about the methodology of this project, here is a brief summary of the work we’ve done connecting computer vision and urban perception [THREAD] 1/
In 2011 we released a now-defunct website (pulse.media.mit.edu) to crowdsource the collection of high-resolution data on urban perception. On the site, people could click on one of two images in response to one of three questions: 2/
Which place looks safer? Which place looks more upper class? Which place looks more unique? 3/
The grad student who built the website, @PhilSalesses, posted it multiple times on Reddit until we got some traffic and began to collect data. After a few media interviews came out, we ended up collecting over half a million pairwise preferences for about 4,000 images. 4/
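For intuition, here is a minimal sketch of how pairwise clicks can be turned into per-image scores. It uses a simple win rate, which is a simplified stand-in for the ratio-based scores used in the actual study; the function and variable names are illustrative, not from the project's code:

```python
from collections import defaultdict

def score_images(comparisons):
    """Turn pairwise clicks into per-image scores.

    `comparisons` is a list of (winner_id, loser_id) tuples, one per click.
    Returns each image's win rate in [0, 1] -- a simplified stand-in for
    the corrected ratio scores used in the actual study.
    """
    wins = defaultdict(int)
    appearances = defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        appearances[winner] += 1
        appearances[loser] += 1
    return {img: wins[img] / appearances[img] for img in appearances}

# Four hypothetical clicks over three images
clicks = [("a", "b"), ("a", "c"), ("b", "c"), ("c", "a")]
scores = score_images(clicks)
```

With half a million clicks over ~4,000 images, each image accumulates enough comparisons for a stable score, which is what made the dataset usable as training labels later on.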
This may seem like little data, but we were building on a visual-perception literature that, while rich, was based on surveys involving dozens of images and hundreds of participants. So getting more than 10k people to evaluate 4k images was good progress. 6/
We used that data to explore the basic geometry of urban perception in a paper that came out in 2013: journals.plos.org/plosone/articl… 7/
There we showed that the three evaluative questions (safety, class, uniqueness) each provided specific information, and then measured the segregation and variance of perceived urban quality to quantify its inequality. 8/
The next step was twofold: to create a site to collect data for more cities (52 cities across the world, with @dsmilkov & @dj247), and to develop a computer vision system that could scale up the number of images we could evaluate. 9/
The use of computer vision was critical, since crowdsourced data provides only enough clicks to score 10^4 to 10^5 images, but a single large city has more than 10^5 street segments. So it is impossible to score dozens of cities at a useful resolution without computer vision. 10/
The computer vision stage of the project was developed by Nikhil Naik. We used the safety question because it had the most data. Nikhil trained multiple models, first using traditional computer vision (ieeexplore.ieee.org/document/69100…) and then deep learning (link.springer.com/chapter/10.100…). 11/
These higher-resolution maps made it possible to explore the connection between urban perception and behavior. @denadai2 and @brulepri trained a computer vision model for Italian cities and collected mobile-phone location data. 12/
Together we wrote a paper showing that, after controlling for population density, employment density, and distance to the city center, people tended to significantly avoid unsafe-looking places. dl.acm.org/citation.cfm?i… 13/
This effect was larger for women and the elderly, and reversed for people in their 20s and 30s. 14/
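The "after controlling for" step is an ordinary regression of visit counts on perceived safety plus the covariates. A sketch with synthetic data — the coefficients, variable names, and data here are all made up for illustration, not the paper's results:

```python
import numpy as np

# Synthetic data: visits ~ perceived safety, controlling for population
# density, employment density, and distance to center (all illustrative).
rng = np.random.default_rng(0)
n = 500
pop_density = rng.normal(size=n)
emp_density = rng.normal(size=n)
dist_center = rng.normal(size=n)
safety = rng.normal(size=n)

# Generate an outcome where safer-looking places attract more visits
visits = (0.5 * safety + 0.3 * pop_density + 0.2 * emp_density
          - 0.1 * dist_center + rng.normal(scale=0.5, size=n))

# OLS via least squares: column of ones gives the intercept
X = np.column_stack([np.ones(n), safety, pop_density, emp_density, dist_center])
beta, *_ = np.linalg.lstsq(X, visits, rcond=None)
safety_coef = beta[1]  # effect of perceived safety, net of the controls
```

The heterogeneity by gender and age mentioned above would come from interaction terms or from fitting the same regression on demographic subgroups.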
Then, Ed Glaeser from Harvard heard of Nikhil and invited us to collaborate. Together with him and @skominers we used computer vision to study urban change in NYC & Boston (streetchange.media.mit.edu) 15/
In that paper, we found that physical urban change was more likely in neighborhoods with a dense population of highly educated people, and that physical change is clustered and behaves diffusively pnas.org/content/114/29… 16/
With Nikhil we also did other papers, like this one, connecting urban perception to economic measures from the census: aeaweb.org/articles?id=10… 17/
But what is more interesting is the many other papers written by computer scientists such as @brulepri, @danielequercia & @virgilioalmeida, architects such as Carlo Ratti, economists such as Ed Glaeser & Ingrid Gould, and transport engineers such as @rhurtubia, to name a few. 18/
Nikhil graduated a few years ago and is now working on the hard problem of auto-tuning neural networks. So we are using the data from the second data collection effort to work with @martino_design and @datawheel to create the next generation of work. 19/
We have some surprises, but the work has become difficult because obtaining many high resolution streetscape images from @Google for this purpose is no longer possible. So we are buying small focused sets of streetscape images from @mapillary to train new models. 20/
Hope this helps explain what the claim “University trained AI to recognize street safety” means. It is a hard and slow process, with technical hurdles and people turnover. Still, I think we made progress during this decade. 21/
Eight years ago, when we built the first crowdsourced survey, most visual perception surveys included only a few hundred images. Now, we have digitally transformed that capacity & created the basis of the computer vision tools we need to map & evaluate the urban environment. 22/
In fact, what was recently research has become a professional capacity. If you would like to create a map of urban perception for your city, or a group of cities, you can connect with @datawheel to discuss how to do that at info@datawheel.us 23/[END]