THREAD: Facebook relies on the public, researchers, & journalists to moderate its platform. But even blatantly violating content does not get removed.
On Sat. we reported weapons for sale in an antiquities trafficking group—it went as expected.
On Saturday, November 28, ATHAR found and reported an advertisement post in a Facebook antiquities trafficking group that was offering weapons for sale to anyone in Egypt.
The user, listed in Cairo, was offering delivery to any governorate.
Facebook's Community Standards explicitly ban content that "Attempts to buy, sell, trade, donate, gift or solicit firearms...between private individuals, unless posted by a real brick and mortar store, legitimate website, brand or government agency"
Facebook has had this "more strict" gun sale policy in place for years.
In 2016, Hayley Tsukayama (now a legislative activist for tech-funded Electronic Frontier Foundation), wrote in @washingtonpost about Facebook's policy change to make gun sales harder washingtonpost.com/news/the-switc…
ATHAR reported the weapon offered in the antiquities trafficking group to Facebook through the platform's reporting mechanism.
Unlike trafficked antiquities, there is a dedicated reporting mechanism for weapons on Facebook. Presumably this should help the AI ID violating content
Based on Facebook's own listed policy, we confirmed that we did in fact want this post reported for a Community Standards violation.
After all, not only was the weapon sale a violation, it was in a group for antiquities trafficking, also against Facebook policies as of June '20.
Facebook confirmed that our report for an unauthorized weapons sale was received by the platform.
Today, we got back Facebook's ruling on this post that explicitly offered guns for sale (with delivery!)
Facebook said that its AI responsible for identifying the post found it not to be in violation and it would not be removed.
Facebook has made a major shift to AI, claiming it is helping moderators ID content that needs review.
But the weapons report ATHAR filed in this trafficking Facebook group is the clearest example yet that the company's AI is not as good as claimed.
Facebook's AI is apparently unable to ID a reported post that is explicitly offering guns for sale. The only images in the post are a close-up of a gun and bullets.
Facebook appears to have been grossly overselling its AI abilities to Congress, the public, and its investors.
Remember when Facebook used the "broccoli or marijuana" challenge to highlight how great its AI was at identifying images?
That's great for broccoli & marijuana, but apparently it doesn't work so well for clear gun photos.
Aside from the image of a gun, which FB's moderation AI apparently couldn't catch, there's the text of the post, which offers weapons and notes that they can be delivered to any governorate in Egypt.
But the text is in Arabic... images aren't FB's only AI blind spot.
Earlier this month, @marcowenjones pointed out that there is a blind spot for content in Arabic.
It's not just Arabic: our research on Facebook trafficking found blind spots for any language not written in Latin script (and even for Latin-script languages, detection isn't great)
Facebook's public and private groups for trafficking illicit antiquities continue to grow
We're going to take you through antiquities trafficking posts from this month to examine how group members communicate, field offers, and even mock those attempting to offer fakes.
THREAD
Facebook's black market antiquities groups allow anyone to become an amateur trafficker, democratizing the illicit trade
As such, many users don't know the value of what they find, and take to Facebook for info and buyers. Such is the case with this sword from a user in Morocco
The user is based in Ouarzazate, Morocco, and he tells the over 110,000 members in his trafficking Facebook group that he "found this sword old that has writing" but he's unable to translate it.
He needs to know what it says and how old it is to determine its value.