The hits just keep on coming. On Friday, China's cyberspace watchdog (CAC) released draft rules governing "deep synthesis tech" - deepfakes and other machine-generated or -edited text, voice, video, and 3D spaces. Paraphrasing the key points below. 1/18 cac.gov.cn/2022-01/28/c_1…
This doc puts rules around how deepfake apps, question-response bots, text-to-speech generators, voice / facial / gesture manipulation, etc. can be used. Will impact the creators of core tech, and platforms where such content can spread - including metaverse platforms. 2/18
The short version: Anyone providing the tools to produce synthetically-generated or -edited content must take steps to ensure the content is trackable, controllable, and legal. 3/18
In part, that means applying China's existing illegal content restrictions to media generation tech - no generating porn, politically sensitive / disallowed content, fake news, etc. 4/18
It also means keeping users informed when they are viewing content that has been synthetically created or altered, and making sure that content can be traced to its source. 5/18
Super interesting - Article 12: If you're providing tools that let users edit biometric info like faces and human voices, the person whose identity / features are being edited has to consent to their personal info being used. 6/18
Also interesting - Article 13: Machine-generated or edited content has to include some kind of technical marker so that the content can be traced to the source where it was generated. 7/18
Related - Article 9: If you build an app or tool that lets users generate machine-made or machine-edited content, then you must collect the real identities of those users - another way to track content back to source. 8/18
Article 14: The generated content itself must come packaged with notifications that inform users that it is synthetically generated / altered. Example: A voice file that imitates human speech must include an audio clip identifying the audio as machine-made. 9/18
Article 16: If app stores find that any app violates these rules, the app store may not list the app, or must remove the app from shelves. 10/18
Article 17: Anyone providing deep synthesis services has to be on the lookout for, and prepared to refute, false news and rumors. If such content is discovered, providers must stop its spread, and report it to the authorities. 11/18
Article 19: Anyone providing deep synthesis services has to register with the CAC within 10 days of beginning to provide the service. 12/18
Article 20: If anyone providing deep synthesis services adds new features, or launches a new product with "public opinion attributes or social mobilization capabilities," the new product has to undergo a security assessment by CAC. 13/18
My take - One interesting piece of this is that pressure is placed on multiple actors to comply - app stores, developers, platforms all have obligations here. 14/18
My other take - Of course this has implications re: censorship - these rules seek to close the door on a future where activists make videos of Xi Jinping making anti-Party statements and whatnot. But that's not the interesting bit. 15/18
The interesting bit is that China is taking aim at one of the critical threats to our society in the modern age: the erosion of trust in what we see and hear, the inability to tell what is real and to separate fact from fiction. 16/18
China is able to institute these rules because it already has systems in place to control the transmission of content in online spaces, and regulatory bodies in place that enforce these rules. 17/18
So, these rules underscore the policy problem of our age: How can Western democracies fight a war against disinformation and prevent the erosion of trust and truth online, but without resorting to censorship? 18/18
Making a separate note about this one - will be a problem to implement. Current draft says tech providers have to remind content creators that they are responsible for getting consent from the person whose data is being used. Seems... wishy-washy.
On Monday, Shanghai's trade unions submitted a proposal to China's top policy advisory body, suggesting that platform workers should be given a voice in how dispatching algorithms are deployed by internet companies. Thoughts and quotes below. 1/11 acftu.people.com.cn/n1/2022/0118/c…
The proposal specifically addresses algorithms that determine delivery schedules for gig workers — like restaurant delivery, freight drivers, couriers, and ride-hailing drivers. 2/11
Yesterday, Worker's Daily ran a follow-up on algorithms and labor rights, arguing that "a management mechanism that is only beneficial to one party and detrimental to the other cannot survive, and the same is true of algorithmic management." 3/11 media.workercn.cn/sites/paper/pa…
Yesterday, CPPCC bigwig and econ policy advisor Liu Shijin published an essay outlining the next evolution of China's digital economy — it's a pretty enlightening read. Good bits below. 1/12 finance.sina.com.cn/tech/2022-01-1…
Backstory: Lately, Chinese policymakers have been vocally critical of the ways in which the digital economy cannibalizes the real economy. Examples: e-commerce decimates offline retail, or tech startups hog all the capital while traditional SMEs struggle to get funding. 2/12
The loud criticisms, coupled with China's anti-monopoly drive, have led some to assume that the digital economy has fallen out of favor with policymakers and the real economy is the new policy darling. Liu says everyone is missing the point. 3/12
Well, here they are. Yesterday, China's cyberspace watchdog, the CAC, finalized China's groundbreaking new rules on recommendation algorithms. They take effect in early March. Thoughts on the final version below. 1/11 cac.gov.cn/2022-01/04/c_1…
If you're new to this conversation, here's what I posted on the draft version a few months ago. 2/11 👇
The final version of China's new algorithm rules mostly preserves the rules outlined in the draft, but three new stipulations were added (big thanks to @freefader for doing the heavy lifting on draft and final version comparison). 3/11
On LinkedIn's exit from China: Last month, Xiao Yaqing, China's Minister of Industry and Information Technology, had a video call with Brad Smith, president of Microsoft, to discuss "in-depth views on ... Microsoft's development and cooperation in China." 1/13 miit.gov.cn/xwdt/gxdt/ldhd…
Naturally, we don't know what was said during that meeting. Did Xiao insinuate they should get out of social? Totally unrelated? Who knows. What we do know is that LinkedIn is facing increasing regulatory pressure on multiple fronts. 2/13
One, the obvious: Social networks operating in China are increasingly caught in an impossible bind between Chinese censorship rules and Western values. Frankly, it's a miracle LinkedIn survived in China as long as it has. 3/13 axios.com/linkedin-block…
My goodness. China's cyberspace watchdog, the CAC, just published a long (and unprecedented) set of draft regulations for recommendation algorithms. The short version: they will be tightly controlled. Key points below. 1/ cac.gov.cn/2021-08/27/c_1…
Most interesting to me: Users must be provided with a convenient way to see and delete the keywords that the algorithm is using to profile them. 2/
And there are limits on the types of keywords algos can collect: "Providers ... shall not record illegal and undesirable keywords in the user points of interest or as user tags and push information content accordingly, and may not set discriminatory or biased user labels." 3/
Here it is! The final text of China's Personal Information Protection Law (PIPL). A quick off-the-cuff translation below of what was changed or added to the final draft. 1/ npc.gov.cn/npc/c30834/202…
1. If personal information is used in automated decision-making [example: marketing / ad algorithms, personalized product recs], the decision-making must be transparent, and can't be used to impose different transaction terms on different individuals. 2/
What that means in practice: Platforms can't, for example, collect data about users, and then show those users different prices based on the algorithm's assumptions about the user's ability to pay. 3/