In English, phrases like “social ethics” or “moral standards” may appear harmless. In China’s political and legal system, they carry specific ideological content. Marxist-Leninist ethics is treated as a formal discipline, and instruction in it is mandatory for political elites. As former Australian Prime Minister and China expert Kevin Rudd has written, “No matter how abstract and unfamiliar his [Marxist-Leninist] ideas might be, they are having profound effects on the real-world content of Chinese politics and foreign policy.” US and European AI entrepreneurs leveraging Chinese AI tools risk importing these politics into their products.
This is how I'd imagine someone writing an essay about ethics in AI slop, mass layoffs, and greenwashed environmental destruction with the due date being the next day.
A very strange perspective on, and interpretation of, the licensing agreements of open-weight models. Far more depends on how users and companies actually use the AI.
By the same argument, it's dangerous to use American closed-source AI like ChatGPT, because OpenAI lifted its blanket ban on "military and warfare" activities in 2024 and has contracted with the US military in warfighting domains. Anthropic also has contracts with the Department of War in the defence and intelligence domains.
The increasing reliance of US and European AI startups on Chinese open-source AI models poses significant political, legal, and ethical risks. With these models now accounting for a notable share of global downloads, their licensing agreements may impose restrictions that reflect Chinese regulatory norms, potentially exporting authoritarian practices to Western markets. Seth Hays calls for the establishment of stronger licensing standards and international collaboration to address these challenges by fostering responsible AI development and mitigating risks.
Yeah, LOL, "be careful about using their communist-driven/led something something opaque model and perhaps use our corporatist safety guard railing opaque model, both of which nobody really understands how they work, instead".
Oh no...
So basically this is another "but at what cost?"
Remember, never trust China, you can only trust American CLOSED SOURCE AI!!
Literally look at their list of sponsors: https://cepa.org/about-cepa/our-supporters/ lmao
Least obvious psyop
Ugh... Someone is going to source this one day
There are so many AI red flags in the "article" that I almost don't want to point them out. But I guess it doesn't matter; it's the content that counts, I guess.