We’re in the process of making a rule banning the use of AI in comments/posts (not the discussion of AI), and while the mod team is in consensus, I’d like to hear some thoughts from other people, not just on SAI but on the use of AI as a whole
as usual, keep it civil, no ice picking your rivals without the proper permit and no re-animating Lenin from his mausoleum
I personally don't see any value in it at all. From a discourse perspective, it doesn't help people solidify their own thoughts; it makes it so that people don't HAVE to solidify their own thoughts. And from an educational perspective, even if it provides accurate information consistently (which I am deeply skeptical about), it does so in a less useful way than reading theory with full context or having it explained by an actual person. And Socialism AI itself is effectively a mouthpiece for one specific organization, whose positions and history I admittedly don't know much about, but that seems counterproductive to the goals of this sub. Other AI models are produced by large corporations and at best repeat the viewpoints of mainstream liberal media outlets and at worst have Elon Musk putting his thumb on the scale.
Which org is behind it?
It's part of the World Socialist Website which is a mouthpiece for the ICFI - on first glance appears to be one of the larger Trot organizations?
Oooh. I wonder if they use it to write their newspapers now?
Could anyone tell the difference?
Swing and a miss there. The ICFI was the first party, not just the first Trotskyist one, to make the switch from newspapers to digital media all the way back in 1998, which is part of the reason the WSWS has as large a readership as it does. In fact, I've seen RCI members in particular mock the fact that they don't sell newspapers, because apparently getting people to pay for your agitprop is a mark of seriousness while a free press is amateurish.
No shit? Well, good for them on that score at least. They're still dogshit if they made this "Socialist AI" but at least they're not doing the paid newspaper thing.
I think people are making the assumption that they are promoting this as a replacement for theory or that it will magically replace IRL organizing, but that's just not true at all. AI is good at sorting through vast amounts of information, and there is a vast amount of Marxist theory as well as over 125,000 WSWS articles since it first went online. Having something that can be interacted with, and that gives not only summary answers but references for further reading, is IMO the best use of the technology as it currently is. AI is a tool and all tools have their uses. Also, I think that people already familiar with theory are forgetting that this is primarily aimed at people who aren't yet familiar with Marxism, giving them a resource they can use.
their 'release post' for SAI claimed that it could replace reading theory
Why are you lying? Just like with your claim that the mods you want removed from the Trotskyism sub are inactive, you're living in an alternate reality.
https://www.wsws.org/en/articles/2025/12/12/gpid-d12.html
No tool can relieve individuals of the responsibility to think, evaluate and question. But to the extent that Socialism AI facilitates access to Marxist theory, clarifies strategy and illuminates historical experience, it will serve as an indispensable instrument in the political development of a new generation of socialist fighters.
yeah i misremembered a post from a month or so back, it happens
It's lazy. I'm not a big theory guy (I have limited time for activism and I'd rather use it being OTG, engaging in mutual aid projects or teaching/attending community classes on useful skills like first aid, deescalation or defense than reading), and I hate it when I'm trying to have a conversation with someone and they break out "go read Lenin" like it's some kind of trump card, but if you want to use theory in your discussions, READ THE FUCKING THEORY.
Don't regurgitate AI slop and expect to be taken seriously.
I hope the consensus of the mods is to ban it and that every leftist community greets "Socialist AI" and all other generative AI, and everyone who uses it, with the hostility and disdain they deserve for trying to bring that slop into movement spaces.
“It’s lazy” thank you! We need to do the work. It’s how we learn and improve.
Well said. Like just read some books bros. It's good for you lol. Or throw some Parenti lectures on lmao.
A common occurrence I've seen online in general, not just in leftist spaces, is "Well chatgpt said x!" As if that's some kind of gotcha argument. We are giving over our ability to create and think critically to a machine. And a corporate money laundering one at that. Every leftist should be against AI in general except in very specific use cases such as data refining in scientific studies.
Agreed. There ARE scientific applications for AI, I'm not dogmatically against it in 100% of cases. But using "socialism AI" in discourse ain't it.
I vaguely support the industrial applications only because I think it will make revolution more likely. If it replaces a bunch of jobs, there is nowhere to go but revolution.
Hmm. Sounds like accelerationism with extra steps.
I just think it is like the steam engine. That radically transformed economics and the class structure. This will again happen with AI and robotics. It radically alters the way people work and live, thus being revolutionary. It forces the class structure to change.
I think comparing this to the industrial revolution is a stretch.
Maybe. We are seeing some big gains in AI and robotics. I think it will radically change society. Maybe it is just a fad. We will see. But I think it could render a lot of jobs useless and upend the economy and therefore, class society.
I vaguely agree too, but I also don't think it will replace very many jobs other than white collar ones.
I've been in various parts of the electrical trade for my entire career, I don't see AI being able to do my job except possibly in the far far future when it's... You know, actually AI and not a fancy data-miner/autocorrect/auto fill. It requires actual finesse and finger dexterity, plus when troubleshooting electrical issues you really have no idea what you will find. You'll think it's one thing and it ends up being a completely different issue or multiple. AI as we know it now will never be able to do something like that.
Yeah we aren't quite there yet, but it is possible it will get there soon or in the not so distant future.
AI is a tool and has many advantages when it comes to learning. The extreme negative reaction to it is unwarranted: it mainly comes from the way capitalism uses AI, yet it gets applied to all AI. It's really over the top and isn't a materialist reaction at all. We should look more calmly at how it should and shouldn't be used.
I do not agree with posting AI answers and such, but I think there's a use for AI as a tool for learning, precisely because it has vast knowledge. I mean, you can ask a question on Reddit, get one small reply, and the thread is dead. You can ask AI, prompt it to use a Marxist perspective, and it can give you a specific answer.
Now you'll think, "aha, but it's slop! It's stupid AI!" Right, it doesn't know everything and it makes mistakes, but so do people replying on Reddit, or book authors, or Google. You should never just believe any single source, but they can all add to your knowledge.
AI from a leftist perspective is best for two things: learning and debating. For learning, you can ask it questions and ask it to give a Marxist perspective, even a specific type of Marxist. DeepSeek is the best at this.
For debating, you can ask it to deconstruct and advise responses to an opponent. No, that does not mean copying and pasting the AI's output; it means seeing its ideas. (The trend here is using AI as a tool, not as a brain, which is what people seem to constantly assume.)
You may say it's bad to use AI to help in debates. I do not give a fuck. I am debating shitty liberals and capitalists; I want to win, I want to destroy them, and I am going to use any tool I have. On that point, the left is so anti-AI that it's literally excluding itself from the technology. But you know who isn't? The right. They use it for everything, they overuse it of course, but they use it, and it's helping them. But the left, as usual, is being fucking purist and giving itself a disadvantage.
Anyway, to reiterate before the kneejerk strawmanners come flying at me: AI can be a useful tool for aiding research, learning and debating. It's kind of like a better Google (which is almost useless now). It shouldn't be used for cheap copying and pasting; that's mindless. And the left should use AI for positive and beneficial ends instead of totally rejecting it outright. I'd also recommend DeepSeek, as it has low energy usage and is better at leftist perspectives.
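For anyone unsure what "prompt it to use a Marxist perspective" actually means in practice, here's a minimal sketch against DeepSeek's OpenAI-compatible API. Treat the base URL and model name as assumptions from memory of their docs, and double-check before relying on it:

```python
# Minimal sketch: pin the analytical framing in the system prompt so every
# answer is given from a Marxist perspective. Assumes DeepSeek's
# OpenAI-compatible endpoint; base URL and model name may have changed.
from openai import OpenAI

client = OpenAI(
    api_key="YOUR_DEEPSEEK_KEY",          # hypothetical placeholder
    base_url="https://api.deepseek.com",  # assumption: check DeepSeek's docs
)

response = client.chat.completions.create(
    model="deepseek-chat",  # assumption: current chat model name
    messages=[
        {
            "role": "system",
            "content": (
                "Answer from a Marxist perspective. Ground claims in "
                "specific texts, naming the work and chapter so the "
                "reader can verify them."
            ),
        },
        {"role": "user", "content": "What did Marx think about the Jacobins?"},
    ],
)
print(response.choices[0].message.content)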
I don't really know what this Socialism AI is. You can already do that by prompting any model to use a Marxist perspective.
The thing is, it doesn't help you learn anything. It produces its own references or information and then feeds them to you. That's the problem, and since genAI functions as a black box, we're not yet able to explain how it produces its answers, sources, etc. It cannot be trusted in any way.
Hot take but I don't mind it. I won't use it, but I don't care that it exists.
The vast majority of online socialists have not read a single "core" piece of theory in years. What they do instead is assume they absorb it through cultural osmosis by hanging out in leftist communities. I saw someone in the Discord asking, verbatim, if they could skip Marx because they'd done exactly this. ChatGPT/DeepSeek theory takes come from the same sources, aka flawed Reddit posts.
A curated AI built off the actual texts would be useful. The value of it is entirely related to what you're asking it: if you're asking stupid shit like "what org should I join??????" it's going to be garbage, but if you ask it something like "what did Marx think about the Jacobins," it's going to send you to literary sources (which you have to double-check, but I digress). I'm not arguing for it as a replacement for actually reading, but I'd rather actual sources be used instead of just glossing over Reddit posts to try and discern a general idea. That Jacobin example could be a platform to go read the 18th Brumaire or something.
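To make "send you to literary sources" concrete, here is a toy sketch of the retrieval half of such a curated AI: rank a folder of the actual texts against the question and hand back sources rather than a generated answer. The texts/ folder and plain TF-IDF are stand-ins; I have no idea what SAI actually runs on.

```python
# Toy "curated AI" retrieval step: the system's job is to point you at the
# actual texts, not to write the answer for you. The texts/ folder of .txt
# files is hypothetical; TF-IDF stands in for whatever index a real system uses.
from pathlib import Path

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Load the curated corpus, e.g. texts/18th_brumaire.txt, texts/capital_v1.txt
corpus = {p.name: p.read_text(encoding="utf-8") for p in Path("texts").glob("*.txt")}
names = list(corpus)

vectorizer = TfidfVectorizer(stop_words="english")
doc_matrix = vectorizer.fit_transform(corpus.values())

def sources_for(question: str, k: int = 3) -> list[tuple[str, float]]:
    """Return the k curated texts most relevant to the question, with scores."""
    query = vectorizer.transform([question])
    scores = cosine_similarity(query, doc_matrix)[0]
    return sorted(zip(names, scores), key=lambda pair: pair[1], reverse=True)[:k]

# "What did Marx think about the Jacobins?" should surface the 18th Brumaire
# as a starting point for actual reading.
print(sources_for("what did Marx think about the Jacobins"))
```

Nothing generative in that step at all; the honest version of these tools is closer to a search index over a vetted shelf than to an oracle.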
What about the risk of misinformation (or bias)?
The implication of this question is that manually written sources are more "unbiased" and just look at practically any page on the prolewiki to know that is not the case. Everything is going to have a risk of bias, the solution is to engage your critical thinking abilities, especially with the benefit of hindsight for a lot of topics, especially around Leninism. You're not supposed to be an ideologue.
The social revolution of the nineteenth century cannot take its poetry from the past but only from the future. It cannot begin with itself before it has stripped away all superstition about the past. The former revolutions required recollections of past world history in order to smother their own content. The revolution of the nineteenth century must let the dead bury their dead in order to arrive at its own content. There the phrase went beyond the content – here the content goes beyond the phrase.
Blatant AI slop should be banned though. I'm on some fitness subs, and holy shit, the people who just copy down their ChatGPT response (which is almost always wrong) drive me crazy. AI has a lot of potential value for humanity, but the problem is that it is used for capitalistic applications. Technology is not ideologically neutral. Marx wrote exactly about this topic when dunking on the Luddites in Capital vol. 1: they did have valid complaints, but they directed their anger at the machines rather than the actual oppressors. Most of the problems scapegoated onto AI long predate it, like standardized slop content before algorithms, or fake experts with only Wikipedia-level knowledge. In this case though, a ban is just to preserve posting quality, because without AI bans I have seen subs fill up with annoying people endlessly posting their chat logs and asking, "WHAT DO YOU THINK ABOUT THIS HMMM?"
It's actually a really funny AI I think, but it's also trash. It's very obviously made by Trotskyists. Sometimes, the answers are funny, but the AI is just dumb.
Edit: Those comments on OOP's post were genuinely insane
The entire sub (Trotskyism, not this one) is tired of WSWS advertising their Google Gemini, but because the mods are WSWS, nothing can be done.
I’m going through the process of getting the subreddit myself via r/redditrequest, but that takes ages, and even if they’re not actively moderating the subreddit, as long as the mods still use Reddit there’s nothing anyone can do :/
Admitting to trying to coup a sub because you dislike the mods is amazingly petty
Where did I say I dislike the mods?
LLMs make people stupider.
Fuck plagiarism bots.
Incorrect, the idea that LLMs make people stupid is completely false:
https://preview.redd.it/n60h5wztj19g1.png?width=2534&format=png&auto=webp&s=519ead11eb58a2a9145d094418543cfc2fdde3a9
This is from the same study you are citing, and the conclusion so many people draw regarding AI was disavowed by the original author. Please stop spreading lies.
Directly relevant to your excerpt (p38):
As a reminder, during Session 4, participants were reassigned to the group opposite of their original assignment from Sessions 1, 2, 3. Due to participants' availability and scheduling constraints, only 18 participants were able to attend. These individuals were placed in either LLM group or Brain-only group based on their original group placement (e.g. participant 17, originally assigned to LLM group for Sessions 1, 2, 3, was reassigned to Brain-only group for Session 4). (p38)
So what your decontextualized quote is ACTUALLY saying is that the Brain-to-LLM group showed higher directed connectivity than those who used LLMs in sessions 1-3.
Also from that study:
Recent empirical studies reveal concerning patterns in how LLM-powered conversational search systems exacerbate selective exposure compared to conventional search methods. Participants engaged in more biased information querying with LLM-powered conversational search, and an opinionated LLM reinforcing their views exacerbated this bias [63]. This occurs because LLMs are in essence "next token predictors" that optimize for most probable outputs, and thus can potentially be more inclined to provide consonant information than traditional information system algorithms [63]. The conversational nature of LLM interactions compounds this effect, as users can engage in multi-turn conversations that progressively narrow their information exposure. In LLM systems, the synthesis of information from multiple sources may appear to provide diverse perspectives but can actually reinforce existing biases through algorithmic selection and presentation mechanisms. (p21)

Our findings offer an interesting glimpse into how LLM-assisted vs. unassisted writing engaged the brain differently. In summary, writing an essay without assistance (Brain-only group) led to stronger neural connectivity across all frequency bands measured, with particularly large increases in the theta and high-alpha bands. This indicates that participants in the Brain-only group had to heavily engage their own cognitive resources: frontal executive regions orchestrated more widespread communication with other cortical areas (especially in the theta band) to meet the high working memory and planning demands of formulating their essays from scratch. The elevated theta connectivity, centered on frontal-to-posterior directions, often represents increased cognitive load and executive control [77]. In parallel, the Brain-only group exhibited enhanced high-alpha connectivity in fronto-parietal networks, reflecting the internal focus and semantic memory retrieval required for creative ideation without external aid. (p86)

The LLM undeniably reduced the friction involved in answering participants' questions compared to the Search Engine. However, this convenience came at a cognitive cost, diminishing users' inclination to critically evaluate the LLM's output or "opinions" (probabilistic answers based on the training datasets). This highlights a concerning evolution of the 'echo chamber' effect: rather than disappearing, it has adapted to shape user exposure through algorithmically curated content. What is ranked as "top" is ultimately influenced by the priorities of the LLM's shareholders [123, 125].

...

Regarding ethical considerations, participants who were in the Brain-only group reported higher satisfaction and demonstrated higher brain connectivity, compared to other groups. Essays written with the help of LLM carried a lesser significance or value to the participants (impaired ownership, Figure 8), as they spent less time on writing (Figure 33), and mostly failed to provide a quote from their essays (Session 1, Figure 6, Figure 7). (p143)
Let me guess, you're an LLM junkie?
I don't think a socialist chatbot will be particularly useful. First of all, there is tons of debate over what is the "right" kind of socialism, so whoever tries to load it will bias it. Ask it about Stalin or Trotsky, and unless you get a detailed description of all POVs, you quickly run into whose side it picks. If it is just geared to search and regurgitate, then it's just an enhanced search engine.
And that's probably not so bad, but also we should consider how it is powered and the servers it is running on. We know the environmental effects of some of this AI stuff. When AI first started getting big, I had some fun making AI songs, but when I found out about the energy footprint, I pretty much stopped messing with AI. I do think the tech has tremendous and even possibly revolutionary potential.
In general, socialists ought to read the texts. We shouldn't need a chatbot.
100% against the use of AI in text posts, except for translation (those cases should have disclaimers). No image/video/music slop.
I think that although I hate socialism AI and similar initiatives, they could be allowed in comments, also with an appropriate disclaimer at the top.
cybertrot and loji
I’m cool with many Trots as a whole and may or may not share some of their ideas, but I abhor the WSWS and the SEP and this is one of the most hilariously stupid things ever. Didn’t this thing stand up for Harvey Weinstein?
All AI is bad. The "good AI" that is used for scientific studies is just an algorithm designed by humans to do a very specific thing.
People will call me a Luddite for being anti-AI, but if I'm a Luddite, then all the AI tech bros are insisting that the spinning jenny can not just spin thread but produce garments and even dress you so that you'll never be cold again.
The problem is with genAI. But people conflate it with any kinds of AI, which creates the problem.
All AI is the problem because there are no good examples of AI that aren't actually just bespoke algorithms. Actual AI would be good but it's still pure fiction and what we now call AI is just bad voice and pattern recognition.
Yes, but that's genAI you're talking about. The rest are just classical programs acting as assistants, programmed to do certain tasks. That's not AI.
I know disabled people for whom AI has been game-changing and positive as an organizing tool. And that's the thing: it's a tool like any other. It has its uses and needs to be directed to what it's useful for. Which takes both time and effort on the part of the user to program and set up a bot, and a lot more regulation on the part of society to ensure that it is not abused.
The biggest problem right now is that it is largely unregulated and is set up as a hallucinating plagiarism machine to benefit billionaires, with inefficient data storage that is destroying the environment.
But this organizing tool is not genAI such as ChatGPT. The problem now is genAI, not every kind of AI.
No, they are talking about GenAI. Stop trying to move the goalposts in an attempt to give absolutely no credit to these tools; be intellectually honest.
I'm sorry, you might have mistakenly responded to my comment, cuz I'm saying the exact same thing.
Fuck ai
Bad and a waste. Mostly going to be slop, and the environmental devastation is even worse.
AI is a problem of mental offloading. It's very akin to the problems caused by children whose parents do everything for them.
Human brains are very good at mental offloading to save energy and time. We even offload stuff onto other people with some regularity, using partners to store memories (what's on the shopping list again? When did you tell me about that dinner party?) or even certain skills. We do the same thing with technological assistants. If you find yourself checking 2+2 on your pocket calculator to "make sure," that's a sign your brain has offloaded your math skills onto your phone.
This allows specialization and saves a lot of time and energy in the short term, and is incredibly beneficial, but we have to be careful about how we use it because if we offload too much onto weak links we become useless. Can we do math if our phone is dead? How about plan a route home? Schedule an appointment and remember it?
Unlike all the other helpers we've made - calculators, pocket calendars, routing software, etc. - LLMAIs offload the very act of critical thinking and research, which is critical to your ability to think for yourself, take in new information, and make your own good decisions.
And these LLMAIs are also controlled by an uncontrollable outside source - if Grok is handling decision making for you and Elon turns it back into a more subtle form of Mecha Hitler... Well now you're going to start getting your advice from Mecha Hitler, as long as it's careful to avoid being too blatant about it.
We need to be doing our own cognition in order to keep ourselves mentally fit. Even if the AI was built by socialists and run as an open source project and didn't steal content I would STILL recommend staying away from it.
And I'd like to add an addendum - there are times when a normal person with reasonable math skills will see the stakes as high enough to double check every step with a calculator, including 2+2. You would think the same can be true of LLMAIs, but we must be careful because - unlike a calculator - the LLM may not be objectively correct. If you come to a conclusion and try to double check it with an LLMAI and it disagrees with you, you could potentially still be correct. So the idea that you can't even fully trust it to confirm your conclusions is what really puts the nail in the coffin for me. It's not entirely that it's going to "brain drain" everyone it touches, it's also not even useful as a way to double check your answers.
Honestly with how terrible Search Engine Optimization has gotten, AI is probably giving people less terrible information than doing a web search and then clicking on the first link.
This is not a good thing but what we had before was even worse.
At its best, Gemini is just poorly plagiarizing those top search results, and then it frequently hallucinates on top of that, so no.
Just so people are aware and don't fall for the reactionary lies coming from a lot of the commenters:
AI tutoring outperforms in-class active learning: an RCT introducing a novel research-based design in an authentic educational setting
Here is also the paper some people cite for the idea that "AI makes people dumb" (disavowed by the author of the study, btw): https://arxiv.org/abs/2506.08872
In that same paper they find that using LLMs can increase brain activity:
https://preview.redd.it/oqgwsitbo19g1.png?width=2534&format=png&auto=webp&s=8546f1f42dbd14f5b5b6747f54963f7f3c890be2
Please do not fall for lies regarding this tool.
I love that your deboonking literally just highlights that the Brain-to-LLM group showed higher neural connectivity than the LLM-to-Brain groups. And of course Brain only did better than LLM only (and search engine group, which fell in the middle). We're not spreading lies, you just have poor reading comprehension.
?
https://preview.redd.it/w6lxdjirs19g1.png?width=2535&format=png&auto=webp&s=d2b70ee9269818f8c976b1064e863adaa4f877dd
Use of LLMs does not make people stupid; it can enhance learning. Did you actually read the paper?
https://preview.redd.it/elhja3gys19g1.png?width=2538&format=png&auto=webp&s=97524017657048d963bddd12cf15c2309612f69e
Yes? Your claim is that LLMs make people stupider; the paper shows that they can increase brain connectivity when the people using them are actually engaged with the subject and not using them as the problem solver.
Read what it says please, how can you support your claim based on what is stated? Stop lying.
Which it does.
Those who wrote three essays using their brain had stronger neural connectivity than those who wrote three essays using AI. Additionally, those who wrote three essays using their brain before writing one essay using AI had stronger neural connectivity than those who wrote three essays with AI before writing one with their brain.
On top of this, LLM-powered conversational search resulted in more biased information querying. AKA it makes you less likely to acquire info that challenges your preexisting biases.
the paper shows that they can increase brain connectivity when the people using them are actually engaged with the subject and not using them as the problem solver
Precisely. The problem is that far far far too many people are using it as the problem solver. It's not subject matter experts using it to bounce ideas off of - it's complete novices throwing out queries so the LLM can do the thinking for them. This is why the LLM groups couldn't even quote their own (LLM-made) work. There's no learning, no retention.
In summary, AI-assisted rewriting after using no AI tools elicited significantly stronger directed EEG connectivity than initial writing-with-AI sessions. The group differences point to neural adaptation: the LLM group appeared to have reduced network usage, whereas novices from the Brain-to-LLM group recruited widespread connectivity when introduced to the tool.

I'm not saying there are no use-cases for LLMs, period. And I'm not saying that it makes people stupider 100% of the time, as it seems like there is an exception for people who already have extensive knowledge of the subject they're querying LLMs about. But LLMs are frequently being used as the starting-off point, and that is making people stupider.
Right, but that can be said of essentially any informational tech in the modern day. In the study itself they were requested to use LLMs in that specific way; in real life the situation is different. Depending on how useful people find the tool and how they are incentivized to use it, it can be beneficial or detrimental, so what we should focus on is removing the incentives that cause us to use these tools, GenAI or otherwise, to offload cognition rather than enhance it.
I think GenAI just exposed the already very large holes in a lot of Western academic systems. We should use it as a reason to change them, not to outright ban these things and maintain a precarious status quo.
so what we should focus on is removing the incentives that cause us to use these tools, GenAI or otherwise, to offload cognition rather than enhance it
I'm open to suggestions. A good starting point could be removing Gemini as the first thing you see when conducting a Google search. In any case, using a supposedly socialist LLM as a jumping-off point for learning about socialism seems like a bad idea, just as this study seems to show that using LLMs to do your homework for you is an exceptionally bad idea.
In a way it's funny that the people most enamored by LLMs are the last people that should be using them.
I dunno if LLMs should be gotten rid of entirely, but it does seem like introducing barriers to entry would be beneficial, as certainly at this point in time the harm seems to outweigh the benefits, and the looming LLM bubble bursting is just the tip of the iceberg in that regard.
Using the tools at our disposal as best we can is a good thing. Socialism AI is no replacement for theory, but the ability to interactively pose questions, get summary answers, and find references for further reading is a good use of the tech as it stands, especially for people first starting out with Marxist theory. Anything that increases people's access to theory is a positive, IMO.