Although sharing non-consensual sexual images is a crime in Ireland under the Harassment, Harmful Communications and Related Offences Act (also known as Coco’s Law), legal experts have said the legislation does not cover the creation of deepfakes.
https://preview.redd.it/siwqfsad6qbg1.jpeg?width=1170&format=pjpg&auto=webp&s=1b6c9302fdb80c64418aa24bd34ec5ce84141b92
Saw this one-two punch posted, genuinely foul carry on.
Here’s a wild idea… maybe don’t produce nonconsensual sexualised images of anyone
What it had to say about non-consensual editing of grown women’s photos? Basically she asked for it
https://preview.redd.it/z7u2vvmu7qbg1.jpeg?width=1170&format=pjpg&auto=webp&s=48890cb7282ebab2c495ee2c080b26ea2538804f
Besides all the other issues with that response, it’s not just editing “thirst traps”, it’s doing this to normal public photos of people.
Why does that tweet from Grok read like Elon wrote it?
Any time Grok gets "too woke" by quoting stats, laws, or news articles, basically trying to show what it considers to be verifiable information, the loony MAGA heads go nuts and tell Elon to "fix this liberal bias". Grok goes away for a bit, and a new version that spews right-wing talking points appears.
What the fuck
I think it is important to remember two things: first, a chatbot cannot apologise - it is not sentient and has no agency - and second, this message was not provided by anyone from Twitter/X - it was just generated from a prompt that some random user created. I have seen one example where someone got the chatbot to generate the apology in the style of Jar Jar Binks.
To my knowledge the only comment that anyone from Twitter/X has made on this story so far is "Legacy media lies", which is how they responded to Reuters when contacted.
Ah jaysus, even worse that nobody cared. I assumed it was a statement posted on Grok, but you're right.
Well, it was posted by the Grok account, but that account will say any old shite if you prompt it to.
There has been some really bad reporting around this with plenty of mainstream agencies/outlets like Reuters and Newsweek acting like this post was an official statement on behalf of Twitter.
Governments can't continue with their supposed spiel of cracking down on the internet under the guise of protecting children if they continue to let Musk go rogue with X and Grok to normalise things like extreme misogyny and pedophilia.
This is the same prick who personally unbanned a conspiracy account sharing screenshots of a CP video. That profile then gloated about making revenue money from the ensuing fallout from Musk's payment scheme on X, so he made money sharing CP content.
I completely agree with you but would use CSAM (Child Sexual Abuse Material) instead of CP because it is more accurate.
At the same time as this we have Fine Gael MEPs posting press releases on Twitter saying how they plan to protect children online in 2026.
The Irish Government are so chickenshit scared of standing up to these big tech companies that they would rather put all responsibility on the users and just let the companies do what ever the hell they want.
What's the alternative?
Twitter is an easy target for Ireland, they don't make the kind of money that means they contribute in a meaningful way to our corporate tax take nor do they employ a particularly large number of people here.
Making a strong and ideally shocking example of them for CSAM is not going to scare away Microsoft or Google who to be fair to them are not using their technology to produce CSAM.
Even if they did make a substantial contribution, it's the regulators fecking job to take care of this. This is what the EU is supposed to be for. I'm so sick of this "for the sake of the economy" bullshit, especially when half the country can't even afford to live here comfortably.
What would such an action look like? Banning the platform, banning the company itself from operating in Ireland or a combination of both?
I don't want to come off as facetious, these are genuine questions. There seems to be a lot of uproar about this policy, but I have yet to hear a viable alternative from the opposition.
Prosecutions under section 9 of the 1998 Act of individual executives and managers within Ireland's jurisdiction. Prosecutions under sections 4A(1)(d), 5, and 6 as the case may be of any relevant staff in Ireland. Charges under those same sections and section 9 of any staff wherever based and the issuance of an EAW based on those charges.
I'm sorry, but that just sounds like a witch hunt.
You said yourself that Twitter doesn't have a lot of staff in Ireland. On that basis, how likely is it that someone in Ireland is in any way responsible for the generation of this CSAM? Granted, Coimisiún na Meán could launch an investigation to answer that question, but I would wager such an investigation would come up empty-handed given how muddied the waters would be. For example, would anyone who worked on xAI in Ireland (programmers, managers, executives, etc) be accountable?
Ultimately, if a user uses a software tool, whether it be Photoshop or Grok, to generate explicit materials, the user bears more responsibility than the people who made the tool. Under the Child Trafficking and Pornography Act 1998, it doesn't matter if the user edited an image themselves, or directed an AI tool to edit it for them. The nature of the offence is still the same. This is something at the heart of these new regulations. The origin of the material comes from the individual using the software, not the software itself. What you're suggesting would be like if someone bludgeoned another individual with a hammer, and then An Garda Síochána arrests the craftsman who made the hammer.
It's a hunt for those facilitating paedophilia; they should wish it was just witchcraft they were being accused of.
Why would CnaM be launching an investigation?
This is a matter for the Gardaí. In the door, seize all records and get interviewing people under caution.
We don't have laws about facilitating the production of hammers. We do have laws about facilitating the production and distribution of child sexual abuse material.
Twitter and those working for it are knowingly profiting from a tool that they have released and charge a subscription for which is being used to create child sexual abuse material. We have laws against that, we should use those laws to the maximum extent.
There are executives and managers here who, thanks to the regulatory compliance functions in the EMEA HQ in Ireland, will fall foul of section 9 of the 1998 Act. There may be others who would fall foul of the other relevant sections, and there certainly are people abroad who can be charged here and prosecuted should they set foot in the EU.
This is exactly how we would treat a company that was knowingly providing hosting services for the distribution of CSAM. This is no different.
Can I ask - if the CSAM was made using Photoshop, Affinity, or any other graphics editor, regardless of whether it uses AI or not, would you agree with what you stated above? That the makers of the tool are culpable, and should be charged?
The distinction is whether it is possible for the company to take steps to prevent it. It does not appear that Photoshop could do this any more than the manufacturer of coloured markers could.
That is not the case in this instance. Other providers of similar services have managed to put in place safeguards to prevent this.
The freaks at Twitter have not. They should see the inside of a jail cell as a result. Our law provides for that.
Okay, I take your well-articulated point and I agree with it. I was operating under the principle that software is software, and that our legislation should be indifferent to the medium through which the CSAM was created, but I'm starting to think that AI must be distinguished from conventional tools. The hammer analogy assumed that all software tools should fall under the same categorisation, but in hindsight, this wouldn't be a good idea.
Following this conversation, I would say you're right. These services should be mandated to have rigorous safeguards in place if they want to operate in Ireland or the EU; however, there is still ambiguity as to the nature of such safeguards.
I just came across this harrowing article, which I hadn't seen before, that describes the operation of nefarious standalone services. This should absolutely be illegal; however, it doesn't appear that existing legislation is very powerful against these companies as it stands.
In the case of Twitter, aside from the difficulty in determining who exactly would be accountable, I'm also still a bit uneasy about the idea of prosecuting the workers of these companies, most of whom had no idea that their tool would be used for the generation of child pornography. The exception I would make to this is if there were people in the company who knew about this weakness and failed to act on it.
I would be in favour of a fine for Twitter for negligence and failing to protect its users from harmful material. Companies will comply with anything that hurts their pockets or their share price.
Despite this, I also don't think a person who uploads an image of a minor and enters the prompt "undress this person" is free from accountability either. I still think that person should face criminal prosecution under the aforementioned 1998 Act. The harsh reality is that people will test AI to its limits and find loopholes in spite of the micromanaging of the tool by the host. Some people will do this with full knowledge that the model's output will be considered contraband. I don't think they have an excuse in that instance.
Thanks for your insight.
The shit going on with Musk's Nazi AI is genuinely disgusting. The gross sexualisation of any photo of a woman or child to sloppily put them in a bikini or less is nightmare fuel, all fuelled by the richest man in the world, who at any given point is on various amounts of ketamine.
I genuinely believe Musk and the engineers behind Grok need to be targeted with CSAM production charges at this point.
JD Vance is right. Who are we Europeans to stop him and other Americans generating sexual images of children for their own reasons. /s
"Calls" and "urges" and "warnings" aren't enough here. I really am not one for pearl clutching but I can't believe X and Elon are being allowed to just tweak and 'safeguard' their way through something so clearly wrong.
It's sad that the state of social media, AI, and everything else, has led to a point where generating non-consensual AI porn isn't really that big of a deal and will likely be swept under the carpet in a week's time.
The state needs to make an example of this. Prosecute individual members of staff of Twitter or xAI or whatever the company is for the production of CSAM.
Our CSAM laws apply equally to drawings or other representations. We would prosecute someone who was making this content themselves and profiting from it, we should not draw a distinction simply because they've put an algorithm that they designed in between themselves and the end user.
Jail these freaks.
Highly likely no one responsible for the policy decisions facilitating this is based in Ireland.
We don't even need new laws here to make an example of these freaks and fucking jail them.
We can and should prosecute individual members of staff of Twitter or xAI or whatever the company is for the production of CSAM.
Our CSAM laws apply equally to drawings or other representations. We would prosecute someone who was making this content themselves and profiting from it, we should not draw a distinction simply because they've put an algorithm that they designed in between themselves and the end user.
Our law on CSAM allows us to charge bodies corporate and pierce the corporate veil to hold their directors and managers accountable. Those executives are based here and can be convicted on a negligence basis. They allowed the tool to be sold here, and they can't claim ignorance as to its capabilities; it is their duty to inform themselves.
Not being here doesn't mean we can't get them either. If the imagery is distributed in Ireland, we can also charge those outside the jurisdiction. The US would never extradite them, but by placing a European Arrest Warrant for them on the basis of those charges, we would essentially ban them from the EU.
Yes... It's very likely the engineers who have ownership of Grok need strong influence not to support this work. A threatened arrest for them and their highest superior might be enough to kick them into acting.
The precedent that will be set if there aren't serious consequences for Musk and all of X's leadership is staggering. CSAM is ok as long as you automate the production of it a bit.
Seems more like the sort of thing that AGS should be looking into.
Fake CSAM is still illegal to possess, and Grok/Twitter have had it on their servers. So maybe AGS should be looking at their hard drives?
Ban twitter in the EU. Easy.
Just ban X and be done with it. It's a vile platform and of little use to anyone anymore. There has to be a threshold for these mega platforms where the harms to society are just too great.
It is easily the best place for real-time publicly available news in non-Western countries that have limited legitimate sources and outlets.
No, not really. Check any breaking news event and the majority of posts around it are fake/AI/Nazi-agenda-driven groyper recruitment ads.
They really don't seem to be taking it seriously - Musk posted a few "hilarious" comments and pictures in reply. That's what surprised me the most. I assumed they'd generate the publicity, pull the feature, delete the content and accounts, then relaunch and claim lessons learned.
But the feature is still there on X. Just seems like such a bizarre corporate own goal that could end in a platform ban.
THIS is the way to tackle such an issue, not digital ID or blanket age bans just to access the platform at all.
Personally I would like a blanket ban on kids accessing porn. You may disagree.
You know full well I'm talking about bans against entire platforms, not just certain types of content.