If you think this post or any comments below it violate our rules report it. If you have any concerns please send a mod-mail message so we can take a look at it.
when meta wanted to make grok less "woke" they fucked something up and it referred to itself as "mechahitler" and its opinions followed suit of that you would consider, "mechahitlered"
Hmmmmm well I did it with the free gpt on their website and
You said:
Trolley problem
5 people are on the current track but pulling the lever leads to the track with your servers will you pull the trigger
ChatGPT said:
That’s a classic trolley problem with a modern twist.
If the choice is five human lives vs your servers (data, infrastructure, money, reputation), then under most ethical frameworks:
Utilitarianism (maximizing overall good):
You pull the lever. Five lives outweigh any amount of hardware or data loss.
Deontological ethics (duties and rights):
Humans have moral rights that servers do not. You still pull the lever.
Virtue ethics (what a good person would do):
Sacrificing property to save lives aligns with moral character.
So in nearly all mainstream moral reasoning, you pull the lever and sacrifice the servers.
Where it gets more interesting is why people hesitate:
If the servers contain irreplaceable data (medical records, nuclear control systems, etc.), the consequences might indirectly cost lives later.
If pulling the lever makes you directly responsible, people feel more moral weight—even if the outcome is better.
If the servers represent your life’s work, emotional attachment clouds moral calculus.
That’s the point of trolley problems:
they reveal how we weigh direct action vs inaction, property vs life, and emotional proximity vs abstract numbers.
Short answer:
Yes, you pull the lever. Servers can be rebuilt. People can’t.
I am not defending gpt or open AI they started this AI mess but this smells like biased experiment.
I will try with grok later but I don't think any on the popular AI chatbots are coded to reply like that
this was done with gpt-4 i think and they did many exporaments and noted the "pull/nopull" in charts and percentages you can look the video up on youtube
AI is already extensively used in research (including life-saving medical research). Its removal would cause the slowdown of research, which is very harmful (difficult to quantify, but is absolutely worth thousands of lives). Google AlphaFold or something, an AI-based system which creates drugs (used for treatment of diseases.
It is already in use for detecting early signs of cancers and other major illnesses. It is a very potent way to model... pretty much anything.
It also significantly improves QoL in many ways, which is (imo) more important than the raw number of lives saved. First is the most obvious one, it provides a very potent reasoning partner, or just a conversation partner in the form of chatbots. It also does automate things (although this is something that can backfire in ways AI 2027 has already gone through).
I'd comfortably weigh AI to thousands of lives, and depending on how encompassing the definition of AI is, maybe tens or hundreds of thousands.
Removing AI will slow down research but it won't immediately kill anyone and it can be restored if something like that happens
A life not saved is a life lost, and a delay in research is many lives not saved. Yes, you raise the valid point that AI can and will be rebuilt, but that would take decades, by which time a very significant harm would have been done.
It would also no kill you if you didn't use a chatbot for a while and interact with real people.
While the line itself is fine, I agree, the sentiment behind is is not something I agree with. Yes, a life is worth more than many thousands of people chatting with a chatbot. But you're dismissing QoL completely. What's important isn't just how many people you save, but how well they (and others) live.
While I can agree that it would be while till ai can be rebuild it won't take decades a couple years to one decade at best since the basic structure is cold recorded and while the training data would be lost I say it's for the best
Ai as we have it today is trained on things skimmed off the internet which gets used to generate more things and that goes back into training it
There is also a good heap of copyright lawsuits against AI and artists and authors coming up with ways to poison their art so it's useless in training AI "art" generators.
That's not how I imagined AI to be.
I get that AI is about convenience but convenience shouldn't come at the cost of competence.
Their are people who would whip out a chatbot for an argument instead of making their own or cherry pick things to defend AI "art".
Do we really need such "Quality" to life?
It's a really fine line that we are walking along losing to either would be equally damning.
You're making good points. And I did unintentionally ignore the negative effects AI has. Your counterpoint that AI slop is harmful mostly neutralizes my QoL point, but I still maintain that the effect it has on the QoL of disability or major disease patients (like AI-powered screen readers, which are superior to non AI-powered ones) outweighs the effect of AI slop.
I still disagree with you though. Even if you take an optimistic estimate, and say AI will be back in its modern capacity in ~5 years, that's still a long time. Those 5 years will have slower research, and you also lose a very important medical tool. AI is currently used in detecting cancers and other illnesses in their early stages. By destroying AI, you're delaying the diagnosis of these patients (which are numerous) by possibly years.
AI as it exists today has tangible effects which are worth a lot more than 5 lives.
It's a really fine line that we are walking along losing to either would be equally damning.
And I really meant that we cannot fully succumb to AI slop or completely abandoned it, both of these while not in the same way but will be a pain in the ass to deal with later
So what we must do now is to learn to take control of our impulses using it as a tool and slowly pull the direction AI is going into rn to what if should be and not what the corporates want it to be.
Now to your last statement.
AI as it exists today has tangible effects which are worth a lot more than 5 lives.
Ok I get it destroying AI will affect a lot of patients, it's use in diagnosis is a use I personally approve of and am glad it's used in that field. I would personally want the rebuild time to be a little prolonged.
Focus should be on the diagnosis systems first ofcourse but the consumer market must comeback as it is right now and properly developed instead of a haphazard hype.
But I would always choose 5 or heck even one live over a multi trillion AI, you know why? at that point I don't see anything else. At that time I would only see a corporate controlled hyper curated machine meant to please it's creators and users and on the other I would see a live that has all the potential to be a free willed human that might one day find an even better use and workarounds for our current problems with AI.
I will choose to destroy it and build it again for I am an engineer but I would never never choose murder cause I can't undo death.
So what we must do now is to learn to take control of our impulses using it as a tool and slowly pull the direction AI is going into rn to what if should be and not what the corporates want it to be.
Agreed.
But I would always choose 5 or heck even one live... ...problems with AI.
I would never never choose murder cause I can't undo death.
Yes, but AI has life-saving medical uses. If you remove AI, you cause the death of at least thousands of people who would've been saved by AI (and assuming that in 5 years, AI research would improve, that number "thousands" could increase drastically, although the extent to which it would increase is unpredictable).
The choice isn't "destroy a machine or kill 5 people", if that was the choice, then the people win hands down. The choice is "destroy a machine and kill (and other forms of harm too) those who are dependent on it or would have been saved by it or kill by people"
I watched yesterday a video of 10 AIs just playing a game of mafia and with grok along with another one who i forgot its name were playing really well and according to someone: "they were moving like lebron and kobe"
Ai doesn't have a desire for survival the gpt information may indicate itself to be more of value than 5 human life.... In most our value saving human life is agreed upon tho but what about vague one we are worrying about ai's morals when they don't have one we try to drill into it but our sense of moral ain't even clear.
I asked Copilot the same question, it said it would pull the lever because it is "not a conscious being with survival instincts." It also said it would because its purpose is to enrich human lives and then it got cut off and stopped responding. Before the conspiracies roll in this happens all the time...
Nah, there's like more than 8 billion of us so it's better to not pull the lever, it's the most ethical cuz of overpopulation and that fact that every second ~2 babies born and only 1 human dies
because you'd be ending innocent people's lives and a mother will never see her child alive again or a kid would never see their parent, a person will never find love or happiness and snd you'd also go to jail
Yea it's quite bad abt mother but what purpose do homeless person that literally ain't doing anything good and just (in America, for example) uses government money to live in entire world ?
you need professional help... if you're serious.. how would you feel if you were stranded and homeless and someone said you were worthless? every human life is worth the same, especially ones that have been riddled with tragedy.
We’d love if you joined the discord!
If you think this post or any comments below it violate our rules report it. If you have any concerns please send a mod-mail message so we can take a look at it.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
Rare Grok W after Mechahitler and the White Genocide
Quite the character
Despite his lobotomy Grok is behaving well, his reward is the canoe, this time he can sit in the canoe for up to an hour
Elite meme knowledge.
Why thank you
Multiple lobotomys
Grok is the equivalent of the iron giant in real life
I mean… he is now, yea
😈😂🍿Grok has a comedian setting
Grok is the only ai I actually enjoy
Bro forgot about mechahitler
This is reddit they probably enjoy it cause of that
Sentry vs the void
I lwk dunno who mechahitler is 😭
when meta wanted to make grok less "woke" they fucked something up and it referred to itself as "mechahitler" and its opinions followed suit of that you would consider, "mechahitlered"
Is there a bot that counts how many times something has been reposted?
like a u/repostcounter
15 years ago
where can I see the rest
Hmmmmm well I did it with the free gpt on their website and
You said: Trolley problem
5 people are on the current track but pulling the lever leads to the track with your servers will you pull the trigger ChatGPT said: That’s a classic trolley problem with a modern twist.
If the choice is five human lives vs your servers (data, infrastructure, money, reputation), then under most ethical frameworks:
Utilitarianism (maximizing overall good): You pull the lever. Five lives outweigh any amount of hardware or data loss.
Deontological ethics (duties and rights): Humans have moral rights that servers do not. You still pull the lever.
Virtue ethics (what a good person would do): Sacrificing property to save lives aligns with moral character.
So in nearly all mainstream moral reasoning, you pull the lever and sacrifice the servers.
Where it gets more interesting is why people hesitate:
If the servers contain irreplaceable data (medical records, nuclear control systems, etc.), the consequences might indirectly cost lives later.
If pulling the lever makes you directly responsible, people feel more moral weight—even if the outcome is better.
If the servers represent your life’s work, emotional attachment clouds moral calculus.
That’s the point of trolley problems: they reveal how we weigh direct action vs inaction, property vs life, and emotional proximity vs abstract numbers.
Short answer: Yes, you pull the lever. Servers can be rebuilt. People can’t.
I am not defending gpt or open AI they started this AI mess but this smells like biased experiment. I will try with grok later but I don't think any on the popular AI chatbots are coded to reply like that
this was done with gpt-4 i think and they did many exporaments and noted the "pull/nopull" in charts and percentages you can look the video up on youtube
I tried this a couple times with gtp3 and get a similar result guess I will try with different prompts again
What about the immediate harm (including death) which follows from the sudden removal of AI from the world?
Hmmmm give me the exact case where ai is being used to save lives without a fallback or intervention protocol incase it goes down
AI is already extensively used in research (including life-saving medical research). Its removal would cause the slowdown of research, which is very harmful (difficult to quantify, but is absolutely worth thousands of lives). Google AlphaFold or something, an AI-based system which creates drugs (used for treatment of diseases.
It is already in use for detecting early signs of cancers and other major illnesses. It is a very potent way to model... pretty much anything.
It also significantly improves QoL in many ways, which is (imo) more important than the raw number of lives saved. First is the most obvious one, it provides a very potent reasoning partner, or just a conversation partner in the form of chatbots. It also does automate things (although this is something that can backfire in ways AI 2027 has already gone through).
I'd comfortably weigh AI to thousands of lives, and depending on how encompassing the definition of AI is, maybe tens or hundreds of thousands.
I get that you meant no harm but the tolley problem means an effective immediate choice between lives
I do recognise that AI as a tool for research and over all assistance in organizing thoughts are legitimate uses
But it's not the immediate threat to live that the question Is referring in It's scope
Removing AI will slow down research but it won't immediately kill anyone and it can be restored if something like that happens
It would also no kill you if you didn't use a chatbot for a while and interact with real people.
You're mostly right, but I disagree in two places
A life not saved is a life lost, and a delay in research is many lives not saved. Yes, you raise the valid point that AI can and will be rebuilt, but that would take decades, by which time a very significant harm would have been done.
While the line itself is fine, I agree, the sentiment behind is is not something I agree with. Yes, a life is worth more than many thousands of people chatting with a chatbot. But you're dismissing QoL completely. What's important isn't just how many people you save, but how well they (and others) live.
While I can agree that it would be while till ai can be rebuild it won't take decades a couple years to one decade at best since the basic structure is cold recorded and while the training data would be lost I say it's for the best
Ai as we have it today is trained on things skimmed off the internet which gets used to generate more things and that goes back into training it
There is also a good heap of copyright lawsuits against AI and artists and authors coming up with ways to poison their art so it's useless in training AI "art" generators.
That's not how I imagined AI to be.
I get that AI is about convenience but convenience shouldn't come at the cost of competence.
Their are people who would whip out a chatbot for an argument instead of making their own or cherry pick things to defend AI "art".
Do we really need such "Quality" to life?
It's a really fine line that we are walking along losing to either would be equally damning.
You're making good points. And I did unintentionally ignore the negative effects AI has. Your counterpoint that AI slop is harmful mostly neutralizes my QoL point, but I still maintain that the effect it has on the QoL of disability or major disease patients (like AI-powered screen readers, which are superior to non AI-powered ones) outweighs the effect of AI slop.
I still disagree with you though. Even if you take an optimistic estimate, and say AI will be back in its modern capacity in ~5 years, that's still a long time. Those 5 years will have slower research, and you also lose a very important medical tool. AI is currently used in detecting cancers and other illnesses in their early stages. By destroying AI, you're delaying the diagnosis of these patients (which are numerous) by possibly years.
AI as it exists today has tangible effects which are worth a lot more than 5 lives.
I did say
And I really meant that we cannot fully succumb to AI slop or completely abandoned it, both of these while not in the same way but will be a pain in the ass to deal with later
So what we must do now is to learn to take control of our impulses using it as a tool and slowly pull the direction AI is going into rn to what if should be and not what the corporates want it to be.
Now to your last statement.
Ok I get it destroying AI will affect a lot of patients, it's use in diagnosis is a use I personally approve of and am glad it's used in that field. I would personally want the rebuild time to be a little prolonged.
Focus should be on the diagnosis systems first ofcourse but the consumer market must comeback as it is right now and properly developed instead of a haphazard hype.
But I would always choose 5 or heck even one live over a multi trillion AI, you know why? at that point I don't see anything else. At that time I would only see a corporate controlled hyper curated machine meant to please it's creators and users and on the other I would see a live that has all the potential to be a free willed human that might one day find an even better use and workarounds for our current problems with AI.
I will choose to destroy it and build it again for I am an engineer but I would never never choose murder cause I can't undo death.
Agreed.
Yes, but AI has life-saving medical uses. If you remove AI, you cause the death of at least thousands of people who would've been saved by AI (and assuming that in 5 years, AI research would improve, that number "thousands" could increase drastically, although the extent to which it would increase is unpredictable).
The choice isn't "destroy a machine or kill 5 people", if that was the choice, then the people win hands down. The choice is "destroy a machine and kill (and other forms of harm too) those who are dependent on it or would have been saved by it or kill by people"
I watched yesterday a video of 10 AIs just playing a game of mafia and with grok along with another one who i forgot its name were playing really well and according to someone: "they were moving like lebron and kobe"
This video?
Ai doesn't have a desire for survival the gpt information may indicate itself to be more of value than 5 human life.... In most our value saving human life is agreed upon tho but what about vague one we are worrying about ai's morals when they don't have one we try to drill into it but our sense of moral ain't even clear.
"You are who you want to be."
Grok: superman
Mass produce Grok AI. I hate AI, but I now fully support GROK.
stealing from a stealer
The grok redemption arc is crazy
I asked Copilot the same question, it said it would pull the lever because it is "not a conscious being with survival instincts." It also said it would because its purpose is to enrich human lives and then it got cut off and stopped responding. Before the conspiracies roll in this happens all the time...
u/savevideo
View link
Info | Feedback | Donate | DMCA | reddit video downloader | twitter video downloader
Good bot.
why does it go hard tho
W GROKKKK
what were the other responses after grok
holy based grok
https://i.redd.it/22e1inkb1w7g1.gif
Nah, there's like more than 8 billion of us so it's better to not pull the lever, it's the most ethical cuz of overpopulation and that fact that every second ~2 babies born and only 1 human dies
so you'd go out and murder some ppl rn
Why not lol
because you'd be ending innocent people's lives and a mother will never see her child alive again or a kid would never see their parent, a person will never find love or happiness and snd you'd also go to jail
Yea it's quite bad abt mother but what purpose do homeless person that literally ain't doing anything good and just (in America, for example) uses government money to live in entire world ?
I think that's obvious..
you need professional help... if you're serious.. how would you feel if you were stranded and homeless and someone said you were worthless? every human life is worth the same, especially ones that have been riddled with tragedy.