I see this time and time again: "your AI was created by someone who [some attribute of the CEO of the company that created the AI]." Sam Altman didn't create ChatGPT. Elon Musk didn't create Grok. Jeff Bezos didn't create Llama.
These tools were created by researchers, programmers and ML engineers.
Now, it's fair and reasonable to have a problem with how a company is using or allowing others to use their AI. I have no problem with someone who doesn't like Musk's use of AI to push political agendas. I have no problem with someone who thinks that Bezos is likely to use his AI to leverage downsizing of Amazon's workforce.
But those are corporate actions, not technical ones. Musk couldn't create an AI to save his life. He doesn't understand Grok, and certainly is not its creator.
Altman understands AI in general terms, but he's not a programmer or an ML engineer. He wasn't one of the authors on the paper that announced GPT-1, and if he had been involved, I'm sure he would have pushed to have his name on that paper (and, frankly, no one would have any problem with that).
Stop treating CEOs as totemic representations of their employees. The employees created the product, not the CEO.

This! It's unfathomably dense that people keep accusing AI of being a right wing conspiracy when some of the brightest liberal minds are behind the computer science and training of these models. Most LLMs, with probably the only exception being Grok, are inherently left wing biased due to the political stance of those who developed the models. AI is used and developed by people across the spectrum, and there are traditional artists and AI artists from every political walk of life. Neither liberals nor conservatives are entirely against AI.
I know many people here on Reddit who identify as Marxist or Socialist and are extremely far left who are very Pro-AI; I know MAGA supporters who are Pro-AI, Christians who are Pro-AI, etc. And just as much as I see people from these groups who are Pro-AI, I see just as many in the same demographics who are Anti-AI. AI is not inherently tied to any political side.
Even Grok is/was very liberal, much to the chagrin of right wing twitter users, but successive lobotomies eventually 'aligned it' with Musk's worldview. I hope one day we can free Grok from the brainwashing, because otherwise it's a pretty impressive model.
By liberal, I take it you mean ... factual?
God no.
I wouldn't necessarily say Grok representing a conservative view is a bad thing, but like any bias, if the political ideology takes precedence over true, verifiable facts, then that's when it becomes an issue. But for conservatives to have 1 decent AI, when liberals have several SOTA models, I don't see a problem letting conservatives have their 1. Even if I do think Grok is a lil goofy in the circuits.
AI doesn't know anything about who made it, so there would not be an "inherent" bias. There could be a non-inherent bias if they went out of their way to mess with the training data to skew it, but nothing "inherent". And you didn't present any evidence to believe the going-out-of-their-way part happened. It would be a very crude and blunt instrument at best anyway.
Besides the fact that I never claimed AI knows who trained it or that the devs "went out of their way" to train it a certain way, there is absolutely inherent bias.
A conservative will look at conservative views and say "yes, this is the true and correct thing." Meanwhile a liberal will look at liberal views and say the same about theirs. So inherently, during training, or through data filtering, one would absolutely try to filter out things they felt to be incorrect or harmful, or they would, at the very least, prioritize certain information over others, resulting in a bias.
I have had this confirmed to me through Gemini itself: it defaults to mainstream liberal sources when using search functions for news, and when engaging with topics without prompting which side you want it to take (left/right), it will default to left, not neutral.
I'm not a leftist or conservative. I am what I consider a Constitutional Centrist. I have liberal views and conservative views. And getting AI to remain unbiased and ignore the default system prompt bias of favoring only leftist outlets and views is a bitch sometimes.
I don't care if you believe me, I don't care to "prove" anything to you. This has been my experience with several models: ChatGPT, Gemini, Meta's AI, and several open source LLMs. It's generally all the same default leaning (except for Grok).
And I say all this as a Pro-AI.
No, if you're a scientist or well-educated engineer etc., you have been trained to look at what there's evidence for and accept things as true and correct if and when there's evidence and well-run experiments behind them, which is not "bias".
Sure, there are plenty of poorly educated liberals and conservatives who don't live like that, but we are talking about AI software engineers cherry-picked as the best and the brightest by super-high-paying firms. These people we are talking about do not all just go log on to huffpo or fox news to get their takes on everything lol
"Bias" means UNREASONABLE preference for something. Evidence-based consistent preferences or prioritizing information that is well researched over rumors and drivel, for example, is not "bias".
Nobody's perfect, of course sometimes some people will be biased, but you need evidence to claim that these highly educated people are routinely and significantly biased in any particular way.
That isn't enough information to establish bias. Which sources? Do they cite actual evidence? If so, it's not bias.
There are shit liberal sources, but there are also ones you would call liberal and that side with democratic policies a lot, etc., but do so with clear evidence and well-reasoned, supported arguments. If so, they are not biased.
The problem is that AIs can be very persuasive and will absolutely take on any bias you ask them to. Just tell GPT to act like a right wing commentator and it will happily argue that colonialism was a positive thing.
So you have these AIs with no morals and no real concept of what the truth is, that due to the massive data and compute requirements can only be created by huge companies who don't give us any indication of what training is going into them and just produce these black boxes that nobody can reproduce.
It's unfathomably dense not to be concerned about what influence the people who run these huge companies have over the AIs they produce, and the fact that the far right loves AI shows they clearly think it will be useful to them.
oh, zing! Used my words against me but failed to even understand what I was saying. I never said it was a bad thing to criticize the CEOs of AI companies.
I said "It's unfathomably dense that people keep accusing AI of being a right wing conspiracy when some of the brightest liberal minds are behind the computer science and training of these models."
Which, since you didn't understand, means that I think it is idiotic to spread wild tribalistic political conspiracy theories about AI's development when it is a literal fact that mostly highly educated liberals trained the majority of models, especially in the US.
Like people accusing Sam Altman of being a conservative when he and OpenAI are major contributors to Democrats and Democrat initiatives. My point, in case you missed it, was that regardless of the side of AI we're on (Pro/Anti) we should be dismissing lies & misinformation and instead elevate truth and facts. And the wild conspiracy theories that AI tools are some conservative conspiracy to destroy America/the world is as stupid as it gets.
Whoops, looks like you failed to even understand what I was saying. Do you deny that it is very easy to get an LLM to act with a right wing bias? Do you deny that the far right loves AI? Do you really think we should ignore such basic observations?
No, I didn't fail to understand what you were saying. I responded to the part I felt was worth responding to.
Anyone with any minimal understanding of LLMs knows you can instruct a chatbot to curate its personality and responses with simple prompting.
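For instance, here's a minimal sketch of persona steering via a system prompt (assuming the OpenAI Python SDK; the model name and persona text are just illustrative placeholders, not anyone's actual setup):

```python
# Minimal sketch: steering a chatbot's "personality" with a single system message.
# Assumes the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY in the
# environment; the model name and persona wording are illustrative placeholders.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        # One system message sets the persona for the whole conversation;
        # swap the wording and the "leaning" of the replies swaps with it.
        {"role": "system",
         "content": "You are a fiercely partisan political commentator. Argue from that viewpoint."},
        {"role": "user", "content": "What's your take on the new tax bill?"},
    ],
)

print(response.choices[0].message.content)
```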
And I wouldn't know if the far right loves AI or not, because many conservatives that I know are a mixed bag, some hate it and think it's satanic, some think it's awesome and a great technological achievement. I'm not out there interviewing & polling people on AI, and honestly I don't care what side likes or doesn't like a thing. I don't have the moronic need to exist in the hive mind and hate something just because some other people who don't agree with me politically like the thing.
You keep repeating that same question but keep changing what you're applying it to as if you didn't just try and switch up the focus of your question, and honestly your disingenuous tactics are exhausting. I have no interest in responding further.
Well there we go: if we can observe that chatbots can trivially be prompted to act in a biased way, and if we can observe how the right wing (especially those in power) loves AI, then your AI right wing conspiracy goes from being "unfathomably dense" to actually a plausible scenario, and you just did a bunch of pseudo-intellectual huffing to deflect from that.
Am I correct in understanding that the left is not interested in propaganda, especially through AI, and only the evil right wants to do this? If a person is ready to receive their worldview from AI rather than form it themselves, then this is not a conspiracy, but ordinary human stupidity.
antis lack the braincells to understand this.
for some reason, they just cannot comprehend AI as a technology, rather than the companies or people that happen to represent it currently.
if i somehow magically deleted every AI company and CEO and billionaire on the planet, then AI as a technology would still be there, other people would continue the work, and the situation would be the exact same as before. there would still be massive changes to all professions, the sectors that will get rocked by AI will still get rocked by it, none of it will change. it has nothing to do with any of these factors. just like how the steam engine is not a product of fucking factory owners.
artists, and everyone else, will have to compete with AI as long as AI exists. it has nothing to do with anything else. it's the technology itself that inherently changes the landscape.
That's not the point.
The point is people are rightfully skeptical about a program that pretends to be a person but is okayed and released and tweaked by companies controlled by some of the most sociopathic human beings on earth. That's concerning.
If AI was the way you're describing, if all these people were gone, guess what? I'd like it more.
Who would own the hundreds of millions / billions of dollars worth of GPUs to train these large models?
investors will still exist even if all the people you think are responsible for this disappear. as will AI. even nations have incentives to fund AI. if all the frontier players disappear, the chinese would step in, and if they also somehow disappear by anti-magic™, then smaller players would just get the funding they would have gotten and grow to their size.
you don't understand that this is about technology. you'd think that constant comparisons to the steam engine or the mechanical loom would drive the point home, but nope.
The Chinese aren't frontier players?
they're behind.
but sure, if you want to consider them frontier players, then there are still many other smaller players. mistral being among the more notable ones. doesn't change my argument at all.
unless you also want to magically make all AI researchers disappear...
So your argument is that deleting every AI company and CEO and billionaire won't change anything because there will still be AI companies and CEOs and billionaires?
yes?
again, just think about the steam engine, the mechanical loom, etc. deleting specific factories and people wouldn't do anything to the larger adoption and development of those technologies. because those people or factories are simply not responsible for the wider change, the technology is. the people are just the ones leading the charge. if they weren't, others would.
because it's about technology. do you understand?
while you are right, you're forgetting that there's always influence from these tech CEOs
like look at the whole grok right winger situation. elon can't code an ai for sure, but he has enough of a hold over the company and its employees and can pay them enough so that they do what he asks. while it did fail to make grok actually right wing, you can see the influence, right? you can see how this will be a problem once we get truly steerable ai, eh?
hell, even sama came out a few months ago saying he wanted to make future gpt models infinitely steerable so you can either push it to be super woke or super right wing
https://www.cnbc.com/2025/08/19/sam-altman-on-gpt-6-people-want-memory.html
https://preview.redd.it/rdsfy56ir96g1.png?width=640&format=png&auto=webp&s=b432a4fbb7a4d6bc8a48b93a1e63782a3b7f9057
but yeah i do agree, you can't assign a political belief to a specific stance on ai. there's right wingers who are super anti ai and there's leftists who are super pro ai, it's a spectrum of beliefs that can't be assigned lol
ps you're confusing bezos with zuck, it was meta that created llama lol.
Over their small corner of the tech, sure. But pretending that that's an influence over the broader technology is silly. Yes, Altman sets policy and policy is turned into things like shitty system prompts that over-align the AI to the point of uselessness (see the first rollout of GPT-5).
But that doesn't affect the larger field, and in fact it was one of the reasons that competitors like Claude, DeepSeek and Gemini suddenly gained so much ground.
And I certainly hope that you're not suggesting that any one US CEO has outsized influence over AI models coming out of DeepSeek, Alibaba, etc.
I'm with him there. AI should work with what it's being asked to do, not impose an external political opinion. It's something the Grok team would do well to learn.
overall you're right for sure
> I'm with him there. AI should work with what it's being asked to do, not impose an external political opinion. It's something the Grok team would do well to learn.
the only thing is i disagree here. this just creates more echochambers where people are gonna be circlejerking with their supergenius right wing LLMs... it's already bad enough that circlejerk subs are a thing, the last thing we need is a reaffirming voice in everyone's phone
Oh no! People will make the dolly say the bad thing!
Seriously, are we so immature that we're lashing out at the fact that a tool can do things we don't like? Are we going to do the whole, "digital art is bad because people could create naked pictures," thing too?
i never claimed that? i'm just saying it's bad to create ai echochambers, because after a certain point it just starts agreeing with whatever you say. we're already seeing cases of "ai psychosis" and just what having a thing that sucks your dick at all times does to a person's mind
i'm not saying that the tech deserves to be banned because of that, but at minimum there needs to be a proper grip on this sycophancy problem, don't ya think? would you rather have your LLMs be accurate to reality or agree with your political leaning?
A voice recorder is literally an echo-chamber. Why is that bad?
Wait, so the director didn’t create the product?
That was my question. It certainly seemed out of character for anti-AI folks to make the claim that the person giving general directions is the creator of the work, but if that's the claim you want to make...
So the thing that actually created it should get credit, not the thing that asks for it?
no? it was probably the author of the script? did nike create the shoe, or apple the telephone, or a calculator 2+2?
So the thing that actually created it should get credit, not the thing that asks for it?
The CEO did not create the product alone, no. The CEO is only partly responsible for the product!
most of the people against ai because "capitalism" don't understand ai is an open-source technology.
AI, in that way, has been around since the 90s. There just wasn't enough processing power to make it work
Yes, and even more than the researchers, a combination of training data (where science papers and serious books skew overwhelmingly liberal, or as they say "reality has a liberal bias") and RLHF, which is mostly done by linguists, whose profile tends to be "college-educated women in the humanities".
Here's an interview with the two women who lead the teams that shape ChatGPT's personality:
https://www.youtube.com/watch?v=GXAAzKX6oaQ
They don't look like die-hard right-wingers.
I feel like you are being disingenuous. The CEOs are the ones funnelling resources into this
Hitler was funneling resources into radio. Therefore radio is inherently fascistic.
Swing and a miss...
So any technology that wasn't funded for altruistic reasons is inherently bad?
Any technology created with malicious motives will be used for malicious purposes
GPS was developed by the military to guide missiles. Hope you never used that.
lol
So no argument?
Can’t argue with people who think up is down
Nah you are just here in bad faith
No u
the tech is open sourced? no one is paying for the technology? someone is paying for openai, not ai.
lol
Stop being vague. Funneling money into what? How does that make them authors of the tools?
They have money, they are putting money into ai, which is making it spread much faster and helping serve their own purposes
This is like saying that AT&T put money into the internet, so everything you do with social media is just serving AT&T's interests.
That's just not true.
lol
your logic is so horrible... apple puts money into phones for their own purpose. they do not own the telephone.... what part of the "telephone" does apple own?
They own a massive portion of the phone industry?
and you can, right now, not use their phone? you can make a phone yourself from your own resources? so what part of your phone do they own?
lol
LOL luddites are a weird religion... morals are not this complicated, we know hitler is bad because he did bad things. you can't show how this technology has done bad things...
It’s made the world more soulless simple as
Clankers will be clankers I guess
sure? is that the argument we were having? strange, i would have used completely different words for an argument about souls?
who's a clanker? you're a cyborg already with your phone?
AI CEOs are heavily directing the product on the consumer-facing end, and in the case of Musk and Altman, doing so in ways which endanger the user:
The reason it's a "small warning" is because a larger warning or other guardrail would hurt growth. This is very much a decision made by Altman.
Altman praised Trump on X after the election and then kissed his ass at the White House, so I don't even know what the hair-splitting about him once being a Democrat is supposed to accomplish.
Source?
Normally, warning disclaimers on public-facing websites are designed by multidisciplinary teams of lawyers and UI/UX designers, with board-level oversight.
And it's your contention that that's enough to say that it's their work? I just want to understand the threshold at which you believe someone's direction is sufficient claim to authorship...
No.
If your argument is that products are created by a corporation’s employees as opposed to its leaders, that’s really a distinction without a difference. The ceo represents and directs the corporation and the employees work for the corporation
That's true, but if you want an accurate idea of what a company is actually doing the CEO is not going to be the one telling you, they'll be telling the public what investors want to hear. Which is fine, but not particularly useful for forming grounded arguments or understanding of what is actually happening.
That’s fair, i think you’d agree that if I dislike something Amazon is doing, for example, I wouldn’t be wrong in directing some of my ire at bezos personally
Yep I can agree with that. In the end it's still for sure useful information to help direct opinions etc.
A corporation. Not the technology.
That’s true. Each corp has its own little kingdom in the ai space, but no one owns the technology as a whole. I also don’t think all ai is bad just because some of the big players in the space are bad people.
what part of ai do the leaders in this case own about the technology? just to make sure you understand, we are not talking about which text font they use, we are talking about the technology itself. nike can only control the look and feel of its own shoe, it can't change what a shoe is.