Is this...surprising?
lol, every couple of months I come to the same conclusion: Meta/Zuckerberg is still the most evil/deranged of them all. The fact that they are consistently at the bottom of the barrel on security and privacy should have been a telltale sign that they don't care about safety.
They don’t. You’re not their client. You are the product that is sold to their clients, advertisers.
Zuckerberg is definitely deranged. No consideration for anyone other than himself or how his systems affect people. A cutthroat capitalist who loves to steal. Uncreative, but wants to be seen as a cool tech innovator. Steals everyone’s privacy and then builds himself a huge, private compound in Hawaii. Just odd.
Thiel, I think, is worse.
At least they do open source.
They only care about the value of their property... It costs money to implement safety features, so why would they do that?
Yeah. Not at all. Profit above all. Whatever has to burn so that Zuckerberg can add a new wing to his Hawaiian bunker is A-OK.
Not really. The companies racing the fastest are usually the ones cutting the most corners. Safety slows you down, and none of these labs want to lose the arms race to someone else.
"Perhaps most glaring was the “existential safety” line, where companies scored Ds and Fs across the board. While many of these companies are explicitly chasing superintelligence, they lack a plan for safely managing it, according to Max Tegmark, MIT professor and president of the Future of Life Institute."
Yeah, what do we expect? No surprise here. I wish there were a grade lower than an F that we could assign.
What are the criteria for assessing "existential safety"?
You can read one of the Anthropic system cards for an example of a company that’s trying. Not that they’d get an A or anything, but they’re certainly doing better in this dimension than xAI or Meta.
You can tell these people have a supreme disdain for art. So many films and books warning us not to build the torment nexus and their only thought is to build the torment nexus.
Feels like they didn’t even try. We need something worse than an F for this.
Why do we even have a Future of Life Institute? The billion-dollar companies run the world and don’t give a F.
I mean I appreciate the effort, but I’m not convinced anyone with the ability to drive real change cares at all if it’s not impacting the bottom line
In an ideal world, the answer is to legislate, and these types of institutions would inform legislators.
I’m not jaded you are
And that's why we give them an F, because they sure as hell don't.
Unprofitable departments are not sustainable
In other news, the sun will rise in the East and set in the West tomorrow
But trickle-down economics told me billionaires would fill in the gaps :(
Did you know that the foundation behind the safety index, the Future of Life Institute, was funded by Elon Musk?
"AI bad, but Elon Musk big bad" 😂
Funny seeing such distinct opinions here, because this is exactly why many startups prefer DeepSeek or similar Chinese open-source LLMs.
DeepSeek's low FLI score is an affirmation of its efficiency and openness. It opts out of the "safety theater" to deliver customizable, cost-effective AI models that can be self-hosted without vendor lock-in. You have more control and power.
Which is why we entrepreneurs prefer DeepSeek and similar models. No API quotas, no data hoarding, no geopolitical strings like spying from D.C. or Beijing. Run it on your own hardware, tweak it freely, and build on it.
So the low safety grade is completely by design. All this fear-mongering, and I have yet to see DeepSeek being used to “take over the government” or “help build bombs”.
80% of US AI startups rely on Chinese open-source models for survival. Investors from Andreessen Horowitz are shocked. The top 16 on the global open-source list are all occupied by Chinese entries.
DeepSeek has also been consistently better at giving truthful responses. Gemini has improved, OpenAI continues to tell me what it thinks I want to hear.
Even on obscure questions, like details of winterizing my home that I called a professional to verify, DeepSeek was the only LLM to answer correctly.
In "thinking" mode, you can also see the way it's trained to tune answers. I'd say by comparison, they're doing pretty well.
Which made-up sci fi scenario are they using to give a safety grade? Sounds like total bullshit to me
If these people are truly chasing AGI and superintelligence, there’s a very real possibility of the economy being thrown to shit if 256-bit encryption gets solved. Don’t get me wrong though, the company doing this “safety rating” sucks; they’re funded by Elon Musk, who also has skin in the game through xAI.
The one where they mass promote AI generated propaganda videos to manipulate elections for dictators.
Your freedom is in serious and legitimate danger.
It's not science fiction, they've done it before, and now they have technology that is insanely powerful by comparison.
Companies like Google will do the same thing they did during the massive tidal wave of Russian propaganda all over their platforms: Nothing. They will just allow people to be mass manipulated like they did last time.
Move fast and break stuff…like human existence.
Is there really a choice? You have societies that are separate and in direct competition with one another. If they do not all race to the end to win, they cede control to the other society with absolute certainty. So long as the world is divided among separate governments working towards their own unique interests, this equation will not change, and that fact of our existence isn’t changing anytime soon. You can’t just choose to do it safely if it means your enemy is going to do it unsafely, get there before you, and entirely dominate you with their newfound abilities. Even if the choice to do it unsafely has risks, they are not nearly as bad as the risks of the other guy getting there first… who is going to be doing it unsafely no matter what you do, and you will ultimately feel the brunt of the worst-case scenario whether or not you choose safety.
This would be a way more convincing argument if these Silicon Valley dipshits weren’t completely ideologically captured by cud-chewing psychopathic idiots like Curtis Yarvin and actively working to destroy the country and society for the sake of having a few more balls in their playplaces. The assholes that are saying they’re in the race on our side have never acted like they’re on our side unless they need to take our money. They’re not moving fast and breaking things in any way that will benefit us and it’s magical thinking to believe this will change anytime soon.
Is it in national security interests to cede AI research to hostile foreign forces? No. Is it in national security interests to cede AI research to hostile domestic forces? Also no. Surely there is some position between these two poles that is better than what we have now.
If they want to chase superintelligences, they need to treat the supercomputers they run them on like nuclear reactors and have emergency shutoffs, because it only takes one that’s powerful enough to, say, break the 256-bit encryption that most banks and cryptocurrencies rely on, which could potentially wipe out economies.
I mean, electrical breakers exist.
Or just cut the fiber line.
I know, but giving it easy access and a shutoff is what I mean by treating it like a reactor. I don’t work on servers, but I do work on fire pumps for sprinkler systems in buildings, and believe me, those electrical breakers and fiber lines are not as easy to find and shut off as you may think. I’ve had to deal with both to get alarms shut off, and it’s a hassle.
That aspect is more impacted by improvements in quantum computing. Thankfully there are cryptography experts working to find better encryption methods as well, so it's more of a race rather than a doomsday countdown timer.
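For context on the "break 256-bit encryption" worry above: brute-forcing a 256-bit symmetric key is astronomically out of reach even for a quantum computer running Grover's algorithm (the realistic quantum threat is Shor's algorithm against public-key schemes like RSA and elliptic-curve crypto, not symmetric ciphers like AES-256). A rough back-of-envelope sketch, assuming a hypothetical machine testing 10^18 keys per second:

```python
# Back-of-envelope arithmetic only; the guesses-per-second figure is an
# assumed round number for an exascale attacker, not a measurement.

keyspace = 2**256                      # possible AES-256 keys
guesses_per_sec = 10**18               # assumed exascale brute-forcer
seconds_per_year = 60 * 60 * 24 * 365

years_classical = keyspace / (guesses_per_sec * seconds_per_year)

# Grover's algorithm gives only a quadratic speedup: ~2^128 operations.
grover_ops = 2**128
years_grover = grover_ops / (guesses_per_sec * seconds_per_year)

print(f"classical brute force: ~{years_classical:.2e} years")
print(f"Grover-assisted:       ~{years_grover:.2e} years")
```

Both numbers dwarf the age of the universe, which is why the cryptography "race" mentioned above is really about replacing RSA/ECC with post-quantum schemes, not about symmetric keys.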
And all that will happen in the blink of an eye once the last byte is processed and executed.
It’s jarring that companies and executives linked to enabling and profiting from wars and ethnic cleansing are bad at existential safety?
Y’know, this is a perfect case of “Your scientists were so preoccupied with whether or not they could, they didn't stop to think if they should.”
Like they really care.
This makes sense. If they are in an existential crisis, they will burn down the world to come out ahead. This is capitalism and we celebrate it.
Similarly, if you are hungry or need healthcare you can't afford, you should get a pass on any crime you commit (aka burn down the world) in pursuit of your needs.
It doesn’t mean disaster is around the corner, but it does mean we’re relying a lot on hope.
surprisedpikachu.png