I’m trying to think seriously about the long-term risks of advanced AI, not in a doomsday sci-fi way, but in practical terms: misinformation, labor disruption, environmental impact, and concentration of power. I’m especially interested in your perspectives on this, because many of the proposed solutions lean heavily on regulation, which raises legitimate concerns about government overreach, unintended consequences, and loss of freedom. What needs to be done in this moment to prevent really big societal issues?
In the late 90s the government got really tired of arbitrating expensive lawsuits and paying medical costs related to tobacco use. So they sat down with the tobacco companies and created the Master Settlement Agreement, or Tobacco MSA. The MSA basically gave tobacco companies immunity from further lawsuits in exchange for 100 billion dollars. Big tobacco had to pony up the cash, which sits in a trust fund that pays out prior settlements and funds efforts to reduce tobacco use.
It’s about time for an MSA with big tech. Have them commit to a dollar figure over a period of time that helps reduce the economic, environmental and social impact their products create. Because at some point the government is going to have to step in and pick up the tab anyway, and I don’t want my tax dollars doing that. I want them paying because I didn’t make these algorithms and LLMs, just as I never grew tobacco. Give these companies some form of very limited immunity, but have them pony up a few trillion (or more) dollars over a 5-10 year period.
Interesting idea. Thanks!
I had never heard of this, so I looked it up. I’d say there is a bit more to the MSA than just a one-time payment: https://www.naag.org/our-work/naag-center-for-tobacco-and-public-health/the-master-settlement-agreement/
I think the payments in perpetuity are much smaller than the 100 billion that they originally had to kick in.
By “more to” I was getting at the non-financial controls it put on the industry. I could have been clearer.
How do you do that when no AI company is profitable? Tobacco at least made money; AI is just a money pit. How do you say "make a $100B fund" to a group of companies that are already $500B in deficit?
They’ll figure out how to commercialize it. Think about how well ChatGPT gets to know you.
Any rough back-of-the-napkin math I did basically tells me that shit needs to be so goddamned over-monetized I legitimately don't think it's possible, unless someone magics up another 50 billion people.
AI isn't making actual, non-imaginary money any time soon. So with what money is it going to also mitigate all the damage and infrastructure load it's bringing?
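To make that napkin math concrete, here's a minimal sketch. The ~$500B deficit figure comes from upthread; the fund size, payout window, and user count are purely illustrative guesses, not real figures:

```python
# Napkin math: revenue needed to cover the claimed ~$500B industry
# deficit plus a hypothetical "big tech MSA" fund. Every number here
# is an illustrative guess, not a real figure.

deficit = 500e9   # claimed industry deficit (figure from upthread)
msa_fund = 1e12   # hypothetical MSA-style commitment ("a few trillion")
years = 10        # payout window from the proposal (5-10 years)
users = 1e9       # generous guess at paying-capable users worldwide

needed_per_year = (deficit + msa_fund) / years   # $150B/year
per_user_per_year = needed_per_year / users      # $150 per user per year

print(f"Required: ${needed_per_year / 1e9:.0f}B/year in surplus,")
print(f"i.e. ${per_user_per_year:.0f} of pure margin per user per year,")
print("before covering any actual operating costs.")
```

Even with generous assumptions, that's pure margin on top of operations that currently run at a loss, which is the point: the gap doesn't close without an enormous jump in monetization.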
There will be a period of disruption as society adjusts, just like with any other big technological advancement. We've been here dozens of times before; it's not world-ending. Regulating a new, economy-changing technology before its usefulness and scope are fully realized or understood is just ceding efficiency and utility to our geopolitical opponents.
Could you imagine how differently the world would be ordered today if, in the mid '80s, the United States had heavily regulated how much computers could be used?
For all the worst hypotheticals people point to AI being used for, I just respond that we already have a robust legal landscape to address those concerns; the fact that AI, as opposed to some other tool, is used to take those actions is irrelevant to the legal environment.
Is the Trump administration’s broad move to ban AI regulation in the states the right move?
Not OP, but yes, it was the right thing to do.
AI is not just LLMs. If there is another world war, it will likely be fought in cyberspace, or at least a huge chunk of it will. AI will be critical to both national defense and offense. Imagine trying to manage a military strategy while trying to comply with 51 different sets of laws.
Then there is the impact on the sciences: advanced metallurgy, propulsion, space travel, energy production, medicine. The list is endless.
I liken it to the internet. A state can limit what its citizens can use it for, sort of, but it has no say over the overall capabilities.
Imagine if there were 51 different sets of laws that controlled the development of the internet. AI development would grind to a halt, but only for the US.
This is going to be a long-term problem, just like with every other form of communication. However, legislating "misinformation" requires a legal source of truth, and I certainly don't want the government deciding what's allowed to be true. Would you want Trump deciding what misinformation is? Or 50 different governors deciding for each state?
It's not just news, btw. I work in healthcare IT, and we've learned the hard way that it's hard for AI to unlearn misinformation. It has been trained on 10,000 peer-reviewed, cited journal articles that were later found to be fraudulent and woefully incorrect.
Studies on AI have found that, depending on the source, models weight the value of a human being based on skin color and nationality, with most showing white American males as having the least value. This led to Google's AI making every historical figure black: every pope, all the Vikings, Irish ancestors...
AI and its applications are too new to trust the government to regulate them. We may need federal guardrails to prevent abuses like the deepfakes and child abuse we have seen, but overall we should let the market work.
Like 99% of the "risks from AI" are just people bitching that not everyone has the exact same opinions as them.