By 'We' I just mean the general Reddit stance.
Take a look at this. This isn't even the first time India has done something like this. I remember when they started teaching computer programming to 8-year-olds in the 90s, when they could see it was on the rise in the job market. I worked QA in my 20s, and I recall an Indian woman as the most proficient developer I ever worked with. There were hardly any bugs in her code, and it was very easy to work with.
The general stance I see on AI is one of anger and fear, a rejection of it and an attempt to quash it and that which is produced by it. It's reflective of the way people felt about Photoshop and Autotune when they were new. But that tech is still here. All of your favorite musicians use Autotune in the studio and someday they'll all use AI, because just like in those cases, I don't think we're actually going to succeed in putting it back into Pandora's Box, nor should we be trying to.
All the time we spend trying to do so is time India spends turning out teenagers who already know how to build entire applications with it and whatnot.
/u/AccomplishedAir9550 (OP) has awarded 2 delta(s) in this post.
The difference between Photoshop, Autotune and generative AI is that Photoshop didn't make new images from photographers' portfolios and Autotune didn't generate new audio files from artists' discographies. A serious concern raised by the presence and use of generative AI is that artists have spent, collectively, millions of man-hours honing and exploring their crafts, all for their work to be copied for commercial purposes without permission to fuel a machine designed to replace them.
Setting aside the issues of intellectual property rights and/or theft, it's kind of a symbol of technology evolving in reverse; the dream of technology is for it to do the drudgery, the monotonous, menial and dangerous tasks that need doing so that humanity can be elevated to creative, scientific, intellectual, explorative and leisure pursuits. But what's happening is that journalism (including, very worryingly, scientific journalism), music, art, animation, coding, writing and the like are the jobs that are on the automation chopping block while real live humans are cleaning out sewer pipes, making deliveries, performing maintenance on wind turbines and fighting fires. It's exactly ass backwards.
You could call that idealism. You could level the argument that my position might be all well and good as a philosophical opposition to the widespread adoption of AI, but that it is by no means practical; that, no matter how much it may suck, profit-seeking competitive environments will still make its adoption a virtual guarantee; people aren't going to leave a cheap alternative on the table out of some sort of solidarity with the spirit of man's elevation. You could confront me with the seeming cold hard fact that Pandora's Box, as you put it, has been opened. The toothpaste will not go back in the tube. But I'm not so sure of that, either.
So far as I can tell, AI has been somewhat overhyped. An inordinate amount of the liquidity entering the industry is investment, and what's swelling share prices is speculation. It's entirely possible, and postulated by many, that AI is in fact far less profitable than we've been led to believe; that its ravenous consumption of resources and need for immense infrastructure could, in totality, even outweigh the cost of the human labour it seeks to make obsolete. That it's a bubble. And that when it bursts, it will be relegated to being an expensive plaything, because the price per prompt (at the absolute minimum it can be while still letting the company running the AI turn a profit) will outstrip the cost to commission a human to do the same thing. If this is accurate, which I am increasingly convinced of, then the more stock is put in what will turn out to be a boondoggle, the worse we will all be for it.
It looks like legislative action is being taken about this, though it does feel like a case of 'ask for forgiveness rather than permission'.
I feel like if this were the case, we'd see the same ire, just from those industries instead. And it kind of does make sense for the first things AI is capable of doing to be digital.
Do you have a source for this?
I'm not really sure what this means. The action being taken is action against the AI industry performed by people who are in opposition to how it's running with regard to plagiarism. It's not an internal audit from AI proponents and apologists that may keep them in check, it's active opposition, coming from people with the "attitude" you're decrying. Is your position that their attitude is right?
If technology were replacing firemen, bomb defusal crews and sewage workers, we would not be seeing the same ire, that being the ire that humans are still doing the dangerous drudgery while technology replaces the higher pursuits. We may see an entirely different ire born of those already in those jobs, that being that they are now out of work and are forced to progress to get new work. Not an ideal situation, and one that should be mitigated by support networks, subsidised retraining, and ideally a phasing out of the necessity for a fully saturated workforce altogether (one can dream), but it's still leaps and bounds better than people being forced to regress to find work, getting sacked from their college-educated-to-so-much-as-apply jobs to go into dangerous, menial drudgery.

I didn't say forwards was painless; I was saying that backwards is so much worse.

And yes, of course, AI would be mostly digital. We should have invested in robotics. I'm not saying "AI should have been made to do drudgery", I'm saying tech should be made to do drudgery, by, among other things, taking the money going into AI and putting it into robotics and junk.
If you just mean people with more understanding and formal qualifications than myself echoing the same prediction, simply type "AI bubble" and "AI industry round-tripping"/"circular deals" into Google and you'll see plenty of economists worried about the same thing, concerned there is a huge "overinvestment in infrastructure" and "overvaluing" of the industry. You'll see the amount of revenue that companies like OpenAI will have to make to fulfil their obligations and pay their debts, and how much higher that is than is plausible. But if you want the cold hard numbers with no element of prediction, if you want the autopsy, we gotta wait till the patient's dead, which is to say, until the audits, reports and investigations are completed after the fact.
It means the people using the artists' music and photos and whatnot knew they were going to get shit for it eventually, but chose to deal with that when it happened rather than ask for permission to begin with.
Your third paragraph is the closest thing to making me think AI might actually be stopped but I guess we'll have to wait to see how it shakes out.
Yeah, no, I get that. But do you think their getting shit for it is a good thing? Because the shit they'll get (if they get it at all) is being prepared, not by themselves, but by people who oppose AI. So what do you think of their attitude? Without them having that attitude, no one would be catching any shit for it.
Also, if they caught the right amount of shit for it, i.e. paying royalties and/or punitive damages to every artist whose work had been appropriated, then even with all the money being pumped in, poof, there goes the entire industry. It would be a ruinous sum, because all of what AI is doing is plagiarism-based. If I were certain that they would be forced to make good, I would no longer suspect a bubble but be damn certain of it.
AI is a threat to the average layman’s way of life; people see the job market collapsing with no recourse available to them.
People don’t see pure untapped technological advancement; they see an era of businesses replacing their livelihoods with AI, forcing them into destitution.
Too much of the world lives on jobs that AI puts at risk, and the everyman doesn’t have the support necessary to wait for the economy to shake out and see what new jobs are made to replace the ones AI takes, if an equal number are even created.
Plus it’s awful for the environment, which I feel like everyone has just accepted is going to collapse in 30 years anyway.
People seem to have given up on trying to do anything meaningful for the environment and are just speed running to end times. It’s depressing as hell.
I’m in Michigan. There have got to be at least fifteen different data center proposals being fought right now. Those assholes want to use this state’s greatest resource: fresh water. They’re going to pollute it, wreck our electricity grid, then fuck off and leave us to pick up the pieces.
This again feels worth a !delta in explaining the reasoning for the fear, different from the last one, but I'm still left with the impression that despite people's fears, it will continue to progress and evolve, not be stifled.
Confirmed: 1 delta awarded to /u/Supersnow845 (1∆).
It seems to me like your only consideration here is AI being a powerful technology.
Meanwhile, you're ignoring the ethical and environmental concerns entirely, when these are the main reasons people are wary of AI.
Presenting it as purely a rejection of new technologies is either dishonest or mistaken.
I was inspired to post this by this thread. I don't see anything about the environment. It could be construed as an ethical argument, but it seems to me that OP simply wants to be able to avoid AI content. I've seen that attitude a lot, though I do also remember one user giving me shit for copy-pasting the top Google result from Gemini because I'd just wasted 'three bottles of water' or something like that, so that happens too.
Why do you Google things for people? That is indeed a waste of resources. Either they've already searched before asking, in which case your answer doesn't help, or they haven't, in which case they don't deserve an answer.
I wasn't.
https://www.forbes.com/sites/eliamdur/2024/01/24/6-critical--and-urgent--ethics-issues-with-ai/
https://news.mit.edu/2025/explained-generative-ai-environmental-impact-0117
!delta for this and for both of your articles in general. I learned a lot about why people are afraid of it, even if I still think that despite people's fears, it's more likely to keep on trucking into the future than be stopped.
Confirmed: 1 delta awarded to /u/Cydrius (6∆).
Such as?
Why do people have those beliefs? Why do you disagree?
You haven't really explained your view.
It might be as simple as a general fear of the unknown which I don't tend to experience. I'm more of a dive in head first and figure it out as I go type of person.
But this is you asking me why other people think what they do, so forgive me for not being able to provide something more substantial than a guess.
You can't possibly say people have the wrong attitude if you don't know what their beliefs are.
I know what they think because they're saying it. You asked me why they're thinking it.
I'm sorry, but the article you linked smells like BS. All of the achievements are things that could have easily been 90% done by her parents while she just adds some stuff on top; then they slap her name on it and it gets loads of publicity because it was "made by a child".
The biggest red flag imo is that after reading the article I still have no idea what kind of prodigy she actually is. Usually child prodigies are good at one very specific thing like chess or math or languages. With this girl it seems like she likes writing maybe? But then it seems weird that she would launch a platform that teaches kids via video and not through text, and the lessons are about AI and prompt engineering instead of writing.
This whole thing really just smells like a PR stunt
And it's also possible that they weren't and you're shitting on an 11yo for no reason. I'm not sure what your point in speculating about it is.
With all of her Guinness records being related to writing, I'd say so.
From the article:
Guinness verifying it isn't good enough for you?
Because I've never heard of a child prodigy where it wasn't undoubtedly clear that they're way ahead of anything their parents could do and their achievements weren't things you can pay your way into.
Then why is her academy about AI and not writing? And why is it all video-based instead of, you know, her writing?
You mean the company that primarily makes its money from businesses paying it to find them a world record for publicity purposes?
It feels like you're trying way too hard to make a conspiracy out of this. Even if everything you're blindly and wildly guessing turned out to be correct, what does it have to do with the thread?
It's a great representation of the AI industry: lots of hype and fancy numbers, but as soon as you question any of the claims, things get a lot less impressive very quickly. People are skeptical of AI because 90% of articles written about it reek of PR stunts designed to get you to buy their product or pump stock prices.
People are just using AI for the wrong things. I mean, what's right? But AI is incredibly useful for intensive tasks. I can make a constructed language (conlang) with Google Gemini if I'm willing to interact with it, even knowing only a little bit or next to nothing. The way people talk down on AI, it sounds like they just want to generate content from a single prompt, or that's been their experience with it.
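To give a concrete sense of what "interacting with it" looks like, here's a minimal sketch, assuming the google-generativeai Python package; the model name, API key placeholder, and prompts are only illustrative, not a recommended workflow.

```python
# Minimal sketch: iterating with Gemini on a conlang's phonology.
# Assumes the google-generativeai package; prompts are illustrative only.
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder, not a real key
model = genai.GenerativeModel("gemini-1.5-flash")

# Start a chat session so follow-up messages keep the earlier context.
chat = model.start_chat()

# First pass: ask for a rough phoneme inventory and syllable structure.
draft = chat.send_message(
    "Propose a phoneme inventory and syllable structure for a conlang "
    "spoken by a seafaring culture. Keep it to roughly 20 consonants."
)
print(draft.text)

# The 'interacting with it' part: push back on the first draft.
revision = chat.send_message(
    "Drop the uvular consonants, add a vowel length distinction, and "
    "give ten example words that follow the syllable rules."
)
print(revision.text)
```

The useful part isn't the first response; it's the second message, where you push back on the draft instead of taking whatever it spits out.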
People for the most part supported previous tech developments because they benefited most workers in the end. In many cases, one type of (often manual) labor was being replaced by a new type of labor, which still required those same humans to do the work. Their jobs just increased in responsibility and skill level. While some needed to retrain, their skills were still needed.
With the AI revolution, the number of skills that are uniquely human is shrinking. There is a risk that not many new jobs are going to be created to replace lost jobs, since automation is now capable of taking over 1) tasks that require fine motor skills and 2) tasks that require complex cognitive skills. Some jobs will remain, but there will likely be fewer jobs in total, and we're not making much headway on alternative income arrangements, like universal basic income.
I have no problem with AI as a tool as long as it is ethically trained and has some regulations to protect society from nefarious uses and fraud.
I think it’s also healthy to approach any new industry with some reservation. The investment in AI through money, energy and political capital is an effort to monetize its use…not out of charity. It could pop like the dot-com bubble and we should be careful about how much we rely on an unproven technology.
Yes, in some cases it’s like Photoshop, but in other ways it isn’t. If Photoshop is a musket, AI is like a robot with a machine gun.
I appreciate your take. Nonetheless, in my opinion, AI was developed and pushed out at a dangerously fast rate.
This is more of a philosophical take but as people we are meant to evolve, and advancements are meant to evolve with us.
Unfortunately there are a lot of greedy people in this world who wanted AI pushed out at light speed, solely as a means of smash-and-grab in the job market.
It's sad, not exciting. People are not excited. I don't think anyone in the working class wanted this.
I think this is a point that gets omitted too much. I like AI. I see it as tech that can change the future. But the way in which it is used and pushed is contrary to that. Instead of another great equalizer, it becomes a profit-generating slop machine that is ham-fisted into anything without care for whether it is beneficial or detrimental. LLMs get used in ways they should not be used because it sounds good and brings profit. Generative AI cuts the jobs of the very people who are crucial for its existence.
AI will enable awesome things in the future. But first it will devastate everything, due to the greed and shortsightedness of DotCom 2.0: Sloppy Boogaloo.
Honestly this is spot on. While we're over here having a moral panic about whether AI art is "real art" or whatever, other countries are already teaching kids to actually use these tools productively.
It's giving me major "the internet is just a fad" vibes from the 90s. Like yeah, AI has problems, but so did every other major tech when it first showed up. We're gonna get left behind if we keep acting like we can just ignore it away.