AI is very broad, but it's mostly about using huge datasets, statistics, and machine learning to test out different parameters until something works. It's a broad category and I'm not an expert, but it has found its way into a wide array of fields.
But what I hear about and see the most is its use in business – writing emails and reports – and in internet tech – streamlined data collection, UI, and personalized customer experiences.
As well as in art, music, and video. This stuff can create still graphics pretty easily. It can create music that still has a bit of a robotic sound on the vocals, though not much different from what Auto-Tune sounds like. And film – this is the least developed and most uncanny, yet so many people are determined to turn these uncanny 10-second clips into blockbuster movies someday. Creators may not realize this because they're just trying to make something as good as they can, but most creators intuitively understand that art is about actually listening to or watching a human display their humanity, with everything else made less distracting. They know the focus: people biologically want to experience other people.
So everything, even if the intentions are good, just feels like a scam or propaganda or something. It's a fish-hook. A magic trick. And we'll be averse to it even if we were fooled by it at first, once someone tells us it's not real.
A huge problem I've been seeing is these AI-generated scam video ads, job postings, emails, probably astroturfers, marketplace listings, etc. It's making scamming way more efficient, and social media and YouTube are propping it up. They know these are scams, but they're just charging the scammers more and putting the scams in front of the people who are most likely to fall for them.
It’s almost like the AI industry has prioritized “Can we trick more people and how can we make this accessible to everyone really easily?”
What the AI industry should be prioritizing is genetics, protein folding modeling, the function of the human connectome, oncology, etc. If LLMs can generate text, why can't they tell me why they came up with it? That would be an educational opportunity for language learning and linguistics.
The priorities are totally wrong. They should be focusing on curing diseases and solving problems, not generating sound and videos.
Edit:
I changed my view because, yes, you can't just reallocate resources from one use of AI to another; it's used where it's used, and it's good that someone is working on it at all.
Now CMV back to my original position.
/u/Optimistbott (OP) has awarded 2 delta(s) in this post.
I get the frustration, man – scrolling past the 47th AI-slop ad promising to make you a millionaire with one weird trick makes anyone want to yeet the whole field into the sun.
But here’s the thing: the same stack that spits out fake Drake tracks is also the one that just knocked out a crisper-than-crisp 3D model of the Zika virus shell last month (I saw it on biorxiv, my labmate literally cheered). The money flooding in from those cringe AI pop songs is subsidizing the GPUs that researchers like my old PI now rent for pennies to screen chem libraries that used to cost pharma millions. The talent pipeline works the same way – the dude who built Netflix’s thumbnail generator just pivoted to protein design at DeepMind because the tooling he learned on memes now lets him fold a new kinase in a weekend.
So yeah, the scammy side sucks, but starving the “frivolous” uses would just kneecap the compute and cash that the science side secretly relies on.
Δ
I think this deserves another delta. I had this thought and I wasn’t sure if it was actually true. You have to push these models to do trivial stuff. It’s one thing to push them to do stuff that we know how to do every step of the way, but what’s the best way to troubleshoot bugs for complex stuff that we maybe don’t know the answer to?
Art is that sort of intuitive black box. We know when something is wrong with audio even if we don't know exactly why. Timbre perception is a bizarre feature of the human mind. The cochlea perceives multiple frequencies – essentially sine waves of different amplitudes and durations – and the brain is able to group those frequencies into an object, but also hear it partially as just part of the mix of the music. When you think about the high-end transients, like noise and whatnot, the difference between a d and a t, or a b and an f, is so minute; the difference between an s and a sh varies with context, but we know it intuitively. So to combine pitch, noise, instrumentation, and all the other facets of music, you're preparing these programs to tackle some of the most difficult things, things we can't really assist them in learning. It's a stress test. The problem is people trying to monetize these stress tests and saying they're going to replace the human-made arts. These are deeply cynical people. I think crossing the uncanny valley has diminishing returns, and they're not going to take it so far that an AI wins a Grammy or an Oscar.
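Just to make the "stack of sine waves" point concrete, here's a toy sketch – plain numpy, nothing to do with any real AI audio model – of how the same pitch with a different mix of partials gives a different timbre:

```python
# Toy sketch: a "note" as a sum of harmonically related sine waves.
# Same fundamental pitch, different partial mixes = different timbre.
import numpy as np

sr = 44100                                  # sample rate in Hz
t = np.linspace(0, 1.0, sr, endpoint=False)  # one second of time samples
f0 = 220.0                                  # fundamental frequency (A3)

def tone(partial_amps):
    """Sum harmonics of f0; the amplitude mix is what sets the timbre."""
    wave = sum(a * np.sin(2 * np.pi * f0 * (k + 1) * t)
               for k, a in enumerate(partial_amps))
    return wave / np.max(np.abs(wave))      # normalize to +/-1

mellow = tone([1.0, 0.3, 0.1])              # mostly fundamental: rounder sound
bright = tone([1.0, 0.8, 0.6, 0.5, 0.4])    # strong upper partials: buzzier sound
print(mellow.shape, bright.shape)           # same pitch, same length, different timbre
```

Both tones have the same pitch; it's the upper-partial mix your brain groups into "the same note, but a different instrument."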
We just need to be better about laws in regard to this stuff. The U.S. and the West need to allocate more resources to cracking down on cyber crimes, including scams, everywhere. The U.S. needs to crack down on Zuck, Musk, and Bezos allowing scams on social media and marketplaces. It truly does look like scammers and big tech are in cahoots.
Confirmed: 1 delta awarded to /u/Aggravating-Ant-3077 (1∆).
AI tech is being used a ton for scientific discovery. I use and develop AI methods to discover new drugs and model protein interactions every day. Hell, some of my stipend and research funding comes directly from NVIDIA.
You just don’t hear about it as much because most people don’t find it as interesting as image/music generation.
Bingo. I sell lab instruments, and AI drug discovery companies are a solid chunk of my business. Will it pan out long term? Quite possibly, but my pay is tied to what I sell rather than what my customers do with it.
Yeah, I know a lot of people training AIs to do object and event discovery in astrophysical data sets.
It's also being used to read the carbonised books from the library at Herculaneum, which is very cool.
What's the best place for me to learn more about AI use in drug discovery and protein interactions?
I studied this at uni (biochem) and have gone into coding, using LLMs all the time, but I really hadn't imagined LLMs would be useful for that 3D modeling yet?
So LLMs like ChatGPT aren’t directly used in drug discovery and protein modeling, except to maybe debug some Python code haha.
The closest thing we've used are protein language models (PLMs), which borrow a lot of concepts from LLMs. PLMs are still relatively new, but they're finding their niche in some protein design tasks. I'd take a look at this preprint review that introduces the concepts pretty well.
Wasn't it basically PLMs that the Nobel prize was awarded for last year, or maybe two years ago?
Not exactly. In their simplest terms, PLMs go from protein sequence to structure by interpreting the sequence as a “language” through a transformer architecture.
Models like AlphaFold2, which won the Nobel prize, also process sequence in a similar-ish way, but they also use genetic and coevolutionary information extracted from multiple sequence alignments as an additional input. Both of those inputs are passed through a transformer, but then outputs are fed into a graph network that maps the intermediate information from latent space to Cartesian atomic coordinates.
So the answer is “kinda”. Sorry if my answer was rambly, this is the kind of stuff my thesis is about
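If it helps, here's a tiny, purely illustrative sketch of the "protein sequence as a language" idea – this is not ESM or AlphaFold, just amino acids tokenized like words and run through a small, untrained transformer encoder in PyTorch:

```python
# Toy sketch of treating a protein sequence as a "language":
# tokenize amino acids, embed them, run a transformer encoder over the sequence.
import torch
import torch.nn as nn

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
tok = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"        # arbitrary example sequence
tokens = torch.tensor([[tok[aa] for aa in seq]])    # shape: (batch=1, length)

embed = nn.Embedding(len(AMINO_ACIDS), 64)
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2,
)

with torch.no_grad():
    per_residue = encoder(embed(tokens))            # one 64-dim vector per residue
print(per_residue.shape)                            # (1, sequence length, 64)
```

A real PLM would learn those weights with masked-token prediction over millions of sequences, and a separate structure module would map the per-residue embeddings to 3D coordinates.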
LLMs are a subset of machine learning. You probably know how LLMs are trained: you give them a bunch of examples of language, then they identify the patterns in the language to best estimate how to form a sentence.
The field of machine learning is kind of the same thing: you give a computer a bunch of data, it identifies the patterns, then it can do stuff. A simple example is giving it a big dataset of pictures of cats and dogs, with each picture appropriately labelled as a cat or a dog. The computer then looks at all of the data, does a bunch of complicated math to represent patterns seen throughout the data, and eventually you get a model where, if you present it with a picture of a cat or a dog, you have a pretty good chance of it correctly guessing which it is.
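As a rough sketch of that idea (toy random vectors standing in for real cat/dog pictures, scikit-learn used purely as an example library):

```python
# Minimal "label a bunch of examples, fit a model, predict on new ones" sketch.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Fake features: 200 labelled "images", 64 pixel values each (0 = cat, 1 = dog).
X = rng.normal(size=(200, 64))
y = (X[:, :10].mean(axis=1) > 0).astype(int)   # a hidden pattern the model must find

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("accuracy on unseen pictures:", model.score(X_test, y_test))
```

Swap the fake pixel vectors for real labelled images and a bigger model, and that's the basic loop: fit on labelled examples, then predict on new ones.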
You don't just go to ChatGPT and say "hey find me some new proteins", it doesn't have any logic for that, it's just really good at guessing how to construct a sentence. You instead train and apply a model specific for your needs and goals.
Can AI compensate for the lack of scientific experience? I build a lot of stuff with AI but I hesitate to try scientific experiments due to my lack of domain knowledge.
I kind of agree with OP, because most people don't know how bitcoin works but a lot of them still mine bitcoin. We can't say the same for LLMs yet. If we can build an AI persona that has the domain knowledge, I believe a lot of science enthusiasts with curiosity and common sense, but no technical training, will use AI for a lot of resource-intensive scientific experiments.
Not in my opinion, no. I’ve always joked with my lab mates that it’s always easier to teach a biochemist computer science than to teach a computer scientist biochemistry.
Especially in my field, there's a lot of domain knowledge and biochemical intuition that's incredibly tough to teach, and that LLMs like ChatGPT fail to grasp sufficiently. If you do decide to start tackling scientific problems, I'd recommend starting slow and questioning everything.
Their loss, I'd like to know more about AI-driven scientific discoveries. But then again, I'm not like most people.
It's also a morally positive and intended use for AI, whereas the AI used in media and in looking up basic answers people are too lazy to find themselves is slowly dooming us to an uneducated feudal-capitalist hellscape.
So these use cases are being prioritized?
The people who were making scientific discoveries/doing medical research/etc in the pre-AI era aren't just sitting on their thumbs waiting for someone to tell them to "prioritize" use of AI. OBVIOUSLY they're actively exploring the ways AI can help them do their work better and more efficiently.
This is huge business, and a huge use case for AI. It's just that the low-effort stuff like making shitty AI ads is just much more visible to you. The fact that your algorithm isn't feeding you all the latest scientific research papers that use AI and ML tools is a reflection of your browsing tastes, not the state of scientific research in 2025.
It just means that you are only being exposed to what's within the reach of your convenience.
If you want to find the use cases beyond that, you need to do due diligence and go looking for them instead of just observing what's being fed to you through the media.
What do you mean by “prioritized”? By who?
Why aren't burger chains prioritising making electric vehicles? They're different industries, doing different things. Scientists are doing science and using AI to do it. Marketing people are doing marketing and using AI to do it.
That's it. You're phrasing it like there's a limited pool of AI and somebody is coming along and deciding how much to allocate to each industry. Anyone can apply it at any time, for whatever they want
Bro there's no such thing. Scientists do science, programmers make apps
Why are you under the impression that AI isn't being used to solve those problems already?
I am under the impression that it is, but my view is about priorities.
They're totally independent things. It's not like I care if Mcdonalds is using ai to replace drive thru workers if I'm doing scientific research.
This has been a thing for a while with chain-of-thought models. You can't see the whole computation they're doing to actually give you the answer, but you can see the model's reasoning. On ChatGPT you can click the text that says "Thinking" and it'll show you the different reasoning steps it made while generating your answer.
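You can also just ask for the reasoning in the reply itself. A rough sketch with the OpenAI Python client (the model name here is just a placeholder, and this only gets the model's narrated explanation, not its full internal chain of thought):

```python
# Sketch: ask the model to explain its own answer step by step.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; use whatever model you have access to
    messages=[{
        "role": "user",
        "content": ("Translate 'I will have been waiting' into French, "
                    "and explain step by step why you chose that phrasing."),
    }],
)
print(resp.choices[0].message.content)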
I think your assumptions about AI and science are probably misguided.
You probably think AI in science = new discoveries every year, etc.
Science is often slower than people assume, because AI can say one thing but scientists still have to physically prove it via experiments.
No, AI is being used in surveillance and the ChatGPT, Sora stuff is a cover up
AI is being used in many scientific endeavors, including data analysis, experimental design, science publishing, automation, robotics, and others.
https://fastdatascience.com/ai-in-research/
https://publications.jrc.ec.europa.eu/repository/handle/JRC143482
https://www.sciencedirect.com/science/article/pii/S2666990024000120
Sure, but is this the priority? We have finite resources.
What resources are you talking about? The people working on AI algorithms to help with, for example, protein folding are not the same people as those working on LLMs. If the LLM people were not working on that, there would not be more protein-folding AI people; there would simply be fewer LLM people. It seems that you are suggesting that LLMs, particularly for AI art, are taking people away from AI for scientific endeavors. That just isn't true.
Δ
This is the explanation. There is no lump of labor. Fewer people would just be working on AI stuff.
Confirmed: 1 delta awarded to /u/_BigmacIII (1∆).
Not every problem in scientific research is a question of resources or headcount. The core presumption is off. "AI", as a general rule, can only expound upon the rulesets it's given. You may have heard the old adage that a computer is only as smart as its user; in this case, the user is all of humanity. Insofar as AI models can assist with research, they are.
Art and code are more tangible, have an immediate displacement effect, and are still ultimately only able to iterate on existing concepts. Therefore you will hear about AI more as it moves into those spaces, even as they effectively stagnate. Innovations still have to come organically.
How much have you looked into this question?
From just a quick Google search before I head to bed, a large portion of AI service providers' value is tied directly to the tools and services they provide for research purposes. The problem you seem to have is with what you personally see it being used for and the reports you hear, versus what it's actually being used for, because that isn't as widely publicized.
Current reports show that 61% of all scientific research utilizes AI tools at some point in the study, up from about 45% just a year ago.
This is like saying the internet is all cat videos and TikTok dances. In reality, most internet traffic is invisible data being sent back and forth for purposes most of us don't even think about. Same thing with AI - typical users don't see protein folding, etc. because... why would we?
Because they're entirely different models that do different things. The models you're talking about exist and are developed all the time.
I fail to see how LLMs prevent this. I can see the argument for resource allocation, but enterprise customers want LLMs for all kinds of things, and enterprise customers are the ones who give these companies their billions.
Companies and universities that need specialized models for medical purposes or other ML needs will often just develop their own. Open-sourcing these models allows others to continue their work, usually for free. So it's a separate ecosystem of research that often has little to nothing to do with the goings-on at OpenAI, Google, etc.
It's not wasted because it's happening concurrently.
>AI is very broad, but it's mostly about using huge datasets, statistics, and machine learning to test out different parameters until something works. It's a broad category and I'm not an expert, but it has found its way into a wide array of fields.
I mean, right off the bat OP basically proves they're using "AI" to refer to "all magic-y computer stuff I don't understand", so I think this level of nuance is probably going to sail past
Yes I find most people don't know the difference which is understandable. I mean I don't. I just DO know that there are many kinds and the majority of them don't behave like LLMs.
Hell, even LLMs don't respond like LLMs if used without system prompts and prompt templates. Most OSS models are completion models, or they used to be, and they often need to be modified to behave like instruct models.
What about this is specific to AI? Couldn't you just say all of Hollywood and every restaurant should close down and everyone should stay in eating rice and beans with a textbook and devote their lives to science?
How did you get that from OP’s statement?
They’ve said in the realm of art and creative practices, AI is being used to replicate human work cheaply but in uncanny ways, when we enjoy art because it’s an expression of humanity. Artists are very upset about the use of AI to replicate their work. ‘Hollywood’ and ‘restaurants’ would fall into this category.
Whereas the sciences, such as medicine, are not subjective and AI would help discover patterns that would be difficult for humans to detect.
There’s definitely truth to that and maybe that’s actually a good view to have in general, but I think what’s different about it is that people actually want to see art made by humans. I don’t find ai stuff to be particularly enjoyable on a base level. Like I said, it’s like enjoying a magic trick to me. People like magic, it’s enjoyable, but the goal is to fool.
The AI industry is an industry driven by profit. The industry does not perceive itself to have any obligations to society outside of its shareholders, as with any other business.
Research does not drive immediate profit. This is why we fund it primarily through our government. Profit-seeking enterprises do not wish to spend capital to do research that may or may not pan out.
Your view is predicated upon the notion that the AI industry and other profit-seeking actors have concerns for bettering the world. This is not what the market rewards, so it is not what the industry focuses on.
My understanding is predicated on that notion – that it’s manifesting this way because of profit seeking because social progress isn’t rewarded, but scams are.
My normative view is that it shouldn’t be like that.
I'll humor your argument with some math lore and point out that the AI you hear about and can interact with on a daily basis, that we've spent the most money on, is primarily generative AI. Generative models are great at predicting words or tokens based around features in a language mathematically extracted from the data they were trained on and context from the input prompt, but it really needs to be understood that generative models are generating output; not logically deriving output through sets of rigid and logical scientific steps or rules.
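For a feel of what "generating output from correlations" means, here's a deliberately dumb sketch – a bigram model built from raw counts, which is the same idea LLMs scale up by many orders of magnitude (toy corpus, plain Python):

```python
# Toy next-token predictor: count which word follows which, then sample.
from collections import Counter, defaultdict
import random

corpus = ("the protein folds into a stable structure "
          "the protein binds the receptor "
          "the receptor folds into a pocket").split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1                      # raw correlation: nxt follows prev

def next_token(prev):
    options = counts[prev]
    if not options:                             # dead end: restart from "the"
        options = counts["the"]
    return random.choices(list(options), weights=list(options.values()))[0]

word, out = "the", ["the"]
for _ in range(6):
    word = next_token(word)
    out.append(word)
print(" ".join(out))                            # plausible-sounding, meaning-free
```

It produces plausible-sounding continuations without any model of why one word follows another, which is exactly the correlation-versus-causation point here.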
LLMs, in particular, represent weighted features derived from the correlations of training inputs to expected training outputs, and correlations are notorious for not implying causation. To find causation, we need to derive discrete equations or logical steps that account for all variables and accurately calculate the correct output given an input, which is the antithesis of our current approach to AI using generative models. Instead, the whole reason we train up features in an LLM is to avoid the fact that we don't/can't know all of the variables of the equation, or what the equation is, or what solution we're even trying to reach. Because of this, generative AI is less fit for most scientific tasks than discrete logic, mathematical problem solving, and the standard scientific method.
There are, of course, one-off fields and types of research where guessing or predicting possible combinations of things without exploring or caring why a particular output was generated is a totally valid approach to the discovery portion of the scientific process in those fields. However, that's not because generative AI is the best possible fit - it's simply a good mathematical fit that approximates a formal equation or process we have yet to fully discover, understand, and logically define.
I wholly agree with you that it shouldn't be that way. But that would require a shared sense of communalism over individualism, which American culture does its best to stamp out of us.
Does it make monetary sense?
People are investing hundreds of billions in data centres.
Don't get me wrong, protein folding prediction by DeepMind was, is, and will be super useful and important. But our framework is one of capitalist-driven tendencies. The AI paying out on shitty art and shortcut business hand-waving is actually its true and intended purpose: to make returns on investment. It doesn't matter that we both disagree; the money being funnelled into data centres has to, HAS TO, make a return for investors. That was why it was invested in the first place. The slop focus of AI is part and parcel with its inception and growth. It's inextricably woven together and you can't easily separate it.
Yes that’s definitely my view. If you can explain why that wasn’t my view before, I’ll give you the delta.
You say it's being wasted. I'm merely pointing out that its entire growth is based upon investors expecting a return. Those guys aren't in it for charity or scientific progress, any more than I can compel you to give up 90% of the income you labor for to charity and scientific progress.
If you spend your income on personal gains, is it being wasted because it didn't go to charity or science instead?
You are attributing to AI a role that does not give the same returns the money being poured into it warrants. It's naive to suggest that you have the smoking gun of a better role for AI when it's not your hundreds of billions of dollars involved and on the line.
There's no centralized "AI industry." AI is a technology with broad application across industries and audiences, including science.
AI and machine learning have helped with breakthroughs for years – supercomputers integrating AI to boost simulations in climate, hurricane patterns, and energy studies; virtual clinical trial simulations to advance treatments; computer vision in materials science; and AI modeling to detect defects and analyze materials.
Investments have surged with recent AI breakthroughs (genAI + agentic process automation). The U.S. Department of Energy just announced a $320 million investment for scientific discovery; the US National Science Foundation announced a $100 million investment for AI research in materials, drug development, STEM-AI education; another $2-3+billion for cross department scientific applications. Then you've got the hundreds of billions invested in the private sector for scientific research and applications. You can learn a lot about these advancements with a little research--there's some fascinating work being done with lots of breakthroughs thanks to AI.
AI isn't even good for art or business, why the hell would we apply it to science?
Paid AI services are pretty good at art now. A year ago it was still pretty bad, I agree, but now we're at a point where you can't really distinguish between human and AI art anymore.
For business it's not bad either, you just need to know which tools to use.
Good point
Why not both?
I have access to all the worlds knowledge at my fingertips with the power of the internet, yet I laugh at shitposts and stupid memes. The internet isn't less valuable for learning because of memes. Similarly, generating images with AI doesn't prevent anyone folding proteins.
Because there are finite energy and resource constraints at any given time.
So I should feel bad about laughing at stupid memes on the internet because it uses up energy and bandwidth that could be better spent elsewhere?
If it uses up literally all the energy or bandwidth: yes.
I'm not sure where exactly the switchover point is, but there's no reason to think that it can't be somewhere between what you actually use and what generative AI is collectively using.
Capitalism. AI companies want to maximize their profit.
Yes, that’s my view
I take huge issue with this position of "AI has its uses in research and medicine" because most people have a fundamental misunderstanding of what AI is being used in certain circumstances.
I've done some research while getting my chemistry Masters and had to use what some would consider "AI" to digitally model proteins and molecular structures. The main difference, though, is that what I used was basically a self-checking algorithm that would solve many wavefunctions to the highest accuracy possible, and I could tweak various parameters to achieve greater accuracy and relevance to the data we would gather in lab. These "AI" models were made for a very specific purpose: performing large amounts of math and checking themselves in specific ways engineered by mathematicians, and they could be further tweaked. I did this 4 years ago, and from what I understand these math algorithms have been a well-studied tool for many years; I'd call each one a "small, contained mathematical model".
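The "self-checking" part is really just iterating until the answer stops changing. A toy sketch of that loop (nothing like real quantum chemistry, just the convergence idea):

```python
# Toy self-consistent iteration: feed the output back in until it agrees with the input.
import math

def self_consistent(update, guess, tol=1e-10, max_iter=1000):
    x = guess
    for i in range(max_iter):
        new_x = update(x)
        if abs(new_x - x) < tol:        # converged: output is consistent with input
            return new_x, i
        x = new_x
    raise RuntimeError("did not converge")

# Example: find the fixed point x = cos(x) by repeated substitution.
solution, iterations = self_consistent(math.cos, guess=1.0)
print(f"x = {solution:.8f} after {iterations} iterations")
```

Real solvers converge on wavefunction parameters instead of a single number, but the loop – guess, recompute, check, repeat – is the same shape.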
The type of AI we've seen pushed over the past few years are Large Language Models. These pull data from mind-boggling large sets of data (basically the whole of the accessible internet) to perform general tasks to the highest accuracy possible. The issue with this is that they work through a "blackbox" which self-regulates and adjusts parameters, but engineers seemingly can't understand what is tweaked. This would be like being provided a research paper but when asked for citations and methods the researcher/student would either respond "lmao I made it up to look right" or "I honestly can't tell you, I was blackout drunk the entire time". So in order to peer-review the paper you would need to replicate every single thing described in the experiment and compare one-to-one, at which point you might as well have not used an LLM to perform research in the first place.
There are many articles about the problems academic bodies are facing due to the prevalence and use of LLMs in research, especially in regards to reproducibility and honesty. I hope this can discourage you from thinking that LLMs have a place in science and medicine research.
LLMs have shown some use in cases where the result can be easily checked, but that's not really the point here.
LLMs (and other generative AI) use the same resources as other neural networks and, to a somewhat smaller degree, compete with similar computational loads – and at this point, on account of production capacity and electricity, with all other computing to some degree.
These aren't mutually exclusive; it can be used for art, business, and scientific discovery.
But resources are limited, no?
It does not have the ability to help with science at present. What it is being used for now is what it is able to do.
It can be used for coding, quite well. But mostly that's because there are so many code snippets available that it can just copy and extrapolate from. Coders can also go to code databases and copy and paste code from them, so the LLM isn't really doing anything that couldn't be done without it.
But other than that, it really doesn't work because it doesn't know things. I'm an engineer, and I have had several young engineers and interns turn to LLMs to help with their work. I have yet to see them be helpful, and I have seen them be dangerous several times. I have seen the LLM call out chemical mixtures that would release gases that you absolutely do not want released, chemicals that would react with the materials they are being told to store things in, and chemicals that flat out do not react in the way the LLM claims they would react.
They are not ready for this yet, and I doubt an LLM ever will be. I'm not saying AI never will be, but there is a difference between an LLM and a real AI.
Yes, LLMs just need the tools that we already have built. Just combine them with other tools, and the LLM can be used as a communication interface that makes science quicker.
There are ways to develop machine learning to train these AIs on the physical world and cause and effect.
But my point is that people are scrambling to make these tools work for these random purposes, and they fall short on actually useful scientific purposes, because any ability to do science is just emergent from being able to make a realistic video of Will Smith eating spaghetti.
AI is being used a ton for scientific discovery.
You can't just ask an LLM to invent stuff out of the blue, but what it can do is look at really large datasets and show you patterns and variables that you didn't know were there.
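A minimal sketch of that kind of pattern-finding, using synthetic data and scikit-learn as a stand-in for whatever tooling a real lab would actually use:

```python
# Unsupervised clustering: "show me groups I didn't know were there."
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Pretend measurements: two hidden groups the analyst hasn't labelled.
group_a = rng.normal(loc=0.0, scale=1.0, size=(100, 5))
group_b = rng.normal(loc=3.0, scale=1.0, size=(100, 5))
data = np.vstack([group_a, group_b])

labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(data)
print("samples assigned to each cluster:", np.bincount(labels))
```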
It’s also a big time accelerant in the processes, speeding up the meticulous bookkeeping / data entry / etc.
It has absolutely massive implications and it's being used heavily there.
Honestly it’s actually probably the bulk of AI application.
But it's behind the scenes and an accelerant, rather than the thing that "discovers" – so you probably simply lack visibility into how those fields actually work.
AI is being used in medical research like crazy. AlphaFold is an AI used to help figure out secondary through quaternary structure. There are AI programs used to determine which known proteins have desired active sites, as well as how to synthesize novel proteins if no known proteins meet the need.
It's not.
Lots of that work is being done; the effort now is to educate the damn thing and bring it up to speed. That's why it's doing art, creative writing, etc.
AI is being used for genetics and protein folding models. To me, this sounds like saying "computers should be used to better understand science, not make art".
One thing I wonder is how, when AI is used for science, people verify the sources the AI used. For example, when I myself ask AI to do something simple, like produce reviews of a certain software program I'm interested in using, the AI produces very convincing and deceptive results. It makes up fake reviews and makes them look real, giving people names and titles. Then, when I ask for the source and a link to said review, I find out it doesn't actually exist; and when I discover there aren't actually any reviews like the ones I'm looking for of the software in question, but that the AI simply produced results from a "similar" application, I wonder how this messes with the scientific process. How can we trust peer review when the content peers are reviewing is deceitful and designed to be convincing and tell you what you want? What controls can actually be placed on AI to follow scientific ethics and process properly, when AI is more motivated to simply produce results, even if those results are false?
I guess what I’m arguing is that maybe AI shouldn’t be used for science at all. AI has no conscience or way to be critical of itself, and that is something that is crucial for scientists.
It has been used for decades in science and technology; it's just not as media-friendly, and you aren't aware of it if you're outside these areas. I learned 10-15 years ago about the massive benefits for doctor panels deciding how to treat patients with conditions that are unique or can be treated with cutting-edge technology. One doctor + AI is better than 15 doctors, because the AI can instantly keep up with literally every published document while humans can't. It's also extremely useful for doctors in rural areas where they are essentially the only doctor.
The focus has been on curing diseases and solving problems until recently when these companies figured out how to sell it to the average person. Now the more important, but now rarer by percent use cases, are seeing massive improvements from the recent popularity.
The AI bubble will pop eventually but the massive benefits to curing diseases and solving problems will remain.
I think you are misunderstanding the AI landscape.
The major successes in AI are driven by the huge datasets generated by trawling the whole internet.
This data is suitable for generating pictures and text etc.
Most scientific research needs new experimental data. This still requires a human in the loop. At the extreme example, think of developing a vaccine. You have to do a bunch of experiments in test tubes, then in animals etc and finally on people.
So it's not simply that AI is going after business reporting because it's profitable; it's because this data is easily available on the internet.
Protein folding is one area where you can use simulations to generate new data, and that has proven extremely successful. See AlphaFold.
Don't worry, it absolutely is. One of the big open questions is whether there is a way to solve the Navier-Stokes equations, as well as the quantum gravity / relativity unification problem (both are basically the same issue). Another big one is predicting material properties through simulation, allowing researchers to find materials with exotic properties. The big one they are looking for is a room-temperature superconductor, which would make fusion viable. Last but not least, figuring out why tokamak reactors are so inefficient: there is no way to observe what is happening on the inside, so we have to rely on models, but the models keep getting it wrong.
AI is a very broad field indeed. In the last couple of years LLMs have become the part of AI that's extremely well known to the public, because they do things that humans understand well.
LLMs are GENERATIVE AI, not analytical AI, and their use is for generating creative media, not making discoveries. They may look similar, but they're really not. They work by creating a plausible, predictable response to prompts, not an original response.
The AI programs that are used for groundbreaking science are of little interest to the public, because they're difficult to understand and they don't have a nice chatty interface, but there's a lot more to AI than just generative AI.
Even if AI were to be diverted strictly towards scientific and medical advancement, it still only has that “approximate knowledge of many things” byline. Also, assuming the AI hasn’t already corrupted itself, the wrong parties could have the galaxy brain notion to hack it.
AI is a pool of averages. If every artist is commenting how hard it is to get hands right, chances are the AI is going to mimic that “learned behavior” and fuck the hands up.
Likewise, if you have a mountain of medical research with a racial bias for example, the programming is bound to take those averages as gospel.
Doctors are already outsourcing to AI and it’s causing an absolute mess.
The thing is, a scientist will just make their own on the fly as needed. Niche ML programs are better than LLMs at real-world tasks and far easier to make, which means there's no money in it and no AGI hype to boost the stock price, so no CEO will talk about it. DeepMind solved protein folding in a single pass with a brilliant purpose-built machine learning algorithm and made $0 out of it. The reason you don't hear about it in the media is that it's too effective and easy to be very profitable, and they'd rather people forgot about it, actually.
I have to agree. They should be putting laws and restrictions on some of this ridiculous stuff and keeping AI technology for genuine advancements. I can't wait for this AI fad to be over. Companies don't even know what to do with it and are just jumping in to stay trendy, and next thing you know there's a robot taking your order at Wendy's for literally no reason other than it's a newfangled thing – since they have perfectly good employees to take your order, or those giant touch screens for a seamless no-contact experience.
Easy. According to Moravec's paradox, the kind of thinking used in the creation of art is the easiest for a computer to mimic, as it requires the least processing, with business tasks coming second. (This is why, stereotypically speaking, the two easiest kinds of degrees to get are art degrees, followed by an MBA.) It should therefore be simply reasonable for the tech to be ready to deploy to art and business first, then to science.
But it's been used like crazy in those fields already!
No one is missing out on what you're saying. Maybe what you're saying means it should be publicized more to the broader public? The news media doesn't think the public is intelligent enough to care about these stories, so we get all the other stuff that's basically ragebait. But it's happening, biotech isn't missing out on AI by a longshot.
These things are happening; they've used AI to read burned scrolls from Pompeii with a laser scanner. They're using it for pattern recognition in scanning images for signs of cancer.
The problem is they're also using it for a lot of crappy stuff like AI art and replacing human endeavor because that's just another way to maximize profits in our current capitalist hellscape.
You're smoking something if you think we are not using AI to help discover breakthroughs in technology research. Of course "they" are doing it. And the idea that the people using AI for scams could pivot and direct their energy and efforts to saving humanity is kind of silly. They probably don't care and are mainly after immediate $$$. That is the scammer motivation.
That is exactly what AI /used to be about/.
It took in massive data sets and churned out patterns that humans could not find themselves.
The criminal activity started when people asked AI to take the data and predict things with it. Generative AI.
The first one distilled answers.
The second one hallucinates answers.
AI is not a mature technology.
Basic research into medicine, disease, and health is already difficult without having to use an inconsistent tool at this moment.
Imagine trying to unravel the complexity of gene expression and epigenetic factors while monitoring for potential hallucinations from AI.
That's like trying to land a plane in heavy fog, but your altimeter and GPS systems might be making shit up.
AI can be helpful for certain more tedious tasks like data entry and some pattern recognition. But it's not something that needs to be pushed into every field and every industry.
In practice, AI is already being used the same way other imperfect tools are used: as an assistive layer. Researchers already work with noisy data, flawed models, and instruments that require validation. AI is not unique in that regard, it just makes its uncertainty more visible.
The plane analogy actually cuts the other way. We already land planes in fog using instruments that are known to have error margins. The solution is cross checking, redundancy, and human oversight.
I understand. Nothing we use in research is 100% fool proof.
However, some of those tools we have been using for decades. We know what to watch for, we know how to spot mistakes, biases...etc.
AI is new. Even developers don't understand how their language models work sometimes. We are also feeding it imperfect and biased data, leading to things like facial recognition struggling with dark-skinned individuals because the coders are mostly white and tested their code on mostly light-skinned individuals.
I'm not saying AI needs to be banned. But we need to be extremely careful. How much error do you think the aviation industry tolerates from plane manufacturers? If your average altimeter and GPS systems are hallucinating as often as ChatGPT, how many plane crashes/close calls do you think we might see?
During the tech boom in 1990-2000s, do you think that all the new equipment, sound technology, recording equipment, etc. should have been redirected to scientific study? It seems weird to artificially say what new technologies should be used for, and utterly unenforceable if regulated.
This is so dumb. AI is being used massively for scientific discovery. But scientific discovery doesn't reach the average person. AI in creativity and business (i.e. productivity) does reach the average person much more, which is why people know about it now.
AI is very, very bad at making deductive leaps, or of lateral thinking. These are human traits, and thus far we've seen no reason whatsoever that they're in any kind of danger from AI.
The two things are independent of each other. AI is actively being used for the exact things you suggest, and this isn't taking bandwidth away from generating text and images.
AI *is* being used in many areas of science. Generative AI for text and images has dominated the popular discourse, but that doesn't mean those are the only uses.
It is being used for scientific discovery. You just don't see it plastered all over your social media feed because your average person isn't using it for that.
You have no idea how ai is being used all over the world. You think it's being used just to make pictures because that's what you see on social media.
It is being used for that. It's just also being used for creative pursuits, which is fucking stupid because AI is shit at that.
It is, and has been since long before it was something the average person even knew existed
AI went where data was easy to steal, it has little to do with utility based objectives. High utility objectives will require insightful application of AI tech, commoditized.
Generally pro AI. Agreed 100%. Blame the corporations in this case - not the tool.
There's more profit/evil to be had for the tech moguls in running GoebbelsGPT.
It's not exclusive to any of those nor is it a zero-sum game
It's being used for both...
Then lead the charge
dude… they are..