Looking at the discussion, there are some interesting points being made.
These people are much more qualified and smarter than me, but I do feel the need to point out that some other people did a much better job of defending her original thesis: AI cannot generate new knowledge.
It does boil down to semantics after a fashion, but I think that - as someone else pointed out - a key difference between human knowledge and LLMs is that humans observe things with our senses and then interpret and organize them as thoughts. Right now LLMs have to have data fed into them, and to be sure that the data is real, a human has to be involved at this point.
From that perspective I would humbly suggest humans cannot be excluded from the creation of new knowledge in the current ecosystem.
You're mixing up AI with LLMs, which is their intent.
AIs can definitely generate new knowledge and do it every day in many scientific fields. They do this by organising data and turning that data into information that has meaning. Just like humans do naturally.
"ai" us properly speaking a field. There is no technical demarcation between "LLMs" and "ai". LLMs are a subject of study in the field of Ai. There is no such thing as "an Ai" as a technology.
Ai cannot create new knowledge. Even in your example it's just organising existing data which is not creating something new. At best it can absorb information and come to conclusions quicker than humans will do
That’s like saying the output of any process isn’t knowledge generation because all of the information was present in the input data and you’re just performing a transformation to make it the focal point. It’s just silly, though. To, for example, say that everything that happens after the retina is not knowledge generation is absolutely absurd.
Assuming AI were able to do that, no, that's not new knowledge; that's only using preexisting knowledge that humans would have to programme. I'm not even sure that's AI: if a human is inputting all the data and using the radar, there's no intelligence being used there. By that logic, if I use a computer to write a book then the computer is the author.
There is no need to assume. AI is already doing this, along with cancer research and almost every other field that collects data.
'Humans would have to programme it': that's not how neural nets work. They let it look at preexisting conclusions from past data, and it learns what kind of knowledge it needs to find and what patterns to look out for. Exactly how we train human data scientists.
The instruments collect the data. The AI looks at it for meaning and produces knowledge.
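As a concrete (and heavily simplified) illustration of that "learn the pattern from past labelled data, then apply it to new readings" workflow, here is a toy pure-Python sketch; the numbers and the nearest-centroid rule are invented for illustration and stand in for a real neural network:

```python
# Toy sketch of supervised pattern-learning: past labelled readings in,
# a decision rule out, then applied to new instrument data.
# All numbers are made up for illustration.

def centroid(rows):
    """Average each feature across a list of feature vectors."""
    n = len(rows)
    return [sum(r[i] for r in rows) / n for i in range(len(rows[0]))]

def distance(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Past readings that humans already labelled: [brightness, period_hours]
known_bodies = [[0.9, 12.0], [0.8, 10.5], [0.95, 11.2]]
known_noise  = [[0.1, 0.3], [0.2, 0.5], [0.05, 0.2]]

body_centre  = centroid(known_bodies)
noise_centre = centroid(known_noise)

def classify(reading):
    """Label a new reading by whichever past pattern it sits closer to."""
    if distance(reading, body_centre) < distance(reading, noise_centre):
        return "candidate orbital body"
    return "noise"

print(classify([0.85, 11.0]))  # -> candidate orbital body
print(classify([0.15, 0.4]))   # -> noise
```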
Well the issue really comes down to AI not actually thinking. An LLM is just predicting what comes next after you give it a prompt, with some randomness sprinkled in. Although, with the same random seed, the output of the AI is deterministic.
It's just mixing texts it got during training to create a novel output, which makes it inherently unable to do new things. It can only give you the most likely combination of old things in response to a novel input, with some variance through pseudorandom numbers to fake the idea of thinking.
In the end, the thing that's thinking, the thing capable of independent thought, is only you, and by that measure you are the one creating new knowledge, while the AI is only parroting old knowledge in a trenchcoat.
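For what it's worth, the seeded-determinism point is easy to demonstrate with a toy next-token sampler; the probabilities below are invented, but the mechanism (a weighted random choice from a fixed seed) is the one being described:

```python
import random

# Toy next-token distribution for the prompt "the cat sat on the".
# Probabilities are invented for illustration; a real LLM produces
# these scores from its learned weights.
next_token_probs = {"mat": 0.6, "sofa": 0.25, "roof": 0.1, "moon": 0.05}

def sample_next_token(probs, seed):
    """Pick a token at random, weighted by probability, from a fixed seed."""
    rng = random.Random(seed)
    return rng.choices(list(probs), weights=list(probs.values()), k=1)[0]

# Same seed -> same "random" choice every time: the sampling is deterministic.
print(sample_next_token(next_token_probs, seed=42))
print(sample_next_token(next_token_probs, seed=42))  # identical output
print(sample_next_token(next_token_probs, seed=7))   # may differ
```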
By definition, knowledge is something that is known. If the AI doesn't know it, it's not knowledge.
This might just seem like pointless, pedantic semantics, but it's the point. I could ask a random number generator to guess what 2+2 is and it might totally be correct, but there's no way to know it's correct unless you have a way to independently establish correctness.
Even if you build your random number generator to be very likely to be correct, you would still need to know the correct answer first to build it that way.
LLMs are just very sophisticated random number generators built on things we already know. They can spark inspiration or prompt a new line of thought (in the same way, say, a Magic 8 Ball can) but they can't evaluate the things they say, so they can't iterate, learn, or expand their own knowledge by themselves.
You might think that inspiration is valuable in itself (it is) but that's not anything new. Inanimate objects have always provided inspiration to human thinkers for all of history, from Archimedes in his bathtub to Isaac Newton and the apple falling on his head, to any painter that ever saw a sunset. The danger is thinking that LLMs are doing anything more profound than that - they're not. They're just dice. Very pretty, eloquent, weighted dice. LLMs are no more creating knowledge than bathtubs and apples wrote the laws of physics.
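A toy version of that random-guesser argument, just to make the "correctness lives in the checker, not the generator" point concrete (purely illustrative):

```python
import random

def rng_guess():
    """A 'guesser' that knows nothing: it just picks a number."""
    return random.randint(0, 10)

def independent_check(guess):
    """To know the guess is right, we need knowledge the guesser lacks."""
    return guess == 2 + 2

guesses = [rng_guess() for _ in range(20)]
correct = [g for g in guesses if independent_check(g)]
print(f"{len(correct)} of {len(guesses)} random guesses happened to equal 4")
```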
You and some others seem to think the AI is incapable of making connections. I agree it is better used as an information search than for trying to create new information (I will say information instead of knowledge, as per your definition). Even when it makes new info, a human has to check it to make sure it makes sense.
But I have several examples where it literally joined info to give me a result that didn't exist before.
I asked about plants that are not very common, and it "hallucinated" that the care for that plant should be similar to others of the same family, so it told me to check a similar plant. So, it made that connection and gave me valuable info. Maybe that info is bullshit and the plant will die, but it seems like a logical inference to me.
This is the problem, really. It's just straight up not logical inference.
Logical inference requires understanding concepts and following rules. When you ask a calculator to answer 2+2=?, the calculator has specific binary data packets corresponding to the numbers and operations, and it has built-in procedures for how to handle them that are actually the exact same concepts as you as a human understand them; it just handles them more efficiently and powerfully. If you ask it to add two numbers, it takes the two numbers and directly performs an 'add' function, exactly the same as a human does. It literally 'understands' and performs the exact operation by following established rules of logic. That's logical inference.
Similarly, a spreadsheet can meaningfully understand and logically interpret what you mean by 'rows' and 'columns.' A search engine understands and can operate logically on a 'tag.'
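A minimal sketch of that kind of rule-following evaluation, where the '+' symbol is bound to an explicit procedure that gets executed directly rather than statistically approximated (illustrative only):

```python
import operator

# A calculator-style evaluator: each symbol maps to a built-in procedure
# that is applied directly to the operands, following a fixed rule.
OPERATIONS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def evaluate(left, op_symbol, right):
    """Look up the rule for the symbol and apply it; nothing statistical."""
    rule = OPERATIONS[op_symbol]
    return rule(left, right)

print(evaluate(2, "+", 2))  # 4, by executing an explicit 'add' rule
```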
LLMs are completely different on the most fundamental level. They technically do follow logical inference (on, like, the circuit board level) but that has almost nothing to do with the conversation you're describing. They do not do logical inference on the actual meaning of their natural language inputs or outputs. They just don't. They don't even try. (Not unless you include like, RAG applications and stuff, but that's a separate technology and a separate discussion) The closest thing they have to human-recognizable logic processing is in how they understand semantics - LLMs model the general lexical rhythm of human speech in a way that (if you could parse the operations it was performing on tokens) would probably be somewhat familiar, or at least interesting, to anyone who studies linguistics, but that's about it.
When you ask an LLM for plant advice, there is nothing in the LLM idea of a 'plant' that maps at all to the understanding and rules you as a human know as a plant, not even in a very abstract, computational way. Genuinely, Wikipedia understands what a 'plant' is better than an LLM does. So it cannot do logical inference to conclude 'Plant Y and Plant X are related, so might have similar properties,' it doesn't have the definitions or operations to parse a sentence like that, never mind logically deduce it. If it says that sentence, it's only because mushing together thousands of very similar comments from the past happened to produce a series of words that happened to line up well enough to produce that sentence. It's just a drunk parrot, randomly repeating random stuff, slurring its words and randomly mixing things together. If it produces anything worthwhile at all, it's only because it previously heard a lot of other people saying very worthwhile things in the past. If it comes up with anything 'original' - it's only because slurring words sometimes sound like a new language.
I'm not saying this can't be useful (a drunk parrot could occasionally say very interesting things that make you think or inspire new ideas) and I get that it's very spooky and impressive how convincing it sounds. But it is so, so important that everyone understands the difference between this and actual logical inference, which this very much is not.
I feel like you might be misunderstanding AI (at least, LLMs) as a concept.
It’s true that they’re trained off of things that people said in the past, but they don’t just adapt to specific sequences of words and mush them together in ways that they think make sense to remix existing statements. They pick up on patterns of language and use them to construct statements.
That might sound like a distinction without a difference, but simple statements of facts aren’t the only types of speech that exist. Logical reasoning can be done via words too, and these models will try to apply those reasoning “templates” where they believe them to be relevant when trying to come to a conclusion about something.
That means that it can absolutely “think up” a logical conclusion that a human hasn’t come up with before, it just has to follow the “if X and Y then Z” structure that it’s seen humans go through billions of times already. Using the other person’s example, while doing research on how to care for “Plant A” it would see “Plant A is a member of the XYZ Genus”, “remember” that plants within a genus might have similar care requirements, and seek out the care requirements for XYZ genus. Then it would see that it has the information:
1) Plant A is in XYZ Genus
2) Plants within a genus often have similar care requirements.
3) XYZ Genus is best cared for by doing ABC.
And combine those known facts to create a new conclusion that doesn’t exist in its data, “Plant A is likely best cared for by doing ABC.”
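That chain can be written out as classic rule-based inference; here is a minimal sketch using the placeholder names from the example above (to be clear, this is symbolic chaining written by hand, not a claim about how an LLM internally computes its answer):

```python
# Toy forward-chaining over the plant facts above (names are placeholders).
facts = {
    "genus_of": {"Plant A": "XYZ"},     # fact 1
    "genus_care": {"XYZ": "do ABC"},    # fact 3
}

# Fact 2, expressed as a rule: plants in a genus often share that genus's care needs.
def infer_care(plant):
    genus = facts["genus_of"].get(plant)
    care = facts["genus_care"].get(genus)
    if care:
        return f"{plant} is likely best cared for by: {care}"
    return f"No conclusion possible for {plant}"

print(infer_care("Plant A"))  # a conclusion not stored anywhere as a fact
```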
I think you’re getting the idea that LLMs can’t generate new information, which is actually true, confused for the idea that LLMs can’t come to original conclusions, which isn’t true. An AI chatbot is perfectly capable of logical reasoning; that’s impossible to deny if you just look at how they process requests these days. But they can’t discover new information because they’re constrained by the information that we gave them access to. Unlike a human, they can’t explore the world or conduct experiments in physical space, so they have no mechanism for discovering anything new that wasn’t already known or possible to discover via existing knowledge. On top of that, even if they manage to conduct “experiments” just using reasoning, they can’t test their hypotheses, so they can’t overcome the final step to bring them from hypotheses to knowledge.
But that doesn’t mean they’re literally incapable of coming up with thoughts that humans haven’t had before. That’s just a fundamental misunderstanding of what they are and how they work.
You are just getting philosophical at this point. I know it is not real logic, real inferences, real deductive reasoning. However, if it gets it right 98% of the time, I don't care how it got to that conclusion. It fucking works. And if it can mix a bunch of old ideas and (sometimes) vomit a valid solution that doesn't exist yet, that's good enough.
The irony here is that there are plenty of human theories that have to be proven by mathematical modeling. Are you saying that in those cases, the humans are not the ones creating new knowledge and that the computer program that verifies it is the one credited with expanding human knowledge?
An LLM would NEVER suggest that if no one has tried it before, since it wouldn’t make the connection. It tries to make these connections based on the data it was trained on. If no such data exists, it’d be about as likely to connect Planet X to Planet Y as it would be connecting Planet X with a Big Mac.
That makes no sense. I use ChatGPT sometimes and it has recommended that I try some shit that has never been done before. It was stupid and it would not have worked, but it was NEW. As soon as I asked it to show a real example, it told me there weren't any. And of course googling the issue showed that it was a match of several concepts that seemed similar.
On the same example of the plants, I literally asked for the optimal conditions for a plant, and since it couldn't find the exact plant it gave me info on another similar one. So the example I used is completely real and based on my experience.
I get why this seems like new knowledge to you, but it's really not new in any meaningful sense. If I typed the sentence "You should care for your plant by pouring [chemical] on it" and then I programmed a random number generator to pick fifty random chemicals from a list of 10,000 options, then the result would probably also give you a list of shit that has never been done before, but so what? Even if some of the chemicals actually worked, it doesn't really mean anything. It's not 'new,' it's just a scrambled version of the information I already gave it.
ChatGPT is basically just doing that. It's a search engine with a random number generator. You're asking it to search for plant advice and it's smooshing some results together from a bunch of pre-existing data. It's 'new' only in the sense that the literal order of the randomly generated words in the browser is probably new (by statistical necessity), but the information isn't new. It's just a scrambled version of extremely well known words and phrases. The only reason LLM output ever resembles good advice is that loads and loads and loads of people have already tried and described doing very, very similar things in the past. The precise combination of words and phrases might seem original, but all randomly generated things always do.
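The earlier thought experiment is easy to make literal, which is part of the point; a sketch with placeholder chemical names:

```python
import random

# The thought experiment from the previous comment: a template sentence plus
# a random pick from a list produces "advice" nobody has tried before,
# without any new information being created. Chemical names are placeholders.
chemicals = [f"chemical_{i}" for i in range(10_000)]
template = "You should care for your plant by pouring {} on it."

suggestions = [template.format(c) for c in random.sample(chemicals, 50)]
for line in suggestions[:3]:
    print(line)
```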
You are oversimplifying. Comparing AI to random number generator is like saying a jackhammer is like a hammer. Yes, it is... But much more powerful. What I am saying is the AI didn't choose random info to give me, it made an inference (maybe not using human logic, but an inference anyway) about info that PROBABLY applies to my case. And it did apply to my case.
You literally just said you googled its answer and found a match. That’s where it got the idea. Because someone else thought about it. Just as someone else would have thought about the similarities of certain planets. It’s not new. It’s a variation of someone else’s thoughts
I know it is a variation of previous data. What I am saying is that variation could be valuable. By the way I said plant, not planet. That's because the example I got several days ago told me that it didn't find how to take care of some plant, but that some others of that family liked full sun. So yes, it literally made new info. That info could be wrong, of course, maybe I will put that plant in the sun and it will die, but it is a logical inference that plants of the same family may like the same treatment
I would say that it comes down to whether you see knowledge as something that is “created” or rather something that is “gained”.
Let’s consider chess AI bots, for instance. They are able to create completely new and complex movesets, with the ability to evaluate the moves as “objectively” better than the other options.
So you might say that it is clear that the new moves are “created”, (as well as the evaluation of the engine). The question is if you can reasonably say that the knowledge is “gained” without a human to interpret what is happening? I would tend to say yes, but the implications are a bit unclear to me.
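A heavily simplified sketch of the "generate candidate moves, evaluate them, rank them" loop, assuming the third-party python-chess package is installed; real engines use deep search and learned evaluations rather than a one-ply material count, so this is only meant to show the shape of the idea:

```python
import chess  # the python-chess package (pip install python-chess)

PIECE_VALUES = {chess.PAWN: 1, chess.KNIGHT: 3, chess.BISHOP: 3,
                chess.ROOK: 5, chess.QUEEN: 9, chess.KING: 0}

def material_score(board, colour):
    """Crude evaluation: material for `colour` minus material for the opponent."""
    score = 0
    for piece in board.piece_map().values():
        value = PIECE_VALUES[piece.piece_type]
        score += value if piece.color == colour else -value
    return score

def rank_moves(board):
    """Score every legal move by the position it leads to, best first."""
    colour = board.turn
    scored = []
    for move in board.legal_moves:
        board.push(move)
        scored.append((material_score(board, colour), move.uci()))
        board.pop()
    return sorted(scored, reverse=True)

board = chess.Board()
print(rank_moves(board)[:5])  # the engine's own ranking of opening moves
```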
AI right now is like a toddler: it is more than capable of generating new information, hell, it does it all the time. The problem is that it has a hard time distinguishing reality from fiction and hasn't mastered object permanence yet. It's also like a toddler in that we are growing it every day and making it better at those things, teaching it primarily through play how to generate real, useful information and tell fact from fiction.
AI can generate new knowledge though, I remember an AI was on par with the best Olympic* mathematicians in the world, with no prior knowledge, I don't remember what it was called but I can look it up for you if that might interest you
*Olympic for lack of a better term, basically there are some math problems you gotta solve and provide the proof for it and it's pretty cool
I'm no expert either, but it seems like the philosophy professor is coming at the idea of learning from a purely philosophical standpoint, and she (and her evidence and videos) is coming at it from a technical standpoint in terms of how LLMs actually work.
Problem is that her response was to tell the philosophy professor to 'go study philosophy', when she could have asked him about his credentials in AI.
I see it as an initially innocent coming at the same topic from the perspective of different disciplines.
Her subsequent crash out, however, is quite something. I particularly liked the part where she implies his correction was some kind of misogynistic abuse lol
Well, the first thing would be to define new knowledge, which would be a whole rabbit hole in itself, so we can just take the best-faith argument, which would be something like: can AI intuit new knowledge from prior knowledge? If you ask an AI a never-before-asked math question, it could answer it because it knows the laws, and that would technically be new knowledge, but I don't think AI would be able to come up with the theory of relativity, because that would require bending the Newtonian rules of gravity.
Sadly, it was a discussion about AI.
He owned them. But he entirely missed the point that the discussed AIs (LLMs) don't think like humans do, so his original point was basically wrong.
I think he was assuming it was an AGI conversation.
Heidi may not know who he is, but Matthew isn’t right either. Generative AI isn’t coming up with new ideas. It literally cannot make the kinds of leaps that people can. It is NOT thinking in the sense that we usually define it.
Isn't his logic sound though? It's just a matter of information, rather than method? Everything I know is due to information I've gathered throughout my life. Every breakthrough comes from a combination of those pieces of information. I don't know if AI will ever reach humans' level of information, but I do think its intelligence is working in the same way as humans'. The limiting factors are "just" not living life (which means constantly gathering information) and feeling things.
This has been a debate in philosophy for decades upon decades. You stating that it isn't thinking like we are doesn't make it true. If there was a definitive objective answer, this wouldn't be a subject for debate.
Personally though, I don't know of anything that I've ever thought of in my life where I didn't gather some information previously that made me reach that thought.
An argument I often hear, is that intuition is intuitive and doesn't need thought or previous information, but is a completely natural "reflex" humans have. I completely disagree. A professional chess player will often say a move "feels right". That isn't normal intuition, that's merely countless hours of practice... which is countless hours of information gathering. In most cases however, our "intuition" is in problem solving, which we do subconsciously at all times, and thus we're always gathering information about it.
If that is the case, then AI would need to overcome two things: being able to have a fuckload of storage, and being able to sort through that storage really quickly. Our brains are really quite good at that. But I fully do not believe the method of "machine learning" is far from the method of "human brain learning".
An LLM doesn't draw conclusions, it just summarises conclusions made by others. That's the difference. If you give a human five related pieces of information, they can put them together and come up with a sixth piece of information. An LLM can't do that, as far as I understand it.
It absolutely can. It won't always be correct and often it won't be a groundbreaking discovery, but they are capable of coming up with new information.
Like I've used it to help me solve a difficult vector calculus problem that I very much doubt was in its training data.
So LLMs may not, but general machine learning methods do come up with new, previously unknown pieces of information. Google's DeepMind, for example, received the Nobel Prize in 2024 for predicting protein folding, which brings viable new information to a critical field. While LLMs get all the attention, they're only one piece of the AI breakthroughs. AI is also starting to do new math proofs, though limited, compared to just 12 months ago when AI was not at all good at this. Things are changing so, so fast.
Just for clarity, DeepMind is a company and a subsidiary of Alphabet, AlphaFold is the AI, and the Nobel went to the researchers who led the creation of AlphaFold
But to add, AI is also creating new inventions including drugs. There's even been legal disputes about whether AI inventions can be patented and who the patent goes to
That's not right either. It is a model that predicts the next most plausible token. It has gotten really good at this, to the point that 99% of the time it might as well be summarizing existing ideas.
That 1% of the time it's wrong is mostly hallucination, but it's also not unthinkable that a small portion of that may lead humans to useful new ideas. The problem is volume and discernment, which will always require human intervention.
An AI can solve math issues it hasn’t seen before, isn’t that exactly doing what you’re saying it can’t? Obviously to a certain point, but that’s just about the amount of information it’s able to look through at once.
Yeah, at some point this argument just boils down to whether or not you believe in some form of determinism or materialism. Personally I do. I believe that every choice and action a person makes is entirely a result of their genetics, brain chemistry, and cultural/social upbringing. I don’t believe in a soul or higher consciousness so everything must be rooted in physical reality. At the microscopic level everything happens according to observable rules, even quantum mechanics may exist under a rule set that we do not understand yet. Even Einstein hypothesised as such.
Did you choose to have cereal vs eggs for breakfast? In my opinion the you that you are in that very moment would always choose the same answer, your thought and “free will” was actually a result of identifiable and measurable processes in your brain. If I cloned you under the exact same conditions, I believe the clone would make the same choice in that exact circumstance. Of course this is incredibly complex, it’s nearly impossible for us to take into account the countless information stored in your brain, we don’t really know how much of biology and upbringing affects your subconscious thought processes.
This is where a core disagreement on LLMs comes from. From my perspective, an LLM could be functioning the same way as a human brain, just on a vastly smaller and less complex scale. Do I believe that to be the case? Not really, but I accept that it is a possibility. For those that believe that consciousness is not an inherent/emergent property of our brains, who likely have a non-materialistic/non-physicalistic view of consciousness, it would be impossible for something without consciousness like a computer to truly “think” and come up with new information like a conscious human can.
Yeah, I believe our brain is limited by the same physical properties as a computer. Scale is the only thing I believe makes current AI not as intelligent as humans…
LLMs are glorified autocomplete algorithms; saying they are intelligent at all, let alone that their "intelligence" is working in the same way as humans', is absurd on its face.
It’s only absurd if you can define how intelligence works in humans, which we can’t. Do I think LLMs function the same way as humans? No. Do I have proof of that? Also no.
We do not know exactly how intelligence works in humans, but we don't have zero knowledge about it. By the best available evidence, AI works almost nothing like the human brain.
Human thought involves a lot of nonverbal processing, and much of what is involved in elucidating consciousness is a post-hoc overlay. When you verbalize your thoughts, externally or internally, it's to some extent your brain constructing a narrative of what has already happened.
LLMs take an input and construct a plausible text output in response based on weighted training data. It's entirely possible that one day in the future, an artificial intelligence might use an LLM subroutine to express its thoughts, analogous to the way a human brain has the Wernicke and Broca areas. But that day is far, far in the future.
It's clear from studying people who have suffered loss of function in those speech centers of their brain that most of what we think of as intelligence takes place elsewhere.
For other forms of AI, nothing about their capabilities even comes close to suggesting that they are intelligent.
I was talking about the machine learning method in general. It doesn't really have anything to do with what exists; like I said in my comment, everything we've made is really far from that scale. My point was merely that scale is the only thing missing. LLMs aren't 100% machine learning, though.
But you have this backward — AI is ONLY information, there is no method. AI has access to basically all the information it wants and it combines that information in the most predictable way, which convincingly mimics human thought. But that’s because it is using human thought as the source of its information. The difference is that humans interpret information and AI does not. And the real difference is that interpretation is not always logical or objectively meaningful or falsifiable—or predictable.
It doesn't have direct access to all the information it wants. These models are trained on many terabytes of data, yet the final model is only hundreds of gigabytes.
The training is designed in such a way that it finds patterns in the data, and then it uses those patterns to respond to new stimuli.
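A toy illustration of that "patterns rather than a copy of the data" point: a thousand synthetic observations get compressed into two learned parameters, which then answer queries about inputs the model never saw. The data here is invented (y = 3x + 2 plus noise):

```python
import random

# Sketch of "patterns, not a copy of the data": many points are boiled down
# to two learned numbers, which then generalise to new inputs.
random.seed(0)
data = [(x, 3 * x + 2 + random.uniform(-0.5, 0.5)) for x in range(1000)]

# Fit y = a*x + b by ordinary least squares.
n = len(data)
sx = sum(x for x, _ in data)
sy = sum(y for _, y in data)
sxx = sum(x * x for x, _ in data)
sxy = sum(x * y for x, y in data)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

print(f"1000 points stored as just two parameters: a={a:.3f}, b={b:.3f}")
print(f"prediction for an unseen input x=5000: {a * 5000 + b:.1f}")
```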
Yeah he is though. AI does synthesize information and does make inductive leaps. It is currently limited in its capacity to do so in the way we do, but to suggest it is somehow entirely incapable of this when it's already demonstrated an ability to do so in particular contexts is just wrong.
I suppose we could then shift the goal posts and say its only generating knowledge when we deem an inductive leap // synthesis of information as sufficiently impressive, then define sufficient as human. But that seems like a pretty bad argument to me.
AI does not make inductive leaps. You can prompt an AI, very easily and with honest questions, into contradicting itself egregiously. Most recently a family member of mine was using Grok, and he's into, let's say, "crunchy" stuff. Grok described grounding as connecting to mother earth and absorbing positive ions, among other things. I have electrician training and asked it, for the sake of conversation, to relate that to the electrical concept of grounding. It then somewhat accurately described electrical grounding and said the two concepts were the same thing.
Actually, AI is good at inductive reasoning; it just struggles with deductive reasoning. I honestly thought it would be the other way around, but it can do a pretty damn good job of taking some complex things and creating a general rule.
I think there might be a misunderstanding about what I mean by inductive leaps? An inductive leap does not at all mean getting everything right, being incapable of self-contradiction, being conscious, or even having coherent internal concepts. It just means producing a broader conclusion from specific data.
Inductive leaps, meaning inductive reasoning, generally refers to being able to essentially come to a logically sound conclusion about something. That coincides with your definition pretty well, I think. The example I gave was meant to highlight a lack in inductive reasoning with AI. The AI in question, Grok, did something entirely typical of AI across the board. Grok spat out a detailed conclusion for the same concept twice, with significant overlap but entirely different and contradictory conclusions. Notably, further discussion resulted in Grok unintelligently "combining" the two concepts in a messy hodgepodge way that wouldn't even make sense if it were trying to basically bullshit me on the topic.
It's like if I asked you what blazing is. You might be able to describe it in two entirely different ways, such as something being very hot/spicy, or something being incredibly high. But you are capable of inductive reasoning and would likely not tell me, upon further questioning, that blazing means for something to be both incredibly high and spicy and hot, because that doesn't make sense. Typically, knowing how something tastes means it's food, and knowing something is high means it's a person or live animal, neither of which categories applies to food (generally and most accurately, with few exceptions). The concepts just aren't congruent, and you can tell from context, just by understanding the two ways to interpret the word "blazing", that the two definitions don't apply to the same thing. AI cannot make that distinction. AI is trained on whether what it says sounds realistic or accurate or not. Note that it just needs to SOUND realistic or accurate; it does not necessarily have to BE realistic or accurate.
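To make that disambiguation point concrete, here is a deliberately crude sketch of picking a sense of "blazing" from context; the cue words are invented, and real human (or model) disambiguation is far richer than keyword overlap:

```python
# Toy word-sense disambiguation by context, to make the "blazing" point concrete.
SENSES = {
    "spicy-hot": {"curry", "salsa", "pepper", "sauce", "wings"},
    "intoxicated": {"party", "smoke", "couch", "giggling", "stoned"},
}

def sense_of_blazing(sentence):
    """Pick the sense whose cue words overlap most with the sentence."""
    words = set(sentence.lower().split())
    scores = {sense: len(words & cues) for sense, cues in SENSES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(sense_of_blazing("this salsa is absolutely blazing"))          # spicy-hot
print(sense_of_blazing("he was blazing and giggling on the couch"))  # intoxicated
```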
Unfortunately we are still just using different definitions. I think your example is great: AIs are flawed at reasoning by our standards, though improvement has been rapid.
Inductive reasoning is broader than the way you are using it. You are taking one particular type of inductive reasoning and saying it is flawed with AIs and I am fine with that. Here’s a few minutes of a computer science professor talking about this: https://youtu.be/oI6jv6uvScY?si=oKhxUegalXaSBRhC
When you make a prediction about something, what does that involve?
Usually, I think that involves gathering relevant information, looking for a logical rule in a pattern, and plugging in your information into that logical rule.
There are some cases where this rule is quite simple. The Fibonacci Sequence can be replicated with just a few lines of code.
But...what does it take to recognize the Fibonacci Sequence?
There's always the brute force method of storing the string of numbers in a list and checking if that's the prompt the AI receives. But...isn't that exactly what a human does? Don't you and I have the chain of "1, 1, 2, 3, 5, 8, 13..." memorized?
I think reducing the LLM's activity to "just" predictive text is a little short-sighted. Prediction requires logical rules. Determining logical rules requires reasoning.
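Since the comment mentions that the Fibonacci sequence is only a few lines of code, here is a sketch of both generating it and recognising it by checking the rule rather than by rote matching (illustrative only):

```python
# Generating Fibonacci really is a few lines of code, and "recognising" a
# prompt as Fibonacci can be done by checking the rule rather than by
# memorising the string of numbers.

def fibonacci(n):
    """First n Fibonacci numbers."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

def looks_like_fibonacci(prompt):
    """True if every term (from the third on) is the sum of the two before it."""
    return len(prompt) >= 3 and all(
        prompt[i] == prompt[i - 1] + prompt[i - 2] for i in range(2, len(prompt))
    )

print(fibonacci(8))                              # [1, 1, 2, 3, 5, 8, 13, 21]
print(looks_like_fibonacci([1, 1, 2, 3, 5, 8]))  # True: the rule, not rote memory
print(looks_like_fibonacci([1, 2, 4, 8, 16]))    # False
```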
But thinking alone cannot generate new knowledge... That was the whole problem with people's focus on the ancient Greek philosophers prior to the scientific revolution.
Thinking alone still doesn't generate new knowledge; you have to test it and prove it.
We make observations, from those observations, we make hypotheses, we test those hypotheses, and then we have to reliably reproduce the results, then we can say we have new knowledge.
Saying thinking generates knowledge is like saying cutting vegetables and meat creates a completed, fully cooked dish.
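A toy rendering of that observe, hypothesise, test, reproduce loop; the hidden "world" rule and the candidate hypotheses below are invented for illustration:

```python
# A toy version of the observe -> hypothesise -> test -> reproduce loop.
def world(x):             # stands in for reality; we only see its outputs
    return 2 * x + 1

observations = [(x, world(x)) for x in range(5)]        # observe

hypotheses = {                                          # hypothesise
    "y = x + 1":  lambda x: x + 1,
    "y = 2x + 1": lambda x: 2 * x + 1,
    "y = 3x":     lambda x: 3 * x,
}

def survives(h):                                        # test
    return all(h(x) == y for x, y in observations)

surviving = [name for name, h in hypotheses.items() if survives(h)]

# Reproduce: check the survivors against fresh observations they never saw.
fresh = [(x, world(x)) for x in range(100, 105)]
confirmed = [name for name in surviving
             if all(hypotheses[name](x) == y for x, y in fresh)]
print(confirmed)  # only the rule that keeps working counts as knowledge
```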
Okay? Regardless, it doesn't yield "knowledge" unless it can be tested and proved.
It's not like Newton sat there and thought "hmmm, I wonder if we plot an objects velocity on a graph, and then take the rate of change along every point, and created a new function from that... I bet we could get the object's acceleration at each point in time.... thoughts are so cool, I just gained so much knowledge"
This is a very narrow view of knowledge and while it may be defensible, you're being rather aggressive about it.
At the risk of ending up like the meme in question, I'd recommend you take a philosophy class or two so you're aware that there are competing forms of knowledge. For example I believe you're describing what Kant would call phenomena knowledge, but Kant also discussed noumena knowledge (or something, it has been a while since I took a philosophy course) which I think is what math and logic fall under.
Very late to the party, but he's generally wrong in his assertion that we do generate "new knowledge" as opposed to just rearranging preexisting information. I literally have post-grad degrees in information theory. Information functions the same as energy in the science. The first law of thermodynamics is that you can't create or destroy, only change or move, the energy (in this case information) in any given system. The fact is you don't create new knowledge so much as find what already exists.
I really would like to know what he was responding to.
Pretty sure it's an analogy on AI.
You’re confusing data with information.
I think you should look into what knowledge means.
'Organising existing data' is exactly how humans gain knowledge too.
I think you should look into what the word "new" means.
Data isn't knowledge until it is organised to have meaning.
If a human studies deep space radar data and finds a new orbital body, that's new knowledge.
If an AI does it (which it currently does), what then?
AI can be a tool that helps humans generate new knowledge because of its ability to process information much faster.
However a human is needed to give it direction, to point out what to look for and guide its associations.
So I don't think it creates knowledge yet.
I've described below why this is wrong.
AI can be set autonomously to gather data and produce knowledge.
That would mean AI is able to create new knowledge. It's just that it won't know it is new knowledge or just bullshit until a human checks.
It depends on what you consider new knowledge.
Let's say I ask it for tips to take care of a plant X. It checks and finds that plant Y is in the same family, and it likes to get a lot of nitrogen.
It tells me that it has found that maybe nitrogen is good for plant X, since it is related to Y.
So, if nobody had tried nitrogen with plant X, it just generated "new" knowledge.
It's a stupid example but it shows what I mean.
This should show the tweet he's responding to if you scroll up:
https://xcancel.com/MattLutzPhi/status/1997824022730686766#m
Seems this is where Heidi stops responding (again, have to scroll up):
https://xcancel.com/MattLutzPhi/status/1997830107684327874#m
I'm pretty sure he was more just criticising the logic of the argument rather than defending AI.
Me when an argument isn't logically sound but I agree with the conclusion
No. He was suggesting that LLM thinking and human thinking were comparable. So it was wrong to dismiss LLM "creativity".
He would've been right about AGI. But the point was that LLMs are falsely thought of as AGI, causing people to mistake it as being creative.
There are actually lots of philosophers who have been arguing humans can't either: https://en.wikipedia.org/wiki/The_Missing_Shade_of_Blue
It absolutely can. It won't always be correct and often it won't be a groundbreaking discovery, but they are capable of coming up with new information.
Like I've used it to help me solve a difficult vector calculus problem that I very much doubt was in its training data.
So LLM may not but general machine learning methods does come up with new previously unknown pieces of information. Google's Deep Mind for example received the noble prize in 2024 by predicting protein folding which brings in viable new information to a critical field. While LLM gets all the attention, it's only one piece of the AI breakthroughs. AI is also starting to do new math proofs though limited. Compared to just 12 months ago when AI was not at all good at this. Things are changing so so fast.
Just for clarity, DeepMind is a company and a subsidiary of Alphabet, AlphaFold is the AI, and the Nobel went to the researchers who led the creation of AlphaFold
But to add, AI is also creating new inventions including drugs. There's even been legal disputes about whether AI inventions can be patented and who the patent goes to
That's not right either. It is a model that predicts the next most plausible token. It has gotten really good at this, to the point that 99% of the time it might as well be summarizing existing ideas.
That 1% of the time it's wrong is mostly hallucination, but it's also not unthinkable that a small portion of that may lead humans to useful new ideas. The problem is volume and discernment, which will always require human intervention.
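To make the "predicts the next most plausible token" point concrete, here's a toy sketch (plain Python with a made-up tiny corpus; real models use billions of learned weights rather than a lookup table, so treat this purely as an illustration):

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny corpus,
# then always emit the most frequent continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    # Pick the single most plausible next token seen during "training".
    candidates = follows.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))   # 'cat' -- the most common continuation
print(predict_next("cat"))   # 'sat' -- ties broken by first occurrence
```

Real LLMs score every token in their vocabulary against thousands of tokens of context and usually sample from those scores rather than always taking the single top pick, but the basic loop of "score the candidates, emit one, repeat" is the same.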
An AI can solve math problems it hasn’t seen before; isn’t that exactly what you’re saying it can’t do? Obviously only up to a point, but that’s just about the amount of information it’s able to look through at once.
Yeah, at some point this argument just boils down to whether or not you believe in some form of determinism or materialism. Personally I do. I believe that every choice and action a person makes is entirely a result of their genetics, brain chemistry, and cultural/social upbringing. I don’t believe in a soul or higher consciousness so everything must be rooted in physical reality. At the microscopic level everything happens according to observable rules, even quantum mechanics may exist under a rule set that we do not understand yet. Even Einstein hypothesised as such.
Did you choose to have cereal vs eggs for breakfast? In my opinion, the you that you are in that very moment would always choose the same answer; your thought and “free will” were actually the result of identifiable and measurable processes in your brain. If I cloned you under the exact same conditions, I believe the clone would make the same choice in that exact circumstance. Of course this is incredibly complex; it’s nearly impossible for us to take into account the countless pieces of information stored in your brain, and we don’t really know how much of biology and upbringing affects your subconscious thought processes.
This is where a core disagreement on LLMs comes from. From my perspective, an LLM could be functioning the same way as a human brain, just on a vastly smaller and less complex scale. Do I believe that to be the case? Not really, but I accept that it is a possibility. For those that believe that consciousness is not an inherent/emergent property of our brains, who likely have a non-materialistic/non-physicalistic view of consciousness, it would be impossible for something without consciousness like a computer to truly “think” and come up with new information like a conscious human can.
Yeah, I believe our brain is limited by the same physical properties as a computer. Scale is the only thing I believe makes current AI not as intelligent as humans…
LLMs are glorified autocomplete algorithms; saying they are intelligent at all, let alone that their "intelligence" is working in the same way as humans', is absurd on its face.
It’s only absurd if you can define how intelligence works in humans, which we can’t. Do I think LLMs function the same way as humans? No. Do I have proof of that? Also no.
We do not know exactly how intelligence works in humans, but we don't have zero knowledge about it. By the best available evidence, AI works almost nothing like the human brain.
Can you elaborate please?
Human thought involves a lot of nonverbal processing, and much of what is involved in elucidating consciousness is a post-hoc overlay. When you verbalize your thoughts, externally or internally, it's to some extent your brain constructing a narrative of what has already happened.
LLMs take an input and construct a plausible text output in response based on weighted training data. It's entirely possible that one day in the future, an artificial intelligence might use an LLM subroutine to express its thoughts, analogous to the way a human brain has the Wernicke and Broca areas. But that day is far, far in the future.
It's clear from studying people who have suffered loss of function in those speech centers of their brain that most of what we think of as intelligence takes place elsewhere.
For other forms of AI, nothing about their capabilities even comes close to suggesting that they are intelligent.
I never mentioned LLMs?
what fucking AI are you talking about then?
I was talking about the machine learning method in general. It doesn’t really have much to do with what currently exists; like I said in my comment, everything we’ve made is really far from that scale. My point was merely that scale is the only thing missing. LLMs aren’t 100% machine learning, though.
But you have this backward — AI is ONLY information, there is no method. AI has access to basically all the information it wants and it combines that information in the most predictable way, which convincingly mimics human thought. But that’s because it is using human thought as the source of its information. The difference is that humans interpret information and AI does not. And the real difference is that interpretation is not always logical or objectively meaningful or falsifiable—or predictable.
It doesn't have direct access to all the information it wants. These models are trained on many terabytes of data, yet the final model is only hundreds of gigabytes.
The training is designed in such a way that it finds patterns in the data, and then it uses those patterns to respond to new stimuli.
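A tiny, purely illustrative sketch of that idea (plain Python, made-up numbers, nothing like a real training pipeline): the entire "model" below ends up being two numbers, far smaller than the data it was fit on, yet it handles an input it never saw because it captured the underlying pattern rather than storing the data.

```python
import random

# Fake "training data": 1,000 noisy points that follow the hidden pattern y = 3x.
data = [(x, 3 * x + random.uniform(-0.5, 0.5)) for x in range(1000)]

# The entire "model" is two parameters -- vastly smaller than the data itself.
w, b = 0.0, 0.0
lr = 1e-6  # small learning rate, hand-tuned for this toy example

for _ in range(500):  # plain gradient descent on mean squared error
    grad_w = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    grad_b = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * grad_w
    b -= lr * grad_b

# Respond to a "new stimulus" that was never in the training set.
print(w * 5000 + b)  # roughly 15000, i.e. 3 * 5000, even though x = 5000 was never seen
```

Obviously real models have billions of parameters and far messier data, but the relationship described above holds: the weights are a lossy, compressed record of patterns in the training data, not a copy of the data itself.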
That’s not how machine learning works?
Yeah, he is though. AI does synthesize information and does make inductive leaps. It is currently limited in its capacity to do so in the way we do, but to suggest it is somehow entirely incapable of this, when it's already demonstrated an ability to do so in particular contexts, is just wrong.
I suppose we could then shift the goalposts and say it's only generating knowledge when we deem an inductive leap / synthesis of information sufficiently impressive, and then define "sufficient" as "human-level". But that seems like a pretty bad argument to me.
AI does not make inductive leaps. You can prompt an AI, very easily and with honest questions, into contradicting itself egregiously. Most recently, a family member of mine was using Grok, and he's into, let's say, "crunchy" stuff. Grok described grounding as connecting to Mother Earth and absorbing positive ions, among other things. I have electrician training, so I asked it, for the sake of conversation, to relate that to the electrical concept of grounding. It then somewhat accurately described electrical grounding and said the two concepts were the same thing.
This is just the most recent example
Actually, AI is good at inductive reasoning; it just struggles with deductive reasoning. I honestly thought it would be the other way around, but it can do a pretty damn good job of taking some complex things and coming up with a general rule.
I think there might be a misunderstanding about what I mean by inductive leaps? Making an inductive leap does not at all mean getting everything right, being incapable of self-contradiction, being conscious, or even having coherent internal concepts. It just means producing a broader conclusion from specific data.
Inductive leaps, meaning inductive reasoning, generally refer to being able to come to a logically sound conclusion about something. That coincides with your definition pretty well, I think. The example I gave was meant to highlight a lack of inductive reasoning in AI. The AI in question, Grok, did something entirely typical of AI across the board: it spat out a detailed conclusion for the same concept twice, with significant overlap but entirely different and contradictory results. Notably, further discussion resulted in Grok unintelligently "combining" the two concepts in a messy hodgepodge way that wouldn't even make sense if it were just trying to bullshit me on the topic.
It's like if I asked you what "blazing" is. You might describe it in two entirely different ways, such as something being very hot/spicy, or someone being incredibly high. But you are capable of inductive reasoning and would likely not tell me, upon further questioning, that blazing means something is simultaneously incredibly high and spicy and hot, because that doesn't make sense. Typically, knowing how something tastes means it's food, and knowing something is high means it's a person or a live animal, and those categories don't overlap (generally and most accurately, with few exceptions). The concepts just aren't congruent, and you can tell from context, just by understanding the two ways to interpret the word "blazing", that the two definitions don't apply to the same thing. AI cannot make that distinction. AI is trained on whether what it says sounds realistic or accurate. Note that it just needs to SOUND realistic or accurate; it does not necessarily have to BE realistic or accurate.
Unfortunately we are still just using different definitions. I think your example is great: AIs are flawed at reasoning by our standards, though improvement has been rapid.
Inductive reasoning is broader than the way you are using it. You are taking one particular type of inductive reasoning and saying it is flawed with AIs and I am fine with that. Here’s a few minutes of a computer science professor talking about this: https://youtu.be/oI6jv6uvScY?si=oKhxUegalXaSBRhC
It's literally just predictive text.
It's the most average of average answers possible.
Everyone makes a response based on the data they are fed, yes
Let's take a moment to think about this.
When you make a prediction about something, what does that involve? Usually, I think it involves gathering relevant information, looking for a logical rule in a pattern, and plugging your information into that logical rule.
There are some cases where this rule is quite simple. The Fibonacci Sequence can be replicated with just a few lines of code (see the little sketch below). But... what does it take to recognize the Fibonacci Sequence?
There's always the brute force method of storing the string of numbers in a list and checking if that's the prompt the AI receives. But... isn't that exactly what a human does? Don't you and I have the chain of "1, 1, 2, 3, 5, 8, 13..." memorized?
I think reducing the LLM's activity to "just" predictive text is a little short-sighted. Prediction requires logical rules. Determining logical rules requires reasoning.
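Here's the little sketch mentioned above, just to make the two approaches concrete (toy Python, purely for illustration, not a claim about how any real model is built): the "memorized list" check versus actually checking the rule that every term is the sum of the two before it.

```python
# Brute force: compare against a memorized prefix of the sequence,
# much like reciting "1, 1, 2, 3, 5, 8, 13..." from memory.
MEMORIZED = [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]

def looks_memorized(seq):
    return seq == MEMORIZED[:len(seq)]

# Rule-based: check the defining pattern itself -- every term after the
# first two is the sum of the two terms before it.
def follows_fibonacci_rule(seq):
    return all(seq[i] == seq[i - 1] + seq[i - 2] for i in range(2, len(seq)))

print(looks_memorized([1, 1, 2, 3, 5, 8]))         # True
print(follows_fibonacci_rule([1, 1, 2, 3, 5, 8]))  # True
print(follows_fibonacci_rule([4, 7, 11, 18, 29]))  # True -- same rule, different start
print(looks_memorized([4, 7, 11, 18, 29]))         # False -- memorization misses it
```

The first check only ever handles the exact string it stored; the second has something closer to the rule and generalizes to sequences it has never seen. Which of those an LLM is really doing is exactly the question being argued in this thread.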
But it is though. That's literally what an LLM is.
It's not actually intelligent. It doesn't "reason". It chooses the most probable answer.
It's quite literally the most average answer to any question, extrapolated from all the answers in its training data.
AI is just an insanely complicated and intricate input/output device.
I see Heidi is of the “if you disagree with me you must be stupid” school of thought.
she has gotten much, much worse over the last few years, too
being a philosophy professor does not stop you from being dumb. In fact...
Classic example of “to a hammer, everything looks like a nail”.
Just because you can shoehorn an example into a philosophical framework doesn’t mean it’s useful for anyone.
But thinking alone cannot generate new knowledge... That was the whole problem with people's focus on the ancient Greek philosophers prior to the scientific revolution.
I counter you with the entire field of mathematics
You know what? Fair. I'd still argue that any type of application will need more than just thought, but the math itself is new knowledge
Thinking alone still doesn't generate new knowledge; you have to test it and prove it.
We make observations; from those observations we form hypotheses; we test those hypotheses; and once we can reliably reproduce the results, then we can say we have new knowledge.
Saying thinking generates knowledge is like saying cutting vegetables and meat creates a completed, fully cooked dish.
Math is famously non-empirical
Okay? Regardless, it doesn't yield "knowledge" unless it can be tested and proven.
It's not like Newton sat there and thought "hmmm, I wonder if we plot an object's velocity on a graph, and then take the rate of change at every point, and create a new function from that... I bet we could get the object's acceleration at each point in time... thoughts are so cool, I just gained so much knowledge"
Coming in a bit hot for someone without the basics of philosophy down.
Math famously gets proven by... more math.
This is a very narrow view of knowledge and while it may be defensible, you're being rather aggressive about it.
At the risk of ending up like the meme in question, I'd recommend you take a philosophy class or two so you're aware that there are competing forms of knowledge. For example, I believe you're describing what Kant would call phenomenal knowledge, but Kant also discussed noumenal knowledge (or something like that; it has been a while since I took a philosophy course), which I think is what math and logic fall under.
Typical Heidi.
Very late to the party, but he's generally wrong in his assertion that we do generate "new knowledge" as opposed to just rearranging preexisting information. I literally have a postgrad degree in information theory. Information functions the same way energy does in that science: the first law of thermodynamics is that you can't create or destroy energy, only change or move it within a given system, and the same applies to the information here. The fact is you don't create new knowledge so much as find what already exists.
Oh, these students, convinced of their greatness when they've barely started crawling...
Context would be great.
Isn’t this literally what makes us human, and why AI is never actually close to being a “true” AI?
Oh Heidi
Came here to post this.