If the compiler evaluates c - c++ left to right:
int c = 1;
int x = c - c++;  // the left operand c reads 1
int x = 1 - c++;  // c++ yields 1, then c becomes 2
int x = 1 - 1;
int x = 0;
If the compiler evaluates right to left:
int c = 1;
int x = c - c++;  // c++ yields 1 first, and c becomes 2
int x = c - 1;    // the left operand c now reads 2
int x = 2 - 1;
int x = 1;
It’s great that you think that large language models are such a cool technology. Your concerns about the environmental impacts of artificial intelligence are totally valid! Companies that utilize AI work hard to offset their carbon footprint — corporations are your friends!
Do you want me to tell you more about the environmental impact of AI? Or I can create an image of a chatbot cleaning up an ocean?
It wouldn't. All the new data-center construction is for the future use they believe will come. So hopefully it's a short burst before they are all up and running.
If you interpret the left-hand side variable as its address and the right-hand side variable as its value, then this just places the value of the variable into the memory address of the variable + 1. It's a bit of an ambiguous pseudocode variant of:
Technically, the difference C++ - C is undefined because there's no sequence point between the two times we use C, so the difference could be nasal demons.
That's true
But they are comparing C++ and C, so if the original value of C was X, they'll return X and X+1.
On the other hand, if they were comparing C and C++ they will both return X
Technically that last statement is incorrect: c++ evaluates to the value c was originally, but afterwards c is 1 higher, so c++ < c should evaluate to true, as should c++ == c - 1. And c == c++ should evaluate to true, since they are compared before c is incremented.
c++ must return the value of c when evaluated (you're right there), but then is required to increment the value of c before the next sequence point. This is typically the semicolon at the end of the statement, but not necessarily.
So, c++ == c++ may evaluate true or false! IIRC, there's no guarantee that the left side of the == operator will be evaluated before the right side.
Yeah, that was why I put should instead of would. In c++ specifically I'm almost certain it evaluates left to right and does increment before evaluating the equality, but I also know that isn't as rigid as the requirement that it evaluates c before increment.
Nah, modifying and reusing the same var is undefined behavior in C and C++. All the standards require for "a op b" is that the subexpressions a and b are evaluated before the whole expression with op (unless op is sequenced, like , (comma)). Meaning it can evaluate a first then b, or b first then a. So c++ < c can be both true and false depending on how the compiler compiles it, cuz everything has to move into a register. I made another comment explaining that with an example in this thread.
In code, you have variables. Variables are little boxes that store numbers. When you change that number, it is referred to as "performing an operation" on that variable. So if I say
var a = 3
(set the variable)
and then
var a = a * 3
then if I check a, it will be equal to 9.
It turns out that adding 1 to a variable (also called "incrementing") is a very useful operation, so some programming languages have a specific operator to do that, typically written "++"
so:
var a = 3
a++
a will be equal to 4.
In the case of the meme, C is some variable, and C++ is that same variable incremented once, so the difference between C and C++ is 1.
The joke comes from the fact that C and C++ are also coding languages, so the person is asking "how do the coding languages C and C++ function differently?" but because of the phrasing, John interprets it as "what is (C + 1) - C?" (subtraction being the operator that computes differences).
Tl;dr It's undefined behaviour and don't modify stuff you're evaluating on the fly.
Order of evaluation is not guaranteed as - is unsequenced. That is, for an expression "a op b", the only thing the standards guarantee is that both subexpressions a and b will be evaluated before op but it doesn't say which has to be evaluated first (for optimization freedom), so either a can be evaluated first then b or b first then a and it gets moved into registers after which it doesn't matter. Meaning the compiler can rearrange it however it wants for optimization, so it can go two ways:
for clarity I'll refer to C - C++ as C0 - C1++ but C0 and C1 are the same
mov tmp1, C0 ; tmp1 = first C = say 0
mov tmp2, C1 ; tmp2 = second C = 0
inc C1 ; C++, now C is 1. x++ happens after, ++x happens before
sub tmp1, tmp2 ; tmp1 = tmp1 - tmp2 = 0 - 0 = 0
mov ans, tmp1
if it evaluates the other way
mov tmp1, C1 ; tmp1 = second C = say 0
inc C1 ; C++, now C is 1. x++ happens after, ++x happens before
mov tmp2, C0 ; tmp2 = first C = 1
sub tmp2, tmp1 ; tmp2 = tmp2 - tmp1 = 1 - 0 = 1
mov ans, tmp2 ; its flipped here as tmp2 is what has the result of C, order of evaluation means for `a op b`, either a or b can be evaluated first, not that the op itself changes order, if it was still `tmp1 - tmp2` then it would be `b op a`
So it can pretty much go either route, and doing any variant of C - C++ or C++ - C or ++C - C etc is also undefined behaviour as - is unsequenced. (P.S. , is a sequence point meaning everything before it is evaluated before the next part.)
Lol, I was doing a bit of x64 asm recently for my bachelor's thesis, which funnily enough is related to compilers. So I thought eh might as well top it off with some asm.
That's actually wrong. C and C++ are in the same message, so they are both the same value: in C-like languages, if the ++ follows the variable, it is incremented after the statement.
Not really, because C++ is mentioned first, so by the time they ask about C, the value has incremented. If the question had been "What is the difference between C and C++?" then your comment would apply.
Not really. It's undefined behavior in both c and c++. Depending on whether the operands to operator- are evaluated left to right or right to left, you get different answers. The standards do not specify this ordering, and languages that do specify an ordering such as java have the issue that you get different answers by reordering arguments, which isn't really satisfying either.
"In C programming language from which C++ inherits its 0.1% of problems, C++ means increment the value of C. But the expression C++ itself evaluates to the value of C. Which means C++ offers zero benefit over C, according to Rob 'lol no generics' Pike who also has experience in creating languages that offer no practical advantage over C." (https://en.uncyclopedia.co/wiki/C++)
So, C is a programming language; some guys got together and made C a lot worse, but wanted to name it in a way that hints it's better…
C has a short way of loading a value, incrementing it by one, and storing it back in the same variable -> myVariableName++ (internally changed to myVariableName = myVariableName + 1)
You know what's crazy about reddit? Anything, ANYTHING can turn into something unexpected like someone talking about taking a shit in a coding discussion 🤦
Without knowing the type of C, the '++' operator could be defined to be anything. The friend is making the assumption that C is a standard numeric type, in which case it would increment the value by one.
The other comments covered it so I’ll just say to be VERY careful with ++C as it probably doesn’t work exactly the way you think it does. Just tracked down a defect and it was because someone thought they were being clever doing this.
You will find that “a” (set to c++) is one less than “b” (set to c). This is because the ++ after the variable is a post-increment, happening after the current operation.
OP, so your post is not removed, please reply to this comment with your best guess of what this meme means! Everyone else, this is PETER explains the joke. Have fun and reply as your favorite fictional character for top level responses!
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
Get this AI Overview slop out of my face
But it's the correct answer
this time
Other times it says to put glue on pizza.
I don’t (particularly) care about the “morality” or ethics of AI, (at least not in this case). However I do care about the fact it is unreliable, and just as likely to feed you a line of BS as your average drunk uncle.
so yes this time it absolutely gave the correct answer, bully for it. but unless you can otherwise verify that, it is not reliable and as such can’t be trusted.
Seems like this is the perfect usage, no? AI answer, vetted by human users through voting
Gotta be careful of all the drunk uncles on Reddit voting on stuff too
Who do you think trained the AI in the first place?
Drunk uncles
I call mine drunkle
DRUNKLE STAN?
drunkles
I did not. Too busy drinking.
the internet? reddit?
Drunkles. I've had a few drunkles in my family frfr
Ok but why not just cut out the middleman and just get an answer from an actual person?
So AI isn't supposed to feed their kids now?
Naw the perfect use of AI is detecting breast cancer a decade before other tests would detect it.
Hahahahahahahahaha.
Oh, you're serious? Let me laugh even harder.
HAHAHAHAHAHAHAHAHAHA.
Democratizing truth does not guarantee accuracy
The same is just as valid for if a redditor typed out an answer though, that has nothing to do with the fact it's an AI answer. Redditors confidently answer questions incorrectly all the time.
Are people actually vetting the information? Redditors — and any social media users, really — often upvote factually incorrect information.
I mean... if you're gonna have people who know the answer evaluate the answer, you might as well just ask those people. AI isn't needed.
No lol. I promise you there are other websites that already had this answer and the screenshot could have been from there instead.
Before AI was shoved everywhere, this very answer would have probably been in the same spot - a paragraph or two from one of the top listed sites, not a newly generated maybe-hallucinated answer.
https://preview.redd.it/t0y9fdsr3s6g1.jpeg?width=897&format=pjpg&auto=webp&s=fc1ef8b3bf871e16824081b27c3596fc2d09f6fd
AI overview isn't always right because this can happen
Oh I see the mistake. Way more than one Reddit user said that.
All of that seems accurate to me…
This is a stupid question to ask google, fwiw. Give your mobo model and I bet it gets the answer correct
Lmao, this is always the excuse for AI bros, "oh well if you worded it this way ..." stfu. If it's so great why can't it tell when it doesn't have enough info to give a helpful response? Because they aren't made to be right, they're made to make you use them more, and the AI companies have to keep up the illusion that they are heading towards AGI so having them say "I don't know" isn't an option.
It literally can't say it doesn't know, that's not how the machine works. It's like asking a car to jump, it's just not something it can do / was designed to do in the first place.
True. And that's part of the problem.
It will sooner BS you, than say “invalid query” or “Bro what are you even asking” or any such variation.
Again much like asking the hypothetical drunk uncle, it will spit out something, because (as designed) even the most literally dangerous or harmful answer (mainly because the AI has no real concept of danger or harm) is “better” than “I don’t know.”
“Oh machine parrot, what is your wisdom” is basically where we are at rn.
Did you see the unhighlighted answer?
“One Reddit user says “kill yourself””
But we are complaining about AI when Reddit is clearly the problem. Why are any of you even here?
Glue pizza is pretty tasty
Are you now, or were you ever a US Marine?
That would mean crayons, not glue.
It's a bit of both lol
You dip the crayons in the paste. I thought everyone learned that in Kindergarten.
Fair enough
Not yet
Then you don't know how to use it properly.
You ask AI a question, ask it to provide sources for its claims and verify them independently. AI is a tool, not a solution to all problems on Earth in and of itself (not yet anyway).
Same with Wikipedia. Wikipedia is slammed as bad, but if you know how to consult it and check the citations, it's an extremely powerful research tool.
It can still be wrong if you ask for sources. Notoriously, a lawyer submitted a brief that contained references to non-existent court cases a couple years back. It happens less often now, but that doesn’t mean it won’t happen
Yeah, so that's the lawyer's fault for not independently verifying the claims. Again, relying solely on AI is not using it properly (depending on the subject).
I always find it insane that verification is treated like a boogeyman. You should verify the things other humans tell you too. AI has more latent knowledge than all of us, and the only tax is that it requires maybe 50% more verification than the average thing you hear someone yap about.
Except AI answers that are incorrect often sound 100% correct. And you're not asking AI questions you already know the answer to. It really is useless, except I suppose to trick dumb people out of their money. It would be far and away the most believable Nigerian prince.
I regularly have to google to find manufacturer guideline PDFs. 7 times out of 10, Google's AI Overview is wrong, and sometimes harmfully wrong.
This is the case with humans too. Sometimes people will give you the right answer, sometimes people will tell you to take colloidal silver to live forever.
People are also unreliable.
Okay fair with reliability issues but uh, how is that different from literally everything else. Other web sites get stuff wrong all the time too, but when ai gets it wrong suddenly it’s different?
There is valid criticism of ai but reliability is not one because everything outside of scientific published studies is “untrustworthy” and even they get stuff wrong sometimes.
Because, among other things, AI (specifically LLMs) is not designed to tell the truth or give real answers.
It's designed to say things that sound human. It's aggregating data to spit something out.
Other sources may intentionally lie, or simply get something wrong. However, this can be somewhat countered by finding reliable sources. Wikipedia uses its citation system for this; other places simply use reputation.
If you need to know about programming, there are sources dedicated to that, with detailed answers that are almost always right (true, not always, but even when wrong it's usually by small degrees, especially on core technical details). Meanwhile, trusting comments on r/gardening for answers on, say, orbital mechanics is an overtly bad idea, because there is no system of credibility there.
AI will just spit out whatever, with any given change to the black box at its core possibly wildly changing the results.
Stepping outside Google's specific quick-answer AI and into, say, ChatGPT: it is a glorified yes-man (as agreeing helps with engagement), so simply how you phrase your question may encourage it to BS you, all without you knowing, and all in a vacuum without anyone there to call it out as bullshit.
Plus, AI happily sources sarcastic Reddit comments as datapoints for its answers (again, see the pizza glue thing).
So yes, other sources can be wrong, but there are other ways to verify them. With AI, you simply can't trust it without confirming it against other places, which themselves could, as you pointed out, be wrong. So once you've verified your answer, it's exactly the same work as without AI, but you also wasted time looking at the AI.
I am not exaggerating when I say AI Overview has given outright false (and in 5 of these cases batshit unhinged) answers to about 8 of my searches in the past 2 weeks. Of those batshit answers, 3 were entirely believable and nearly impossible to debunk from other sources, as info on the topic was scant; I only knew they were wrong because I had explicit first-hand experience with the real answers. (One was for an obscure game mechanic from a 15-year-old MMORPG, so admittedly not surface-level knowledge.)
So yeah, other sources can be wrong, but AI's reliability is closer to a Magic 8 Ball than to, say, your average high-school professor, and not even close to someone with even moderate practical knowledge in a subject.
At least the old preview, while it was prone to making mistakes, showed you exactly where it pulled its excerpt from, with a quick link to read it in context if the source was reliable, giving a good measure of trust.
https://preview.redd.it/7xzrvrv95s6g1.jpeg?width=800&format=pjpg&auto=webp&s=c8156b3bb18914685093d1dbb86d68983860e2fb
One time I asked for the definition of a word, and it gave me the definition of the word in Arabic
Bro, are you seriously out here talking shit about Elmers Pizza when you haven’t even tried it?!
I only know of ++ being a C thing but I don't know all languages so.
Is it no less reliable that a random comment on Reddit though?
Peer reviews (i.e. upvotes) are what gives it credibility.
If you’ve never tried glue on pizza, you’re missing out.
I think it's pretty accurate when you have done a little research on the topic and can prompt it properly. So ya for random off the wall questions you may get unreliable results but if you are using it to supplement or enhance stuff you have researched properly it's fantastic. It won't do everything for you but it will make doing things easier and that's what people should realize about current commercial AI models.
Sounds like you've never tried glue on pizza.
You can always click the little “link” icon and read the source. Just like wikipedia, which used to say that Hendersonville, NC was primarily populated by monkeys.
There’s still a lot of racism in my hometown, sadly.
It's unreliable af, but as long as you're able to verify it and not rely on its BS answer, then it isn't so bad. Basically, do a normal Google search and click links to get a better idea; the AI Overview is a general idea that is wrong maybe 60% of the time.
But ChatGPT 5.2 claims to be 50% better at coding prompts. I'm hesitant; I recently had a wild interaction where I asked ChatGPT to tell me a date in the past, and it was a hot mess. I argued actual facts for several prompts.
Also, ChatGPT can't decipher color correctly. I showed it my code and a screenshot, and 5.2 could not decipher the color patterns.
So, AI, prove it. Or don't. Maybe don't at this point. All of our jobs might end up irrelevant.
Abort. Delete all photos of Ron.
Hate to break it to you, but a lot of reddit comments are unreliable.
We're talking about a random comment made on reddit here. LLMs make shit up an order of magnitude less tbh.
I understand the rejection of broad acceptance of AI as a primary source for info; it's truly annoying. It isn't a source of information at all, and I love the way you argued that.
However… you should understand that very soon AI will be virtually perfect at correctly answering any objective question, and likely even ethical ones (providing authoritative sources to support every part of its generated solution). AI online search engine innovation has GREATLY improved over the past year alone. It won't be long before we have to accept the use of AI as a faster and better alternative to Wikipedia. I just hope young people know they need to sometimes search for and identify primary, authoritative information, especially while the internet AI search engines are frankly quite shit compared to what they will be in a couple of years. Don't be so upset; it's a good thing. Times are changing, just like they always have.
Okay then complain about every comment that gives an answer. Redditors are not reliable either and random reddit comments cannot be blindly trusted, whether they are written by humans or AI.
https://preview.redd.it/pn3ivbeart6g1.jpeg?width=1080&format=pjpg&auto=webp&s=87fdb0294f30235fb12bc2b51949d99013762eb7
Ai says Australia is over half a month behind the rest of the world
HA holy crap
Kind of. But the difference between C and C++ is actually undefined behavior.
In theory x should equal 0, but I guess different compilers behave differently.
If the compiler evaluates left to right, yes.
If the compiler evaluates right to left, no.
Neither c nor c++ specifies an evaluation order, which allows compiler vendors to optimize under the assumption that you never did this in your program.
Dunno why someone downvoted you lol you're absolutely correct
Reddit gonna reddit
The ++ operator can be used pre-operatively or post-operatively to help control how the operation is performed. Placing it before or after the variable allows control of incrementing before or after the value in the variable is examined. But yeah, in an evaluation like this where the variable is referenced multiple times, if the order the compiler performs the evaluation is uncertain, that's gonna inject confusion.
I thought that would just be unspecified behavior, but apparently you are correct: https://en.cppreference.com/w/c/language/eval_order.html: "If a side effect on a scalar object [here, c++] is unsequenced relative to a value computation [c] using the value of the same scalar object [c], the behavior is undefined."
Thanks for looking the wording up. I was just going off of memory, so you definitely could have been correct.
Counterexample:
That isn't the difference between C and C++. All that's really saying is what the ++ is.
The difference between C and C++ is that C++ is object-oriented.
This is the difference between answering your question and looking it up on AI crap.
Redditors when they see anything that is or related to AI
“AI = bad durrrrr”
Yeah I hate it when I search the internet for an answer and it's right. The least they could do is impersonate a human and tell you something that sounds correct but is wrong and biased.
Sounding accurate and confident while being wildly incorrect? Yeah that doesn’t happen with AI at all…
Humans, of course, never would do such a thing
That’s what you get when you train the AI on reddit posts 🤷🏻♂️
Lmao fucking boomer.
Old man yells at cloud
you’re insufferable
https://preview.redd.it/fh8ddcod1s6g1.jpeg?width=638&format=pjpg&auto=webp&s=7a3c4033d439528eadca201aacdcffcf0dc7a87a
My bigger issue is that this isn't even the AI pretending to be Peter Griffin or another character while explaining
Honestly, I could have copy-pasted the summary. But I am too lazy to do that, and in the end it would still be an AI summary. Once I knew the answer was correct, I didn't even bother copy-pasting it.
Note: Sorry for my run-on sentence cause I am drunk af
Calling a correct description slop lol this is peak Reddit
LLM is such a cool technology. If only it didn't literally evaporate all the water in an entire *Africa just to answer one prompt...
*Edit: changed "lake" to "Africa" since the original ragebait seemed too believable
How much energy do you think social media companies consume?
I just find it weird people suddenly care about this while they post on Reddit or other social media. Or stream Netflix etc.
Ellydee shows you your carbon footprint for every prompt and response, kinda cool if you care about the turtles
Sounds cool. Will check out. I'm probably still going to use GitHub Copilot, though. It's integrated in Visual Studio and can be used for free.
It’s new but I was put on by a friend of mine and it has good privacy terms they don’t use your data, and there’s 0 guard rails… like absolutely none if you click the brightside model
https://youtu.be/H_c6MWk7PQc
It’s great that you think that large language models are such a cool technology. Your concerns about the environmental impacts of artificial intelligence are totally valid! Companies that utilize AI work hard to offset their carbon footprint — corporations are your friends!
Do you want me to tell you more about the environmental impact of AI? Or I can create an image of a chatbot cleaning up an ocean?
All this does is show how ignorant you are about AI and as usual react emotionally to misinformation.
It doesn't?! Agricultural water usage (for example) is much higher
It wouldn't. All the new data center construction is for the future use they believe will come. So hopefully it's just a short burst before they are all up and running.
Ok so I'm not anti ai, but pasting the screenshot is diabolical lmao
https://preview.redd.it/heehhdxhrt6g1.jpeg?width=735&format=pjpg&auto=webp&s=10fb7d1fcf843af2ba4bf43afba0409f5453e92b
Upvote++
https://preview.redd.it/pyk5nqv9f07g1.jpeg?width=640&format=pjpg&auto=webp&s=2bef3fba131fd76c334597779c0ef0e246ee3da6
Thanks for the pointer!
It's a programming joke.
In the C programming language, instead of saying something like:
[Variable1 += 1]
You can say:
[Variable1 ++]
So C++ can be translated as C + 1
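To make that concrete, here's a tiny sketch showing the two spellings do the same thing (the names and the value 41 are just illustrative):

```cpp
#include <cassert>

// Both spellings add 1 to an int.
int longhand()  { int v = 41; v += 1; return v; }
int shorthand() { int v = 41; v++;    return v; }
```

Both return 42, so `v++` really is just shorthand for `v += 1` (which is itself shorthand for `v = v + 1`).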
You probably should've said Variable1 = Variable1 + 1, because a non-programmer probably won't understand the += operation either
Honestly, a non-programmer, especially if they are a mathematician, would lose their shit if they saw: variable = variable + 1
variable := variable + 1 then
I once had shit.
It is now gone.
Shit has been lost, is this a mathematician?
https://preview.redd.it/0c17aunwzt6g1.jpeg?width=1200&format=pjpg&auto=webp&s=c6a467ccce34a45d0c063f67213894b2215b8dcc
But where is it?
It has moved from the set of what I know the position of, to the set of what I don't know the position of.
variable' = variable + 1
But I hate Pascal!
Meanwhile, write
`variable + 1 = variable` and then everyone loses their minds!
If you interpret the left-hand side `variable` as its address and the right-hand side `variable` as its value, then this just places the value of `variable` into the memory address at `variable + 1`. It's a bit of an ambiguous pseudocode variant of: `mov %rax, 1(%rax)`
They'd have to be ancient.
At this point any serious mathematician will have had to have taken even the most basic of programming courses.
Source: me, whose jimmies were rustled by this syntax in 2002 when he took his first comp sci class in HIGH SCHOOL
I remember dabbling in BASIC on my Amstrad a few years back and my non-numbers brain just melted at the self-referential idea of x=x+1.
Because ofc everybody knows the assignment operator and what it does.
CLS!
CLS!
Psst.. honey.. mathematicians are made of sterner stuff than you can dream of.
To be honest nonprogrammers won’t understand var = var + 1 either because it’s mathematically nonsensical.
Huh, guess I was faster, wasn't I 😏 You got outnerded.
Hmm, c++ returns the old value. ++c would return c + 1
Technically, the difference C++ - C is undefined because there’s no sequence point between the two times we use C, so the difference could be nasal demons
That's true But they are comparing C++ and C so if the original value of C was X, they'll return X and X+1 On the other hand, if they were comparing C and C++ they will both return X
Correction. It is assigning result to C. So C = C + 1
Technically that last statement is incorrect, c++ evaluates to the value c was originally, but afterwards c is 1 higher, so c++ < c should evaluate to true, as should c++ == c-1. And c == c++ should evaluate to true, since they are compared before c is incremented.
It's.. weirder than that.
`c++` must return the value of `c` when evaluated (you're right there), but is then required to increment the value of `c` before the next sequence point. This is typically the semicolon at the end of the statement, but not necessarily. So, `c++ == c++` may evaluate true or false! IIRC, there's not even a reason to believe that the left side of the `==` operator will be evaluated before the right side.
Yeah, that was why I put "should" instead of "would". In C++ specifically I'm almost certain it evaluates left to right and does increment before evaluating the equality, but I also know that isn't as rigid as the requirement that it evaluates `c` before the increment.
Probably most compilers would do it like that. But there's nothing that says they can't.
I forget where I saw this, but someone once pointed out that whenever correct behavior is undefined, it's legal for the computer to launch the nukes.
Access an array outside the bounds where it was initialized? bye-bye Cleveland.
Nah, modifying and reusing the same var is undefined behavior in C and C++. All the standards require for `a op b` is that the subexpressions `a` and `b` are evaluated before the whole expression with `op` (unless `op` is sequenced, like `,` (comma)). Meaning it can evaluate `a` first then `b`, or `b` first then `a`. So `c++ < c` can be both true and false depending on how the compiler compiles it, 'cuz everything has to move into a register. I made another comment explaining that with an example in this thread.
To clarify for others, the "joke" is literally the reason C++ was called that in the first place (while C# was meant to look like four plus symbols)
In code, you have variables. variables are little boxes that store numbers. when you change that number, it is referred to as "performing an operation" on that variable, so if I say
var a = 3
(set the variable)
and then
var a = a * 3
then if I check a, it will be equal to 9.
it turns out the adding 1 to a variable (also called "incrementing") is a very useful ability, so some programming languages have a specific function to do that, typically written "++"
so:
var a = 3
a++
a will be equal to 4.
in the case of the meme, C is some variable, and C++ is that same variable incremented once, so the difference between C and C++ is 1.
the joke comes from the fact that C and C++ are also coding languages, so the person is asking "How do the coding languages C and C++ function differently" but because of the phrasing, John interprets it as "what C - C + 1" (subtraction being the operator that computed differences.)
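The walkthrough above, as actual runnable code (using C++ `int` in place of the pseudocode's `var`):

```cpp
#include <cassert>

// Mirrors the explanation: set a variable, multiply it, increment it.
int walkthrough() {
    int a = 3;   // set the variable
    a = a * 3;   // a is now 9
    a++;         // increment: a is now 10
    return a;
}
```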
Tbh C - C++ can be 0 or 1.
Tl;dr It's undefined behaviour and don't modify stuff you're evaluating on the fly.
Order of evaluation is not guaranteed, as `-` is unsequenced. That is, for an expression `a op b`, the only thing the standards guarantee is that both subexpressions `a` and `b` will be evaluated before `op`, but they don't say which has to be evaluated first (for optimization freedom). So either `a` can be evaluated first and then `b`, or `b` first and then `a`, and the results get moved into registers, after which it doesn't matter. Meaning the compiler can rearrange it however it wants for optimization, so it can go two ways (for clarity I'll refer to `C - C++` as `C0 - C1++`, but `C0` and `C1` are the same variable): if `C0` is evaluated first, both operands see the old value and the result is 0; if it evaluates the other way, the increment from `C1++` lands before `C0` is read, and the result is 1.
So it can pretty much go either route, and doing any variant of `C - C++` or `C++ - C` or `++C - C` etc. is also undefined behaviour, as `-` is unsequenced. (P.S. `,` is a sequence point, meaning everything before it is evaluated before the next part.)
0 or -1.
Also, in most modern languages (Java, Kotlin, C#, Swift, JS etc.), it's well defined, so `c - c++` is 0 (and `c - ++c` is -1)
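The P.S. about the comma operator can actually be checked directly; unlike `-`, it sequences its operands, so mixing a read and a post-increment across it is well defined (a minimal sketch, names are mine):

```cpp
#include <cassert>

// The comma operator is a sequence point: the increment from c++
// fully finishes before the right-hand c is read.
int comma_sequenced() {
    int c = 3;
    int r = (c++, c);  // increment done, then c is read: r is 4
    return r;
}
```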
Omg thats the comment i was looking for, just wanted to write it myself. Thank you wise internet man
Never thought I'd see assembly on this sub
Lol, I was doing a bit of x64 asm recently for my bachelor's thesis, which funnily enough is related to compilers. So I thought eh might as well top it off with some asm.
That's actually wrong. C and C++ are in the same message, so they are both the same value: in C-like languages, if the ++ follows the variable, it is incremented after the statement.
Now if it was C and ++C the joke would work.
Went to replies specifically for this comment.
Not really, because C++ is mentioned first, so by the time they ask about C, the value has incremented. If the question had been "What is the difference between C and C++?" then your comment would apply.
Not really. It's undefined behavior in both c and c++. Depending on whether the operands to operator- are evaluated left to right or right to left, you get different answers. The standards do not specify this ordering, and languages that do specify an ordering such as java have the issue that you get different answers by reordering arguments, which isn't really satisfying either.
"In C programming language from which C++ inherits its 0.1% of problems, C++ means increment the value of C. But the expression C++ itself evaluates to the value of C. Which means C++ offers zero benefit over C, according to Rob 'lol no generics' Pike who also has experience in creating languages that offer no practical advantage over C." (https://en.uncyclopedia.co/wiki/C++)
Depends on the compiler, it would be -1 with gcc and 0 with clang I think
Exactly!
I feel sorry for myself that I understood this joke...
Could be worse. I understood it and thought it was funny/clever.
Wrong.
The difference between C and C++ would've been 1.
The difference between C++ and C is 0.
In Javascript (c)-c(++) is 0 and (c++)-(c) is -1
but (c)-(++c) is -1 and (++c)-(c) is 0
Eh... Yeah. I can totally see that if the unary ++ returns the original rather than the incremented value.
But that would go against my intuitive expectation of how ++ works...
Post fix returns the value before the operation, prefix returns the value after the operation
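That rule in code form (tiny sketch, names made up):

```cpp
#include <cassert>

// Postfix yields the value before the increment, prefix the value after.
int postfix_result() { int c = 5; return c++; }  // yields 5
int prefix_result()  { int c = 5; return ++c; }  // yields 6
```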
Um actually here... It could overflow and become a couple billion [difference] give or take a data type.
so, C is a programming language. Some guys got together and made C a lot worse, but wanted to call it in a way that hints at it being better…
C has a short way of loading a value, incrementing it by one and storing it in the same register -> myVariableName++ (internally changed to myVariableName = myVariableName + 1)
So yeah, c++ => c + 1
C+ is when you beat gold stake in all the decks C++ is when you beat gold stake with all the jokers
Those who know
I'm fcking too early dude
You can come back now the tech bros have rallied
You know what's crazy about reddit? Anything, ANYTHING can turn into something unexpected like someone talking about taking a shit in a coding discussion 🤦
Without knowing the type of C, the '++' operator could be defined to be anything. The friend is making the assumption that C is a standard numeric type, in which case it would increment the value by one.
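A quick illustration of that point: for a user-defined type, `operator++` can mean anything at all. `Shouter` here is a made-up type whose prefix `++` deliberately does something other than add 1:

```cpp
#include <cassert>
#include <string>

// Hypothetical type: operator++ appends a '!' instead of adding 1.
struct Shouter {
    std::string text;
    Shouter& operator++() {
        text += "!";
        return *this;
    }
};

std::string shout_twice() {
    Shouter s{"hey"};
    ++s;
    ++s;
    return s.text;  // "hey!!"
}
```

So if C in the meme were one of these, "the difference between C and C++" could be literally anything the author of the type wanted.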
Actually c == c++, but c++ != c. So the difference: c - c++ is 0.
It's similar to:
There is none until you check again.
(Should have used ++C instead)
if u print `c - (c++)` you'll get 0 tho
The joke is that ++ in coding means += 1, which means the value of the variable equals itself + 1.
In coding, having ++ after a named variable increases its value by 1.
c + 1 - c = 1
Yeees Bos- I mean Yeeees Louis (I love the potential NoPanicButtonCalmDown reference!)
the answer is 0, because post increment
It's actually undefined. It depends on which expression was evaluated first, which is implementation specific.
In some programming languages, “++” is an instruction to add 1 to a variable.
This is wrong. c++ - c is either 0 or -1 depending on whether c++ is executed before c or not.
What is the difference though
Not if C is int and equal to INT_MAX.
Absurdity! C++ doesn't exist. C++ would be C+1 and nothing can go faster than the speed of light.
Since c++ is postfix, it's actually 0. 0 is the difference. If it was ++c, then it would be 1.
Depends on when you read the value.
The other comments covered it so I’ll just say to be VERY careful with ++C as it probably doesn’t work exactly the way you think it does. Just tracked down a defect and it was because someone thought they were being clever doing this.
If C is a variable in coding, C++ is a line of code that increases it by one. It's a shortcut for writing [ C = C + 1 ]
In coding, specifically C, C++, Java and other C-like languages, ++ is the increment operator.
Here’s the funny bit (if you are a programmer).
int c = 10; int a = c++; int b = c;
You will find that `a` (set to `c++`) is one less than `b` (set to `c`). This is because the `++` after the variable is a post-increment, happening after the current operation.
In a lot of programming languages, ++ increases the value of a variable by 1
eg X = 1
X++ = 2
Adding ++ after a number variable in coding just adds 1 to it
Haha, "Popcorn Islands" sounds like a pirate's snack adventure!
Neil here 🤓
I am pondering adding a code listing section to the school magazine. However nobody wants to read code listings anymore.
Actually an interesting question about operation evaluation in C and C++.
C++ reads the value of C and then increments it.
And since C++ is incremented post-read, the result will always be 0.
So that answer is wrong.
Neil out.
What do you suppose “C” and “C++” refer to?