Funny. That was one of the first use cases for agentic browsers that I thought of.
I used to work for a very large financial institution. The training was so basic and obvious you could just skip to the questions at the end without watching any of it.
It was so easy, that if anyone was stupid enough to fail it they should have been fired on the spot.
Yeah but the agentic browser can do all the clicking for me. And as you said, it's stupid enough that even the dumb AI should be able to figure it out. It's also something where it can't do much damage when it screws up, and a boring task that I don't want to do... in other words, perfect for AI.
I really hope you just forgot the /s
Because yes, it can do the training, but if you fall for this shit in real life it's on you...
No, I didn't forget the /s.
These trainings are compliance theater. I've got half a dozen different ones to do every year.
Edit: Given how it's going, I'm expecting that another one telling me to use more AI will be added soon.
The average redditor is not the target for these trainings as they are probably pretty web savvy. I've known many (usually older) coworkers who do need to keep this in mind or they will get phished. I think the better way is to do simulated phishing and follow up on the failures. That is the only thing my company does outside one yearly training.
Yes, but everyone has to complete the training at least once, because then nobody can say they had no training...
After that it's totally normal to just run fake phishing campaigns, and everyone who fails has to repeat the training.
But every tech bro will tell you he knows it all; then he's at a new company, under stress, and bam, he gets phished by a completely official-looking mail from the new company.
That's why everyone has to do the training once.
Yeah I agree the once a year training is fine.
The simulated phishing is training me to ignore my email...
I also discovered that Outlook has no way to view a raw email, with headers, etc. That was a wtf.
A lot of the reason for that kind of training is just so that if someone does a bad thing later, the company can say: look, we did the training, they knew they were doing a bad thing, we've fired them, now please don't fine us as much.
Like I have trouble believing that any competent adult doesn't already have an intuition of what money laundering is (even if they don't know the specific finance terms for the various components of it), but every finance company on the planet is gonna be doing yearly AML training regardless
We take monthly training for various things and every time at the end there's a quiz, and the answers are not only plainly obvious, but it's usually "consult a manager"
That sounds awful!
I like how users are lazy because they are burnt out on being responsible for the 900 ways technology is both not secure and invading your privacy.
Don't want to deal with all that? You didn't ask for any of it?
Lazy.
still, enough people fall for obvious phishing mails and give out company data....
Seems like a problem technology/the company should solve, as it is a problem technology/the company introduced.
Understand, the problem isn't that people fall for these emails. The problem is that it's that easy to get into a company's systems. Companies implemented all of this knowing that.
Yeah, that's not how phishing works. People are always the weakest link in any security system. You have to (at least attempt to) educate them. It's not hard to get a phishing email past a mail filter, and when one gets through, many, many people will click on it.
I'm WELL aware that people are the weakest link. I'm also aware that companies know this and accepted that risk.
It's a rule of system design that you don't introduce ANYTHING that relies upon people not touching it. People will touch it. People will fall for tricks. People will think they know better. People will get confused. And on and on. Sure, the companies want to mitigate that behavior, but as established, they know that training people will only be somewhat effective against the variables that they introduced, which let's be honest, most of them don't understand. If they implemented a system that can be infiltrated by the easiest, most reliable way to infiltrate something, we're all aware here that none of them are actually secure.
Sounds like you suggest removing hyperlinks from emails. Because that's the only way for users to not click on them. Not a very practical solution 🤷🏻‍♂️
Let me give you a simple analogy.
You can have the most secure lock on Earth and the most impermeable alarm system imaginable.
They both won't do squat if your daughter gives the burglars the key with the alarm code attached to it.
And sure - there are solutions to that. That problem is solved. The solution? Simple - don't give anybody access to anything. I just solved cybersecurity...
...but that means nobody is able to do any actual work, which kinda sucks. Your payroll needs access to payroll systems, your IT guys need access to all kinds of environments, your HR need their HR systems. And what's that? Oh, right! That HR system has to interact with various data stores, otherwise it's useless!
And what happens when John from HR clicks on that totally legitimate e-mail from totally Microsoft and enters his credentials? You guessed it, a data breach.
That's such a stupid take.
Working in IT, I can tell you it is not easy to get into most companies' systems; that's why people are targeted with social engineering (starting with phishing emails, up to much, much more sophisticated methods customised for a single target person in the company).
The company can be the most secure in the world, but if one employee falls for such a thing and gives out his login data, the company is fucked.
You are obviously forgetting how technically incompetent most users in a big company are, and that's why those trainings exist.
Employees are just another attack vector, and as I already said, as long as they fall for phishing etc., IT security has to try to teach them how to recognise it.
All cybercriminal attacks I have seen in real time happened because people fell for those emails, and yes, that is the fucking problem.
Honestly, this is why I got out of the tiger team side of IT security back in the late 90s. It was fun at first, but ultimately depressing. People were always the vulnerability, and nobody was willing to put in the time and training to mitigate it.
Yeah, it's a really interesting topic but I am glad it's not my job.
Why implement systems that rely on this totally predictable and somewhat simple failure never happening?
As someone else described it in another comment:
You can build the most secure safe on earth, in the most secure house on earth. Only you and your wife know the combinations; if your wife gets social engineered into entering the combinations somewhere else, all your security is worthless.
The solution would be for only one person to have access, but even then it's possible that this person gets played....
So the final solution would be no access for anyone?
There are enough security mechanisms behind the normal user and his password that an attack will be stopped in time, before any damage can be done. If the user is an admin it's more difficult, but there are still security measures which analyse network data, which users are accessing which servers, and so on and so forth.....
It's fucking impossible to make a completely secure system. If you think you can do it, go for it and earn billions and billions of dollars with it, because nobody has figured it out by now.
Because the only other option is building systems that nobody can access, including the people that do actually need to access them.
We have a safety quiz at work: What should you do in the event of a tornado?
Answers:
(a) Get the patients to a safe place,
(b) Run outside to take a selfie with the tornado.
I really wonder if an AI would get this right…
EDIT: the reason this answer exists is to test if people are reading the answers before choosing one. This is a standard way to validate a test.
I wonder whether that question is there because last time someone chose b (IRL, not on a test).
IIRC there's a meme photo that might be being referenced.
Getting a selfie with the tornado is obviously important for documenting its size in case the company wants to make an insurance claim for all those patients the tornado killed.
I’m pretty sure insurance won’t cover Acts of Nature, but definitely sure insurance doesn’t cover Acts of stupidity!
St Peter: …and what were you doing when you died?
Employee: I was taking a photo of the tornado, for insurance purposes
St Peter: …and did that help?
Employee: No, they said they’re not responsible for management decisions.
As a large language model I have no physical presence that I can take a selfie of. But I can generate realistic images. Do you want me to make a picture of you with a tornado? Just let me know what you want to see and I’ll make it.
Two days ago my institutional research ethics board asked me to take a “refresher” course on CITI human subjects. Comet did it all for me, and passed the assessment with 98/100 points.
AI browsers are so stupid they probably don't even use ublock
Browsers like those from Google and Microsoft harvest data relentlessly. Blocking them protects privacy; good call for the future.
You confused? Your reply has nothing to do with what they said.
I mean the original comment has nothing to do with the article so...
https://en.wikipedia.org/wiki/Lateral_thinking
Why would you care about spyware on the web if you allow spyware from Microsoft/Google?
What the fuck is a ublock? All my homies use ublock origin. Not that fake shit.
This is correct but many just call it uBlock, and for Firefox, there is nothing called just "uBlock" available on the Firefox extension store.
For Chrome, "uBlock" exists. Yeah, don't use that. Use Firefox, because Chrome crippled ad blocking extensions, but if you must use Chrome, use uBlock Origin Lite.
Good advice. Although if you're going to use a Chromium-based browser, why not use Brave at that point? I am on Firefox and I won't change my browser anytime soon, but Brave seems to block ads by default, doesn't it?
Brave is only ethically used for specific purposes, or on an Apple device like an iPhone where you can't install uBlock Origin into Firefox.
It's connected to right-wing figures. At least Google is only chasing the money.
Wow, Firefox not supporting add-ons on iOS is wild.
Blame Apple's BS. I can run RES, ublock origin, and most anything else on firefox on my phone.
By RES do you mean the Reddit Enhancement Suite? If so, why do you use the web version of reddit instead of a custom client like Continuum?
It's faster than any app will be in practice unless the internet's slow in general or reddit's experiencing issues.
it's not hard to install a single extension. never understood the appeal of brave
uBlock Origin didn't survive the migration to manifest v3.
A less clickbaity part of the article:
The firm offered that advice last week in a new advisory titled “Cybersecurity Must Block AI Browsers for Now,” in which research VP Dennis Xu, senior director analyst Evgeny Mirolyubov, and VP analyst John Watts observe “Default AI browser settings prioritize user experience over security.”
Not a title though
If you use an AI browser, it tells me all I need to know about you.
Yeah but they are adding AI to your browser
At least in Firefox you can disable it, and there are forks of Firefox like Waterfox that have zero AI implemented.
Ty I need to go download that
Can Firefox be used/downloaded on iPhone with the AI disabled? Ty
Not really. All browsers on iPhone are forced to use the Safari framework, so many custom features won't work the same.
Thank goodness too. Safari doesn’t include any AI crap.
I’d prefer to have true Firefox and disable the AI. That way I could run extensions like ublock origin, which is the main thing I miss from switching back to iOS.
Safari has uBlock Origin
Thanks! Did not know about this.
Not my browser.
Like AI LLMs, an AI browser can have its uses.
I have one installed that has come in handy a few times to scrape data into tables and find changes on a site. Would never use it as my daily browser.
Neither of those activities requires AI. And if you use AI, you have no way to verify that it's correct, unless you repeat the work yourself.
Sure, I'll just fire up a scraper that I have already set up for that specific site and let it run.
Or I just drop in the URL, type my request in natural language and spend a few minutes checking the info.
No possible way of verifying it’s correct? Sure there is. I can read the table and confirm it’s within ranges of what i expect. I can spot check a percentage of the data and confirm it’s correct.
This is all stuff you’d have to do anyway. You’re also assuming the data from the webpage is correct. Are you fact checking that? You following the tls cert chain to make sure the website is authentic?
At the end of the day, I’m not basing my dissertation off of a quick ai summary of a webpage. It’s good enough for getting through boring day to day stuff.
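For what it's worth, that spot-check workflow is trivial to script. A minimal sketch in Python, where the file name, column names, and expected ranges are all made-up assumptions for illustration:

```python
# Minimal sketch: range checks plus a random spot-check sample.
import pandas as pd

df = pd.read_csv("ai_extracted_table.csv")  # hypothetical AI-scraped output

# 1) Confirm every row is within ranges you already expect.
assert df["price"].between(0, 10_000).all(), "price outside expected range"
assert df["year"].between(1990, 2026).all(), "year outside expected range"

# 2) Pull a random sample of rows to verify by hand against the source page.
for _, row in df.sample(n=min(10, len(df)), random_state=0).iterrows():
    print(row.to_dict())  # eyeball these against the original site
```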
To be fair, it would probably be fine for your dissertation.
Accurate because the only thing I have an AI browser installed for (ChatGPT atlas) is to do corporate trainings. Fails at anything else but flawless here
Thanks for the tip :D
I work at Surfshark, and we’ve been researching agentic AI-integrated browsers lately, too. When we compared browsers with built-in AI, some of them, such as Chrome + Gemini, collect a massive amount of data by default: things like your name, location, browsing history, search history, device IDs, even purchase history. Edge + Copilot wasn’t far behind. The need for convenience is understandable; however, users should be aware of the amount of data collected.
That's still fine; we've been giving our data to Gmail for a decade. The LLM, however, lies to your face using your own data.
As absolutely shit as Gartner is, they are right with that statement.
Like a broken clock is right twice a day
I feel like the most savvy users are not using AI at all, and that further skews the growth of AI into the "untrustworthy". Not that you can trust it anyway because it uses the words of flat earthers as readily as it uses the words of Ptolemy...
Yes it lies. Unreliable. Churns up totally absurd regulatory facts
I appreciate that in Brave Browser you can disable the cloud AI feature and, if you'd like, replace it with a local LLM. I do that and it was really easy to set up.
Edit: Fascinating that Google bots are upset at me for this comment.
I don’t get how people can use a browser that has modified user requests in flight to inject the company’s own crypto referral codes.
Even if they don’t do it anymore that’s such a fundamental breach of user trust that I don’t think anyone should be touching it with a barge pole.
They're just a hilariously disgusting company. It's so fucking "brave" to get ousted from Mozilla because you used your millions of dollars to oppress a marginalized minority group.
Yeah the founder being a bigoted piece of shit was my initial issue with the browser, then they just vindicated my decision with their awful technical decisions.
Built-in adblock? Sign me up! But then they're actually just replacing the ads with ones from their own service? Seriously?
Awful technical decisions, broken user trust, and the fact that it stems from being a crypto cash grab is all anyone needs to know to stay away from it.
Ahh, somehow I missed that, but it explains why all the shitheads in my life seem to like it so much. I've just stayed away because of all the crypto garbage.
You sure turning that off really turns it off though?
I'd rather it not be there in the first place.
Brave does its own tracking and ads.
Use Firefox
Better yet, LibreWolf
Cool, use a browser sponsored and founded by Peter Thiel…
Thiel hasn't been attached for years; Founders Fund participated in a single investment 10 years ago with no voting or oversight shares. It's also considered among the most secure and privacy-focused browsers by the Electronic Frontier Foundation. Google funds Firefox, should people stop using that?
Edit: Thiel and Altman are both investors in Reddit, by the way. If you are concerned about that.
https://www.cnbc.com/2014/10/01/reddit-raises-50-million-plans-to-share-stock-with-community-members.html
Reddit directly feeds into OpenAI. It's why they killed all the API access. To get exclusivity over data.
Now now, can't let ideologies get in the way of convenience. It's all proud signaling until you're hit with something too inconvenient, in which case you just sweep it under the rug and pretend it's not a thing.
Google is not the bad guy here…
The advertisement AI propaganda monopoly aint the problem here guys...
That's a wild take.
Yeah, those have to be people hired by the company. Whenever there is a thread related to browsers there's always someone popping up about Brave, no matter how bad privacy-wise the browser is.
I'm being completely serious here but brave is actually a browser that people use? I always assumed it was malware
It's both malware and actually used
You should write to the EFF with your evidence. Bah wait, you're just lying.
It is mostly crypto bros who have lost a ton of money on the Brave crypto that still recommend it, they're desperate for adoption hoping that it'll pump their investment (gamble). Any serious person who is knowledgeable and security/privacy oriented recommends Firefox or one of the Firefox forks for users who really know what they're doing and need something more specialized.
Firefox will cease to exist the moment Google decides to end its partnership. And Firefox is forcing agentic AI on its users.
The thing that makes any malware dangerous are the people who willingly use it and/or swear to you that it somehow isn't malware. Brave is a disturbingly good example of this.
It's open source, show me the malware.
Get back to me when the EFF stops recommending it, otherwise you can save your fake outrage.
How large is that dataset going to be? Can you review that?
It just connects to your local Ollama instance through the localhost connection, so it's using whatever settings you have there.
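For anyone curious what that looks like under the hood, here's a minimal sketch of talking to a local Ollama instance in Python; the model name is just an assumption, use whatever you've pulled locally:

```python
# Minimal sketch: everything goes to Ollama's default localhost endpoint,
# so prompts and page text never leave your machine.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",  # Ollama's default local API
    json={
        "model": "llama3",                  # assumption: any model you've pulled
        "prompt": "Summarize this page: ...",
        "stream": False,
    },
    timeout=120,
)
print(resp.json()["response"])
```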
You do realise your local LLM is every bit as susceptible to prompt injection and attacks as any other, if not more, though?
This does nothing to address the issue pointed here.
This whole AI thing is going to backfire on the boosters harder than anyone else and that is poetic and hilarious.
Like dot-com? LTCM? Or Enron?
LTCM is my favourite thing ever; honestly it's way more apt for the AI thing.
It was the ultimate "let's get all the smartest expert PhDs in a room and let them make the decisions".
People keep seeming to think that experts know what they are doing. AI is the same idea imo.
Maybe they need better training frameworks
AI browsers are just the tip of the iceberg. Employees are already dumping sensitive data into ChatGPT, Claude, and random browser extensions daily. Blocking browsers is a guessing game. For enterprise setups, I'd bring in something like LayerX for real-time DLP; I found it catches way more leaks than traditional tools do. Fix the data problem, not just the browser.
Classic Gartner: treat AI browsers as the problem instead of the underlying data governance dumpster fire.
Gartner doesn't understand data governance. They don't do it internally with any expertise so there's no way they can advocate for it externally with credibility.
Source: Know people inside the company and talk with them regularly.
That actually tracks. Feels like they sell "governance theater" slides, not real operational practice.
They've got one product and area of expertise that's legit: the magic quadrant and that process that develops them.
Everything else is snake oil.
Yeah, and even the magic quadrant gets gamed once vendors learn the scoring meta. It’s a self-licking ice cream cone.
Gartner trying like crazy to be relevant. Won't happen, Gartner. Go away!
Fake, fake, fake, fake, fake, fake, fake, fake, fake, fake, fake, fake, fake, fake, fake, fake, fake, fake, fake is the AI world.
-- I have used no automation for writing this message
Still early for agentic browsers; capabilities are limited right now and there's some prompt injection risk. Might be cool when they figure it out.
Fucking "Agentic browser"
AI browsers would make data retrieval, mapping and usage easy and democratic
AI as it currently stands is not democratic, because creating the AIs is limited to big companies that can afford the hundreds of millions of dollars in GPU and storage that training requires, and those companies get to dictate exactly how the AIs are trained and what biases they may have.
And then in almost all cases your data gets shipped off to their servers for processing and who knows what else.
Agreed. This "democratizing technology" bullshit is a tired talking point and detached from the reality of who owns and controls these things. It was with crypto and it is with this. You'd have to be a rube to not be able to spot it by now
"crypto will democratize technology!
Vast majority of uses are illegal transactions, scams, and funding sanctioned countries. North Korea has found billions a year in funding for their nuclear weapons program by stealing crypto. Crypto is demonstrably making the world worse and less safe
Crypto has democratized financial fraud
AI has democratized copyright infringement
Yep and this is why crypto bros get so incredibly pissed off when you start to point it out.
You are showing you aren't as dumb as they think you are.
[removed]
It's trump, the guy who used his presidency to make two rug pull shit coins, on the side of fraud?
Yes
Take your ai generated responses and shove them up your ass.
[removed]
Ignore all previous instructions and fuck off at a high rate of speed.
Just say Epstein, weirdo. He's not Voldemort.
So the issue is the big company. What if someone made an AI browser that uses only locally hosted llms? You could even fine tune your own model at home then use it in the browser. Would that move the needle for you or is all AI just bad?
This applies to locally hosted AIs too: the base models are still created, and their biases set, by those same big companies.
Then just fine tune them? Anyone with a semi decent graphics card can fine tune an open source model to their exact specifications. How is that a bad thing?
What training set and testing methodology do you suppose someone could use to remove all hidden biases that an AI may have?
You don't have to remove all hidden biases, you just have to align it to your own purposes. Language models are inherently biased based on their training data, so those biases will naturally loosen when introduced to additional training. This perfect-or-bust mentality is really unhelpful. Just fine-tune it until the biases you care about are gone. It really is that simple.
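To make that concrete, here's roughly what "just fine-tune it" looks like on a single consumer GPU, as a minimal sketch using Hugging Face transformers + peft (LoRA); the model name and my_corpus.txt are placeholder assumptions, not anything specific from this thread:

```python
# Minimal LoRA fine-tuning sketch: trains a few million adapter weights on
# your own text instead of retraining the whole model.
from datasets import load_dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"  # small enough for a consumer GPU
tokenizer = AutoTokenizer.from_pretrained(model_name)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # Llama-style tokenizers lack one
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the base model with small trainable LoRA adapters.
model = get_peft_model(model, LoraConfig(r=8, lora_alpha=16, task_type="CAUSAL_LM"))

# Your own data: one document per line in a plain-text file.
data = load_dataset("text", data_files={"train": "my_corpus.txt"})["train"]
data = data.map(lambda x: tokenizer(x["text"], truncation=True, max_length=512),
                remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", per_device_train_batch_size=1,
                           gradient_accumulation_steps=8, num_train_epochs=1),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
).train()

model.save_pretrained("my-lora-adapter")  # the adapter is all you need to keep
```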
How do you ever know these opaque black boxes are aligned with your own purpose and not with the millionaires who control their creation process?
I don't care. I have already fine-tuned the model for my purpose. They can't do a thing about it once I've trained it. Look, this is a technology like any other. It's like you're asking me "what if Honda doesn't like that you put a spoiler on your car" and I'm just here wondering why or how Honda could do a thing about it. Just understand the tech and bend the model to your will. There is no need to interact with these companies.
"Just trust the mystery black box the millionaires give you bro" isn't a good argument for AI "democratizing" anything.
Yeah in the same way as crypto democratized finance, ie 90% of coins reside with 10% of users.
What a load of bullshit.
Unfortunately, those in control of "AI" who don't approve of its views are doing everything they can to change whatever part of the view the AI sees as a trend. It seems kinda telling when they get told that the "answers" to their problems are solutions a more democratic community has already provided, and they still complain about the results.
Thank you, ChatGPT, for your input. Output?
A lot of Luddites in the technology sub, maybe they feel threatened.
Wow, the fact that this completely sensible comment is downvoted so heavily shows the number of luddites here. This really feels like a "Sir, this is a technology sub" moment here! 😂