They’re two different tools with different purposes, so why treat one like it can replace the other?
Because they’ve sunk billions into the hype train, and it’s clear most people don’t really want it. So it’s being force-fed to everyone in every product to try to get some kind of ROI.
That, and the more interactions it can get, the more data it can suck up to train on and/or sell.
Google’s delusional fantasy has always been: you want to buy a guitar, you go on Google, their ad algorithm shows you a perfectly targeted ad for the guitar you want, and because you love and trust Google you click the ad and buy it. They think LLMs will finally make that work. They want to give you a Gríma Wormtongue that can simper and manipulate until you do love and trust it, and once you’re a rotting husk it whispers the ad algorithm into your ear so you think it was your idea.
I said to take the wizard’s staff!!!
Your product might not benefit from AI but you definitely can get more VC investment dollars if you bolt an LLM onto the side of it and feature AI as central to what you’re offering. This is because VCs treat tech like fashion and don’t actually understand how it works or how it would integrate into our lives.
This was true for the nascent internet, and even more so for blockchain, but truly nobody understands how LLMs work, so it’s way worse.
Hype
There’s serious overlap; they are not mutually exclusive. The “text generator” is using the search prompt to predict the most likely “next word,” which translates to the most likely best result for the search.
Theoretically, it’s just a better search engine, able to handle an obscene number of variables. Theoretically.
It’s more like: traditional search pipes the first page of results to the bot. The bot reads the pages from the results and tries to identify an answer or the best result from the set. Both the bot’s summary and the adjusted ranking for the results are returned. This gives a chance at a better experience for the user, because they don’t have to read all the pages themselves to try to find the answer they were looking for. However, there is a huge margin for error, since the bot is underpowered: Google has to balance what they pay for each search against what they earn from it. So you end up with misinterpretations, hallucinations, biased content, etc.
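In code, the pipeline described above looks roughly like this. To be clear, this is a toy sketch: the function names are made up, and a naive keyword-overlap score stands in for the LLM that would actually read and summarize the pages. Nothing here reflects Google’s real implementation.

```python
# Toy sketch of "search results in, summary + re-ranking out".
# A real system would call an LLM where score() is; all names are hypothetical.

def rerank_and_summarize(query, pages):
    """Re-rank first-page results and produce a rough 'answer'.

    `pages` maps URL -> page text (what the bot would read).
    """
    terms = set(query.lower().split())

    def score(text):
        # Stand-in for the LLM: fraction of page words that match the query.
        words = text.lower().split()
        return sum(1 for w in words if w in terms) / (len(words) or 1)

    ranked = sorted(pages, key=lambda url: score(pages[url]), reverse=True)
    best = ranked[0]
    # The "summary" is just the best page's first sentence in this toy version.
    summary = pages[best].split(".")[0] + "."
    return summary, ranked

pages = {
    "a.example": "Python is a programming language. It is widely used.",
    "b.example": "Guitars have six strings. Some have seven.",
}
summary, ranked = rerank_and_summarize("what is python programming", pages)
print(ranked[0])  # most relevant page for the query
print(summary)
```

The failure modes from the comment map directly onto this shape: a weak model in the `score`/summarize step means wrong rankings and confident-sounding wrong summaries.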
If they used a top-end model like Claude 3.7 Sonnet and piped it enough contextual information, the AI summaries would be quite accurate and useful. They just can’t afford to do that, and they want to use their own Gemini bs.
Perhaps true, but given the nature of the errors involved (generating something plausible instead of an error message when the information is lacking) and the review that requires, which itself demands research (the very thing it was supposed to shortcut in this context), isn’t it still something of an ill fit?
They know it’s bad. They want you locked in to their ecosystem. The goal is to be the first to get consumers locked in. So they’re rushing to market with incomplete products because if they don’t release NOW someone else might beat them to it.
AI hype is also collusion among the ultra-wealthy to artificially prop up their investments.
My 2c:
Control:
By adding rules to AI output, the ruling elite seek to regain what the internet took from them: information control. Some scandal happens? AI monitoring erases all indication of it, or pushes the narrative in the desired direction. We have easy evidence of that on the Canada subreddit. Trudeau, for all his faults, was unequivocally the single friendliest prime minister to Alberta that the country ever had, considerably more so than the hack that was Harper. But thanks to astroturfing and media control, the conservatives of Alberta see him as one of the worst.
Monetization of big data: the other motive I can see is that AI may finally solve big data’s monetization problem. Sure, big data has reams and reams of data, but they’ve had trouble processing it and turning it into a useful, monetizable product. Even Google admits that for all of their data on everyone, their click-through rates are atrocious. The hope is that AI can sort through those massive data sets and hand them the easy answers they want.
There isn’t much of a reason for that besides money.
Psychopathy shouldn’t be disregarded though.
As much as I hate AI, you’d be amazed at how many times I’ve gone to a search engine and typed in a question like “how do I do X?”
The search results, when I click through to the webpages, may or may not be right; half the time the people who designed the page didn’t put all the information in, so it’s either incomplete or wrong.
The AI result at the top of the page more often than not is actually correct.
It seems like everyone wants to get on board and try it in case it takes off.
If it works they want to be the first to get really good at it and if it doesn’t the normal search engines are still there to fall back on.
At the moment it feels like the 3D TV hype. A lot of cool technology that was expensive and ultimately not worth it for a lot of people. Maybe that will change as time goes on.
Money and incentives are very powerful, but also remember that these organizations are made of humans. And humans are vain.
Amassing station and power can scarcely be divorced from the history of human civilization, and even fairly trivial things like the job title of “AI engineer” or whatever might be alluring to those aspiring to it.
To that end, it’s not inhuman to pursue “the next big thing”, however misguided that thing may be. All good lies are wrapped in a kernel of truth, and the fact is that machine learning and LLMs have been in development for decades and do have a few concrete contributions to scientific endeavors. But that’s the small kernel, and surrounding it is a soup of lies, exaggerations, and inexactitudes which somehow keep drawing more entities into the fold.
Governments, businesses, and universities seem eager to get on the bandwagon before it departs the station, but where is it heading? Probably nowhere good. But hey, it’s new and shiny, and when nothing else suggests a quick turnaround for systemic political, economic, or academic issues (usually caused by colonialism, fascism, debt, racism, or social change), then might as well hitch onto the bandwagon and pray for the best.
Because it’s useful. Have you tried it? But the LLM has to be able to use conventional search engines as well. I tell my LLM agent to prioritize certain kinds of websites and present a compressed answer with references. That usually works way better than a standard Google search (which only produces AI-generated junk results anyway).
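The “prioritize certain sites, compress, cite” instruction can be sketched as a plain prompt template. The site list and wording below are hypothetical, not any particular agent product’s API:

```python
# Sketch of the kind of instruction described above. PREFERRED_SITES and
# the prompt wording are made-up examples, not a real product's config.

PREFERRED_SITES = ["docs.python.org", "stackoverflow.com", "*.edu"]

def build_search_prompt(question, preferred_sites=PREFERRED_SITES):
    """Build an instruction telling an agent how to search and answer."""
    sites = ", ".join(preferred_sites)
    return (
        "You may use a conventional search engine.\n"
        f"Prioritize results from these kinds of sites: {sites}.\n"
        "Give a compressed answer (a few sentences), and cite the URLs\n"
        "you relied on as references.\n\n"
        f"Question: {question}"
    )

prompt = build_search_prompt("How do I pin a Python dependency version?")
print(prompt)
```

The point is the shape of the instruction, not the exact wording: constrain sources, force brevity, demand references.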
You can get very good answers or search results by using RAG (retrieval-augmented generation).
Can you please share a simple prompt? I’ve heard of RAG but was unaware how you could use it in this case. Is this something you can only do with a local LLM, or can you plug it into GitHub Copilot or the like?