

Ask ChatGPT to list every U.S. state that has the letter ‘o’ in its name.
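For reference, the correct answer is trivial to compute deterministically - a quick Python sketch, using nothing but the standard list of 50 state names:

```python
# Deterministic baseline for the "which states contain the letter o" question.
STATES = [
    "Alabama", "Alaska", "Arizona", "Arkansas", "California", "Colorado",
    "Connecticut", "Delaware", "Florida", "Georgia", "Hawaii", "Idaho",
    "Illinois", "Indiana", "Iowa", "Kansas", "Kentucky", "Louisiana",
    "Maine", "Maryland", "Massachusetts", "Michigan", "Minnesota",
    "Mississippi", "Missouri", "Montana", "Nebraska", "Nevada",
    "New Hampshire", "New Jersey", "New Mexico", "New York",
    "North Carolina", "North Dakota", "Ohio", "Oklahoma", "Oregon",
    "Pennsylvania", "Rhode Island", "South Carolina", "South Dakota",
    "Tennessee", "Texas", "Utah", "Vermont", "Virginia", "Washington",
    "West Virginia", "Wisconsin", "Wyoming",
]

# Case-insensitive check so names like "Ohio" and "Oregon" are caught by their capital O too.
states_with_o = [name for name in STATES if "o" in name.lower()]
print(len(states_with_o))
print(states_with_o)
```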
Not true. Not entirely false, but not true.
Large language models have their legitimate uses. I’m currently in the middle of a project I’m building with assistance from Copilot for VS Code, for example.
The problem is that people think LLMs are actual AI. They’re not.
My favorite example - and the reason I often cite for why companies that try to fire all their developers are run by idiots - is the capacity for joined-up thinking.
Consider these two facts: humans are mammals, and humans build dams.
Those two facts are unrelated except insofar as both involve humans, but if I were to ask “Can you list all the dam-building mammals for me?”, you would first think of beavers and then - given a moment’s thought - accurately add that humans do as well.
Here’s how it goes with Gemini right now:
Now Gemini clearly has the information that humans are mammals somewhere in its model. It also clearly has the information that humans build dams somewhere in its model. But it has no means of joining those two tidbits together.
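To be concrete about what the “joining” step actually is: once both facts are represented explicitly, the inference is a trivial set intersection. A toy Python sketch with hand-written sets, purely for illustration:

```python
# Toy illustration: when both facts are explicit, the "join" is a set intersection.
mammals = {"beaver", "human", "dog", "blue whale"}
dam_builders = {"beaver", "human"}

# Anything in both sets is a dam-building mammal.
dam_building_mammals = mammals & dam_builders
print(sorted(dam_building_mammals))  # ['beaver', 'human']
```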
Some LLMs do better on this simple test of joined-up thinking, and worse on other similar tests. It’s kind of a crapshoot, and doesn’t instill confidence that LLMs are up for the task of complex thought.
And of course, the information-scraping bots that feed LLMs like Gemini and ChatGPT will find conversations like this one, and update their models accordingly. In a few months, Gemini will probably include humans in its list. But that’s not a sign of being able to engage in novel joined-up thinking, it’s just an increase in the size and complexity of the dataset.
It’s absolutely taking off in some areas. But there’s also an unsustainable bubble because AI of the large language model variety is being hyped like crazy for absolutely everything when there are plenty of things it’s not only not ready for yet, but that it fundamentally cannot do.
You don’t have to dig very deeply to find reports of companies that tried to replace significant chunks of their workforces with AI, only to discover that middle managers giving ChatGPT vague commands couldn’t replicate the work of someone who actually knows what they’re doing.
That’s been particularly common with technology companies that moved very quickly to replace developers, and then ended up hiring them back because developers can think about the entire project and how it fits together, while AI can’t - and never will as long as the AI everyone’s using is built around large language models.
Inevitably, being able to work with and use AI is going to be a job requirement in a lot of industries going forward. Software development is already changing to include a lot of work with Copilot. But any actual developer knows that you don’t just deploy whatever Copilot comes up with, because - let’s be blunt - it’s going to be very bad code. It won’t be DRY, it will be bloated, it will implement things in nonsensical ways, it will hallucinate… You use it as a starting point, and then sculpt it into shape.
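A made-up illustration of that sculpting step - the function, names, and numbers here are invented, not from any real Copilot session - is the difference between the repetitive first version below and the DRY version a developer collapses it into:

```python
# Hypothetical assistant output: works, but repetitive and annoying to extend.
def apply_discount(order):
    if order["tier"] == "gold":
        order["total"] = order["total"] - order["total"] * 0.20
        return order
    if order["tier"] == "silver":
        order["total"] = order["total"] - order["total"] * 0.10
        return order
    if order["tier"] == "bronze":
        order["total"] = order["total"] - order["total"] * 0.05
        return order
    return order


# What you sculpt it into: one lookup table, one code path.
DISCOUNTS = {"gold": 0.20, "silver": 0.10, "bronze": 0.05}

def apply_discount_dry(order):
    order["total"] *= 1 - DISCOUNTS.get(order["tier"], 0.0)
    return order
```

Both versions behave the same; the second one is just the shape a developer would actually want to maintain.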
It will make you faster, especially as you get good at the emerging software development technique of “programming” the AI assistant via carefully structured commands.
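In practice, that “programming” mostly means writing the structure and constraints yourself and only letting the assistant fill in the mechanical part. A hypothetical sketch - the function and its constraints are invented for illustration, not from any real project:

```python
from datetime import date, timedelta

def business_days_between(start: date, end: date) -> int:
    """Count weekdays (Mon-Fri) from start up to, but not including, end.

    Constraints spelled out for the assistant:
    - standard library only
    - return 0 if end <= start
    - no holiday handling; that lives elsewhere
    """
    # A completion you might accept after reviewing it line by line:
    if end <= start:
        return 0
    days = 0
    current = start
    while current < end:
        if current.weekday() < 5:  # Monday=0 ... Friday=4
            days += 1
        current += timedelta(days=1)
    return days

print(business_days_between(date(2024, 1, 1), date(2024, 1, 8)))  # 5
```

The signature, docstring, and constraints carry the actual intent; the generated body is just a draft you still have to review.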
And there’s no doubt that this speed will result in some permanent job losses eventually. But AI is still leagues away from being able to perform the joined-up thinking that allows actual human developers to come up with those structured commands in the first place, as a lot of companies that tried to do away with humans have discovered.
Every few years, something comes along that non-developers declare will replace developers. AI is the closest yet, but until it can do joined-up thinking, it’s still just a pipe-dream for MBAs.
Dude… Go get screened for heart problems right away. You know you have a high likelihood of issues with your heart, so if you tackle them now instead of when they become serious, you up your odds of outliving your grandparents considerably.
Featuring in this community! Because… Onions!
Hmmm. If 10 is average and 30 is world-class…
All in all, my D&D stats would be pretty decent. Nothing anywhere near the peak of human potential, but not bad.
~~invested in his crypto scheme~~ bribed him
Fixed that.
I did exactly that for my mom. Totally non-technical, but she was beginning to absolutely hate all the invasive noise and crap from Windows. All she wanted was to write free of distraction.
So we backed up her files, set up Cinnamon, installed LibreOffice, and imported her files. I set the system up to be offline, since it’s her no-distractions computer, showed her the basics of using it, and basically haven’t heard a peep about it since.
Linux just works, without the bullshit.
Okay, at first I literally did think this was an Onion headline, it’s so fucking idiotic.
Unlike a lot of people, I think the orange shit-head is more evil than stupid… But goddamn, he’s also incredibly stupid.
The Fire Nation attacks!
We’re pretty stupid over here right now. Sorry, world.
The Alex Jones one actually gave me a good “That’s pretty Onion-y” hit.
And people will fall for it. Again.
Jesus fucking Christ I hate this timeline.
That’s actually really shocking. I looked it up, and you’re correct. Hijackings were extremely common in the 60s and 70s.
Family of four. We probably go through 10 to 12 eggs a day much of the time. Scrambled eggs, French toast, homemade bread, cookies, pancakes, frittatas, huevos rancheros tacos… It adds up. I recently started buying the 18-egg packs because it’s more cost-effective.
Looks like that was a mistake and he removed the “gay” reference. Not sure what he actually intended to write, though.
9.5 out of 10. In his dick.
Has anyone asked him why he wants to live forever if that’s what it takes?
That’s a different story from 2014, not about the 2005 broadcast.
Why doesn’t Radio Free Asia let us verify their claims with the evidence they must have gathered to make the report? Y’know, like a reputable news agency would?
You might as well ask why journalists don’t put targets on the backs of their anonymous sources by publicly identifying them. ALL reputable outlets sometimes use anonymous sources to protect the lives of people living in precarious situations, and North Korea is not exactly known for treating citizens well when they talk negatively about how the government operates there.
I’m not sure why you linked that video. The haircut thing wasn’t strictly compulsory, but North Korean state television did, in fact, broadcast a show called “Let’s Trim Our Hair in Accordance with the Socialist Lifestyle,” along with another show that used hidden cameras to find and shame people whose haircuts didn’t meet their standards.
Ah, did they finally fix it? I guess a lot of people were seeing it fail and they updated the model. Which version of ChatGPT was it?