In Spain the fascists stayed in Spain; participation in WW2 was very small, and Franco betrayed Hitler as soon as he saw he was going to lose. So there was no reason for the USA to invade Spain, and we got 40 years of fascist dictatorship.
I will just copy my other response about datacenter energy usage; ignore the parts not related to our conversation:
Google is not related to ChatGPT. ChatGPT's parent company is OpenAI, which is a competitor of Google.
A more rational explanation is that technology and digital services in general have been growing and are on the rise, both because more and more complex services are being offered and, more importantly, because more people are requesting those services. Whole continents that used not to be covered by digital services are now covered. Generative AI is just a very small part of all that.
The best approach to reducing CO2 emissions is to ask for a reduction in human population. From my point of view it is the only rational approach, as with a growing population there are only two options: pollute until we die, or reduce quality of life until life is not worth living. Reducing population allows fewer people to live better lives without destroying the planet.
It also raises the question of why I am responsible if a big tech company decided to run an LLM query on every search or otherwise overuse the technology, when I am talking about a completely different usage of it, one that doesn't even reach 20-30 queries a day. That would have a power usage of less than a few hundred Wh at most, which is negligible in the scheme of global warming and of my total energy footprint.
How is it being a fanboy to say "It works for me in some particular cases and not others; it's a tool that can be used"?
Please, read this conversation again and reconsider who the radical extremist is here.
In the case we were talking about, writing code, I am the auditor of the answers. I do not "vibe code": I read the code that's proposed and understand it, and if it's code that I would have written, I copy it; if not, I change it. "Vibe coding" is an example of bad usage of the tool that would lead to problems. All code not written by yourself and copied from another source should be reviewed. Once it passes my review, it is as good as my own code. If it fails, it will fail the same as any other code written by me, as it's something that I was clearly unable to see.
For instance, a couple of months ago I wrote a small API service that worked fine at first and suddenly stopped working a few weeks into production. It was a stupid mistake I made, and I needed no LLM to make that mistake. The service was so simple that I didn't really even use an LLM there. But I made a mistake regardless. I could have used AI and gotten the same bad function that caused the issue, and the blame would still be mine for not seeing the problem.
Once again, it is a tool. If some jackass decides to vibe code an app and it's a shit app, that is a bad use of the tool. But other people can do proper reviews and analysis of the generated code and assume full responsibility for any failures of that code.
Not really.
I self-host my own LLM. Energy consumption for queries is lower than for gaming, according to my own measurements. And the models are not made that frequently (I still use models made last year). And once a model is done, it is infinitely reusable by anyone.
I get that you are starting from the axiom "AI is bad" and then creating the arguments needed to support that axiom, instead of going the other way around with an open mind.
I told you my own personal experience with it; take it as you want. For me, my situation will stay the same: I will keep using it the same as I use any other tool that works for me, and I will stop using it when there's something better, same as I've done countless times. I'm not easy to peer-pressure into any particular stance, so I can form my own opinions based on what I test for myself. I really think a lot of arguments against AI boil down to some sort of political stance. AI hurt a number of small artists who had a very big voice in some spaces, and thus an anti-AI political movement was created. My own copyleft morals left me undisturbed by those original complaints about generative AI, and the rest of the arguments have been very unconvincing, straight-up fake, logical fallacies, or just didn't really check out against the reality I was able to test for myself.
For instance, I saw another post today saying that 3 watt-hours per query was an absolute energy waste for a household, when that's absolutely nothing compared to the 30,000 watt-hours a typical household spends each day, even with quite a number of queries. Sincerely, I spent the last few months with one of those energy-measuring devices attached to my computer, and AI energy usage was really underwhelming compared with what people told me it was going to be. AAA gaming is consistently more energy-hungry.
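Taking the figures quoted in the comment above at face value (3 Wh per query and 30,000 Wh of daily household consumption; the 30-queries-per-day usage level is an assumption for illustration), the back-of-the-envelope arithmetic looks like this:

```python
# Back-of-the-envelope check of the figures quoted above:
# 3 Wh per LLM query vs. ~30,000 Wh of daily household consumption.
WH_PER_QUERY = 3.0
HOUSEHOLD_WH_PER_DAY = 30_000.0

queries_per_day = 30  # assumed fairly heavy personal usage
llm_wh = WH_PER_QUERY * queries_per_day        # daily LLM energy in Wh
share = llm_wh / HOUSEHOLD_WH_PER_DAY * 100    # percent of household usage

print(f"{llm_wh:.0f} Wh/day, {share:.2f}% of household consumption")
# prints: 90 Wh/day, 0.30% of household consumption
```

Under those assumed numbers, even heavy personal use stays well under one percent of a household's daily consumption, which is the point the comment is making.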
An LLM also does not bully you for asking. Nor does it say "duplicate question" for non-duplicate questions… There's a reason people prefer LLMs to SO nowadays.
It's not a panacea. But it's not the doomful, world-destroying, useless machine that some people like to say it is.
It's a useful tool for some tasks if you know how to use it. Everyone who actively uses it does so because we have found out that it works better for us than other tools for those tasks; if not, we would not use it.
Going by my own personal experience, I tend to ask an LLM first rather than doing what I used to do, digging through old SO answers, because I get the answer quicker and a lot of the time it's just better. It's not perfect by any stretch of the imagination, but it serves a purpose for me.
For instance last week I needed a PowerShell command to open an app compatibility menu from the command line. I asked and got this as a response:
(New-Object -ComObject Shell.Application).Namespace((Split-Path "C:\Path\To\YourProgram.exe")).ParseName((Split-Path "C:\Path\To\YourProgram.exe" -Leaf)).InvokeVerb("P&roperties")
Worked at first try, exactly as I wanted.
You are free to try a search engine with the query "PowerShell command to open an app compatibility menu from the command line" and check for yourself how little help the first results get you.
It's a tool, like many others. The magic lies in knowing when and how to use it. For other things I may not use it, but after a couple of years using it I'm developing a good sense of which questions it handles well and which ones are better not even to try.
I scored 7 out of 10! Can you tell a coder from a cannibal? 💻🔪 https://vole.wtf/coder-serial-killer-quiz/
Not bad.
What's the difference between copying a function from Stack Overflow and copying a function from an LLM that has copied it from SO?
LLMs are sort of a search engine with advanced language-substitution features, nothing more, nothing less.
If you need to use a new language that you are not yet used to, it can get you through the basics quite efficiently.
I find it quite proficient at translating complex mathematical functions into code. Especially since it accepts LaTeX pretty-print as input and usually reads it correctly.
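To illustrate the kind of LaTeX-to-code translation meant here (the formula is my own example, not one from the comment): given the LaTeX source `f(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}` (the normal distribution PDF), the expected output would be something like:

```python
import math

def normal_pdf(x: float, mu: float = 0.0, sigma: float = 1.0) -> float:
    """Direct translation of the LaTeX formula
    f(x) = \\frac{1}{\\sigma\\sqrt{2\\pi}} e^{-\\frac{(x-\\mu)^2}{2\\sigma^2}}
    """
    coeff = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    exponent = -((x - mu) ** 2) / (2.0 * sigma ** 2)
    return coeff * math.exp(exponent)

print(normal_pdf(0.0))  # peak of the standard normal, ~0.3989
```

The translation is mechanical term by term, which is exactly why this is one of the tasks LLMs tend to handle well.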
As an advanced rubber duck that spits out wrong answers so your brain can reach the right answer quickly. A lot of the time I find myself blocked on something, I ask the AI to solve it, and it gives me a ridiculous response that would never work; but seeing why that won't work makes it easier for me to come up with something that will.
I do not live in an English-speaking country, and my mother tongue is not English.
I still sometimes think in English. As I use it a lot.
Maybe we are statistical engines too.
When I hear people talk, they are also just repeating the most common sentences that they have heard elsewhere anyway.
Isn’t there already a mobile app?
Developers can focus on whatever they deem appropriate.
But I think content discovery and the lack of community are the biggest issues with PeerTube right now.
I hop over to the main PeerTube site once in a while, and I can never find anything remotely interesting to watch. There may be some good content, but it's impossible to find.
Mods be thinking that if they dig SO’s grave deep enough it will emerge on the other side of the world.
Author response:
Lena juggles lesson plans, bedtime stories, and plot twists—sometimes all in the same day. A teacher by day, a writer by night, and a mom 24/7, she crafts paranormal romances with magic, mystery, and just the right amount of chaos. When she’s not wrangling students or characters, she’s probably drinking coffee and pretending it’s a potion for extra energy.
Hi everyone,
I want to openly and sincerely address something that’s come to light regarding my book. A prompt was recently found in the text. It’s something that should never have made it into the final version. I want to apologize deeply to my readers and to the writing community.
The truth is, I used AI to help edit and shape parts of the book. As a full-time teacher and mom, I simply can’t afford a professional editor, and I turned to AI as a tool to help refine my writing. Teaching wages make it hard enough to support a family, and writing has been a passion project I pursued in the small pockets of time I could find. My goal was always to entertain, not to mislead.
That said, the appearance of an editing prompt in the final book was a mistake — one that I take full responsibility for. It has unintentionally sparked a broader conversation about AI in creative work, and I understand the concerns. I’m taking this seriously and will be reviewing the book carefully, making corrections where needed, and being more transparent in the future about my process.
To my readers: thank you for your support, your honesty, and your patience. I’m learning from this and will do better. To the wider community: I’m sorry.
Big "I'm a mom, everything I do is excused by my motherhood" vibes.
All her books seem to be free in digital format and just about $3 in print, and it doesn't seem like a lot of people read them, even for free. So it's not like a big scam or something; I doubt she really makes any money with this.
Sad "disabled person who uses AI tools to help communicate" noises.
I still stream. But I have my own streaming service with Jellyfin.
I've tried lots of options, and I still go back to VS Code.
I've used Neovim extensively and it was my main IDE for years, but I got tired of having to spend entire afternoons configuring it. And I had too many total breakages, which recently led me to abandon it as an IDE; I still use it sometimes, but much less. It relies on too many plugins, which makes breakages more common imho.
I tried Helix, but its features are far from what I expect from an IDE, even a modal command-line one.
On the GUI side, I tried Lapce, but it's still buggy and lacks features. The development pace is slow enough that I don't think it could become my IDE in the near future. I have hopes for it, but not many, as it could easily become abandoned before it's usable.
I wanted to try Zed, but they seem to have a preference for macOS, which may make sense in the US, but here I can't remember the last developer I saw using a Mac. There's now a Linux version, which I may try at some point, but some people have commented that while it's in a better state than Lapce, it's still not a production-ready option as a text-editor IDE. Also, the company behind it doesn't inspire trust in me. There's something about it that smells fishy; I cannot quite put my finger on what, but there's something.
There are more options: some obscure, some old, some paid. For instance, I usually hear good things about the JetBrains IDEs. I tried IntelliJ Community and I'm not impressed; it's slightly better than Eclipse, but it's not on the level of Visual Studio for .NET. I'm not a student and I don't get paid for my hobby development, so paid options are a no-go.
So it's Visual Studio Code for me. Sometimes I still use Neovim, as I really like modal editors, and vim/Neovim is my go-to text editor anyway. I'm due to try Emacs, and I'm hopeful for the future of both Helix and Lapce, though I manage my expectations, as I've known too many projects that just never deliver, so I'm cautious.
It’s free will as long as you don’t know and/or control all of that chain of causality.
How do you know it's not being used to develop open source code?
I have used AI assistance for many things, most of them open sourced, as I open source everything I make in my free time by default. The output code is indistinguishable, the same as you wouldn't know whether I had asked my questions about how to do something on Reddit, Stack Overflow (RIP), or another forum. You see the source, not the process I followed to make that source code. For all we know, Linux kernel devs might as well be asking ChatGPT questions; we wouldn't know.
As for explicitly open source AI-related tools, there are hundreds. So I don't really know what you mean here by "open source projects" not having adopted AI. Do you mean like "vibe coding"?
I’ve seen a lot of cheating. So I suppose it’s common. Not the norm, but common.
In the end it all just boils down to people not being really good in general.
That's why you need to know the caveats of the tool you are using.
LLMs hallucinate. People willing to use them need to know where they are more prone to hallucinate, which is wherever the data about the topic you are requesting is fuzzier. If you ask for the capital of France, it is highly unlikely you will get a hallucination; if you ask for the hair color of the second spouse of the fourth president of the Third French Republic, you probably will get a hallucination.
And you need to know what you are using it for. If it's for roleplay or other non-critical matters, you may not care about hallucinations. If you use them for important things, you need to know that the output has to be human-reviewed before using it. For some things the human review may be worth it, as it would be faster than writing from zero; for other cases it may not be worth it, and then an LLM should not be used for that task.
As an example, I was just writing an LSP library for an API, and I tried getting the LLM to generate it from the source documentation. I had my doubts, as the source documentation is quite a bit bigger than my context size; I tried anyway, but I quickly saw that hallucinations were all over the place and hard to fix, so I desisted and have been doing it entirely myself. But before that, I did ask the LLM how to even start writing such a thing, as it's the first time I've done this, and the answer was quite on point, probably saving me several hours of searching online trying to find out how to do it.
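If "LSP" here means the Language Server Protocol (an assumption on my part; the comment doesn't say), part of what such a library has to get right is the JSON-RPC wire framing the protocol mandates, where every message is prefixed with a `Content-Length` header. A minimal sketch of that framing, not the commenter's actual code:

```python
import json

def frame_message(payload: dict) -> bytes:
    """Encode a JSON-RPC message with LSP-style Content-Length framing."""
    body = json.dumps(payload).encode("utf-8")
    header = f"Content-Length: {len(body)}\r\n\r\n".encode("ascii")
    return header + body

def parse_message(data: bytes) -> dict:
    """Decode a single framed message (assumes the full frame is present)."""
    header, _, body = data.partition(b"\r\n\r\n")
    length = int(header.split(b":", 1)[1])
    return json.loads(body[:length].decode("utf-8"))

# Round-trip an "initialize" request through the framing.
msg = {"jsonrpc": "2.0", "id": 1, "method": "initialize", "params": {}}
assert parse_message(frame_message(msg)) == msg
```

Details like this framing are exactly the kind of thing an LLM answers well from general knowledge, while the API-specific surface of the library is where the hallucinations the comment describes creep in.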
It’s all about knowing the tool you are using, same as anything in this world.