You can get offline versions of LLMs.
And gpt-oss is an offline version of ChatGPT.
First thing that came to mind: GPT4All
I mean, most people have a local LLM in their pocket right now.
Unless I am missing something:
Most people do not have a local LLM in their pocket right now.
Most people have a client app that talks to a remote LLM, which ‘lives’ in an ecologically and economically dubious mega-datacenter, in their pocket right now.
Plenty of the AI functions on phones are on-device. I know the iPhone can do several kinds of text processing (summarizing, translating) offline, and they have an API for third-party developers to use the on-device models. And the Pixels have Gemini Nano on-device for certain offline functions.
My phone does speech-to-text flawlessly offline; it’s a crazy useful little LLM tool.
Oh!
Well, I didn’t know that.
I’m too poor to be able to afford such fancy phones.
Gemini Nano, Apple Intelligence on-device, etc.
Could you crunch an LLM into 700 MB and have it still be functional? Cause this looks like a fun thing to actually do as a joke.
Edit: I bet I could get https://huggingface.co/distilbert/distilgpt2 to run off a CD. How many tps am I gonna get, guys 🤣
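If anyone actually wants to try it, here’s a minimal sketch with Hugging Face transformers (assumes `transformers` and `torch` are installed; the CD mount path is made up):

```python
# Minimal sketch: run distilgpt2 from a local copy, e.g. one burned to a CD.
# The mount path is hypothetical; point it at wherever the model files live.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_dir = "/mnt/cdrom/distilgpt2"  # hypothetical local snapshot of distilbert/distilgpt2
tokenizer = AutoTokenizer.from_pretrained(model_dir)
model = AutoModelForCausalLM.from_pretrained(model_dir)

inputs = tokenizer("Running an LLM off a CD is", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=40, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

distilgpt2 is only ~82M parameters, around 350 MB in FP32, so it genuinely fits with room to spare.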
Qwen3-0.6B is about 400 MB at Q4 and is surprisingly coherent for what it is.
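For the Q4 route, a rough sketch with llama-cpp-python (assumes `pip install llama-cpp-python` and a downloaded GGUF; the filename is a guess at the usual naming):

```python
# Rough sketch: load a ~400 MB Q4 GGUF of Qwen3-0.6B with llama-cpp-python.
# The model_path filename is an assumption; use whatever quant you downloaded.
from llama_cpp import Llama

llm = Llama(model_path="Qwen3-0.6B-Q4_K_M.gguf", n_ctx=2048)
out = llm("Explain what a CD-ROM is in one sentence.", max_tokens=64)
print(out["choices"][0]["text"])
```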
That’s so crazy that an LLM capable of doing anything at all can be that small! That leaves room for like an entire .avi episode of Family Guy at DVD resolution on there, which is the natural choice for the remaining space, of course.
A 4K episode of Family Guy using H.265 (HEVC), assuming not too many cutaway gags, could come out to a file of about 240 MB. You could probably fit a 480i episode of South Park in the remaining 60 MB.
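Sanity check on those numbers, assuming a ~22-minute episode:

```python
# Back-of-envelope bitrate check for the episode sizes above.
runtime_s = 22 * 60  # assumed typical episode runtime
for label, size_mb in [("4K Family Guy (HEVC)", 240), ("480i South Park", 60)]:
    bitrate_mbps = size_mb * 8 / runtime_s  # megabytes -> megabits, per second
    print(f"{label}: ~{bitrate_mbps:.2f} Mbps average")
# 4K at ~1.45 Mbps is only plausible because flat-color animation
# compresses absurdly well under HEVC; 480i at ~0.36 Mbps is similar.
```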
Wow, just popped it onto my very slow desktop and this little model rips haha. I really think tiny LLMs with a good LoRA on top are going to be a huge deal going forward
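If anyone wants to poke at the LoRA part, a rough sketch with Hugging Face peft (the base model, target modules, and ranks here are just assumptions to show the shape of it):

```python
# Rough sketch: attach a LoRA adapter to a tiny causal LM with peft.
# Assumes `pip install transformers peft torch`; hyperparameters are guesses.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-0.6B")
config = LoraConfig(
    r=8,                                   # low-rank dimension
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # common attention-projection targets
    lora_dropout=0.05,
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, config)
model.print_trainable_parameters()  # typically well under 1% of the base params
# ...train as usual, then model.save_pretrained("my-tiny-lora")
```

The adapter itself ends up being a few megabytes, which is why it’s such a good fit for tiny models.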
There’s also TinyLlama, which is somewhere around 600 MB. It’s hilariously inept. It’s like someone JPEG-compressed a robot.
Also, you’re only gonna load off of that CD once, so it’ll perform fine.
FCKGW-RHQQ2-YXRKT-8TG6W-2B7Q8
make sure to disconnect the internet first
CrAcKeD
That’s just Dr Sbaitso.
It’s just audio of French farting cats.
Le pfffft.
My bet was on porn.
Or a copy of an old Encarta CD-ROM.
If we assume a CD, you can probably fit a 256M-parameter model on it. But it will LOAD.
DVDs exist. They can fit approx. 7B params, enough to be somewhat productive.
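The arithmetic, for anyone following along (the quantization level is the whole ballgame):

```python
# How many parameters fit on a disc at a given precision?
# Rule of thumb: FP16 ~ 2 bytes/param, Q4 ~ 0.5 bytes/param,
# ignoring the few percent of per-tensor overhead in real files.
def params_that_fit(capacity_gb: float, bytes_per_param: float) -> float:
    return capacity_gb * 1e9 / bytes_per_param

for disc, gb in [("CD", 0.7), ("DVD", 4.7), ("Blu-ray", 25.0)]:
    print(f"{disc}: ~{params_that_fit(gb, 2) / 1e9:.2f}B at FP16, "
          f"~{params_that_fit(gb, 0.5) / 1e9:.2f}B at Q4")
# CD at FP16 -> ~0.35B (hence the 256M figure, with headroom);
# DVD at Q4 -> ~9.4B, comfortably enough for a 7B model.
```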
Isn’t it possible to download all of Wikipedia, and isn’t the file size surprisingly small? Can it fit on a CD?
No
(English) 24.05 GB without media. Adding media adds 428.36 TB.
No, you really can’t; it’s like 43 GB for the text-only version.
Yes, you really can; it’s like 20-25 GB depending on how recent a copy you have. I’ve been seeding Wikipedia for almost a year and it barely takes any space on my computer.
It could fit on a BDXL disc.
You can fit text-only Wikipedia on a normal Blu-ray, as it’s only about 24 GB. You can also easily fit Llama 3.1 or any of the other open, offline-capable AI models, as they’re only about 4 GB.
Offline LLMs exist but tend to have a few terabytes of base data just to get started (e.g. before LoRAs).
I thought it was more like 10-20GB to start out with a usable (but somewhat stupid) model.
Are you confusing the size of the dataset with the size of the model?
Does anyone know of any OSS LLMs that can search the web the way ChatGPT can?
It’s not the LLM that does the web searching, but the software stack around it. On its own, an LLM is just a text completer. What you’d need is a frontend like OpenWebUI or Perplexica that asks the LLM for, say, five internet search queries that could return useful information for the prompt, throws those queries into SearxNG, and then pipes the results into the LLM’s context for it to use (there’s a rough sketch of this loop below).
As for the models themselves, any decently-sized one that was released fairly recently would work. If you’re looking specifically for open-source rather than open-weight models (meaning that the training data and methodologies were also released rather than just the model weights), GPT-OSS 20B/120B and the OLMo models are recent standouts there. If not, the Qwen3 series are pretty good. (There are other good models out there, this is just what I remember off the top of my head.)
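In case it helps, here’s that search loop sketched out. Big assumptions: a local SearxNG instance with JSON output enabled in its settings, and a local model behind an OpenAI-compatible endpoint (llama.cpp server, Ollama, etc.); all URLs and the model name are placeholders:

```python
# Rough sketch of LLM + web search: generate queries, search, answer from results.
import requests

LLM_URL = "http://localhost:8080/v1/chat/completions"  # placeholder endpoint
SEARX_URL = "http://localhost:8888/search"             # placeholder SearxNG

def ask_llm(prompt: str) -> str:
    r = requests.post(LLM_URL, json={
        "model": "local-model",  # placeholder name
        "messages": [{"role": "user", "content": prompt}],
    })
    return r.json()["choices"][0]["message"]["content"]

def search(query: str, n: int = 3) -> list[str]:
    r = requests.get(SEARX_URL, params={"q": query, "format": "json"})
    return [f"{hit['title']}: {hit['content']}" for hit in r.json()["results"][:n]]

question = "What's new in the latest kernel release?"
queries = ask_llm(f"Write 3 short web search queries, one per line, for: {question}")
snippets = [s for q in queries.splitlines() if q.strip() for s in search(q)]
answer = ask_llm("Using these search results:\n" + "\n".join(snippets)
                 + f"\n\nAnswer the question: {question}")
print(answer)
```

The frontends mentioned above do essentially this, just with better prompt templates, deduplication, and source citations.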
Thank you