Outdated image, everything goes through palantir now
There are a lot of assumptions about LLMs reliably getting better over time laced into that…
But so far they have gotten steadily better, so I suppose there’s enough fuel for optimists to extrapolate that out into a positive outlook.
I’m very pessimistic about these technologies and I feel like we’re at the top of the sigmoid curve for “improvements,” so I don’t see LLM tools getting substantially better than this at analyzing code.
If that’s the case, I don’t feel like having hundreds and hundreds of false security reports creates the mental arena that allows researchers to actually spot the genuine report among all the slop.
It found it 8/100 times when the researcher gave it only the code paths he already knew contained the exploit. Essentially leading it down the garden path.
The test with the actual full suite of commands passed in the context found it only 1/100 times, and we didn’t get any info on how many false positives they had to wade through to find it.
This is also assuming you can automatically and reliably filter out the false positives.
He even says the ratio is too high in the blog post:
That is quite cool as it means that had I used o3 to find and fix the original vulnerability I would have, in theory, done a better job than without it. I say ‘in theory’ because right now the false positive to true positive ratio is probably too high to definitely say I would have gone through each report from o3 with the diligence required to spot its solution.
I’m not sure the Gutenberg press would have sparked the literary revolution it did if it had produced only one readable copy for every 100 printed.
The blog post from the researcher is a more interesting read.
Important points here about benchmarking:
o3 finds the kerberos authentication vulnerability in the benchmark in 8 of the 100 runs. In another 66 of the runs o3 concludes there is no bug present in the code (false negatives), and the remaining 28 reports are false positives. For comparison, Claude Sonnet 3.7 finds it 3 out of 100 runs and Claude Sonnet 3.5 does not find it in 100 runs.
o3 finds the kerberos authentication vulnerability in 1 out of 100 runs with this larger number of input tokens, so a clear drop in performance, but it does still find it. More interestingly however, in the output from the other runs I found a report for a similar, but novel, vulnerability that I did not previously know about. This vulnerability is also due to a free of sess->user, but this time in the session logoff handler.
I’m not sure if a signal-to-noise ratio of 1:100 is, uh… great…
This would feel a lot less gross if this had been with an open model like deepseek-r1.
I’m not sure how you’re getting Wallpaper Engine to work on Linux, because it’s not supported on anything other than Windows.
Are you using Wallpaper Engine? If so, you are likely going to keep having issues with your screen blanking while you try to use it, as it’s not supported on Linux.
The article you’re commenting on is about EU grocery store honey being fake
All thoughts are formatted in .docx
Thanks @ryan.gosling.stan
Jokes aside I actually do appreciate that almost all guix packages are verified source and not just copy scripts of already built tarballs.
Guix is awesome!
The nonguix substitute server is down for the fifth straight day, forcing me to rebuild the entire Linux kernel when updating
And you should never use it!
I agree to some degree, but the GNU project doesn’t have a great track record for reliable hosting (Savannah is very prone to going down for long periods of time).
I don’t begrudge better hosting infrastructure from a different non-profit.
As a guix user and package maintainer I’m ecstatic.
I’m so proud of the community for rallying around the needs and pain points of everyone and making this decision. This reduces so many pain points for a guix user and will hopefully smooth out the package maintenance process a great deal. Email is simple, but trying to do code-change communication over it can be very complex and time-consuming.
If you’re curious about functional packaging systems grab guix on your distro and give it a try!
Special shout out to anyone burnt out on Nix lang. Come feel the warm embrace of Scheme’s parentheses. :)
On that front, a request to developers:
Please make sure you include bash completions for your tools.
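For anyone who hasn’t written one, a completion script is just a shell function plus a `complete` registration. A minimal sketch for a hypothetical `mytool` CLI (the tool name, subcommands, and flags here are purely illustrative):

```shell
# Completion function for a hypothetical `mytool` command.
_mytool_completions() {
    local cur="${COMP_WORDS[COMP_CWORD]}"
    # Offer subcommands and flags that match what the user has typed so far.
    COMPREPLY=( $(compgen -W "build test deploy --help --version" -- "$cur") )
}

# Register the function; distros typically pick this file up automatically
# if you ship it under /usr/share/bash-completion/completions/.
complete -F _mytool_completions mytool
```

With this sourced, typing `mytool te<Tab>` fills in `test`. Real tools usually generate these scripts from their argument parser instead of hand-writing them, but even a hand-written one like the above is better than nothing.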
Pre-installing Flatpaks
Did the room just get a bit colder or is it just me
LibreOffice has a database engine and frontend (Base) that’s pretty comparable to Microsoft Access
guix and/or nix
Both are functional package managers, and they manage dependency trees better than Flatpak IMO (the package description languages also mean you can manipulate package definitions at install time much more easily)
If you can’t find a package in guix/nix, then it behooves you to use Flatpak
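To make the “manipulate package definitions” point concrete: in Guix a package is an ordinary Scheme value, so you can inherit from an existing definition and override fields. A rough sketch (the `inherit` form and the stock `hello` package are real Guix; treat the derived package as illustrative):

```scheme
(use-modules (guix packages)
             (gnu packages base))   ; provides the stock `hello` package

;; A package is just a Scheme record: inherit one and override fields.
(define-public my-hello
  (package
    (inherit hello)
    (name "my-hello")))
```

The CLI exposes the same idea as transformation flags, e.g. `guix install hello --with-latest=hello` to swap in the newest upstream release without touching the package definition at all.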
Would it be better to replace Gemini with a local model run through ollama or something?
I have also been done in many times by git-filter-repo. My condolences to the chef.