

No, because the thing they are naming is “The Github Dictionary”; they’re not putting scare quotes around the word “dictionary” to imply that what they’ve written is not really a dictionary.
“Scare quotes” definitely precede Austin Powers, though that may have spurred a rise in popularity of the usage. (Also, “trashy people never saw Austin Powers” is honestly a pretty weird statement, IMO.)
That said, in this case, arguably the quotes are appropriate, because “the github dictionary” isn’t something that happened (i.e. a headline), but a thing they’ve made up.
Most of those comments are actually just random people arguing about the merits of the experiment, not continued discussion with the bot.
Also, the bot is supposed to be able to run builds to verify its work, but is currently prevented from doing so by a firewall rule they’re trying to fix, so its feedback is limited to what the comments provide. Humans wouldn’t do great in that scenario either. (Not to say the AI is doing “great” here, just that we’re not actually seeing the best-case scenario yet.)
I’m addressing the bit that I quoted, saying that an interpreted language “must” have valid semantics for all code. I’m not specifically addressing whether or not JavaScript is right in this particular case of min().
…but also, what are you talking about? Python throws a type error if you call min() with no argument.
“Without one, the run time system, must assign some semantics to the source code, no matter how erroneous it is.”
That’s just not true; as the comment above points out, Python also has no separate compilation step, and yet it did not adopt this philosophy. Interpreted languages were common before JavaScript; in fact, most LISP variants are interpreted, and LISP is older than C.
Moreover, even JavaScript does sometimes throw errors, because sometimes code is simply not valid syntactically, or has no valid semantics even in a language as permissive as JavaScript.
So Eich et al. absolutely could have made more things invalid, despite the risk that end-users would see the resulting error.
So…like an old-fashioned camera iris?
All the others are not very butthole-ish, though.
There are definitely more experienced programmers using it. I can’t find the post at the moment, but there was a recent-ish blog post citing a bunch of examples. [edit: found it: https://registerspill.thorstenball.com/p/they-all-use-it ]
Personally, I don’t use AI much, but I do occasionally experiment with it (for instance, I recently gave Claude Sonnet the same live-coding interview I give candidates for my team; it…did worse than I expected, tbh). The experimenting is sufficient for me to recognize these phrases.
It’s not in C, if that’s what you mean.
It’s a “stream manipulator” function that not only inserts a newline, it also flushes the stream.
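Roughly, a minimal sketch of the difference (the stream and text here are just placeholders):

```cpp
#include <iostream>

int main() {
    std::cout << "hello" << std::endl;           // writes '\n', then flushes the stream
    std::cout << "hello" << '\n' << std::flush;  // equivalent to the line above
    std::cout << "hello\n";                      // newline only; no forced flush
}
```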
Probably more so for expressing the opinion so strongly without actually knowing any of the three languages.
Edit: I’m just guessing why a different comment got downvotes. Why am I getting downvotes?
Doesn’t the first edition use K&R style parameter lists and other no-longer-correct syntax?
You don’t have to imagine it; you can browse the Linux Kernel mailing list!
That’s called a mailing list.
/s
I think C compilers generally prefer to keep the stack intact for debugging and such, at least at lower optimization levels.
Okay, yeah, I was indeed reading your original reply as a criticism of one of the people involved (presumably the security researcher), rather than as a criticism of the post title. Sorry for misunderstanding.
Apparently GCC does indeed do tail-call optimization at -O2: https://gcc.gnu.org/onlinedocs/gcc/Optimize-Options.html#index-foptimize-sibling-calls
But in that case, I’m not sure why the solution to the denial of service vulnerability isn’t just “compile with -foptimize-sibling-calls.”
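In case it helps, here’s a toy sketch of what that flag optimizes (my own example, not code from the post in question): the recursive call is the function’s final action, so GCC can reuse the current stack frame instead of pushing a new one.

```cpp
// toy.cpp: a "sibling call" is a call in tail position, i.e. the function's last action
long long sum_to(long long n, long long acc) {
    if (n == 0) return acc;
    return sum_to(n - 1, acc + n);  // tail call: the frame can be reused
}

int main() {
    // Deep enough to overflow a typical ~8 MB default stack if every call keeps its own frame
    return sum_to(50'000'000, 0) > 0 ? 0 : 1;
}
```

Compiled with g++ -O2 toy.cpp (or with -foptimize-sibling-calls added at a lower level), this runs in constant stack space; without the optimization, the same depth can blow the stack, which is exactly the denial-of-service scenario being discussed.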
…what is your point? Some software (in a language that doesn’t have tail-recursion optimization) used recursion to handle user-provided input, and indeed it broke. Someone wrote in to explain that that’s a potential vulnerability, and the author agreed and fixed it. Who here is misunderstanding how computers implement recursion?
It’s valid usage if you go waaay back, i.e. centuries. You also see it in some late 19th/early 20th century newsprint and ads.