• 0 Posts
  • 9 Comments
Joined 7 months ago
Cake day: February 6th, 2025

  • Multi-cloud is far from trivial, which is why most companies… don’t.

    Even if you are multi-cloud, you will be egressing data from one platform to another and racking up large bills (imagine putting CloudFront in front of a GCS endpoint lmao), so you are incentivized to stick to a single platform. I don’t blame anyone for staying single-cloud, given the barriers the providers put up and how difficult maintaining your own infrastructure is.

    Once you get large enough to afford tape libraries, then yeah, having your own offsite storage for large backups makes a lot of sense; but otherwise the convenience and reliability (when AWS isn’t nuking your account) of managed storage is hard to beat. Cold HDDs are not great, and M-DISC is pricey.


  • In this guy’s specific case, it may be financially feasible to back up onto other cloud solutions, for the reasons you stated.

    However, public cloud is used for a ton of different things. If you have 4 TiB of data in Glacier, you will be paying through the absolute nose to pull that data down into another cloud; highway-robbery prices.

    Further, as soon as you talk about something more than just code (say: UGC, assets, databases), the amount of data needing to be “egressed” from the cloud balloons, and so does the price.
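
    As a rough back-of-envelope sketch of the 4 TiB case (the per-GiB rates below are illustrative assumptions, not current AWS pricing — check the provider’s pricing page):

```python
# Back-of-envelope cost to pull 4 TiB out of Glacier into another cloud.
# Both rates are ASSUMED for illustration; real pricing varies by
# region, tier, and retrieval speed.
data_gib = 4 * 1024  # 4 TiB expressed in GiB

egress_per_gib = 0.09     # assumed internet egress rate, $/GiB
retrieval_per_gib = 0.01  # assumed Glacier standard retrieval, $/GiB

total = data_gib * (egress_per_gib + retrieval_per_gib)
print(f"~${total:.0f} just to move the data out once")  # ~$410
```

    Even at these conservative assumed rates, a one-time migration of a fairly modest archive runs into hundreds of dollars, which is the lock-in incentive being described.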



  • The recent boom in neural net research will have real applicable results that are genuine progress: signal processing (e.g. noise removal), optical character recognition, transcription, and more.

    However, the biggest hype area, with what I see as the smallest real return, is the huge-model LLM space, which basically tries to portray AGI as just around the corner. LLMs will have real applications in summarization, but otherwise they largely just generate asymptotically plausible babble: very good for filling the Internet with slop, not actually useful for replacing all the positions OAI et al. need it to replace (for their funding to be justified).



  • Writing tests is a good example. It’s not great at writing tests, but it is definitely better than the average developer when you take the probability of them writing tests in the first place into account.

    Outside of everything else discussed here, this is something I disagree with on a fundamental level: flawed tests are worse than no tests, IMO.
    Not to get too deep into the very contentious space of testing in development, but when it comes to automated testing, I think we’re better off with more rigorous[1] testing instead of just chasing test-coverage metrics.


    1. Validating tests through chaos or mutation (mutagen) testing; or model verification (e.g. Kani) ↩︎
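
    A toy illustration of the point above (hypothetical code, not from this thread): a test suite can hit 100% line coverage of a buggy function while asserting almost nothing, which is exactly the failure mode mutation testing is designed to expose.

```python
def clamp(x, lo, hi):
    # Deliberately buggy: returns lo instead of hi when x > hi.
    if x < lo:
        return lo
    if x > hi:
        return lo  # <-- bug a coverage-only suite never notices
    return x

# "Coverage-chasing" tests: every line runs, the suite is green.
assert clamp(5, 0, 10) == 5
assert clamp(-1, 0, 10) == 0
assert clamp(99, 0, 10) is not None  # passes despite the wrong result

# The rigorous assertion mutation testing pushes you toward would
# fail here, exposing the bug:
bug_exposed = clamp(99, 0, 10) != 10
```

    The weak suite gives a false sense of safety; a mutation run (or a model checker for the Rust equivalent) would report that the `x > hi` branch can be mangled without any test failing.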




  • I don’t think it’s hyperbole to say a significant percentage of Git activity happens on GitHub (and other “foundries”), which are themselves a far cry from efficient.

    My ultimate takeaway on the topic is that we’re stuck with Git’s very counterintuitive porcelain and merely satisfactory plumbing, regardless of performance or efficiency; whereas if Mercurial had won out, we’d still have its better interface (and, IMO, workflow), and any performance problems could have been addressed by a rewrite in C (or the Rust one that is so very slowly happening).