• 3 Posts
  • 312 Comments
Joined 1 year ago
Cake day: March 22nd, 2024

  • You misinterpreted my, to be fair, vague statement. I meant AA is seemingly a bad source to read about opposition parties like the PKK, because of the obvious conflict of interest.

    I mean, AP is a pretty decent source. It’s a nonprofit coop stretching back to 1846 in a country with, err, could-be-worse press freedom history, while AA has been explicitly state-run since 1920, somewhat akin to VOA, BBC, Al Jazeera or RT, I guess.

    And yes, I know, AP is still an objectively bad source for specific topics, you don’t have to drill that in. So would be whoever shills for the PKK, in some respects. But I’m not playing the game of “they did this and this, they can’t be trusted like them and them!” either. One has to look for conflicts of interest everywhere, but it’s also okay to respect the good work long-running institutions have done (like AA and this article).



  • Interesting source. It’s basically a nationalized Turkish outlet:

    https://en.m.wikipedia.org/wiki/Anadolu_Agency

    After the Justice and Development Party (AKP) took power, AA and the Turkish Radio and Television Corporation (TRT) were both restructured to more closely reflect the administration line. According to a 2016 academic article, “these public news producers, especially during the most recent term of the AKP government, have been controlled by officials from a small network close to the party leadership.”

    Still, the writing is flat in a good way? I have found that reporting from politically captured sources (say, RT) can be conspicuously good, if it’s on an international subject that aligns with their incentives. For instance, Turkey’s AKP is no fan of Netanyahu, hence AA is motivated to produce (seemingly) original reporting like this.


  • brucethemoose@lemmy.world to ADHD memes@lemmy.dbzer0.com · If only people knew
    3 days ago

    At risk of getting more technical, some near-future combination of bitnet-like ternary models, less-autoregressive architectures, taking advantage of sparsity, and models not being so stupidly general-purpose will bring inference costs down dramatically. Like, down to a watt or two on your phone. AI energy cost is a meme perpetuated by Altman so people will give him money, kind of like an NFT scheme.
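
    For what it’s worth, the “ternary” part is easy to illustrate. A toy Python sketch (crude post-hoc quantization, not how bitnet models are actually trained; all names here are mine):

    ```python
    # Illustrative only: with weights restricted to {-1, 0, +1}, the
    # multiplies in a dot product collapse into adds and subtracts,
    # which is a big part of why inference gets so cheap.
    def quantize_ternary(weights):
        """Crude ternarization: one per-vector scale + codes in {-1, 0, 1}."""
        scale = sum(abs(w) for w in weights) / len(weights) or 1.0
        codes = [0 if abs(w) < 0.5 * scale else (1 if w > 0 else -1)
                 for w in weights]
        return scale, codes

    def ternary_dot(scale, codes, x):
        """Dot product with ternary weights: no multiplications needed."""
        acc = 0.0
        for c, xi in zip(codes, x):
            if c == 1:
                acc += xi
            elif c == -1:
                acc -= xi
        return scale * acc

    scale, codes = quantize_ternary([0.9, -1.1, 0.02, 0.7])
    print(codes)  # [1, -1, 0, 1]
    ```

    Real bitnet work learns the ternary weights during training rather than quantizing after the fact, but the inference-cost intuition is the same.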

    …In other words, it’s really not that big a deal. Like, a drop in the bucket compared to global metal production or something.

    The cost of training a model in the first place is more complex (and really wasteful at some ‘money is no object’ outfits like OpenAI or X), but also potentially very cheap. As examples, DeepSeek and Flux were trained with comparatively little electricity. So was Cerebras’s example model.



  • brucethemoose@lemmy.world to ADHD memes@lemmy.dbzer0.com · If only people knew
    3 days ago

    It’s politicized.

    It even works in hindsight. I pointed out that a cherished fan remaster of a TV show made years ago was machine-learning processed, which apparently everyone had forgotten. I got banned from the fandom subreddit under its no-AI rule.

    The ironic thing is that this works in corpo AI slop’s favor, as anti-AI sentiment hurts locally runnable, open-weight models and earnest efforts more than anything.


  • brucethemoose@lemmy.world to ADHD memes@lemmy.dbzer0.com · If only people knew
    3 days ago

    Let’s look at a “worst case” on my PC. Let’s say 3 attempts, 1 main step, 3 controlnet/postprocessing steps, so 64-ish seconds of generation at 300W above idle.

    …That’s about 5 watt-hours. You know, basically the same as using Photoshop for a bit, or gaming for 2 minutes on a laptop.
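    For anyone who wants to check the arithmetic (the figures are the ones from the example above):

    ```python
    # Back-of-envelope check of the generation-energy estimate.
    seconds = 64    # total generation time in the example
    watts = 300     # power draw above idle
    watt_hours = watts * seconds / 3600  # 3600 joules per watt-hour
    print(round(watt_hours, 2))  # 5.33
    ```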

    Datacenters are much more efficient because they batch the heck out of jobs: 60 seconds on a 700 W H100 or MI300X serves many, many generations in parallel.

    Not trying to be critical or anything, I hate enshittified corpo AI, but that’s more-or-less what generation looks like.



  • brucethemoose@lemmy.world to Programmer Humor@programming.dev · Prompt Engineer
    10 days ago

    Funny thing is, correct JSON is easy to “force” with grammar-based sampling (aka the model literally can’t output invalid JSON) + completion prompting (aka start the response with the correct structure and let it fill in what’s left, a feature now deprecated by OpenAI), but LLM UIs/corporate APIs are kinda shit, so no one does that…
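
    To sketch the idea: at every step, the model’s ranked candidates get filtered so only tokens the grammar allows survive. Real implementations (e.g. llama.cpp’s GBNF grammars) do this at the token-logit level; this is just a toy state machine over a tiny flat-JSON vocabulary, and all the names are made up:

    ```python
    # Toy grammar-constrained sampling: the "model" keeps proposing junk,
    # but only grammatically legal tokens can actually be emitted.
    VOCAB = ['{', '}', '"k"', ':', '0', ',', 'banana']

    def allowed(prefix):
        """Tokens that keep the prefix extendable to a valid flat JSON object."""
        if not prefix:
            return {'{'}
        last = prefix[-1]
        if last == '{':
            return {'"k"', '}'}
        if last == '"k"':
            return {':'}
        if last == ':':
            return {'0'}
        if last == '0':
            return {',', '}'}
        if last == ',':
            return {'"k"'}
        return set()  # '}' closes the object; nothing may follow

    def constrained_generate(ranked_candidates):
        """Greedily take the highest-ranked *allowed* token each step."""
        out = []
        for step in ranked_candidates:
            ok = allowed(out)
            if not ok:
                break
            token = next((t for t in step if t in ok), None)
            if token is None:  # model offered nothing legal; grammar decides
                token = sorted(ok)[0]
            out.append(token)
        return out

    # The model ranks 'banana' first every time; the grammar filter wins.
    steps = [['banana', '{'], ['banana', '"k"'], ['banana', ':'],
             ['banana', '0'], ['banana', '}']]
    print(''.join(constrained_generate(steps)))  # {"k":0}
    ```

    With a full JSON grammar instead of this toy one, the same masking trick guarantees parseable output no matter what the model “wants” to say.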

    A conspiratorial part of me thinks that’s on purpose. It encourages burning (read: buying) more tokens to get the right answer, encourages using big models (where smaller, dumber, (gasp) prompt-cached open-weight ones could get the job done), and keeps the users dumb. And it fits the Altman narrative of “we’re almost at AGI, I just need another trillion to scale up with no other improvements!”


  • A fraction of his followers might. It’s a signal, but an ambiguous one. And then influencers can point fingers at each other as ‘hysterical’ to rake in big bucks.

    And yeah, tabloid sites like The Daily Beast jump on the engagement train. They can’t help themselves. If they don’t, well, they’ll move closer to shutting down like the ‘boring,’ long-winded classical journalism that gets no engagement anymore.

    Our information environment is so screwed. It baffles me that pundits, researchers, political leaders and such ponder what’s going wrong when they walk to work and see every single human being doomscrolling this trash away, all while the companies hosting it become the richest on Earth.


  • brucethemoose@lemmy.world to Linux@lemmy.ml · Are my DVD/VOB files broken?
    19 days ago

    You need software (like MakeMKV) to read the metadata from the DVD and properly chop up or combine the video files. It should be able to export without any re-encoding.

    On a separate note, if you want to shrink the files, I’d recommend av1an if you’re comfortable with a little CLI and want the best possible encoding efficiency. In a nutshell, it chunks videos and encodes the chunks in parallel, hence it’s great for really long files like movies/TV on DVDs.









  • brucethemoose@lemmy.world to Comic Strips@lemmy.world · Anonymity
    1 month ago

    The root cause is billionaires.

    There’s no stopping trolls completely, but they were self-limiting when the internet was more disaggregated and a little less accessible. It’s greedy Big Tech, led by a few people, that weaponized them into world-scale attention farms.

    Advertising is a huge enabler, yeah, but I have to wonder if they could’ve leveraged other schemes back then, like the Patreon/OnlyFans model, crypto, or whatever.