• nagaram@startrek.website · 2 days ago

    For simple productivity like Copilot or text gen like ChatGPT, it absolutely is doable on a local GPU.

    Source: I do it.

    Sure, I can’t do auto-running simulations to find new drugs or protein sequencing or whatever. But it helps me code. It helps me digest software manuals. That’s honestly all I want.

    Also, aren’t massive compute projects like the @home ones a good thing?

    Local LLMs run fine on a 5-year-old GPU, a 3060 12 GB, and I’m getting performance on par with cloud-run models. I’m upgrading to a 5060 Ti just because I want to play with image gen.
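
    If anyone wants a concrete starting point, here’s a minimal sketch with llama-cpp-python. This isn’t my exact setup, just one common way to do it: the model path and manual file are placeholders, and you’d need a CUDA-enabled build of the library for the GPU offload to actually kick in.

    ```python
    from llama_cpp import Llama

    # Placeholder path -- any quantized GGUF that fits in 12 GB of VRAM
    # (e.g. a 7B/8B model at Q4) runs comfortably on a 3060.
    llm = Llama(
        model_path="models/llama-3-8b-instruct.Q4_K_M.gguf",
        n_gpu_layers=-1,   # offload all layers to the GPU
        n_ctx=4096,        # context window; raise it if VRAM allows
        verbose=False,
    )

    # The "digest a software manual" use case: paste in a chunk and ask about it.
    # manual_excerpt.txt is a hypothetical file you'd supply yourself.
    manual_chunk = open("manual_excerpt.txt").read()

    out = llm.create_chat_completion(
        messages=[
            {"role": "system",
             "content": "Answer questions using only the provided manual excerpt."},
            {"role": "user",
             "content": manual_chunk + "\n\nWhat does the --dry-run flag do?"},
        ],
        max_tokens=256,
    )
    print(out["choices"][0]["message"]["content"])
    ```

    Nothing fancy, but that’s the whole point: one pip install, one downloaded model, and the coding/manual-reading workflow runs entirely on the local card.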