• laenurd@lemmy.lemist.de · 1 year ago

    “Retard” who bought Nvidia here.

    I know it’s 4chan banter and I generally agree with anon’s points, but here goes:

    • ROCm wasn’t a thing when I bought. You need(ed) NVidia for machine learning and other GPGPU stuff
    • I have yet to hear from anyone with an 8GB card who maxes out that memory on current-gen games at 1080p
    • apart from frame generation, you DO get DLSS 3 features on 3000-series cards
    • PrivateNoob@sopuli.xyz · 1 year ago

      “Based” who bought AMD here.

      ROCm is still in its infancy. It literally isn’t supported for my 6700 XT, so I had to go back to Google Colab to work on my AI thesis project.

      • gaiussabinus@lemmy.world · 1 year ago

        ROCm support makes me angry, but NVidia fumbled their drivers too. There is no good option, so pick your poison. I run ROCm right now with a workaround on my 6900 XT to get the card detected, and I have also gone from 10 it/s down to 4 or even 2 with updates. Shit sucks.
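        The commenter doesn’t say which workaround they used; a common one for RDNA2 cards that ROCm doesn’t officially recognize is the HSA_OVERRIDE_GFX_VERSION environment variable, which makes the runtime treat the card as a supported gfx1030 part. A minimal HIP C++ sketch, assuming that’s the trick in play:

        ```cpp
        // Hedged sketch: assumes the workaround is the common
        // HSA_OVERRIDE_GFX_VERSION override. It must be in the environment
        // before the HIP runtime initializes (lazily, on the first API call).
        #include <cstdio>
        #include <cstdlib>
        #include <hip/hip_runtime.h>

        int main() {
            // Report the card as gfx1030 (RX 6800/6900 XT class), the usual
            // override for RDNA2 cards missing from ROCm's official list.
            setenv("HSA_OVERRIDE_GFX_VERSION", "10.3.0", 1);

            int count = 0;
            if (hipGetDeviceCount(&count) != hipSuccess || count == 0) {
                fprintf(stderr, "still no ROCm device detected\n");
                return 1;
            }
            hipDeviceProp_t prop;
            hipGetDeviceProperties(&prop, 0);
            printf("device 0: %s (arch %s)\n", prop.name, prop.gcnArchName);
            return 0;
        }
        ```

        In practice most people set the variable in the shell (e.g. HSA_OVERRIDE_GFX_VERSION=10.3.0 python train.py) rather than in code, so it applies to whatever framework runs on top of ROCm.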

    • AlmightySnoo 🐢🇮🇱🇺🇦@lemmy.world · 1 year ago (edited)

      ROCm wasn’t a thing when I bought. You need(ed) NVidia for machine learning and other GPGPU stuff

      Same for me: I had to buy an Alienware laptop with an NVIDIA GPU during my PhD for some GPGPU coding, since CUDA was pretty much the only choice back then and OpenCL was a joke in terms of performance and wasn’t getting much love from GPU manufacturers. But right now I know for sure I won’t ever buy an NVIDIA GPU again. ROCm works wonderfully well even on an APU (in my case, a Radeon 680M integrated GPU), and it’s also future-proof, since you’re almost writing CUDA code: if you ever switch back to an NVIDIA GPU, you’ll mostly just have to replace “hip” with “cuda” in your code, plus some magic constants (warp length in particular).
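      To make the “hip → cuda” point concrete (an editorial sketch, not the commenter’s code): a trivial HIP kernel is nearly line-for-line CUDA once the API prefix is renamed, and warp length is the main magic constant to watch, since it is 32 on NVIDIA and RDNA but 64 on AMD’s GCN/CDNA wavefronts.

      ```cpp
      // HIP vector add; renaming the hip* API calls to cuda* (and including
      // cuda_runtime.h instead) yields a valid CUDA program, which is the
      // portability the commenter describes.
      #include <cstdio>
      #include <hip/hip_runtime.h>

      __global__ void vadd(const float* a, const float* b, float* c, int n) {
          int i = blockIdx.x * blockDim.x + threadIdx.x;  // identical in CUDA
          if (i < n) c[i] = a[i] + b[i];
      }

      int main() {
          const int n = 1 << 20;
          float *a, *b, *c;
          // hipMallocManaged -> cudaMallocManaged: porting is mostly renaming.
          hipMallocManaged(&a, n * sizeof(float));
          hipMallocManaged(&b, n * sizeof(float));
          hipMallocManaged(&c, n * sizeof(float));
          for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

          vadd<<<(n + 255) / 256, 256>>>(a, b, c, n);  // same launch syntax
          hipDeviceSynchronize();                      // -> cudaDeviceSynchronize

          printf("c[0] = %f\n", c[0]);  // expect 3.0
          hipFree(a); hipFree(b); hipFree(c);          // -> cudaFree
          return 0;
      }
      ```

      Kernels that index by warp (shuffles, reductions) should read the built-in warpSize rather than hard-coding 32; that is the “magic constant” that bites when moving between vendors.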

    • RealFknNito@lemmy.world · 1 year ago

      Don’t put it in quotes. You know what you did.

      8GB of VRAM is fine for right now. Obviously nobody is complaining at 1080p, but not a lot of people are buying top-of-the-line GPUs for standard 1080p. I run 1440p, and I know a good chunk of people hitting the 240 FPS limit who are now upgrading resolutions. 8GB of VRAM is like 16GB of RAM: yes, most modern titles will work flawlessly, but that isn’t going to be the case in the very near future, especially if you plan on upgrading resolution.

      DLSS isn’t something to scoff at, I’ll give green credit there, but relying on it to make up for the shortcomings seems like cope. They made a stupid call in order to slash prices and you’re going to feel it later even with DLSS.

      If you need Nvidia for niche applications to do work and make money, get that bread, king. But if you’re just a gaming enthusiast, there’s no reason to subject yourself to a company that fucking hates you.