Hi all, a shout-out for assistance. I’m considering hosting a Lemmy instance (assuming I can pass the wife test on costs) and I’m looking for some guidance on specs.

Can anyone who’s currently hosting an instance (or who knows the inner workings of one) please reply with:

  • specs on the hardware / VPS that’s hosting your instance
  • how many users / posts that’s supporting
  • what the system load looks like with the above
  • if locally hosting, the type of bandwidth requirements you’re seeing

I previously posted this in the wrong community, and one of the responses asked how many users I’m expecting. To preemptively answer - I don’t know. I’m just trying to get an idea of relative sizing.

Thank you!!

  • moira@femboys.bar · 1 year ago

    I’m running an instance for two people in a pretty small LXC container on my home server: 1 vCore, 512MB of RAM, and 8GB of storage. Currently it uses around 5% of CPU, ~250MB of RAM (+260MB of swap), and ~2GB of storage (split nearly 50/50 between pictures and postgres). In terms of network traffic I see an average of 20kB/s, though that depends on how many communities you’re subscribed to.

    My home server runs on an i3-4150 with 16GB of RAM and a couple of SSDs, using Proxmox VE as the hypervisor.

    edit: typo
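
    On Proxmox, a container with roughly those specs can be created with `pct`; a sketch only, where the VMID, template name, and storage pool are placeholders for your own setup:

    ```shell
    # Hypothetical pct invocation for a small Lemmy LXC:
    # 1 vCore, 512MB RAM, 512MB swap, 8GB root disk.
    # VMID 110, the Debian template, and "local-lvm" are examples.
    pct create 110 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
      --hostname lemmy \
      --cores 1 \
      --memory 512 \
      --swap 512 \
      --rootfs local-lvm:8 \
      --net0 name=eth0,bridge=vmbr0,ip=dhcp \
      --unprivileged 1
    ```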

      • moira@femboys.bar · 1 year ago

        Proxmox itself is pretty lightweight, and yes, I’m also running other VMs and LXC containers (not many, about 9 containers with some light services: a TeamSpeak server, a couple of bots, Deluge, HestiaCP, Prometheus, k3s for testing, and a “VDI” in a VM). Actually, I’m running Docker inside LXC containers. Not the prettiest way to do it, but it works fine.

        • gravitas_deficiency@sh.itjust.works · 1 year ago

          Fair enough. There are no rules for homelab; do what you want!

          Out of curiosity, are you running a repurposed 1L OEM box? I’ve picked up a handful of those for dirt cheap, and they’re kinda fun to play around with!

          • ThorrJo@lemmy.sdf.org · 1 year ago

            Not the one you were replying to, but I’m two-thirds of the way through switching my servers over to the 1L form factor and am liking it. It’s amazing how much compute can be crammed into a tiny space these days.

          • moira@femboys.bar · 1 year ago

            Close enough! I’m using an HP Z230 SFF. It’s not as small as those 1L USFF boxes, but it’s pretty practical for a small home server: there are a couple of PCIe slots to expand, and it can hold 2x HDD (if you count replacing the 5.25″ optical drive with a tray) or multiple SSDs wherever they fit. I’m pretty happy with this build; day-to-day it draws about 18-50W from the wall, depending on load.

      • moira@femboys.bar · 1 year ago

        I’m using HestiaCP to host some websites anyway, so I just added a new nginx template to create a reverse proxy to the lemmy + lemmy-ui containers.
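
        For reference, the core of such a template is roughly the following (a sketch; the server name is a placeholder, and ports 8536/1234 assume the stock docker-compose defaults for lemmy and lemmy-ui):

        ```nginx
        # Hypothetical reverse proxy for lemmy + lemmy-ui.
        server {
            listen 443 ssl;
            server_name example.com;  # placeholder

            location / {
                # ActivityPub/API requests go to the backend, everything else to the UI
                set $proxpass "http://127.0.0.1:1234";
                if ($http_accept ~ "^application/.*$") {
                    set $proxpass "http://127.0.0.1:8536";
                }
                if ($request_method = POST) {
                    set $proxpass "http://127.0.0.1:8536";
                }
                proxy_pass $proxpass;
                proxy_set_header Host $host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }
        ```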

        • Shit@sh.itjust.works · 1 year ago

          I really want to figure out if it’s possible to stick it behind Cloudflare or something. I would rather not expose any IP address directly to the internet. I’m leaning toward just setting up a reverse proxy on a cheap cloud instance back to my home.

          • moira@femboys.bar · 1 year ago

            My instance is actually behind Cloudflare and it works fine, but remember that it’s still possible to “expose” your server’s IP due to federation, as your server talks to other servers directly (that traffic won’t go through Cloudflare). So if you’re paranoid about that, I’d recommend setting up a WireGuard tunnel to a cloud instance and forwarding the traffic that way, or just setting up Lemmy on that instance.
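
            A minimal version of that tunnel, sketched; the keys, addresses, and hostname are placeholders, and it assumes the cloud box DNATs ports 80/443 down the tunnel:

            ```ini
            ; /etc/wireguard/wg0.conf on the home server (sketch; keys/IPs are placeholders)
            [Interface]
            PrivateKey = <home-server-private-key>
            Address = 10.0.0.2/24

            [Peer]
            ; The cheap cloud instance with the public IP; it forwards 80/443
            ; down this tunnel so only its IP is ever exposed.
            PublicKey = <cloud-instance-public-key>
            Endpoint = cloud.example.com:51820
            AllowedIPs = 10.0.0.1/32
            PersistentKeepalive = 25
            ```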

  • terribleplan@lemmy.nrd.li · 1 year ago

    To answer what I think you’re getting at: Lemmy scales based on two things:

    1. Database size (and write volume) scales mostly on what communities are being federated to you. Unless you are .world the volume of remote content is going to massively outweigh local content. On my (mostly) single-user instance I have found this to be the same with Pictrs as well, as it is mostly eating storage to store federated thumbnails.
    2. Database read load scales mostly on the number of users you have. For a single-user instance this is pretty minimal. For an instance like .world (with thousands of users) I imagine it is significant and for such an instance you would look at scaling postgres to have read-only replicas to handle the load.
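
    Both axes are easy to watch on your own instance; a sketch assuming the stock docker-compose layout (the container name, db user/name, and volume path may differ on your setup):

    ```shell
    # How big has the database grown?
    docker exec lemmy_postgres_1 psql -U lemmy -c \
      "SELECT pg_size_pretty(pg_database_size('lemmy'));"
    # And the federated thumbnails in pictrs?
    du -sh volumes/pictrs
    ```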

    ~18 hours ago I wrote

    My instance has been running for 23 days, and I am pretty much the only active local user:

    7.3G    pictrs
    5.3G    postgres
    

    I may have a slight Reddit Lemmy problem

    As of right now

    7.5G    pictrs
    5.7G    postgres
    

    So my storage is currently growing at around 1G per day, though pictrs is mostly cached thumbnails so that should mostly level out at some point as the cache expires.
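
    That growth rate is easy to track from two day-apart `du -sk` snapshots; a quick sketch with made-up sample numbers:

    ```shell
    # Estimate daily growth from two `du -sk` snapshots taken a day apart (KiB).
    # The sample values below are hypothetical, not measurements.
    day1_kib=13212672   # pictrs + postgres yesterday (~12.6G)
    day2_kib=14260634   # same directories today (~13.6G)
    growth_mib=$(( (day2_kib - day1_kib) / 1024 ))
    echo "grew ${growth_mib} MiB in the last day"   # prints: grew 1023 MiB in the last day
    ```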

    To answer your stated question: I run a single-user instance on a mini PC with 32G of RAM (using <2G including all Lemmy things such as pg, pictrs, etc., and any OS overhead) and a quad-core i5-6500T (CPU load usually around 0.3). 32474 posts, 210065 comments. I don’t have good numbers for bandwidth, but my frp setup in general is averaging ~1Mb/s or so for everything including Lemmy.

    You could probably easily run Lemmy on a Pi so long as you use an external drive for storage.

    • Haakon@lemmy.sdfeu.org · 1 year ago

      But it probably couldn’t have hosted lemmy.world. The answer depends on what the plans are for the instance, I suppose.

  • InverseParallax@lemmy.world · 1 year ago

    Storage seems to be the main requirement, so even a Raspberry Pi 4 should be fine (though you’ll want the 4GB RAM model); you just want a large SSD attached somehow.

    IIRC it doesn’t really like NFS either.

  • b3nsn0w@pricefield.org · 1 year ago

    i’m currently hosting an instance for about 20 users on a dual-core epyc-7002-based cloud vm with 2 gb of ram and, currently, a 50 gb ssd volume. memory tends to sit around halfway and total disk usage is 14 gb, of which 4.5 gb is the picture server and 2.3 gb the database for now; i’m monitoring both in case upgrades are needed. cpu usage is quite low, usually sits between 5-10% and never went above 25%. it was highest during a spambot attack when they tried to register hundreds of accounts. speaking of, enable captcha (broken on 0.18.0) or set registrations to approve-only.

    i’m paying about $10-15 per month currently, which includes a cache to keep the instance snappy.

      • manitcor@lemmy.intai.tech · 1 year ago

        not terrible, the db is growing about 100mb a day; running about 20 days now and have 4.4gb in images.

        thinking about a mod to move images off to IPFS.

        • sudneo@lemmy.world · 1 year ago

          I read about using S3 storage for pictures. I’m planning to maybe use Backblaze for that, or, if I end up getting the beefy server, use a separate MinIO instance. This is also great for scaling horizontally in the future, maybe.

  • Soullioness@atosoul.asuscomm.com · 1 year ago

    Mine is running on a low-performance VM on my mini PC under my bed lol. I’ve had absolutely no lag or errors; no problems at all, very smooth.

  • Ducks@ducks.dev · 1 year ago

    It’s pretty lightweight. I’ve given each container 1/3Gi of memory and a 1-CPU limit with low requests, utilizing a Kubernetes HPA to scale containers under load, up to 4 replicas. It only scales when a user takes large actions (like subscribing at once to hundreds of communities that are new to the instance), but once the initial federation begins it seems to quickly scale back down. The biggest bottleneck is pictrs since it is stateful.
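
    As a sketch, an HPA along those lines might look like this (the 4-replica cap follows the description above; the deployment name and 70% CPU target are assumptions):

    ```yaml
    # Hypothetical HPA for the lemmy backend deployment.
    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: lemmy
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: lemmy
      minReplicas: 1
      maxReplicas: 4
      metrics:
        - type: Resource
          resource:
            name: cpu
            target:
              type: Utilization
              averageUtilization: 70
    ```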

    So far the database and pictrs is only about 2Gi of storage but I’ve allocated 25Gi to each since I have a lot to spare at the moment.

    I have to play with the HPA more since I’m not happy yet with my settings. I have 2 users and 1 bot on my instance.

    I’d like to start contributing to Lemmy’s codebase so I wanted to host my own instance to learn the inner workings.

    My postgres is a single replica at the moment, but I may scale that if stability becomes a problem.

  • hitagi@ani.social · 1 year ago

    I’m running mine (ani.social) on 4 cores and 16GB RAM for 17 users as of now. There aren’t a lot of posts/comments coming from us yet, but there are a couple of images uploaded already.

    The current load average is only 0.10, the postgres DB is at 1.6 GB, and pictrs is only at 430 MB. The database has been growing a lot faster than expected, though it seems manageable.

    • Morethanevil@lmy.mymte.de · 1 year ago

      I would like to be able to select more than one community when I create a post; it could help smaller instances get more activity.

      At the moment only crossposts are possible 🤔

      • hitagi@ani.social · 1 year ago

        That sounds like a nice feature, but perhaps with a limit on how many communities you can post to at once, to avoid abuse from bad actors.

  • Dax87@forum.stellarcastle.net · 1 year ago

    I’m running my instance as a containerized app on an i9-12900H with 64GB DDR4 RAM, a 128GB Intel Optane as a swap drive (my mobo maxes out at 64GB RAM), and a SATA SSD. My bottleneck is my internet, which is stuck on 5G home internet. Serving any service behind CGNAT has been a challenge, but thanks to ZeroTier and a VPS reverse proxy, it’s been possible.

    • redcalcium@c.calciumlabs.com · 1 year ago

      a 128gb Intel optane as a swap drive

      Interesting, does it actually help when your system runs out of memory? My system was completely unusable when it started swapping at one point (some app was leaking memory and exhausted the RAM), so I decided to turn off swap (I’d rather it crash than have an unusable system).

      • Dax87@forum.stellarcastle.net · 1 year ago

        It won’t match RAM speeds, but it’s supposed to be significantly faster than a normal SSD. The caveat is that it’s not functioning the way Optane Memory is supposed to; I just opted to make the whole drive a swap partition, since that was simpler to do.

        I’ve never used enough of my RAM to warrant heavy swap usage, though. Swappiness is at its default of 60.
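
        (If anyone wants to tune that, swappiness is just a sysctl; the value below is an example, not a recommendation:)

        ```
        # /etc/sysctl.d/99-swappiness.conf: lower values make the kernel
        # less eager to swap; 60 is the default, 10 is just an example.
        vm.swappiness = 10
        ```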

  • borlax@lemmy.borlax.com · 1 year ago

    Mine is running on a VPS with 1 vCPU and 1GB of RAM; it’s mostly okay except for going OOM on occasion. Luckily it’s just me on this instance right now lol. You may want to opt for more RAM depending on your planned usage, though.