I’m trying to fix this annoying slowness when posting to larger communities. (Just try replying here…) I’ll be doing some restarts of the Docker stack and nginx.

Sorry for the inconvenience.

Edit: Well, I’ve moved nginx from running in a Docker container to running on the host, but that hasn’t solved the posting slowness…

  • mo_ztt ✅@lemmy.world · 1 year ago

    Hey, I just want to echo what everyone else is saying - thanks much for hosting + all the efforts to keep things working well. It’s appreciated 👍

  • 00Lemming@lemmy.world · 1 year ago

    Godspeed to you over the coming days man. Really appreciate you putting this together and the extra work it takes when tackling something like this (both being new to the platform and the tech still being in relative infancy) - not to mention the crazy scaling happening. I will definitely be pitching in to help make sure the server stays up!!

  • veroxii@lemmy.world · 1 year ago

    Any progress on this? I’ve been thinking about it too. A couple of ideas:

    Too many indexes needing to update when an insert occurs?

    Are there any triggers running upon insert?

    Unlikely, but could there be a disk-write bottleneck? Might be worth running some benchmarks from the VM shell.
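
    A quick way to check the first two ideas, assuming the database is PostgreSQL (which Lemmy uses) and that posting writes to a table named `comment` (the table name is assumed here for illustration):

    ```sql
    -- Non-internal triggers that fire on the (assumed) comment table
    SELECT tgname
    FROM pg_trigger
    WHERE tgrelid = 'comment'::regclass
      AND NOT tgisinternal;

    -- Indexes that must all be updated on every insert into that table
    SELECT indexname, indexdef
    FROM pg_indexes
    WHERE tablename = 'comment';
    ```

    If the second query returns a long list, each insert pays for every one of those index updates.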

    • veroxii@lemmy.world · 1 year ago

      Another thought: how many db connections do you have? Could the pool be starved because so many selects are running, so inserts have to wait for a free connection?
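
      To see whether that’s happening, PostgreSQL’s statistics views show connection usage at a glance (a sketch; run in psql against the Lemmy database):

      ```sql
      -- How many connections are in each state (active, idle, idle in transaction, ...)
      SELECT state, count(*)
      FROM pg_stat_activity
      GROUP BY state;

      -- The configured connection limit, for comparison
      SHOW max_connections;
      ```

      If the total count sits near max_connections with many active selects, connection starvation is a plausible cause of the slow inserts.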

  • zikk_transport2@lemmy.world · 1 year ago

    Hey. From my own experience - Nginx is awesome and fast when it is working, but the more you want from it, the more difficult it becomes.

    Give Caddy a try. This reverse proxy has always been excellent for me. It has HTTP/3 (QUIC) support, automatic ACME certificate handling, and a configuration that is excellent in its simplicity and user-friendliness.

    Caddy is not a good choice if you need a raw TCP/UDP proxy, though. It’s an HTTP/HTTPS proxy only.

    • God@lemmy.world · 1 year ago

      Someone said this about Caddy: “it injects advertising headers into your responses”. Is this true? I don’t know anything about Caddy, but that doesn’t sound too good lol (to be fair, it could be misinformation).

        • zikk_transport2@lemmy.world · 1 year ago

        Never heard of that. It’s an open-source project, free to use.

        In case you want to understand why it’s good, check out a Caddyfile example. Just specify something like this:

        example.com {
          reverse_proxy backend:1234
        }
        

        And that’s it! It automatically binds to 0.0.0.0:80 only to redirect to 0.0.0.0:443, and it uses ACME to add TLS, all behind the scenes.

        Add one more line to the example and it adds compression.
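
        Concretely, that one line is Caddy’s encode directive (domain and backend names carried over from the example above):

        ```
        example.com {
          encode zstd gzip
          reverse_proxy backend:1234
        }
        ```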

        I’ve been using it for my self-hosted stuff for prob 1-2 years and it kept working flawlessly all the time. Very satisfied.

          • God@lemmy.world · 1 year ago

          Sounds very cool. Does running with that file also handle the SSL certificate and validation automatically? Or are there extra steps?

            • zikk_transport2@lemmy.world · 1 year ago

            Everything is automated. As long as you know how ACME works (port 80 must be accessible from the internet), everything is done in the background, including TLS (SSL) certificate maintenance.

            • Perhyte@lemmy.world · 1 year ago

            A minimal config like that will default to provisioning (and periodically renewing) an SSL certificate from Let’s Encrypt automatically, and if there are any issues doing so it will try another free CA.

            This requires port 80 and/or 443 to be reachable from the general Internet of course, as that’s where those CAs are.

            There’s an optional extra step of putting

            {
                email admin@emailprovider.com
            }
            

            (with your actual e-mail address substituted) at the top of the config file, so that Let’s Encrypt knows who you are and can notify you if there are any problems with your certificates. For example, if any of your certificates are about to expire without being renewed[1], or if they have to revoke certificates due to a bug on their side[2].

            As long as you don’t need wildcard certificates[3], it’s really that easy.


            [1]: I’ve only had this happen twice: once when I had removed a subdomain from the config (so Caddy no longer needed to renew it), and once when Caddy had “renewed” using the other CA due to network issues while contacting Let’s Encrypt.

            [2]: Caddy has code to automatically detect revoked certificates and renew or replace them before it becomes an issue, so you can likely ignore this kind of e-mail.

            [3]: Wildcard certificates are supported, but require an extra line of configuration and adding a module to support your DNS provider.
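
            As a sketch of that wildcard setup (assuming Cloudflare as the DNS provider, a Caddy binary built with the matching caddy-dns module, and an environment variable holding the API token; all of these names are illustrative):

            ```
            *.example.com {
              tls {
                dns cloudflare {env.CLOUDFLARE_API_TOKEN}
              }
              reverse_proxy backend:1234
            }
            ```

            The DNS module lets Caddy answer the ACME DNS-01 challenge, which is the only challenge type that can issue wildcard certificates.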

  • slimerancher@lemmy.world · 1 year ago

    Something is weird.

    I opened this post from the main page “Subscribed” listing, but the title showed “I can’t find any cannabis cultivation community” while the comments were the same. I initially thought I had opened the wrong post, but the comments were saying “Good work Ruud”, so I refreshed and that fixed the post’s title.

    Have you noticed the issue?

    • TeaHands@lemmy.world · 1 year ago

      It’s happened to me a few times as well (not just on this instance; I think it’s a bug in Lemmy itself). So far I’ve not found a reproducible pattern, though, so it’s a tricky one to bug-report effectively.

        • csos95@lemmy.world · 1 year ago

          I had something similar happen yesterday.

          I opened a thread about pokemon, browsed it for a bit, did some stuff in other tabs, and clicked back to the pokemon tab maybe an hour later to browse some more.

          The post had changed to one where a user was asking for relaxing game recommendations and it was loading in new comments that seemed to be from that post, but I could still see the comments that had already loaded from the pokemon post when I scrolled down.

          When I refreshed it changed back to the pokemon post and only showed comments from that.

    • Ruud@lemmy.world (OP, mod) · 1 year ago

      Hmm. I guess the delay in posting is not related to nginx. I now have the same conf as a server that doesn’t have this issue.

      • Acetamide@lemmy.world · 1 year ago

        I’m only familiar with the high-level Lemmy architecture, but could it be related to database indices being rebuilt?
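
        If that were the case, it should be visible while it happens: PostgreSQL 12+ has a progress view for index builds (a sketch, to be run in psql):

        ```sql
        -- Shows any CREATE INDEX / REINDEX currently in progress (empty if none)
        SELECT pid, phase, index_relid::regclass
        FROM pg_stat_progress_create_index;
        ```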