The post lagging seems to be front end only. So far I’ve had no issue simply refreshing the page when it hangs.
The upvote jumping is caused by issues with the websocket implementation. As far as I've heard, they're going to get rid of websockets completely in the next version and have static page rendering instead.
It’s a lowercase l followed by a lowercase w as far as I can tell.
Getting rid of websockets would help a lot. But you still might not be able to run standalone nodes. Due to the federated nature of Lemmy, you might still need a cluster of nodes with a master and slaves, such that only one node at a time handles federation events from other servers. I don't know enough about the protocol to say whether that's the case or not. Just as an example, I'm thinking of a situation where one node receives a federation event for a post, then a different node receives an event with some sort of change to that post and processes it faster than the first node. That event would then fail because the post hasn't been created yet.
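To make the race concrete, here's a minimal sketch of the scenario above: an "update" event for a post gets processed before the "create" event for the same post, and a naive handler would fail. One common workaround is to buffer events that reference unknown objects and replay them once the object exists. All names here are made up for illustration; this is not Lemmy's actual API or data model.

```python
posts = {}    # post_id -> post data
pending = {}  # post_id -> events waiting for the post to be created

def apply_event(event):
    kind, post_id = event["kind"], event["post_id"]
    if kind == "create":
        posts[post_id] = {"title": event["title"]}
        # replay any updates that arrived before the create
        for queued in pending.pop(post_id, []):
            apply_event(queued)
    elif kind == "update":
        if post_id not in posts:
            # post not created yet: buffer instead of failing
            pending.setdefault(post_id, []).append(event)
        else:
            posts[post_id]["title"] = event["title"]

# Events arriving out of order, as in the scenario above:
apply_event({"kind": "update", "post_id": 1, "title": "edited"})
apply_event({"kind": "create", "post_id": 1, "title": "original"})
print(posts[1]["title"])  # the buffered edit is applied after the create
```

With a single node this buffering is easy; the hard part in a cluster is that `posts` and `pending` would have to be shared state across all nodes, which is exactly why a master/slave setup gets suggested.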
Yeah, I'm in the same boat. My SQL skills aren't impressive either, since other people at work handle optimization. I haven't used Rust either (yet), so I can't really contribute there either. Though I'm considering starting work on a cross-platform mobile app. I haven't worked with mobile apps for a good six or seven years, so I feel like it's high time I get back up to speed. (But knowing me, I'll end up making something half finished and then start procrastinating.)
Yeah. But horizontal scaling (well, horizontal scaling in a system like this, where you'd need clustering so the instances talk to each other) is hard. And I think a lot of other things need to be polished, added, and worked on before that. It would probably also take somebody with clustering knowledge starting to contribute. I think step 1 is that the dev team needs more help properly tuning the database use. The database is very inefficient, and they lack the skill to improve it:
We are in desperate need of SQL experts, as my SQL skills are very mediocre. https://github.com/LemmyNet/lemmy/issues/2877
So getting help improving the database is probably the #1 thing that can be done to deal with the scaling problem.
Thing is, Lemmy doesn't support clustering/horizontal scaling, so there are limits to how much scaling up you can do. You can beef it up with a database cluster, add a separate reverse proxy, and increase the specs of the hardware Lemmy is running on (but hardware can't be expanded limitlessly), but that's about it. Once you hit the limit of what a single instance of the Lemmy software can handle, you can't scale any more. Pretty sure you'd hit that limit long before reaching thousands of dollars.
I mean it’s the same on reddit, just that they have to have slightly different names.
I'm pretty sure that duplicates will sort themselves out organically over time.
I disagree with your statement that centralization is almost a law of the universe. Anything big online these days is decentralized; it's just done transparently through CDNs, so you as a user don't notice, as opposed to the fediverse, where the decentralization is visible to the general public.
A very big forum whose moderator team I was on about fifteen or so years ago had a rather neat solution: you could freely upvote, but you could only downvote 5 comments every 24 hours. It actually worked rather well.
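The throttle described above is simple to implement as a rolling-window rate limit. Here's a hedged sketch, with made-up names and in-memory storage purely for illustration: upvotes stay unlimited, but each user gets at most 5 downvotes per rolling 24-hour window.

```python
import time

DOWNVOTE_LIMIT = 5
WINDOW_SECONDS = 24 * 60 * 60

downvote_log = {}  # user_id -> timestamps of that user's recent downvotes

def try_downvote(user_id, now=None):
    """Return True if the downvote is allowed, False if the limit is hit."""
    now = time.time() if now is None else now
    # keep only downvotes inside the rolling 24-hour window
    recent = [t for t in downvote_log.get(user_id, []) if now - t < WINDOW_SECONDS]
    if len(recent) >= DOWNVOTE_LIMIT:
        downvote_log[user_id] = recent
        return False  # limit reached; downvote rejected
    recent.append(now)
    downvote_log[user_id] = recent
    return True

# First five downvotes succeed; the sixth within the same window is rejected:
results = [try_downvote("alice", now=1000 + i) for i in range(6)]
print(results)  # [True, True, True, True, True, False]
```

Because old timestamps fall out of the window as time passes, the budget replenishes continuously rather than resetting at midnight, which is what makes a cap like this feel fair in practice.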
It probably boils down to what costs the most, making a universal model for everywhere, or making a European model and a separate “screw you” model for the rest of the world.