Also at k3can@mastodon.hams.social
Under the store page there should be a “steam replay” button if you scroll down a bit. It will only show the OS breakdown if you use more than one OS, though. No pie chart if you only game on Linux. 😕
AMD has been great on Linux.
I’m curious about Intel’s cards, though. They seem to be offering some solid competition now, but I haven’t heard anything about their Linux support.
Same. I don’t remember paying for it, but I know I donated through the Reddit version, so it would make sense that I would have also purchased the Lemmy version if I was given the option.
I just like using Boost because I can have the exact same interface between both Reddit and Lemmy. The Reddit version was removed from the Play store, though, so I needed to side-load it when I switched to this new phone. Still works, though.
You’re not a “target” as much as you are “a thing that exists.” These aren’t targeted attacks.
That said, you can look into adding some additional measures to your webserver if you haven’t already, like dropping connections if a client requests a location they shouldn’t, like trying to access /admin, /…/…, /.env, and so on.
On nginx, it could be something like:
location ~ ^/(\.|admin|login) {
    return 444;
}
Of course, that should be modified to match whatever application you’re actually using.
Self hosted from my homelab on an nginx server. I also self host my blog, which has some info on my whole setup. My blog uses some basic blogging software, though, rather than being hand-made.
The “side menu thingy” is achieved through HTML “frames”. It’s an element of HTML that’s pretty much extinct nowadays, but was all the rage when I built my very first page back in the day.
Nice. I wrote mine “by hand”, too. No CSS, just raw HTML. I think it’s a more personal experience than just using whatever random template some all-in-one web hosting company offers.
A lot of how you set up your system is just going to depend on how you want to set it up.
I run podman (like an improved version of docker) in a single LXC container for applications that are primarily packaged as docker apps. I think I have 4 or 5 applications running on that LXC.
For things that are distributed via apt, git repo, etc, I’ll either create a new LXC or use an existing LXC if it’s related to other services I’m running. For example, crowdsec is run in the same machine as nginx since those two work together and I’ll always want them both running at the same time, so there’s no reason to separate them.
I have mariadb running in its own LXC so that it can follow a different (more frequent) backup schedule than the mostly static applications that interact with it.
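As a sketch of what that more frequent schedule can look like inside the database LXC (user, path, and timing are placeholders to adjust), a cron entry driving mariadb-dump alongside the slower Proxmox-level backups:

```shell
# /etc/cron.d/mariadb-dump -- hypothetical example
# Dump all databases every 6 hours; the LXC itself gets backed up less often.
0 */6 * * * root mariadb-dump --all-databases | gzip > /var/backups/mariadb-$(date +\%H).sql.gz
```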
Anything that needs to interact directly with hardware, like Home Assistant, or that I want kernel separation for, gets a full-fledged VM instead of a container.
It’s all about how you want to use it.
I use podman almost exclusively at this point. I like having the rootless containers and secrets management. If you’re on Debian, though, I strongly suggest pulling podman from Trixie. The version in Bookworm is very out of date and there have been a lot of fixes since then.
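For instance, secrets keep credentials out of the image and out of your compose files. A quick sketch (the secret name and container are just examples):

```shell
# Store a password once; podman keeps it outside the image and environment
printf 'hunter2' | podman secret create db_password -

# Mount it into a container; it shows up at /run/secrets/db_password
podman run --rm --secret db_password docker.io/library/alpine \
    cat /run/secrets/db_password
```

Rootless containers work out of the box on Trixie’s version; on Bookworm you’re more likely to hit the old bugs around user namespaces.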
For what it’s worth, though, you can proxy other services, like Gemini or gopher, through the same proxy for simplicity’s sake.
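For plain TCP protocols like Gemini (port 1965) or gopher (port 70), that means nginx’s stream module rather than an http server block. Roughly (the backend address is a placeholder, and nginx must be built with ngx_stream_module):

```nginx
# Goes at the top level of nginx.conf, outside the http {} block.
stream {
    server {
        listen 1965;                   # Gemini
        proxy_pass 192.168.1.10:1965;  # placeholder backend; TLS is passed through untouched
    }
}
```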
I self host.
I use nginx as a reverse proxy with crowdsec. The backends are nginx and mariadb. Everything is running on Debian VMs or LXCs with apparmor profiles and it’s all isolated to an “untrusted” VLAN.
It’s obviously still “safer” to have someone else host your stuff, like a VPS or Github Pages, etc, but I enjoy selfhosting and I feel like I’ve mitigated most of the risk.
I’d imagine that if your job is making YouTube videos, portainer and other graphical abstraction layers probably make more visually interesting videos than just watching someone type out a bunch of commands.
If you’re going to be playing with custom locations and such, it might be worth using nginx directly instead of through the limitations of NPM.
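Plain nginx gives you the full location syntax that NPM’s custom-location UI only partially exposes. A small example (paths are made up):

```nginx
# Serve a static subdirectory with its own root and cache policy,
# something that's awkward to express through NPM's form fields:
location ^~ /static/ {
    root /srv/www;      # files live under /srv/www/static/
    expires 30d;
}
```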
I know I’m a bit late to the conversation, so I don’t know if this is still helpful… But I have a camera with “AI Detection” built into it and it appears to send alerts via its ONVIF connection. I’ve disabled motion and other detectors on my NVR (AgentNVR) and instead configured it to just wait for an alert from the camera itself to start recording. It’s been working quite well.
My initial plan was to use a coral TPU and frigate, but the Coral/Gasket drivers appear to be pretty old and I couldn’t get them to work properly, myself.
Convenience. Unless you live right near the border, it’s probably faster/easier to shop in your own state than drive all the way to another.
But if you do live near the border of a state without a sales tax, then it’s pretty common to shop in the neighboring state, especially for larger purchases.
The US doesn’t have a national sales tax, so it depends whether the individual state imposes a tax or not.
I’ve also been running nginx in an unprivileged LXC container. I haven’t used fail2ban, specifically, but crowdsec has been working without issue.
You can mostly just treat an LXC like a normal VM.
I mostly learn from mistakes, and since homelabs are all about learning, there are bound to be mistakes.
I’ve borked my network multiple times, broken VMs, and redesigned things from the ground up, again.
Big lesson is to have backups. Lol
On a pi, specifically?
Mine is currently running Mailrise and serving as a qdevice for Proxmox. It used to run nginx as a reverse proxy, but I moved that to a different machine. I had a second pi specifically for sharing USB devices over the network, but I wasn’t using it very much so it’s currently not in use.
If you’re looking for general ideas, I think a pi would make a good appliance for ddclient, Homepage/Dashy, an SSH/VPN jumpbox, UPS monitoring, or a notification platform. Basically, any set-and-forget service that you want to keep running 24/7.
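The ddclient case is about as set-and-forget as it gets. A minimal config sketch, assuming Cloudflare as the DNS provider (domain, login, and token are all placeholders):

```shell
# /etc/ddclient.conf -- hypothetical example
protocol=cloudflare
zone=example.com
login=you@example.com
password=your-api-token
use=web            # discover the public IP via a web service
example.com
```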
I know you said you decided against it, but perhaps reconsider USB?
I was facing the same dilemma a few months ago, and ultimately decided that trying to break out those internal connections wasn’t worth it. The problem with these tiny PCs is that they are not designed with arrays of drives in mind. There’s typically not enough room in the case to properly add an additional drive, so you end up running the SATA cable through a hole in the case and using an external drive and power supply anyway.
USB, on the other hand, is intended to connect to an external device. The connectors themselves are more robust and they can even supply power.
I use my external drive for data I don’t have to access constantly, like templates and backups. 90% of the time it’s just sitting in standby. If you need to access a lot of data constantly, you might start to notice the slower USB speeds; if you can segment your data, keeping your “working” files on the internal drive and just use the external for storage, you probably won’t notice the USB at all. It’s certainly not the perfect solution, but with your particular restrictions, it might be the better tool for the job.
The other option would be Network Attached Storage, essentially a low power computer that just exists to allow other computers to access its storage. You can probably find an old, cheap desktop PC for sale locally, likely for less than it would cost you to bring out those internal ports and buy a drive enclosure and power supply.
Good to know.
I have seen damage to older monochrome displays before (from the 90s) but I don’t know how they differed from what ICOM used in the 7100. Obviously things had improved in those 20 years. But since monochrome LCDs are somewhat rare in consumer goods nowadays, I don’t have much experience with more modern variants.