• 0 Posts
  • 39 Comments
Joined 1 year ago
Cake day: July 7th, 2023

  • In general, on bare metal, I mount below /mnt. For a long time, I just mounted data in from pre-configured host mounts. But I use Kubernetes, and it lets you directly specify an NFS mount, so I eventually migrated everything to that as I made other updates. I don’t think it’s horrible to mount from the host, but if docker-compose supports directly defining an NFS volume, that’s one less thing to set up if you need to re-provision your docker host.

    (quick edit) I don’t think docker compose reads and re-reads compose files. They’re read when you invoke docker compose but that’s it. So…

    If you’re simply invoking docker compose to interact with things, then I’d say store the compose files wherever makes the most sense for your process. Maybe think about setting up a specific directory on your NFS share and mounting that on your docker host(s). I would also consider version controlling your compose files. If you’re concerned about secrets, store them in encrypted env files; something like SOPS can help with this.

    As long as the user invoking docker compose can read the compose files, you’re good. When it comes to mounting data into containers from NFS… yes, permissions will matter, and it might be a pain, since it depends on how flexible the container you’re using is in terms of user and filesystem permissions. There’s a rough compose sketch below that touches on both the NFS volume and the permissions side.
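    To make that concrete, here’s a minimal compose sketch, assuming an NFS server at 192.168.1.10 exporting /export/appdata and a container that’s happy running as UID/GID 1000; all of those values are placeholders, so adjust them to your environment.

    ```yaml
    services:
      app:
        image: nginx:alpine        # stand-in for whatever you actually run
        user: "1000:1000"          # match a UID/GID that can read/write the export
        volumes:
          - appdata:/data

    volumes:
      appdata:
        driver: local              # the built-in local driver can mount NFS directly
        driver_opts:
          type: nfs
          o: "addr=192.168.1.10,nfsvers=4,rw"
          device: ":/export/appdata"
    ```

    On the secrets side, SOPS is driven by a .sops.yaml at the root of the repo; a rule along these lines (the age recipient is a placeholder) lets you commit encrypted env files next to the compose files and decrypt them on the host before invoking docker compose:

    ```yaml
    creation_rules:
      - path_regex: .*\.enc\.env$
        age: "age1exampleexampleexampleexampleexampleexampleexampleexampl"
    ```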



  • In general, container root filesystems and the images backing them will not function on NFS. When deploying containers, you should be mounting data volumes into the containers rather than storing things on the container root filesystems. Hopefully you are already doing that; otherwise, you’re going to need to manually copy data out of the containers. Personally, if all you’re talking about is 32 gigs max, I would just stop all of the containers, copy everything to the new NFS locations, and then re-create the containers pointing at those locations.

    All this said though, some applications really don’t like their data stored on NFS. I know Plex really doesn’t function well when its database is on NFS, but the Plex media directories are fine to host from NFS (there’s a compose sketch of that split below).
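    If it helps, here’s roughly what that split looks like in compose form. The image name, paths, and NFS export are assumptions/placeholders rather than a drop-in config; the point is just that the database/config volume stays on local disk while the media volume comes from NFS.

    ```yaml
    services:
      plex:
        image: plexinc/pms-docker       # or whichever Plex image you already use
        volumes:
          - /opt/plex/config:/config    # Plex database/config on fast local disk
          - media:/data/media           # media library served from NFS

    volumes:
      media:
        driver: local
        driver_opts:
          type: nfs
          o: "addr=192.168.1.10,nfsvers=4,ro"
          device: ":/export/media"
    ```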




  • In a centralized management scenario, the central controlling service needs the ability to control everything registered with it. So, if the central controlling service is compromised, it is very likely that everything it controlled is also compromised. There are ways to mitigate this at the application level, like role-based and group-based access controls. But, if the service itself is compromised rather than an individual’s credentials, then the application protections can likely all be bypassed. You can mitigate this a bit by giving each tenant their own deployment of the controlling service, with network isolation between tenants. But, even that is still not fool-proof.

    Fundamentally, security is not solved by one golden thing. You need layers of protection. If one layer is compromised, others are hopefully still safe.


  • If we boil this article down to its most basic point, it actually has nothing to do with virtualization. The true issue here is centralized infra/application management. The article references two ESXi CVEs that deal with compromised management interfaces.

    Imagine a scenario where we avoid virtualization by running Kubernetes on bare-metal nodes, with each Pod getting exclusive assignment to a Node. If a threat actor can exploit a vulnerability to access the Kubernetes management interface, they can immediately compromise everything within that Kubernetes cluster. We don’t even need a container management platform: imagine a collection of bare-metal nodes managed by Ansible via Ansible Automation Platform (AAP). If a threat actor has access to AAP and exploits it, they can then compromise everything managed by that AAP instance.

    The author fundamentally misattributes the issue to virtualization. The issue is centralized management, and there are still significant benefits to using higher-order centralized management solutions.





  • This isn’t about social platforms or using the newest-hottest tech. It’s about following industry-standard practices. You act like source control is such a pain in the ass and some huge burden, and that I just don’t understand. Getting started with git is simple, and setting up an account with a repo host is a one-time thing. I find it hard to believe that you don’t already have ssh keys set up, too.

    What I find more controversial and concerning is your ho-hum opinion on automated testing, and your belief that “most software doesn’t do it”. You’re writing software that you expect people to not only run on their infra, but also expose to the public internet. Not only that, it also needs to protect the traffic between the server on public infra and the client on private infra. There is a much higher expectation of good practices being in place, and it is clear that you are willingly disregarding basic industry-standard practices.



  • Git was literally written by Linus to manage the source of the kernel. Sure, patches are proposed via mailing list, but the actual source is hosted and managed via git. It is literally the gold standard, and source control is a foundational piece of software development. The same goes for testing: not just unit tests, but functional testing too. You absolutely should not be putting off testing.


  • Gotta be honest, downloading security-related software from a random drive sends off sketchy vibes. Fundamentally, it’s no different than a random untrusted git repo. But I really would suggest using some source control rather than trying to roll your own with diff archives.

    Likewise, I would also suggest adding some unit and functional tests. Not only would that help maintain software quality, it would also build confidence in the folks using the software you are releasing.


  • After briefly reading about systemd’s tmpfiles.d, I have to ask why it was used to create home directories in the first place. The documentation I read said it was for volatile files. Is a user’s home directory considered volatile? Was this something the user set up, or the distro they were using? If it was the distro, this seems like a lot of ire directed at someone who really doesn’t deserve it.


  • I have a similar issue when I am visiting my parents. Despite having 30 Mbps upload at my home, I cannot get anywhere near that when trying to access things from my parents’ house. Not just Plex, either; I host a number of services. I’ve tested their wifi and download speeds, and everything seems fine. I can also stream my Plex just fine from my friends’ places. I’ve chalked it up to poor (or throttled) peering between my parents’ ISP and mine. I’ve been meaning to test it through a VPN next time I go home.



  • I somewhat wonder if CloudFlare is issuing two different certs: an “internal” cert your servers use to serve traffic to CloudFlare, signed by a private CA that is only valid for CloudFlare’s internal services. CloudFlare’s tunnel service would validate against that internal CA and then serve public internet traffic using a cert signed by an actual public CA.

    Honestly though, I kinda think you should just go with serving everything entirely externally. Either you trust CloudFlare’s tunnels, or you don’t. If you don’t trust CloudFlare to protect your services, you shouldn’t be using it at all.



  • I’m not saying they were purposefully cheating in this or any tournament, and I agree cheating in that context would be totally obvious. But it is feasible that a pro worried about their stats might be willing to cheat in lower-stakes situations outside of tournaments.

    What I also don’t understand is: if this hacker had lobby-wide access, why were only these two players compromised? Why wouldn’t the hacker just hit the entire lobby? Clearly this hacker loves the clout, and forcing cheats on the entire lobby would certainly be more impressive.

    PS. This is all blatant speculation, from all sides. No one other than the hacker, and hopefully Apex, really knows what happened. I am mostly frustrated by ACPD’s immediate fear-mongering about an RCE in EAC or Apex based on no concrete evidence.