Hey all,

Anyone familiar with the current state of Raptor Lake's performance + efficiency cores under Linux? I'm specifically curious about how the kernel balances work across the two core types when running multiple containers (without pinned CPUs).
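
For reference, a rough sketch of how one might check which logical CPUs belong to which core type, assuming the cpu_core/cpu_atom sysfs nodes that recent hybrid Intel parts typically expose (treat the paths as an assumption, not a given):

  from pathlib import Path

  # These sysfs paths come from the per-core-type PMUs the kernel registers on
  # hybrid Intel CPUs; they may be missing on older kernels or non-hybrid parts.
  for label, path in [("P-cores", "/sys/devices/cpu_core/cpus"),
                      ("E-cores", "/sys/devices/cpu_atom/cpus")]:
      node = Path(path)
      print(label, node.read_text().strip() if node.exists() else "not reported")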

Thanks!

  • Onno (VK6FLAB) · 2 days ago

    A Docker container is a security framework. The process running “inside” the container is just a Linux process like any other.

    So, as I understand it, the performance will be identical to a process that is running “outside” a container, subject to the overhead associated with any security restrictions.
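
    One way to see this from the host: take the PID of a process started inside a container and look at it in /proc. The PID below is a placeholder; everything else is ordinary Linux process state.

      import os

      pid = 12345  # placeholder: e.g. a PID reported by `docker top <container>`

      # The only obvious "container" marker is the cgroup the process sits in.
      with open(f"/proc/{pid}/cgroup") as f:
          print(f.read())

      # Scheduling attributes are queried exactly as for any other process.
      print("allowed CPUs:", sorted(os.sched_getaffinity(pid)))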

    • jokro@feddit.org · 1 day ago

      If you want to use containers to protect the system from untrusted software, be careful. Containers and images are mostly an abstraction for running and managing applications, not a security boundary. Not saying it can't be done, it's just easy to end up with an insecure setup.
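
      A sketch of the kind of options worth reaching for if you go that route, using the Docker SDK for Python; the image name and limits are placeholders, and this is nowhere near a complete hardening checklist:

        import docker  # pip install docker

        client = docker.from_env()

        container = client.containers.run(
            "untrusted-image:latest",                 # placeholder image
            detach=True,
            read_only=True,                           # immutable root filesystem
            cap_drop=["ALL"],                         # drop all Linux capabilities
            security_opt=["no-new-privileges:true"],  # block privilege escalation
            pids_limit=256,                           # contain fork bombs
            mem_limit="512m",                         # cap memory use
        )
        print(container.id)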

    • fmstrat@lemmy.nowsci.comOP · 1 day ago

      Yes, that's the case. I'm more wondering about kernel support for CPU assignment (P-core vs. E-core scheduling) as it relates to those processes.
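
      For what it's worth, you can watch where the scheduler actually puts a container's process when nothing is pinned by sampling the "processor" field in /proc/<pid>/stat (the PID below is a placeholder):

        import time

        def last_cpu(pid: int) -> int:
            """CPU the process last ran on, per /proc/<pid>/stat."""
            with open(f"/proc/{pid}/stat") as f:
                data = f.read()
            # comm is wrapped in parentheses and may contain spaces,
            # so split after the closing ')' before counting fields.
            fields = data.rsplit(")", 1)[1].split()
            return int(fields[36])  # "processor" is the 39th field overall

        pid = 12345  # placeholder: a PID from `docker top <container>`
        for _ in range(5):
            print("running on CPU", last_cpu(pid))
            time.sleep(1)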

      • Onno (VK6FLAB) · 1 day ago

        I think what you’re looking for is “CPU affinity”, but that is not something I know anything about.
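
        For anyone curious, affinity can be set from userspace without any container machinery at all; a minimal sketch, where the P-core thread IDs are only a guess at a typical Raptor Lake layout:

          import os

          # Hypothetical split: P-core hyperthreads as CPUs 0-15, E-cores above that.
          # Check /sys/devices/cpu_core/cpus (or `lscpu --extended`) for the real layout.
          P_CORES = set(range(0, 16))

          print("before:", sorted(os.sched_getaffinity(0)))   # 0 = this process
          os.sched_setaffinity(0, P_CORES)                     # confine to P-core threads
          print("after: ", sorted(os.sched_getaffinity(0)))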

        In the 40+ years I’ve been playing with computers, I’ve always let the OS worry about where and when to run a process. Only rarely do I renice something that needs to run, but not at the expense of everything else.
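
        Renicing is also reachable from Python if that is ever handier than the shell; the PID and value below are placeholders, and raising priority (a negative nice) still needs root:

          import os

          pid = 12345  # placeholder PID of the background job
          print("nice before:", os.getpriority(os.PRIO_PROCESS, pid))
          os.setpriority(os.PRIO_PROCESS, pid, 10)   # be polite to everything else
          print("nice after: ", os.getpriority(os.PRIO_PROCESS, pid))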

        • fmstrat@lemmy.nowsci.comOP · 21 hours ago

          Agreed, I just want to make sure the kernel can handle scheduling across the two different core types. I know there was a time (fairly recently) when it couldn’t. Others have said kernel 6.x is the key.
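
          A quick way to sanity-check a given box, assuming the ITMT sysctl is present (it only shows up on x86 kernels built with that support):

            import platform
            from pathlib import Path

            # Kernel version plus the Intel Turbo Boost Max / hybrid priority knob.
            print("kernel:", platform.release())

            itmt = Path("/proc/sys/kernel/sched_itmt_enabled")
            print("sched_itmt_enabled:",
                  itmt.read_text().strip() if itmt.exists() else "not present")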