toprerules 2 days ago

Just want to say everyone should be using podman. Its architecture is far saner and integrates with Linux at a much more fundamental level: containers are regular processes that can be started via systemd, etc., instead of children of a root daemon (run it as root if you need privileged containers).

They've also built an incredible ecosystem around podman itself. Red Hat has been absolutely cooking with containers recently.

  • mati365 2 days ago

    Systemd .container services (Quadlet) are excellent. I used them to set up multiple smaller sites without any issues. Containers work just like regular systemd services. I created a small Ansible template to demonstrate how simple yet powerful this solution is.

    GH: https://github.com/Mati365/hetzner-podman-bunjs-deploy
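
    For flavor, a minimal Quadlet unit might look something like this (site name and image are made up; it would live at e.g. ~/.config/containers/systemd/mysite.container):

      [Unit]
      Description=Example site container

      [Container]
      Image=docker.io/library/nginx:latest
      PublishPort=8080:80

      [Service]
      Restart=always

      [Install]
      WantedBy=default.target

    After a `systemctl --user daemon-reload`, Quadlet generates mysite.service and the container behaves like any other systemd unit.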

  • edwinjones a day ago

    I agree. I have been using it as a drop-in Docker replacement (alongside podman compose, via aliases) for years now, and I often forget I am not using Docker. The only time it bit me recently was when some scripts were looking for containers with Docker-specific labels and I had to figure out why they failed only for me.

  • koakuma-chan a day ago

    What are the benefits of using Podman and not Docker?

    • dharmab 13 hours ago

      From a technical perspective the big two are:

      - Podman is significantly simpler than Docker; notably, Podman doesn't need to run a background process, so your containers run directly as separate processes.

      - Podman avoids some long-standing security design weaknesses in Docker ("rootless"). Docker rootless _is_ a thing but has compatibility limits.
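
      A quick way to see the rootless model in action (a sketch; assumes podman is installed and can pull images):

        # runs as your user; no daemon is contacted
        podman run --rm docker.io/library/alpine id
        # inside the container you appear to be root, but that UID is
        # mapped to your unprivileged user on the host via user namespaces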

    • powerhugs 20 hours ago

      Rootless, daemonless, hardware passthrough, no Docker Inc pulling the rug, etc etc

    • krferriter a day ago

      Licensing. No root daemon.

      • koakuma-chan a day ago

        Docker Engine is Apache 2.0, is this not a good license? Docker has a rootless mode too.

        • throwaway81523 a day ago

          When you say "Docker Engine" that suggests other parts of Docker are licensed differently (I haven't looked into it). I'd say you have to compare the whole ecosystem and not just a single component either way.

          • koakuma-chan a day ago

            I said "Docker Engine" specifically because it is "Docker Engine" that is the counterpart of Podman, and therefore it is the only component that matters. The discussion here is "Docker" vs "Podman," but "Docker Engine" is what we really mean when saying "Docker."

          • cmiles74 19 hours ago

            Docker Desktop has a much more restrictive license. Unfortunately, on macOS and Windows, the "Docker Desktop" product is often referred to as simply "Docker".

            FWIW, Podman has an open source alternative to Docker Desktop as well.

      • pydry a day ago

        The "no root daemon" pitch got replaced with "but if you want a replacement for docker compose you ought to be using systemd (Quadlets)".

        Meh

        • officialchicken a day ago

          Incorrect: avoiding a root daemon is essential for isolation and security.

          • pydry a day ago

            I'm fully on board with the idea that root daemons shouldn't be necessary; I just don't want systemd to become a dependency for yet another thing it shouldn't be a dependency for.

            • powerhugs 20 hours ago

              Huh, that's another uninformed take.

              systemd is, at its core, an app for running services, such as containers.

              You should read up on podman and systemd before making up more arguments.

              • windexh8er 19 hours ago

                The point is that RedHat went on a tirade for years telling everyone: "Docker bad, root! Podman good, no root! Docker bad, daemon! Podman good, no daemon!".

                And then here comes Quadlets and the systemd requirements. Irony at its finest! The reality is Podman is good software if you've locked yourself into a corner with Dan Walsh and RHEL. In that case, enjoy.

                For everyone else, the OSS ecosystem around Docker actually has less licensing overhead and fewer restrictions, in the long run, than dealing with IBM/Red Hat. IMO, that is.

                • jpeeler 18 hours ago

                  You can run Quadlets under the systemd user session just as well.
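
                  A sketch of the user-session flow (the unit name is hypothetical):

                    mkdir -p ~/.config/containers/systemd
                    cp mysite.container ~/.config/containers/systemd/
                    systemctl --user daemon-reload
                    systemctl --user start mysite.service

                  No root daemon anywhere: Quadlet generates mysite.service from the .container file and it runs entirely in your user session.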

                • KAMSPioneer 19 hours ago

                  But...you don't need systemd or Quadlets to run Podman, it's just convenient. You can also use podman-compose (I personally don't, but a coworker does and it's reasonable).

                  But yeah I already use a distro with systemd (most folks do, I think), so for me, using Podman with systemd doesn't add a root daemon, it reuses an existing one (again, for most Linux distros/users).

                  • windexh8er 19 hours ago

                    Exactly my point.

                    Today I can run docker rootless and in that case can leverage compose in the same manner. Is it the default? No, you've got me there.

                    systemd runs as root. It's just ironic given all the hand-waving over the years. And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation, and that's the selling point.

                    I've used Podman. It's fine. But the arguments of the past aren't as sharp as they originally were. I believe Docker improved because of Podman, so there's that. But to discount the reality of the doublespeak by paid-for representatives from Red Hat/IBM is, again, ironic.

                    • amluto 18 hours ago

                      > And Docker, and all it's tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation which is the selling point

                      I would argue that Docker’s tooling is not well thought out, and that’s putting it mildly. I can name many things I do not like about it, and I struggle to find things I like about its tooling.

                      Podman copied it, which honestly makes me not love podman so much. Podman has quite poor documentation, and it doesn’t even seem to try to build actually good designs for tooling.

                      • windexh8er 16 hours ago

                        Curious what your point is?

                        > I can name many things I do not like about it, and I struggle to find things I like about its tooling.

                        Please share.

                        • amluto 16 hours ago

                          Off the top of my head:

                          FROM [foo]: [foo] is a reference that is generally not namespaced (ubuntu is relative to some registry, but it doesn't say which one) and it's expected to be mutable (ubuntu:latest today is not the same as ubuntu:latest tomorrow).

                          There are no lockfiles to pin and commit dependency versions.

                          Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.

                          Mostly resulting from all of the above, build layer caching is basically a YOLO situation. I've had a build result in literally more than a year out-of-date dependencies because I built on a system that hadn't done that particular build for a while, had a layer cached (by name!), and I forgot to specify a TTL when I ran the build. But, of course, there is no correct TTL to specify.
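
                          (Yes, you can opt in to digest pinning and cache busting - but they're opt-in, which is exactly my point about defaults. A sketch, with a placeholder digest:)

                            # a digest reference can never silently change
                            FROM docker.io/library/ubuntu@sha256:<placeholder>

                            # and cache busting is something you must remember to ask for
                            docker build --pull --no-cache -t myimage .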

                          Every lesson that anyone in the history of computing has ever learned about declarative or pure programming has been completely forgotten by the build systems.

                          Why on Earth does copying in data require spinning up a container?

                          Moving on from builds:

                          Containers are read-write by default, not read-only.

                          Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.

                          The tooling around what constitutes a running container is, to me, rather unpleasant. I can't make a named group of services, restart them, possibly change some of the parts that make them up, and keep the same name in a pleasant manner. I can 'compose down' and 'compose up' them and hope I get a good state. Sometimes it works. And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.

                          I'm sure I could go on.

                          • windexh8er 14 hours ago

                            > Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.

                            I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.

                            > Why on Earth does copying in data require spinning up a container?

                            It doesn't.

                            > Containers are read-write by default, not read-only.

                            I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.

                            > Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.

                            Almost all of this is wrong.

                            > And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.

                            What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.

                            Pretty much everything you've outlined is, as I see it, a misunderstanding of what containers aim to solve and how they're operationalized. If all of these things were true container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.

                            • amluto 11 hours ago

                              >> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.

                              > I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.

                              They're not so different. An environment is just big software. People have come up with schemes for building large environments for decades, e.g. rpmbuild, nix, Gentoo, whatever Debian's build system is called, etc. And, as far as I know, all of these have each layer explicitly declare what it is mutating; all of them track the input dependencies for each layer; and most or all of them block network access in build steps; some of them try to make layer builds explicitly reproducible. And software build systems (make, waf, npm, etc) have rather similar properties. And then there's Docker, which does none of these.

                              > > Containers are read-write by default, not read-only.

                              > I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.

                              Right. The issue is that the default is wrong. In a container:

                                  $ echo foo >the_wrong_path
                              
                              works, by default, using COW. No error. And the result is even kind of persistent -- it lasts until the "container" goes away, which can often mean "exactly until you try to update your image". And then you lose data.

                              > > Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.

                              > Almost all of this is wrong.

                              I would really like to believe you. I would love for Docker to work better, and I tried to believe you, and I looked up best practices from the horse's mouth:

                              https://docs.docker.com/get-started/docker-concepts/running-...

                              and

                              https://docs.docker.com/get-started/docker-concepts/running-...

                              Look, in every programming language and environment I've ever used, even assembly, an interface has a name. If I write a function, it looks like this:

                                  void do_thing();
                              
                              If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.

                              At least the docs try to remind people that the whole mechanism is "insecure by default".

                              I even tried asking a fancy LLM how to export a port by name, and LLM (as expected) went into full obsequious mode, told me it's possible, gave me examples that don't do it, told me that Docker Compose can do it, and finally admitted the actual answer: "However, it's important to note that the OCI image specification itself (like in a Dockerfile) doesn't have a direct mechanism for naming ports."

                              > > And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.

                              > What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.

                              I'd like to have some way for a developer to declare that their software can be run with the 'app' container and a 'mysql' container and you connect them like so. Or even that it's just one container image and it needs the following volumes bound in. And you could actually wire them up with different orchestration systems, and the systems could all read that metadata and help do the right thing. But no, no such metadata exists in an orchestration-system-agnostic way.

                              > If all of these things were true container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.

                              Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.

                              • windexh8er 7 hours ago

                                > They're not so different. An environment is just big software.

                                Containers are not a software development platform, but a platform that can be used in the build phase of software development. They are very different. Docker is not inherently a software development platform because it does not provide the tools required to write, compile, or debug code. Instead, Docker is a platform that enables packaging applications and their dependencies into lightweight, portable containers. These containers can be used in various stages of the software development lifecycle but are not the development environment themselves. This is not just "big software" - which makes absolutely no sense.

                                > Right. The issue is that the default is wrong. In a container: $ echo foo >the_wrong_path

                                Can you do incorrect things in software development? Yes. Can you do incorrect things in containers? Yes. You're doing it wrong. If you are writing to a part of the filesystem that is not mounted outside of the container, yes, you will lose your data. Everyone using containers knows this and there are plenty of ways around it. I guess in your case you just always need to export the root of the filesystem so you don't footgun yourself? I mean, c'mon man. It sounds like you'd like to live in a software bubble to protect you from yourself at this point.

                                > If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.

                                You clearly don't understand Docker networking. What you're describing is the default bridge. There are other ways to use networking in Docker outside of the default. In your case, again, maybe just run your containers in "host" networking mode because, again, you're too ignorant to read and understand the documentation of why you have to deal with a port mapping in a container that's sitting behind a bridge network. Again you're making up arguments and literally have no clue what you're talking about.

                                > Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.

                                OK? Grab a dictionary - read the definition for the word: "subjective", enjoy!

                    • bdhcuidbebe 18 hours ago

                      systemd runs as root, yes, but services started by systemd don't unless you instruct them to.

                      that means your podman containers don't run as root unless you want them to.

                      mine runs as user services

                      • windexh8er 16 hours ago

                        I don't see your point. This is exactly how Docker works. Containers instantiated from the Docker daemon don't need to run as root. But you can... Just like your containers started from systemd (Quadlet).

                        I run all my containers, when using Docker, as non-root. So where is the upside other than where your trust lies?

                  • pydry 18 hours ago

                    Have you used podman compose? It's shit.

                    When I bring this up online the answer is invariably "well use quadlets then" (i.e. systemd).

                    >systemd doesn't add a root daemon, it reuses an existing one

                    lol, the same could be said of every docker container I've ever run....

              • pydry 18 hours ago

                Quadlets is systemd. Red Hat declared it the recommended/blessed way of running containers. podman compose is treated like the bastard stepchild (presumably because it doesn't have systemd as a dependency).

                Please try to understand the podman ecosystem before lashing out.

            • KAMSPioneer 19 hours ago

              Podman runs on FreeBSD without systemd, so there you go.

              • pydry 18 hours ago

                yeah, it runs fine without systemd, until you need a docker compose substitute and then you get told to use quadlets (systemd), podman compose (neglected and broken as fuck), docker compose (with a daemon! also not totally compatible), or even kubernetes...

  • curt15 21 hours ago

    I understand that k8s uses containerd or similar daemons to run containers. Do podman's criticisms of docker also apply to k8s?

    • dharmab 6 minutes ago

      No. K8s runs containers in a way very similar to Podman. Podman is like a middle point between the simplicity of containerd and the feature set of the Kubernetes Kubelet.

  • user3939382 19 hours ago

    Until it’s supported by AWS ECS it’s not relevant for me since that’s what my container builds are for.

    • dharmab 13 hours ago

      Images built by Podman can be run by Docker and vice versa.

Nezteb a day ago

Protip: if you want to use Podman (or Podman Desktop) with Docker Compose compatibility, you'll have a better time installing podman-compose [1] and setting up your env like so:

  alias docker=podman
  
  # If you want to still use Docker Compose
  # export PODMAN_COMPOSE_PROVIDER=docker-compose
  
  # On macOS: `brew install podman-compose`
  export PODMAN_COMPOSE_PROVIDER=podman-compose
  export PODMAN_COMPOSE_WARNING_LOGS=false
Most of my initial issues transitioning to Podman were actually just Docker (and Docker Desktop) issues.

Quadlets are great and Podman has a tool called podlet [2] for converting Docker Compose files to Quadlets.

I prefer using a tool like kompose [3] to turn my Docker Compose files into Kubernetes manifests. Then I can use Podman's Kubernetes integration (with some tweaks for port forwarding [4]) to replace Docker Compose altogether!

[1] https://github.com/containers/podman-compose

[2] https://github.com/containers/podlet

[3] https://github.com/kubernetes/kompose

[4] https://kompose.io/user-guide/#komposecontrollerportexpose

  • pydry a day ago

    podman compose is really bad

    • akvadrako 8 hours ago

      Indeed, it has a lot of limitations. It's better to use docker compose with a podman socket.

    • SomeoneOnTheWeb a day ago

      How so? What problems do you have with it?

      • DrBenCarson 17 hours ago

        It never works on the first go; I'm constantly debugging it and things keep breaking.

      • pydry 21 hours ago

        Missing features, lots of debugging spam which can't be turned off, doesn't properly adhere to the compose spec...

lbhdc 2 days ago

Last year I transitioned all of my personal projects to use podman. The biggest surface area was converting CI to use podman to build my Dockerfiles, but I also changed out other tooling to use it (like having kind use it instead of Docker).

For the most part this worked without issue. The only snag I ran into was my CI provider can't use oci formatted images. Podman lets you select the format of image to build, so I was able to work around this using the `--format=docker` flag.
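
The workaround in CI looks roughly like this (the image name is made up):

  # emit a Docker-format image instead of the default OCI format
  podman build --format=docker -t registry.example.com/myapp:latest .
  podman push registry.example.com/myapp:latest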

  • mariusor 2 days ago

    Same here. I migrated maybe 5-6 projects from docker to buildah and podman about 2 years ago and never looked back.

    Unlike other posts I've seen around, I haven't really encountered issues with CI or local handling of images - though I am using the most bare-bones of CIs, SourceHut. And I actually feel better about using shell scripts to build the images than about a Dockerfile.

    • lbhdc 2 days ago

      Oh hey! I have used your activity pub library, it's very nice :)

      • mariusor a day ago

        Thank you. :) I'm still working on it, dare I say it maybe even getting closer to a stable release.

  • codelion 2 days ago

    That's a pretty cool migration story! I've been meaning to give podman a more serious look. The OCI image format issue is good to know about – hadn't considered that compatibility angle. I'm curious, did you notice any performance differences in your CI builds after switching?

    • lbhdc 2 days ago

      It's been a while, so all my telemetry has since expired, but there was no meaningful difference in build times.

      I was prepared to roll it all back, but I never ended up running into problems with it. It's just something that happens in the background that I don't have to think about.

      • jmholla 14 hours ago

        Yea, I was under the impression docker uses OCI images these days and not their own custom definition. But I may be ill-informed.

  • LEARAX a day ago

    I would love to know more details about your CI setup. I'm running all of my self-hosted services as Quadlets (which I generally really love!) and CI (using Gitea) was/is a huge pain point.

    • lbhdc 20 hours ago

      I have a simple setup on GCP. I am using Cloud Build with the companion Github app to trigger builds on branch updates.

      I like it because I am deploying to GCP, and storing containers in Artifact Registry. Cloud Build has good interop with those other products and terraform, so its pretty convenient to live with.

      The pipelines themselves are pretty straightforward. Each step gets an image that it is executed in, and you can do anything you want in that step. There is some state sharing between steps, so if you build something in one step, you can use it in another.

    • qudat 17 hours ago

      I do a lot of self-hosting as well and settled on a git post-receive hook that sends events through https://pipe.pico.sh, and then a script that listens on that topic and builds what I need.

  • mikedelfino 2 days ago

    Are you pulling base images from Docker Hub, or do you build all images from source from scratch?

    • lbhdc 2 days ago

      I am pulling from a few registries, but trying to move everything to a private registry.

      In podman, you have to use the "full path" to work with Docker Hub, e.g. `docker.io/library/nginx`.
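
      If you want short names to keep working, unqualified-search registries can be configured (a sketch of /etc/containers/registries.conf, or the per-user file under ~/.config/containers/):

        unqualified-search-registries = ["docker.io"]

      With that in place, `podman pull nginx` resolves against docker.io the way Docker does.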

jjice 2 days ago

Has Podman become more user friendly in recent years? I gave it a go about three or four years ago now when Docker began their commercial push (which I don't have an issue with).

This was for some hobby project, so I didn't spend a ton of time, but it definitely wasn't as set-and-forget as Docker was. I believe I had to set up a separate VM or something? This was on Linux as the host OS too. It's been a while, so apologies for the hazy memory.

Or it's very possible that I botched the entire setup. In my perfect world, it's a quick install and then `podman run`. Maybe it's time to give it another go.

  • rsolva 2 days ago

    Definitely more user friendly, and I love using Quadlets! For people using Flatpaks (Linux), check out the app 'Pods' as a lightweight alternative to Podman Desktop. It is still a young project, but is already a very useful way of managing your containers and pods.

    As a side note, it is so _refreshing_ to observe the native apps popping up for Linux lately; it feels like a turning point away from the Electron-everything trend. Apps are small, start immediately, and are well integrated with the rest of the system, both functionally and visually. Some other native apps I'm using: Cartero, Decibels, GitFourchette, Wike.

  • seemaze 2 days ago

    I've found it very straightforward to work with. I run the cli on macOS to spin up ephemeral containers all the time for testing and simple tasks. Never had an issue.

    In the spirit of the OP, I also run podman rootless on a home server running the usual home lab suspects with great success. I've taken to using the 'kube play' command to deploy the apps from kubernetes yaml and been pleased with the results.
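
    As a sketch of what 'kube play' consumes (all names made up):

      # pod.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: homelab
      spec:
        containers:
        - name: web
          image: docker.io/library/nginx:latest
          ports:
          - containerPort: 80
            hostPort: 8080

    Then `podman kube play pod.yaml` starts the pod, and `podman kube down pod.yaml` tears it down.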

  • IshKebab 2 days ago

    It's almost a perfect drop-in replacement for Docker so I don't see why it would be any less "set-and-forget".

    I only ever found one thing that didn't work with it at all - I think it was Gitlab's test docker images because they set up some VMs with Vagrant or something. Pretty niche anyway.

    • bigstrat2003 2 days ago

      The one edge case I know of (and have run into) is that podman push doesn't support the --all-tags flag. They have also said they do not plan to implement it. It's annoying because that flag is useful for CI scripts (we give multiple tags to the same build), but not the end of the world either.

    • moogly 2 days ago

      I could not get LocalStack to work on Podman, to my chagrin. And no, doing the "sudo touch /etc/containers/nodocker" thing didn't solve it.

      • mdaniel 14 hours ago

          podman version
          podman pull public.ecr.aws/localstack/localstack:4.1
          podman run --detach --name lstack -p 4566:4566 public.ecr.aws/localstack/localstack:4.1
          # sorry, I don't have awscli handy
          export AWS_DEFAULT_REGION=us-east-1 AWS_ACCESS_KEY_ID=alpha AWS_SECRET_ACCESS_KEY=beta
          $HOMEBREW_PREFIX/opt/ansible/libexec/bin/python -c '
            import boto3
            sts = boto3.client("sts", endpoint_url="http://localhost:4566")
            print(sts.get_caller_identity())
            '
          {'UserId': 'AKIAIOSFODNN7EXAMPLE', 'Account': '000000000000', 'Arn': 'arn:aws:iam::000000000000:root', ...
        
        
        I'll spare you the verbosity but

          2025-02-22T18:51:56.427  INFO --- [et.reactor-0] localstack.request.aws     : AWS s3.CreateBucket => 200
          2025-02-22T18:52:14.332  INFO --- [et.reactor-0] localstack.request.aws     : AWS s3.PutObject => 200
        
          cat > sample-stack.yaml <<'YAML'
          AWSTemplateFormatVersion: 2010-09-09
          Resources:
            Iam0:
              Type: AWS::IAM::Role
              Properties:
                RoleName: Iam0
                ManagedPolicyArns:
                - arn:aws:iam::aws:policy/AdministratorAccess
                AssumeRolePolicyDocument:
                  Principal:
                    AWS:
                      Ref: AWS::AccountId
                  Effect: Allow
                  Action: sts:AssumeRole
          YAML
          create_stack_command_goes_here
          2025-02-22T18:55:02.657  INFO --- [et.reactor-0] localstack.request.aws     : AWS cloudformation.CreateStack => 200
        
        ---

        ed: ah, I bet you mean the lambda support; FWIW they do call out explicit support for Podman[1] but in my specific setup I had to switch it to use -e DOCKER_HOST=tcp://${my_vm_ip}:2375 and then $(podman system service tcp://0.0.0.0:2375) in the lima vm due to the podman.sock being chown to my macOS UID. My life experience is that engineering is filled with this kind of shit

        I used https://github.com/aws-samples/aws-cloudformation-inline-pyt... to end-to-end test it

        1: https://github.com/localstack/localstack/blob/v4.1.1/localst...

    • adenner 2 days ago

      In fact, there is even a package "podman-docker" that aliases docker to podman, so most of your commands will usually work without modification. (Of course, there are always edge cases.)

  • sieve 2 days ago

    It is not user-friendly, but it works flawlessly once you get used to it.

    I stayed away from docker all these years and tried podman from scratch last year after docker failed to work for a project I was experimenting with.

    Took an hour to read various articles and get things working.

    One thing I liked was it does not need sudo privileges or screw with the networking.

  • evilduck 2 days ago

    This is mostly solved I think. I run Podman Desktop on macOS and just aliased Docker to Podman in zshrc and it just works for me. I don’t do any local k8s or anything crazy, but it works with my compose files. I’m going to guess there’s still rough edges if you want GPU passthrough or something with complex networking, but for a server and a database running together it matches Docker itself.

  • johnbrodie 2 days ago

    Hasn't become more friendly from what I've seen. The project seems largely centered around K8s, and isn't really investing in fixing anything on the "compose" side. I did the same thing as you when Docker first started going down the more commercial path, and after dealing with random breakages for a number of years, fully switched back to Docker (for local dev work on osx).

    Podman machine is fine, but occasionally you have to fix things _in the vm_ to get your setup working. Those bugs, along with other breakages during many upgrades, plus slower performance compared to Docker, made me switch back. This is just for local dev with a web app or two and some supporting services in their own containers via compose, nothing special. Totally not worth it IMO.

  • h14h 2 days ago

    The biggest difference in my (admittedly limited) experience, is that you need to start a "podman machine" before you can start running containers. This is architecturally different from Docker's use of a daemon, in ways I'm not qualified to explain in more detail.

    It's an extra step, but not a painful one -- the default podman machine configuration seems to work pretty well out of the box for most things.

    Honestly, for my use-case (running the Supabase stack locally), it was seamless enough to switch that I'm a little surprised a bash script like this is necessary. On my Mac, I think it was simply `brew install podman` followed by `podman machine start` and then I got back to work as if I were still using docker.

    By far the most tedious part of the switch was fully uninstalling Docker, and all its various startup programs & background processes.

    • stryan 2 days ago

      Podman only requires `podman machine` if you're using a non-Linux system; this sets up a Linux VM in the background that all the actual containers run on. Docker does the same thing, though I think it sets it up for you automatically.
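
      On macOS the whole flow boils down to something like this (a sketch, assuming podman was installed via brew):

        podman machine init     # one-time: create the backing Linux VM
        podman machine start    # boot it
        podman run --rm docker.io/library/alpine echo hi   # runs inside the VM transparently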

  • bjoli 2 days ago

    The only snag I hit regularly is me forgetting to set :z or :Z on my podman volumes to make it play well with SELinux.

    I used to use docker compose, but migrated to podman quadlets. The only thing I miss is being able to define every container I run in a pod in the .pod file itself. Having it integrate with systemd is great.
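
    For reference, a minimal Quadlet unit looks something like this (a sketch; the image, port, and path are placeholders):

      # ~/.config/containers/systemd/web.container
      [Container]
      Image=docker.io/library/nginx:alpine
      PublishPort=8080:80
      # :Z relabels the volume so SELinux lets the container use it
      Volume=%h/web-data:/usr/share/nginx/html:Z

      [Install]
      WantedBy=default.target

    After a `systemctl --user daemon-reload` it shows up as web.service like any other unit.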

  • mixedCase 2 days ago

    On NixOS it was as trivial as `podman.enable = true;`. IIRC on Arch it was just a matter of installing the package.

    It's all daemonless, rootless and runs directly with your host kernel so it should be as simple as an application of this kind gets. Probably you followed some instructions somewhere that involved whatever the podman equivalent for docker-machine is?

  • ijustlovemath 2 days ago

    My container using is admittedly pretty simplistic (CRUD app with some REST services), but after initial setup I've found it to be extremely reliable and simple to use. They strive for 1:1 docker compat so I think it should be pretty easy to migrate.

mdaniel 13 hours ago

On the off chance it matters to anyone, brew whines that podman requires macOS 13.x due to https://github.com/containers/podman/issues/22121 but that's only for $(podman machine start) support, which relies on https://github.com/crc-org/vfkit/issues/37

If you already have colima lying around, that means you have lima and lima ships with both podman and podman-rootful templates:

  limactl create --name=proot template://podman-rootful --vm-type=qemu --cpus=4 --memory 4 --disk 20
  # it will emit the instructions at the end, but for context
  podman system connection add lima-proot "unix:///$HOME/.lima/proot/sock/podman.sock"
  podman system connection default lima-proot
  podman version # <-- off to the races

osigurdson 2 days ago

Podman is interesting as well because it can run Kubernetes yamls (to a small extent) which can be handy.

  • vaylian 2 days ago

    With the command `podman kube play file.yaml`
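
    For example, with a plain Pod manifest (a sketch; the image and ports are placeholders):

      # pod.yaml
      apiVersion: v1
      kind: Pod
      metadata:
        name: demo
      spec:
        containers:
          - name: web
            image: docker.io/library/nginx:alpine
            ports:
              - containerPort: 80
                hostPort: 8080

    `podman kube play pod.yaml` starts it, and `podman kube down pod.yaml` tears it down.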

  • moondev a day ago

    Launching Kubernetes pods without a kube-apiserver. The kubelet can run in standalone mode and launch static pods as well, but I don't believe it supports deployment manifests like podman does. Pretty handy.

evantbyrne 2 days ago

Does Podman have a swarm counterpart, or does running services still effectively require configuring systemd and then switching to kubernetes for multi-machine?

  • hylaride 2 days ago

    Last I checked there's no native swarm equivalent in podman. Your best bet is nomad (much simpler than k8s if you want to spin some local setups) or kubernetes.

    • kitd 2 days ago

      kubernetes

      Podman can work with local pods, using the same yaml as for K8s. Not quite docker swarm, but useful for local testing IME when k8s is the eventual target.

    • evantbyrne 2 days ago

      Eh, starting with k8s just because I might want kubernetes in five years is a hard sell, given how easy swarm is to setup. devops that does not fulfill an immediate business need should be delayed because that labor is hella expensive.

  • KronisLV 17 hours ago

    It doesn't, which to me seems like a bummer.

    Docker Compose is really great for multi-container deployments on a single machine. And Docker Swarm takes that same Compose specification (although there were historical differences) and brings it over to clusters, all while remaining similarly simple. I'm surprised that outside of Docker Swarm, Nomad or lightweight Kubernetes distros like K3s there haven't been that many attempts at simple clustering solutions. Even then, Kubernetes (which Podman supports) ends up being more complex.

  • Spivak 2 days ago

    No to the first, yes to the second. Podman has a daemon mode that works like the Docker daemon, no systemd necessary.

    • keeperofdakeys 2 days ago

      > Podman has a daemon mode ...

      Can you provide any documentation about that?

      • stryan 2 days ago

        They're probably referring to the podman.socket, which isn't quite a daemon mode but lets podman emulate one pretty well. Unless there is some daemon mode I missed that got added, but I'd be rather surprised at that.

      • Spivak 2 days ago

        Yep!

        https://docs.podman.io/en/latest/markdown/podman-system-serv...

        In places where you're doing a `dnf install podman` all you typically need to do is start the service and then point either the podman cli or docker cli directly at it. In Fedora for example it's podman.service.

        I honestly prefer using the official docker cli when talking to podman.
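
        Concretely, the rootless flow is something like this (a sketch; the socket path comes from the podman.socket unit):

          systemctl --user enable --now podman.socket
          export DOCKER_HOST=unix://$XDG_RUNTIME_DIR/podman/podman.sock
          docker ps   # stock docker CLI, actually talking to podman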

DrBenCarson 17 hours ago

This is fine for Linux users and the actual servers, but for local development on a Mac, you cannot beat Orbstack (imo)

dminik 2 days ago

To mirror some of the other comments here: I've had decent success in using podman for my local dev setup (postgres, redis, ...).

I did run into one issue though. Rootless mode isn't supported (or at least not easy to set up) when the user account is a member of an active directory (or whatever Linux equivalent my work laptop is running).

Though root mode works, I can't use podman desktop and I have to sudo every command.

vivzkestrel 2 days ago

Just a pro tip: "if it ain't broken, don't fix it." If you have working docker file(s), do not migrate unless there is a groundbreaking need.

  • natebc 19 hours ago

    Podman and Buildah consume Dockerfiles perfectly fine. Have you come across a scenario where Dockerfile contents were a concern?

  • exceptione 2 days ago

    Security might be such a need, but that depends on how important that is for you. On top, docker auto-fiddles with your firewall.

deskamess a day ago

What's the podman UX/story on Windows if anyone is using it? Say for Server 2022 (prod) and Win 11 Pro (dev).

Does one prefer using WSL2 or Hyper-V as the machine provider? From what I understand, podman provides the container engine natively so nothing additional is required. Do container runtimes like containerd only come into play when using kubernetes? Not a windows specific question, but is there a reason to pick a podman native container vs one in a k8s context. I understand podman supports k8s as well. Other info: No current containers (docker or k8s) are in play.

Thanks in advance.

  • woodrowbarlow a day ago

    on windows, rancher desktop + podman offers a similar experience to docker desktop.

powerhugs 20 hours ago

I went with podman in 2020 when docker acted out last time and haven't looked back since.

JamesSwift a day ago

I've not looked into podman but this reminded me that I miss rkt. Anyone with experience in rkt and podman able to give me an overview of how they currently differ? I'm not a huge fan of how docker works, so I'd love an alternative.

  • jchw a day ago

    I went from rkt to podman. Podman is compatible with Docker, including the socket/API, but is similar to rkt in that it launches the container as a child when run directly (versus Docker, which runs all containers and storage operations under the daemon). Podman also has integration with systemd[1] though it mostly just generates boilerplate for you, since it works a lot closer to how actual daemons work. (P.S.: You might want `--new` if you want a new container each time the unit starts.)

    Podman also supports running in "rootless" mode, using kernel.unprivileged_userns_clone and subuid/subgids for the sandboxing and slirp4netns for the networking. This obviously isn't exactly the same as rootful networking, but it works well enough for 99% of the use cases.
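
    You can inspect that machinery directly; something like this (the user name and ID ranges are examples from a typical install):

      $ grep "$USER" /etc/subuid /etc/subgid
      /etc/subuid:alice:100000:65536
      /etc/subgid:alice:100000:65536
      $ podman unshare cat /proc/self/uid_map
               0       1000          1
               1     100000      65536

    i.e. inside the namespace you are root (mapped back to your real UID), and the next 65536 UIDs map onto the subuid range.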

    If you are running Linux, I think using Podman instead of Docker is generally a no-brainer. I think the way they've approached rootless support is a lot better than Docker and things "just work" more often than not.

    [1]: https://docs.podman.io/en/latest/markdown/podman-generate-sy...

DrNosferatu a day ago

What about Singularity?

And while we’re at it, what’s your favorite non-sudo Docker alternative? And why?

rubenv a day ago

Only thing I really miss is good Podman support in Skaffold.

gdevenyi 18 hours ago

I can't use podman until they start releasing up to date packages for all systems I use.

Have they started releasing packages yet?

  • DistractionRect 14 hours ago

    They offer packages but if you're on a point release distro you'll want to build it from source.

    On my Debian box, I build the podman release target in a chroot, extract the archive into /opt/, and use stow to install/uninstall the package. You'll also want the latest crun, which I likewise build and install via stow.

  • worthless-trash 18 hours ago

    Not enough data provided. Packages exist for systems.

kalaksi 2 days ago

What if I'm using docker-compose?

vednig 2 days ago

Just read the source code.

The script does almost all of the things required for existing docker containers: migrating networks, blocks, restart mechanisms, etc. That leaves out just one thing, migrating any third-party scripts that use docker over to podman-based instructions. Handling that would greatly improve the experience. Good luck

whalesalad 2 days ago

If you are using docker in this carefully assembled stateful way - you are doing it wrong. You should be using docker via scripts and IaaS tooling that will assert your desired setup from some kind of configuration. Meaning, you should be able to easily blow all of that away and recreate it with a single script. Likewise, a transition to podman should involve adjusting your scripts to re-assert that plan against podman instead of docker.

This is a cool tool for the decrepit hand-configured server with no documentation that has been running since 2017 untouched and needs an update... but I would encourage you to not fall into this trap to begin with.

  • pinoy420 2 days ago

    Yeah in theory. In practice that never happens for anything other than a vercel app.

pinoy420 2 days ago

Why do people consistently like to make their lives harder in software engineering?

  • passivegains 2 days ago

    programmers ("developers," if you prefer) have trouble with "second order" thinking. we integrate X technology in Y way, maybe with some Z optimization, and that'll solve the problem.

    okay, but, like... will it?

    is there new maintenance stuff you've completely ignored? (I've noticed this is more common when maintenance is someone else's job.) is it completely new and none of us know about it so we get blindsided unless everything goes exactly right every time? do we get visibility into what it's doing? can we fix it when (not if, when) it breaks? can everyone work on it or is it impossible for anyone but the person who set it up? they're good at thinking up things that should fix the problem but less good at things that will.

    I'm a fan of cross-functional feature teams because others in the software engineering ecosystem like QA, systems people, ops, etc. tend not to have this problem. programmers are accountable to the other stakeholders up front, bad ideas are handled in real time, and- this is the most important part- everyone learns. (I won't say all systems people are cantankerous bastards... but the mistakes they've harangued me for are usually the mistakes I don't make twice.)

MrThoughtful 2 days ago

I never tried Podman. I guess the benefit is that it runs on demand and not as an always-on daemon?

How does one install podman on Debian and how does one get a Debian image to run inside podman?

  • pzmarzly 2 days ago

    Runs on demand, doesn't require root, can be nested, usually uses newer and simpler primitives (e.g. a few nftables rules in Podman vs iptables spaghetti in Docker). In my experience it is ~90% compatible with Docker. The author explains the practical differences in the blog post https://www.edu4rdshl.dev/posts/from-docker-to-podman-full-m...

    It is usually easier to install - most distros ship a relatively recent version of Podman, while Docker is split between docker.io (ancient), docker-ce (free but not in repos) and docker-ee.

    Not everything is rosy, some tools expect to be talking to real Docker and don't get fooled by `ln -s podman docker`. But it is almost there.

    Regarding Debian, just `sudo apt install podman && podman run -it debian` - see https://wiki.debian.org/Podman

    • wink 2 days ago

      Careful, the version in Debian 12 is old and apparently just barely predates the "good" versions.

      I had so many problems that I went back to Docker, because current Podman didn't seem to be trivially installable on Debian 12.

      • exceptione 2 days ago

        In general, if one is happy to run very old versions of software, Debian can be your daily driver. If not, you are in for pain in my experience. (That is also why Ubuntu as the default Linux is a tragedy: old bugs and missing features mean that officially supporting Linux is not really attractive for vendors.)

        • wink 2 days ago

          I've not experienced something on this scale for many years, "Debian stable packages are so outdated" is mostly a meme. Debian 12 was 1y old when I did this and very often you can relatively easily find a backport or build one - but I think in this case it was either glibc or kernel, that's why "just run upstream" didn't work.

          • WD-42 2 days ago

            What’s the point of using a distribution if you need to find back ports or build your own? Distros are, after all, mostly collections of installable software.

            • wink 2 days ago

              The point is that it works 95% of the time, or probably more like 98%.

              If this is a e.g. webserver and I only need my fastcgi backend built by myself, I can still have reverse proxy, database, and every other package be done by the distro.

              No one said you need backports. More like: If it fits 90% and one package doesn't work, you get it from somewhere else - that doesn't invalidate the concept of a distro for me. YMMV

              • exceptione 2 days ago

                Honest question: wouldn't that make you more nervous that you've now arrived at an unknown/unsupported configuration?

                Boring stability is the goal, but if Debian does not fit as is, then why not find a total package that is somewhat more cutting edge but does fit together? Especially given the fact that Debian does customization to upstream, so esoteric times esoteric.

                • wink 2 days ago

                  It doesn't make me nervous because Debian has only let me down a couple of times over nearly 20 years and for example Ubuntu and RHEL and SLES have let me down dozens of times each.

                  Also I don't usually run "supported". I just run a system that fits my needs.

                  • exceptione a day ago

                    Thanks for following up. Yeah, I should rather have said "tested/vetted".

              • vq a day ago

                I maintain a couple of Debian servers and this is how I do it too.

                Reverse proxy, DB, etc from Debian. The application server is built and deployed with nix. The Python version (and all the dependencies) that runs the application server is the tagged one in my nix flake which is the same used in the development environment.

                I make sure that PostgreSQL is never upgraded past what is available in the latest Debian stable on any on the dev machines.

      • righthand 2 days ago

        I did not have this same experience, all my VPS successfully run Debian’s podman package with zero issue running containers.

        • wink 2 days ago

          Glad to hear. When I brought it up somewhere I got exactly the "oh you're running 4.x - we also had that problem, but 5 works fine".

  • natebc 2 days ago

    1) Podman is available in default debian repos. https://packages.debian.org/bookworm/podman

    2) `podman run --entrypoint="" --rm -it debian:stable /bin/bash`

    in most instances you can just alias docker to podman and carry on. It uses OCI formatted images just like docker and uses the same registry infrastructure that docker uses.

    • aragilar 2 days ago

      Installing `podman-docker` will do the aliasing for you.

    • MrThoughtful 2 days ago

      Where does it pull the Debian image from?

      I would think the Docker infrastructure is financed by Docker Inc as a marketing tool for their paid services? Are they ok when other software utilizes it?

      • natebc 19 hours ago

        By default it uses whatever is in registries.conf for unqualified-search-registries. You can specify in the fully qualified image name if you'd like.

        I can't speak to what Docker Inc. is okay with or not.
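
        For instance, the relevant stanza looks something like this (the registry list is just an example):

          # /etc/containers/registries.conf
          unqualified-search-registries = ["docker.io", "quay.io"]

        With a single entry, unqualified names like `debian` resolve without prompting; with several, podman asks which one you meant.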

      • FireInsight a day ago

        On my system it asks between a few different public registries, and dockerhub/docker.io is one of the choices.

        It's all public infrastructure for hosting container images; I don't think Docker-the-company minds other software interfacing with it. After all, they get to call them 'Docker images' and 'Dockerfiles' and put their branding everywhere.

  • 2OEH8eoCRo0 2 days ago

    apt install podman

    podman run -it debian bash