Just want to say everyone should be using podman. Its architecture is far saner and integrates with Linux at a much more fundamental level: a regular process that can be started via systemd, etc., instead of a root daemon (run it as root if you need privileged containers).
They've also built an incredible ecosystem around podman itself. Red Hat has been absolutely cooking with containers recently.
Systemd .container services (Quadlet) are excellent. I used them to set up multiple smaller sites without any issues. Containers work just like regular systemd services. I created a small Ansible template to demonstrate how simple yet powerful this solution is.
GH: https://github.com/Mati365/hetzner-podman-bunjs-deploy
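As an illustration, here's a minimal sketch of such a Quadlet unit; the name, image, and port are made up:

    # hypothetical example: a rootless Quadlet unit
    mkdir -p ~/.config/containers/systemd
    cat > ~/.config/containers/systemd/whoami.container <<'EOF'
    [Unit]
    Description=whoami demo container

    [Container]
    Image=docker.io/traefik/whoami:latest
    PublishPort=8080:80

    [Service]
    Restart=always

    [Install]
    WantedBy=default.target
    EOF
    systemctl --user daemon-reload    # Quadlet generates whoami.service from the file
    systemctl --user start whoami.service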
I agree. I have been using it as a drop-in Docker replacement (alongside podman compose, via aliases) for years now, and I often just forget I am not using Docker. The only time it bit me recently was when some scripts were looking for containers with Docker-specific labels, and I had to figure out why they failed only for me.
What are the benefits of using Podman and not Docker?
From a technical perspective the big two are:
- Podman is significantly simpler than Docker; notably, Podman doesn't need to run a background process: your containers run directly as separate processes.
- Podman avoids some long-standing security design weaknesses in Docker ("rootless"). Docker rootless _is_ a thing, but has compatibility limits.
Rootless, daemonless, hardware passthrough, no Docker Inc pulling the rug, etc. etc.
Licensing. No root daemon.
Docker Engine is Apache 2.0, is this not a good license? Docker has a rootless mode too.
When you say "Docker Engine", that suggests other parts of Docker are licensed differently (I haven't looked into it). I'd say you have to compare the whole ecosystem and not just a single component either way.
I said "Docker Engine" specifically because it is "Docker Engine" that is the counterpart of Podman, and therefore it is the only component that matters. The discussion here is "Docker" vs "Podman," but "Docker Engine" is what we really mean when saying "Docker."
Docker Desktop has a much more restrictive license. Unfortunately, on macOS and Windows, the "Docker Desktop" product is often referred to as simply "Docker".
FWIW, Podman has an open source alternative to Docker Desktop as well.
No root daemon got replaced with "but if you want a replacement for docker compose, you ought to be using systemd (quadlets)".
Meh
Incorrect, the use of a non-root daemon is essential for isolation and security.
I'm fully on board with the idea that root daemons shouldn't be necessary; I just don't want systemd to become a dependency for yet another thing it shouldn't be a dependency for.
Huh, that's another uninformed take.
systemd is at its core an app for running services, such as containers.
You should read up on podman and systemd before making up more arguments.
The point is that Red Hat went on a tirade for years telling everyone: "Docker bad, root! Podman good, no root! Docker bad, daemon! Podman good, no daemon!".
And then here come Quadlets and the systemd requirement. Irony at its finest! The reality is Podman is good software if you've locked yourself into a corner with Dan Walsh and RHEL. In that case, enjoy.
For everyone else, the OSS ecosystem that is Docker actually has less licensing overhead and fewer restrictions, in the long run, than dealing with IBM/Red Hat. IMO, that is.
You can run Quadlets under the systemd user session just as well.
But... you don't need systemd or Quadlets to run Podman; it's just convenient. You can also use podman-compose (I personally don't, but a coworker does and it's reasonable).
But yeah I already use a distro with systemd (most folks do, I think), so for me, using Podman with systemd doesn't add a root daemon, it reuses an existing one (again, for most Linux distros/users).
Exactly my point.
Today I can run Docker rootless, and in that case can leverage compose in the same manner. Is it the default? No, you've got me there.
systemd runs as root. It's just ironic given all the hand-waving over the years. And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation, which is the selling point.
I've used Podman. It's fine. But the arguments of the past aren't as sharp as they originally were. I believe Docker improved because of Podman, so there's that. But to discount the reality of the doublespeak by paid-for representatives from Red Hat/IBM is, again, ironic.
> And Docker, and all its tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation, which is the selling point
I would argue that Docker's tooling is not well thought out, and that's putting it mildly. I can name many things I do not like about it, and I struggle to find things I like about its tooling.
Podman copied it, which honestly makes me not love podman so much. Podman has quite poor documentation, and it doesn’t even seem to try to build actually good designs for tooling.
Curious what your point is?
> I can name many things I do not like about it, and I struggle to find things I like about its tooling.
Please share.
Off the top of my head:
FROM [foo]: [foo] is a reference that is generally not namespaced (ubuntu is relative to some registry, but it doesn't say which one) and it's expected to be mutable (ubuntu:latest today is not the same as ubuntu:latest tomorrow).
There are no lockfiles to pin and commit dependency versions.
Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
Mostly resulting from all of the above, build layer caching is basically a YOLO situation. I've had a build result in dependencies that were literally more than a year out of date, because I built on a system that hadn't done that particular build for a while, had a layer cached (by name!), and I forgot to specify a TTL when I ran the build. But, of course, there is no correct TTL to specify.
Every lesson that anyone in the history of computing has ever learned about declarative or pure programming has been completely forgotten by the build systems.
Why on Earth does copying in data require spinning up a container?
Moving on from builds:
Containers are read-write by default, not read-only.
Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
The tooling around what constitutes a running container is, to me, rather unpleasant. I can't make a named group of services, restart them, possibly change some of the parts that make them up, and keep the same name in a pleasant manner. I can 'compose down' and 'compose up' them and hope I get a good state. Sometimes it works. And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
I'm sure I could go on.
> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.
> Why on Earth does copying in data require spinning up a container?
It doesn't.
> Containers are read-write by default, not read-only.
I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
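For what it's worth, a minimal sketch of that knob, using flags both Docker and Podman support:

    # run with an immutable root filesystem; writable scratch space only where declared
    podman run --rm --read-only --tmpfs /tmp docker.io/library/alpine \
      sh -c 'echo foo > /etc/test'
    # fails with: sh: can't create /etc/test: Read-only file system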
> Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
Almost all of this is wrong.
> And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
Pretty much everything you've outlined is, as I see it, a misunderstanding of what containers aim to solve and how they're operationalized. If all of these things were true, container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
>> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
> I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.
They're not so different. An environment is just big software. People have come up with schemes for building large environments for decades, e.g. rpmbuild, nix, Gentoo, whatever Debian's build system is called, etc. And, as far as I know, all of these have each layer explicitly declare what it is mutating; all of them track the input dependencies for each layer; and most or all of them block network access in build steps; some of them try to make layer builds explicitly reproducible. And software build systems (make, waf, npm, etc) have rather similar properties. And then there's Docker, which does none of these.
> > Containers are read-write by default, not read-only.
> I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
Right. The issue is that the default is wrong. In a container:
$ echo foo >the_wrong_path
works, by default, using COW. No error. And the result is even kind of persistent -- it lasts until the "container" goes away, which can often mean "exactly until you try to update your image". And then you lose data.
> > Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
> Almost all of this is wrong.
I would really like to believe you. I would love for Docker to work better, and I tried to believe you, and I looked up best practices from the horse's mouth:
https://docs.docker.com/get-started/docker-concepts/running-...
and
https://docs.docker.com/get-started/docker-concepts/running-...
Look, in every programming language and environment I've ever used, even assembly, an interface has a name. If I write a function, it looks like this:
void do_thing();
If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.
At least the docs try to remind people that the whole mechanism is "insecure by default".
I even tried asking a fancy LLM how to export a port by name, and the LLM (as expected) went into full obsequious mode, told me it's possible, gave me examples that don't do it, told me that Docker Compose can do it, and finally admitted the actual answer: "However, it's important to note that the OCI image specification itself (like in a Dockerfile) doesn't have a direct mechanism for naming ports."
> > And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
> What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
I'd like to have some way for a developer to declare that their software can be run with the 'app' container and a 'mysql' container and you connect them like so. Or even that it's just one container image and it needs the following volumes bound in. And you could actually wire them up with different orchestration systems, and the systems could all read that metadata and help do the right thing. But no, no such metadata exists in an orchestration-system-agnostic way.
> If all of these things were true, container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.
> They're not so different. An environment is just big software.
Containers are not a software development platform, but a platform that can be used in the build phase of software development. They are very different. Docker is not inherently a software development platform because it does not provide the tools required to write, compile, or debug code. Instead, Docker is a platform that enables packaging applications and their dependencies into lightweight, portable containers. These containers can be used in various stages of the software development lifecycle but are not the development environment themselves. This is not just "big software" - which makes absolutely no sense.
> Right. The issue is that the default is wrong. In a container: $ echo foo >the_wrong_path
Can you do incorrect things in software development? Yes. Can you do incorrect things in containers? Yes. You're doing it wrong. If you are writing to a part of the filesystem that is not mounted outside of the container, yes, you will lose your data. Everyone using containers knows this, and there are plenty of ways around it. I guess in your case you just always need to export the root of the filesystem so you don't footgun yourself? I mean, c'mon man. It sounds like you'd like to live in a software bubble to protect you from yourself at this point.
> If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.
You clearly don't understand Docker networking. What you're describing is the default bridge. There are other ways to use networking in Docker outside of the default. In your case, again, maybe just run your containers in "host" networking mode because, again, you're too ignorant to read and understand the documentation of why you have to deal with a port mapping in a container that's sitting behind a bridge network. Again you're making up arguments and literally have no clue what you're talking about.
> Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.
OK? Grab a dictionary - read the definition for the word: "subjective", enjoy!
systemd runs as root, yes, but services started by systemd don't unless you instruct them to.
That means your podman containers don't run as root unless you want them to.
Mine run as user services.
I don't see your point. This is exactly how Docker works. Containers instantiated from the Docker daemon don't need to run as root. But you can... just like your containers started from systemd (Quadlet).
I run all my containers, when using Docker, as non-root. So where is the upside other than where your trust lies?
Have you used podman compose? It's shit.
When I bring this up online, the answer is invariably "well, use quadlets then" (i.e. systemd).
> systemd doesn't add a root daemon, it reuses an existing one
lol, the same could be said of every docker container I've ever run....
Quadlets is systemd. Red Hat declared it to be the recommended/blessed way of running containers. podman compose is treated like the bastard stepchild (presumably because it doesn't have systemd as a dependency).
Please try to understand the podman ecosystem before lashing out.
Podman runs on FreeBSD without systemd, so there you go.
Yeah, it runs fine without systemd, until you need a docker compose substitute, and then you get told to use quadlets (systemd), podman compose (neglected and broken as fuck), docker compose (with a daemon! also not totally compatible), or even Kubernetes...
Process isolation
I understand that k8s uses containerd or similar daemons to run containers. Do podman's criticisms of docker also apply to k8s?
No. K8s runs containers in a way very similar to Podman. Podman is like a middle point between the simplicity of containerd and the feature set of the Kubernetes kubelet.
Until it's supported by AWS ECS, it's not relevant for me, since that's what my container builds are for.
Images built by Podman can be run by Docker and vice versa.
Protip: if you want to use Podman (or Podman Desktop) with Docker Compose compatibility, you'll have a better time installing podman-compose [1] and setting up your env like so:
    alias docker=podman
    # On macOS: `brew install podman-compose`
    export PODMAN_COMPOSE_PROVIDER=podman-compose
    export PODMAN_COMPOSE_WARNING_LOGS=false
    # Or, if you still want to use Docker Compose:
    # export PODMAN_COMPOSE_PROVIDER=docker-compose
Most of my initial issues transitioning to Podman were actually just Docker (and Docker Desktop) issues.
Quadlets are great and Podman has a tool called podlet [2] for converting Docker Compose files to Quadlets.
I prefer using a tool like kompose [3] to turn my Docker Compose files into Kubernetes manifests. Then I can use Podman's Kubernetes integration (with some tweaks for port forwarding [4]) to replace Docker Compose altogether!
[1] https://github.com/containers/podman-compose
[2] https://github.com/containers/podlet
[3] https://github.com/kubernetes/kompose
[4] https://kompose.io/user-guide/#komposecontrollerportexpose
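A minimal sketch of that round trip, with made-up file names:

    # convert an existing Compose file to Kubernetes manifests, then run them with podman
    kompose convert -f docker-compose.yaml -o k8s.yaml
    podman kube play k8s.yaml
    # ...and tear it all down again
    podman kube down k8s.yaml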
podman compose is really bad
Indeed, it has a lot of limitations. It's better to use docker compose with a podman socket.
How so? What problems do you have with it?
It never works on the first go: constant debugging and breakage.
Missing features, lots of debugging spam which can't be turned off, doesn't properly adhere to the compose spec...
Last year I transitioned all of my personal projects to use podman. The biggest surface area was converting CI to use podman to build images from my Dockerfiles, but I also changed other tooling to use it (like having kind use it instead of Docker).
For the most part this worked without issue. The only snag I ran into was that my CI provider can't use OCI-formatted images. Podman lets you select the format of the image to build, so I was able to work around this using the `--format=docker` flag.
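For illustration, a sketch of that workaround; the image name is hypothetical:

    # build with the older Docker manifest format instead of the OCI default
    podman build --format=docker -t registry.example.com/myapp:latest .
    podman push registry.example.com/myapp:latest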
Same here. I migrated maybe 5-6 projects from docker to buildah and podman about 2 years ago and never looked back.
Unlike other posts I've seen around, I haven't really encountered issues with CI or local handling of images, though I am using the most bare-bones of CI, SourceHut. And I actually feel better about using shell scripts to build the images compared to a Dockerfile.
Oh hey! I have used your ActivityPub library, it's very nice :)
Thank you. :) I'm still working on it; dare I say it's maybe even getting closer to a stable release.
That's a pretty cool migration story! I've been meaning to give podman a more serious look. The OCI image format issue is good to know about; I hadn't considered that compatibility angle. I'm curious, did you notice any performance differences in your CI builds after switching?
It's been a while, so all my telemetry has since expired, but there was no meaningful difference in time.
I was prepared to roll it all back, but I never ended up running into problems with it. It's just something that happens in the background that I don't have to think about.
Yea, I was under the impression Docker uses OCI containers these days and not their own custom definition. But I may be ill-informed.
I would love to know more details about your CI setup. I'm running all of my self-hosted services as Quadlets (which I generally really love!) and CI (using Gitea) was/is a huge pain point.
I have a simple setup on GCP. I am using Cloud Build with the companion GitHub app to trigger builds on branch updates.
I like it because I am deploying to GCP and storing containers in Artifact Registry. Cloud Build has good interop with those other products and Terraform, so it's pretty convenient to live with.
The pipelines themselves are pretty straightforward. Each step gets an image that it is executed in, and you can do anything you want in that step. There is some state sharing between steps, so if you build something in one step, you can use it in another.
I do a lot of self-hosting as well, and settled on a git post-receive hook that sends events through https://pipe.pico.sh, with a script that listens on that topic and builds what I need.
Are you pulling base images from Docker Hub, or do you build all images from source from scratch?
I am pulling from a few registries, but trying to move everything to a private registry.
In podman, you have to use the "full path" to work with Docker Hub, e.g. `docker.io/library/nginx`.
Has Podman become more user-friendly in recent years? I gave it a go about three or four years ago now, when Docker began their commercial push (which I don't have an issue with).
This was for some hobby project, so I didn't spend a ton of time, but it definitely wasn't as set-and-forget as Docker was. I believe I had to set up a separate VM or something? This was on Linux as the host OS too. It's been a while, so apologies for the hazy memory.
Or it's very possible that I botched the entire setup. In my perfect world, it's a quick install and then `podman run`. Maybe it's time to give it another go.
Definitely more user-friendly, and I love using Quadlets! For people using Flatpaks (Linux), check out the app 'Pods' as a lightweight alternative to Podman Desktop. It is still a young project, but it is already a very useful way of managing your containers and pods.
As a side note, it is so _refreshing_ to observe the native apps popping up for Linux lately; it feels like a turning point away from the Electron-everything trend. Apps are small, start immediately, and are well integrated with the rest of the system, both functionally and visually. A few other examples of native apps: Cartero, Decibels, GitFourchette, Wike, to name a few that I'm using.
I've found it very straightforward to work with. I run the CLI on macOS to spin up ephemeral containers all the time for testing and simple tasks. Never had an issue.
In the spirit of the OP, I also run podman rootless on a home server running the usual home lab suspects with great success. I've taken to using the 'kube play' command to deploy the apps from kubernetes yaml and been pleased with the results.
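A sketch of that workflow, with a made-up manifest:

    # write a minimal pod manifest and run it rootless
    cat > whoami-pod.yaml <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: whoami
    spec:
      containers:
        - name: whoami
          image: docker.io/traefik/whoami:latest
          ports:
            - containerPort: 80
              hostPort: 8080
    EOF
    podman kube play whoami-pod.yaml
    curl http://localhost:8080            # served by the pod
    podman kube down whoami-pod.yaml      # stop and remove everything it created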
It's almost a perfect drop-in replacement for Docker so I don't see why it would be any less "set-and-forget".
I only ever found one thing that didn't work with it at all - I think it was Gitlab's test docker images because they set up some VMs with Vagrant or something. Pretty niche anyway.
The one edge case I know of (and have run into) is that podman push doesn't support the --all-tags flag. They have also said they do not plan to implement it. It's annoying because that flag is useful for CI scripts (we give multiple tags to the same build), but not the end of the world either.
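A sketch of the usual workaround; the registry and tag list are made up:

    # push each tag explicitly, since `podman push` has no --all-tags
    for tag in "$GIT_COMMIT" latest; do
      podman push "registry.example.com/myapp:$tag"
    done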
edit: ah, I bet you mean the Lambda support; FWIW they do call out explicit support for Podman [1], but in my specific setup I had to switch it to use -e DOCKER_HOST=tcp://${my_vm_ip}:2375 and then $(podman system service tcp://0.0.0.0:2375) in the Lima VM, due to the podman.sock being chown'd to my macOS UID. My life experience is that engineering is filled with this kind of shit
In fact, there is even a package, "podman-docker", that aliases docker to podman, so most of your commands will usually work without modification. (Of course, there are always edge cases.)
This is mostly solved I think. I run Podman Desktop on macOS and just aliased Docker to Podman in zshrc and it just works for me. I don’t do any local k8s or anything crazy, but it works with my compose files. I’m going to guess there’s still rough edges if you want GPU passthrough or something with complex networking, but for a server and a database running together it matches Docker itself.
Hasn't become more friendly from what I've seen. The project seems largely centered around K8s, and isn't really investing in fixing anything on the "compose" side. I did the same thing as you when Docker first started going down the more commercial path, and after dealing with random breakages for a number of years, fully switched back to Docker (for local dev work on osx).
Podman machine is fine, but occasionally you have to fix things _in the vm_ to get your setup working. Those bugs, along with other breakages during many upgrades, plus slower performance compared to Docker, made me switch back. This is just for local dev with a web app or two and some supporting services in their own containers via compose, nothing special. Totally not worth it IMO.
The biggest difference in my (admittedly limited) experience, is that you need to start a "podman machine" before you can start running containers. This is architecturally different from Docker's use of a daemon, in ways I'm not qualified to explain in more detail.
It's an extra step, but not a painful one -- the default podman machine configuration seems to work pretty well out of the box for most things.
Honestly, for my use case (running the Supabase stack locally), it was seamless enough to switch that I'm a little surprised a bash script like this is necessary. On my Mac, I think it was simply `brew install podman` followed by `podman machine start`, and then I got back to work as if I were still using Docker.
By far the most tedious part of the switch was fully uninstalling Docker, and all its various startup programs & background processes.
Podman only requires `podman machine` if you're using a non-Linux system; this sets up a Linux VM in the background that all the actual containers run on. Docker does the same thing, though I think it sets it up for you automatically.
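On those platforms, the one-time setup is roughly:

    # macOS/Windows only: create and start the backing Linux VM once
    podman machine init
    podman machine start
    podman run --rm docker.io/library/alpine echo "hello from the VM"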
The only snag I hit regularly is me forgetting to set :z or :Z on my podman volumes to make it play well with SELinux.
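For example (paths and names made up):

    # :Z relabels the bind mount privately for this container; :z shares the label
    podman run -d --name web -v ./site:/usr/share/nginx/html:Z docker.io/library/nginx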
I used to use docker compose, but migrated to podman quadlets. The only thing I miss is being able to define every container I run in a pod in the .pod file itself. Having it integrate with systemd is great.
On NixOS it was as trivial as `podman.enable = true;`. IIRC on Arch it was just a matter of installing the package.
It's all daemonless and rootless, and it runs directly on your host kernel, so it should be as simple as an application of this kind gets. Probably you followed some instructions somewhere that involved whatever the podman equivalent of docker-machine is?
My container usage is admittedly pretty simplistic (a CRUD app with some REST services), but after the initial setup I've found it to be extremely reliable and simple to use. They strive for 1:1 Docker compatibility, so I think it should be pretty easy to migrate.
If you already have colima lying around, that means you have lima and lima ships with both podman and podman-rootful templates:
    limactl create --name=proot template://podman-rootful --vm-type=qemu --cpus=4 --memory 4 --disk 20
    # it will emit the instructions at the end, but for context:
    podman system connection add lima-proot "unix:///$HOME/.lima/proot/sock/podman.sock"
    podman system connection default lima-proot
    podman version # <-- off to the races
Launching Kubernetes pods without a kube-apiserver. The kubelet can run in standalone mode and launch static pods as well, but I don't believe it supports deployment manifests like podman does. Pretty handy.
Does Podman have a swarm counterpart, or does running services still effectively require configuring systemd and then switching to kubernetes for multi-machine?
Last I checked, there's no native swarm equivalent in podman. Your best bet is Nomad (much simpler than k8s if you want to spin up some local setups) or Kubernetes.
Podman can work with local pods, using the same yaml as for K8s. Not quite docker swarm, but useful for local testing IME when k8s is the eventual target.
Eh, starting with k8s just because I might want Kubernetes in five years is a hard sell, given how easy Swarm is to set up. DevOps work that does not fulfill an immediate business need should be delayed, because that labor is hella expensive.
Docker Compose is really great for multi-container deployments on a single machine. And Docker Swarm takes that same Compose specification (although there were historical differences) and brings it over to clusters, all while remaining similarly simple. I'm surprised that outside of Docker Swarm, Nomad, or lightweight Kubernetes distros like K3s, there haven't been that many attempts at simple clustering solutions. Even then, Kubernetes (which Podman supports) ends up being more complex.
They're probably referring to the podman.socket, which isn't quite a daemon mode but lets it emulate one pretty well. Unless there is some daemon mode I missed that got added, but I'd be rather surprised at that.
In places where you're doing a `dnf install podman` all you typically need to do is start the service and then point either the podman cli or docker cli directly at it. In Fedora for example it's podman.service.
I honestly prefer using the official docker cli when talking to podman.
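A sketch of that rootless setup:

    # expose the podman API on the user socket, then point the docker CLI at it
    systemctl --user enable --now podman.socket
    export DOCKER_HOST="unix://$XDG_RUNTIME_DIR/podman/podman.sock"
    docker info   # the official docker CLI, now served by podman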
To mirror some of the other comments here: I've had decent success in using podman for my local dev setup (postgres, redis, ...).
I did run into one issue, though. Rootless mode isn't supported (or at least isn't easy to set up) when the user account is a member of an Active Directory domain (or whatever Linux equivalent my work laptop is running).
Though root mode works, I can't use podman desktop and I have to sudo every command.
What's the podman UX/story on Windows if anyone is using it? Say for Server 2022 (prod) and Win 11 Pro (dev).
Does one prefer using WSL2 or Hyper-V as the machine provider? From what I understand, podman provides the container engine natively, so nothing additional is required. Do container runtimes like containerd only come into play when using Kubernetes? Not a Windows-specific question, but is there a reason to pick a podman-native container vs one in a k8s context? I understand podman supports k8s as well. Other info: no current containers (Docker or k8s) are in play.
I've not looked into podman but this reminded me that I miss rkt. Anyone with experience in rkt and podman able to give me an overview of how they currently differ? I'm not a huge fan of how docker works, so I'd love an alternative.
I went from rkt to podman. Podman is compatible with Docker, including the socket/API, but is similar to rkt in that it launches the container as a child when ran directly (versus Docker, which runs all containers and storage operations under the daemon). Podman also has integration with systemd[1] though it mostly just generates boilerplate for you, since it works a lot closer to how actual daemons work. (P.S.: You might want `--new` if you want a new container each time the unit starts.)
Podman also supports running in "rootless" mode, using kernel.unprivileged_userns_clone and subuid/subgids for the sandboxing and slirp4netns for the networking. This obviously isn't exactly the same as rootful networking, but it works well enough for 99% of the use cases.
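For reference, a sketch of that boilerplate generation; the container name is made up:

    # emit a unit file that recreates the container on each start (--new)
    podman generate systemd --new --files --name mycontainer
    mkdir -p ~/.config/systemd/user
    mv container-mycontainer.service ~/.config/systemd/user/
    systemctl --user daemon-reload
    systemctl --user enable --now container-mycontainer.service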
If you are running Linux, I think using Podman instead of Docker is generally a no-brainer. I think the way they've approached rootless support is a lot better than Docker and things "just work" more often than not.
They offer packages, but if you're on a point-release distro you'll want to build it from source.
On my Debian box, I build the podman release target in a chroot, extract the archive into /opt/, and use stow to install/uninstall the package. You'll also want the latest crun, which I likewise place under /opt/ and install with stow.
The script does almost all of the things required for existing Docker containers (migrating networks, volumes, restart mechanisms, etc.). That leaves out just one thing: migrating any other third-party scripts utilizing Docker over to Podman-based instructions.
That would greatly improve the experience. Good luck!
If you are using docker in this carefully assembled stateful way, you are doing it wrong. You should be using docker via scripts and IaC tooling that will assert your desired setup from some kind of configuration. Meaning, you should be able to easily blow all of that away and recreate it with a single script. Likewise, a transition to podman should involve adjusting your scripts to re-assert that plan against podman instead of docker.
This is a cool tool for the decrepit hand-configured server with no documentation that has been running since 2017 untouched and needs an update... but I would encourage you to not fall into this trap to begin with.
programmers ("developers," if you prefer) have trouble with "second order" thinking. we integrate X technology in Y way, maybe with some Z optimization, and that'll solve the problem.
okay, but, like... will it?
is there new maintenance stuff you've completely ignored? (I've noticed this is more common when maintenance is someone else's job.) is it completely new and none of us know about it so we get blindsided unless everything goes exactly right every time? do we get visibility into what it's doing? can we fix it when (not if, when) it breaks? can everyone work on it or is it impossible for anyone but the person who set it up? they're good at thinking up things that should fix the problem but less good at things that will.
I'm a fan of cross-functional feature teams because others in the software engineering ecosystem like QA, systems people, ops, etc. tend not to have this problem. programmers are accountable to the other stakeholders up front, bad ideas are handled in real time, and- this is the most important part- everyone learns. (I won't say all systems people are cantankerous bastards... but the mistakes they've harangued me for are usually the mistakes I don't make twice.)
Runs on demand, doesn't require root, can be nested, usually uses newer and simpler primitives (e.g. a few nftables rules in Podman vs iptables spaghetti in Docker). In my experience it is ~90% compatible with Docker. The author explains the practical differences in the blog post https://www.edu4rdshl.dev/posts/from-docker-to-podman-full-m...
It is usually easier to install: most distros ship a relatively recent version of Podman, while Docker is split between docker.io (ancient), docker-ce (free but not in the distro repos), and docker-ee.
Not everything is rosy; some tools expect to be talking to real Docker and don't get fooled by `ln -s podman docker`. But it is almost there.
In general, if one is happy to run very old versions of software, Debian can be your daily driver. If not, you are in for pain, in my experience. (That is also why Ubuntu as the default Linux is a tragedy; old bugs and missing features mean that officially supporting Linux is not really attractive for vendors.)
I've not experienced something on this scale for many years; "Debian stable packages are so outdated" is mostly a meme. Debian 12 was one year old when I did this, and very often you can relatively easily find a backport or build one. But I think in this case it was either glibc or the kernel, which is why "just run upstream" didn't work.
What’s the point of using a distribution if you need to find back ports or build your own? Distros are, after all, mostly collections of installable software.
The point is that it works 95% of the time, or probably more like 98%.
If this is, e.g., a webserver and I only need my FastCGI backend built by myself, I can still have the reverse proxy, database, and every other package come from the distro.
No one said you need backports. More like: If it fits 90% and one package doesn't work, you get it from somewhere else - that doesn't invalidate the concept of a distro for me. YMMV
Honest question: wouldn't that make you more nervous, now that you've arrived at an unknown/unsupported configuration?
Boring stability is the goal, but if Debian does not fit as is, then why not find a total package that is somewhat more cutting edge but does fit together? Especially given the fact that Debian does customization to upstream, so esoteric times esoteric.
It doesn't make me nervous, because Debian has only let me down a couple of times over nearly 20 years, while, for example, Ubuntu, RHEL, and SLES have let me down dozens of times each.
Also I don't usually run "supported". I just run a system that fits my needs.
I maintain a couple of Debian servers and this is how I do it too.
Reverse proxy, DB, etc from Debian. The application server is built and deployed with nix. The Python version (and all the dependencies) that runs the application server is the tagged one in my nix flake which is the same used in the development environment.
I make sure that PostgreSQL is never upgraded past what is available in the latest Debian stable on any of the dev machines.
`podman run --entrypoint="" --rm -it debian:stable /bin/bash`
In most instances you can just alias docker to podman and carry on. It uses OCI-formatted images just like Docker, and uses the same registry infrastructure that Docker uses.
I would think the Docker infrastructure is financed by Docker Inc as a marketing tool for their paid services? Are they OK with other software utilizing it?
On my system it asks between a few different public registries, and dockerhub/docker.io is one of the choices.
It's all public infrastructure for hosting container images; I don't think Docker-the-company minds other software interfacing with it. After all, they get to call them 'Docker images' and 'Dockerfiles' and put their branding everywhere.
Just want to say everyone should be using podman. Its architecture is way more sane and integrates with Linux on a far more basic level (a regular process that can be started via systemd, etc. instead of a root daemon. Run it as root to get privileged containers).
They've also built an incredible ecosystem around podman itself. Red Hat has been absolutely cooking with containers recently.
Systemd .container services (Quadlet) are excellent. I used them to set up multiple smaller sites without any issues. Containers work just like regular systemd services. I created a small Ansible template to demonstrate how simple yet powerful this solution is.
GH: https://github.com/Mati365/hetzner-podman-bunjs-deploy
I agree. I have been using it as a drop in docker replacement alongside podman compose via aliases for years now and I often just forget I am not using docker. The only time it bit me recently is when some scripts were looking for containers with docker specific labels and I had to figure out why they failed only for me.
What are the benefits of using Podman and not Docker?
From a technical perspective the big two are:
- Pod is significantly simpler than Docker, notably Podman doesn't need to run a background process, your containers run directly as separate processes.
- Podman avoids some long-standing security design weaknesses in Docker ("rootless"). Docker rootless _is_ a thing but has compatibility limits.
Rootless, daemonless, hardware passthrough, no Docker Inc pulling the rug, etc etc
Licensing. No root daemon.
Docker Engine is Apache 2.0, is this not a good license? Docker has a rootless mode too.
When you say "Docker Engine" that suggests other parts of Docker are licensed differently (I haven't looked into it). I'd say you have to compare the whole ecosystem and not just a single component either way.
I said "Docker Engine" specifically because it is "Docker Engine" that is the counterpart of Podman, and therefore it is the only component that matters. The discussion here is "Docker" vs "Podman," but "Docker Engine" is what we really mean when saying "Docker."
Docker Desktop has a much more restrictive license. Unfortunately, on MacOS and Windows, the "Docker Desktop" product is often referred to as simply "Docker".
FWIW, Podman has an open source alternative to Docker Desktop as well.
No root daemon got replaced with "but if you want a replacement for docker compose you ought to be using systemd (quadlets)".
Meh
Incorrect, the use of a non-root daemon is essential for isolation and security.
Im fully on board with the idea that root daemons shouldnt be necessary I just dont want systemd to become a dependency for yet again something else it shouldnt be a dependency for.
Huh, that's another uninformed take.
systemd is at it's core an app for running services, such as containers.
You should read up on podman and systemd before making up more arguments.
The point is that RedHat went on a tirade for years telling everyone: "Docker bad, root! Podman good, no root! Docker bad, daemon! Podman good, no daemon!".
And then here comes Quadlets and the systemd requirements. Irony at its finest! The reality is Podman is good software if you've locked yourself into a corner with Dan Walsh and RHEL. In that case, enjoy.
For everyone else the OSS ecosystem that is Docker actually has less licensing overhead and restrictions, in the long run, than dealing with IBM/RedHat. IMO that is.
You can run Quadlets under the systemd user session just as well.
But...you don't need systemd or Quadlets to run Podman, it's just convenient. You can also use podman-compose (I personally don't, but a coworker does and it's reasonable).
But yeah I already use a distro with systemd (most folks do, I think), so for me, using Podman with systemd doesn't add a root daemon, it reuses an existing one (again, for most Linux distros/users).
Exactly my point.
Today I can run docker rootless and in that case can leverage compose in the same manner. Is it the default? No, you've got me there.
SystemD runs as root. It's just ironic given all the hand waving over the years. And Docker, and all it's tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation which is the selling point.
I've used Podman. It's fine. But the arguments of the past aren't as sharp as they originally were. I believe Docker improved because of Podman, so there's that. But to discount the reality of the doublespeak by paid for representatives from RedHat/IBM is, again, ironic.
> And Docker, and all it's tooling, are so ubiquitous and well thought out that Podman and friends are literally a reimplementation which is the selling point
I would argue that Docker’s tooling is not well thought out, and that’s putting it mildly. I can name many things I do not like about is, and I struggle to find things I like about it’s tooling.
Podman copied it, which honestly makes me not love podman so much. Podman has quite poor documentation, and it doesn’t even seem to try to build actually good designs for tooling.
Curious what your point is?
> I can name many things I do not like about is, and I struggle to find things I like about it’s tooling.
Please share.
Off the top of my head:
FROM [foo]: [foo] is a reference that is generally not namespaced (ubuntu is relative to some registry, but it doesn't say which one) and it's expected to be mutable (ubuntu:latest today is not the same as ubuntu:latest tomorrow).
There are no lockfiles to pin and commit dependency versions.
Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
Mostly resulting from all of the above, build layer caching is basically a YOLO situation. I've had a build result in literally more than a year out-of-date dependencies because I built on a system that hadn't done that particular build for a while, had a layer cached (by name!), and I forgot to specify a TTL when I ran the build. But, of course, there is no correct TTL to specify.
Every lesson that anyone in the history of computing has ever learned about declarative or pure programming has been completely forgotten by the build systems.
Why on Earth does copying in data require spinning up a container?
Moving on from builds:
Containers are read-write by default, not read-only.
Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
The tooling around what constitutes a running container is, to me, rather unpleasant. I can't make a named group of services, restart them, possibly change some of the parts that make them up, and keep the same name in a pleasant manner. I can 'compose down' and 'compose up' them and hope I get a good state. Sometimes it works. And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
I'm sure I could go on.
> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.
> Why on Earth does copying in data require spinning up a container?
It doesn't.
> Containers are read-write by default, not read-only.
I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
> Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
Almost all of this is wrong.
> And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
Pretty much everything you've outlined is, as I see it, a misunderstanding of what containers aim to solve and how they're operationalized. If all of these things were true container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
>> Builds are non-reproducible by default. Every default represents worst practices, not best practices. Commands can and do access the network. Everything can mutate everything.
> I think you're conflating software build with environment builds - they are not the same and have different use cases people are after.
They're not so different. An environment is just big software. People have come up with schemes for building large environments for decades, e.g. rpmbuild, nix, Gentoo, whatever Debian's build system is called, etc. And, as far as I know, all of these have each layer explicitly declare what it is mutating; all of them track the input dependencies for each layer; and most or all of them block network access in build steps; some of them try to make layer builds explicitly reproducible. And software build systems (make, waf, npm, etc) have rather similar properties. And then there's Docker, which does none of these.
> > Containers are read-write by default, not read-only.
> I don't think you really understand containers since COW is the default. Containers are not "read-write" by default in the context of the underlying image. If you want to block writing to the file system that is trivial.
Right. The issue is that the default is wrong. In a container:
works, by default, using COW. No error. And the result is even kind of persistent -- it lasts until the "container" goes away, which can often mean "exactly until you try to update your image". And then you lose data.> > Things that are logically imports and exports do not have descriptive names. So your container doesn't expose a web service called 'API'; it exposes port 8000. And you need to remember it, and if the image changes the port, you lose, and there is no good way for the tooling to help. Similarly, volumes need to be bound to paths, and there is nothing resembling an interface definition to help get it right. And, since containers are read-write by default, typoing a mount path results in an apparently working container that loses data.
> Almost all of this is wrong.
I would really like to believe you. I would love for Docker to work better, and I tried to believe you, and I looked up best practices from the horse's mouth:
https://docs.docker.com/get-started/docker-concepts/running-...
and
https://docs.docker.com/get-started/docker-concepts/running-...
Look, in every programming language and environmnt I've ever used, even assembly, an interface has a name. If I write a function, it looks like this:
If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.At least the docs try to remind people that the whole mechanism is "insecure by default".
I even tried asking a fancy LLM how to export a port by name, and LLM (as expected) went into full obsequious mode, told me it's possible, gave me examples that don't do it, told me that Docker Compose can do it, and finally admitted the actual answer: "However, it's important to note that the OCI image specification itself (like in a Dockerfile) doesn't have a direct mechanism for naming ports."
> > And the compose files and quadlets are, of course, not really compatible with each other, nor are they compatible with Kubernetes without pulling teeth.
> What? This gets wilder as you go on. Why would you expect compose files to be "compatible" with k8s? They are two different ways to orchestrate containers.
I'd like to have some way for a developer to declare that their software can be run with the 'app' container and a 'mysql' container and you connect them like so. Or even that it's just one container image and it needs the following volumes bound in. And you could actually wire them up with different orchestration systems, and the systems could all read that metadata and help do the right thing. But no, no such metadata exists in an orchestration-system-agnostic way.
> If all of these things were true container usage, in general, wouldn't have been adopted to the point where it's as commonplace as it is today.
Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.
> They're not so different. An environment is just big software.
Containers are not a software development platform, but a platform that can be used in the build phase of software development. They are very different. Docker is not inherently a software development platform because it does not provide the tools required to write, compile, or debug code. Instead, Docker is a platform that enables packaging applications and their dependencies into lightweight, portable containers. These containers can be used in various stages of the software development lifecycle but are not the development environment themselves. This is not just "big software" - which makes absolutely no sense.
> Right. The issue is that the default is wrong. In a container: $ echo foo >the_wrong_path
Can you do incorrect things in software development? Yes. Can you do incorrect things is containers? Yes. You're doing it wrong. If you are writing to a part of the filesystem that is not mounted outside of the container, yes, you will lose your data. Everyone using containers knows this and there are plenty of ways around it. I guess in your case you just always need to export the root of the filesystem so you don't foot gun yourself? I mean c'mon man. It sounds like you'd like to live in a software bubble to protect you from yourself at this point.
> If I write an HTTP API, it has a name, like GET /name_goes_here. If I write a class or interface or trait, its methods have names. ELF files expose symbols by name. Windows IIRC has a weird old system for exporting symbols by ordinal, but it’s problematic and largely unused. But Docker images expose their APIs (ports) by number. The welcome-to-docker container has an interface called '8080'. Thanks.
You clearly don't understand Docker networking. What you're describing is the default bridge. There are other ways to use networking in Docker outside of the default. In your case, again, maybe just run your containers in "host" networking mode because, again, you're too ignorant to read and understand the documentation of why you have to deal with a port mapping in a container that's sitting behind a bridge network. Again you're making up arguments and literally have no clue what you're talking about.
> Software doesn't look like this. Consider git: it has near universal adoption, but there is a very strong consensus in the community that many of the original CLI commands are really bad.
OK? Grab a dictionary - read the definition for the word: "subjective", enjoy!
systemd runs as root yes, but services started by systemd dont unless you instruct them to.
that means your podman containers dont run as root unless you want them to.
mine runs as user services
I don't see your point. This is exactly how Docker works. Containers that are running when instantiated from the Docker daemon don't need to be run as root. But you can... Just like your containers started from SystemD (quadlet).
I run all my containers, when using Docker, as non-root. So where is the upside other than where your trust lies?
Have you used podman compose? It's shit.
When I bring this up online the answer is invariably "well use quadlets then" (i.e. systemd).
>systemd doesn't add a root daemon, it reuses an existing one
lol the same could be said of every docker container ive ever run....
Quadlets is systemd. Red hat declared it to be the recommended/blessed way of running containers. podman compose is treated like the bastard stepchild (presumably because it doesnt have systemd as a dependency).
Please try to understand the podman ecosystem before lashing out.
Podman runs on FreeBSD without systemd, so there you go.
yeah, it runs fine without systemd, until you need a docker compose substitute and then you get told to use quadlets (systemd), podman compose (neglected and broken as fuck) or docker compose (with a daemon! also not totally compatible) or even kubernetes...
Process isolation
I understand that k8s uses containerd or similar daemons to run containers. Do podman's criticisms of docker also apply to k8s?
No. K8s runs containers in a way very similar to Podman. Podman is like a middle point between the simplicity of containerd and the feature set of the Kubernetes Kubelet.
[dead]
Until it’s supported by AWS ECS it’s not relevant for me since that’s what my container builds are for.
Images built by Podman can be run by Docker and vice versa.
Protip: if you want to use Podman (or Podman Desktop) with Docker Compose compatibility, you'll have a better time installing podman-compose [1] and setting up your env like so:
Most of my initial issues transitioning to Podman were actually just Docker (and Docker Desktop) issues.Quadlets are great and Podman has a tool called podlet [2] for converting Docker Compose files to Quadlets.
I prefer using a tool like kompose [3] to turn my Docker Compose files into Kubernetes manifests. Then I can use Podman's Kubernetes integration (with some tweaks for port forwarding [4]) to replace Docker Compose altogether!
[1] https://github.com/containers/podman-compose
[2] https://github.com/containers/podlet
[3] https://github.com/kubernetes/kompose
[4] https://kompose.io/user-guide/#komposecontrollerportexpose
podman compose is really bad
Indeed, it has a lot of limitations. It's better to use docker compose with a podman socket.
How so? What problems do you have with it?
It never works on first go, constantly debugging and breaking
Missing features, lots of debugging spam which cant be turned off, doesnt properly adhere to the compose spec...
Last year I transitioned all of my personal projects to use podman. The biggest surface area was converting CI to use podman to build my docker files, but also changed out tooling to use it (like having kind use it instead of docker).
For the most part this worked without issue. The only snag I ran into was my CI provider can't use oci formatted images. Podman lets you select the format of image to build, so I was able to work around this using the `--format=docker` flag.
Same here. I migrated maybe 5-6 projects from docker to buildah and podman about 2 years ago and never looked back.
Unlike other posts I've seen around I haven't really encountered issues with CI or local handling of images - though I am using the most bare bones of CI, SourceHut. And I actually feel better about using shell scripts for building the images to a Dockerfile.
Oh hey! I have used your activity pub library, it's very nice :)
Thank you. :) I'm still working on it, dare I say it maybe even getting closer to a stable release.
That's a pretty cool migration story! I've been meaning to give podman a more serious look. The OCI image format issue is good to know about – hadn't considered that compatibility angle. I'm curious, did you notice any performance differences in your CI builds after switching?
It's been a while, so all my telemetry has since expired, but there was no meaningful difference in build time.
I was prepared to roll it all back, but I never ended up running into problems with it. It's just something that happens in the background that I don't have to think about.
Yeah, I was under the impression Docker uses OCI images these days and not their own custom definition. But I may be ill-informed.
I would love to know more details about your CI setup. I'm running all of my self-hosted services as Quadlets (which I generally really love!) and CI (using Gitea) was/is a huge pain point.
I have a simple setup on GCP. I am using Cloud Build with the companion Github app to trigger builds on branch updates.
I like it because I am deploying to GCP and storing containers in Artifact Registry. Cloud Build has good interop with those other products and Terraform, so it's pretty convenient to live with.
The pipelines themselves are pretty straightforward. Each step gets an image that it is executed in, and you can do anything you want in that step. There is some state sharing between steps, so if you build something in one step, you can use it in another.
I do a lot of self-hosting as well and settled on a git post-receive hook that sends events through https://pipe.pico.sh, plus a script that listens on that topic and builds what I need.
Are you pulling base images from Docker Hub, or do you build all images from source from scratch?
I am pulling from a few registries, but trying to move everything to a private registry.
In podman, you have to use the "full path" to work with Docker Hub, e.g. `docker.io/library/nginx`.
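For example (short name vs. fully qualified; the exact behavior depends on your registries.conf):

```sh
podman pull nginx                     # may prompt or fail, depending on registries.conf
podman pull docker.io/library/nginx   # always unambiguous
```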
Has Podman become more user friendly in recent years? I gave it a go about three or four years ago now when Docker began their commercial push (which I don't have an issue with).
This was for some hobby project, so I didn't spend a ton of time, but it definitely wasn't as set-and-forget as Docker was. I believe I had to set up a separate VM or something? This was on Linux as the host OS too. It's been a while, so apologies for the hazy memory.
Or it's very possible that I botched the entire setup. In my perfect world, it's a quick install and then `podman run`. Maybe it's time to give it another go.
Definitely more user friendly, and I love using Quadlets! For people using Flatpaks (Linux), check out the app 'Pods' as a lightweight alternative to Podman Desktop. It is still a young project, but is already a very useful way of managing your containers and pods.
As a side note, it is so _refreshing_ to see native apps popping up for Linux lately; it feels like a turning point away from the Electron-everything trend. Apps are small, start immediately, and are well integrated with the rest of the system, both functionally and visually. A few other examples of native apps: Cartero, Decibels, GitFourchette, Wike – to name a few that I'm using.
I've found it very straightforward to work with. I run the CLI on macOS to spin up ephemeral containers all the time for testing and simple tasks. Never had an issue.
In the spirit of the OP, I also run podman rootless on a home server running the usual home lab suspects with great success. I've taken to using the 'kube play' command to deploy the apps from kubernetes yaml and been pleased with the results.
It's almost a perfect drop-in replacement for Docker so I don't see why it would be any less "set-and-forget".
I only ever found one thing that didn't work with it at all - I think it was Gitlab's test docker images because they set up some VMs with Vagrant or something. Pretty niche anyway.
The one edge case I know of (and have run into) is that podman push doesn't support the --all-tags flag. They have also said they do not plan to implement it. It's annoying because that flag is useful for CI scripts (we give multiple tags to the same build), but not the end of the world either.
I could not get LocalStack to work on Podman, to my chagrin. And no, doing the "sudo touch /etc/containers/nodocker" thing didn't solve it.
edit: ah, I bet you mean the Lambda support; FWIW they do call out explicit support for Podman[1], but in my specific setup I had to switch it to use -e DOCKER_HOST=tcp://${my_vm_ip}:2375 and then $(podman system service tcp://0.0.0.0:2375) in the lima VM, because podman.sock was chowned to my macOS UID. My life experience is that engineering is filled with this kind of shit
I used https://github.com/aws-samples/aws-cloudformation-inline-pyt... to end-to-end test it
1: https://github.com/localstack/localstack/blob/v4.1.1/localst...
In fact, there is even a package "podman-docker" that will alias podman to docker, so most of your commands will usually work without modification. (Of course, there are always edge cases.)
It is not user-friendly, but it works flawlessly once you get used to it.
I stayed away from docker all these years and tried podman from scratch last year after docker failed to work for a project I was experimenting with.
Took an hour to read various articles and get things working.
One thing I liked was that it does not need sudo privileges or screw with the networking.
This is mostly solved I think. I run Podman Desktop on macOS and just aliased Docker to Podman in zshrc and it just works for me. I don’t do any local k8s or anything crazy, but it works with my compose files. I’m going to guess there’s still rough edges if you want GPU passthrough or something with complex networking, but for a server and a database running together it matches Docker itself.
Hasn't become more friendly from what I've seen. The project seems largely centered around k8s, and isn't really investing in fixing anything on the "compose" side. I did the same thing as you when Docker first started going down the more commercial path, and after dealing with random breakages for a number of years, fully switched back to Docker (for local dev work on macOS).
Podman machine is fine, but occasionally you have to fix things _in the vm_ to get your setup working. Those bugs, along with other breakages during many upgrades, plus slower performance compared to Docker, made me switch back. This is just for local dev with a web app or two and some supporting services in their own containers via compose, nothing special. Totally not worth it IMO.
The biggest difference in my (admittedly limited) experience, is that you need to start a "podman machine" before you can start running containers. This is architecturally different from Docker's use of a daemon, in ways I'm not qualified to explain in more detail.
It's an extra step, but not a painful one -- the default podman machine configuration seems to work pretty well out of the box for most things.
Honestly, for my use case (running the Supabase stack locally), it was seamless enough to switch that I'm a little surprised a bash script like this is necessary. On my Mac, I think it was simply `brew install podman` followed by `podman machine start`, and then I got back to work as if I were still using docker.
By far the most tedious part of the switch was fully uninstalling Docker, and all its various startup programs & background processes.
Podman only requires `podman machine` if you're using a non-Linux system; this sets up a Linux VM in the background that all the actual containers run on. Docker does the same thing, though I think it sets it up for you automatically.
The only snag I hit regularly is me forgetting to set :z or :Z on my podman volumes to make it play well with SELinux.
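For anyone new to this, the flag in question (`:Z` gives the bind mount a private SELinux label, `:z` a shared one):

```sh
# Without :Z, SELinux will typically deny the container access to the host dir
podman run --rm -v "$PWD/data:/data:Z" docker.io/library/alpine ls /data
```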
I used to use docker compose, but migrated to podman quadlets. The only thing I miss is being able to define every container I run in a pod in the .pod file itself. Having it integrate with systemd is great.
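For the curious, a minimal sketch of a rootless Quadlet unit (names here are made up; systemd generates myapp.service from the .container file):

```sh
# Drop a minimal unit where the user-level Quadlet generator looks for it
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/myapp.container <<'EOF'
[Container]
Image=docker.io/library/nginx:latest
PublishPort=8080:80
EOF

# Regenerate units and start the container like any other service
systemctl --user daemon-reload
systemctl --user start myapp.service
```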
On NixOS it was as trivial as `podman.enable = true;`. IIRC on Arch it was just a matter of installing the package.
It's all daemonless, rootless, and runs directly on your host kernel, so it should be as simple as an application of this kind gets. Probably you followed some instructions that involved whatever the podman equivalent of docker-machine is?
My container usage is admittedly pretty simplistic (a CRUD app with some REST services), but after initial setup I've found it to be extremely reliable and simple to use. They strive for 1:1 docker compatibility, so I think it should be pretty easy to migrate.
On the off chance it matters to anyone, brew whines that podman requires macOS 13.x due to https://github.com/containers/podman/issues/22121 but that's only for $(podman machine start) support, which relies on https://github.com/crc-org/vfkit/issues/37
If you already have colima lying around, that means you have lima and lima ships with both podman and podman-rootful templates:
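That is, something like:

```sh
# Start a Podman VM straight from lima's bundled templates
limactl start template://podman           # rootless
limactl start template://podman-rootful   # rootful
```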
Podman is interesting as well because it can run Kubernetes yamls (to a small extent) which can be handy.
With the command `podman kube play file.yaml`
Launching Kubernetes pods without a kube-apiserver. The kubelet can run in standalone mode and launch static pods as well, but I don't believe it supports deployment manifests like podman does. Pretty handy.
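A minimal sketch of that flow (file name and image are placeholders):

```sh
# A throwaway pod spec; podman kube play also understands Deployments
cat > web.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
    - name: nginx
      image: docker.io/library/nginx:latest
      ports:
        - containerPort: 80
          hostPort: 8080
EOF

podman kube play web.yaml   # tear it down later with: podman kube down web.yaml
```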
Does Podman have a swarm counterpart, or does running services still effectively require configuring systemd and then switching to kubernetes for multi-machine?
Last I checked, there's no native Swarm equivalent in podman. Your best bet is Nomad (much simpler than k8s if you want to spin up some local setups) or Kubernetes.
kubernetes
Podman can work with local pods, using the same yaml as for K8s. Not quite docker swarm, but useful for local testing IME when k8s is the eventual target.
Eh, starting with k8s just because I might want Kubernetes in five years is a hard sell, given how easy Swarm is to set up. DevOps work that doesn't fulfill an immediate business need should be delayed, because that labor is hella expensive.
It doesn't, which to me seems like a bummer.
Docker Compose is really great for multi-container deployments on a single machine. And Docker Swarm takes that same Compose specification (although there were historical differences) and brings it to clusters, all while remaining similarly simple. I'm surprised that outside of Docker Swarm, Nomad, or lightweight Kubernetes distros like K3s, there haven't been many attempts at simple clustering solutions. Even then, Kubernetes (which Podman supports) ends up being more complex.
No to the first, yes to the second. Podman has a daemon mode that works like the Docker daemon, no systemd necessary.
> Podman has a daemon mode ...
Can you provide any documentation about that?
They're probably referring to the podman.socket, which isn't quite like a daemon-mode but means it can emulate it pretty well. Unless there is some daemon mode I missed that got added, but I'd be rather surprised at that.
Yep!
https://docs.podman.io/en/latest/markdown/podman-system-serv...
In places where you're doing a `dnf install podman`, all you typically need to do is start the service and then point either the podman CLI or docker CLI directly at it. In Fedora, for example, it's podman.service.
I honestly prefer using the official docker CLI when talking to podman.
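A sketch of that mode without socket activation (the socket path here is arbitrary):

```sh
# Serve the Docker-compatible API directly; --time=0 disables the idle timeout
podman system service --time=0 unix:///tmp/podman.sock &

# The official docker CLI is none the wiser
DOCKER_HOST=unix:///tmp/podman.sock docker ps
```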
This is fine for Linux users and actual servers, but for local development on a Mac, you cannot beat OrbStack (IMO).
To mirror some of the other comments here: I've had decent success in using podman for my local dev setup (postgres, redis, ...).
I did run into one issue though: rootless mode isn't supported (or at least isn't easy to set up) when the user account is a member of an Active Directory domain (or whatever Linux equivalent my work laptop is running).
Though root mode works, I can't use podman desktop and I have to sudo every command.
Just a pro tip: "if it ain't broke, don't fix it". If you have working Dockerfile(s), do not migrate unless there is a groundbreaking need.
Podman and Buildah consume Dockerfiles perfectly fine. Have you come across a scenario where Dockerfile contents were a concern?
Security might be such a need, but that depends on how important it is for you. On top of that, Docker automatically fiddles with your firewall rules.
What's the podman UX/story on Windows if anyone is using it? Say for Server 2022 (prod) and Win 11 Pro (dev).
Does one prefer using WSL2 or Hyper-V as the machine provider? From what I understand, podman provides the container engine natively, so nothing additional is required. Do container runtimes like containerd only come into play when using Kubernetes? Not a Windows-specific question, but is there a reason to pick a podman-native container vs one in a k8s context? I understand podman supports k8s as well. Other info: no current containers (docker or k8s) are in play.
Thanks in advance.
On Windows, Rancher Desktop + Podman offers a similar experience to Docker Desktop.
I went with podman in 2020 when docker acted out last time and haven't looked back since.
I've not looked into podman but this reminded me that I miss rkt. Anyone with experience in rkt and podman able to give me an overview of how they currently differ? I'm not a huge fan of how docker works, so I'd love an alternative.
I went from rkt to Podman. Podman is compatible with Docker, including the socket/API, but is similar to rkt in that it launches the container as a child process when run directly (versus Docker, which runs all containers and storage operations under the daemon). Podman also has integration with systemd[1], though it mostly just generates boilerplate for you, since it works a lot closer to how actual daemons work. (P.S.: You might want `--new` if you want a new container each time the unit starts.)
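For example, a sketch of that generator flow (the container name is hypothetical; `podman generate systemd` has since been deprecated in favor of Quadlets, but still works):

```sh
# Emit a unit that recreates the (already-created) container on each start
podman generate systemd --new --files --name mycontainer

# Install and enable it as a user service
mv container-mycontainer.service ~/.config/systemd/user/
systemctl --user daemon-reload
systemctl --user enable --now container-mycontainer.service
```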
Podman also supports running in "rootless" mode, using kernel.unprivileged_userns_clone and subuid/subgids for the sandboxing and slirp4netns for the networking. This obviously isn't exactly the same as rootful networking, but it works well enough for 99% of the use cases.
If you are running Linux, I think using Podman instead of Docker is generally a no-brainer. I think the way they've approached rootless support is a lot better than Docker and things "just work" more often than not.
[1]: https://docs.podman.io/en/latest/markdown/podman-generate-sy...
Podman looks cool, is there any equivalent of Watchtower (https://containrrr.dev/watchtower/) for Podman?
It's built in: https://docs.podman.io/en/latest/markdown/podman-auto-update...
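Roughly how it works (a sketch; auto-update only acts on containers managed by systemd units, e.g. Quadlets):

```sh
# Opt the container in by label when creating it
podman run -d --name web \
  --label io.containers.autoupdate=registry \
  docker.io/library/nginx:latest

# Pull newer images and restart the corresponding systemd services
podman auto-update            # add --dry-run to preview first
```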
GitHub issue for podman support, here: https://github.com/containrrr/watchtower/issues/1060
What about Singularity?
And while we’re at it, what’s your favorite non-sudo Docker alternative? And why?
…or Apptainer?
Only thing I really miss is good Podman support in Skaffold.
I can't use podman until they start releasing up-to-date packages for all the systems I use.
Have they started releasing packages yet?
They offer packages but if you're on a point release distro you'll want to build it from source.
On my Debian box, I build the podman release target in a chroot, extract the archive into /opt/, and use stow to install/uninstall the package. You'll also want the latest crun, which I likewise manage and install with stow.
Not enough data provided. Packages exist for systems.
What if I'm using docker-compose?
Use podman-compose?
https://docs.podman.io/en/latest/markdown/podman-compose.1.h...
https://github.com/containers/podman-compose
Or just use docker compose with podman. There's a section about that in the Arch wiki: https://wiki.archlinux.org/title/Podman
Enable the Podman socket and have it alias the Docker socket.
Shameless plug: Alternatively, if you are on NixOS, you can just use compose2nix.
https://github.com/aksiksi/compose2nix
I actually am, but want to keep configs more distro agnostic.
Your Docker Compose file remains the source of truth, so there is no “lock-in”. That’s the beauty of config generation :)
Good to know!
There are options, but my experience has not exactly been positive.
You can continue using `docker-compose` (the provider) with `podman compose`. It's the default provider if installed.
Just read the source code.
The script does almost everything required for existing Docker containers - migrating networks, storage, restart mechanisms, etc. That leaves out just one thing: migrating any third-party scripts that drive Docker over to Podman-based instructions. That would greatly improve the experience. Good luck.
If you are using Docker in this carefully assembled stateful way, you are doing it wrong. You should be driving Docker via scripts and IaC tooling that asserts your desired setup from some kind of configuration, meaning you should be able to easily blow all of it away and recreate it with a single script. Likewise, a transition to Podman should involve adjusting your scripts to re-assert that plan against Podman instead of Docker.
This is a cool tool for the decrepit hand-configured server with no documentation that has been running since 2017 untouched and needs an update... but I would encourage you to not fall into this trap to begin with.
Yeah, in theory. In practice that never happens for anything other than a Vercel app.
Why do people consistently like to make their lives harder in software engineering?
programmers ("developers," if you prefer) have trouble with "second order" thinking. we integrate X technology in Y way, maybe with some Z optimization, and that'll solve the problem.
okay, but, like... will it?
is there new maintenance stuff you've completely ignored? (I've noticed this is more common when maintenance is someone else's job.) is it completely new and none of us know about it so we get blindsided unless everything goes exactly right every time? do we get visibility into what it's doing? can we fix it when (not if, when) it breaks? can everyone work on it or is it impossible for anyone but the person who set it up? they're good at thinking up things that should fix the problem but less good at things that will.
I'm a fan of cross-functional feature teams, because others in the software engineering ecosystem like QA, systems people, ops, etc. tend not to have this problem. programmers are accountable to the other stakeholders up front, bad ideas are handled in real time, and - this is the most important part - everyone learns. (I won't say all systems people are cantankerous bastards... but the mistakes they've harangued me for are usually the mistakes I don't make twice.)
I never tried Podman. I guess the benefit is that it runs on demand and not as an always-on daemon?
How does one install podman on Debian and how does one get a Debian image to run inside podman?
Runs on demand, doesn't require root, can be nested, usually uses newer and simpler primitives (e.g. a few nftables rules in Podman vs iptables spaghetti in Docker). In my experience it is ~90% compatible with Docker. The author explains the practical differences in the blog post https://www.edu4rdshl.dev/posts/from-docker-to-podman-full-m...
It is usually easier to install - most distros ship a relatively recent version of Podman, while Docker is split between docker.io (ancient), docker-ce (free but not in the repos), and docker-ee.
Not everything is rosy; some tools expect to be talking to real Docker and don't get fooled by `ln -s podman docker`. But it is almost there.
Regarding Debian, just `sudo apt install podman && podman run -it debian` - see https://wiki.debian.org/Podman
Careful, the version in Debian 12 is old and apparently just barely predates the "good" versions.
I had so many problems that I went back to Docker, because current Podman didn't seem to be trivially installable on Debian 12.
In general, if one is happy to run very old versions of software, Debian can be your daily driver. If not, you are in for pain, in my experience. (That is also why Ubuntu as the default Linux is a tragedy: old bugs and missing features mean that officially supporting Linux is not really attractive for vendors.)
I've not experienced something on this scale for many years; "Debian stable packages are so outdated" is mostly a meme. Debian 12 was a year old when I did this, and very often you can relatively easily find a backport or build one - but I think in this case it was either glibc or the kernel, which is why "just run upstream" didn't work.
What's the point of using a distribution if you need to find backports or build your own? Distros are, after all, mostly collections of installable software.
The point is that it works 95% of the time, or probably more like 98%.
If this is, e.g., a webserver and I only need my FastCGI backend built by myself, I can still have the reverse proxy, database, and every other package handled by the distro.
No one said you need backports. More like: If it fits 90% and one package doesn't work, you get it from somewhere else - that doesn't invalidate the concept of a distro for me. YMMV
Honest question: wouldn't that make you more nervous that you've now arrived at an unknown/unsupported configuration?
Boring stability is the goal, but if Debian does not fit as-is, then why not find a total package that is somewhat more cutting edge but does fit together? Especially given that Debian customizes upstream, so it's esoteric times esoteric.
It doesn't make me nervous, because Debian has only let me down a couple of times over nearly 20 years, while, for example, Ubuntu, RHEL, and SLES have let me down dozens of times each.
Also I don't usually run "supported". I just run a system that fits my needs.
Thanks for following up. Yeah, I should rather have said "tested/vetted".
I maintain a couple of Debian servers and this is how I do it too.
Reverse proxy, DB, etc from Debian. The application server is built and deployed with nix. The Python version (and all the dependencies) that runs the application server is the tagged one in my nix flake which is the same used in the development environment.
I make sure that PostgreSQL is never upgraded past what is available in the latest Debian stable on any of the dev machines.
I did not have this same experience; all my VPSes successfully run Debian's podman package with zero issues running containers.
Glad to hear. When I brought it up somewhere, I got exactly the "oh, you're running 4.x - we also had that problem, but 5 works fine" response.
1) Podman is available in the default Debian repos: https://packages.debian.org/bookworm/podman
2) `podman run --entrypoint="" --rm -it debian:stable /bin/bash`
In most instances you can just alias docker to podman and carry on. It uses OCI-formatted images just like docker and uses the same registry infrastructure that docker uses.
Installing `podman-docker` will do the aliasing for you.
Where does it pull the Debian image from?
I would think the Docker infrastructure is financed by Docker Inc. as a marketing tool for their paid services? Are they OK with other software utilizing it?
By default it uses whatever is in registries.conf for unqualified-search-registries. You can specify the fully qualified image name if you'd like.
I can't speak to what Docker Inc. is okay with or not.
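For reference, the knob mentioned above (a sketch; the registry list varies by distro):

```sh
# Short-name resolution is controlled by registries.conf
grep unqualified-search-registries /etc/containers/registries.conf
# e.g. unqualified-search-registries = ["registry.fedoraproject.org", "docker.io"]
```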
On my system it asks between a few different public registries, and dockerhub/docker.io is one of the choices.
It's all public infrastructure for hosting container images; I don't think Docker-the-company minds other software interfacing with it. After all, they get to call them "Docker images" and "Dockerfiles" and put their branding everywhere.
> I guess the benefit is that it runs on demand and not as a always on demon?
Podman has much better systemd integration: https://www.redhat.com/en/blog/quadlet-podman
And you can use systemd to be their supervisor via quadlet: https://www.redhat.com/en/blog/quadlet-podman
apt install podman
podman run -it debian bash