Docker Is Eating Your Disk Space (And How PruneMate Fixes It)

The Problem: Docker Is Eating Your Disk Space

Symptoms of Docker Disk Space Issues

Your Docker host is running out of space. Again.

You’ve been spinning up containers, testing new services, building images. Everything’s humming along nicely. Then suddenly: boom, your system starts throwing errors because the root filesystem is full. You check your disk usage and see Docker’s eaten up 200GB of space. Wait, what? How did this even happen?

Why Docker Accumulates Space

Here’s what’s going on:

Honestly, Docker’s trying to be helpful. Think about it: when you stop a container, what if you want to restart it later with the same data? Docker doesn’t just nuke everything. Volumes stick around even after containers are gone. Build cache hangs out to make your next build faster. Old images stay put β€œjust in case” you need them again.

This makes total sense for production. But in a home lab where you’re constantly trying new stuff? It turns into a slow pile-up of junk. That database volume from three months ago? Still there. Build cache from a project you ditched? Yep, still hanging around. Images you pulled once for curiosity and never touched again? All of it adds up.

Checking What’s Using Your Space

Let’s see what’s actually eating your space. Run this:

```shell
docker system df
```

You’ll probably see something like this:

```shell
TYPE            TOTAL   ACTIVE  SIZE     RECLAIMABLE
Images          47      12      23.5GB   15.2GB (64%)   # 15GB we could get back!
Containers      15      8       2.1GB    1.3GB (61%)
Local Volumes   89      24      45.8GB   38.2GB (83%)   # Ouch, 38GB of unused volumes
Build Cache     156     0       12.3GB   12.3GB (100%)  # 100% unused. All of it.
```

Look at that. Nearly 67GB of space just sitting there doing nothing. And here’s the kicker: if your Docker volumes live on your root filesystem (which they probably do), your entire system starts falling apart when space runs out. Databases start refusing connections. Package managers throw errors. Containers can’t write logs. It’s a mess.
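If you want that reclaimable total as a single number, a quick bit of awk over the `docker system df` output does the trick. This is my own sketch: it parses the sample report from above, and on a live host you’d pipe the real command in instead of the here-string.

```shell
# Sum the RECLAIMABLE column of a `docker system df` report.
# Sample data from above; on a real host, feed in `docker system df` directly.
df_report='Images          47     12      23.5GB  15.2GB (64%)
Containers      15     8       2.1GB   1.3GB (61%)
Local Volumes   89     24      45.8GB  38.2GB (83%)
Build Cache     156    0       12.3GB  12.3GB (100%)'

# The reclaimable figure is the GB value right before the "(NN%)" part.
total=$(printf '%s\n' "$df_report" |
  grep -o '[0-9.]*GB (' | tr -d 'GB (' |
  awk '{ sum += $1 } END { printf "%.1f", sum }')

echo "Reclaimable: ${total}GB"   # Reclaimable: 67.0GB
```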

The Risks of Manual Docker Cleanup

Built-in Docker Prune Commands

Sure, you can manually prune stuff with Docker’s built-in commands:

```shell
# These work, but they're scary to run blindly
docker image prune    # Removes unused images. Pretty safe.
docker volume prune   # Removes unused volumes. WAIT, ARE YOU SURE?
docker system prune   # Nuclear option. Add -a and you're in danger territory.
```

Why Manual Cleanup Is Risky

But here’s the thing: it always feels risky. Are those volumes actually unused? What if you delete something you needed? I made this mistake early on: I thought I’d be smart and manually deleted folders in /var/lib/docker/volumes. Big mistake. I nuked persistent data I actually cared about because I couldn’t tell what was important.

The built-in prune commands are safer than my dumb folder deletion, but they’re still pretty blunt. It’s all or nothing. Plus, if you’ve got multiple Docker hosts? Now you’re SSHing into each one, running the same commands over and over. That gets old real quick.
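There is a middle ground if you want to stay manual: Docker’s prune commands accept age filters. The sketch below is my own (not from any tool): it restricts pruning by age, deliberately skips volumes, and dry-runs by default so you can read what it would do before committing.

```shell
# Targeted pruning with age filters: gentler than a blanket
# `docker system prune`, and volumes are deliberately left out.
# Dry-run by default; set DO_PRUNE=1 to actually execute.
run() {
  echo "+ $*"                          # show the command
  [ "${DO_PRUNE:-0}" = "1" ] && "$@"   # execute only when asked
  return 0
}

run docker image prune --force --filter "until=168h"     # dangling images older than 7 days
run docker container prune --force --filter "until=24h"  # stopped containers created 24h+ ago
run docker builder prune --force --filter "until=72h"    # build cache entries older than 3 days
```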

Enter PruneMate: Actually Sensible Docker Cleanup

What Makes PruneMate Worth Using

PruneMate is an open-source tool that fixes all this. It shows you what’s eating your space, lets you pick exactly what to clean, and runs it all on a schedule so you don’t have to think about it.

What makes it worth using:

  • Visual dashboard that shows where your space is going across all your Docker hosts
  • Granular control: pick images, volumes, networks, containers, or build cache. Your choice.
  • Preview mode: see exactly what’ll get deleted before you pull the trigger
  • Scheduled cleanup jobs that run automatically while you sleep
  • Remote host support using Docker Socket Proxy (no SSH needed)
  • Notifications via Gotify, ntfy, Discord, or Telegram

Instead of SSHing around and guessing what’s safe to delete, you get one clean interface that manages everything. Way better.

Setting Up PruneMate

Just deploy the container on one of your Docker hosts with Docker Compose. Here’s all you need:

compose.yml

```yaml
services:
  prunemate:
    image: anoniemerd/prunemate:latest
    container_name: prunemate
    ports:
      - '7676:8080' # Access the web UI on port 7676
    volumes:
      # Give PruneMate access to Docker on this host
      - /var/run/docker.sock:/var/run/docker.sock
      # Keep logs and config between restarts
      - ./prunemate/logs:/var/log
      - ./prunemate/config:/config
    environment:
      - PRUNEMATE_TZ=America/New_York # Change to your timezone
      - PRUNEMATE_TIME_24H=true # Or false if you prefer AM/PM
      # Optional: Add a password to protect the interface
      # - PRUNEMATE_AUTH_USER=admin
      # - PRUNEMATE_AUTH_PASSWORD_HASH=your_hash_here
    restart: unless-stopped
```

Bring it up:

```shell
docker compose up -d
```

Now just open http://your-host:7676 in your browser and you’re good to go.
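If you want to confirm it came up without reaching for a browser, a quick curl does the job. This is my own helper, assuming the `7676:8080` port mapping from the compose file above; adjust the URL if you changed it.

```shell
# Poke the web UI after `docker compose up -d`.
check_ui() {
  url="$1"
  if curl -fsS --max-time 5 -o /dev/null "$url" 2>/dev/null; then
    echo "PruneMate UI reachable at $url"
  else
    echo "UI not reachable at $url (is the container running?)"
  fi
}

check_ui "http://localhost:7676"
```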

Managing Multiple Docker Hosts

If you want to manage other Docker hosts remotely, just set up a Docker Socket Proxy on each one. This lets PruneMate connect safely without giving it full access to the Docker socket (which would be a security nightmare):

compose.yml

```yaml
services:
  dockerproxy:
    image: ghcr.io/tecnativa/docker-socket-proxy:latest
    environment:
      # These control what PruneMate can do via the proxy
      - CONTAINERS=1 # Let it see and manage containers
      - IMAGES=1     # Let it manage images
      - NETWORKS=1   # Let it manage networks
      - VOLUMES=1    # Let it manage volumes
      - BUILD=1      # Needed for cleaning build cache
      - POST=1       # Needed for actually running prune commands
    ports:
      - '2375:2375' # Standard Docker API port
    volumes:
      # Read-only access to Docker socket. Much safer.
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
```

Deploy this on each remote host, then add them to PruneMate’s interface with their hostname and port 2375. Done. Now you’re managing your entire home lab from one place.
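Before adding a host in the UI, it’s worth checking that the proxy answers at all. The Docker Engine API replies to `GET /_ping` with the literal string `OK` when it’s healthy, so a one-liner per host tells you whether you’re wired up. The hostnames below are placeholders for your own machines.

```shell
# Ping the Docker API through the socket proxy on each remote host.
# A healthy proxy returns the body "OK" on /_ping.
check_proxy() {
  host="$1"
  reply=$(curl -fsS --max-time 5 "http://${host}:2375/_ping" 2>/dev/null) || reply="unreachable"
  echo "${host}: ${reply}"
}

for h in docker-host-1 docker-host-2; do   # placeholder hostnames
  check_proxy "$h"
done
```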

Using PruneMate Effectively

The Interface and Cleanup Options

The interface is super straightforward. You’ve got checkboxes for what to clean:

  • All unused containers
  • All unused images
  • All unused networks
  • All unused volumes
  • All build cache

By default, only unused images are checked. Smart choice: images are easy to pull again if you need them. Volumes though? Those might have data you actually care about, so PruneMate leaves them alone unless you explicitly say otherwise.
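Before ever ticking the volumes box, it helps to see what “unused” actually means on your host. This little helper (my own, and it needs docker on the PATH) lists dangling volumes, meaning volumes no container currently references, along with when each was created, so you can judge them one by one.

```shell
# List dangling volumes (attached to no container) so you can judge
# each one before any prune touches it.
list_dangling_volumes() {
  docker volume ls --quiet --filter dangling=true |
    while read -r vol; do
      docker volume inspect --format '{{ .Name }} (created {{ .CreatedAt }})' "$vol"
    done
}

# On a real host: list_dangling_volumes
```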

A Typical Cleanup Process

Here’s what a typical cleanup looks like:

  1. Check the boxes for what you want to clean
  2. Hit β€œPreview & Run”
  3. Look at what it’s about to delete (with size estimates)
  4. If it looks good, hit β€œConfirm & Execute”
  5. Get a notification when it’s done

That preview step is clutch. You’re not going in blind. You’ll see something like β€œAbout to delete 50 unused volumes and free up 38GB” before anything happens. If something looks sketchy, you can bail out.

Setting Up Automated Cleanup

For automated cleanup, I set up a schedule. Here’s mine:

  • Images and containers: Weekly cleanup
  • Build cache: Weekly (I rebuild often, so cache gets stale fast)
  • Volumes: Manual only (way too risky to automate)

This keeps everything clean without me having to remember. The notifications tell me what got cleaned up, so I stay in the loop without babysitting it.

PruneMate vs Manual Cleanup

The Trade-offs Comparison

Let’s be real about the trade-offs:

| | Manual Cleanup | PruneMate |
| --- | --- | --- |
| Visibility | Run `docker system df` on each host manually | Dashboard shows all hosts at once |
| Safety | High risk with wrong flags (`-a`, `--volumes`) | Preview before deletion, focused on unused resources |
| Granularity | All-or-nothing (especially with `system prune`) | Pick exactly what to clean |
| Remote hosts | SSH to each one individually | Manage all hosts from one interface |
| Automation | Requires cron jobs or CI/CD pipelines | Built-in scheduler with notifications |
| Learning curve | Forces you to understand Docker maintenance | Hides complexity (good and bad) |
| Time investment | High upfront to script it properly | 10 minutes to deploy and configure |
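For a sense of scale, the cron-based manual automation mentioned above looks roughly like this. A sketch only, following the same rule I use in PruneMate: volumes are deliberately excluded from any schedule.

```shell
# /etc/cron.d/docker-cleanup -- manual equivalent of a weekly schedule.
# Note: no `docker volume prune` here; volumes stay manual on purpose.
0 3 * * 0  root  docker image prune --force >> /var/log/docker-cleanup.log 2>&1
0 3 * * 0  root  docker container prune --force >> /var/log/docker-cleanup.log 2>&1
0 4 * * 0  root  docker builder prune --force >> /var/log/docker-cleanup.log 2>&1
```

It works, but you get none of the preview, dashboard, or notification conveniences, and you maintain it per host.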

Why This Matters Now

Look, manual cleanup teaches you how Docker actually works under the hood, and that’s valuable. But once you get it? There’s no point in running the same commands manually forever. PruneMate handles the tedious stuff while keeping you in the loop with notifications. Best of both worlds.

Here’s the thing. Component prices are going crazy right now. A 1TB NVMe that cost $80 last year? Try $150 now. Thanks, AI boom. If you can’t just throw money at bigger drives, you need to make better use of what you’ve got.

Docker disk space management is one of those problems you ignore until it bites you. Then your system’s already falling apart. Logs won’t write. Databases run out of room. New deployments fail because there’s no space for images. By the time you notice, you’re in crisis mode.

PruneMate keeps you ahead of this by cleaning up automatically. It’s not sexy, but it solves a real problem before it becomes your problem.

Bottom Line and Recommendation

If you’re running Docker anywhere (home lab, production, whatever), disk space maintenance isn’t optional. You can handle it manually with discipline and shell scripts. Or you can let PruneMate do it automatically while you work on stuff that actually matters.

Grab it here: PruneMate on GitHub

What about you?

How are you handling Docker cleanup? Manual commands? Custom scripts? Already using some automation tool? Let me know what’s working for you!
