112

On a virtualized server running Ubuntu 10.04, df reports the following:

# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda1             7.4G  7.0G     0 100% /
none                  498M  160K  498M   1% /dev
none                  500M     0  500M   0% /dev/shm
none                  500M   92K  500M   1% /var/run
none                  500M     0  500M   0% /var/lock
none                  500M     0  500M   0% /lib/init/rw
/dev/sda3             917G  305G  566G  36% /home

This is puzzling me for two reasons: 1) df says that /dev/sda1, mounted at /, has a 7.4 gigabyte capacity of which only 7.0 gigabytes are in use, yet it reports / as 100 percent full; and 2) I can create files on /, so it clearly does have space left.

Possibly relevant is that the directory /www is a symbolic link to /home/www, which is on a different partition (/dev/sda3, mounted at /home).

Can anyone offer suggestions on what might be going on here? The server appears to be working without issue, but I want to make sure there's not a problem with the partition table, file systems or something else which might result in implosion (or explosion) later.

3
  • Thanks to all for the helpful answers. I can't create files as a normal user so it does appear that it's the 5 percent buffer that's preventing catastrophe. Now I just need to figure out why the disk is full (I'm a bit worried something malicious could be going on because none of the log files is taking up that much space and there's not much software installed, just a simple LAMP server)...
    – Chris
    Commented Sep 25, 2011 at 9:16
  • 3
    First place I'd look is /tmp. Another possibility is that you have a deleted file that a running program is holding on to. I think you can run 'lsof | grep deleted' as root to find those.
    – Scott
    Commented Sep 29, 2011 at 17:53
  • Important information is missing: What type of filesystem is it, and what are the mount options?
    – U. Windl
    Commented Aug 28, 2023 at 21:51

13 Answers

166

It's possible that a process has opened a large file which has since been deleted. You'll have to kill that process to free up the space. You may be able to identify the process by using lsof. On Linux, deleted but still-open files are known to lsof and are marked as (deleted) in its output.

You can check this with sudo lsof +L1
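
For example, a minimal sketch of the workflow; the grep pattern and the service name are illustrative assumptions, not part of the original answer:

sudo lsof +L1                     # open files whose link count is 0, i.e. deleted but still held open
sudo lsof +L1 | grep -i deleted   # narrow the list to entries lsof marks as (deleted)
sudo service apache2 restart      # hypothetical: restart (or kill) whichever process the PID column names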

4
  • 14
It solved the mystery for me. I removed a large log file from uwsgi without restarting the service. When I queried df -ah, I got disk full, but du -sh / told me I should have free space. After restarting uwsgi I got a lot of free space! Commented Oct 15, 2014 at 16:03
  • 1
    I had 40G worth of logs stuck in limbo and lsof +L1 gave me the x-ray vision to see what happened ;-) All I had to do was restart the service.
    – Jay Brunet
    Commented May 7, 2018 at 0:18
I've noticed this happens frequently because of custom logging added to a given service's config in combination with logrotate. For instance, I use Unbound (under Ubuntu). Logging defaults to syslog, but I have mine use its own log file, which is rotated via logrotate. In Unbound's case, logrotate needs unbound-control log_reopen in the logrotate file so it releases the old (deleted) open log. You could also opt to simply restart the service. ref: lists.nlnetlabs.nl/pipermail/unbound-users/2019-July/…
    – B. Shea
    Commented Jun 10, 2020 at 15:43
I have the same issue but no space in the home directory to install lsof; what can I do? I've rebooted the system, but it still shows that the disk is full when it isn't. I recently deleted a big log file and was struggling with logrotate.
    – M.mhr
    Commented Oct 25, 2023 at 22:06
62

By default, 5% of the filesystem is reserved to prevent serious problems when the filesystem fills up. Your filesystem is full. Nothing catastrophic is happening because of the 5% buffer -- root is permitted to use that safety buffer and, in your setup, non-root users have no reason to write into that filesystem.

If you have daemons that run as a non-root user but that need to manage files in that filesystem, things will break. One common such daemon is named. Another is ntpd.
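
A quick way to confirm that the missing space is this reserved safety margin, assuming an ext filesystem and the device name from the question's df output:

sudo tune2fs -l /dev/sda1 | grep -i 'reserved block count'   # non-zero means blocks are held back for root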

1
  • 1
To the question of WHY your disk is full: 7G really isn't that much space. You also appear to have everything dumped under one partition/filesystem (/). This is generally considered to be a Bad Thing (because if something goes haywire, / fills up and the world ends), but Linux distributions still persist in doing it because it's "simpler". I'd start by looking in /var (esp. /var/log) for huge logfiles. du -sh /* (as root) will help you find the biggest directories and possibly point you at what needs cleaning up.
    – voretaq7
    Commented Sep 27, 2011 at 20:14
46

You may be out of inodes. Check inode usage with this command:

df -i
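
If that shows IUse% at 100%, a rough sketch for finding which directories hold the most files (the starting path and the -xdev flag are assumptions):

sudo find / -xdev -printf '%h\n' | sort | uniq -c | sort -rn | head   # directories containing the most files on the root filesystem
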
29

Most Linux filesystems (ext3, ext4) reserve 5% of the space for use only by the root user.

You can see this with, e.g.:

dumpe2fs /dev/sda1 | grep -i reserved

You can change the reserved amount using:

tune2fs -m 0 /dev/sda1

The 0 in this command is a percentage of the disk size, so you may want to leave at least 1%.

In most cases the server will appear to continue working fine - assuming all processes are being run as 'root'.
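
For example, to keep a 1% reserve rather than removing it entirely (device name taken from the question):

tune2fs -m 1 /dev/sda1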

1
  • Thanks a lot, this explained why I couldn't use the remaining 50 GB of my 1 TB external drive. Commented Sep 27, 2021 at 18:46
17

In addition to the causes already suggested, in some cases it could also be the following:

  • a different disk is mounted "over" an existing directory that is full of data
  • du only sees what is reachable beneath the mounted disk, while df shows what is really allocated on the underlying filesystem
  • solution: (when possible) unmount all non-root disks and check the sizes with du -md 1 again, then fix the situation by moving the hidden folder somewhere else or mounting the disk at a different place (see the sketch below)
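
A minimal sketch of that check, assuming /home is the non-root mount in question:

umount /home   # only when nothing is using it
du -md 1 /     # top-level directory sizes, now including anything hidden under the mount point
mount /home    # remount when done
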
2
  • how do you find mount points other than df?
    – Hogan
    Commented Sep 4, 2015 at 13:48
  • 1
    @Hogan: maybe calling "mount" or "cat /etc/fstab" would help? Commented Sep 7, 2015 at 10:32
13

I had this problem and was baffled by the fact that deleting various large files did not improve the situation (I didn't know about the 5% buffer). Anyway, following some clues here:

From the root I walked down into the largest directories, revealed by repeatedly running

du -sh */ 

until I came to a directory of web-server log files which had some absolutely massive logs,

which I truncated with

:>lighttpd.error.log

suddenly df -h was down to 48% used!
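
A sketch of that walk with sort added so the biggest directories stand out (the starting directory is just an example; sort -h needs a reasonably recent GNU coreutils):

cd /
du -sh */ 2>/dev/null | sort -h   # largest directories sort to the bottom
cd var/log                        # descend into the biggest one and repeat
du -sh */ 2>/dev/null | sort -h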

2
  • 20
    That should really end with "... then I set up log rotation."
    – hayalci
    Commented Dec 3, 2012 at 17:02
@hayalci: I found that the log rotation was pointing to the wrong directory.
    – zzapper
    Commented Dec 18, 2012 at 10:08
9

df -h rounds the values; even the percentages are rounded. Omit the -h and you will see finer-grained differences.

Oh, and ext3 and its derivatives reserve a percentage of the filesystem (5% by default) for exactly this problematic situation: if your root filesystem were really full (0 bytes remaining), you could not boot the system. The reserved portion prevents this.
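
To see the unrounded numbers, and the other way a filesystem can be "full":

df /       # exact 1K-block counts, no rounding
df -B1 /   # the same figures in bytes
df -i /    # inode usage, which the question's output doesn't show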

2
Could also be that he's run out of free inodes. Run 'df -i' to get inode usage. Commented Sep 24, 2011 at 21:52
  • He didn't provide information that the disk is full. He only thinks that the disk is full. 100% used space without error is only "virtually full".
    – mailq
    Commented Sep 24, 2011 at 22:32
4

If you are running out of space on /dev/shm and wondering why, given that the actual used space (du -shc /dev/shm) is much smaller than /dev/shm's allotted size, lsof can help:

$ sudo lsof -s +L1 | awk '{print $7" "$2" "$3" "$10}' | grep 'dev/shm' | grep "^[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]" 
7931428864 1133806 roel /dev/shm/1600335920/subreducer/2/data/ibtmp1
12710576128 1133806 roel /dev/shm/1600335920/subreducer/2/tmp/#sql-temptable-114cee-8-e18.MAD
4173332480 1352445 roel /dev/shm/1600335920/subreducer/1/data/ibtmp1
13040484352 1352445 roel /dev/shm/1600335920/subreducer/1/tmp/#sql-temptable-14a2fd-8-eb3.MAD
9670602752 2298724 roel /dev/shm/1600338626/subreducer/2/tmp/#sql-temptable-231364-8-d2e.MAD

The first file is consuming ~7.9GB, the second about 12.7GB, etc. The regex picks up anything 1GB and over; you can tune it as needed. The cause could be that an otherwise dead process is holding on to a file. df -h will not show the issue:

Filesystem      Size  Used Avail Use% Mounted on
tmpfs            90G   90G  508K 100% /dev/shm

508K, yet...

$ du -shc | grep total
46G total

You can see the 90G <> 46G offset. It's in the files above.

Then, just kill the PID (kill -9 PID) listed in the second column of the output above.

$ kill -9 1133806

Result:

Filesystem      Size  Used Avail Use% Mounted on
tmpfs            90G   72G   19G  80% /dev/shm

Great, space cleared.

The reason for doing things this way, and not just something like sudo lsof +L1 | grep '(deleted)' | grep 'dev/shm' | awk '{print $2}' | sudo xargs kill -9, is that the underlying process(es) may still be working. If you're confident they are not, that command is a potential alternative depending on your scenario; it will kill all processes which have 'deleted' files open.

1
  • An optimized command: sudo lsof -s +L1 | awk '{print $7" "$2" "$3" "$10}' | grep 'dev/shm' | grep -E "^[0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]|^[3-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9][0-9]" | awk '{print $2}' | xargs kill -9 2>/dev/null: kills all processes which have dead files > 300MB open Commented Sep 18, 2020 at 6:10
3

I did a big update of several libraries, and there were a lot of unnecessary libraries and temporary files left over, so I freed space in the "/" folder using:

apt-get install -f
sudo apt-get clean

And empty your trash.
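
A related cleanup, not mentioned in the original answer, that removes packages pulled in as dependencies and no longer needed:

sudo apt-get autoremove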

1
  • 2
    This is reasonable general advice on reducing disk usage, but it doesn't address the question about why df says the disk is full when it's not. Commented Feb 6, 2018 at 14:51
1

Check /lost+found. I had a system (CentOS 7) where some files in /lost+found ate up all the space.
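
A quick way to check whether that is happening (run as root, since /lost+found is usually not world-readable):

du -sh /lost+found   # total size of anything fsck has stashed there
ls -la /lost+found   # the recovered files themselves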

1

If your partition is btrfs, there may be a subvolume taking space. A btrfs filesystem can have many subvolumes, only one of which is mounted. You can use btrfs subvolume list <dir> to list all subvolumes and btrfs subvolume delete <dir>/<subvolume> to delete one. Make sure you do not delete the one that is mounted by default.
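
A sketch of that check, with a hypothetical subvolume name standing in for whatever the list actually shows:

sudo btrfs subvolume list /             # every subvolume on the filesystem, mounted or not
sudo btrfs subvolume delete /old_snap   # hypothetical name -- never delete the subvolume mounted by default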

1

As this was the first page in my Google search, I hope this helps someone out there. I know this is a very old post and that it is about the / partition rather than /boot.

First, my system is using XFS. Over the weekend, df -h showed /boot at 100%, however du -h /boot showed only 40M in use.

To resolve my issue I did:

  1. umount /boot
  2. xfs_repair /dev/sda1
  3. mount -a
  4. df -h /boot

The system now shows proper usage.

0

I had a load of web projects on a fairly small VPS drive and encountered this. I cleaned up the /backups folder and removed a ton of old node_modules/vendor folders from old projects.

node_modules is notorious for containing thousands of tiny files.

Then I rebooted and the error went away.
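
A sketch for hunting those folders down (the starting path is just an assumption):

find /var/www -type d -name node_modules -prune -exec du -sh {} +   # size of each node_modules tree, without descending into it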
