Linux: Efficiently delete large directory containing thousands of files
http://unix.stackexchange.com/questions/37329/efficiently-delete-large-directory-containing-thousands-of-files
We have an issue with a folder becoming unwieldy with hundreds of thousands of tiny files.
There are so many files that rm -rf returns an error; instead, what we need to do is something like:
find /path/to/folder -name "filenamestart*" -type f -exec rm -f {} \;
This works but is very slow and constantly fails from running out of memory.
Is there a better way to do this? Ideally I would like to remove the entire directory without caring about the contents inside it.
1. delete folder and files: find . -delete
2. delete files: find . -type f -delete
3. delete a specific directory tree: find Build16.0 -delete
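Applied to the original question, the same built-in action replaces the slow -exec invocation; a minimal sketch reusing the path and pattern from above:

# -delete is a GNU find action: no separate rm process is spawned per file
find /path/to/folder -name "filenamestart*" -type f -delete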
http://www.stevekamerman.com/2008/03/deleting-tons-of-files-in-linux-argument-list-too-long/
Quick Linux Tip:
If you’re trying to delete a very large number of files at one time (I deleted a directory with 485,000+ today), you will probably run into this error:
/bin/rm: Argument list too long
The problem is that when you type something like “rm -rf *”, the “*” is replaced with a list of every matching file, like “rm -rf file1 file2 file3 file4” and so on. There is a relatively small buffer of memory allocated to storing this list of arguments, and if it is filled up, the shell will not execute the program.
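That buffer corresponds to the kernel's ARG_MAX limit, and you can check its size on a given system:

# Maximum bytes allowed for a new process's argument list plus environment
getconf ARG_MAX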
To get around this problem, a lot of people will use the find command to find every file and pass them one-by-one to the “rm” command like this:
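The one-by-one form is presumably something like the following; each matched file spawns a separate rm process, which is exactly what makes it slow:

find . -type f -exec rm {} \;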
My problem was that I needed to delete 500,000 files, and this one-by-one approach was taking way too long.
I stumbled upon a much faster way of deleting files – the “find” command has a “-delete” flag built right in! Here’s what I ended up using:
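That is, something like the following; the -delete action handles removal inside find itself, with no external command:

find . -type f -delete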
Using this method, I was deleting files at a rate of about 2000 files/second – much faster!
You can also show the filenames as you’re deleting them:
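Placing -print before -delete makes find echo each pathname as it removes it:

find . -type f -print -delete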
…or even show how many files will be deleted, then time how long it takes to delete them:
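A plausible reconstruction: count the matches first, then wrap the deletion in time (the timing output below is from the original post):

find . -type f | wc -l
time find . -type f -delete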
real 0m3.660s
user 0m0.036s
sys 0m0.552s
