When you use a computing device (a laptop, a PC, or a tablet) for personal use, after some years you will realise that the disk is full and that much of the space is occupied by duplicate files (the same file stored in several different locations).
For example, you might have a favourite music file in your "My Favourite" folder as well as in the "Album" folder. Finding such duplicates manually is a huge task, and it is even harder when the file names differ.
There are lots of free utilities that do this automatically, but if you are a programmer, you will always prefer to do it yourself.
Here are the steps we are going to follow. This is written for a Linux (Ubuntu) system; on Windows you might need to adjust the paths to match its conventions.
- Get the SHA1 of every file, recursively, in a given directory
- Compare each file's SHA1 with those of the other files
- Remove the duplicate files
Using the CPAN module Digest::SHA1, we can get the SHA1 of a file's data as follows:
use Digest::SHA1 'sha1_hex';
use File::Slurp;

my $fdata = read_file($file);
my $hash  = sha1_hex($fdata);
In the above code I used the read_file function, which is provided by the File::Slurp module.
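For binary files such as music or images it is safer to slurp the raw bytes, so that no I/O layer alters the data before hashing. A minimal sketch, using File::Slurp's binmode option:

use strict;
use warnings;
use File::Slurp;
use Digest::SHA1 'sha1_hex';

my $file = shift @ARGV;    # path given on the command line

# ':raw' slurps the exact bytes on disk, so the SHA1 reflects the file
# content and not any line-ending or encoding translation.
my $fdata = read_file($file, binmode => ':raw');
my $hash  = sha1_hex($fdata);

print "$file => $hash\n";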
The next step is to compute the SHA1 of every file in a directory, recursively. There are many modules available on www.cpan.org for walking a directory tree, but my favourite is always the File::Find module, which works much like the Unix find command.
use File::Find;
use File::Slurp;
use Digest::SHA1 'sha1_hex';

my $dir = "./";

# Calls the process_file subroutine for every entry under $dir
find({ wanted => \&process_file, no_chdir => 1 }, $dir);

sub process_file {
    my $file = $_;
    print "Taking file $file\r\n";
    if( -f $file and $file ne '.' and $file ne '..' ){
        my $fdata = read_file($file);
        my $hash  = sha1_hex($fdata);
    }
}

Finding the duplicates
Our next step is to find the duplicates based on the SHA1 values computed above. I am going to use a hash ref with the SHA1 value as the key and an array ref holding the list of file paths as the value. Once we have processed all the files, we can easily get the list of duplicates by just checking the length of each array.
use File::Find;
use File::Slurp;
use Digest::SHA1 'sha1_hex';

my $dir = "./";
my $file_list;

# Calls the process_file subroutine for every entry under $dir
find({ wanted => \&process_file, no_chdir => 1 }, $dir);

sub process_file {
    my $file = $_;
    print "Taking file $file\r\n";
    if( -f $file and $file ne '.' and $file ne '..' ){
        my $fdata = read_file($file);
        my $hash  = sha1_hex($fdata);
        push(@{$file_list->{$hash}}, $file);
    }
}
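To make the "length of the array" idea concrete, here is a small sketch (it assumes %$file_list has just been filled by the code above) that prints every group of duplicates:

# Any SHA1 key holding more than one path is a group of duplicate files.
foreach my $hash (keys %{$file_list}) {
    my @paths = @{ $file_list->{$hash} };
    next unless @paths > 1;          # skip files that occur only once
    print "Duplicates for $hash:\n";
    print "    $_\n" for @paths;
}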
Removing the duplicates
Now we have the list of duplicate files. The only thing left is to remove those files, keeping just one copy of each. Perl has a built-in function called unlink which removes a file from disk.
unlink "$file"
Now combine everything, add some print statements and a few options, and you will have a nice utility script to remove duplicate files.
#!/usr/bin/perl
use strict;
use warnings;
use File::Find;
use File::Slurp;
use Digest::SHA1 'sha1_hex';

my $dir            = shift || './';   # directory to scan (defaults to current)
my $count          = 0;
my $file_list      = {};
my $dup_file_count = 0;
my $removed_count  = 0;

find({ wanted => \&process_file, no_chdir => 1 }, $dir);

foreach my $sha_hash (keys %{$file_list}){
    if(scalar(@{$file_list->{$sha_hash}}) > 1){
        # Number of duplicate files in this group (every copy except one)
        $dup_file_count = $dup_file_count + scalar(@{$file_list->{$sha_hash}}) - 1;
        my $first_file = 1;
        foreach my $file (@{$file_list->{$sha_hash}}){
            # Keep the first file, delete the rest
            if($first_file){
                $first_file = 0;
                next;
            }
            if((unlink $file) == 1){
                print "REMOVED: $file\n";
                $removed_count = $removed_count + 1;
            }
        }
    }
}

print "********************************************************\n";
print "$count files traced\n";
print "$dup_file_count duplicate files found\n";
print "$removed_count duplicate files removed\n";
print "********************************************************\n";

# Hashes every regular file and groups the paths by SHA1 value
sub process_file {
    my $file = $_;
    if( -f $file and $file ne '.' and $file ne '..'){
        my $fdata = read_file($file);
        my $hash  = sha1_hex($fdata);
        push(@{$file_list->{$hash}}, $file);
        $count = $count + 1;
        local $| = 1;
        print "Processing file: $count\r";
    }
}
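If you save the script as, say, remove_dups.pl (the file name here is just an example), you can point it at any directory:

perl remove_dups.pl /path/to/music

When no argument is given it falls back to the current directory, because of the shift || './' line at the top.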
The above code removes any duplicate files in a given directory based on the SHA1 of their data. Keep in mind that audio or video files downloaded from different sources may have different SHA1 values even when they sound or look the same. This script removes only byte-for-byte identical files; it has no intelligence for recognising the "same" video, audio, or image. A human can tell at a glance that two images show the same thing, but to the computer they are different files once any property changes, for example if one copy has been compressed or resized.