============
REWRITE FILE
============

What is it?
===========

This tool reads a file, feeds its contents through a shell command or
pipeline, and writes the pipeline's output back into the same file.
This allows, for example, in-place (re)compression. With large
multi-part archives you might otherwise be unable to uncompress and
recompress the data for lack of temporary disk space.

Note that this can be dangerous, since the original file is destroyed
in the process. If the pipeline's output is bigger than the file, the
program keeps allocating memory to buffer the difference; it never
overwrites data in the file before that data has been read.

What for?
=========

In-place compression of large backup files, on systems with too little
hard drive space to keep the original file around while compressing it.

Safety?
=======

Overwriting does not start until a significant amount of data has been
read successfully from the pipeline, and the program aborts safely if
the pipeline dies before that point. It also tries very hard not to
ruin your files, for example by ignoring the HUP and INT signals.

Usage?
======

# recompress an lzo compressed file with gzip, and rename the output
# file using a sed expression
rewrite -r "s/\.lzo/\.gz/" file.lzo "lzop -d | gzip --fast"

Multi-part archives?
====================

What if you have a big compressed file that has been split into parts?
How can you recompress it while maintaining the parts, given that the
individual parts do not have valid compression headers on their own?

rewrite -v -r "s/\.Z/\.gz/" file.Z.aa file.Z.ab "zcat | gzip --fast"

will do just that! Note that when rewriting stand-alone files, you
should never invoke rewrite with more than one file at a time: OS-level
pipeline buffering would make contents "spill" over from one file into
the next.

Performance?
============

This program uses some clever buffering and takes advantage of the
fast writev() syscall.

License?
========

GNU General Public License

-fredrik sjoholm
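The core technique described above — stream a file through a pipeline
and write the output back over the same file, never overwriting a byte
that has not yet been read — can be sketched in Python. This is an
illustrative sketch only, not the actual implementation; the function
name, chunk size, and threading scheme here are assumptions:

```python
import os
import subprocess
import threading

def rewrite_in_place(path, command, chunk=64 * 1024):
    """Hypothetical sketch: feed `path` through a shell pipeline and
    write the pipeline's output back into the same file.  A byte is
    overwritten only after the original byte at that offset has been
    read; any surplus output is buffered in memory until it is safe
    to write (so output larger than the input just grows the buffer).
    """
    size = os.path.getsize(path)
    proc = subprocess.Popen(command, shell=True,
                            stdin=subprocess.PIPE, stdout=subprocess.PIPE)
    state = {'read_pos': 0}  # how far the original contents have been consumed

    def feed():
        # Reader thread: push the original file into the pipeline's stdin.
        with open(path, 'rb') as src:
            while state['read_pos'] < size:
                data = src.read(min(chunk, size - state['read_pos']))
                if not data:
                    break
                state['read_pos'] += len(data)  # safe-to-overwrite watermark
                proc.stdin.write(data)
        proc.stdin.close()  # signal EOF so the pipeline can finish

    threading.Thread(target=feed, daemon=True).start()

    pending = bytearray()  # output buffered until overwriting is safe
    write_pos = 0
    with open(path, 'r+b') as dst:
        while True:
            out = proc.stdout.read(chunk)
            if not out:
                break
            pending += out
            # Only overwrite regions the reader has already passed.
            safe = state['read_pos'] - write_pos
            if safe > 0:
                n = min(safe, len(pending))
                dst.seek(write_pos)
                dst.write(pending[:n])
                write_pos += n
                del pending[:n]
        # Pipeline is done: flush the remaining buffered output and
        # trim the file in case the output is shorter than the input.
        dst.seek(write_pos)
        dst.write(pending)
        write_pos += len(pending)
        dst.truncate(write_pos)
    proc.wait()
    return write_pos
```

For example, `rewrite_in_place("file.lzo", "lzop -d | gzip --fast")`
would mirror the first usage example above (minus the rename, which
the sketch does not attempt).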