How do you use the dd command on a Linux system? Many newcomers find it confusing, so this article summarizes the command's parameters and walks through practical examples to help you get started.
On Linux, the dd command copies a file using blocks of a specified size, and can perform the specified conversions while copying.
Parameter reference:
bs=BYTES: read and write BYTES bytes at a time (also see ibs=, obs=)
cbs=BYTES: convert BYTES bytes at a time
conv=CONVS: convert the file as per the comma-separated symbol list
count=N: copy only N input blocks
ibs=BYTES: read BYTES bytes at a time (default: 512)
if=FILE: read from FILE instead of stdin (the default is standard input)
iflag=FLAGS: read as per the comma-separated symbol list
obs=BYTES: write BYTES bytes at a time (default: 512)
of=FILE: write to FILE instead of stdout (the default is standard output)
oflag=FLAGS: write as per the comma-separated symbol list
seek=BLOCKS: skip BLOCKS obs-sized blocks at the start of output
skip=BLOCKS: skip BLOCKS ibs-sized blocks at the start of input
status=WHICH: which info to suppress when writing to stderr; 'noxfer' suppresses transfer statistics, 'none' suppresses all
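To illustrate how these parameters combine (a minimal sketch; the file names input.bin and output.bin are hypothetical and not from the original), the following reads 512-byte input blocks, skips the first one, copies four blocks, and writes them out in 1024-byte blocks:

# Hypothetical example: skip the first 512-byte input block,
# copy four input blocks, and write the result in 1024-byte output blocks.
dd if=input.bin of=output.bin ibs=512 obs=1024 skip=1 count=4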
Available values for CONVS:
ascii: from EBCDIC to ASCII
ebcdic: from ASCII to EBCDIC
ibm: from ASCII to alternate EBCDIC
block: pad newline-terminated records with spaces to cbs-size
unblock: replace trailing spaces in cbs-size records with newline
lcase: change upper case to lower case
nocreat: do not create the output file
excl: fail if the output file already exists
notrunc: do not truncate the output file
ucase: change lower case to upper case
sparse: try to seek rather than write the output for NUL input blocks
swab: swap every pair of input bytes
noerror: continue after read errors
sync: pad every input block with NULs to ibs-size; when used with block or unblock, pad with spaces rather than NULs
fdatasync: physically write output file data before finishing
fsync: likewise, but also write metadata
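As a hedged illustration of combining conv symbols (the file names notes.txt and notes_upper.txt are hypothetical), the following copies a text file while converting it to upper case, and refuses to overwrite an existing output file:

# Hypothetical example: convert lower case to upper case;
# excl makes dd fail if notes_upper.txt already exists.
dd if=notes.txt of=notes_upper.txt conv=ucase,excl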
Available values for FLAGS:
append: append mode (makes sense only for output; conv=notrunc suggested)
direct: use direct I/O for data
directory: fail unless a directory
dsync: use synchronized I/O for data
sync: likewise, but also for metadata
fullblock: accumulate full blocks of input (iflag only)
nonblock: use non-blocking I/O
noatime: do not update access time
noctty: do not assign controlling terminal from file
nofollow: do not follow symlinks
count_bytes: treat 'count=N' as a byte count (iflag only)
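For instance, a sketch of appending to a file with these flags (the file names today.log and log.bak are hypothetical; conv=notrunc is used alongside oflag=append, as the list above suggests):

# Hypothetical example: append today.log to the end of log.bak
# without truncating the existing output file.
dd if=today.log of=log.bak oflag=append conv=notrunc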
Note: wherever a number is specified, it may end with one of the following suffixes, in which case it is multiplied accordingly:
c=1, w=2, b=512, kB=1000, K=1024, MB=1000*1000, M=1024*1024, xM=M, GB=1000*1000*1000, G=1024*1024*1024, and so on for T, P, E, Z, Y.
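For example, these suffixes make block sizes easier to write. The sketch below (the output file name test.img is hypothetical) creates a 100 MiB file of zeros:

# bs=4M means 4*1024*1024-byte blocks; 25 such blocks give 100 MiB in total.
dd if=/dev/zero of=test.img bs=4M count=25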
II. Usage examples
1. Back up the entire local disk /dev/hdb to /dev/hdd
dd if=/dev/hdb of=/dev/hdd
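If the source disk may contain bad sectors, a commonly used variant (a sketch, not part of the original example) keeps copying after read errors and pads the unreadable blocks so offsets stay aligned:

# Continue after read errors and pad failed reads with NULs to keep the copy aligned.
dd if=/dev/hdb of=/dev/hdd bs=64K conv=noerror,sync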
2. Back up all data on /dev/hdb to an image file at a specified path
dd if=/dev/hdb of=/root/image
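Restoring is simply the reverse direction. Assuming /root/image was created as above, a sketch of writing it back to the disk:

# Write the image file back onto the disk.
dd if=/root/image of=/dev/hdb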
3. Back up all data on /dev/hdb, compress it with gzip, and save it to a specified path
dd if=/dev/hdb | gzip > /root/image.gz
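To restore from the compressed image (a sketch, assuming /root/image.gz was produced as above), decompress on the fly and pipe the result into dd:

# Decompress the image and write it back to the disk.
gzip -dc /root/image.gz | dd of=/dev/hdb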
4. Split one file into 3 files
# The file is about 2.3 KB in size.
[oracle@rhel6 ~]$ ll db1_db_links.sql
-rw-r--r-- 1 oracle oinstall 2344 Nov 21 10:39 db1_db_links.sql
# Split it into 1 KB pieces with bs=1k and count=1; the skip parameter specifies how many bs-sized blocks of the input file to skip before reading.
[oracle@rhel6 ~]$ dd if=db1_db_links.sql of=dd01.sql bs=1k count=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 4.5536e-05 s, 22.5 MB/s
[oracle@rhel6 ~]$ dd if=db1_db_links.sql of=dd02.sql bs=1k count=1 skip=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000146387 s, 7.0 MB/s
[oracle@rhel6 ~]$ dd if=db1_db_links.sql of=dd03.sql bs=1k count=1 skip=2
0+1 records in
0+1 records out
296 bytes (296 B) copied, 0.000204216 s, 1.4 MB/s
# The resulting files:
[oracle@rhel6 ~]$ ll dd*sql
-rw-r--r-- 1 oracle oinstall 1024 May 20 14:58 dd01.sql
-rw-r--r-- 1 oracle oinstall 1024 May 20 14:58 dd02.sql
-rw-r--r-- 1 oracle oinstall  296 May 20 14:58 dd03.sql
5. Merge the split files back into one
# Merge the pieces; the seek parameter specifies how many bs-sized blocks to skip at the start of the output file before writing.
[oracle@rhel6 ~]$ dd of=1.sql if=dd01.sql
2+0 records in
2+0 records out
1024 bytes (1.0 kB) copied, 0.000176 s, 5.8 MB/s
[oracle@rhel6 ~]$ dd of=1.sql if=dd02.sql bs=1k seek=1
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.000124038 s, 8.3 MB/s
[oracle@rhel6 ~]$ dd of=1.sql if=dd03.sql bs=1k seek=2
0+1 records in
0+1 records out
296 bytes (296 B) copied, 0.00203881 s, 145 kB/s
# Compare against the original file from before the split.
[oracle@rhel6 ~]$ diff 1.sql db1_db_links.sql
[oracle@rhel6 ~]$
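As an extra sanity check beyond diff (not part of the original session), checksums of the merged file and the original can be compared:

# The two checksums should be identical if the merge reproduced the original file.
md5sum 1.sql db1_db_links.sql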
6. Insert data at a specified position in the output file without truncating it
This requires the conv=notrunc option.
[oracle@rhel6 ~]$ dd if=2.sql of=1.sql bs=1k seek=1 count=2 conv=notrunc
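To confirm where the inserted data landed (a hypothetical check that mirrors the seek and count values above), the same region can be read back using skip on the input side:

# Print to stdout the 2 KiB region starting 1 KiB into 1.sql, i.e. the bytes just overwritten.
dd if=1.sql bs=1k skip=1 count=2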
Having read the above, you should now have a good grasp of how to use the dd command on a Linux system. Thanks for reading!