HDFS Shell Operations

1.1 Basic Syntax

hadoop fs <command> or hdfs dfs <command> (the two forms are completely equivalent)
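
For example, the following two invocations both list the HDFS root directory and produce identical output (assuming the cluster from section 1.3.1 below is running):

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /
[atguigu@hadoop102 hadoop-3.1.3]$ hdfs dfs -ls /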

1.2 Full Command List

[atguigu@hadoop102 current]$ hadoop fs
Usage: hadoop fs [generic options]
	[-appendToFile <localsrc> ... <dst>]
	[-cat [-ignoreCrc] <src> ...]
	[-checksum <src> ...]
	[-chgrp [-R] GROUP PATH...]
	[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
	[-chown [-R] [OWNER][:[GROUP]] PATH...]
	[-copyFromLocal [-f] [-p] [-l] [-d] [-t <thread count>] <localsrc> ... <dst>]
	[-copyToLocal [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-count [-q] [-h] [-v] [-t [<storage type>]] [-u] [-x] [-e] <path> ...]
	[-cp [-f] [-p | -p[topax]] [-d] <src> ... <dst>]
	[-createSnapshot <snapshotDir> [<snapshotName>]]
	[-deleteSnapshot <snapshotDir> <snapshotName>]
	[-df [-h] [<path> ...]]
	[-du [-s] [-h] [-v] [-x] <path> ...]
	[-expunge]
	[-find <path> ... <expression> ...]
	[-get [-f] [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
	[-getfacl [-R] <path>]
	[-getfattr [-R] {-n name | -d} [-e en] <path>]
	[-getmerge [-nl] [-skip-empty-file] <src> <localdst>]
	[-head <file>]
	[-help [cmd ...]]
	[-ls [-C] [-d] [-h] [-q] [-R] [-t] [-S] [-r] [-u] [-e] [<path> ...]]
	[-mkdir [-p] <path> ...]
	[-moveFromLocal <localsrc> ... <dst>]
	[-moveToLocal <src> <localdst>]
	[-mv <src> ... <dst>]
	[-put [-f] [-p] [-l] [-d] <localsrc> ... <dst>]
	[-renameSnapshot <snapshotDir> <oldName> <newName>]
	[-rm [-f] [-r|-R] [-skipTrash] [-safely] <src> ...]
	[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
	[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
	[-setfattr {-n name [-v value] | -x name} <path>]
	[-setrep [-R] [-w] <rep> <path> ...]
	[-stat [format] <path> ...]
	[-tail [-f] [-s <sleep interval>] <file>]
	[-test -[defsz] <path>]
	[-text [-ignoreCrc] <src> ...]
	[-touch [-a] [-m] [-t TIMESTAMP ] [-c] <path> ...]
	[-touchz <path> ...]
	[-truncate [-w] <length> <path> ...]
	[-usage [cmd ...]]

Generic options supported are:
-conf <configuration file>        specify an application configuration file
-D <property=value>               define a value for a given property
-fs <file:///|hdfs://namenode:port> specify default filesystem URL to use, overrides 'fs.defaultFS' property from configurations.
-jt <local|resourcemanager:port>  specify a ResourceManager
-files <file1,...>                specify a comma-separated list of files to be copied to the map reduce cluster
-libjars <jar1,...>               specify a comma-separated list of jar files to be included in the classpath
-archives <archive1,...>          specify a comma-separated list of archives to be unarchived on the compute machines

The general command line syntax is:
command [genericOptions] [commandOptions]
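
As a quick illustration of the generic options (a hedged sketch, reusing a file from section 1.3.2 below), -D overrides a configuration property for a single command; here it forces a replication factor of 1 while uploading:

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -D dfs.replication=1 -put ./wuguo.txt /sanguo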

1.3 Common Command Operations

1.3.1 Preparation

1) Start the Hadoop cluster

[atguigu@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh
[atguigu@hadoop102 hadoop-3.1.3]$ sbin/start-yarn.sh

2) -help: print the usage and parameters of a command
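
For example, to display the help text for the rm command:

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -help rm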

3) Create the /sanguo directory

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /sanguo

1.3.2 Upload

1) -moveFromLocal: cut and paste from the local file system to HDFS

[atguigu@hadoop102 hadoop-3.1.3]$ vim shuguo.txt
Type:
shuguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -moveFromLocal ./shuguo.txt /sanguo

2) -copyFromLocal: copy a file from the local file system to an HDFS path

[atguigu@hadoop102 hadoop-3.1.3]$ vim weiguo.txt
Type:
weiguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -copyFromLocal weiguo.txt /sanguo

3) -put: equivalent to copyFromLocal; put is the more common choice in production

[atguigu@hadoop102 hadoop-3.1.3]$ vim wuguo.txt
Type:
wuguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -put ./wuguo.txt /sanguo

4) -appendToFile: append a local file to the end of an existing HDFS file

[atguigu@hadoop102 hadoop-3.1.3]$ vim liubei.txt
Type:
liubei
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -appendToFile liubei.txt /sanguo/shuguo.txt
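
To confirm the append (a quick check, not part of the original walkthrough), cat the target file; it should now contain both lines:

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo.txt
shuguo
liubei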

1.3.3 Download

1) -copyToLocal: copy from HDFS to the local file system

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -copyToLocal /sanguo/shuguo.txt ./

2) -get: equivalent to copyToLocal; get is the more common choice in production

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -get /sanguo/shuguo.txt ./shuguo2.txt

1.3.4 Direct HDFS Operations

1) -ls: list directory contents

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /sanguo

2) -cat: display file contents

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo.txt

3) -chgrp, -chmod, -chown: same usage as in the Linux file system; they modify a file's group, permissions, and owner

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -chmod 666 /sanguo/shuguo.txt
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -chown atguigu:atguigu /sanguo/shuguo.txt

4) -mkdir: create a directory

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /jinguo

5) -cp: copy from one HDFS path to another HDFS path

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -cp /sanguo/shuguo.txt /jinguo

6) -mv: move files within HDFS

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /sanguo/wuguo.txt /jinguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /sanguo/weiguo.txt /jinguo

7) -tail: display the last 1 KB of a file

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -tail /jinguo/shuguo.txt
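
As the usage listing in section 1.2 shows, -tail also accepts a -f flag; like the Linux tail -f, it keeps running and prints new data as the file grows, which is handy for watching a file that is being appended to:

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -tail -f /jinguo/shuguo.txt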

8) -rm: delete a file or directory

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -rm /sanguo/shuguo.txt
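
If the HDFS trash is enabled (fs.trash.interval > 0 in core-site.xml), deleted files are first moved to a trash directory rather than removed outright; the -skipTrash flag from the usage listing in section 1.2 bypasses the trash and deletes permanently (path reused here for illustration only):

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -skipTrash /sanguo/shuguo.txt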

9) -rm -r: recursively delete a directory and its contents

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -r /sanguo

10) -du: report directory size information

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -du -s -h /jinguo
27  81  /jinguo
[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -du -h /jinguo
14  42  /jinguo/shuguo.txt
7   21  /jinguo/weiguo.txt
6   18  /jinguo/wuguo.txt

Note: 27 is the total file size (14 + 7 + 6); 81 is the space consumed by all replicas (27 × 3, with the default replication factor of 3); /jinguo is the directory being examined.

11) -setrep: set the replication factor of a file in HDFS

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -setrep 10 /jinguo/shuguo.txt

The replication factor set here is only recorded in the NameNode metadata; whether that many replicas actually exist depends on the number of DataNodes. Since there are currently only 3 machines, there can be at most 3 replicas; only when the cluster grows to 10 nodes can the replica count actually reach 10.
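
To see the factor recorded in the metadata, -stat with the %r format specifier can be used (a small check, not part of the original walkthrough); it reports 10 here even though only 3 physical replicas exist:

[atguigu@hadoop102 hadoop-3.1.3]$ hadoop fs -stat %r /jinguo/shuguo.txt
10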
