1. The hadoop command
$ hadoop
fs run a generic filesystem user client
#accesses the filesystem; equivalent to hdfs dfs
version print the version
jar <jar> run a jar file
#runs a jar on YARN
checknative [-a|-h] check native hadoop and compression libraries availability
#checks whether native Hadoop and compression libraries are available
distcp <srcurl> <desturl> copy file or directories recursively
#copies/backs up HDFS files between clusters; mostly used for operations work (see the examples after this listing)
archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
classpath prints the class path needed to get the
Hadoop jar and the required libraries
#the classpath Hadoop loads at startup
credential interact with credential providers
daemonlog get/set the log level for each daemon
trace view and modify Hadoop tracing settings
CLASSNAME run the class named CLASSNAME
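A few representative invocations of the commands above, as a quick sketch. The example jar path, host names, and HDFS paths are placeholder assumptions; substitute your own:
$ hadoop version
$ hadoop checknative -a
$ hadoop classpath
$ hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 10 100
$ hadoop distcp hdfs://nn1:8020/data hdfs://nn2:8020/backup/data
$ hadoop archive -archiveName logs.har -p /user/hadoop logs /user/hadoop/archives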
2. The hdfs command
$ hdfs
Usage: hdfs [--config confdir] COMMAND
where COMMAND is one of:
dfs run a filesystem command on the file systems supported in Hadoop.
#accesses the filesystem
namenode -format format the DFS filesystem
#formats the filesystem; normally used only when initializing the filesystem for the first time. After initialization, avoid re-running this command, as it can leave the cluster unable to start
secondarynamenode run the DFS secondary namenode
namenode run the DFS namenode
journalnode run the DFS journalnode
zkfc run the ZK Failover Controller daemon
datanode run a DFS datanode
dfsadmin run a DFS admin client
haadmin run a DFS HA admin client
fsck run a DFS filesystem checking utility
#checks filesystem health and reports block status, including corrupt and missing blocks (see the examples after this listing)
#for a detailed walkthrough, see the post 【學習筆記】Hadoop之HDFS Block損壞恢復最佳實踐(含思考題): https://blog.csdn.net/eryehong/article/details/95167059
balancer run a cluster balancing utility
#rebalances block distribution across the nodes of the cluster; best run when the cluster is relatively idle, since it affects file reads and writes
jmxget get JMX exported values from NameNode or DataNode.
mover run a utility to move block replicas across
storage types
oiv apply the offline fsimage viewer to an fsimage
oiv_legacy apply the offline fsimage viewer to a legacy fsimage
oev apply the offline edits viewer to an edits file
fetchdt fetch a delegation token from the NameNode
getconf get config values from configuration
#views the currently effective configuration values
groups get the groups which users belong to
snapshotDiff diff two snapshots of a directory or diff the
current directory contents with a snapshot
lsSnapshottableDir list all snapshottable dirs owned by the current user
Use -help to see options
portmap run a portmap service
nfs3 run an NFS version 3 gateway
cacheadmin configure the HDFS cache
crypto configure HDFS encryption zones
storagepolicies list/get/set block storage policies
version print the version
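A sketch of common administrative invocations from this listing; the paths and the threshold value are illustrative assumptions:
$ hdfs fsck / -files -blocks -locations      #per-file block layout and health
$ hdfs fsck / -list-corruptfileblocks        #files that currently have corrupt blocks
$ hdfs balancer -threshold 10                #rebalance until each node is within 10% of average utilization
$ hdfs getconf -confKey dfs.replication      #print one effective configuration value
$ hdfs dfsadmin -report                      #capacity and usage per datanode
$ hdfs dfsadmin -safemode get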
3. The hdfs dfs command
$ hdfs dfs
Usage: hadoop fs [generic options]
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
#views the contents of an HDFS file
[-checksum <src> ...]
[-chgrp [-R] GROUP PATH...]
#changes the group of HDFS files
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
#changes HDFS file permissions
[-chown [-R] [OWNER][:[GROUP]] PATH...]
#changes the owner of HDFS files
[-copyFromLocal [-f] [-p] [-l] <localsrc> ... <dst>]
#copies local files to HDFS; equivalent to -put
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
#copies HDFS files to the local filesystem; equivalent to -get
[-count [-q] [-h] [-v] <path> ...]
[-cp [-f] [-p | -p[topax]] <src> ... <dst>]
#copies HDFS files to another location in HDFS
[-createSnapshot <snapshotDir> [<snapshotName>]]
[-deleteSnapshot <snapshotDir> <snapshotName>]
[-df [-h] [<path> ...]]
#shows HDFS disk space usage
[-du [-s] [-h] <path> ...]
#shows the space used by HDFS files and directories
[-expunge]
[-find <path> ... <expression> ...]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
#copies HDFS files to the local filesystem
[-getfacl [-R] <path>]
[-getfattr [-R] {-n name | -d} [-e en] <path>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] [-l] <localsrc> ... <dst>]
#copies local files to HDFS (see the examples at the end of this listing)
[-renameSnapshot <snapshotDir> <oldName> <newName>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
[-setfacl [-R] [{-b|-k} {-m|-x <acl_spec>} <path>]|[--set <acl_spec> <path>]]
[-setfattr {-n name [-v value] | -x name} <path>]
[-setrep [-R] [-w] <rep> <path> ...]
[-stat [format] <path> ...]
[-tail [-f] <file>]
[-test -[defsz] <path>]
[-text [-ignoreCrc] <src> ...]
[-touchz <path> ...]
[-usage [cmd ...]]
Generic options supported are
-conf <configuration file> specify an application configuration file
-D <property=value> use value for given property
-fs <local|namenode:port> specify a namenode
-jt <local|resourcemanager:port> specify a ResourceManager
-files <comma separated list of files> specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars> specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives> specify comma separated archives to be unarchived on the compute machines.
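To close, a short end-to-end sketch of everyday hdfs dfs usage, including one generic option; the /user/hadoop paths and file names are assumptions:
$ hdfs dfs -mkdir -p /user/hadoop/input
$ hdfs dfs -put -f local.txt /user/hadoop/input/      #overwrite the target if it already exists
$ hdfs dfs -ls -h -R /user/hadoop
$ hdfs dfs -cat /user/hadoop/input/local.txt
$ hdfs dfs -du -s -h /user/hadoop
$ hdfs dfs -setrep -w 2 /user/hadoop/input/local.txt  #change the replication factor and wait for completion
$ hdfs dfs -chown -R hadoop:hadoop /user/hadoop/input
$ hdfs dfs -D dfs.replication=2 -put other.txt /user/hadoop/input/   #generic -D option overrides a config value for this one command
$ hdfs dfs -rm -r -skipTrash /user/hadoop/tmp         #bypasses the trash; files are deleted immediately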