Chapter 2: Shell Operations on HDFS
1. Basic syntax
bin/hadoop fs <command>
bin/hdfs dfs <command>
dfs is an implementation class of fs.
2. Full command list
[zhangyong@hadoop101 hadoop-3.1.2]$ bin/hadoop fs
3. Common commands in practice
(0) Start the Hadoop cluster (to make the tests below easier)
[zhangyong@hadoop101 hadoop-3.1.2]$ sbin/start-dfs.sh    # start HDFS
[zhangyong@hadoop101 hadoop-3.1.2]$ sbin/start-yarn.sh   # start YARN
[zhangyong@hadoop101 hadoop-3.1.2]$ mr-jobhistory-daemon.sh start historyserver   # start the job history server
Check with jps whether startup succeeded:
If the services above are listed, startup was successful.
Verify in the browser:
http://hadoop101:9870/dfshealth.html#tab-datanode
(1) -help: print usage information for a command
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -help rm
(2) -ls: list directory contents
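A minimal sketch of typical usage (the paths shown are illustrative):

```shell
hadoop fs -ls /            # list the HDFS root directory
hadoop fs -ls -R /sanguo   # -R lists the directory tree recursively
```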
(3) -mkdir: create a directory on HDFS
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -mkdir -p /sanguo/shuguo
(4) -moveFromLocal: cut and paste from the local file system to HDFS
[zhangyong@hadoop101 hadoop-3.1.2]$ touch kongming.txt    # create kongming.txt locally
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -moveFromLocal ./kongming.txt /sanguo/shuguo    # move the local kongming.txt into HDFS
(5) -appendToFile: append a local file to the end of a file that already exists on HDFS
[zhangyong@hadoop101 hadoop-3.1.2]$ touch zhangyong.txt
[zhangyong@hadoop101 hadoop-3.1.2]$ vi zhangyong.txt
Enter:
zhangyong cainiao
[zhangyong@hadoop101 hadoop-3.1.2]$ hdfs dfs -appendToFile zhangyong.txt /sanguo/shuguo/kongming.txt
(6) -cat: display file contents
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -cat /sanguo/shuguo/kongming.txt
(7) -chgrp, -chmod, -chown: same usage as in the Linux file system; change a file's permissions and ownership
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -chmod 666 /sanguo/shuguo/kongming.txt
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -chown zhangyong:zhangyong /sanguo/shuguo/kongming.txt
(8) -copyFromLocal: copy a file from the local file system to an HDFS path
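A sketch of the usage, assuming a local file named README.txt exists (the file name is hypothetical):

```shell
# unlike -moveFromLocal, the local copy is kept after the upload
hadoop fs -copyFromLocal README.txt /sanguo/shuguo
```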
(9) -copyToLocal: copy from HDFS to the local file system
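For example, downloading the kongming.txt uploaded earlier into the current local directory:

```shell
hadoop fs -copyToLocal /sanguo/shuguo/kongming.txt ./
```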
(10) -cp: copy from one HDFS path to another HDFS path
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -cp /sanguo/shuguo/kongming.txt /zhuge.txt
(11) -mv: move files within HDFS
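A sketch, moving the /zhuge.txt copied above into another directory (-mv can also be used to rename a file):

```shell
hadoop fs -mv /zhuge.txt /sanguo/shuguo/
```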
(12) -get: equivalent to copyToLocal; download a file from HDFS to the local file system
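For example, downloading kongming.txt to the current local directory:

```shell
hadoop fs -get /sanguo/shuguo/kongming.txt ./
```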
(13) -getmerge: merge and download multiple files; for example, the HDFS directory /user/zhangyong/test contains several files: log.1, log.2, log.3, …
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -getmerge /user/zhangyong/test/* ./zaiyiqi.txt
(14) -put: equivalent to copyFromLocal
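A sketch, uploading the zaiyiqi.txt produced by -getmerge above back into HDFS:

```shell
hadoop fs -put ./zaiyiqi.txt /user/zhangyong/test/
```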
(15) -tail: display the end of a file
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -tail /sanguo/shuguo/kongming.txt
(16) -rm: delete a file or directory
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -rm /sanguo/shuguo/zhuge.txt
(17) -rmdir: delete an empty directory
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -mkdir /test    # create an empty directory
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -rmdir /test    # delete the directory
(18) -du: show the size of a directory
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -du -s -h /user/zhangyong/test
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -du -h /user/zhangyong/test
(19) -setrep: set the replication factor of a file in HDFS
[zhangyong@hadoop101 hadoop-3.1.2]$ hadoop fs -setrep 10 /sanguo/shuguo/kongming.txt
HDFS replication factor
The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. Since the cluster currently has only 3 machines, there can be at most 3 replicas; only when the cluster grows to 10 nodes can the replication factor actually reach 10.
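To inspect the replication factor recorded in the NameNode's metadata for a file, `hadoop fs -stat` with the `%r` format specifier can be used (a sketch; the value printed depends on the cluster):

```shell
# %r prints the recorded replication factor of the file
hadoop fs -stat %r /sanguo/shuguo/kongming.txt
```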