Deploying Spark + Hadoop + Java + Scala + your own service in a container
Steps: pull a base image from the public network -> create a container -> install the software -> package the container -> create an image -> use the image offline
For simplicity, only the single-node installation of the software above is documented here. A cluster installation only requires changes to the relevant configuration files; please look that up on your own.
Pulling the image from the public network
- Image source: docker.io/centos
- List local images:
docker image ls
Once the image has been pulled to the local machine, a container can be created from it. If it is not present yet, a minimal pull command is sketched below.
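A minimal pull, assuming the docker.io/centos source named above (the exact tag is up to you):
docker pull docker.io/centos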
Creating the container
Custom container name: recSys
There are two ways to create the container. The first is usually used during experimentation; the second is generally recommended for production. The difference is whether the port mappings between the container and the host are specified in advance.
Method 1: no ports are specified; if you need to add a port mapping later you have to modify the container's configuration files (a hedged sketch of that follows the commands below).
docker run -it --name=recSys docker.io/centos /bin/bash   # create the container
exit                                # the container stops automatically when you exit; to get back in, start it again
docker start recSys                 # start the container
docker exec -it recSys /bin/bash    # enter the container
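For reference, one commonly used way to add a port mapping to an already-created container is to edit Docker's per-container config files directly. This is only a sketch: it assumes Docker's default storage path (/var/lib/docker), the `<container-id>` and port numbers are placeholders, and the Docker daemon must be stopped before editing.
systemctl stop docker                              # stop the Docker daemon before editing
cd /var/lib/docker/containers/<container-id>/      # <container-id> is a placeholder for the full container ID
vi hostconfig.json                                 # add an entry under "PortBindings", e.g. "8099/tcp": [{"HostIp": "", "HostPort": "8900"}]
vi config.v2.json                                  # add "8099/tcp": {} under "ExposedPorts"
systemctl start docker                             # restart Docker, then start the container again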
Method 2: plan the ports (or IPs) in advance.
For this workload: 8099 is the Hadoop (YARN) management/monitoring port and 50070 is the HDFS web port. Map host port 8900 to container port 8099, host 50071 to container 50070, and host 8002 to container 8002; 8002 is the port of our own application service.
[root@host /]# docker run -itd -p 8900:8099 -p 50071:50070 -p 8002:8002 --name=recsys rec_sys_ready /bin/bash   # -d runs the container in the background
1eaf173131b47858510ca3e9f4e2f5264fb3ec5de938ae3a123a85b670784357
[root@host /]# docker ps   # docker ps -a lists both running and stopped containers
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
1eaf173131b4 rec_sys_ready "/bin/bash" 6 seconds ago Up 5 seconds 0.0.0.0:8002->8002/tcp, 0.0.0.0:8900->8099/tcp, 0.0.0.0:50071->50070/tcp recsys
# A container created this way has no foreground terminal; use exec to get a shell inside it
docker exec -it recsys /bin/bash   # enter the container
Installing the software
Software versions: you need to install the JDK, Spark, Hadoop, and Scala. Pay attention to version compatibility; if you are unsure which versions match, you can simply download the latest version of each.
[root@container opt]# ls
derby.log  jdk1.8.0_131  recsys  spark-1.5.0-bin-hadoop2.6  hadoop-2.7.3  metastore_db  scala-2.11.8
Uploading the packages: here this means copying them from the host into the container. If a package is not on the host yet, upload it to the host first and then copy it into the container.
- Taking Spark as an example, the upload command works like Linux cp: docker cp <file> <container name>:<directory>
# Install Spark
[root@host ~]# docker cp spark-1.5.0-bin-hadoop2.6.tgz recSys:/opt/   # copy the package from the host into the container; recSys:/opt is the container name and target directory
# After all packages have been uploaded
[root@container opt]# ls
jdk-8u131-linux-x64.tar.gz  scala-2.11.8.tgz  recsys  spark-1.5.0-bin-hadoop2.6.tgz
Pre-installation preparation
- The downloaded image has neither the SSH server nor the SSH client; install both manually, otherwise starting Hadoop will fail.
CentOS:
yum -y install openssh-server openssh-clients
Ubuntu:
apt-get update
apt-get install openssh-server openssh-client
- Start the SSH service. You must create /var/run/sshd manually first, otherwise sshd will report an error on startup.
mkdir -p /var/run/sshd
- sshd runs as a daemon; note that after the container is restarted, the SSH service must be started again manually.
- Install netstat to check whether sshd is listening on port 22.
apt-get install net-tools   # or: yum install net-tools
netstat -apn | grep ssh
- If port 22 is being listened on, sshd has started successfully.
[root@container var]# mkdir -p /var/run/sshd
[root@container var]# /usr/sbin/sshd -D &
[1] 946
[root@container var]# WARNING: 'UsePAM no' is not supported in Fedora and may cause several problems.
[root@container var]# netstat -apn | grep ssh
tcp        0      0 0.0.0.0:22      0.0.0.0:*      LISTEN      946/sshd
tcp6       0      0 :::22           :::*           LISTEN      946/sshd
- Change the root password inside the container
rpm -e cracklib-dicts --nodeps   # remove the existing cracklib-dicts package (if present)
yum install cracklib-dicts       # reinstall it so that passwd works
passwd root                      # change the password
Installing each package
- Installation steps:
  1. extract the archives
  2. configure environment variables
  3. package-specific configuration
# 1. Extract each package inside the container (the other packages are extracted the same way)
tar -zxvf scala-2.11.8.tgz
tar -zxvf spark-1.5.0-bin-hadoop2.6.tgz
# After extraction
[root@container opt]# ls
jdk1.8.0_131  recsys  spark-1.5.0-bin-hadoop2.6  hadoop-2.7.3  metastore_db  scala-2.11.8
2. Configure environment variables for each package
- Java
[root@container opt]# cd jdk1.8.0_131/
[root@container jdk1.8.0_131]# ls
COPYRIGHT  THIRDPARTYLICENSEREADME-JAVAFX.txt  db  jre  release  LICENSE  THIRDPARTYLICENSEREADME.txt  include  lib  src.zip  README.html  bin  javafx-src.zip  man
[root@container jdk1.8.0_131]# pwd   # JDK path
/opt/jdk1.8.0_131
[root@container jdk1.8.0_131]# export PATH=$PATH:/opt/jdk1.8.0_131/bin:/opt/jdk1.8.0_131/jre/bin   # add to PATH
[root@container jdk1.8.0_131]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/jdk1.8.0_131/bin:/opt/jdk1.8.0_131/jre/bin
[root@container jdk1.8.0_131]# java -version   # verify after configuring
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
[root@container jdk1.8.0_131]# javac -version   # JDK verified
javac 1.8.0_131
- Scala environment variables
[root@container bin]# export PATH=$PATH:/opt/scala-2.11.8/bin/
[root@container bin]# echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/jdk1.8.0_131/bin:/opt/jdk1.8.0_131/jre/bin:/opt/scala-2.11.8/bin/
[root@container bin]# cd ..
[root@container scala-2.11.8]# cd ..
[root@container opt]# scala -version   # verify the configuration
Scala code runner version 2.11.8 -- Copyright 2002-2016, LAMP/EPFL
- Spark configuration and verification
Add /opt/spark-1.5.0-bin-hadoop2.6/bin/ to PATH, then create the spark-env.sh file and add the configuration below.
[root@container conf]# pwd
/opt/spark-1.5.0-bin-hadoop2.6/conf
[root@container conf]# ls
docker.properties.template  metrics.properties.template  spark-env.sh.template  fairscheduler.xml.template  slaves.template  log4j.properties.template  spark-defaults.conf.template
cp spark-env.sh.template spark-env.sh
vim spark-env.sh   # add the following settings
export SCALA_HOME=/opt/scala-2.11.8
export JAVA_HOME=/opt/jdk1.8.0_131
export SPARK_MASTER_IP=172.17.0.3
export SPARK_MASTER_PORT=7077
export SPARK_LOCAL_IP=172.17.0.3
export SPARK_DIST_CLASSPATH=$(/opt/hadoop-2.7.3/bin/hadoop classpath)
Then start spark-shell; the startup log ends with a Scala prompt:
21/02/05 06:03:11 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
21/02/05 06:03:14 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/02/05 06:03:14 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
21/02/05 06:03:15 WARN Connection: BoneCP specified but not present in CLASSPATH (or one of dependencies)
21/02/05 06:03:23 WARN ObjectStore: Version information not found in metastore. hive.metastore.schema.verification is not enabled so recording the schema version 1.2.0
21/02/05 06:03:23 WARN ObjectStore: Failed to get database default, returning NoSuchObjectException
SQL context available as sqlContext.
scala>
- Note: setting environment variables with export in the terminal is only temporary; it is cleared when the machine restarts. The safe approach is to write the configuration into the .bashrc file.
[root@host ~]# docker exec -it recsys /bin/bash -c "cat ~/.bashrc"
# .bashrc
......   # omitted
JAVA_HOME=/opt/jdk1.8.0_131/jre/bin/
export PATH=$PATH:$JAVA_HOME
export PATH=$PATH:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/jdk1.8.0_131/jre/bin/:/opt/hadoop-2.7.3/bin/:/opt/scala-2.11.8/bin/:/opt/spark-1.5.0-bin-hadoop2.6/bin/   # full configuration
3. Package-specific configuration
Hadoop configuration
1. Set the JDK path in hadoop-env.sh and yarn-env.sh
Open /opt/hadoop-2.7.3/etc/hadoop/hadoop-env.sh and add the JDK path. Note that this is the JDK installation directory, not the bin or jre launcher directories.
Open /opt/hadoop-2.7.3/etc/hadoop/yarn-env.sh and add the same JDK path:
export JAVA_HOME=/opt/jdk1.8.0_131
2. Verify after configuring:
[root@container hadoop]# hadoop version
Hadoop 2.7.3
Subversion https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff
Compiled by root on 2016-08-18T01:41Z
Compiled with protoc 2.5.0
From source with checksum 2e4ce5f957ea4db193bce3734ff29ff4
This command was run using /opt/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar
3. Configure the four files yarn-site.xml, core-site.xml, hdfs-site.xml, and mapred-site.xml.
- yarn-site.xml, located in /opt/hadoop-2.7.3/etc/hadoop
First get the Hadoop classpath and copy it into the value field of yarn-site.xml:
[root@container hadoop]# hadoop classpath
/opt/hadoop-2.7.3/etc/hadoop:/opt/hadoop-2.7.3/share/hadoop/common/lib/*:/opt/hadoop-2.7.3/share/hadoop/common/*:/opt/hadoop-2.7.3/share/hadoop/hdfs:/opt/hadoop-2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop-2.7.3/share/hadoop/hdfs/*:/opt/hadoop-2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop-2.7.3/share/hadoop/yarn/*:/opt/hadoop-2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.7.3/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>${yarn.resourcemanager.hostname}:8099</value>
  </property>
  <property>
    <name>yarn.application.classpath</name>
    <value>/opt/hadoop-2.7.3/etc/hadoop:/opt/hadoop-2.7.3/share/hadoop/common/lib/*:/opt/hadoop-2.7.3/share/hadoop/common/*:/opt/hadoop-2.7.3/share/hadoop/hdfs:/opt/hadoop-2.7.3/share/hadoop/hdfs/lib/*:/opt/hadoop-2.7.3/share/hadoop/hdfs/*:/opt/hadoop-2.7.3/share/hadoop/yarn/lib/*:/opt/hadoop-2.7.3/share/hadoop/yarn/*:/opt/hadoop-2.7.3/share/hadoop/mapreduce/lib/*:/opt/hadoop-2.7.3/share/hadoop/mapreduce/*:/contrib/capacity-scheduler/*.jar</value>
  </property>
</configuration>
- Before editing core-site.xml, hdfs-site.xml, and mapred-site.xml, first manually create the local filesystem directories under /data, as in the tree below (a mkdir sketch follows the tree):
hadoop
|-- dfs
|   |-- data
|   `-- name
|-- tmp
`-- var
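The tree above can be created in one command; the paths below match the values used in core-site.xml, hdfs-site.xml, and mapred-site.xml that follow:
mkdir -p /data/hadoop/dfs/name /data/hadoop/dfs/data /data/hadoop/tmp /data/hadoop/var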
- core-site.xml, located in /opt/hadoop-2.7.3/etc/hadoop
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://172.17.0.3:9000</value>   <!-- 172.17.0.3 is the container IP; a hostname may be used instead -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/tmp</value>         <!-- local path -->
  </property>
</configuration>
- hdfs-site.xml, located in /opt/hadoop-2.7.3/etc/hadoop
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <value>/data/hadoop/dfs/name</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <value>/data/hadoop/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
- mapred-site.xml, located in /opt/hadoop-2.7.3/etc/hadoop
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.job.tracker</name>
    <value>172.17.0.3:9001</value>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/data/hadoop/var</value>
  </property>
</configuration>
Starting Hadoop after configuration
- 1. Format the filesystem first: bin/hdfs namenode -format
- 2. Start the NameNode: sbin/hadoop-daemon.sh start namenode
- 3. Start the DataNode: sbin/hadoop-daemon.sh start datanode
Note: steps 2 and 3 can be replaced with: sbin/start-all.sh (see the equivalent scripts sketched below)
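For reference, in Hadoop 2.x start-all.sh is deprecated and simply delegates to the two scripts below, which together launch the HDFS and YARN daemons listed in the jps output that follows:
sbin/start-dfs.sh    # NameNode, SecondaryNameNode, DataNode
sbin/start-yarn.sh   # ResourceManager, NodeManager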
Verify after startup: jps should show the following five daemons (plus Jps itself)
1056 NameNode
1712 ResourceManager
28532 Jps
2059 NodeManager
1501 SecondaryNameNode
1262 DataNode
Verification from the web UI
http://10.12.33.200:50071/dfshealth.html#tab-overview
http://10.12.33.200:8900/cluster   # note: 8900 is the host port; the container port is 8099
Filesystem test
[root@container ~]# hdfs dfs -mkdir /log   # create a log directory; the HDFS root directory exists by default and all files live under it; this path is unrelated to the local paths configured in hdfs-site.xml
[root@container ~]# hdfs dfs -ls /log
[root@container ~]# hdfs dfs -ls /
Found 3 items
drwxr-xr-x - root supergroup 0 2021-02-18 07:26 /data
drwxr-xr-x - root supergroup 0 2021-02-20 08:42 /log
drwx------ - root supergroup 0 2021-02-07 09:01 /tmp
[root@container ~]# ls
anaconda-ks.cfg anaconda-post.log original-ks.cfg
[root@container ~]# hdfs dfs -put anaconda-post.log /log
[root@container ~]# hdfs dfs -ls /log
Found 1 items
-rw-r--r-- 1 root supergroup 435 2021-02-20 08:43 /log/anaconda-post.log
This completes the software installation. Next, copy your own application into the container (a hedged example follows); once it has been tested successfully, package the container into an image for use elsewhere.
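Copying the application into the container uses the same docker cp pattern as the Spark package earlier. For example, assuming the application is the recsys directory shown in the /opt listings above (adjust the path to your own service):
docker cp recsys recSys:/opt/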
Packaging the container
Generate a new image; recsys is the new image name
[root@host ~]# docker commit -a "chen" -m "recomment system" recSys recsys:1.0
sha256:250b235dcdfa5fd20e1923524eaa6b82af4aa7f21dfae88a147d5397597d5bf3
Export the generated image
docker save -o recsys.tar recsys:1.0
recsys.tar is the packaged image. Copy it to the customer environment or another machine, then use the load command to restore a usable image:
docker load --input recsys.tar
This image already contains the Java + Spark + Hadoop + Scala environment and the deployed application; to bring it up in the target environment, refer to the method described earlier (a run command is sketched below for reference).
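As a reminder, starting a container from the loaded image with the same port mappings as in method 2 above might look like this (image tag taken from the commit step):
docker run -itd -p 8900:8099 -p 50071:50070 -p 8002:8002 --name=recsys recsys:1.0 /bin/bash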
FAQ / issue log
- Issue: when checking the Hadoop version, the Java path cannot be found; this requires configuring the JDK environment (see the hadoop-env.sh step above).
[root@container sbin]# hadoop version
Error: JAVA_HOME is not set and could not be found.
- Error when starting the Hadoop filesystem
[root@container hadoop-2.7.3]# sbin/start-dfs.sh
Starting namenodes on [d9243e9eccf1]
d9243e9eccf1: /opt/hadoop-2.7.3/sbin/slaves.sh: line 60: ssh: command not found
localhost: /opt/hadoop-2.7.3/sbin/slaves.sh: line 60: ssh: command not found
Starting secondary namenodes [0.0.0.0]
0.0.0.0: /opt/hadoop-2.7.3/sbin/slaves.sh: line 60: ssh: command not found
- Cannot ssh from the container to other servers; the cause is that the SSH client (openssh-clients) is not installed, so the ssh command is unavailable (fix sketched after the output below).
[root@container ssh]# ssh                    # SSH client not installed
bash: ssh: command not found
[root@container ssh]# ps -ef | grep sshd     # the sshd process itself is running fine
root      2153     0  0 10:41 ?        00:00:00 /usr/sbin/sshd
root      2179    18  0 10:47 ?        00:00:00 grep --color=auto sshd
[root@container opt]# ssh root@<server-ip>   # after installing the SSH client, ssh works
root@<server-ip>'s password:
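The fix is simply to install the client package mentioned in the preparation step (CentOS shown here; on Ubuntu the package is openssh-client):
yum -y install openssh-clients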