
Hive Data Warehouse Setup

Table of Contents

  • MySQL Installation
  • 1. Install MySQL on the master node
  • 2. Configure MySQL parameters
  • 3. Enable auto-start on boot
  • 4. Start and initialize MySQL
  • 5. Grant privileges to the MySQL user hive
  • 6. Restart the MySQL service
  • Hive Installation
  • 1. Install Hive on the master node
  • 2. Rename the configuration files
  • 3. Configure hive-env.sh
  • 4. Create and configure hive-site.xml
  • 5. Copy in the MySQL Connector jar
  • 6. Update the environment variables and apply them
  • Start the Hive Services
  • 1. Create the hive database and import the hive-schema
  • 2. Start the Hive metastore
  • 3. Start HiveServer2
  • Install Hive on the Client
  • 1. Copy the Hive installation from the master
  • 2. Modify hive-site.xml
  • 3. Update the environment variables
  • Test Hive
  • 1. Start Hive on the client
  • 2. View the databases
  • 3. View the web UI in a browser

Note: Before this article was written, the three-node Hadoop cluster had already been set up, and the desktop machine already had its network, yum repositories, and firewall (disabled) configured. See the first and second articles in this series for details.

MySQL Installation

1. Install MySQL on the master node

hadoop@ddai-master:~$ sudo apt update
hadoop@ddai-master:~$ sudo apt install mysql-client mysql-server      

2. Configure MySQL parameters

hadoop@ddai-master:~$ sudo vim /etc/mysql/mysql.conf.d/mysqld.cnf      
Modify the following parameter:
Change bind-address = 127.0.0.1 to bind-address = 0.0.0.0

Add the following lines:
default-storage-engine = innodb
innodb_file_per_table=on
collation-server = utf8_general_ci
init-connect = 'SET NAMES utf8'
character-set-server = utf8      

3. Enable auto-start on boot

hadoop@ddai-master:~$ sudo systemctl enable mysql.service      

4. Start and initialize MySQL

hadoop@ddai-master:~$ sudo systemctl start mysql.service
# initialize
hadoop@ddai-master:~$ sudo mysql_secure_installation

5. Grant privileges to the MySQL user hive

hadoop@ddai-master:~$ sudo mysql -uroot -p123456
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
4 rows in set (0.00 sec)
mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed

mysql> create user 'hive'@'%' identified by 'Dai@123456';
mysql> grant all privileges on *.* to 'hive'@'%';
mysql> create user 'hive'@'localhost' identified by 'Dai@123456';
mysql> grant all privileges on *.* to 'hive'@'localhost';
mysql> ALTER USER 'hive'@'%' REQUIRE none;      
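As a quick sanity check (optional; a sketch run on the master), the new account should be able to log in and see its own grants:

hadoop@ddai-master:~$ mysql -uhive -pDai@123456 -e "select user,host from mysql.user where user='hive';"
hadoop@ddai-master:~$ mysql -uhive -pDai@123456 -e "show grants;"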

6. Restart the MySQL service

hadoop@ddai-master:~$ sudo systemctl restart mysql.service      
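After the restart, it is worth confirming that MySQL is listening on all interfaces and using the new character set; a minimal check looks like this (a sketch):

hadoop@ddai-master:~$ sudo ss -tlnp | grep 3306
hadoop@ddai-master:~$ sudo mysql -uroot -p123456 -e "show variables like 'bind_address'; show variables like 'character_set_server';"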

Hive Installation

1. Install Hive on the master node

hadoop@ddai-master:~$ cd /opt/
hadoop@ddai-master:/opt$ sudo tar xvzf /home/hadoop/apache-hive-2.3.6-bin.tar.gz 
hadoop@ddai-master:/opt$ sudo chown -R hadoop:hadoop /opt/apache-hive-2.3.6-bin/      

2. Rename the configuration files

hadoop@ddai-master:/opt$ cd /opt/apache-hive-2.3.6-bin/conf/

# rename the following template files
hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/conf$ mv beeline-log4j2.properties.template beeline-log4j2.properties
hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/conf$ mv hive-env.sh.template hive-env.sh
hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/conf$ mv hive-exec-log4j2.properties.template hive-exec-log4j2.properties
hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/conf$ mv hive-log4j2.properties.template  hive-log4j2.properties
hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/conf$ mv llap-cli-log4j2.properties.template llap-cli-log4j2.properties
hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/conf$ mv llap-daemon-log4j2.properties.template llap-daemon-log4j2.properties      
After renaming, the conf directory contains hive-env.sh and the log4j2 .properties files without the .template suffix.

3. Configure hive-env.sh

hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/conf$ vim hive-env.sh 


HADOOP_HOME=/opt/hadoop-2.8.5/

export HIVE_CONF_DIR=/opt/apache-hive-2.3.6-bin/conf/
export HIVE_AUX_JARS_PATH=/opt/apache-hive-2.3.6-bin/lib/      

4. Create and configure hive-site.xml

hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/conf$ vim hive-site.xml      
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://ddai-master:3306/hive?createDatabaseIfNotExist=true&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
    <description>JDBC connect string for a JDBC metastore.</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
    <description>Username to use against metastore database</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>Dai@123456</value>
    <description>password to use against metastore database</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/opt/apache-hive-2.3.6-bin/logs</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://ddai-master:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
  <property>
    <name>hive.server2.webui.host</name>
    <value>0.0.0.0</value>
  </property>
  <property>
    <name>hive.server2.webui.port</name>
    <value>10002</value>
  </property>
</configuration>
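Note that the & characters inside the JDBC URL are written as &amp; because hive-site.xml is an XML file; a bare & would make the file malformed. If xmllint is available (e.g. from the libxml2-utils package), a quick well-formedness check looks like this (a sketch):

hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/conf$ xmllint --noout hive-site.xml && echo OK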

5. Copy in the MySQL Connector jar

hadoop@ddai-master:~$ cd /opt/apache-hive-2.3.6-bin/lib/      
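The MySQL JDBC driver jar must end up in /opt/apache-hive-2.3.6-bin/lib/ so Hive can reach the metastore database. A sketch of the copy, assuming the connector is named mysql-connector-java-5.1.48.jar and was uploaded to the hadoop home directory (the actual file name and version may differ):

hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/lib$ cp /home/hadoop/mysql-connector-java-5.1.48.jar .
hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/lib$ ls mysql-connector-java-*.jar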

6. Update the environment variables and apply them

hadoop@ddai-master:~$ vim .profile      
export HIVE_HOME=/opt/apache-hive-2.3.6-bin
export PATH=$PATH:$HIVE_HOME/bin      
hadoop@ddai-master:~$ source /home/hadoop/.profile      
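A quick check that the new PATH entry is picked up:

hadoop@ddai-master:~$ hive --version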

Start the Hive Services

1. Create the hive database and import the hive-schema

hadoop@ddai-master:~$ cd /opt/apache-hive-2.3.6-bin/scripts/metastore/upgrade/mysql/
hadoop@ddai-master:/opt/apache-hive-2.3.6-bin/scripts/metastore/upgrade/mysql$ mysql -hddai-master -uhive -pDai@123456
mysql> create database hive character set latin1;
mysql> use hive;
Database changed
mysql> source hive-schema-2.3.0.mysql.sql;
mysql> exit
Bye      
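To confirm the import succeeded, the metastore tables should now exist in the hive database (a sketch; the exact table list depends on the schema version):

hadoop@ddai-master:~$ mysql -hddai-master -uhive -pDai@123456 -e "use hive; show tables;"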

2. Start the Hive metastore

The first start attempt failed; the fix was to start Hadoop before starting Hive.
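A minimal sketch of bringing up Hadoop and then launching the metastore in the background; the nohup/log redirection here is an assumption rather than the exact command used in the original:

hadoop@ddai-master:~$ start-dfs.sh
hadoop@ddai-master:~$ start-yarn.sh
hadoop@ddai-master:~$ nohup hive --service metastore > /opt/apache-hive-2.3.6-bin/logs/metastore.log 2>&1 &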

3. Start HiveServer2


Then start ZooKeeper and HBase, and check the running processes.

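A sketch of launching HiveServer2 in the background and checking the processes with jps; the log path is an assumption:

hadoop@ddai-master:~$ nohup hive --service hiveserver2 > /opt/apache-hive-2.3.6-bin/logs/hiveserver2.log 2>&1 &
hadoop@ddai-master:~$ jps
# expect the Hadoop daemons, two RunJar entries (metastore and HiveServer2),
# QuorumPeerMain (ZooKeeper) and HMaster (HBase)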

Install Hive on the Client

1. Copy the Hive installation from the master

hadoop@ddai-desktop:~$ sudo scp -r hadoop@ddai-master:/opt/apache-hive-2.3.6-bin /opt/apache-hive-2.3.6-bin
hadoop@ddai-desktop:~$ sudo chown -R hadoop:hadoop /opt/apache-hive-2.3.6-bin/      

2. Modify hive-site.xml

hadoop@ddai-desktop:~$ cd /opt/apache-hive-2.3.6-bin/conf/
hadoop@ddai-desktop:/opt/apache-hive-2.3.6-bin/conf$ vim hive-site.xml      
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<configuration>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>/opt/apache-hive-2.3.6-bin/logs</value>
    <description>Location of Hive run time structured log file</description>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://ddai-master:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
</configuration>

3. Update the environment variables

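Presumably the same two lines as on the master are appended to /home/hadoop/.profile on the client:

hadoop@ddai-desktop:~$ vim .profile
export HIVE_HOME=/opt/apache-hive-2.3.6-bin
export PATH=$PATH:$HIVE_HOME/bin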
hadoop@ddai-desktop:~$ source /home/hadoop/.profile

Test Hive

First start Hadoop and the Hive services on the cluster nodes.

1. Start Hive on the client
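With the metastore and HiveServer2 running on ddai-master, the Hive CLI can be started directly on the client (a sketch):

hadoop@ddai-desktop:~$ hive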

2. View the databases

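From the Hive CLI, listing the databases should show at least the default database (a sketch; output omitted):

hive> show databases;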

3. View the web UI in a browser
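The HiveServer2 web UI is served on the port configured earlier (hive.server2.webui.port = 10002), so it should be reachable from the client's browser at:

http://ddai-master:10002/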