Hive Installation on Windows (Hadoop)

1. Hadoop Installation

   1) Start Hadoop

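On Windows, Hadoop is started with the .cmd scripts under %HADOOP_HOME%\sbin. A minimal sketch, assuming Hadoop is already installed and configured: start-dfs.cmd brings up the NameNode and DataNode, start-yarn.cmd the ResourceManager and NodeManager, and jps verifies the daemons are running.

$ start-dfs.cmd
$ start-yarn.cmd
$ jps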

   2) Create the HDFS directories

$ hadoop fs -mkdir /tmp
$ hadoop fs -mkdir /user/
$ hadoop fs -mkdir /user/hive/
$ hadoop fs -mkdir /user/hive/warehouse
$ hadoop fs -chmod g+w /tmp
$ hadoop fs -chmod g+w /user/hive/warehouse

2. Create the hive Database in MySQL

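A minimal sketch of creating the metastore database. The database name hive matches the JDBC URL configured in hive-site.xml below; the character set is an assumption (latin1 is a common choice for MySQL-backed metastores), adjust as needed:

$ mysql -u root -p
mysql> CREATE DATABASE hive DEFAULT CHARACTER SET latin1;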

3. Hive Installation

  1) Configure environment variables

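A sketch of the variables to set (System Properties -> Environment Variables), assuming Hive is unpacked to D:\temp\hadoop\apache-hive-2.1.1-bin, the path used throughout this article:

HIVE_HOME = D:\temp\hadoop\apache-hive-2.1.1-bin
Path      = %Path%;%HIVE_HOME%\bin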

  2) Create the my_hive directory

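Create the four local working directories referenced by hive-site.xml below:

$ mkdir D:\temp\hadoop\apache-hive-2.1.1-bin\my_hive\scratch_dir
$ mkdir D:\temp\hadoop\apache-hive-2.1.1-bin\my_hive\resources_dir
$ mkdir D:\temp\hadoop\apache-hive-2.1.1-bin\my_hive\querylog_dir
$ mkdir D:\temp\hadoop\apache-hive-2.1.1-bin\my_hive\operation_logs_dir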

  3) Configuration files (conf)

Copy the templates in the conf directory and rename them as follows:

hive-default.xml.template                ----->    hive-site.xml

hive-env.sh.template                     ----->    hive-env.sh

hive-exec-log4j2.properties.template     ----->    hive-exec-log4j2.properties

hive-log4j2.properties.template          ----->    hive-log4j2.properties

hive-site.xml configuration *******************************************************************

<!-- 指定的位置在hdfs上的目錄-->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

<!-- 指定的位置在hdfs上的目錄-->
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>

<!-- 本地scratch_dir目錄 -->
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>D:/temp/hadoop/apache-hive-2.1.1-bin/my_hive/scratch_dir</value>
    <description>Local scratch space for Hive jobs</description>
  </property><!-- 本地resources_dir目錄 -->
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>D:/temp/hadoop/apache-hive-2.1.1-bin/my_hive/resources_dir/${hive.session.id}_resources</value>            <description>Temporary local directory for added resources in the remote file system.</description>  </property>

<!-- 本地querylog_dir目錄 -->
<property>
    <name>hive.querylog.location</name>
    <value>D:/temp/hadoop/apache-hive-2.1.1-bin/my_hive/querylog_dir</value>
    <description>Location of Hive run time structured log file</description>
  </property>

<!-- 本地operation_logs_dir目錄 -->
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>D:/temp/hadoop/apache-hive-2.1.1-bin/my_hive/operation_logs_dir</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>

<!-- 資料庫URL -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?serverTimezone=UTC&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>

<!-- 資料庫Driver -->
 <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

<!-- 資料庫使用者 -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>

<!-- Password資料庫密碼 -->
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
    <description>password to use against metastore database</description>
  </property>

<!-- 解決 Caused by: MetaException(message:Version information not found in metastore. ) -->
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>


<!-- hive Required table missing : "DBS" in Catalog""Schema" 錯誤 -->
 <property>
<name>datanucleus.schema.autoCreateAll</name>
<value>true</value>
<description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once.To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
</property>      

  hive-env.sh configuration *******************************************************************

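A minimal sketch of the hive-env.sh entries; the template ships with these variables commented out. The HADOOP_HOME value is an assumption, point it at your own Hadoop install:

export HADOOP_HOME=D:\temp\hadoop\hadoop-2.7.3
export HIVE_CONF_DIR=D:\temp\hadoop\apache-hive-2.1.1-bin\conf
export HIVE_AUX_JARS_PATH=D:\temp\hadoop\apache-hive-2.1.1-bin\lib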

4) Add packages to the lib directory

    mysql-connector-java (the MySQL JDBC driver)

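Download the MySQL JDBC driver and copy the JAR into Hive's lib directory (the version number below is illustrative):

$ copy mysql-connector-java-5.1.46-bin.jar D:\temp\hadoop\apache-hive-2.1.1-bin\lib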

5) bin

    The Hive_x.x.x_bin.tar.gz release lacks the Hive executables and run scripts for the Windows environment.

 Replace the bin directory under D:\temp\hadoop\apache-hive-2.1.1-bin with the bin directory from apache-hive-1.0.0-src (which still includes the Windows .cmd scripts).

6) Initialize the metastore database

$ hive --service metastore

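With datanucleus.schema.autoCreateAll=true, the first start of the metastore service creates the schema tables in the hive database automatically. As the property description above notes, schematool is the recommended way to do this explicitly, assuming the script is usable in your environment:

$ schematool -dbType mysql -initSchema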

Note: if the database initialization fails, run the metastore schema SQL directly (see the sketch below).

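A sketch of loading the schema by hand; the script name below is the one shipped with Hive 2.1.1 under scripts\metastore\upgrade\mysql, verify it against your distribution:

$ mysql -u root -p
mysql> USE hive;
mysql> SOURCE D:/temp/hadoop/apache-hive-2.1.1-bin/scripts/metastore/upgrade/mysql/hive-schema-2.1.0.mysql.sql;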

 7) Run Hive

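With the metastore service running, open another console and start the Hive CLI for a quick smoke test:

$ hive
hive> show databases;
hive> create table test_tb (id int, name string);
hive> show tables;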

8) Problems when running Hive

    impossible to write to binary log since binlog_format=statement
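
This is a MySQL error: with binlog_format=STATEMENT, MySQL refuses statements against tables that require row-based logging. A common fix is to switch the binary log format to MIXED (or ROW), either at runtime or permanently under [mysqld] in my.ini:

mysql> SET GLOBAL binlog_format = 'MIXED';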