Installing Hive on Windows (Hadoop)

1. Hadoop Installation

1) Start Hadoop

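On Windows, the Hadoop daemons can be started with the .cmd scripts shipped under %HADOOP_HOME%\sbin (a rough sketch, assuming Hadoop is already installed and configured):

$ %HADOOP_HOME%\sbin\start-dfs.cmd    # NameNode + DataNode
$ %HADOOP_HOME%\sbin\start-yarn.cmd   # ResourceManager + NodeManager
$ jps                                 # verify the daemons are running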

2) Create the HDFS directories

$ hadoop fs  -mkdir       /tmp
$ hadoop fs  -mkdir       /user/
$ hadoop fs  -mkdir       /user/hive/
$ hadoop fs  -mkdir       /user/hive/warehouse 
$ hadoop fs  -chmod g+w   /tmp
$ hadoop fs  -chmod g+w   /user/hive/warehouse      

2. Create the Hive Database in MySQL

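A minimal sketch from the mysql command line, assuming the root account that hive-site.xml uses below (latin1 is commonly chosen to avoid index-length limits in older metastore schemas):

$ mysql -u root -p -e "CREATE DATABASE hive DEFAULT CHARACTER SET latin1;"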

3. Hive Installation

1) Configure environment variables

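From a command prompt, the equivalent would be roughly the following, assuming the install path used throughout this post (the variables take effect in a newly opened prompt):

$ setx HIVE_HOME "D:\temp\hadoop\apache-hive-2.1.1-bin"
$ setx PATH "%PATH%;%HIVE_HOME%\bin"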

2) Create the my_hive directory

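The directory tree must match the four local paths configured in hive-site.xml below:

$ mkdir D:\temp\hadoop\apache-hive-2.1.1-bin\my_hive
$ mkdir D:\temp\hadoop\apache-hive-2.1.1-bin\my_hive\scratch_dir
$ mkdir D:\temp\hadoop\apache-hive-2.1.1-bin\my_hive\resources_dir
$ mkdir D:\temp\hadoop\apache-hive-2.1.1-bin\my_hive\querylog_dir
$ mkdir D:\temp\hadoop\apache-hive-2.1.1-bin\my_hive\operation_logs_dir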

3) Configuration files under conf (rename the templates; copy commands follow)

hive-default.xml.template              ----->    hive-site.xml

hive-env.sh.template                   ----->    hive-env.sh

hive-exec-log4j2.properties.template   ----->    hive-exec-log4j2.properties

hive-log4j2.properties.template        ----->    hive-log4j2.properties
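
In other words, from %HIVE_HOME%\conf copy each template to its active name:

$ cd /d D:\temp\hadoop\apache-hive-2.1.1-bin\conf
$ copy hive-default.xml.template hive-site.xml
$ copy hive-env.sh.template hive-env.sh
$ copy hive-exec-log4j2.properties.template hive-exec-log4j2.properties
$ copy hive-log4j2.properties.template hive-log4j2.properties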


hive-site.xml configuration *******************************************************************

<!-- Warehouse directory on HDFS -->
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
    <description>location of default database for the warehouse</description>
  </property>

<!-- Scratch directory on HDFS -->
  <property>
    <name>hive.exec.scratchdir</name>
    <value>/tmp/hive</value>
    <description>HDFS root scratch dir for Hive jobs which gets created with write all (733) permission. For each connecting user, an HDFS scratch dir: ${hive.exec.scratchdir}/&lt;username&gt; is created, with ${hive.scratch.dir.permission}.</description>
  </property>

<!-- Local scratch_dir directory -->
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>D:/temp/hadoop/apache-hive-2.1.1-bin/my_hive/scratch_dir</value>
    <description>Local scratch space for Hive jobs</description>
  </property>

<!-- Local resources_dir directory -->
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>D:/temp/hadoop/apache-hive-2.1.1-bin/my_hive/resources_dir/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
  </property>

<!-- Local querylog_dir directory -->
  <property>
    <name>hive.querylog.location</name>
    <value>D:/temp/hadoop/apache-hive-2.1.1-bin/my_hive/querylog_dir</value>
    <description>Location of Hive run time structured log file</description>
  </property>

<!-- Local operation_logs_dir directory -->
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>D:/temp/hadoop/apache-hive-2.1.1-bin/my_hive/operation_logs_dir</value>
    <description>Top level directory where operation logs are stored if logging functionality is enabled</description>
  </property>

<!-- Metastore database JDBC URL -->
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/hive?serverTimezone=UTC&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
    <description>
      JDBC connect string for a JDBC metastore.
      To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
      For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
  </property>

<!-- JDBC driver class for the metastore -->
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
  </property>

<!-- Metastore database user -->
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
    <description>Username to use against metastore database</description>
  </property>

<!-- Metastore database password -->
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root</value>
    <description>password to use against metastore database</description>
  </property>

<!-- Fixes: Caused by: MetaException(message:Version information not found in metastore.) -->
  <property>
    <name>hive.metastore.schema.verification</name>
    <value>false</value>
    <description>
      Enforce metastore schema version consistency.
      True: Verify that version information stored in is compatible with one from Hive jars.  Also disable automatic
            schema migration attempt. Users are required to manually migrate schema after Hive upgrade which ensures
            proper metastore schema migration. (Default)
      False: Warn if the version information stored in metastore doesn't match with one from in Hive jars.
    </description>
  </property>


<!-- Fixes the error: Required table missing : "DBS" in Catalog "" Schema "" -->
  <property>
    <name>datanucleus.schema.autoCreateAll</name>
    <value>true</value>
    <description>Auto creates necessary schema on a startup if one doesn't exist. Set this to false, after creating it once. To enable auto create also set hive.metastore.schema.verification=false. Auto creation is not recommended for production use cases, run schematool command instead.</description>
  </property>

  hive-env.sh configuration *******************************************************************

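A minimal sketch of typical hive-env.sh entries, pointing Hive at Hadoop and at its own conf and lib directories; the Hadoop path below is a placeholder and must be adjusted to the actual installation:

# Minimal hive-env.sh sketch -- adjust paths to your installation
export HADOOP_HOME=D:/temp/hadoop/hadoop-2.7.3                     # placeholder Hadoop path
export HIVE_CONF_DIR=D:/temp/hadoop/apache-hive-2.1.1-bin/conf
export HIVE_AUX_JARS_PATH=D:/temp/hadoop/apache-hive-2.1.1-bin/lib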

4) Add the JDBC driver jar to the lib directory

    mysql-connector-java

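Download MySQL Connector/J and drop the jar into Hive's lib directory (the version number below is only an example):

$ copy mysql-connector-java-5.1.46.jar D:\temp\hadoop\apache-hive-2.1.1-bin\lib\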

5) Replace the bin directory

    The Hive_x.x.x_bin.tar.gz releases are missing the executables and launch scripts Hive needs to run on Windows.


Replace the bin directory of D:\temp\hadoop\apache-hive-2.1.1-bin with apache-hive-1.0.0-src\bin, which still ships the Windows .cmd scripts.

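A sketch of that replacement from a command prompt, assuming the extracted source release sits in the current directory:

$ xcopy /E /Y apache-hive-1.0.0-src\bin D:\temp\hadoop\apache-hive-2.1.1-bin\bin\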

6) Initialize the metastore database

hive --service metastore

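With datanucleus.schema.autoCreateAll=true, the metastore tables are created when the service first starts; where the schematool script works on Windows, explicit initialization is the cleaner route (a sketch, not verified on this setup):

$ schematool -dbType mysql -initSchema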

Note: if the database initialization fails, run the schema SQL script directly.

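The metastore DDL ships with Hive under scripts\metastore\upgrade\mysql, so a plausible equivalent is (the exact file name depends on the Hive version):

$ mysql -u root -p hive < D:\temp\hadoop\apache-hive-2.1.1-bin\scripts\metastore\upgrade\mysql\hive-schema-2.1.0.mysql.sql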

7) Run Hive

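Roughly: run the metastore service in one window and the Hive CLI in another:

$ hive --service metastore    # window 1: metastore service
$ hive                        # window 2: Hive CLI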

8) Problems when running Hive

    impossible to write to binary log since binlog_format=statement
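
This error comes from MySQL: with statement-based binary logging, the server refuses some statements the metastore issues. A common workaround, assuming sufficient privileges, is to switch the binlog format (add binlog_format=mixed to my.ini to persist it across restarts):

$ mysql -u root -p -e "SET GLOBAL binlog_format='MIXED';"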