
Installing Hive 1.2.1 on Hadoop 2.6.x

1. Configure hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username for the metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value></value>
  <description></description>
</property>

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/app/data/hive/warehouse</value>
</property>

<!-- Without the following properties, error 1 (below) occurs. -->

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/app/data/hive/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.downloaded.resources.dir</name>
  <description>Temporary local directory for added resources in the remote file system</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <value>/app/data/hive/iotmp/log</value>
  <description>Location of Hive run-time structured log files</description>
</property>

<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/app/data/hive/iotmp/operation_logs</value>
  <description>Top-level directory where operation logs are stored if logging is enabled</description>
</property>
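Hive does not create the local scratch and log directories named above on its own. A minimal sketch for pre-creating them (the HIVE_DATA_DIR helper variable is an assumption; its default matches the paths used in this config):

```shell
# Pre-create the local Hive directories referenced in hive-site.xml.
# HIVE_DATA_DIR is an assumed helper variable, defaulting to the paths above.
HIVE_DATA_DIR="${HIVE_DATA_DIR:-/app/data/hive}"
mkdir -p "$HIVE_DATA_DIR/warehouse" \
         "$HIVE_DATA_DIR/iotmp/log" \
         "$HIVE_DATA_DIR/iotmp/operation_logs"
ls "$HIVE_DATA_DIR/iotmp"
```

Run this on the node where Hive starts; the user launching Hive must own these directories.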

2. Configure hive-env.sh

export HIVE_HOME=/app/bigdata/hive/apache-hive-1.2.1-bin

export HIVE_CONF_DIR=/app/bigdata/hive/apache-hive-1.2.1-bin/conf

3. Configure hive-config.sh

export JAVA_HOME=/app/bigdata/java/jdk1.7.0_79

export HADOOP_HOME=/app/bigdata/hadoop/hadoop-2.6.4

export SPARK_HOME=/app/bigdata/spark/spark-1.6.2-bin-hadoop2.6

4. Configure logging

vim hive-log4j.properties

hive.log.dir=/app/bigdata/hive/hive/log/

5. Grant MySQL privileges for Hive

grant select,insert,update,delete,create,drop on vtdc.employee to 'joe'@'10.163.225.87' identified by '123';

This grants the user joe, connecting from 10.163.225.87, the select, insert, update, delete, create, and drop privileges on the employee table in database vtdc, and sets the password to 123.

grant all on hive.* to root@'master' identified by 'root';

flush privileges;

6. Start the Hadoop services (NameNode web UI: http://192.168.1.10:50070/)

sh sbin/start-dfs.sh

sbin/start-yarn.sh

7. Start Hive (run $HIVE_HOME/bin/hive)

8. Common Hive CRUD operations

    create database: create database testdb;

    show databases: show databases;

    switch database: use testdb;

    show tables: show tables;

    create table: create table student(id int);
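Put together, a minimal HiveQL session exercising the commands above (database and table names taken from this article):

```sql
CREATE DATABASE testdb;
SHOW DATABASES;
USE testdb;
CREATE TABLE student(id INT);
SHOW TABLES;
```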

9. Importing and exporting Hive data

    1. First way to load data into student

        Note: loading data with LOAD DATA does not use MapReduce, while an INSERT into a bucketed table does.

        Import data: load data local inpath '/app/bigdata/hive/apache-hive-1.2.1-bin/student' into table student;

        A select * without conditions does not run MapReduce, so it returns quickly; if the last row comes back as NULL, the file contains a trailing blank line.

    2. Second way to load data into student

        Create a file student_1 under /usr/local/hive/ and write a single column of numbers into it;

        Run hadoop fs -put student /app/data/hive/warehouse/testdb.db/student

        or hdfs dfs -put student /app/data/hive/warehouse/testdb.db/student
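Since a trailing blank line in the data file shows up as a NULL row in select * (see the note above), it helps to strip empty lines before uploading. A sketch with example file contents (the hdfs command is shown commented out, as it needs a running cluster):

```shell
# Build a one-column data file with an accidental trailing blank line,
# then delete the empty lines so Hive does not read a NULL row.
printf '1\n2\n3\n\n' > student
sed -i '/^$/d' student
wc -l < student
# On the cluster, upload it afterwards:
# hdfs dfs -put student /app/data/hive/warehouse/testdb.db/student
```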

10. Batch-killing Linux processes (here, all Hadoop processes)

ps aux|grep hadoop|grep -v grep|awk '{print $2}'|xargs kill -9
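A safe way to check what the pipeline above would kill is to replace xargs kill -9 with a plain print. A dry-run sketch on fabricated ps aux lines:

```shell
# Feed sample ps output through the same filter chain and print the PIDs
# that would be killed, instead of actually sending SIGKILL.
printf '%s\n' \
  'hadoop   1234  0.5  2.1 java NameNode' \
  'hadoop   1235  0.4  1.8 java DataNode' \
  'root     2001  0.0  0.1 grep hadoop' \
  | grep hadoop | grep -v grep | awk '{print $2}'
```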