
Installing Hive 1.2.1 on Hadoop 2.6.x

1. Configure hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://master:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>root</value>
  <description>Username for the metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <description></description>
</property>

<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/app/data/hive/warehouse</value>
</property>

# If the properties below are not configured, error 1 will occur.

<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/app/data/hive/iotmp</value>
  <description>Local scratch space for Hive jobs</description>
</property>

<property>
  <name>hive.downloaded.resources.dir</name>
  <description>Temporary local directory for added resources in the remote file system</description>
</property>

<property>
  <name>hive.querylog.location</name>
  <value>/app/data/hive/iotmp/log</value>
  <description>Location of Hive run-time structured log files</description>
</property>

<property>
  <name>hive.server2.logging.operation.log.location</name>
  <value>/app/data/hive/iotmp/operation_logs</value>
  <description>Top-level directory where operation logs are stored if logging functionality is enabled</description>
</property>
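If Hive cannot create or write these local directories it fails at startup, so it is worth creating them up front. A minimal sketch, assuming the /app/data/hive paths from hive-site.xml above:

```shell
# Create the local scratch/log directories referenced in hive-site.xml above.
# The /app/data/hive base path is taken from the config; adjust to your layout.
mkdir -p /app/data/hive/iotmp/log /app/data/hive/iotmp/operation_logs
chmod -R 755 /app/data/hive/iotmp
```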

2. Configure hive-env.sh

export HIVE_HOME=/app/bigdata/hive/apache-hive-1.2.1-bin

export HIVE_CONF_DIR=/app/bigdata/hive/apache-hive-1.2.1-bin/conf

3. Configure hive-config.sh

export JAVA_HOME=/app/bigdata/java/jdk1.7.0_79

export HADOOP_HOME=/app/bigdata/hadoop/hadoop-2.6.4

export SPARK_HOME=/app/bigdata/spark/spark-1.6.2-bin-hadoop2.6

4. Configure logging

vim hive-log4j.properties

hive.log.dir=/app/bigdata/hive/hive/log/

5. Grant MySQL privileges for Hive

grant select,insert,update,delete,create,drop on vtdc.employee to joe@'10.163.225.87' identified by '123';

This grants the user joe, connecting from 10.163.225.87, the select, insert, update, delete, create, and drop privileges on the employee table of database vtdc, with the password set to 123.

grant all on hive.* to root@'master' identified by 'root';

flush privileges;

6. Start the Hadoop services (NameNode web UI: http://192.168.1.10:50070/)

sh sbin/start-dfs.sh

sbin/start-yarn.sh

7. Start Hive
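Hive needs the MySQL JDBC driver on its classpath before the metastore configured above can be reached. A hedged sketch; the connector jar name and version are assumptions, not part of the original setup:

```shell
# Assumption: connector jar name/version; place the MySQL JDBC driver in Hive's lib/
cp mysql-connector-java-5.1.38-bin.jar /app/bigdata/hive/apache-hive-1.2.1-bin/lib/

# Start the Hive CLI (picks up HIVE_HOME/HIVE_CONF_DIR from hive-env.sh)
cd /app/bigdata/hive/apache-hive-1.2.1-bin
bin/hive
```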

8. Common Hive database CRUD operations

    create database: create database testdb;

    show databases: show databases;

    use database: use testdb;

    show tables: show tables;

    create table: create table student(id int);

9. Importing and exporting Hive data

    1. First method: load data into the student table

        Note: LOAD DATA does not launch a MapReduce job, whereas INSERT into a bucketed table does.

        import data: load data local inpath '/app/bigdata/hive/apache-hive-1.2.1-bin/student' into table student;

        A SELECT * without a WHERE clause does not launch MapReduce, so it runs quickly; the last row comes back as NULL because the file contains a blank line.

    2. Second method: load data into the student table

        Create a student_1 file under /usr/local/hive/ and write a column of numbers into it;

        Run hadoop fs -put student /app/data/hive/warehouse/testdb.db/student

        or hdfs dfs -put student /app/data/hive/warehouse/testdb.db/student
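As noted in the first method, a blank line in the data file shows up as a NULL row after loading. A quick cleanup sketch (assumes GNU sed; `student` stands in for the real data file):

```shell
# Strip blank lines before loading so Hive does not parse them as NULL rows.
printf '1\n2\n\n3\n' > student   # sample file with a stray blank line
sed -i '/^$/d' student           # GNU sed: delete empty lines in place
wc -l < student                  # 3 data lines remain
```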

10. Batch-kill Hadoop processes on Linux

ps aux|grep hadoop|grep -v grep|awk '{print $2}'|xargs kill -9
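The one-liner works by filtering `ps aux` output: `grep hadoop` keeps matching lines, `grep -v grep` drops the grep process itself, and `awk '{print $2}'` extracts the PID column for `xargs kill -9`. A dry run on made-up sample lines shows what actually reaches kill:

```shell
# Simulated `ps aux` lines (made up for illustration); column 2 is the PID
sample='user 101 0.0 0.1 java org.apache.hadoop.hdfs.server.namenode.NameNode
user 202 0.0 0.1 grep hadoop
user 303 0.0 0.1 qemu-system-x86'
pids=$(printf '%s\n' "$sample" | grep hadoop | grep -v grep | awk '{print $2}')
echo "$pids"   # 101: only the real Hadoop process survives the filters
```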