
Testing Kerberos-based authentication and Sentry-based authorization on CDH

This article walks through a test of CDH's Kerberos-based authentication and Sentry-based authorization features.

1. Prepare test data

```
cat /tmp/events.csv
10.1.2.3,US,android,createNote
10.200.88.99,FR,windows,updateNote
10.1.2.3,US,android,updateNote
10.200.88.77,FR,ios,createNote
10.1.4.5,US,windows,updateTag
```
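The file is plain comma-delimited text whose four fields map one-to-one onto the Hive table columns created later. As a quick sanity check, here is a minimal Python sketch (illustrative, not part of the original test) that parses the rows the same way the tables will:

```python
# Illustrative sketch: split each comma-delimited line into the
# (ip, country, client, action) columns used by the Hive tables below.
import csv
import io

SAMPLE = """\
10.1.2.3,US,android,createNote
10.200.88.99,FR,windows,updateNote
10.1.2.3,US,android,updateNote
10.200.88.77,FR,ios,createNote
10.1.4.5,US,windows,updateTag
"""

def parse_events(text):
    """Return one dict per line, keyed by the table1 column names."""
    cols = ["ip", "country", "client", "action"]
    return [dict(zip(cols, row)) for row in csv.reader(io.StringIO(text))]

events = parse_events(SAMPLE)
print(len(events))          # 5
print(events[0]["action"])  # createNote
```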

2. Create users

2.1. Create OS users

Create the OS users on every node in the cluster and set their passwords:

```
useradd user1
passwd user1
useradd user2
passwd user2
useradd user3
passwd user3
```

2.2. Create Kerberos principals

```
kadmin.local -q "addprinc user1"
kadmin.local -q "addprinc user2"
kadmin.local -q "addprinc user3"
```

3. Create databases and tables

3.1. Create the databases

admin is the Sentry superuser; it was configured when Sentry authorization was set up.

```
kinit admin
```

Connect to HiveServer2 through beeline and run the following statements to create a Hive superuser role and grant it to the admin group, giving admin full control over the Hive warehouse:

```
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
create role admin_role;
GRANT ALL ON SERVER server1 TO ROLE admin_role;
GRANT ROLE admin_role TO GROUP admin;
```

Create two test databases:

```
create database db1;
create database db2;
```

3.2. Create the tables

Create a test table in each database and load the test data into it:

```
create table db1.table1 (
    ip STRING, country STRING, client STRING, action STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
create table db2.table1 (
    ip STRING, country STRING, client STRING, action STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
create table db2.table2 (
    ip STRING, country STRING, client STRING, action STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
load data local inpath '/home/iie/events.csv' overwrite into table db1.table1;
load data local inpath '/home/iie/events.csv' overwrite into table db2.table1;
load data local inpath '/home/iie/events.csv' overwrite into table db2.table2;
```

4. Grant privileges

4.1. Grant user1 all privileges on db1

```
create role user1_role;
GRANT ALL ON DATABASE db1 TO ROLE user1_role;
GRANT ROLE user1_role TO GROUP user1;
```

4.2. Grant user2 all privileges on db2

```
create role user2_role;
GRANT ALL ON DATABASE db2 TO ROLE user2_role;
GRANT ROLE user2_role TO GROUP user2;
```

4.3. Grant user3 SELECT on db2.table1

```
create role user3_role;
use db2;
GRANT SELECT ON TABLE table1 TO ROLE user3_role;
GRANT ROLE user3_role TO GROUP user3;
```
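Sentry's model is group-based: privileges attach to roles, roles are granted to groups, and a user receives the union of the roles held by its groups. A small Python sketch of the chain configured above (the names mirror the GRANT statements; the lookup logic is an illustration, not Sentry's implementation):

```python
# Illustrative model of the Sentry group -> role -> privilege chain set up
# in sections 3 and 4. ALL and SELECT both permit reads, so the action
# component is not checked here.
GROUP_ROLES = {
    "admin": {"admin_role"},
    "user1": {"user1_role"},
    "user2": {"user2_role"},
    "user3": {"user3_role"},
}
ROLE_PRIVS = {
    "admin_role": {("server", "server1", "ALL")},
    "user1_role": {("database", "db1", "ALL")},
    "user2_role": {("database", "db2", "ALL")},
    "user3_role": {("table", "db2.table1", "SELECT")},
}

def can_read(group, database, table):
    """True if any role of the group covers the object (server > db > table)."""
    for role in GROUP_ROLES.get(group, set()):
        for scope, obj, action in ROLE_PRIVS[role]:
            if scope == "server":
                return True
            if scope == "database" and obj == database:
                return True
            if scope == "table" and obj == f"{database}.{table}":
                return True
    return False

print(can_read("user1", "db1", "table1"))  # True
print(can_read("user1", "db2", "table1"))  # False
print(can_read("user3", "db2", "table2"))  # False
```

This predicts exactly the visibility observed in the Hive tests of section 5.1.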

5. Test the privileges

5.1. Hive tests

5.1.1. admin has privileges on the entire Hive warehouse

```
kinit admin
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
show databases;
```

5.1.2. user1 can only see db1 and default

```
kinit user1
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
0: jdbc:hive2://vmw208:10000/> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| db1            |
| default        |
+----------------+--+
```

5.1.3. user2 can only see db2 and default

```
kinit user2
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
0: jdbc:hive2://vmw208:10000/> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| db2            |
| default        |
+----------------+--+
```

5.1.4. user3 can only see db2.table1 and default

```
kinit user3
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
0: jdbc:hive2://vmw208:10000/> show databases;
+----------------+--+
| database_name  |
+----------------+--+
| db2            |
| default        |
+----------------+--+
0: jdbc:hive2://node0a17:10000/> use db2;
0: jdbc:hive2://node0a17:10000/> show tables;
INFO : OK
+-----------+--+
| tab_name  |
+-----------+--+
| table1    |
+-----------+--+
```

5.2. HDFS tests

Once HDFS ACL synchronization with Sentry is enabled, the HDFS permissions under the Sentry-managed directory (/user/hive/warehouse) are kept in sync with the Sentry privileges.

5.2.1. Switch to the hive user and check the warehouse file permissions

With HDFS ACL/Sentry sync enabled, privilege changes on Hive databases are propagated to the ACLs of the corresponding HDFS files.

```
[root@vmw208 home]# kinit hive
[root@vmw208 home]# hdfs dfs -getfacl -R /user/hive/warehouse/
# file: /user/hive/warehouse
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group::---
group:hive:rwx
mask::rwx
other::--x
# file: /user/hive/warehouse/db1.db
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user1:rwx
group::---
group:hive:rwx
mask::rwx
other::--x
# file: /user/hive/warehouse/db1.db/table1
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user1:rwx
group::---
group:hive:rwx
mask::rwx
other::--x
# file: /user/hive/warehouse/db1.db/table1/events.csv
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user1:rwx
group::---
group:hive:rwx
mask::rwx
other::--x
# file: /user/hive/warehouse/db2.db
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user2:rwx
group::---
group:hive:rwx
mask::rwx
other::--x
# file: /user/hive/warehouse/db2.db/table1
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user2:rwx
group::---
group:hive:rwx
mask::rwx
other::--x
# file: /user/hive/warehouse/db2.db/table1/events.csv
# owner: hive
# group: hive
user::rwx
user:hive:rwx
group:user2:rwx
group::---
group:hive:rwx
mask::rwx
other::--x
```
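The pattern in the getfacl output is that each database and table path carries a named-group entry (group:user1:rwx under db1.db, group:user2:rwx under db2.db) on top of a closed base permission set. A simplified Python sketch of the POSIX-ACL read check (illustrative only; it ignores the mask, which is rwx everywhere here, and assumes each test user's primary group shares the user's name, as useradd set up in section 2.1):

```python
# Simplified POSIX-ACL read check: owner, then named users, then named
# groups / owning group, then other. Mask handling is omitted (rwx here).
def acl_allows_read(acl, user, groups):
    if user == acl["owner"]:
        return "r" in acl["user_perm"]
    if user in acl.get("named_users", {}):
        return "r" in acl["named_users"][user]
    for g in groups:
        if g in acl.get("named_groups", {}):
            return "r" in acl["named_groups"][g]
        if g == acl["group"]:
            return "r" in acl["group_perm"]
    return "r" in acl["other_perm"]

# ACLs as reported by `hdfs dfs -getfacl` for the two database directories.
db1_acl = {"owner": "hive", "group": "hive", "user_perm": "rwx",
           "group_perm": "---", "other_perm": "--x",
           "named_users": {"hive": "rwx"},
           "named_groups": {"user1": "rwx", "hive": "rwx"}}
db2_acl = dict(db1_acl, named_groups={"user2": "rwx", "hive": "rwx"})

print(acl_allows_read(db1_acl, "user1", ["user1"]))  # True
print(acl_allows_read(db2_acl, "user1", ["user1"]))  # False
```

This is exactly why, in the next two subsections, user1 can read db1 but gets "Permission denied" on db2, and vice versa for user2.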

  

5.2.2. Switch to user1 and check the HDFS files

```
[root@vmw208 home]# kinit user1
Password for user1@HADOOP.COM:
[root@vmw208 home]# hdfs dfs -ls /user/hive/warehouse/db2.db
ls: Permission denied: user=user1, access=READ_EXECUTE, inode="/user/hive/warehouse/db2.db":hive:hive:drwxrwx--x
[root@vmw208 home]# hdfs dfs -cat /user/hive/warehouse/db2.db/table1/events.csv
cat: Permission denied: user=user1, access=READ, inode="/user/hive/warehouse/db2.db/table1/events.csv":hive:hive:-rwxrwx--x
[root@vmw208 home]# hdfs dfs -ls /user/hive/warehouse/db1.db
Found 1 items
drwxrwx--x+ - hive hive 2016-09-29 16:54 /user/hive/warehouse/db1.db/table1
[root@vmw208 home]# hdfs dfs -cat /user/hive/warehouse/db1.db/table1/events.csv
10.1.2.3,US,android,createNote
10.200.88.99,FR,windows,updateNote
10.1.2.3,US,android,updateNote
10.200.88.77,FR,ios,createNote
10.1.4.5,US,windows,updateTag
```

5.2.3. Switch to user2 and check the HDFS files

```
[root@vmw208 home]# kinit user2
Password for user2@HADOOP.COM:
[root@vmw208 home]# hdfs dfs -cat /user/hive/warehouse/db1.db/table1/events.csv
cat: Permission denied: user=user2, access=READ, inode="/user/hive/warehouse/db1.db/table1/events.csv":hive:hive:-rwxrwx--x
[root@vmw208 home]# hdfs dfs -cat /user/hive/warehouse/db2.db/table1/events.csv
10.1.2.3,US,android,createNote
10.200.88.99,FR,windows,updateNote
10.1.2.3,US,android,updateNote
10.200.88.77,FR,ios,createNote
10.1.4.5,US,windows,updateTag
```

5.3. Spark tests

5.3.1. Spark reads a Hive table and prints it to the console

(1) Test as user1

```
[root@vmw209 xdf]# kinit user1
Password for user1@HADOOP.COM:
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.QueryTable --master local /home/xdf/spark.jar db2 table1
……
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=user1, access=READ_EXECUTE, inode="/user/hive/warehouse/db2.db/table1":hive:hive:drwxrwx--x
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.QueryTable --master local /home/xdf/spark.jar db1 table1
……
+------------+-------+-------+----------+
|          ip|country| client|    action|
+------------+-------+-------+----------+
|    10.1.2.3|     US|android|createNote|
|10.200.88.99|     FR|windows|updateNote|
|    10.1.2.3|     US|android|updateNote|
|10.200.88.77|     FR|    ios|createNote|
|    10.1.4.5|     US|windows| updateTag|
+------------+-------+-------+----------+
```

(2) Test as user2

```
[root@vmw209 xdf]# kinit user2
Password for user2@HADOOP.COM:
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.QueryTable --master local /home/xdf/spark.jar db1 table1
……
Exception in thread "main" org.apache.hadoop.security.AccessControlException: Permission denied: user=user2, access=READ_EXECUTE, inode="/user/hive/warehouse/db1.db/table1":hive:hive:drwxrwx--x
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.QueryTable --master local /home/xdf/spark.jar db2 table1
……
+------------+-------+-------+----------+
|          ip|country| client|    action|
+------------+-------+-------+----------+
|    10.1.2.3|     US|android|createNote|
|10.200.88.99|     FR|windows|updateNote|
|    10.1.2.3|     US|android|updateNote|
|10.200.88.77|     FR|    ios|createNote|
|    10.1.4.5|     US|windows| updateTag|
+------------+-------+-------+----------+
```

5.3.2. Spark reads file data and writes it into a Hive table

Use the helper program spark.jar to read the local file /home/xdf/events.csv and write its contents into db2.table2.

Test as user2:

```
kinit user2
beeline -u "jdbc:hive2://vmw208:10000/;principal=hive/vmw208@HADOOP.COM"
use db2;
create table table2 (
    ip STRING, country STRING, client STRING, action STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
```

```
[root@vmw209 xdf]# spark-submit --class iie.hadoop.permission.HCatWriterTest --master local /home/xdf/spark.jar /home/xdf/events.csv db2 table2
```

Success!

Writing to db1.table1 fails with a permission error:

```
Exception in thread "main" org.apache.hive.hcatalog.common.HCatException : 2004 : HCatOutputFormat not initialized, setOutput has to be called. Cause : org.apache.hadoop.security.AccessControlException: Permission denied: user=user2, access=WRITE, inode="/user/hive/warehouse/db1.db/table1":hive:hive:drwxrwx--x
```

The above is only for testing: authenticating with kinit plus a password is limited by the ticket lifetime and is not suitable for production. Fortunately spark-submit provides the relevant options:

```
spark-submit
……
--principal   # the user's Kerberos principal
--keytab      # the keytab file generated for that principal
```

Spark's access control works through the permissions set on HDFS/Hive files and directories: each user holds different permissions, and when submitting a Spark job the user passes the matching Kerberos principal and keytab. The submit command looks like this:

```
spark-submit --class iie.hadoop.permission.QueryTable --master yarn-cluster --principal=user1@HADOOP.COM --keytab=/home/user1/user1.keytab /home/user1/spark.jar db1 table1
```

Here --principal and --keytab must belong to the same user.

Note: with spark-submit, --principal and --keytab only take effect in yarn-cluster mode.

5.4. Kafka tests

5.4.1. Authentication

The kafka user is the superuser for Kafka authorization.

```
[root@node10 iie]# kinit -kt /home/iie/kafka.keytab kafka
```

5.4.2. Create topics

Create topic1 and topic2:

```
[root@node10 iie]# kafka-topics --zookeeper node11:2181/kafka --create --topic topic1 --partitions 2 --replication-factor 1
[root@node10 iie]# kafka-topics --zookeeper node11:2181/kafka --create --topic topic2 --partitions 2 --replication-factor 1
```

5.4.3. Grant ACLs

Grant user1 read and write access to topic1:

```
[root@node10 iie]# kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --add --allow-principal User:user1 --allow-host node10 --producer --topic topic1 --group console-consumer-9175
[root@node10 iie]# kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --add --allow-principal User:user1 --allow-host node10 --consumer --topic topic1 --group console-consumer-9175
```

Grant user2 read and write access to topic2:

```
[root@node10 iie]# kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --add --allow-principal User:user2 --allow-host node10 --producer --topic topic2 --group console-consumer-9175
[root@node10 iie]# kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --add --allow-principal User:user2 --allow-host node10 --consumer --topic topic2 --group console-consumer-9175
```
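The --producer and --consumer switches are convenience shorthands that each expand into several individual ACL entries (broadly: Write and Describe on the topic for --producer; Read and Describe on the topic plus Read on the consumer group for --consumer). A Python sketch of that expansion and of the exact-match allow check (illustrative only; real Kafka ACLs also support wildcards, deny rules, and additional operations):

```python
# Illustrative model of Kafka ACL entries as (principal, host, op, resource)
# tuples, with the convenience-flag expansion and a simple exact-match check.
ACLS = set()

def add_producer(principal, host, topic):
    ACLS.update({(principal, host, "Write", ("Topic", topic)),
                 (principal, host, "Describe", ("Topic", topic))})

def add_consumer(principal, host, topic, group):
    ACLS.update({(principal, host, "Read", ("Topic", topic)),
                 (principal, host, "Describe", ("Topic", topic)),
                 (principal, host, "Read", ("Group", group))})

def allowed(principal, host, op, resource):
    return (principal, host, op, resource) in ACLS

add_producer("User:user1", "node10", "topic1")
add_consumer("User:user1", "node10", "topic1", "console-consumer-9175")
add_producer("User:user2", "node10", "topic2")
add_consumer("User:user2", "node10", "topic2", "console-consumer-9175")

print(allowed("User:user1", "node10", "Read", ("Topic", "topic1")))  # True
print(allowed("User:user2", "node10", "Read", ("Topic", "topic1")))  # False
```

The second print matches the TopicAuthorizationException that user2 hits in section 5.4.7 when consuming topic1.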

5.4.4. List ACLs

```
[root@node10 iie]# kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --list
Current ACLs for resource `Topic:topic1`:
    User:user1 has Allow permission for operations: Write from hosts: node10
    User:user1 has Allow permission for operations: Read from hosts: node10
Current ACLs for resource `Topic:topic2`:
    User:user2 has Allow permission for operations: Read from hosts: node10
    User:user2 has Allow permission for operations: Write from hosts: node10
```

5.4.5. Create producer and consumer config files

Create consumer.properties:

```
cat /etc/kafka/conf/consumer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
group.id=console-consumer-9175
```

Create producer.properties:

```
cat /etc/kafka/conf/producer.properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=GSSAPI
sasl.kerberos.service.name=kafka
```

5.4.6. Produce data

Produce data from the command line:

```
[root@node10 iie]# kinit user1
[root@node10 iie]# kafka-console-producer --broker-list node12:9092 --topic topic1 --producer.config /etc/kafka/conf/producer.properties
123123
123123
```

5.4.7. Consume data

Consume data from the command line:

```
[root@node10 iie]# kinit user1
[root@node10 iie]# kafka-console-consumer --bootstrap-server node12:9092 --topic topic1 --new-consumer --from-beginning --consumer.config /etc/kafka/conf/consumer.properties
123123
123123
```

When the user has no privilege on the topic, consumption fails with an error:

```
[root@node10 iie]# kinit user2
Password for user2@HADOOP.COM:
[root@node10 iie]# kafka-console-consumer --bootstrap-server node12:9092 --topic topic1 --new-consumer --from-beginning --consumer.config /etc/kafka/conf/consumer.properties
[2016-10-12 15:38:01,599] ERROR Unknown error when running consumer:  (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [topic1]
```

5.4.8. Remove ACLs

Authenticate as the administrator to remove ACLs:

```
[root@node10 iie]# kinit -kt /home/iie/kafka.keytab kafka
```

Remove user1's consumer privilege on topic1:

```
[root@node10 iie]# kafka-acls --authorizer-properties zookeeper.connect=node11:2181/kafka --remove --allow-principal User:user1 --allow-host node10 --consumer --topic topic1 --group console-consumer-92175
Are you sure you want to remove ACLs:
    User:user1 has Allow permission for operations: Read from hosts: node10
    User:user1 has Allow permission for operations: Describe from hosts: node10
from resource `Topic:topic1`? (y/n)
y
Are you sure you want to remove ACLs:
    User:user1 has Allow permission for operations: Read from hosts: node10
from resource `Group:console-consumer-92175`? (y/n)
y
Current ACLs for resource `Topic:topic1`:
    User:Aluser1 has Allow permission for operations: Read from hosts: node10
    User:Aluser1 has Allow permission for operations: Describe from hosts: node10
    User:user1 has Allow permission for operations: Write from hosts: node10
Current ACLs for resource `Group:console-consumer-92175`:
```
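A small Python sketch (illustrative, not Kafka's implementation) of what --remove --consumer deletes: the consumer-side entries, namely Read and Describe on the topic plus Read on the group, while the producer-side Write ACL stays in place. That is why user1 can no longer consume topic1 afterwards even though the write grant was never touched:

```python
# Illustrative model: user1's ACL entries on topic1 before removal, and the
# effect of `kafka-acls --remove --consumer` on that set.
ACLS = {
    ("User:user1", "node10", "Write", ("Topic", "topic1")),
    ("User:user1", "node10", "Describe", ("Topic", "topic1")),
    ("User:user1", "node10", "Read", ("Topic", "topic1")),
    ("User:user1", "node10", "Read", ("Group", "console-consumer-9175")),
}

def remove_consumer(principal, host, topic, group):
    """Drop the consumer-side entries, mirroring --remove --consumer."""
    ACLS.difference_update({
        (principal, host, "Read", ("Topic", topic)),
        (principal, host, "Describe", ("Topic", topic)),
        (principal, host, "Read", ("Group", group)),
    })

remove_consumer("User:user1", "node10", "topic1", "console-consumer-9175")
# The write ACL survives; the read ACLs are gone.
print(("User:user1", "node10", "Write", ("Topic", "topic1")) in ACLS)  # True
print(("User:user1", "node10", "Read", ("Topic", "topic1")) in ACLS)   # False
```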

  

Consuming topic1 as user1 now fails, confirming the privilege has been removed:

```
[root@node10 iie]# kinit user1
Password for user1@HADOOP.COM:
[root@node10 iie]# kafka-console-consumer --bootstrap-server node12:9092 --topic topic1 --new-consumer --from-beginning --consumer.config /etc/kafka/conf/consumer.properties
[2016-10-12 15:45:11,572] WARN The configuration sasl.mechanism = GSSAPI was supplied but isn't a known config. (org.apache.kafka.clients.consumer.ConsumerConfig)
[2016-10-12 15:45:11,914] WARN Not authorized to read from topic topic1. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2016-10-12 15:45:11,916] ERROR Error processing message, terminating consumer process:  (kafka.tools.ConsoleConsumer$)
org.apache.kafka.common.errors.TopicAuthorizationException: Not authorized to access topics: [topic1]
[2016-10-12 15:45:11,920] WARN Not authorized to read from topic topic1. (org.apache.kafka.clients.consumer.internals.Fetcher)
[2016-10-12 15:45:11,921] ERROR Not authorized to commit to topics [topic1] for group console-consumer-9175 (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2016-10-12 15:45:11,922] WARN Auto offset commit failed for group console-consumer-9175: Not authorized to access topics: [topic1] (org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
[2016-10-12 15:45:11,927] WARN TGT renewal thread has been interrupted and will exit. (org.apache.kafka.common.security.kerberos.Login)
Processed a total of 0 messages
```
