
Fixing the Permission Error When Hadoop Runs Hive Jobs

Today, Xiao Qiao ran into the following problem while running a Hive job:

org.apache.hadoop.security.AccessControlException: Permission denied: user=root, access=WRITE, inode="/user":hdfs:supergroup:drwxr-xr-x
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkFsPermission(FSPermissionChecker.java:265)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:251)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.check(FSPermissionChecker.java:232)
    at org.apache.hadoop.hdfs.server.namenode.FSPermissionChecker.checkPermission(FSPermissionChecker.java:176)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5490)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkPermission(FSNamesystem.java:5472)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkAncestorAccess(FSNamesystem.java:5446)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInternal(FSNamesystem.java:3600)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirsInt(FSNamesystem.java:3570)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3544)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:739)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:558)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1026)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1986)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:1982)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:1980)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:57)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:526)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:106)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:73)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2549)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2518)
    at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:827)
    at org.apache.hadoop.hdfs.DistributedFileSystem$16.doCall(DistributedFileSystem.java:823)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:823)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:816)
    at org.apache.hadoop.mapreduce.JobSubmissionFiles.getStagingDir(JobSubmissionFiles.java:125)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:348)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1295)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1292)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1292)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:425)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:136)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:151)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:65)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1485)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1263)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1091)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:931)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:921)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:268)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:220)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:422)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:790)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:684)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:623)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
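
The message itself points at the cause: the query was submitted as root, and the MapReduce job tried to create its staging directory under /user, which is owned by hdfs:supergroup and is not writable by other users (drwxr-xr-x). A quick listing confirms it (the exact output depends on your cluster, but the ownership and mode should match the error message):

hadoop fs -ls / | grep user
# expect something like: drwxr-xr-x   - hdfs supergroup   0 ...  /user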

There are plenty of fixes for this online, and almost all of them are the same: change the configuration so that permission checking is disabled. If you are running a CDH instance, the same change can also be made through the web UI.

In conf/hdfs-site.xml, find the dfs.permissions property and change its value to false:

<property>
  <name>dfs.permissions</name>
  <value>false</value>
  <description>
    If "true", enable permission checking in HDFS.
    If "false", permission checking is turned off,
    but all other behavior is unchanged.
    Switching from one parameter value to the other does not change the mode,
    owner or group of files or directories.
  </description>
</property>
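
If you edit the file by hand rather than through the CDH web UI, the NameNode has to be restarted before the change takes effect. A minimal sketch, assuming a plain (non-Cloudera-Manager) install with $HADOOP_HOME set; on Hadoop 2.x the old dfs.permissions key is read under the newer name dfs.permissions.enabled:

$HADOOP_HOME/sbin/stop-dfs.sh && $HADOOP_HOME/sbin/start-dfs.sh   # restart the HDFS daemons
hdfs getconf -confKey dfs.permissions.enabled                     # prints the value seen in the local configuration; should be false

That said, disabling permission checking cluster-wide is a blunt instrument. The steps below keep it enabled and fix the ownership instead.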

Our solution:

1) Edit the Hive configuration file so that Hive writes its intermediate (scratch) data to a different directory:

cd /opt/hive-0.9.0/conf

vi hive-site.xml  # change it as follows:

<property>
  <name>hive.exec.scratchdir</name>
  <!-- use hdfs:///hive_tmp as the root of Hive's intermediate data -->
  <value>/hive_tmp/hive-${user.name}</value>
  <description>Scratch space for Hive jobs</description>
</property>
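
The new scratch root has to exist in HDFS before the next step. A minimal sketch, assuming the hdfs account is the HDFS superuser on this cluster:

sudo -u hdfs hadoop fs -mkdir -p /hive_tmp   # create the scratch root as the HDFS superuser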

2) Change the owner and group of the hdfs:///hive_tmp directory:

hadoop fs -chown -R common_user:common_group /hive_tmp

# Grant group write access on hdfs:///hive_tmp so that every user in common_group can use it (rwx):

hadoop fs -chmod -R g+w /hive_tmp
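
A quick check of the resulting owner, group and mode before moving on:

hadoop fs -ls / | grep hive_tmp
# expect something like: drwxrwxr-x   - common_user common_group   0 ...  /hive_tmp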

3) Add the ordinary users (user1, user2, ...) to the group common_group:

usermod -a -G common_group user1
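
Repeat the same command for the remaining users, then check the membership; a user that was already logged in usually has to log in again before the new group is picked up:

usermod -a -G common_group user2
id user1   # common_group should appear in the groups list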

4) Switch to user1 and test a Hive query:

su user1

hive -e 'select * from taxi where speed > 150;'

# The query now succeeds.
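
You can also confirm that the job's intermediate data is now being written under the new scratch root rather than /user (the exact listing varies, since hive.exec.scratchdir creates one subdirectory per user):

hadoop fs -ls /hive_tmp
# expect a hive-user1 (and similar per-user) subdirectory here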
