We recently upgraded Hive in production (CDH 4.2.0 Hive 0.10 -> Apache Hive 0.11 with our patches) and rolled out Shark, stepping into quite a few pitfalls along the way. Let me start with a HiveServer2 problem.

After connecting with beeline, any query we ran would fail with:

User xxx don't have write privilegs under /tmp/hive-hdfs

That didn't make sense: impersonation was already enabled, so why was HiveServer2 still writing temp files into the shared scratchdir on HDFS? Reading the code, it turns out CDH 4.2.0's Hive 0.10 and Apache Hive 0.11 make this check differently:

Apache Hive 0.11 uses a per-user hive-xxx scratchdir only when Kerberos authentication is enabled; otherwise it falls back to the scratchdir of the user who started HiveServer2:

if (cliService.getHiveConf().getVar(ConfVars.HIVE_SERVER2_AUTHENTICATION)
        .equals(HiveAuthFactory.AuthTypes.KERBEROS.toString())
    && cliService.getHiveConf().getBoolVar(ConfVars.HIVE_SERVER2_ENABLE_DOAS)) {
  String delegationTokenStr = null;
  try {
    delegationTokenStr = cliService.getDelegationTokenFromMetaStore(userName);
  } catch (UnsupportedOperationException e) {
    // The delegation token is not applicable in the given deployment mode
  }
  sessionHandle = cliService.openSessionWithImpersonation(userName, req.getPassword(),
      req.getConfiguration(), delegationTokenStr);
} else {
  sessionHandle = cliService.openSession(userName, req.getPassword(),
      req.getConfiguration());
}
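For reference, here is a sketch of the hive-site.xml settings that the 0.11 check above reads. The property names are the real ones behind ConfVars.HIVE_SERVER2_AUTHENTICATION and ConfVars.HIVE_SERVER2_ENABLE_DOAS; the values shown are an assumption matching a non-Kerberos deployment like ours:

```xml
<!-- Sketch: a non-Kerberos HiveServer2 with impersonation on. -->
<property>
  <name>hive.server2.authentication</name>
  <!-- Not KERBEROS, so the 0.11 condition above evaluates to false -->
  <value>NONE</value>
</property>
<property>
  <name>hive.server2.enable.doAs</name>
  <!-- Impersonation is enabled, but 0.11 never reaches this check -->
  <value>true</value>
</property>
```

With this combination, 0.11 takes the openSession branch and every session shares the start user's scratchdir.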

CDH 4.2.0's Hive 0.10, by contrast, uses a separate per-user scratchdir whenever impersonation is enabled...

if (cliService.getHiveConf()
    .getBoolVar(HiveConf.ConfVars.HIVE_SERVER2_KERBEROS_IMPERSONATION)) {
  String delegationTokenStr = null;
  try {
    delegationTokenStr = cliService.getDelegationTokenFromMetaStore(userName);
  } catch (UnsupportedOperationException e) {
    // The delegation token is not applicable in the given deployment mode
  }
  sessionHandle = cliService.openSessionWithImpersonation(userName, req.getPassword(),
      req.getConfiguration(), delegationTokenStr);
} else {
  sessionHandle = cliService.openSession(userName, req.getPassword(),
      req.getConfiguration());
}

This was eventually treated as a HiveServer2 bug and fixed in 0.13.

The workaround is simple: just chmod /tmp/hive-hdfs to 777. =。= What a pain.
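A sketch of that workaround, assuming the scratchdir path from the error message above. On the cluster the permission change goes through the Hadoop FS shell; the local directory below (a made-up name) just illustrates the resulting mode:

```shell
# On the cluster the actual command would be:
#   hadoop fs -chmod 777 /tmp/hive-hdfs
# Local illustration of the same permission change:
mkdir -p /tmp/hive-hdfs-demo
chmod 777 /tmp/hive-hdfs-demo
stat -c '%a' /tmp/hive-hdfs-demo   # prints 777
```

Note that 777 opens the directory to every user, which is exactly why this is a workaround rather than a fix.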