
HBase 0.9x Problem Summary

1. Recently one of our HBase region servers kept going down. The log on that node showed the following error:

2014-02-22 01:52:02,194 ERROR org.apache.hadoop.hbase.regionserver.HRegionServer: Close and delete failed
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)

I spent a long time looking for an HBase-side cause without finding one. Following some online references, I then checked the Hadoop (NameNode) log and found:

2014-02-22 01:52:00,935 ERROR org.apache.hadoop.security.UserGroupInformation: PriviledgedActionException as:hadoop cause:org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.
2014-02-22 01:52:00,936 INFO org.apache.hadoop.ipc.Server: IPC Server handler 3 on 9000, call addBlock(/hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411, DFSClient_hb_rs_testhd3,60020,1392948100268, null) from 172.72.101.213:59979: error: org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.
org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException: No lease on /hbase/.logs/testhd3,60020,1392948100268/testhd3%2C60020%2C1392948100268.1393004989411 File does not exist. Holder DFSClient_hb_rs_testhd3,60020,1392948100268 does not have any open files.
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1631)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkLease(FSNamesystem.java:1622)

The two logs contain nearly identical records at almost the same moment, which confirms that the HBase failure is caused by Hadoop (HDFS), not by HBase itself.

Solution: raise the xcievers parameter, which was set to 4096, to 8192.

vi /home/dwhftp/opt/hadoop/conf/hdfs-site.xml

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>8192</value>
</property>

About dfs.datanode.max.xcievers: an HDFS DataNode has an upper bound on the number of files it will serve at the same time. The parameter is called xcievers (the Hadoop authors misspelled the word). Before loading data, make sure the xceivers parameter in conf/hdfs-site.xml is set to at least 4096:

<property>
  <name>dfs.datanode.max.xcievers</name>
  <value>4096</value>
</property>
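The cross-check between the RegionServer log and the NameNode log described above can also be scripted instead of done by eye. Below is a minimal sketch (not part of the original fix): it assumes the two logs have been copied locally under the hypothetical file names shown, and the 5-second matching window is an arbitrary choice for illustration. It prints LeaseExpiredException entries from both logs that occur close together in time.

#!/usr/bin/env python
# Minimal sketch: find LeaseExpiredException entries that appear in both
# the RegionServer log and the NameNode log at nearly the same time.
# File names and the 5-second window are assumptions for illustration.
from datetime import datetime, timedelta
import re

TS_RE = re.compile(r'^(\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}),\d+')

def lease_errors(path):
    """Return timestamps of log lines mentioning LeaseExpiredException."""
    hits = []
    with open(path) as f:
        for line in f:
            if 'LeaseExpiredException' in line:
                m = TS_RE.match(line)
                if m:
                    hits.append(datetime.strptime(m.group(1), '%Y-%m-%d %H:%M:%S'))
    return hits

rs_hits = lease_errors('hbase-regionserver-testhd3.log')  # hypothetical file name
nn_hits = lease_errors('hadoop-namenode.log')             # hypothetical file name

window = timedelta(seconds=5)
for rs_ts in rs_hits:
    for nn_ts in nn_hits:
        if abs(rs_ts - nn_ts) <= window:
            print('RegionServer %s  <->  NameNode %s' % (rs_ts, nn_ts))

Matching pairs like the 01:52:00 / 01:52:02 entries quoted above are a strong hint that the RegionServer error is only the client-side view of an HDFS-side problem.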