java.io.IOException: Incompatible clusterIDs


When starting the Hadoop cluster, none of the DataNodes would come up; each one failed with the following error:

java.io.IOException: Incompatible clusterIDs in /home/xiaoqiu/hadoop_tmp/dfs/data: namenode clusterID = CID-7ecadf3f-9aa7-429a-8013-4e3ad1f28870; datanode clusterID = CID-77fab491-d173-4dd3-8bc4-f36c0cb28b29
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:777)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadStorageDirectory(DataStorage.java:300)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.loadDataStorage(DataStorage.java:416)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.addStorageLocations(DataStorage.java:395)
        at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:573)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:1393)
        at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:1358)
        at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:313)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:216)
        at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:637)
        at java.lang.Thread.run(Thread.java:745)

Solution:

On each node, go into its Hadoop tmp directory, open the VERSION files under the data and name directories, and change the clusterID in the data VERSION so that it matches the one in the name VERSION. (The mismatch typically shows up when the NameNode has been reformatted while the DataNode storage directory still holds metadata from the previous cluster.)
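The edit can be done by hand in vi, or scripted. Below is a minimal sketch, assuming the directory layout used in this article; stop the DataNode and back up VERSION before touching it.

# Paths as used in this article; on another install they can be confirmed with
# "hdfs getconf -confKey dfs.namenode.name.dir" and "hdfs getconf -confKey dfs.datanode.data.dir".
NN_VERSION=/home/xiaoqiu/hadoop_tmp/dfs/name/current/VERSION
DN_VERSION=/home/xiaoqiu/hadoop_tmp/dfs/data/current/VERSION
# Read the clusterID recorded in the name VERSION ...
CID=$(grep '^clusterID=' "$NN_VERSION" | cut -d= -f2)
# ... and overwrite the datanode's clusterID line with it.
sed -i "s/^clusterID=.*/clusterID=${CID}/" "$DN_VERSION"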

Master node s150:

name:

[xiaoqiu@s150 /home/xiaoqiu/hadoop_tmp/dfs/name/current]$ cat VERSION
#Sun Dec 31 00:29:38 EST 2017
namespaceID=685530356
clusterID=CID-cd569893-3a8e-4837-8c10-bdb93fd50d65
cTime=0
storageType=NAME_NODE
blockpoolID=BP-907694094-192.168.109.150-1514698178308
layoutVersion=-63

data:

[xiaoqiu@s150 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ cat VERSION
#Sun Dec 31 00:27:44 EST 2017
storageID=DS-a5caee40-5e97-4751-bcec-dc4f7a7e3fda
clusterID=CID-576629e1-43c9-4669-a6ee-74c5344be3df    // does not match the name VERSION
cTime=0
datanodeUuid=b8cfe998-2d55-4fcc-9fc5-e849017cbceb
storageType=DATA_NODE
layoutVersion=-56

Change the clusterID in the data VERSION:

[xiaoqiu@s150 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ cat VERSION
#Sun Dec 31 00:27:44 EST 2017
storageID=DS-a5caee40-5e97-4751-bcec-dc4f7a7e3fda
clusterID=CID-cd569893-3a8e-4837-8c10-bdb93fd50d65    // changed to match the name VERSION
cTime=0
datanodeUuid=b8cfe998-2d55-4fcc-9fc5-e849017cbceb
storageType=DATA_NODE
layoutVersion=-56

Start the DataNode (start-dfs.sh here brings HDFS up as a whole; node s151 below starts just its local DataNode instead):

[xiaoqiu@s150 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ start-dfs.sh
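Once HDFS is back up, a quick cluster-wide check is the dfsadmin report (a standard HDFS client command; run it from any node that has the cluster configuration):

hdfs dfsadmin -report    # the "Live datanodes" section should now list every worker node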

Node s151:

name:

[xiaoqiu@s151 /home/xiaoqiu/hadoop_tmp/dfs/name/current]$ cat VERSION
#Mon Dec 25 15:20:38 EST 2017
namespaceID=875672388
clusterID=CID-45abf7d9-2dec-4f77-b800-f20ddab41a1b
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1336727972-192.168.109.151-1514233238518
layoutVersion=-63

data:

[xiaoqiu@s151 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ cat VERSION
#Sun Dec 24 10:58:58 EST 2017
storageID=DS-421723b4-ab06-486c-aece-a5a0b3f2d25e
#clusterID=CID-77fab491-d173-4dd3-8bc4-f36c0cb28b29
clusterID=CID-afd6244d-a77a-4ffe-a5ef-ce1a810145a7    // change to CID-45abf7d9-2dec-4f77-b800-f20ddab41a1b, the clusterID from the name VERSION above
cTime=0
datanodeUuid=e7800fda-3197-4ab9-ad34-24a1293f8097
storageType=DATA_NODE
layoutVersion=-56

Start the DataNode:

[xiaoqiu@s151 /home/xiaoqiu/hadoop_tmp/dfs/data/current]$ hadoop-daemon.sh start datanode
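To confirm the process actually came up, check the Java processes on the node with jps (shipped with the JDK); if DataNode is missing, its log usually explains why. The log path below assumes the default $HADOOP_HOME/logs location and the usual hadoop-<user>-datanode-<host>.log naming, which may differ on your install:

jps | grep DataNode                                    # a running DataNode process should be listed
tail -n 50 $HADOOP_HOME/logs/hadoop-*-datanode-*.log   # otherwise, look here for the reason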

The same applies to the other nodes: on each node, change the clusterID in the data VERSION to the clusterID found in that node's own name VERSION.
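With many worker nodes this can be scripted over ssh. A minimal sketch, assuming passwordless ssh and the same directory layout on every node (the host names in the loop are illustrative):

for node in s151 s152 s153; do    # replace with your actual worker host names
  ssh "$node" '
    CID=$(grep "^clusterID=" /home/xiaoqiu/hadoop_tmp/dfs/name/current/VERSION | cut -d= -f2)
    sed -i "s/^clusterID=.*/clusterID=${CID}/" /home/xiaoqiu/hadoop_tmp/dfs/data/current/VERSION
  '
done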

Original post: https://www.cnblogs.com/flyingcr/p/10326966.html

Reposted via: https://blog.csdn.net/weixin_30807779/article/details/98214947
