Introduction to Hadoop
A Hadoop cluster normally has a single NameNode and a single ResourceManager. In a production environment, if the node running the NameNode or the ResourceManager fails, the whole Hadoop cluster goes down, because the NameNode is the core node of HDFS and the ResourceManager is responsible for resource management and allocation for the entire system.
To eliminate this single point of failure, Hadoop 2 introduced a high-availability mechanism that supports one active and one standby node for both the NameNode and the ResourceManager; Hadoop 3 improved on this further and supports one active node and multiple standby nodes. High Availability (HA) means uninterrupted 7x24 service with no single point of failure.
Strictly speaking, Hadoop HA is implemented per component: HDFS HA and YARN HA. Both are achieved by configuring multiple NameNodes and ResourceManagers (Active/Standby) as hot standbys within the cluster.
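To illustrate what this means for clients: HDFS is addressed through the logical nameservice (named ns in the configuration below) rather than a specific NameNode host, so the same command keeps working across a failover. A minimal sketch:
hdfs dfs -ls hdfs://ns/   # resolved to whichever NameNode is currently active
hdfs dfs -ls /            # equivalent once fs.defaultFS points at hdfs://ns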
Environment preparation:
(screenshot: node list)
The processes on each node are as follows:
(screenshot: processes running on each node)
- OS: CentOS 8
- Memory: 4 GB
- Java version: JDK 8
Setting up the HDFS and YARN HA cluster
3.1 Download the Hadoop package
Download the Hadoop 3.3.0 package from the official site https://hadoop.apache.org/ and extract it to /usr/local on all three machines. The configuration files to modify are under /usr/local/hadoop/etc/hadoop.
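A minimal sketch of the download and extraction steps, assuming the Apache archive mirror and the paths used above:
# run on all three machines
wget https://archive.apache.org/dist/hadoop/common/hadoop-3.3.0/hadoop-3.3.0.tar.gz
tar -zxvf hadoop-3.3.0.tar.gz -C /usr/local
mv /usr/local/hadoop-3.3.0 /usr/local/hadoop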
3.2 hadoop-env.sh
export JAVA_HOME=/usr/local/jdk # path to the JDK
# add the following two lines
export HDFS_JOURNALNODE_USER=root
export HDFS_ZKFC_USER=root
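Because the daemons here run as root, Hadoop 3 also expects the remaining run-as users to be declared before start-dfs.sh/start-yarn.sh will launch them. The following additions are not part of the original configuration, but are one common way to do that:
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root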
3.3 core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://ns</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop/tmp/jn</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>hadoop:2181,k8s-2:2181,k8s-3:2181</value>
    </property>
</configuration>
3.4 hdfs-site.xml
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/dfs/data</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/usr/local/hadoop/dfs/journalnode</value>
        <description>The path where the JournalNode daemon will store its local state.</description>
    </property>
    <property>
        <name>dfs.nameservices</name>
        <value>ns</value>
        <description>The logical name for this new nameservice.</description>
    </property>
    <property>
        <name>dfs.ha.namenodes.ns</name>
        <value>nn1,nn2,nn3</value>
        <description>Unique identifiers for each NameNode in the nameservice.</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn1</name>
        <value>hadoop:8020</value>
        <description>The fully-qualified RPC address for nn1 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn2</name>
        <value>k8s-2:8020</value>
        <description>The fully-qualified RPC address for nn2 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.ns.nn3</name>
        <value>k8s-3:8020</value>
        <description>The fully-qualified RPC address for nn3 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn1</name>
        <value>hadoop:9870</value>
        <description>The fully-qualified HTTP address for nn1 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn2</name>
        <value>k8s-2:9870</value>
        <description>The fully-qualified HTTP address for nn2 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.http-address.ns.nn3</name>
        <value>k8s-3:9870</value>
        <description>The fully-qualified HTTP address for nn3 to listen on.</description>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://hadoop:8485;k8s-3:8485;k8s-2:8485/ns</value>
        <description>The URI which identifies the group of JNs where the NameNodes will write/read edits.</description>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.ns</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
        <description>The Java class that HDFS clients use to contact the Active NameNode.</description>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence
shell(/bin/true)</value>
        <description>A list of scripts or Java classes which will be used to fence the Active NameNode during a failover. sshfence - SSH to the Active NameNode and kill the process; shell - run an arbitrary shell command to fence the Active NameNode.</description>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
        <description>Set the SSH private key file.</description>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
        <description>Enable automatic failover.</description>
    </property>
</configuration>
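The sshfence method only works if the host running the ZKFC can SSH to the other NameNode hosts without a password using the private key configured above. A hedged sketch of setting that up on each NameNode host (hostnames as used in this cluster):
ssh-keygen -t rsa        # accept the default key path /root/.ssh/id_rsa
ssh-copy-id root@hadoop
ssh-copy-id root@k8s-2
ssh-copy-id root@k8s-3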
3.5 mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
</configuration>
3.6 yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
        <description>Enable RM HA.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yrc</value>
        <description>Identifies the cluster.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2,rm3</value>
        <description>List of logical IDs for the RMs, e.g. "rm1,rm2".</description>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>hadoop</value>
        <description>Set the rm1 service address.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>k8s-2</value>
        <description>Set the rm2 service address.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm3</name>
        <value>k8s-3</value>
        <description>Set the rm3 service address.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>hadoop:8088</value>
        <description>Set the rm1 web application address.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>k8s-2:8088</value>
        <description>Set the rm2 web application address.</description>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm3</name>
        <value>k8s-3:8088</value>
        <description>Set the rm3 web application address.</description>
    </property>
    <property>
        <name>hadoop.zk.address</name>
        <value>hadoop:2181,k8s-2:2181,k8s-3:2181</value>
        <description>Address of the ZK quorum.</description>
    </property>
</configuration>
3.7 workers
hadoop
k8s-2
k8s-3
Installing ZooKeeper
Version: zookeeper-3.6.4
Download the package from https://www.apache.org/dyn/closer.lua/zookeeper/zookeeper-3.6.4/apache-zookeeper-3.6.4-bin.tar.gz, then extract and configure it on all three machines.
echo "1" > /data/zookeeperdata/myid # each machine uses a different id
zoo.cfg is configured as follows:
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data/zookeeperdata # data directory
dataLogDir=/data/zookeeperdata/logs # log directory
clientPort=2181 # client port
server.1=192.xxx.xxx.128:2888:3888
server.2=192.xxx.xxx.132:2888:3888
server.3=192.xxx.xxx.131:2888:3888
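A minimal per-machine sketch of the ZooKeeper setup implied above (the install path matches the one used later in the shutdown step; the myid value must match that machine's server.N entry):
mkdir -p /data/zookeeperdata/logs
tar -zxvf apache-zookeeper-3.6.4-bin.tar.gz -C /data
# place the zoo.cfg shown above in /data/apache-zookeeper-3.6.4-bin/conf/zoo.cfg
echo "1" > /data/zookeeperdata/myid   # 1 on 192.xxx.xxx.128, 2 on .132, 3 on .131
/data/apache-zookeeper-3.6.4-bin/bin/zkServer.sh start
/data/apache-zookeeper-3.6.4-bin/bin/zkServer.sh status  # expect one leader and two followers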
Configuring environment variables
vi /etc/profile
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_CLASSPATH=`hadoop classpath`
source /etc/profile
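The startup commands below also reference $ZOOKEEPER_HOME, which is not exported above. If it is not already set, an export matching the ZooKeeper path used in the shutdown step can be added to /etc/profile (the exact path is an assumption):
export ZOOKEEPER_HOME=/data/apache-zookeeper-3.6.4-bin
export PATH=$PATH:$ZOOKEEPER_HOME/bin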
Starting the cluster
On all nodes, run rm -rf /usr/local/hadoop/dfs to remove any previously created storage directories, and on the master node run mkdir -p /usr/local/hadoop/dfs/name /usr/local/hadoop/dfs/data /usr/local/hadoop/dfs/journalnode to recreate them.
On all nodes, run rm -rf /usr/local/hadoop/tmp /usr/local/hadoop/logs && mkdir -p /usr/local/hadoop/tmp /usr/local/hadoop/logs to reset the temporary and log directories.
With the steps above, the Hadoop HA cluster is fully configured. When starting the HA cluster for the first time, run the following commands in order:
$ZOOKEEPER_HOME/bin/zkServer.sh start # start the ZooKeeper process (on all nodes)
$HADOOP_HOME/bin/hdfs --daemon start journalnode # start the JournalNode processes that store the NameNode edit log (on all nodes)
$HADOOP_HOME/bin/hdfs namenode -format # format the NameNode (on the master node)
scp -r /usr/local/hadoop/dfs k8s-2:/usr/local/hadoop # copy the formatted directory to k8s-2 (on the master node)
scp -r /usr/local/hadoop/dfs k8s-3:/usr/local/hadoop # copy the formatted directory to k8s-3 (on the master node)
$HADOOP_HOME/bin/hdfs zkfc -formatZK # format the ZooKeeper Failover Controllers (on the master node)
start-dfs.sh && start-yarn.sh # start the HDFS and YARN clusters (on the master node)
For subsequent (normal) startups of the HA cluster, only the following commands are needed:
$ZOOKEEPER_HOME/bin/zkServer.sh start # start the ZooKeeper process (on all nodes)
start-all.sh, or $HADOOP_HOME/sbin/start-dfs.sh && $HADOOP_HOME/sbin/start-yarn.sh # start the HDFS and YARN clusters (on the master node)
After startup, running jps on each node should show eight processes: NameNode, DataNode, ResourceManager, NodeManager, JournalNode, DFSZKFailoverController, QuorumPeerMain, and Jps.
(screenshots: jps output on each node)
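Optionally, a quick functional check of HDFS from the master node (the target path is an arbitrary example):
hdfs dfsadmin -report            # all three DataNodes should be reported as live
hdfs dfs -mkdir -p /tmp/ha-test
hdfs dfs -put /etc/hosts /tmp/ha-test/
hdfs dfs -cat /tmp/ha-test/hosts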
Check the web UIs:
http://192.xxx.xxx.128:9870/
(screenshot: HDFS NameNode web UI)
http://192.xxx.xxx.128:8088/cluster/nodes
(screenshot: YARN ResourceManager web UI)
Verifying HDFS HA
6.1 Check the NameNode state on each node
(screenshot: NameNode state of each node)
6.2 Verify HDFS high availability
With the HA cluster up and running, run hdfs haadmin -getAllServiceState on the master node to check the state of each NameNode, then stop the NameNode process on the node that is currently active.
(screenshot: stopping the active NameNode)
The active NameNode role automatically moves to another node and the cluster remains available.
(screenshots: another NameNode has become active)
Then start the NameNode process on that node again and check the states once more: HDFS HA is working correctly and no preemption occurred, i.e. the restarted NameNode rejoins as standby instead of taking the active role back.
(screenshot: NameNode states after restarting the stopped NameNode)
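The failover test above as a minimal command sketch (hdfs --daemon stop/start is the Hadoop 3 way to stop and restart a single daemon; run those two commands on the host of the currently active NameNode):
hdfs haadmin -getAllServiceState   # note which NameNode is active
hdfs --daemon stop namenode        # on the active NameNode's host
hdfs haadmin -getAllServiceState   # another NameNode should now report active
hdfs --daemon start namenode       # the restarted NameNode rejoins as standby
hdfs haadmin -getAllServiceState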
Verifying YARN HA
On the master node, run yarn rmadmin -getAllServiceState to check the state of each ResourceManager, then stop the ResourceManager process on the node that is currently active: the active role automatically moves to another node and the cluster remains available. Then start the ResourceManager on that node again and check the states once more; the failed node comes back as standby.
(screenshot: ResourceManager states during the failover test)
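The same check for YARN, using the command named above (run the stop/start on the host of the currently active ResourceManager):
yarn rmadmin -getAllServiceState    # note which ResourceManager is active
yarn --daemon stop resourcemanager  # on the active ResourceManager's host
yarn rmadmin -getAllServiceState    # another ResourceManager takes over as active
yarn --daemon start resourcemanager
yarn rmadmin -getAllServiceState    # the restarted ResourceManager comes back as standby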
To shut the cluster down, run $HADOOP_HOME/sbin/stop-yarn.sh && $HADOOP_HOME/sbin/stop-dfs.sh (or stop-all.sh) on the master node, then stop ZooKeeper on each of the three machines with /data/apache-zookeeper-3.6.4-bin/bin/zkServer.sh stop.