Hadoop 2.x HA Environment Deployment
1. Prerequisites
1.1 JDK installed
If you have not installed it yet, see:
Linux系统CentOS7安装jdk_一个人的牛牛的博客-CSDN博客
1.2 Distributed ZooKeeper installed
If you have not installed it yet, see:
zookeeper单机和集群(全分布)的安装过程_一个人的牛牛的博客-CSDN博客
1.3 Passwordless SSH login configured
If you have not configured it yet, see:
Linux配置免密登录单机和全分布_一个人的牛牛的博客-CSDN博客
1.4 Machine preparation
Master node | Slave node
hadoop01 | hadoop02
hadoop02 | hadoop03
My laptop cannot handle more virtual machines, so hadoop02 is reused as both a master and a slave.
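The following steps refer to the machines by hostname, so each node needs hostname-to-IP mappings. A minimal /etc/hosts sketch, where only 192.168.12.134 (used later in this article) comes from the original and the other addresses are placeholders:
192.168.12.134 hadoop01
192.168.12.135 hadoop02
192.168.12.136 hadoop03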
2. Installation and Deployment
2.1 Upload
Upload the hadoop-2.7.3 installation package to hadoop01, for example with MobaXterm:
MobaXterm_Portable的简单使用_一个人的牛牛的博客-CSDN博客_mobaxterm portable和installer区别
2.2 Extract
Go to the directory containing the package and run:
tar -zxvf hadoop-2.7.3.tar.gz -C /training/
2.3 Configure environment variables (on every node)
vi ~/.bash_profile
#hadoop
export HADOOP_HOME=/training/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
Apply the changes:
source ~/.bash_profile
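As a quick sanity check (an extra step beyond the original), the hadoop command should now resolve on the PATH:
hadoop version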
2.4 Modify hadoop-env.sh
Go to the etc/hadoop directory under the Hadoop installation, e.g. /training/hadoop-2.7.3/etc/hadoop, and run:
vi hadoop-env.sh
Add the following:
export JAVA_HOME=/training/jdk1.8.0_171
2.5 Modify core-site.xml
Go to the etc/hadoop directory under the Hadoop installation, e.g. /training/hadoop-2.7.3/etc/hadoop, and run:
vi core-site.xml
Add the following:
<configuration>
<!-- Logical name of the HDFS nameservice; must match dfs.nameservices in hdfs-site.xml -->
<property>
<name>fs.defaultFS</name>
<value>hdfs://HAhadoop01</value>
</property>
<!-- Base directory for Hadoop temporary and metadata files -->
<property>
<name>hadoop.tmp.dir</name>
<value>/training/hadoop-2.7.3/tmp</value>
</property>
<!-- ZooKeeper quorum used for automatic failover -->
<property>
<name>ha.zookeeper.quorum</name>
<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
</configuration>
2.6 Modify hdfs-site.xml
Go to the etc/hadoop directory under the Hadoop installation, e.g. /training/hadoop-2.7.3/etc/hadoop, and run:
vi hdfs-site.xml
Add the following:
<configuration>
<!-- The HDFS nameservice is HAhadoop01; it must match fs.defaultFS in core-site.xml -->
<property>
<name>dfs.nameservices</name>
<value>HAhadoop01</value>
</property>
<!-- The nameservice HAhadoop01 contains two NameNodes, with IDs HAhadoop02 and HAhadoop03 -->
<property>
<name>dfs.ha.namenodes.HAhadoop01</name>
<value>HAhadoop02,HAhadoop03</value>
</property>
<!-- RPC address of NameNode HAhadoop02 -->
<property>
<name>dfs.namenode.rpc-address.HAhadoop01.HAhadoop02</name>
<value>hadoop01:9000</value>
</property>
<!-- HTTP address of NameNode HAhadoop02 -->
<property>
<name>dfs.namenode.http-address.HAhadoop01.HAhadoop02</name>
<value>hadoop01:50070</value>
</property>
<!-- RPC address of NameNode HAhadoop03 -->
<property>
<name>dfs.namenode.rpc-address.HAhadoop01.HAhadoop03</name>
<value>hadoop02:9000</value>
</property>
<!-- HTTP address of NameNode HAhadoop03 -->
<property>
<name>dfs.namenode.http-address.HAhadoop01.HAhadoop03</name>
<value>hadoop02:50070</value>
</property>
<!-- Shared edits directory: where the NameNode edit log is stored on the JournalNodes -->
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://hadoop01:8485;hadoop02:8485;hadoop03:8485/HAhadoop01</value>
</property>
<!-- Local directory where each JournalNode stores its data -->
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/training/hadoop-2.7.3/journal</value>
</property>
<!-- Enable automatic failover for the NameNodes -->
<property>
<name>dfs.ha.automatic-failover.enabled.HAhadoop01</name>
<value>true</value>
</property>
<!-- Failover proxy provider that clients use to locate the active NameNode -->
<property>
<name>dfs.client.failover.proxy.provider.HAhadoop01</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<!-- Fencing methods; multiple methods are separated by newlines, one per line -->
<property>
<name>dfs.ha.fencing.methods</name>
<value>
sshfence
shell(/bin/true)
</value>
</property>
<!-- sshfence requires passwordless SSH; path to the private key -->
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<!-- Timeout (in milliseconds) for the sshfence mechanism -->
<property>
<name>dfs.ha.fencing.ssh.connect-timeout</name>
<value>30000</value>
</property>
<!-- DataNode data directory -->
<property>
<name>dfs.datanode.data.dir</name>
<value>/training/hadoop-2.7.3/data</value>
</property>
<!-- NameNode metadata directory -->
<property>
<name>dfs.namenode.name.dir</name>
<value>/training/hadoop-2.7.3/name</value>
</property>
<!-- Number of block replicas -->
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
</configuration>
Go to the Hadoop installation directory, e.g. /training/hadoop-2.7.3, and create the tmp, data, name, and journal directories referenced in the configuration above:
mkdir tmp
mkdir data
mkdir name
mkdir journal
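Equivalently, the four directories can be created in one command (a small convenience, assuming a bash shell):
mkdir -p /training/hadoop-2.7.3/{tmp,data,name,journal}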
2.7 Modify mapred-site.xml
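The Hadoop 2.7.3 distribution usually ships only mapred-site.xml.template in etc/hadoop; if mapred-site.xml does not exist yet, copy it from the template first:
cp mapred-site.xml.template mapred-site.xml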
Go to the etc/hadoop directory under the Hadoop installation, e.g. /training/hadoop-2.7.3/etc/hadoop, and run:
vi mapred-site.xml
Add the following:
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
2.8 Modify yarn-site.xml
Go to the etc/hadoop directory under the Hadoop installation, e.g. /training/hadoop-2.7.3/etc/hadoop, and run:
vi yarn-site.xml
Add the following:
<configuration>
<!-- Enable ResourceManager HA -->
<property>
<name>yarn.resourcemanager.ha.enabled</name>
<value>true</value>
</property>
<!-- Cluster ID for the ResourceManager HA pair -->
<property>
<name>yarn.resourcemanager.cluster-id</name>
<value>yarn</value>
</property>
<!-- Logical IDs of the two ResourceManagers -->
<property>
<name>yarn.resourcemanager.ha.rm-ids</name>
<value>rm1,rm2</value>
</property>
<!-- Hostnames of the two ResourceManagers -->
<property>
<name>yarn.resourcemanager.hostname.rm1</name>
<value>hadoop01</value>
</property>
<property>
<name>yarn.resourcemanager.hostname.rm2</name>
<value>hadoop02</value>
</property>
<!-- ZooKeeper quorum address -->
<property>
<name>yarn.resourcemanager.zk-address</name>
<value>hadoop01:2181,hadoop02:2181,hadoop03:2181</value>
</property>
<!-- Auxiliary shuffle service required by MapReduce -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
2.9 Modify slaves
The slaves file lists the hosts that run DataNode and NodeManager daemons (the slave column in 1.4). Go to the etc/hadoop directory under the Hadoop installation, e.g. /training/hadoop-2.7.3/etc/hadoop, and run:
vi slaves
Add the following:
hadoop02
hadoop03
2.10 Modify yarn-env.sh
Add the following:
export JAVA_HOME=/training/jdk1.8.0_171
2.11 Copy to the other nodes
On hadoop01, from the /training directory, run:
scp -r hadoop-2.7.3/ root@hadoop02:/training/
scp -r hadoop-2.7.3/ root@hadoop03:/training/
3. Verification
3.1 Start the ZooKeeper cluster (on every node)
Go to ZooKeeper's bin directory and run:
zkServer.sh start
Check the processes with jps.
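Besides jps, each node's role in the quorum (leader or follower) can be confirmed with ZooKeeper's own status command, an extra check not in the original steps:
zkServer.sh status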
jps、kafka、zookeeper群起脚本和rsync文件分发脚本(超详细)_一个人的牛牛的博客-CSDN博客
3.2 Start the cluster
#Start the JournalNodes on all nodes (run once; do not run repeatedly)
hadoop-daemon.sh start journalnode
#Format HDFS (run on hadoop01; once only, do not repeat)
hdfs namenode -format
#Copy the formatted NameNode metadata (/training/hadoop-2.7.3/tmp) to the same path on hadoop02
scp -r /training/hadoop-2.7.3/tmp/ root@hadoop02:/training/hadoop-2.7.3/
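A common alternative to the scp above (not what this article does) is to let the standby NameNode copy the metadata itself; on hadoop02 you could instead run:
hdfs namenode -bootstrapStandby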
#Format the HA state in ZooKeeper (once only, do not repeat)
hdfs zkfc -formatZK
You should see a log line such as: Successfully created /hadoop-ha/HAhadoop01 in ZK.
#Stop the JournalNodes on all nodes (once only, do not repeat)
hadoop-daemon.sh stop journalnode
#Start zkfc on hadoop01 and hadoop02
hadoop-daemon.sh start zkfc
#Start the Hadoop cluster on hadoop01
start-all.sh
#Start the standby ResourceManager on hadoop02
yarn-daemon.sh start resourcemanager
3.3 Check the processes with jps
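As a rough guide, assuming the node roles described above (including JournalNodes on all three machines), jps should show approximately:
hadoop01: NameNode, DFSZKFailoverController, JournalNode, ResourceManager, QuorumPeerMain
hadoop02: NameNode, DataNode, DFSZKFailoverController, JournalNode, ResourceManager, NodeManager, QuorumPeerMain
hadoop03: DataNode, NodeManager, JournalNode, QuorumPeerMain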
3.4 Verify in a browser
Open hadoop01:50070 in a browser; hadoop01:50070 and hadoop02:50070 are the two NameNode web UIs, and one should show active while the other shows standby.
If you have not configured hostname-to-IP mapping, use the IP address plus the port instead, e.g. 192.168.12.134:50070.
The YARN ResourceManager web UI is on port 8088.
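The HA state can also be checked from the command line. Using the IDs defined in this article's configuration (NameNode IDs HAhadoop02/HAhadoop03, ResourceManager IDs rm1/rm2):
hdfs haadmin -getServiceState HAhadoop02
hdfs haadmin -getServiceState HAhadoop03
yarn rmadmin -getServiceState rm1
yarn rmadmin -getServiceState rm2
One of each pair should report active and the other standby.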
References:
若兰幽竹的博客_CSDN博客-Kettle,Spark,Hadoop领域博主
https://blog.csdn.net/it_technologier/category_11482225.html?spm=1001.2014.3001.5482