Installation Preparation
1. Prepare three virtual machines with the following IPs:
192.168.204.10 NameNode
192.168.204.11 DataNode
192.168.204.12 DataNode
I'm using CentOS 7.8 with JDK 1.8 already installed.
For JDK installation, see https://blog.csdn.net/zhangxm_qz/article/details/106404878

2. Prepare the hadoop-2.6.1 installation package.

Configure the VM IPs and hosts file
Set a static IP on each of the three servers and configure hosts-based name resolution.
For VM IP and gateway configuration, see: https://editor.csdn.net/md/?articleId=87940088

Configure name resolution in /etc/hosts

#127.0.0.1  server03 #localhost localhost.localdomain localhost4 localhost4.localdomain4 server01
#::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.204.10 server01 hadoop01 zk01 hbase01
192.168.204.11 server02 hadoop02 zk02 hbase02
192.168.204.12 server03 hadoop03 zk03 hbase03

* All three VMs need this configuration; I commented out the original first line here.
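A quick way to verify that the new entries resolve (the hostnames are the ones defined above; run on any of the three servers):

getent hosts server01 hadoop02 zk03    # each name should map to the IP configured above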

Change the hostnames
Change the three hostnames to server01, server02, and server03. I'll use server03 as an example.

[root@server03 ~]# vi /etc/hostname
server03
~
[root@server03 ~]# vi /etc/sysconfig/network
# Created by anaconda
HOSTNAME=server03
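On CentOS 7 the same change can also be made without editing files or rebooting, using hostnamectl; a sketch for server03:

hostnamectl set-hostname server03   # persists the new hostname
hostnamectl status                  # verify it took effect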

Disable the firewall
Disable it on all three servers.

[root@localhost ~]# systemctl stop firewalld.service
[root@localhost ~]# systemctl disable firewalld.service 
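To confirm the firewall is stopped now and stays off after a reboot:

systemctl status firewalld.service       # should report "inactive (dead)"
systemctl is-enabled firewalld.service   # should print "disabled"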

Configure passwordless SSH login
On each server, configure passwordless login to itself and to the other two servers (each server also needs passwordless login to itself).

Generate a key pair with the ssh-keygen command.
Running ssh-copy-id server02 on server03 copies the generated public key to server02, after which server03 can log in to server02 without a password.
Every server has to run these commands, generating its own key and copying it to the other two servers (a loop sketch follows the example below). An example session:

[root@localhost etc]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa): 
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:73gbWFij6Ha5rBlpYps68OUyMKv8ENhKDm8FzAxzJ8Q root@localhost.localdomain
The key's randomart image is:
+---[RSA 2048]----+
|oo+ . |
| BEo |
| = o |
|.. . . + . |
|oo. . . S . |
|Bo.... . = |
|.Oooo * + o |
|o.*..* = +.. |
|o.o*o o.+.o. |
+----[SHA256]-----+
[root@localhost etc]# ssh server02
root@server02's password: 
Last login: Wed Jun 10 01:54:39 2020 from 192.168.204.1
[root@localhost ~]# exit
logout
Connection to server02 closed.
[root@localhost etc]# ssh-copy-id server02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@server02's password: 

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'server02'"
and check to make sure that only the key(s) you wanted were added.
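Because every server has to push its key to all three hosts (itself included), a small loop saves some typing; a sketch assuming the hostnames above, run on each server after ssh-keygen (you will be asked for the root password once per host):

for host in server01 server02 server03; do
  ssh-copy-id "$host"
done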

Extract the Hadoop tarball
On each of the three servers, extract hadoop-2.6.1.tar.gz and then run ln -s to create a symlink named hadoop pointing to hadoop-2.6.1, which makes the installation easier to manage.
The directory after extraction looks like this (I extracted to /opt/apps); the commands used are sketched after the listing:

lrwxrwxrwx.  1 root  root         12 Jun 10 03:34 hadoop -> hadoop-2.6.1
drwxr-xr-x. 11 10011 10011       172 Jun 10 04:57 hadoop-2.6.1
-rw-r--r--.  1 root  root  197113367 Jun  9 06:10 hadoop-2.6.1.tar.gz
-rw-r--r--.  1 root  root  105718722 Jun  9 06:39 hbase-1.3.1-bin.tar.gz
lrwxrwxrwx.  1 root  root         12 Jun  9 22:43 jdk -> jdk1.8.0_73/
drwxr-xr-x.  8    10   143       255 Jan 29  2016 jdk1.8.0_73
-rw-r--r--.  1 root  root  181310701 Feb 24  2016 jdk-8u73-linux-x64.tar.gz
-rw-r--r--.  1 root  root   17699306 Oct  3  2015 zookeeper-3.4.6.tar.gz
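The extraction itself is roughly the following (a sketch, assuming the tarball has already been copied to /opt/apps):

cd /opt/apps
tar -zxvf hadoop-2.6.1.tar.gz
ln -s hadoop-2.6.1 hadoop   # symlink so paths such as /opt/apps/hadoop stay stable across upgrades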

Configure environment variables
Run vi /etc/profile and append the following to the end of the file (on all three servers):

#hadoop env
export HADOOP_HOME=/opt/apps/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Save and exit, then run source /etc/profile to make the configuration take effect:

[root@server01 ~]# source /etc/profile
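A quick check that the new PATH entries took effect:

hadoop version   # should report Hadoop 2.6.1
which hdfs       # should resolve to a path under /opt/apps/hadoop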

Modify the configuration files

Do this on each of the three servers (named server01, server02, and server03 here); the configuration files below live under /opt/apps/hadoop/etc/hadoop.
Create the temporary directory /opt/apps/hadoop/tmp.
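For example (run on all three servers; the same path is referenced by hadoop.tmp.dir below):

mkdir -p /opt/apps/hadoop/tmp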

Modify core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://server01:9000</value>
        </property>
        <property>
                <name>hadoop.tmp.dir</name>
                <value>/opt/apps/hadoop/tmp</value>
        </property>
</configuration>

Add the Java environment variable to hadoop-env.sh:

export JAVA_HOME=/opt/apps/jdk

hdfs-site.xml

<configuration>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.secondary.http.address</name>
                <value>server01:50090</value>
        </property>
</configuration>

Add the Java environment variable to mapred-env.sh:

export JAVA_HOME=/opt/apps/jdk

mapred-site.xml (in Hadoop 2.x this file usually has to be copied from mapred-site.xml.template first):

<configuration>
        <property>
                <name>mapreduce.framework.name</name>
                <value>yarn</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.address</name>
                <value>server01:10020</value>
        </property>
        <property>
                <name>mapreduce.jobhistory.webapp.address</name>
                <value>server01:19888</value>
        </property>
</configuration>

slaves (this file lists the DataNode hosts):

server02
server03

yarn-site.xml

<configuration>

<!-- Site specific YARN configuration properties -->
        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>server01</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>
</configuration>

Initialize the NameNode
Run the following command under the bin directory to format the NameNode:

hadoop namenode -format
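Note that in Hadoop 2.x this entry point is deprecated; the equivalent, currently recommended form is:

hdfs namenode -format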

Start Hadoop
Run start-all.sh under sbin to start all the services:

[root@server01 sbin]# ./start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [server01]
server01: starting namenode, logging to /opt/apps/hadoop-2.6.1/logs/hadoop-root-namenode-server01.out
server02: starting datanode, logging to /opt/apps/hadoop-2.6.1/logs/hadoop-root-datanode-server02.out
server03: starting datanode, logging to /opt/apps/hadoop-2.6.1/logs/hadoop-root-datanode-server03.out
Starting secondary namenodes [server01]
server01: starting secondarynamenode, logging to /opt/apps/hadoop-2.6.1/logs/hadoop-root-secondarynamenode-server01.out
starting yarn daemons
starting resourcemanager, logging to /opt/apps/hadoop-2.6.1/logs/yarn-root-resourcemanager-server01.out
server02: starting nodemanager, logging to /opt/apps/hadoop-2.6.1/logs/yarn-root-nodemanager-server02.out
server03: starting nodemanager, logging to /opt/apps/hadoop-2.6.1/logs/yarn-root-nodemanager-server03.out

Check the processes
Run jps to see which processes started on each node.
server01 runs the NameNode (together with the SecondaryNameNode and ResourceManager):

[root@server01 sbin]# jps
3258 Jps
3003 ResourceManager
2861 SecondaryNameNode

server02 and server03 run the DataNodes (and NodeManagers):

[root@server02 hadoop]# jps
1888 NodeManager
2010 Jps
1803 DataNode
[root@server03 hadoop]# jps
1888 NodeManager
1802 DataNode
2011 Jps

Access the services
Open http://server01:50070/ to see the Hadoop status page and browse HDFS directories and files.
Test uploading a file:

[root@server01 hadoop]# hadoop fs -put  /test.txt   /input
[root@server01 hadoop]# hadoop fs -ls /
Found 1 items
drwxr-xr-x   - root supergroup          0 2020-06-12 04:54 /input
[root@server01 hadoop]# hadoop fs -ls /input
Found 1 items
-rw-r--r--   2 root supergroup          7 2020-06-12 04:54 /input/test.txt
[root@server01 hadoop]# 
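To double-check the upload, the file can also be read back from HDFS (test.txt is just the sample file uploaded above):

hadoop fs -cat /input/test.txt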

Troubleshooting
1. Formatting the NameNode twice makes the DataNode and NameNode clusterIDs inconsistent.
On the NameNode, locate the VERSION files and note the clusterID recorded in /opt/apps/hadoop-2.6.1/tmp/dfs/name/current/VERSION:

[root@server01 hadoop]# find / -name VERSION
/usr/share/doc/gpgme-1.3.2/VERSION
/usr/share/doc/lvm2-2.02.186/VERSION
/opt/apps/hadoop-2.6.1/tmp/dfs/namesecondary/current/VERSION
/opt/apps/hadoop-2.6.1/tmp/dfs/name/current/VERSION

[root@server01 hadoop]# cat /opt/apps/hadoop-2.6.1/tmp/dfs/name/current/VERSION
#Fri Jun 12 04:16:15 EDT 2020
namespaceID=248370870
clusterID=CID-fcb850dc-b6fd-4c83-bb30-926de18e0614
cTime=0
storageType=NAME_NODE
blockpoolID=BP-686831001-192.168.204.10-1591949775155
layoutVersion=-60

On each DataNode, find the /opt/apps/hadoop-2.6.1/tmp/dfs/data/current/VERSION file and change its clusterID to match the NameNode's (a sed sketch follows the listing below):

[root@server03 hadoop]#  find / -name VERSION
/usr/share/doc/gpgme-1.3.2/VERSION
/usr/share/doc/lvm2-2.02.186/VERSION
/opt/apps/hadoop-2.6.1/tmp/dfs/data/current/VERSION
/opt/apps/hadoop-2.6.1/tmp/dfs/data/current/BP-440591175-192.168.204.10-1591776987762/current/VERSION
[root@server03 hadoop]# cat /opt/apps/hadoop-2.6.1/tmp/dfs/name/current/VERSION

[root@server03 hadoop]# vi /opt/apps/hadoop-2.6.1/tmp/dfs/data/current/VERSION
#Wed Jun 10 04:57:25 EDT 2020
storageID=DS-b163b449-e6e3-463a-aeb8-f22c648aa952
clusterID=CID-fcb850dc-b6fd-4c83-bb30-926de18e0614
cTime=0
datanodeUuid=76fd87a6-f462-489d-b77a-706f1c6c2f4f
storageType=DATA_NODE
layoutVersion=-56
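A minimal sketch of the same fix, assuming the directory layout shown above (the clusterID value is the one taken from the NameNode's VERSION file):

# On server01 (NameNode): print the clusterID to copy
grep '^clusterID=' /opt/apps/hadoop-2.6.1/tmp/dfs/name/current/VERSION

# On server02/server03 (DataNodes): overwrite the clusterID with that value,
# then restart the DataNode
sed -i 's/^clusterID=.*/clusterID=CID-fcb850dc-b6fd-4c83-bb30-926de18e0614/' \
    /opt/apps/hadoop-2.6.1/tmp/dfs/data/current/VERSION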

2. Not formatting the NameNode at all also leads to all kinds of access failures.
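In either case, a quick way to check whether the DataNodes have registered with the NameNode is (run on server01):

hdfs dfsadmin -report    # should list both DataNodes as live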
