Hadoop Environment Setup (for learning purposes only; please credit the source when reposting)

Environment: VMware, CentOS 7, JDK 1.8, Hadoop 2.7.3
Download links for the required software packages are provided at the end of this article.

1. Create a Linux VM with VMware

Click through "File - New Virtual Machine - keep the defaults, Next - select the Linux version, Next - set the virtual machine name and location, Next - keep the defaults, Next - Finish", as shown below:

image-20210923223621225

image-20210923223626482

image-20210923223631140

image-20210923223638657

image-20210923223649429

2. Configure a static IP

2.1 Open VMware's Virtual Network Editor and check the gateway information.

image-20210923224447176

image-20210923224454985

image-20210924025342176

2.2 Edit the network configuration file with the command below: change BOOTPROTO from dhcp (automatic address assignment) to static, change ONBOOT from no to yes so the interface comes up at boot, and fill in the gateway (GATEWAY), netmask (NETMASK), and IP address (IPADDR) obtained above.

vi /etc/sysconfig/network-scripts/ifcfg-ens33
GATEWAY=192.x.x.x
NETMASK=255.255.255.0
IPADDR=192.x.x.x

image-20210924025404699

2.3 Run service network restart to restart the network service and apply the changes.

2.4 Use ip a or ip addr (the command used on CentOS 7 and later) to check the IP address. As shown below, the configured static IP is visible, which means the static IP setup succeeded.

image-20210923225546797

At this point you still cannot ping external hosts by domain name; the DNS resolution service must be configured first.

2.5 Using the same command as above, append the DNS servers to the end of the interface configuration file:

vi /etc/sysconfig/network-scripts/ifcfg-ens33
DNS1=114.114.114.114
DNS2=8.8.8.8

2.6 Run service network restart again so the changes take effect.
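
Putting the pieces together, the interface file typically ends up looking roughly like the sketch below. The addresses are placeholders: IPADDR, NETMASK, and GATEWAY must match the values shown in your VMware Virtual Network Editor.

# /etc/sysconfig/network-scripts/ifcfg-ens33 (other auto-generated lines left as-is)
TYPE=Ethernet
NAME=ens33
DEVICE=ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.xx.110
NETMASK=255.255.255.0
GATEWAY=192.168.xx.x      # use the gateway from the Virtual Network Editor
DNS1=114.114.114.114
DNS2=8.8.8.8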

Test with ping www.baidu.com; as shown below, the external network is now reachable.

image-20210923231404972

2.7 Use hostnamectl set-hostname 'xxxx' to change the hostname (a reboot is required for the change to take effect).
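
For example, for the node built in this guide (the clones created later in section 11 are renamed hadoop02 and hadoop03 the same way):

hostnamectl set-hostname hadoop01
# verify the new hostname
hostnamectl status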

3. Connect to the server with an SSH client (supplementary)

Tools: SecureCRT, MobaXterm, and Xshell all work; MobaXterm is used as the example here.

image-20210924020629848

Double-click the session you created, enter the password, and accept/save it. A successful connection looks like this:

image-20210923231953135

4. Upload the required installation packages to the server

[root@hadoop01 ~]# cd /
[root@hadoop01 /]# ls
bin  boot  dev  etc  home  lib  lib64  media  mnt  opt  proc  root  run  sbin  srv  sys  tmp  usr  var
[root@hadoop01 /]# cd usr/
[root@hadoop01 usr]# ls
bin  etc  games  include  lib  lib64  libexec  local  sbin  share  src  tmp
# create a directory to hold the software installation packages
[root@hadoop01 usr]# mkdir software_installer
[root@hadoop01 usr]# ls
bin  etc  games  include  lib  lib64  libexec  local  sbin  share  software_installer  src  tmp
[root@hadoop01 usr]# cd software_installer/
[root@hadoop01 software_installer]# mkdir java
[root@hadoop01 software_installer]# mkdir hadoop
[root@hadoop01 software_installer]# mkdir mysql
[root@hadoop01 software_installer]# ls
hadoop  java  mysql
[root@hadoop01 software_installer]#

If you are using MobaXterm, you can upload files visually from the left sidebar by dragging and dropping them from your local machine.

image-20210924020713807

image-20210923232731456
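
If you would rather use the command line than MobaXterm's sidebar, scp from your local machine does the same job (a sketch; the IP is a placeholder for the server address configured in section 2):

# run on your local machine, not on the server
scp jdk-8u211-linux-x64.tar.gz root@192.168.xx.110:/usr/software_installer/java/
scp hadoop-2.7.3.tar.gz root@192.168.xx.110:/usr/software_installer/hadoop/
scp mysql80-community-release-el7-3.noarch.rpm root@192.168.xx.110:/usr/software_installer/mysql/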

5. Install the JDK

The JDK is provided here as a .tar.gz archive; extract it with the following command:

# -C specifies the extraction path
[root@hadoop01 java]# tar -zxvf jdk-8u211-linux-x64.tar.gz -C /usr/local/

Configure the environment variables:

[root@hadoop01 java]# cd /usr/local/jdk1.8.0_211/
# print the current path so it can be copied
[root@hadoop01 jdk1.8.0_211]# pwd
/usr/local/jdk1.8.0_211
[root@hadoop01 jdk1.8.0_211]# vi /etc/profile
# append at the end of the file
    # JAVA_HOME
    export JAVA_HOME=/usr/local/jdk1.8.0_211
    export PATH=$PATH:$JAVA_HOME/bin

# reload the profile
[root@hadoop01 jdk1.8.0_211]# source /etc/profile
# after reloading, run java -version to check the configuration; output like the following means it succeeded
[root@hadoop01 jdk1.8.0_211]# java -version
java version "1.8.0_211"
Java(TM) SE Runtime Environment (build 1.8.0_211-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.211-b12, mixed mode)

6. Install MySQL 5.7

MySQL is installed online here via the official yum repository (the version can be switched easily and no extra follow-up configuration is needed); the offline method is not recommended.

6.1 Download the installation package from the official site https://dev.mysql.com/downloads/mysql/

image-20210923234724563

image-20210923234734831

image-20210923234738355

image-20210923234844145

image-20210923234816251

6.2 Install the repository RPM

[root@hadoop01 local]# cd /usr/software_installer/mysql/
[root@hadoop01 mysql]# ls
mysql80-community-release-el7-3.noarch.rpm
[root@hadoop01 mysql]# rpm -ivh mysql80-community-release-el7-3.noarch.rpm

6.3 Switch to the yum repository directory

[root@hadoop01 mysql]# cd /etc/yum.repos.d/
[root@hadoop01 yum.repos.d]# ls
CentOS-Base.repo       CentOS-fasttrack.repo  CentOS-Vault.repo
CentOS-CR.repo         CentOS-Media.repo      mysql-community.repo
CentOS-Debuginfo.repo  CentOS-Sources.repo    mysql-community-source.repo

6.4 Edit the repository configuration file to select the MySQL version: change enabled=1 to 0 in the 8.0 section and enabled=0 to 1 in the 5.7 section, as shown below:

image-20210923235249677
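
If you prefer not to edit the file by hand, the same switch can be flipped with yum-config-manager (a sketch; it needs the yum-utils package, and mysql57-community / mysql80-community are the sub-repository names defined in mysql-community.repo):

yum -y install yum-utils
yum-config-manager --disable mysql80-community
yum-config-manager --enable mysql57-community
# check which MySQL repositories are now enabled
yum repolist enabled | grep mysql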

6.5 Install

[root@hadoop01 yum.repos.d]# yum -y install mysql-community-server

As shown below, MySQL 5.7 has been installed successfully:

image-20210924001100811

image-20210924001103011

6.6 Start the MySQL service and look up the temporary password

[root@hadoop01 /]# service mysqld start
Redirecting to /bin/systemctl start mysqld.service
[root@hadoop01 /]# cd /var/lo
local/ lock/  log/
[root@hadoop01 /]# cd /var/log/
[root@hadoop01 log]# ll
total 632
drwxr-xr-x. 2 root   root      232 Sep 23  2021 anaconda
drwx------. 2 root   root       23 Sep 23  2021 audit
-rw-------. 1 root   root    17830 Sep 23 11:11 boot.log
-rw-------. 1 root   utmp        0 Sep 23  2021 btmp
drwxr-xr-x. 2 chrony chrony      6 Aug  8  2019 chrony
-rw-------. 1 root   root     1670 Sep 23 12:01 cron
-rw-r--r--. 1 root   root   123955 Sep 23 11:11 dmesg
-rw-r--r--. 1 root   root   123955 Sep 23  2021 dmesg.old
-rw-r-----. 1 root   root        0 Sep 23  2021 firewalld
-rw-r--r--. 1 root   root      193 Sep 23  2021 grubby_prune_debug
-rw-r--r--. 1 root   root   292000 Sep 23 12:09 lastlog
-rw-------. 1 root   root      750 Sep 23 12:09 maillog
-rw-------. 1 root   root   285474 Sep 23 12:12 messages
-rw-r-----. 1 mysql  mysql    4647 Sep 23 12:12 mysqld.log
drwxr-xr-x. 2 root   root        6 Sep 23  2021 rhsm
-rw-------. 1 root   root     7014 Sep 23 12:12 secure
-rw-------. 1 root   root        0 Sep 23  2021 spooler
-rw-------. 1 root   root    64000 Sep 23 12:09 tallylog
drwxr-xr-x. 2 root   root       23 Sep 23  2021 tuned
-rw-------. 1 root   root      719 Sep 23  2021 vmware-network.log
-rw-------. 1 root   root     3062 Sep 23 11:11 vmware-vgauthsvc.log.0
-rw-r--r--. 1 root   root     2957 Sep 23 11:11 vmware-vmsvc.log
-rw-rw-r--. 1 root   utmp     5376 Sep 23 11:56 wtmp
-rw-------. 1 root   root     2197 Sep 23 12:09 yum.log
[root@hadoop01 log]# cat mysqld.log

The temporary password appears in the log, as shown below:

image-20210924001357843
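
Instead of reading through the whole log, you can pull the temporary password out directly (the quoted wording below is the line MySQL 5.7 writes on first startup):

# prints a line like: A temporary password is generated for root@localhost: xxxxxxxx
grep 'temporary password' /var/log/mysqld.log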

6.7 Log in to MySQL and change the password

[root@hadoop01 log]# mysql -uroot -p
Enter password:
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.7.35

Copyright (c) 2000, 2021, Oracle and/or its affiliates.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
# changing the password directly fails because it does not meet the password policy; lower the policy first
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY '123456';
ERROR 1819 (HY000): Your password does not satisfy the current policy requirements
# lower the minimum password length
mysql> set global validate_password_length=4;
Query OK, 0 rows affected (0.00 sec)
# lower the password policy level
mysql> set global validate_password_policy=0;
Query OK, 0 rows affected (0.00 sec)
# change the password
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'root';
Query OK, 0 rows affected (0.00 sec)
# flush privileges so the change takes effect
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)

6.8 At this point a client on your local machine still cannot connect to the database on this server; remote access must be granted first

mysql> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
# allow the root account to connect from any host
mysql> update user set host='%' where user ='root';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0
# after the update, flush privileges again:
mysql> FLUSH PRIVILEGES;
Query OK, 0 rows affected (0.00 sec)
# then run the grant statement:
mysql> GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' WITH GRANT OPTION;
Query OK, 0 rows affected (0.00 sec)

Now a local client can connect to the database, as shown below:

image-20210924002440498

image-20210924002445068
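
To confirm remote access from another machine that has the MySQL client installed (the IP is a placeholder for this server's address, and root/root are the credentials set above):

mysql -h 192.168.xx.110 -uroot -proot -e "SELECT VERSION();"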

7. Configure hostname mappings between the servers

[root@hadoop01 hadoop]# vi /etc/hosts
# append the following
192.168.xx.110 hadoop01
192.168.xx.120 hadoop02
192.168.xx.130 hadoop03

image-20210924000042085
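
After updating /etc/hosts on every node, a quick ping by hostname confirms the mapping works:

ping -c 3 hadoop02
ping -c 3 hadoop03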

8. Disable the firewall

# stop the firewall temporarily (it will start again after a reboot)
[root@hadoop01 hadoop]# systemctl stop firewalld
# check the firewall status
[root@hadoop01 hadoop]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Thu 2021-09-23 12:01:48 EDT; 5s ago
     Docs: man:firewalld(1)
  Process: 719 ExecStart=/usr/sbin/firewalld --nofork --nopid $FIREWALLD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 719 (code=exited, status=0/SUCCESS)

Sep 23 11:11:49 hadoop01 systemd[1]: Starting firewalld - dynamic firewall daemon...
Sep 23 11:11:50 hadoop01 systemd[1]: Started firewalld - dynamic firewall daemon.
Sep 23 12:01:47 hadoop01 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Sep 23 12:01:48 hadoop01 systemd[1]: Stopped firewalld - dynamic firewall daemon.
# disable the firewall permanently; to start it again, use systemctl start firewalld
[root@hadoop01 hadoop]# systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@hadoop01 hadoop]# systemctl status firewalld
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)

Sep 23 11:11:49 hadoop01 systemd[1]: Starting firewalld - dynamic firewall daemon...
Sep 23 11:11:50 hadoop01 systemd[1]: Started firewalld - dynamic firewall daemon.
Sep 23 12:01:47 hadoop01 systemd[1]: Stopping firewalld - dynamic firewall daemon...
Sep 23 12:01:48 hadoop01 systemd[1]: Stopped firewalld - dynamic firewall daemon.
[root@hadoop01 hadoop]#

9. Set up passwordless SSH login (supplementary)

9.1 On each server, first generate its own key pair (public and private keys)

# just keep pressing Enter after running the command
[root@hadoop01 hadoop]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:fCz53Hr7rBSufE56rKZn/6nLT7+8nwkJtWOk5tv0hvs root@hadoop01
The key's randomart image is:
+---[RSA 2048]----+
|                 |
|                 |
|             o   |
|       . o  + .  |
|        S o+.+   |
|         =ooo.o  |
|          oo=+o  |
|         .+BO+++o|
|        .=*OO@XE*|
+----[SHA256]-----+
[root@hadoop01 hadoop]#

As shown, the public and private keys have been generated. All three servers must generate their own key pairs.

image-20210924000601263

9.2 Distribute the public key (remember to copy it to the machine itself as well)

[root@hadoop02 /]# ssh-copy-id hadoop01
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop01 (192.xx.xx.xx)' can't be established.
ECDSA key fingerprint is SHA256:VhX+gclYY0o3MgV/cQmDbXVw0qazKGjK3HekZ8cFVIc.
ECDSA key fingerprint is MD5:ea:e0:fa:98:35:9b:e6:3b:5a:59:2a:a4:e9:5c:cb:d9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop01's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop01'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop02 /]# ssh-copy-id hadoop02
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop02 (192.xx.xx.xx)' can't be established.
ECDSA key fingerprint is SHA256:VhX+gclYY0o3MgV/cQmDbXVw0qazKGjK3HekZ8cFVIc.
ECDSA key fingerprint is MD5:ea:e0:fa:98:35:9b:e6:3b:5a:59:2a:a4:e9:5c:cb:d9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop02's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop02'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop02 /]# ssh-copy-id hadoop03
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host 'hadoop03 (192.xx.xx.xx)' can't be established.
ECDSA key fingerprint is SHA256:VhX+gclYY0o3MgV/cQmDbXVw0qazKGjK3HekZ8cFVIc.
ECDSA key fingerprint is MD5:ea:e0:fa:98:35:9b:e6:3b:5a:59:2a:a4:e9:5c:cb:d9.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@hadoop03's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'hadoop03'"
and check to make sure that only the key(s) you wanted were added.

[root@hadoop02 /]#
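
The same distribution can be written as a short loop to run on each node once its key pair exists (a sketch relying on the hostname mapping from section 7):

for host in hadoop01 hadoop02 hadoop03; do
    ssh-copy-id "$host"
done
# verify: this should print the remote hostname without asking for a password
ssh hadoop01 hostname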

9.3 Copy files between servers

# distribute the JDK archive from hadoop01 to the specified path on hadoop03
[root@hadoop01 /]# scp -r /usr/software_installer/java/jdk-8u211-linux-x64.tar.gz hadoop03:/usr/software_installer/java/
jdk-8u211-linux-x64.tar.gz                                              100%  186MB  82.1MB/s   00:02
[root@hadoop01 /]#
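
With passwordless SSH in place, whole directories can be pushed the same way; for example (paths and target hosts are illustrative):

# copy the java package directory recursively to both of the other nodes
for host in hadoop02 hadoop03; do
    scp -r /usr/software_installer/java "$host":/usr/software_installer/
done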

10. Install and configure Hadoop

10.1 After downloading the Hadoop archive, upload it to the server.

image-20210924002634245

10.2 Extract

[root@hadoop01 hadoop]# tar -zxvf hadoop-2.7.3.tar.gz -C /usr/local/

10.3 Modify the Hadoop configuration files

[root@hadoop01 hadoop]# cd /usr/local/hadoop-2.7.3/etc/hadoop/
[root@hadoop01 hadoop]# ll
total 152
-rw-r--r--. 1 root root  4436 Aug 17  2016 capacity-scheduler.xml
-rw-r--r--. 1 root root  1335 Aug 17  2016 configuration.xsl
-rw-r--r--. 1 root root   318 Aug 17  2016 container-executor.cfg
-rw-r--r--. 1 root root   774 Aug 17  2016 core-site.xml
-rw-r--r--. 1 root root  3589 Aug 17  2016 hadoop-env.cmd
-rw-r--r--. 1 root root  4224 Aug 17  2016 hadoop-env.sh
-rw-r--r--. 1 root root  2598 Aug 17  2016 hadoop-metrics2.properties
-rw-r--r--. 1 root root  2490 Aug 17  2016 hadoop-metrics.properties
-rw-r--r--. 1 root root  9683 Aug 17  2016 hadoop-policy.xml
-rw-r--r--. 1 root root   775 Aug 17  2016 hdfs-site.xml
-rw-r--r--. 1 root root  1449 Aug 17  2016 httpfs-env.sh
-rw-r--r--. 1 root root  1657 Aug 17  2016 httpfs-log4j.properties
-rw-r--r--. 1 root root    21 Aug 17  2016 httpfs-signature.secret
-rw-r--r--. 1 root root   620 Aug 17  2016 httpfs-site.xml
-rw-r--r--. 1 root root  3518 Aug 17  2016 kms-acls.xml
-rw-r--r--. 1 root root  1527 Aug 17  2016 kms-env.sh
-rw-r--r--. 1 root root  1631 Aug 17  2016 kms-log4j.properties
-rw-r--r--. 1 root root  5511 Aug 17  2016 kms-site.xml
-rw-r--r--. 1 root root 11237 Aug 17  2016 log4j.properties
-rw-r--r--. 1 root root   931 Aug 17  2016 mapred-env.cmd
-rw-r--r--. 1 root root  1383 Aug 17  2016 mapred-env.sh
-rw-r--r--. 1 root root  4113 Aug 17  2016 mapred-queues.xml.template
-rw-r--r--. 1 root root   758 Aug 17  2016 mapred-site.xml.template
-rw-r--r--. 1 root root    10 Aug 17  2016 slaves
-rw-r--r--. 1 root root  2316 Aug 17  2016 ssl-client.xml.example
-rw-r--r--. 1 root root  2268 Aug 17  2016 ssl-server.xml.example
-rw-r--r--. 1 root root  2191 Aug 17  2016 yarn-env.cmd
-rw-r--r--. 1 root root  4567 Aug 17  2016 yarn-env.sh
-rw-r--r--. 1 root root   690 Aug 17  2016 yarn-site.xml

(1) Set Hadoop's runtime Java environment

[root@hadoop01 hadoop]# vi hadoop-env.sh
# change JAVA_HOME to the JDK directory
export JAVA_HOME=/usr/local/jdk1.8.0_211

image-20210924003338875

(2) Edit core-site.xml (this sets the default filesystem address that clients use, e.g. the HDFS API called from IDEA)

[root@hadoop01 hadoop]# vi core-site.xml
# add the following between <configuration> and </configuration>
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://hadoop01:9000</value>
	</property>

	<property>
		<name>hadoop.tmp.dir</name>
		<value>/usr/local/hadoop-2.7.3/tmp</value>
	</property>

image-20210924003625127

(3) Edit hdfs-site.xml (HDFS storage: locations of the metadata and the actual block data)

[root@hadoop01 hadoop]# vi hdfs-site.xml
# add the following between <configuration> and </configuration>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>/usr/local/hadoop-2.7.3/data/name</value>
	</property>

	<property>
		<name>dfs.datanode.data.dir</name>
		<value>/usr/local/hadoop-2.7.3/data/data</value>
	</property>

	<property>
		<name>dfs.replication</name>
		<value>3</value>
	</property>

	<property>
		<name>dfs.secondary.http.address</name>
		<value>hadoop01:50090</value>
	</property>

image-20210924003822752

(4) Configure mapred-site.xml

# rename the template so it becomes the active configuration file
[root@hadoop01 hadoop]# mv mapred-site.xml.template mapred-site.xml
[root@hadoop01 hadoop]# ls
capacity-scheduler.xml      hadoop-policy.xml        kms-log4j.properties        ssl-client.xml.example
configuration.xsl           hdfs-site.xml            kms-site.xml                ssl-server.xml.example
container-executor.cfg      httpfs-env.sh            log4j.properties            yarn-env.cmd
core-site.xml               httpfs-log4j.properties  mapred-env.cmd              yarn-env.sh
hadoop-env.cmd              httpfs-signature.secret  mapred-env.sh               yarn-site.xml
hadoop-env.sh               httpfs-site.xml          mapred-queues.xml.template
hadoop-metrics2.properties  kms-acls.xml             mapred-site.xml
hadoop-metrics.properties   kms-env.sh               slaves
[root@hadoop01 hadoop]# vi mapred-site.xml
# add the following between <configuration> and </configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>

image-20210924004024448

(5) Configure YARN resources

[root@hadoop01 hadoop]# vi yarn-site.xml
# add the following between <configuration> and </configuration>
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>hadoop01</value>
	</property>

	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>

image-20210924004213942

(6) Configure the worker (slave) nodes

[root@hadoop01 hadoop]# vi slaves
# change the contents to
hadoop02
hadoop03

(7) Configure environment variables

[root@hadoop01 hadoop-2.7.3]# vi /etc/profile
# append the following
# HADOOP_HOME
export HADOOP_HOME=/usr/local/hadoop-2.7.3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# reload
[root@hadoop01 hadoop-2.7.3]# source /etc/profile
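
After reloading the profile, a quick check confirms the Hadoop commands are on the PATH:

# the first line of the output should read: Hadoop 2.7.3
hadoop version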

11. Clone the server

11.1 Shut down the machine that will be cloned

11.2 Follow the steps shown in the figures

image-20210924004656772

image-20210924004704821image-20210924004708972

image-20210924004711562image-20210924004712921image-20210924004714654

11.3 Notes

After cloning, the new machine has exactly the same account, password, and network configuration as the original. Do not start both machines at the same time; first change the cloned machine's IP address (and hostname), otherwise an address conflict will occur.

image-20210924005026306

image-20210924005135200
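
A minimal sketch of what to adjust on each clone before bringing it back online (the addresses follow the hosts mapping in section 7; adapt them to your own subnet):

# example for the clone that will become hadoop02
hostnamectl set-hostname hadoop02
vi /etc/sysconfig/network-scripts/ifcfg-ens33   # change IPADDR to 192.168.xx.120
service network restart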

12. Start Hadoop

12.1 Before starting the cluster for the first time, be sure to format the NameNode

[root@hadoop01 /]# hadoop namenode -format
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

21/09/23 13:08:45 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = hadoop01/192.xx.xx.xx
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.7.3
STARTUP_MSG:   classpath = /usr/local/hadoop-2.7.3/etc/hadoop:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/activation-1.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/asm-3.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jets3t-0.9.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/httpclient-4.2.5.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/httpcore-4.2.5.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/java-xmlbuilder-0.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-configuration-1.6.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-digester-1.8.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/slf4j-api-1.7.10.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/xz-1.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/gson-2.2.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/hadoop-auth-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/apacheds-kerberos-codec-2.0.0-M15.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/apacheds-i18n-2.0.0-M15.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/api-asn1-api-1.0.0-M20.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/api-util-1.0.0-M20.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/curator-framework-2.7.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/curator-client-2.7.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jsch-0.1.42.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/curator-recipes-2.7.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/junit-4.11.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/mockito-all-1.8.5.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/hadoop-annotations-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-math3-3.1.1.jar:/usr/local/hadoop-2.7.3/share/hado
op/common/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-httpclient-3.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-net-3.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jsp-api-2.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/hadoop-common-2.7.3-tests.jar:/usr/local/hadoop-2.7.3/share/hadoop/common/hadoop-nfs-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/asm-3.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/htrace-core-3.1.0-incubating.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/netty-all-4.0.23.Final.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/xercesImpl-2.9.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/xml-apis-1.3.04.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-2.7.3-tests.jar:/usr/local/hadoop-2.7.3/share/hadoop/hdfs/hadoop-hdfs-nfs-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6-tests.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/commons-lang-2.6.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/guava-11.0.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jsr305-3.0.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/commons-logging-1.1.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/commons-cli-1.2.jar:/usr/loca
l/hadoop-2.7.3/share/hadoop/yarn/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-api-2.2.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/stax-api-1.0-2.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/activation-1.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/xz-1.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/servlet-api-2.5.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/commons-codec-1.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-util-6.1.26.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-client-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-jaxrs-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jackson-xc-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/guice-3.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/asm-3.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-json-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jettison-1.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jaxb-impl-2.2.3-1.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/zookeeper-3.4.6.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/commons-collections-3.2.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/lib/jetty-6.1.26.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-api-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-common-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-common-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-applicationhistoryservice-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-tests-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-client-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-server-sharedcachemanager-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/yarn/hadoop-yarn-registry-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-core-asl-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.9.13.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/snapp
y-java-1.0.4.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/xz-1.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/hadoop-annotations-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/commons-io-2.4.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/asm-3.2.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/leveldbjni-all-1.8.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-3.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/javax.inject-1.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/junit-4.11.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/lib/hamcrest-core-1.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar:/usr/local/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.7.3-tests.jar:/usr/local/hadoop-2.7.3/contrib/capacity-scheduler/*.jar:/usr/local/hadoop-2.7.3/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r baa91f7c6bc9cb92be5982de4719c1c8af91ccff; compiled by 'root' on 2016-08-18T01:41Z
STARTUP_MSG:   java = 1.8.0_211
************************************************************/
21/09/23 13:08:45 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
21/09/23 13:08:45 INFO namenode.NameNode: createNameNode [-format]
21/09/23 13:08:46 WARN common.Util: Path /usr/local/hadoop-2.7.3/data/name should be specified as a URI in configuration files. Please update hdfs configuration.
21/09/23 13:08:46 WARN common.Util: Path /usr/local/hadoop-2.7.3/data/name should be specified as a URI in configuration files. Please update hdfs configuration.
Formatting using clusterid: CID-cbb0ccb6-039f-4d59-8dae-a254ebda7acf
21/09/23 13:08:46 INFO namenode.FSNamesystem: No KeyProvider found.
21/09/23 13:08:46 INFO namenode.FSNamesystem: fsLock is fair:true
21/09/23 13:08:46 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
21/09/23 13:08:46 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
21/09/23 13:08:46 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
21/09/23 13:08:46 INFO blockmanagement.BlockManager: The block deletion will start around 2021 Sep 23 13:08:46
21/09/23 13:08:46 INFO util.GSet: Computing capacity for map BlocksMap
21/09/23 13:08:46 INFO util.GSet: VM type       = 64-bit
21/09/23 13:08:46 INFO util.GSet: 2.0% max memory 966.7 MB = 19.3 MB
21/09/23 13:08:46 INFO util.GSet: capacity      = 2^21 = 2097152 entries
21/09/23 13:08:46 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
21/09/23 13:08:46 INFO blockmanagement.BlockManager: defaultReplication         = 3
21/09/23 13:08:46 INFO blockmanagement.BlockManager: maxReplication             = 512
21/09/23 13:08:46 INFO blockmanagement.BlockManager: minReplication             = 1
21/09/23 13:08:46 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
21/09/23 13:08:46 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
21/09/23 13:08:46 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
21/09/23 13:08:46 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
21/09/23 13:08:46 INFO namenode.FSNamesystem: fsOwner             = root (auth:SIMPLE)
21/09/23 13:08:46 INFO namenode.FSNamesystem: supergroup          = supergroup
21/09/23 13:08:46 INFO namenode.FSNamesystem: isPermissionEnabled = true
21/09/23 13:08:46 INFO namenode.FSNamesystem: HA Enabled: false
21/09/23 13:08:46 INFO namenode.FSNamesystem: Append Enabled: true
21/09/23 13:08:47 INFO util.GSet: Computing capacity for map INodeMap
21/09/23 13:08:47 INFO util.GSet: VM type       = 64-bit
21/09/23 13:08:47 INFO util.GSet: 1.0% max memory 966.7 MB = 9.7 MB
21/09/23 13:08:47 INFO util.GSet: capacity      = 2^20 = 1048576 entries
21/09/23 13:08:47 INFO namenode.FSDirectory: ACLs enabled? false
21/09/23 13:08:47 INFO namenode.FSDirectory: XAttrs enabled? true
21/09/23 13:08:47 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
21/09/23 13:08:47 INFO namenode.NameNode: Caching file names occuring more than 10 times
21/09/23 13:08:47 INFO util.GSet: Computing capacity for map cachedBlocks
21/09/23 13:08:47 INFO util.GSet: VM type       = 64-bit
21/09/23 13:08:47 INFO util.GSet: 0.25% max memory 966.7 MB = 2.4 MB
21/09/23 13:08:47 INFO util.GSet: capacity      = 2^18 = 262144 entries
21/09/23 13:08:47 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
21/09/23 13:08:47 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
21/09/23 13:08:47 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
21/09/23 13:08:47 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
21/09/23 13:08:47 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
21/09/23 13:08:47 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
21/09/23 13:08:47 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
21/09/23 13:08:47 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
21/09/23 13:08:47 INFO util.GSet: Computing capacity for map NameNodeRetryCache
21/09/23 13:08:47 INFO util.GSet: VM type       = 64-bit
21/09/23 13:08:47 INFO util.GSet: 0.029999999329447746% max memory 966.7 MB = 297.0 KB
21/09/23 13:08:47 INFO util.GSet: capacity      = 2^15 = 32768 entries
21/09/23 13:08:47 INFO namenode.FSImage: Allocated new BlockPoolId: BP-866472713-192.xx.xx.xx-1632416927282
21/09/23 13:08:47 INFO common.Storage: Storage directory /usr/local/hadoop-2.7.3/data/name has been successfully formatted.
21/09/23 13:08:47 INFO namenode.FSImageFormatProtobuf: Saving image file /usr/local/hadoop-2.7.3/data/name/current/fsimage.ckpt_0000000000000000000 using no compression
21/09/23 13:08:47 INFO namenode.FSImageFormatProtobuf: Image file /usr/local/hadoop-2.7.3/data/name/current/fsimage.ckpt_0000000000000000000 of size 351 bytes saved in 0 seconds.
21/09/23 13:08:47 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
21/09/23 13:08:47 INFO util.ExitUtil: Exiting with status 0
21/09/23 13:08:47 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop01/192.xx.xx.xx
************************************************************/
[root@hadoop01 /]#

12.2 Start the cluster

[root@hadoop01 /]# start-all.sh
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /usr/local/hadoop-2.7.3/logs/hadoop-root-namenode-hadoop01.out
hadoop03: starting datanode, logging to /usr/local/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /usr/local/hadoop-2.7.3/logs/hadoop-root-datanode-hadoop02.out
Starting secondary namenodes [hadoop01]
hadoop01: starting secondarynamenode, logging to /usr/local/hadoop-2.7.3/logs/hadoop-root-secondarynamenode-hadoop01.out
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop-2.7.3/logs/yarn-root-resourcemanager-hadoop01.out
hadoop02: starting nodemanager, logging to /usr/local/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop02.out
hadoop03: starting nodemanager, logging to /usr/local/hadoop-2.7.3/logs/yarn-root-nodemanager-hadoop03.out
[root@hadoop01 /]#
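
Once start-all.sh returns, running jps (shipped with the JDK) on each node should show the expected daemons; with the configuration above, roughly:

# on hadoop01: NameNode, SecondaryNameNode, ResourceManager (plus Jps)
jps
# on hadoop02 and hadoop03: DataNode, NodeManager (plus Jps)
jps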

After a successful start, the daemons running on the three servers look like this:

image-20210924011054080

Visit the web UI at 192.xx.xx.xx:50070 and you will see:

image-20210924011402873

At this point, the Hadoop environment setup is complete!

13. HDFS shell commands

# create directories
[root@hadoop01 /]# hadoop fs -mkdir /software_installer
[root@hadoop01 /]# hadoop fs -mkdir /xx
# remove a directory
[root@hadoop01 /]# hadoop fs -rm -r /xx
21/09/23 13:19:22 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /xx
# upload files
[root@hadoop01 /]# hadoop fs -put /usr/software_installer/ /software_installer/
# move files
[root@hadoop01 /]# hadoop fs -mv /software_installer/software_installer/* /software_installer
# remove a directory
[root@hadoop01 /]# hadoop fs -rm -r /software_installer/software_install
rm: `/software_installer/software_install': No such file or directory
[root@hadoop01 /]# hadoop fs -rm -r /software_installer/software_installer
21/09/23 13:24:45 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /software_installer/software_installer
[root@hadoop01 /]#
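
A few other everyday commands, for reference (the paths and file names are illustrative):

# list a directory
hadoop fs -ls /software_installer
# print the contents of a file stored in HDFS
hadoop fs -cat /software_installer/somefile.txt
# download a file from HDFS to the local filesystem
hadoop fs -get /software_installer/somefile.txt /tmp/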

image-20210924011823956

image-20210924012501841

image-20210924012518810

Appendix

I have collected the software packages used in this guide and shared them on my Aliyun Drive (阿里云盘) as「Hadoop环境搭建」; downloads are not speed-limited.
Link: Hadoop环境搭建软件包
