Prerequisite: prepare three virtual machines that can communicate with each other. For a VM creation walkthrough, see my previous post.
Default port changes from Hadoop 2.x to Hadoop 3.x

1. Writing a cluster distribution script

Difference between rsync and scp: rsync is faster for copying files because it only transfers files that differ, whereas scp copies every file each time.

# scp    -r          $pdir/$fname              $user@hadoop$host:$pdir/$fname
# command  recursive  source path/name          destination user@host:destination path/name
scp -r /opt/module/jdk1.8.0_212  atguigu@hadoop103:/opt/module
# rsync    -av       $pdir/$fname              $user@hadoop$host:$pdir/$fname
# command  options   source path/name          destination user@host:destination path/name
rsync -av /opt/software/* atguigu@hadoop103:/opt/software
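To see the incremental behavior for yourself, a dry run (-n) lists only what would actually be transferred. A small sketch, reusing the paths and host from the example above: do a full sync first, touch one file, and re-run with --dry-run; only the touched file shows up in the transfer list.

# full sync, then change one file's mtime, then dry-run to list only the difference
rsync -av /opt/software/* atguigu@hadoop103:/opt/software
touch /opt/software/hadoop-3.1.3.tar.gz
rsync -avn /opt/software/* atguigu@hadoop103:/opt/software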

The Linux cd command

# cd -P /var/lock switches directly to the real path the symlink points to, /run/lock/
[atguigu@hadoop104 ~]$ cd /var/
lrwxrwxrwx.  1 root root   11 12月  7 22:13 lock -> ../run/lock
[atguigu@hadoop103 lock]$ cd -P /var/lock;pwd
/run/lock
[atguigu@hadoop103 lock]$ cd /var/lock;pwd
/var/lock

Difference between mkdir and mkdir -p in the script

# mkdir -p: create directories recursively; even if the parent directories do not exist, they are created level by level
[atguigu@hadoop103 software]$ mkdir aa/bb;ll
mkdir: 无法创建目录"aa/bb": 没有那个文件或目录
总用量 520604
-rw-r--r--. 1 atguigu atguigu 338075860 10月 17 08:44 hadoop-3.1.3.tar.gz
-rw-r--r--. 1 atguigu atguigu 195013152 10月 17 08:44 jdk-8u212-linux-x64.tar.gz
[atguigu@hadoop103 software]$ mkdir -p aa/bb;ll
总用量 520608
drwxrwxr-x. 3 atguigu atguigu      4096 12月 11 17:02 aa
-rw-r--r--. 1 atguigu atguigu 338075860 10月 17 08:44 hadoop-3.1.3.tar.gz
-rw-r--r--. 1 atguigu atguigu 195013152 10月 17 08:44 jdk-8u212-linux-x64.tar.gz

The xsync cluster distribution script

# Note: scripts placed in /home/atguigu/bin can be run by the atguigu user from anywhere on the system
[atguigu@hadoop102 ~]$ cd /home/atguigu
[atguigu@hadoop102 ~]$ ll
总用量 0
[atguigu@hadoop102 ~]$ cd /home/atguigu
[atguigu@hadoop102 ~]$ mkdir bin
[atguigu@hadoop102 ~]$ cd bin
[atguigu@hadoop102 bin]$ vim xsync
[atguigu@hadoop102 bin]$ cat xsync 
#!/bin/bash
#1. Check the number of arguments
if [ $# -lt 1 ]
then
  echo "Not Enough Arguments!"
  exit;
fi
#2. Loop over every machine in the cluster
for host in hadoop102 hadoop103 hadoop104
do
  echo ====================  $host  ====================
  #3. Loop over every path argument and send each one
  for file in $@
  do
    #4. Check whether the file exists
    if [ -e $file ]
    then
      #5. Resolve the (physical) parent directory
      pdir=$(cd -P $(dirname $file); pwd)
      #6. Get the file name
      fname=$(basename $file)
      ssh $host "mkdir -p $pdir"
      rsync -av $pdir/$fname $host:$pdir
    else
      echo $file does not exist!
    fi
  done
done

[atguigu@hadoop102 bin]$ chmod +x xsync
[atguigu@hadoop102 bin]$ sudo cp xsync /bin/
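A quick sanity check of the script, reusing the bin directory created above (any existing path works as an argument):

# distribute the whole ~/bin directory (including xsync itself) to every node in the list
xsync /home/atguigu/bin
# the script is on the PATH, so it can be run from any working directory
cd /opt && xsync /home/atguigu/bin/xsync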

2. SSH remote login and logout

systemctl vs. service
The concept of units in Linux service management

# Install the SSH server (usually already present on CentOS 7)
[atguigu@hadoop102 ~]$ sudo yum install openssh-server
# Check whether SSH is installed
[atguigu@hadoop102 ~]$ rpm -qa | grep ssh
openssh-7.4p1-16.el7.x86_64
openssh-server-7.4p1-16.el7.x86_64
openssh-clients-7.4p1-16.el7.x86_64
libssh2-1.4.3-10.el7_2.1.x86_64
# Stop the SSH service
[atguigu@hadoop102 ~]$ sudo systemctl stop sshd
# Start the SSH service
[atguigu@hadoop102 ~]$ sudo systemctl start sshd
# Restart the SSH service
[atguigu@hadoop102 ~]$ sudo systemctl restart sshd
# Check whether port 22 is listening
[atguigu@hadoop102 ~]$ sudo netstat -antp | grep sshd
tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      4695/sshd           
tcp        0      0 127.0.0.1:6010          0.0.0.0:*               LISTEN      4500/sshd: atguigu@ 
tcp        0      0 127.0.0.1:6011          0.0.0.0:*               LISTEN      2325/sshd: atguigu@ 
tcp        0      0 192.168.1.34:22         192.168.1.11:63562      ESTABLISHED 2319/sshd: atguigu  
tcp        0      0 192.168.1.34:22         192.168.1.11:55215      ESTABLISHED 4495/sshd: atguigu  
tcp6       0      0 :::22                   :::*                    LISTEN      4695/sshd           
tcp6       0      0 ::1:6010                :::*                    LISTEN      4500/sshd: atguigu@ 
tcp6       0      0 ::1:6011                :::*                    LISTEN      2325/sshd: atguigu@ 
# Enable SSH to start on boot
[atguigu@hadoop102 ~]$ sudo systemctl enable sshd
# Disable SSH starting on boot
[atguigu@hadoop102 ~]$ sudo systemctl disable sshd
# Log in to another machine
[atguigu@hadoop102 ~]$ ssh hadoop103
# Log out of the remote machine: exit, logout, or Ctrl + D
[atguigu@hadoop103 ~]$ logout
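Besides an interactive login, ssh can also run a single command on the remote host and return immediately, which is exactly what the xsync script above and the cluster scripts below rely on. A quick check:

# run one command on hadoop103 without opening an interactive shell
ssh hadoop103 hostname
# quote the command if it contains pipes or redirection
ssh hadoop103 "df -h | head -n 2"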

3. Passwordless SSH login (configure this on every machine that needs to log in to the others without a password)

PS: on hadoop102, also configure it once for the root account (used specifically for transferring files that require root privileges).

# Generate a key pair (public + private) on hadoop102; press Enter three times after the command
[atguigu@hadoop102 ~]$ ssh-keygen -t rsa
# Copy the public key to hadoop102, one of the passwordless-login targets
[atguigu@hadoop102 ~]$ ssh-copy-id hadoop102
# Copy the public key to hadoop103
[atguigu@hadoop102 ~]$ ssh-copy-id hadoop103
# Copy the public key to hadoop104
[atguigu@hadoop102 ~]$ ssh-copy-id hadoop104
# Directory where the passwordless-login files are generated
[atguigu@hadoop102 .ssh]$ pwd
/home/atguigu/.ssh
# Files generated for passwordless login
[atguigu@hadoop102 .ssh]$ ll
总用量 16
-rw-------. 1 atguigu atguigu 1197 12月 11 20:19 authorized_keys # public keys allowed to log in to this machine without a password
-rw-------. 1 atguigu atguigu 1675 12月 11 20:10 id_rsa #  the generated private key
-rw-r--r--. 1 atguigu atguigu  399 12月 11 20:10 id_rsa.pub # the generated public key
-rw-r--r--. 1 atguigu atguigu  552 12月 11 17:07 known_hosts # host keys of servers this machine has already connected to over ssh
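ssh-copy-id is only a convenience wrapper. A rough sketch of what it does under the hood, useful if the command is unavailable: append the local public key to the remote authorized_keys and tighten the permissions.

# roughly equivalent to ssh-copy-id hadoop103, done by hand
cat ~/.ssh/id_rsa.pub | ssh hadoop103 \
  "mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 700 ~/.ssh && chmod 600 ~/.ssh/authorized_keys"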

4. Hadoop configuration files

About the file:/// protocol
Finally, remember to distribute the finished Hadoop configuration files across the cluster.

[atguigu@hadoop102 hadoop]$ xsync /opt/module/hadoop-3.1.3/etc/hadoop/
# Unpack Hadoop
[atguigu@hadoop102 ~]$ tar -zxvf /opt/software/hadoop-3.1.3.tar.gz -C /opt/module/
# The four important configuration files
[atguigu@hadoop102 hadoop]$ ll | grep site
-rw-r--r--. 1 atguigu atguigu   774 9月 12 2019 core-site.xml
-rw-r--r--. 1 atguigu atguigu   775 9月 12 2019 hdfs-site.xml
-rw-r--r--. 1 atguigu atguigu   620 9月 12 2019 httpfs-site.xml # not needed here
-rw-r--r--. 1 atguigu atguigu   682 9月 12 2019 kms-site.xml # not needed here
-rw-r--r--. 1 atguigu atguigu   758 9月 12 2019 mapred-site.xml
-rw-r--r--. 1 atguigu atguigu   690 9月 12 2019 yarn-site.xml
# mainly points Hadoop at the JDK; optional in this setup
[atguigu@hadoop102 hadoop]$ ll | grep hadoop-env
-rw-r--r--. 1 atguigu atguigu  3999 9月 12 2019 hadoop-env.cmd # not needed here
-rw-r--r--. 1 atguigu atguigu 15903 9月 12 2019 hadoop-env.sh
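If you do want hadoop-env.sh to point at the JDK explicitly, the only line that normally needs editing is JAVA_HOME. The path below is the JDK unpacked earlier in this walkthrough; adjust it if yours lives elsewhere.

# in /opt/module/hadoop-3.1.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/opt/module/jdk1.8.0_212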

Core configuration file (global settings): core-site.xml

HDFS: ProxyUser in Hadoop
fs.default.name vs. fs.defaultFS in Hadoop's core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <!-- Specify the NameNode address -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop102:9820</value>
    </property>

    <!-- Specify the directory where Hadoop stores its data -->
    <property>
        <name>hadoop.tmp.dir</name>
        <!-- JIEKY: under the Hadoop install directory /opt/module/hadoop-3.1.3 -->
        <value>/opt/module/hadoop-3.1.3/data</value>
    </property>

    <!-- JIEKY: permission setting for the atguigu user when accessing HDFS; without it atguigu cannot perform some operations -->
    <!-- Set the static user for HDFS web UI logins to atguigu -->
    <property>
        <name>hadoop.http.staticuser.user</name>
        <value>atguigu</value>
    </property>

    <!-- JIEKY: needed when using Hive -->
    <!-- Hosts from which the atguigu (superuser) is allowed to act as a proxy -->
    <property>
        <name>hadoop.proxyuser.atguigu.hosts</name>
        <value>*</value>
    </property>
    <!-- Groups whose members the atguigu (superuser) may impersonate -->
    <property>
        <name>hadoop.proxyuser.atguigu.groups</name>
        <value>*</value>
    </property>
    <!-- Users the atguigu (superuser) may impersonate -->
    <property>
        <name>hadoop.proxyuser.atguigu.users</name>
        <value>*</value>
    </property>

</configuration>

HDFS configuration file: hdfs-site.xml

Configure the web addresses users access for the NameNode and the Secondary NameNode.

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
	<!-- NameNode (NN) web UI address -->
	<property>
        <name>dfs.namenode.http-address</name>
        <value>hadoop102:9870</value>
    </property>
	<!-- Secondary NameNode (2NN) web UI address -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop104:9868</value>
    </property>
</configuration>
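After the cluster is started in section 5, these two addresses can be sanity-checked from the shell (assuming curl is available); both should return HTTP 200.

curl -s -o /dev/null -w "%{http_code}\n" http://hadoop102:9870
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop104:9868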

YARN configuration file: yarn-site.xml

The MapReduce shuffle process in detail

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
    <!-- Tell MR to use the shuffle auxiliary service -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Specify the ResourceManager address -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop103</value>
    </property>

    <!-- JIEKY: environment variables YARN needs at runtime; this is the default list, just include it -->
    <!-- Environment variable inheritance -->
    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>

    <!-- JIEKY: optional settings -->
    <!-- Minimum and maximum memory a YARN container may be allocated -->
    <property>
        <name>yarn.scheduler.minimum-allocation-mb</name>
        <!-- Usually left at the default; set low here in case a personal machine is short on memory -->
        <value>512</value>
    </property>
    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>4096</value>
    </property>
    <!-- Total physical memory the NodeManager may hand out to containers -->
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>4096</value>
    </property>

    <!-- JIEKY: turned off so containers are not killed by the memory checks; MR retries failed tasks anyway -->
    <!-- Disable YARN's physical and virtual memory limit checks -->
    <property>
        <name>yarn.nodemanager.pmem-check-enabled</name>
        <value>false</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

    <!-- JIEKY: enable job history -->
    <!-- Internal communication address; keeps the execution history -->
    <!-- JobHistory server address -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>hadoop102:10020</value>
    </property>
    <!-- Address users visit in the browser -->
    <!-- JobHistory server web UI address -->
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>hadoop102:19888</value>
    </property>

    <!-- JIEKY: enable log aggregation -->
    <!-- Enable log aggregation -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <!-- Log aggregation server URL -->
    <property>
        <name>yarn.log.server.url</name>
        <value>http://hadoop102:19888/jobhistory/logs</value>
    </property>
    <!-- Keep aggregated logs for 7 days -->
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>

</configuration>

MapReduce configuration file: mapred-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<configuration>
	<!-- Run MapReduce programs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

# Distribute the config files to the other two machines
[atguigu@hadoop102 hadoop]$ xsync /opt/module/hadoop-3.1.3/etc/hadoop/
# Verify on another node that the distribution succeeded
[atguigu@hadoop103 hadoop]$ cat mapred-site.xml 
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
	<!-- Run MapReduce programs on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

5. Starting the whole cluster

How to check which ports are in use on Linux

# Configure the workers file
[atguigu@hadoop102 hadoop]$ vim workers 
# Distribute the configured hadoop-3.1.3
[atguigu@hadoop102 hadoop]$ xsync /opt/module/hadoop-3.1.3
# Format the NameNode
# Before re-formatting the NameNode, always stop the namenode and datanode processes first, and delete the data and logs directories on every machine
[atguigu@hadoop102 hadoop]$ hdfs namenode -format
# Start the daemons one at a time; the NameNode web UI is at hadoop102:9870
[atguigu@hadoop102 hadoop-3.1.3]$ hdfs --daemon start namenode
[atguigu@hadoop102 hadoop-3.1.3]$ jps
9222 NameNode
9288 Jps
# Run this on hadoop102, hadoop103, and hadoop104
[atguigu@hadoop102 hadoop-3.1.3]$ hdfs --daemon start datanode
# hadoop104: the 2NN is rarely relied on in production; this is just for practice
[atguigu@hadoop104 hadoop-3.1.3]$ hdfs --daemon start secondarynamenode
# hadoop103: ResourceManager
[atguigu@hadoop103 hadoop-3.1.3]$ yarn --daemon start resourcemanager
# Run this on hadoop102, hadoop103, and hadoop104
[atguigu@hadoop102 hadoop-3.1.3]$ yarn --daemon start nodemanager

Hadoop 3.1.3 commands for starting/stopping daemons one at a time

hdfs --daemon start namenode # hadoop102
hdfs --daemon start secondarynamenode # hadoop104
yarn --daemon start resourcemanager # hadoop103
hdfs --daemon start datanode # hadoop102、hadoop103、hadoop104
yarn --daemon start nodemanager # hadoop102、hadoop103、hadoop104

hdfs --daemon stop namenode # hadoop102
hdfs --daemon stop secondarynamenode # hadoop104
yarn --daemon stop resourcemanager # hadoop103
hdfs --daemon stop datanode # hadoop102、hadoop103、hadoop104
yarn --daemon stop nodemanager # hadoop102、hadoop103、hadoop104
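Starting the worker daemons host by host gets tedious quickly; a small sketch that drives them over ssh (same paths and host names as above, change start to stop to shut them down):

# start datanode and nodemanager on every worker over ssh (absolute paths, since a
# non-interactive ssh session may not have the Hadoop bin directory on its PATH)
for host in hadoop102 hadoop103 hadoop104
do
    echo ==================== $host ====================
    ssh $host "/opt/module/hadoop-3.1.3/bin/hdfs --daemon start datanode"
    ssh $host "/opt/module/hadoop-3.1.3/bin/yarn --daemon start nodemanager"
done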

6. Cleaning up /tmp

# Under normal circumstances, the permissions on /tmp are:
[atguigu@hadoop102 tmp]$ ls -l /
drwxrwxrwt.  19 root root  4096 12月 12 18:16 tmp
# Every user may create files and directories here, but because of the sticky bit (the trailing t) a user can only delete or rename their own files, and cannot modify files owned by other users
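A quick way to see the sticky bit in action on a scratch directory (the /opt/scratch path is just an example for this demo):

sudo mkdir /opt/scratch
sudo chmod 1777 /opt/scratch            # the leading 1 sets the sticky bit -> drwxrwxrwt
sudo touch /opt/scratch/owned-by-root
rm /opt/scratch/owned-by-root           # fails for atguigu: the sticky bit protects other users' files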

7. Using Hadoop's own cluster start scripts

Prerequisite: passwordless SSH login is already configured.

# The workers file lists the hosts on which DataNodes and NodeManagers will be started
[atguigu@hadoop102 hadoop-3.1.3]$ vim /opt/module/hadoop-3.1.3/etc/hadoop/workers 
[atguigu@hadoop102 hadoop-3.1.3]$ cat /opt/module/hadoop-3.1.3/etc/hadoop/workers 
hadoop102
hadoop103
hadoop104
# Distribute the config file
[atguigu@hadoop102 hadoop-3.1.3]$ xsync /opt/module/hadoop-3.1.3/etc/hadoop/workers
# Start the namenodes, secondary namenodes, and datanodes
[atguigu@hadoop102 hadoop-3.1.3]$ start-dfs.sh 
Starting namenodes on [hadoop102]
Starting datanodes
Starting secondary namenodes [hadoop104]
# Start the resourcemanager and nodemanagers (run on hadoop103, where the RM is configured)
[atguigu@hadoop103 hadoop-3.1.3]$ start-yarn.sh
Starting resourcemanager
Starting nodemanagers
# Check what is running on each node
[atguigu@hadoop102 hadoop-3.1.3]$ jps
6576 DataNode
6980 Jps
6422 NameNode
6859 NodeManager
[atguigu@hadoop104 hadoop-3.1.3]$ jps
5604 DataNode
5813 NodeManager
5688 SecondaryNameNode
5950 Jps
[atguigu@hadoop103 hadoop-3.1.3]$ jps
6496 ResourceManager
6629 NodeManager
6316 DataNode
6988 Jps

Custom convenience scripts

# Cluster start/stop script
[atguigu@hadoop102 bin]$ cat myclusters.sh 
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit ;
fi
case $1 in
"start")
        echo " =================== Starting the Hadoop cluster ==================="

        echo " --------------- starting hdfs ---------------"
        ssh hadoop102 "/opt/module/hadoop-3.1.3/sbin/start-dfs.sh"
        echo " --------------- starting yarn ---------------"
        ssh hadoop103 "/opt/module/hadoop-3.1.3/sbin/start-yarn.sh"
        echo " --------------- starting historyserver ---------------"
        ssh hadoop102 "/opt/module/hadoop-3.1.3/bin/mapred --daemon start historyserver"
;;
"stop")
        echo " =================== Stopping the Hadoop cluster ==================="

        echo " --------------- stopping historyserver ---------------"
        ssh hadoop102 "/opt/module/hadoop-3.1.3/bin/mapred --daemon stop historyserver"
        echo " --------------- stopping yarn ---------------"
        ssh hadoop103 "/opt/module/hadoop-3.1.3/sbin/stop-yarn.sh"
        echo " --------------- stopping hdfs ---------------"
        ssh hadoop102 "/opt/module/hadoop-3.1.3/sbin/stop-dfs.sh"
;;
*)
    echo "Input Args Error..."
;;
esac

# Monitoring script
[atguigu@hadoop102 bin]$ cat jpsall 
#!/bin/bash
for host in hadoop102 hadoop103 hadoop104
do
        echo =============== $host ===============
        ssh $host jps $@ | grep -v Jps
done
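Both scripts live in ~/bin like xsync, so after making them executable they can be called from anywhere. A minimal usage sketch:

chmod +x ~/bin/myclusters.sh ~/bin/jpsall
myclusters.sh start    # bring up HDFS, YARN, and the history server
jpsall                 # list the Java processes on all three nodes
myclusters.sh stop     # shut everything down again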

8. Configuring time synchronization

# Check whether ntpd is enabled to start on boot, on every VM
[atguigu@hadoop102 ~]$ sudo systemctl is-enabled ntpd
# Stop the ntpd service and disable autostart
[atguigu@hadoop102 ~]$ sudo systemctl stop ntpd
[atguigu@hadoop102 ~]$ sudo systemctl disable ntpd
# Configure hadoop102 as the time server
[atguigu@hadoop102 ~]$ sudo vim /etc/ntp.conf
[atguigu@hadoop102 ~]$ sudo vim /etc/sysconfig/ntpd
# Start the ntpd service on hadoop102 and enable autostart
[atguigu@hadoop102 ~]$ sudo systemctl start ntpd
[atguigu@hadoop102 ~]$ sudo systemctl enable ntpd
# A quick experiment: set a wrong time on hadoop104 and watch it come back
[atguigu@hadoop104 ~]$ sudo date -s "2017-9-11 11:11:11"
2017年 09月 11日 星期一 11:11:11 CST
[atguigu@hadoop104 ~]$ sudo date
2017年 09月 11日 星期一 11:11:20 CST
[atguigu@hadoop104 ~]$ sudo date
2021年 12月 13日 星期一 22:12:17 CST
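# Note (assumption, not shown above): the jump back to the correct time relies on the
# other nodes periodically pulling time from hadoop102, e.g. via a root crontab entry
# on hadoop103/hadoop104 (requires ntpdate to be installed):
#   */1 * * * * /usr/sbin/ntpdate hadoop102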
# Contents of the two configuration files on hadoop102
[atguigu@hadoop102 ~]$ cat /etc/ntp.conf 
# For more information about this file, see the man pages
# ntp.conf(5), ntp_acc(5), ntp_auth(5), ntp_clock(5), ntp_misc(5), ntp_mon(5).

driftfile /var/lib/ntp/drift

# Permit time synchronization with our time source, but do not
# permit the source to query or modify the service on this system.
restrict default nomodify notrap nopeer noquery

# Permit all access over the loopback interface.  This could
# be tightened as well, but to do so would effect some of
# the administrative functions.
restrict 127.0.0.1 
restrict ::1

# Hosts on local network are less restricted.
#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap
restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst

server 127.127.1.0
fudge 127.127.1.0 stratum 10

#broadcast 192.168.1.255 autokey	# broadcast server
#broadcastclient			# broadcast client
#broadcast 224.0.1.1 autokey		# multicast server
#multicastclient 224.0.1.1		# multicast client
#manycastserver 239.255.254.254		# manycast server
#manycastclient 239.255.254.254 autokey # manycast client

# Enable public key cryptography.
#crypto

includefile /etc/ntp/crypto/pw

# Key file containing the keys and key identifiers used when operating
# with symmetric key cryptography. 
keys /etc/ntp/keys

# Specify the key identifiers which are trusted.
#trustedkey 4 8 42

# Specify the key identifier to use with the ntpdc utility.
#requestkey 8

# Specify the key identifier to use with the ntpq utility.
#controlkey 8

# Enable writing of statistics records.
#statistics clockstats cryptostats loopstats peerstats

# Disable the monitoring facility to prevent amplification attacks using ntpdc
# monlist command when default restrict does not include the noquery flag. See
# CVE-2013-5211 for more details.
# Note: Monitoring will not be disabled with the limited restriction flag.
disable monitor
[atguigu@hadoop102 ~]$ cat /etc/sysconfig/ntpd
# Command line options for ntpd
OPTIONS="-g"

SYNC_HWCLOCK=yes

9. Adding a per-machine configuration for switching LANs

[atguigu@hadoop102 ~]$ cat bin/mynetwork.sh 
#!/bin/bash
if [ $# -lt 1 ]
then
    echo "No Args Input..."
    exit ;
fi

export basepath=/etc/sysconfig/network-scripts
case $1 in
"good")
	# switch the NIC config and hosts file to the "good" LAN, then restart networking
	sudo cp $basepath/ifcfg-ens33.good $basepath/ifcfg-ens33
	# note: "sudo cat file > /etc/hosts" would fail, because the redirection runs as the normal user
	sudo cp /opt/data/host.good /etc/hosts
	sudo service network restart
;;
"bad")
	sudo cp $basepath/ifcfg-ens33.bad $basepath/ifcfg-ens33
	sudo cp /opt/data/host.bad /etc/hosts
	sudo service network restart
;;
*)
    echo "Input Args Error..."
;;
esac
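Usage follows the same pattern as the cluster script, assuming the ifcfg-ens33.good/.bad and host.good/.bad files shown below have been prepared beforehand:

mynetwork.sh good    # switch to the 192.168.1.x network
mynetwork.sh bad     # switch to the 192.168.2.x network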

ifconfig
[atguigu@hadoop102 ~]$ cat /opt/data/host.good 
192.168.1.32 hadoop100
192.168.1.33 hadoop101
192.168.1.34 hadoop102
192.168.1.35 hadoop103
192.168.1.36 hadoop104
192.168.1.37 hadoop105
[atguigu@hadoop102 ~]$ cat /opt/data/host.bad 
192.168.2.32 hadoop100
192.168.2.33 hadoop101
192.168.2.34 hadoop102
192.168.2.35 hadoop103
192.168.2.36 hadoop104
192.168.2.37 hadoop105