Hello, world!

  

🐒 Tools used in this post: VMware 16, Xftp 7

If you are not familiar with the command line, a CentOS 7 VM with a GUI is recommended.

I will demonstrate on a VM with a GUI.

Virtual Machine

A virtual machine is a complete computer system simulated in software, with the full functionality of a hardware system, running in a completely isolated environment. Anything that can be done on a physical machine can also be done in a virtual machine. When a virtual machine is created, part of the physical machine's disk and memory capacity is set aside as the VM's disk and memory. Each virtual machine has its own CMOS, hard disk, and operating system, and can be operated just like a physical machine.

[Make sure the server cluster installation and configuration are already complete!] You can refer to my earlier posts:

Creating a Linux VM with VMware (1): Setting Up Passwordless Login – Vim_飞鱼's blog on CSDN

Creating a Linux VM with VMware (2): Downloading and Installing the JDK and Configuring the Java Environment Variables – Vim_飞鱼's blog on CSDN

Creating a Linux VM with VMware (3): Installing and Configuring Hadoop and Building the Cluster – Vim_飞鱼's blog on CSDN

Preface


Please adapt the steps below to your own environment as needed.

My three CentOS 7 servers:
Master: master (192.168.149.101)
Slave: slave1 (192.168.149.102)
Slave: slave2 (192.168.149.103)

Installation and configuration are identical on every node. In practice, you usually finish installing and configuring on the master node, then copy the installation directory to the other nodes with the scp command.

Note: every step here is performed with root privileges, so select the root user when logging in.


ZooKeeper Fully Distributed Installation


Download the ZooKeeper package


ZooKeeper download:

Apache ZooKeeper: https://zookeeper.apache.org/
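If the master VM has Internet access, you can also fetch the tarball directly from the Apache archive instead of uploading it with Xftp. A minimal sketch, assuming the standard archive URL (confirm it on the download page):

cd /opt/
# download the 3.6.2 binary release (URL assumed; check the downloads page if it has moved)
wget https://archive.apache.org/dist/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz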

Extract the ZooKeeper package


First, make sure the network is configured, then use Xftp or a similar tool to upload apache-zookeeper-3.6.2-bin.tar.gz into the /opt/ directory. (You can also transfer it with a USB drive or by drag-and-drop.)
After the upload completes, run the following on the master host to extract ZooKeeper:

cd /opt/

tar -zxvf apache-zookeeper-3.6.2-bin.tar.gz

After the command succeeds, the files are unpacked under /opt. Note that the binary tarball extracts to apache-zookeeper-3.6.2-bin; the rest of this post assumes that directory has been renamed to zookeeper-3.6.2.
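A one-line sketch of that rename (assuming the extraction produced apache-zookeeper-3.6.2-bin):

# rename the extracted directory to match the /opt/zookeeper-3.6.2 paths used below
mv /opt/apache-zookeeper-3.6.2-bin /opt/zookeeper-3.6.2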

Note: you can use ls or similar commands to check that the files extracted correctly.


 

Configure the bashrc file (equivalent to profile)

#zookeeper config

export ZOOKEEPER_HOME=/opt/zookeeper-3.6.2

export PATH=$PATH:$ZOOKEEPER_HOME/bin

Do this on all three VMs.
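For reference, a minimal sketch of applying the change on each VM, mirroring the /etc/bashrc approach used in the HBase section later in this post:

vim /etc/bashrc        # append the two export lines above
source /etc/bashrc     # make the change take effect in the current shell
echo $ZOOKEEPER_HOME   # should print /opt/zookeeper-3.6.2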


 

Create the ZooKeeper data directory

First, use the ls -l command to check the file permissions, then change them:

Here -R means recursive, i.e. the change is applied to the directory and everything inside it, as mentioned in the previous post.

ls -l

sudo chmod -R 777 /opt/zookeeper-3.6.2

On each of the three VMs, run echo <id> > /opt/zookeeper-3.6.2/myid to create the myid file holding that node's ZooKeeper ID:

echo 0 > /opt/zookeeper-3.6.2/myid    # on master

echo 1 > /opt/zookeeper-3.6.2/myid    # on slave1

echo 2 > /opt/zookeeper-3.6.2/myid    # on slave2
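To double-check, print the file on the node you just wrote it on:

cat /opt/zookeeper-3.6.2/myid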

Edit the zoo.cfg configuration file

With ZooKeeper downloaded and extracted under /opt, go into zookeeper-3.6.2/conf, copy zoo_sample.cfg to zoo.cfg, and edit it:

cd /opt/zookeeper-3.6.2/conf

cp zoo_sample.cfg zoo.cfg

vim zoo.cfg

Here dataDir is where ZooKeeper stores its data, and each server.y=XXXX:2888:3888 line describes one ZooKeeper node: y is that node's ZooKeeper ID (the value written to its myid file) and XXXX is the server's IP address or hostname.
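Since the full file is not shown above, here is a minimal zoo.cfg sketch matching this post's three-node layout. The dataDir value is an assumption (the directory where the myid files were created), and the timing settings are the zoo_sample.cfg defaults; adjust them to your environment:

# basic timing settings (zoo_sample.cfg defaults)
tickTime=2000
initLimit=10
syncLimit=5
# where ZooKeeper keeps its data; must contain the myid file created above
dataDir=/opt/zookeeper-3.6.2
# port clients connect on
clientPort=2181
# server.<id>=<host>:<quorum port>:<leader election port>
server.0=master:2888:3888
server.1=slave1:2888:3888
server.2=slave2:2888:3888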

Copy the ZooKeeper directory above to the other two servers:

scp -r /opt/zookeeper-3.6.2 root@slave1:/opt

scp -r /opt/zookeeper-3.6.2 root@slave2:/opt
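One caveat: the slaves only receive /opt/zookeeper-3.6.2 through this copy, and scp -r brings master's myid (containing 0) along with everything else. After copying, write the correct ID on each slave, for example over the passwordless SSH set up in the first post (a sketch):

ssh root@slave1 "echo 1 > /opt/zookeeper-3.6.2/myid"
ssh root@slave2 "echo 2 > /opt/zookeeper-3.6.2/myid"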

 

Start ZooKeeper

Run the ZooKeeper start command on each server.

Server 1: master

[root@master bin]# cd /opt/zookeeper-3.6.2/bin/

[root@master bin]# ls
README.txt    zkCli.sh   zkServer.cmd            zkSnapShotToolkit.cmd  zkTxnLogToolkit.sh
zkCleanup.sh  zkEnv.cmd  zkServer-initialize.sh  zkSnapShotToolkit.sh
zkCli.cmd     zkEnv.sh   zkServer.sh             zkTxnLogToolkit.cmd
[root@master bin]# zkServer.sh start

ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.6.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Server 2: slave1

[root@slave1 ~]# cd /opt/zookeeper-3.6.2/bin/

[root@slave1 bin]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.6.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

Server 3: slave2

[root@slave2 ~]# cd /opt/zookeeper-3.6.2/bin/

[root@slave2 bin]# zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.6.2/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

We can use zkServer.sh status to check the cluster state:

master

[root@master bin]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.6.2/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

[root@master bin]# 

 slave1

[root@slave1 bin]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.6.2/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader

[root@slave1 bin]# 

 slave2

[root@slave2 bin]# zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper-3.6.2/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: follower

[root@slave2 bin]# 

The leader and follower roles shown above are assigned automatically by the cluster through leader election.
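As an optional extra check, you can connect to the ensemble with the zkCli.sh client that ships in the same bin directory (hostnames assume this post's setup):

zkCli.sh -server master:2181,slave1:2181,slave2:2181
# then, at the CLI prompt:
ls /        # should list at least [zookeeper]
quit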

At this point, the ZooKeeper cluster installation is complete.

Did you get all that?

 

 


 

HBase Fully Distributed Installation


Download the HBase package


Apache HBase – Apache HBase Downloads: https://hbase.apache.org/downloads.html
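As with ZooKeeper, the tarball can also be fetched directly on the master VM if it has Internet access. A sketch assuming the standard Apache archive URL (confirm it on the downloads page):

cd /opt/
# the binary release is named hbase-2.3.3-bin.tar.gz; adjust the tar command below if you keep that name
wget https://archive.apache.org/dist/hbase/2.3.3/hbase-2.3.3-bin.tar.gz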


 

Upload to the master VM and extract HBase


cd /opt/

tar -zxvf hbase-2.3.3.tar.gz

# change permissions
sudo chmod -R 777 /opt/hbase-2.3.3

Configure environment variables


[root@master ~]# vim /etc/bashrc


#HBase
export HBASE_HOME=/opt/hbase-2.3.3
export PATH=$PATH:$HBASE_HOME/bin


# apply the configuration

[root@master ~]# source /etc/bashrc

Do this on all three VMs.

Here we can run hbase -version to check that the environment variables are set correctly:

[root@master ~]# hbase -version
java version "1.8.0_261"
Java(TM) SE Runtime Environment (build 1.8.0_261-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.261-b12, mixed mode)

[root@master ~]# 

 

Configure the HBase files


Edit the hbase-env.sh file in the hbase-2.3.3/conf directory:

[root@master ~]# cd /opt/hbase-2.3.3/conf
[root@master conf]# vim hbase-env.sh

#!/usr/bin/env bash



# Where log files are stored.  $HBASE_HOME/logs by default.
# export HBASE_LOG_DIR=${HBASE_HOME}/logs

# Enable remote JDWP debugging of major HBase processes. Meant for Core Developers 
# export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8070"
# export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8071"
# export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8072"
# export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8073"
# export HBASE_REST_OPTS="$HBASE_REST_OPTS -Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=n,address=8074"

# A string representing this instance of hbase. $USER by default.
# export HBASE_IDENT_STRING=$USER

# The scheduling priority for daemon processes.  See 'man nice'.
# export HBASE_NICENESS=10

# The directory where pid files are stored. /tmp by default.
export HBASE_PID_DIR=/opt/hadoop/hadoop/pids

# Seconds to sleep between slave commands.  Unset by default.  This
# can be useful in large clusters, where, e.g., slave rsyncs can
# otherwise arrive faster than the master can service them.
# export HBASE_SLAVE_SLEEP=0.1

# Tell HBase whether it should manage it's own instance of ZooKeeper or not.
# export HBASE_MANAGES_ZK=true

# The default log rolling policy is RFA, where the log file is rolled as per the size defined for the 
# RFA appender. Please refer to the log4j.properties file to see more details on this appender.
# In case one needs to do log rolling on a date change, one should set the environment property
# HBASE_ROOT_LOGGER to "<DESIRED_LOG LEVEL>,DRFA".
# For example:
# HBASE_ROOT_LOGGER=INFO,DRFA
# The reason for changing default to RFA is to avoid the boundary case of filling out disk space as 
# DRFA doesn't put any cap on the log size. Please refer to HBase-5655 for more context.

# Tell HBase whether it should include Hadoop's lib when start up,
# the default value is false,means that includes Hadoop's lib.
export HBASE_DISABLE_HADOOP_CLASSPATH_LOOKUP="true"

# Override text processing tools for use by these launch scripts.
# export GREP="${GREP-grep}"
# export SED="${SED-sed}"
export JAVA_HOME=/opt/jdk1.8.0_261 
export HBASE_HOME=/opt/hbase-2.3.3
export HBASE_MANAGES_ZK=false

Here HBASE_MANAGES_ZK=false means HBase uses the ZooKeeper cluster we installed ourselves rather than the ZooKeeper bundled with HBase.

Edit the hbase-site.xml file in the hbase-2.3.3/conf directory:


[root@master ~]# vim hbase-site.xml


<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
/*
 * Licensed to the Apache Software Foundation (ASF) under one
 * or more contributor license agreements.  See the NOTICE file
 * distributed with this work for additional information
 * regarding copyright ownership.  The ASF licenses this file
 * to you under the Apache License, Version 2.0 (the
 * "License"); you may not use this file except in compliance
 * with the License.  You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */
-->
<configuration>
<property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
</property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.tmp.dir</name>
    <value>./tmp</value>
  </property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
</property>
<property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
</property>

  <property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
  </property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/opt/zookeeper-3.6.2</value>
  </property>
<property>
    <name>zookeeper.session.timeout</name>
    <value>300000</value>   <!-- default: 180000; ZooKeeper session timeout, in milliseconds -->
</property>
  <property>
    <name>hbase.master.maxclockskew</name>
    <value>30000</value>
  </property>

</configuration>
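Because hbase.rootdir points at hdfs://master:9000/hbase, the Hadoop cluster from the earlier posts must already be running before HBase is started. A quick sanity check on master:

jps                      # NameNode should appear here (DataNode on the slaves)
hdfs dfsadmin -report    # should list the live DataNodes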

Add the IPs or hostnames of the HBase cluster servers to the regionservers file:

[root@master ~]# vim regionservers

master
slave1
slave2

Copy the HBase files above to the other two servers:

scp -r /opt/hbase-2.3.3 root@slave1:/opt

scp -r /opt/hbase-2.3.3 root@slave2:/opt

Once that finishes, the configuration is in place on all nodes.


 

Start HBase


After configuring HBase and copying all of the files above to the other two servers, start the HBase cluster with the start-hbase.sh command:

[root@master ~]# cd /opt/hbase-2.3.3/bin/

[root@master bin]# ls
considerAsDead.sh     hbase-config.cmd        master-backup.sh          start-hbase.sh
draining_servers.rb   hbase-config.sh         region_mover.rb           stop-hbase.cmd
get-active-master.rb  hbase-daemon.sh         regionservers.sh          stop-hbase.sh
graceful_stop.sh      hbase-daemons.sh        region_status.rb          test
hbase                 hbase-jruby             replication               tmp
hbase-cleanup.sh      hirb.rb                 rolling-restart.sh        zookeepers.sh
hbase.cmd             local-master-backup.sh  shutdown_regionserver.rb
hbase-common.sh       local-regionservers.sh  start-hbase.cmd

[root@master bin]# start-hbase.sh
running master, logging to /opt/hbase-2.3.3/logs/hbase-root-master-master.out
slave1: running regionserver, logging to /opt/hbase-2.3.3/logs/hbase-root-regionserver-slave1.out
slave2: running regionserver, logging to /opt/hbase-2.3.3/logs/hbase-root-regionserver-slave2.out
master: running regionserver, logging to /opt/hbase-2.3.3/logs/hbase-root-regionserver-master.out

[root@master bin]# 


Whichever server you run the command above on becomes the HBase master node. Use the jps command to check what is running:

[root@master bin]# jps
7281 ResourceManager
8450 HMaster
6965 SecondaryNameNode
6535 NameNode
8619 HRegionServer
7693 QuorumPeerMain
8943 Jps

 slave1

[root@slave1 bin]# jps
2180 QuorumPeerMain
2509 Jps
1919 DataNode
2351 HRegionServer

[root@slave1 bin]# 

slave2

[root@slave2 ~]# jps
3441 QuorumPeerMain
3875 HRegionServer
4040 Jps
3165 DataNode

[root@slave2 ~]# 

We can see that server 1 is running the HMaster and HRegionServer processes, while servers 2 and 3 are each running an HRegionServer process.

And with that, we're done! The problem that had troubled me for a month is finally solved!

🙇‍

Of course, we can also check the HBase cluster status through the web UI at http://IP:16010.
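With the addresses used in this post that would be http://192.168.149.101:16010 on the master; a quick command-line probe (purely illustrative):

curl -I http://192.168.149.101:16010    # any HTTP response here means the HMaster web UI is reachable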

 

 

Keep it up (ง •_•)ง
