Single-Node Deployment of Hadoop 3.0.3 + HBase 2.1.0 on Linux CentOS 7.6
1. Preparation
One virtual machine: 192.168.48.128
Linux environment:
[root@server1 ~]# cat /etc/redhat-release
CentOS Linux release 7.6.1810 (Core)
Install the JDK: https://blog.csdn.net/qq_39680564/article/details/82768938
JDK version:
[root@server1 ~]# java -version
java version "1.8.0_74"
Java(TM) SE Runtime Environment (build 1.8.0_74-b02)
Java HotSpot(TM) 64-Bit Server VM (build 25.74-b02, mixed mode)
2. Configure the hostname mapping
2.1 Set the hostname
hostname server1
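Note that the hostname command above only takes effect for the current session. On CentOS 7 you can make the name persist across reboots with hostnamectl:
hostnamectl set-hostname server1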
2.2 Edit the hosts file
vim /etc/hosts
Append the following line at the end of the file:
192.168.48.128 server1
Ping the hostname server1 to confirm that it resolves.
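For example (the -c 3 flag just stops after three packets):
ping -c 3 server1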
3. Configure passwordless SSH login
3.1 Generate a key pair
ssh-keygen -t rsa -P ''
Press Enter at every prompt; the key pair is written to /root/.ssh/, with the public key in id_rsa.pub.
3.2 Create the authorized_keys file
touch /root/.ssh/authorized_keys
Append the contents of id_rsa.pub to authorized_keys.
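One way to do this from the shell, followed by a quick loopback test; the chmod matters because sshd ignores key files that are group- or world-writable:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys
ssh server1 date    # should run without a password prompt (you may need to accept the host key once)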
4. Download the packages
Both archives are downloaded to /opt in this guide.
Download Hadoop:
wget http://archive.apache.org/dist/hadoop/core/hadoop-3.0.3/hadoop-3.0.3.tar.gz
Download HBase:
wget https://archive.apache.org/dist/hbase/2.1.0/hbase-2.1.0-bin.tar.gz
Extract the archives:
tar -zxvf hadoop-3.0.3.tar.gz
tar -zxvf hbase-2.1.0-bin.tar.gz
Rename the extracted directories:
mv hadoop-3.0.3 hadoop
mv hbase-2.1.0 hbase
5. Configure environment variables
Set the Hadoop and HBase environment variables:
vim ~/.bashrc
Append the following:
# hadoop
export HADOOP_HOME=/opt/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
# hbase
export HBASE_HOME=/opt/hbase
export HBASE_CONF_DIR=/opt/hbase/conf
export PATH=$PATH:$HBASE_HOME/bin
Reload the environment:
source ~/.bashrc
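Both launchers should now be on the PATH and print their versions:
hadoop version
hbase version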
6. Configure Hadoop
6.1 Edit hadoop-env.sh
vim /opt/hadoop/etc/hadoop/hadoop-env.sh
Add the following (the JAVA_HOME path must match wherever your JDK is installed):
export JAVA_HOME=/opt/jdk-1.8
export HADOOP_HOME=/opt/hadoop
6.2 Edit core-site.xml
vim /opt/hadoop/etc/hadoop/core-site.xml
Add the following inside the <configuration> element (fs.default.name is the deprecated alias of fs.defaultFS; either name works):
<property>
<name>fs.default.name</name>
<value>hdfs://server1:9000</value>
</property>
6.3 Edit yarn-site.xml
vim /opt/hadoop/etc/hadoop/yarn-site.xml
Add the following properties inside the <configuration> element:
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>server1:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>server1:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>server1:8050</value>
</property>
6.4 Edit mapred-site.xml
vim /opt/hadoop/etc/hadoop/mapred-site.xml
Add the following (mapred.job.tracker is a legacy Hadoop 1 JobTracker property that YARN ignores, but it is harmless to keep):
<property>
<name>mapred.job.tracker</name>
<value>server1:54311</value>
</property>
<property>
<name>yarn.app.mapreduce.am.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.map.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
<property>
<name>mapreduce.reduce.env</name>
<value>HADOOP_MAPRED_HOME=$HADOOP_HOME</value>
</property>
6.5 Edit hdfs-site.xml
vim /opt/hadoop/etc/hadoop/hdfs-site.xml
Add the following inside <configuration> (on a single node you may prefer dfs.replication=1, since a second replica can never be placed and HDFS will only report under-replicated blocks):
<property>
<name>dfs.name.dir</name>
<value>/root/hadoop/dfs/name</value>
<description>Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently.</description>
</property>
<property>
<name>dfs.data.dir</name>
<value>/root/hadoop/dfs/data</value>
<description>Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks.</description>
</property>
<property>
<name>dfs.replication</name>
<value>2</value>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
<description>disable permission checks</description>
</property>
Remember to create both directories:
mkdir -p /root/hadoop/dfs/name
mkdir -p /root/hadoop/dfs/data
6.6 Edit start-dfs.sh and stop-dfs.sh
vim /opt/hadoop/sbin/start-dfs.sh
vim /opt/hadoop/sbin/stop-dfs.sh
Add the following at the top of both files:
HDFS_DATANODE_USER=root
HDFS_DATANODE_SECURE_USER=hdfs
HDFS_NAMENODE_USER=root
HDFS_SECONDARYNAMENODE_USER=root
6.7 Edit start-yarn.sh and stop-yarn.sh
vim /opt/hadoop/sbin/start-yarn.sh
vim /opt/hadoop/sbin/stop-yarn.sh
Add the following at the top of both files:
YARN_RESOURCEMANAGER_USER=root
HADOOP_SECURE_DN_USER=yarn
YARN_NODEMANAGER_USER=root
7. Start the Hadoop services
7.1 Format the NameNode
/opt/hadoop/bin/hadoop namenode -format
A successful format creates a current directory under /root/hadoop/dfs/name. (In Hadoop 3, hadoop namenode -format is deprecated in favor of hdfs namenode -format, though both still work.)
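You can double-check by listing the new directory; it should contain a VERSION file and an initial fsimage:
ls /root/hadoop/dfs/name/current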
7.2 Start HDFS
/opt/hadoop/sbin/start-dfs.sh
Check with jps: you should see three processes, NameNode, DataNode, and SecondaryNameNode.
Open the HDFS web UI: http://192.168.48.128:9870
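As an optional smoke test, create a directory in HDFS and list the root (the directory name here is just an example):
hdfs dfs -mkdir -p /tmp/smoke-test
hdfs dfs -ls /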
7.3 Start YARN
/opt/hadoop/sbin/start-yarn.sh
Check with jps again: two new processes, NodeManager and ResourceManager, should appear.
Open the YARN web UI: http://192.168.48.128:8088
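To confirm YARN can actually run a MapReduce job, you can submit the pi example from the examples jar shipped in the Hadoop 3.0.3 tarball (path assumed from the standard layout). If the job cannot find MRAppMaster, replace $HADOOP_HOME in the mapred-site.xml values above with the literal path /opt/hadoop:
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.3.jar pi 2 5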
8. Configure HBase
Edit hbase-site.xml:
vim /opt/hbase/conf/hbase-site.xml
Add the following:
<property>
<name>hbase.rootdir</name>
<value>hdfs://server1:9000/hbase</value>
</property>
<property>
<name>hbase.unsafe.stream.capability.enforce</name>
<value>false</value>
</property>
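HBase reads JAVA_HOME from the environment or from conf/hbase-env.sh; if it is set in neither place, start-hbase.sh will fail. In that case, also set it in /opt/hbase/conf/hbase-env.sh, using the same JDK path as in hadoop-env.sh above:
export JAVA_HOME=/opt/jdk-1.8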
9. Start the HBase service
/opt/hbase/bin/start-hbase.sh
Check with jps: one more process, HMaster, should appear.
Open the HBase web UI: http://192.168.48.128:16010
Create a table with the HBase shell
Basic HBase operations are covered here: https://blog.csdn.net/qq_39680564/article/details/89672165
[root@server1 bin]# hbase shell
HBase Shell
Use "help" to get list of supported commands.
Use "exit" to quit this interactive shell.
Version 2.1.0, re1673bb0bbfea21d6e5dba73e013b09b8b49b89b, Tue Jul 10 17:26:48 CST 2018
Took 0.0089 seconds
hbase(main):001:0> create 'table1', 'tab1_id', 'tab1_add', 'tab1_info'
Created table table1
Took 3.3246 seconds
=> Hbase::Table - table1
hbase(main):002:0> list
TABLE
table1
1 row(s)
Took 0.0865 seconds
=> ["table1"]
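To exercise the new table, a couple of basic shell commands; the row key and value below are purely illustrative, and tab1_id is one of the column families created above:
put 'table1', 'row1', 'tab1_id:id', '1'
scan 'table1'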
The new table shows up in the HBase web UI.
It also appears in HDFS under the HBase root directory.
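Assuming the default HBase 2.x layout under hbase.rootdir, the table's files live at /hbase/data/<namespace>/<table>, so for table1 in the default namespace:
hdfs dfs -ls /hbase/data/default/table1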