Setting up a Ganglia cluster and monitoring Hadoop CDH4.6
Preface
I have recently been looking into cloud monitoring tools, and Ganglia stands out: it can present data from the point of view of a whole cluster. Its installation, however, is somewhat involved and pulls in quite a few dependencies, so I am writing this article to share the process.
This article does not cover the underlying principles; please refer to other material for that.
Goal of this article: even if you have never touched Ganglia before, you should be able to follow the steps here and build your own Ganglia monitoring cluster.
@Author duangr
@Website http://my.oschina.net/duangr/blog/181585
1. Environment
| Host Name | IP | OS | Arch |
| --- | --- | --- | --- |
| master | 192.168.1.201 | CentOS 6.4 | x86_64 |
| slave1 | 192.168.1.202 | CentOS 6.4 | x86_64 |
| slave2 | 192.168.1.203 | CentOS 6.4 | x86_64 |
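All three hostnames need to resolve on every node. Assuming name resolution is handled through /etc/hosts rather than DNS, each host would carry entries like the following (a sketch):
192.168.1.201 master
192.168.1.202 slave1
192.168.1.203 slave2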
On all hosts, confirm the following (a sketch of the commands follows this list):
- iptables is stopped
- SELinux is disabled
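A minimal sketch of how to do both on CentOS 6 (the sed line assumes the stock /etc/selinux/config layout; setenforce 0 applies the change to the running system, the file edit makes it permanent):
- service iptables stop
- chkconfig iptables off
- setenforce 0
- sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config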
2. Deployment plan
| Item | Value |
| --- | --- |
| Monitoring master node | master |
| Monitored slave nodes | slave1, slave2 |
The Ganglia master (monitoring server) node needs:
- ganglia
- ganglia-web
- php
- apache
The monitored slave nodes need:
- ganglia
Installation path plan
| Item | Value |
| --- | --- |
| ganglia install path | /usr/local/ganglia |
| php install path | /usr/local/php |
| apache install path | /usr/local/apache2 |
| ganglia-web install path | /export/home/ganglia/ganglia-web-3.5.12 |
| rrds data path | /var/lib/ganglia/rrds |
3. Obtaining the source
The builds below use ganglia-3.6.0.tar.gz, ganglia-web-3.5.12.tar.gz, confuse-2.7.tar.gz, Python-2.7.3.tar.bz2, httpd-2.2.23.tar.gz and php-5.4.10.tar.gz; download each from its project site.
4. Prerequisites
4.1 Host environment check (all nodes)
- # rpm -q gcc glibc glibc-common rrdtool rrdtool-devel apr apr-devel expat expat-devel pcre pcre-devel dejavu-lgc-sans-mono-fonts dejavu-sans-mono-fonts
- gcc-4.4.7-3.el6.x86_64
- glibc-2.14.1-6.x86_64
- glibc-common-2.14.1-6.x86_64
- rrdtool-1.3.8-6.el6.x86_64
- rrdtool-devel-1.3.8-6.el6.x86_64
- apr-1.3.9-5.el6_2.x86_64
- apr-devel-1.3.9-5.el6_2.x86_64
- expat-2.0.1-11.el6_2.x86_64
- expat-devel-2.0.1-11.el6_2.x86_64
- pcre-7.8-6.el6.x86_64
- pcre-devel-7.8-6.el6.x86_64
- dejavu-lgc-sans-mono-fonts-2.30-2.el6.noarch
- dejavu-sans-mono-fonts-2.30-2.el6.noarch
If any of these are missing, install them first. The required RPMs can be downloaded from a CentOS mirror site (a yum alternative is sketched below).
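If the nodes can reach a yum repository, the same packages can also be installed in one go instead of fetching individual RPMs (a sketch; the package names are the ones checked above):
- yum -y install gcc glibc glibc-common rrdtool rrdtool-devel apr apr-devel expat expat-devel pcre pcre-devel dejavu-lgc-sans-mono-fonts dejavu-sans-mono-fonts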
4.2 dejavu
- rpm -ivh dejavu-lgc-sans-mono-fonts-2.30-2.el6.noarch.rpm
- rpm -ivh dejavu-sans-mono-fonts-2.30-2.el6.noarch.rpm
4.3 rrdtool
- rpm -ivh rrdtool-1.3.8-6.el6.x86_64.rpm
- rpm -ivh rrdtool-devel-1.3.8-6.el6.x86_64.rpm
4.4 apr
- rpm -ivh apr-1.3.9-5.el6_2.x86_64.rpm
- rpm -ivh apr-devel-1.3.9-5.el6_2.x86_64.rpm
4.5 libexpat
- rpm -ivh expat-2.0.1-11.el6_2.x86_64.rpm
- rpm -ivh expat-devel-2.0.1-11.el6_2.x86_64.rpm
4.6 libpcre
- rpm -ivh pcre-7.8-6.el6.x86_64.rpm
- rpm -ivh pcre-devel-7.8-6.el6.x86_64.rpm
4.7 confuse
confuse-2.7 http://www.nongnu.org/confuse/
- tar -zxf confuse-2.7.tar.gz
- cd confuse-2.7
- ./configure CFLAGS=-fPIC --disable-nls
- make && make install
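confuse installs into /usr/local/lib by default; a quick check that the library actually landed there (a sketch):
- ls /usr/local/lib/libconfuse*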
4.8 python
Python-2.7.3.tar.bz2 http://www.python.org/
- tar -jxf Python-2.7.3.tar.bz2
- ./configure --prefix=/usr/local --enable-shared
- make && make install
Configure the shared library path
- vi /etc/ld.so.conf
- -- add the following line
- /usr/local/lib
Apply the configuration
ldconfig
Check that it took effect
ldconfig -v |grep "libpython2.7.so"
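As a further sanity check that the newly built interpreter runs and was linked against the shared library (a sketch, assuming the default /usr/local/bin install location):
/usr/local/bin/python2.7 -V
ldd /usr/local/bin/python2.7 | grep libpython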
5. Build and install
5.1 Install ganglia (all nodes)
- # tar -zxf ganglia-3.6.0.tar.gz
- # cd ganglia-3.6.0
- # ./configure --prefix=/usr/local/ganglia --with-gmetad --enable-gexec --with-python=/usr/local
- Welcome to..
- ______ ___
- / ____/___ _____ ____ _/ (_)___ _
- / / __/ __ `/ __ \/ __ `/ / / __ `/
- / /_/ / /_/ / / / / /_/ / / / /_/ /
- \____/\__,_/_/ /_/\__, /_/_/\__,_/
- /____/
- Copyright (c) 2005 University of California, Berkeley
- Version: 3.6.0
- Library: Release 3.6.0 0:0:0
- Type "make" to compile.
- # make && make install
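A quick check that the daemons were installed under the configured prefix (a sketch; the --version flag is assumed to be available in this build):
- /usr/local/ganglia/sbin/gmond --version
- /usr/local/ganglia/sbin/gmetad --version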
5.2 Install ganglia-web (master node only)
- # tar -zxf ganglia-web-3.5.12.tar.gz -C /export/home/ganglia/
- # cd /export/home/ganglia/ganglia-web-3.5.12
- # cp conf_default.php conf.php
Edit conf.php (vi conf.php) and adjust the following entries:
- $conf['gweb_confdir'] = "/var/www/html/ganglia";
- $conf['gmetad_root'] = "/var/www/html";
vi header.php
- <?php
- session_start();
- ini_set('date.timezone','PRC'); // change the timezone to your local timezone
- if (isset($_GET['date_only'])) {
- $d = date("r");
- echo $d;
- exit(0);
- }
Create the temporary directories
- cd /var/www/html/ganglia-web-3.5.12/dwoo
- mkdir cache
- chmod 777 cache
- mkdir compiled
- chmod 777 compiled
5.3 Install apache (master node only)
- tar -zxf httpd-2.2.23.tar.gz
- cd httpd-2.2.23
- ./configure --prefix=/usr/local/apache2
- make && make install
5.4 Install php (master node only)
- tar -zxf php-5.4.10.tar.gz
- cd php-5.4.10
- ./configure --prefix=/usr/local/php --with-apxs2=/usr/local/apache2/bin/apxs --with-mysql=/usr
- make && make install
Note: in my actual installation I simply used
- yum -y install httpd php
With this installation method, the default document root is /var/www/html/. Create a PHP script named info.php there to verify the setup:
- <?php
- phpinfo();
- ?>
5.5 Publish ganglia-web through apache (master node only)
vi /usr/local/apache2/conf/httpd.conf
- ....
- Listen 80
- ....
- <IfModule dir_module>
- DirectoryIndex index.html index.php
- AddType application/x-httpd-php .php
- </IfModule>
- ....
- # append the following at the end of the file
- # ganglia
- Alias /ganglia "/var/www/html/ganglia"
- <Directory "/var/www/html/ganglia">
- AuthType Basic
- Options None
- AllowOverride None
- Order allow,deny
- Allow from all
- </Directory>
Start the httpd service
- /usr/local/apache2/bin/apachectl restart
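Once httpd is up, a quick check that the server and the /ganglia alias answer (a sketch; assumes curl is installed):
- curl -I http://192.168.1.201/
- curl -I http://192.168.1.201/ganglia/
Note that /ganglia will only render correctly once ganglia-web is in place (section 6.3) and gmetad is running.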
6. Configure Ganglia
6.1 Configure gmetad (master node only)
- cd ganglia-3.6.0
- cp ./gmetad/gmetad.init /etc/init.d/gmetad
- cp ./gmetad/gmetad.conf /usr/local/ganglia/etc/
Edit /etc/init.d/gmetad (vi /etc/init.d/gmetad) and change the following:
- GMETAD=/usr/local/ganglia/sbin/gmetad
Edit /usr/local/ganglia/etc/gmetad.conf and change the following:
- data_source "hadoop-cluster" 10 master slave1 slave2
- xml_port 8651
- interactive_port 8652
- rrd_rootdir "/var/lib/ganglia/rrds"
- case_sensitive_hostnames 0
Create the rrds data directory (if it does not already exist) and make it owned by the user gmetad runs as:
- mkdir -p /var/lib/ganglia/rrds
- chown -R nobody:nobody /var/lib/ganglia/rrds
Start the gmetad service and enable it at boot:
- service gmetad restart
- chkconfig --add gmetad
- chkconfig gmetad on
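To confirm that gmetad is running, is listening on the ports configured above, and is writing RRD files, a quick check (a sketch; netstat comes from the net-tools package):
- service gmetad status
- netstat -tlnp | grep gmetad
- ls /var/lib/ganglia/rrds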
6.2 Configure gmond (all nodes)
- cd ganglia-3.6.0
- cp ./gmond/gmond.init /etc/init.d/gmond
- ./gmond/gmond -t > /usr/local/ganglia/etc/gmond.conf
Edit /etc/init.d/gmond (vi /etc/init.d/gmond) and change the following:
- GMOND=/usr/local/ganglia/sbin/gmond
Edit /usr/local/ganglia/etc/gmond.conf and change the following:
- cluster {
- name = "hadoop-cluster"
- owner = "nobody"
- latlong = "unspecified"
- url = "unspecified"
- }
Copy the python modules into the ganglia installation directory
- mkdir /usr/local/ganglia/lib64/ganglia/python_modules
- cp ./gmond/python_modules/*/*.py /usr/local/ganglia/lib64/ganglia/python_modules
The ganglia-3.6.0 source tree ships configuration files for a number of default python modules; they take effect once deployed under /usr/local/ganglia/etc/conf.d.
If you do not care about these bundled monitoring scripts, you can skip the next step:
- cp ./gmond/python_modules/conf.d/*.pyconf /usr/local/ganglia/etc/conf.d
Start the gmond service and enable it at boot:
service gmond restart
chkconfig --add gmond
chkconfig gmond on
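To confirm that gmond is up and serving metrics on its default port 8649, a quick local check (a sketch):
service gmond status
netstat -tlnup | grep 8649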
6.3 Put ganglia-web under the /var/www/html/ directory (master node)
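The apache Alias above points at /var/www/html/ganglia, while ganglia-web was unpacked under /export/home/ganglia. A minimal sketch of the copy (the target directory name is assumed to match the Alias):
- cp -r /export/home/ganglia/ganglia-web-3.5.12 /var/www/html/ganglia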
7. Monitoring page
http://192.168.1.201/ganglia/
8. Integration with CDH4.6
CDH ships quite a few Hadoop configuration files. To integrate with Ganglia we need to edit hadoop-metrics2.properties; copy it from /etc/hadoop/conf.dist into the $hadoop_conf directory:
cp /etc/hadoop/conf.dist/hadoop-metrics2.properties /etc/hadoop/conf
Then add the following to hadoop-metrics2.properties:
#
# Below are for sending metrics to Ganglia
#
# for Ganglia 3.0 support
# *.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink30
#
# for Ganglia 3.1 support
*.sink.ganglia.class=org.apache.hadoop.metrics2.sink.ganglia.GangliaSink31
*.sink.ganglia.period=10
# default for supportsparse is false
*.sink.ganglia.supportsparse=true
*.sink.ganglia.slope=jvm.metrics.gcCount=zero,jvm.metrics.memHeapUsedM=both
*.sink.ganglia.dmax=jvm.metrics.threadsBlocked=70,jvm.metrics.memHeapUsedM=40
namenode.sink.ganglia.servers=239.2.11.71:8649
datanode.sink.ganglia.servers=239.2.11.71:8649
#jobtracker.sink.ganglia.servers=239.2.11.71:8649
#tasktracker.sink.ganglia.servers=239.2.11.71:8649
resourcemanager.sink.ganglia.servers=239.2.11.71:8649
nodemanager.sink.ganglia.servers=239.2.11.71:8649
maptask.sink.ganglia.servers=239.2.11.71:8649
reducetask.sink.ganglia.servers=239.2.11.71:8649
#dfs.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
# add
dfs.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
dfs.period=10
dfs.servers=239.2.11.71:8649
mapred.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
mapred.period=10
mapred.servers=239.2.11.71:8649
#jvm.class=org.apache.hadoop.metrics.spi.NullContextWithUpdateThread
#jvm.period=300
jvm.class=org.apache.hadoop.metrics.ganglia.GangliaContext31
jvm.period=10
jvm.servers=239.2.11.71:8649
The server address and port used above come from /usr/local/ganglia/etc/gmond.conf:
/* Feel free to specify as many udp_send_channels as you like. Gmond
used to only support having a single channel */
udp_send_channel {
#bind_hostname = yes # Highly recommended, soon to be default.
# This option tells gmond to use a source address
# that resolves to the machine's hostname. Without
# this, the metrics may appear to come from any
# interface and the DNS names associated with
# those IPs will be used to create the RRDs.
mcast_join = 239.2.11.71
port = 8649
ttl = 1
}
mcast_join = 239.2.11.71 is the server address: a multicast address that gmond uses to send its XML-format metric data. It is fixed and identical in every node's configuration, and the port, 8649, can be read from the same block. With this in place, restart the Hadoop services and the metrics will appear in the Ganglia web UI.
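After restarting the Hadoop daemons, the channel itself can also be verified by injecting a test metric with gmetric from any node; it should show up in the web UI within one reporting interval (a sketch, assuming gmetric was installed under /usr/local/ganglia/bin and picks up the gmond.conf shown above):
/usr/local/ganglia/bin/gmetric --name=test_metric --value=1 --type=int32 --units=count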