1: First, download the Hadoop installation package

Download address: http://mirrors.cnnic.cn/apache/Hadoop/common/hadoop-2.4.1/


2: Install the virtual machines and configure the network

For the network configuration, see my earlier post: http://fuwenchao.blog.51cto.com/6008712/1398629

I have three virtual machines:

2.1: hd0

[root@localhost ~]# hostname
localhost.hadoop0
[root@localhost ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:5D:31:1C  
          inet addr:192.168.1.205  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fe5d:311c/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:484 errors:0 dropped:0 overruns:0 frame:0
          TX packets:99 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000 
          RX bytes:48893 (47.7 KiB)  TX bytes:8511 (8.3 KiB)

lo        Link encap:Local Loopback  
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:27 errors:0 dropped:0 overruns:0 frame:0
          TX packets:27 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0 
          RX bytes:2744 (2.6 KiB)  TX bytes:2744 (2.6 KiB)

[root@localhost ~]# uname -a
Linux localhost.hadoop0 2.6.32-358.el6.x86_64 #1 SMP Fri Feb 22 00:31:26 UTC 2013 x86_64 x86_64 x86_64 GNU/Linux
[root@localhost ~]# 

2.2: hd1


localhost.hadoop1                           192.168.1.206



2.3: hd2

localhost.hadoop2                           192.168.1.207


3: Configure the hostname mappings on each host

Add the following lines to /etc/hosts on every host (a sketch for appending them follows the list):

192.168.1.205 localhost.hadoop0  hd0
192.168.1.206 localhost.hadoop1  hd1
192.168.1.207 localhost.hadoop2  hd2
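For example, a minimal sketch for appending them (run as root on each of the three hosts; it assumes the entries are not already present):

cat >> /etc/hosts <<'EOF'
192.168.1.205 localhost.hadoop0  hd0
192.168.1.206 localhost.hadoop1  hd1
192.168.1.207 localhost.hadoop2  hd2
EOF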



4: Passwordless SSH from hd0 to hd1 and hd2


SSH is usually installed by default; if it is not, see my earlier post: http://fuwenchao.blog.51cto.com/6008712/1437628

4.1: Set up passwordless login to localhost

Perform the following steps on hd0 (a consolidated command sketch follows the list):

1. Enter the .ssh directory.

2. Run ssh-keygen -t rsa and press Enter at every prompt (this generates the key pair).

3. Append id_rsa.pub to the authorized keys file (cat id_rsa.pub >> authorized_keys).

4. Restart the SSH service so the change takes effect: service sshd restart (the service is called sshd on RedHat and ssh on Ubuntu).

At this point, ssh localhost works without a password.

     [Note]: the steps above must be performed on every machine.
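Putting 4.1 together, a minimal command sketch for one machine (assuming no key pair exists yet) looks like this:

cd ~/.ssh                              # run mkdir -p ~/.ssh first if the directory does not exist yet
ssh-keygen -t rsa                      # press Enter at every prompt to accept the defaults
cat id_rsa.pub >> authorized_keys      # authorize the local public key
chmod 600 authorized_keys              # sshd rejects key files with loose permissions when StrictModes is on
service sshd restart                   # sshd on RedHat, ssh on Ubuntu
ssh localhost hostname                 # should run without asking for a password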

4.2: Set up passwordless remote login

To set up passwordless login to hd1, go into hd0's .ssh directory and run:
[root@localhost ~]# cd .ssh
[root@localhost .ssh]# ls 
authorized_keys  id_rsa  id_rsa.pub  known_hosts
[root@localhost .ssh]# scp authorized_keys root@hd1:~/.ssh/authorized_keys_from_hd0
Then go into hd1's .ssh directory and run:
[root@localhost .ssh]# ls
authorized_keys  authorized_keys_from_hd0  id_rsa  id_rsa.pub  known_hosts
[root@localhost .ssh]# cat authorized_keys_from_hd0 >>authorized_keys
[root@localhost .ssh]# 

Passwordless login from hd0 to hd1 is now set up; hd0 to hd2 works the same way.
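As a quick sanity check (my own addition, not part of the original steps), run a remote command; it should complete without any password prompt:

ssh hd1 hostname      # expected output: localhost.hadoop1
ssh hd2 hostname      # expected output: localhost.hadoop2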

5: Install the JDK

See my earlier post: http://fuwenchao.blog.51cto.com/6008712/1332277
Install it on every machine, and remember to run java -version afterwards to check that it installed correctly.
My installation path is /hd/hdinstall/jdk1.7.
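If that setup does not already export JAVA_HOME, a minimal sketch of the /etc/profile lines (an assumption on my part, matching the install path above) would be:

export JAVA_HOME=/hd/hdinstall/jdk1.7
export PATH=$JAVA_HOME/bin:$PATH
# reload and verify:
# . /etc/profile && java -version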

6: Disable the firewall on each server

Run the following on every machine:
[root@localhost jdk1.7]# service iptables stop
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@localhost jdk1.7]# service iptables status
iptables: Firewall is not running.
[root@localhost jdk1.7]# chkconfig iptables off
[root@localhost jdk1.7]# 

Check the iptables status:
[root@localhost jdk1.7]# chkconfig --list iptables
iptables       	0:off	1:off	2:off	3:off	4:off	5:off	6:off
[root@localhost jdk1.7]# 
Disable it from starting at boot:
chkconfig iptables off

7: Upload the downloaded Hadoop package to the servers

Upload a copy to every machine:
[root@localhost Hadoop_install_need]# pwd
/root/Hadoop_install_need
[root@localhost Hadoop_install_need]# ls
hadoop-2.4.1.tar.gz  JDK
[root@localhost Hadoop_install_need]# 

8: Extract the archive

Run this on every machine:
tar -zxvf hadoop-2.4.1.tar.gz 

When I ran it on the third machine I hit a really silly problem: it kept failing with errors like these
tar: hadoop-2.4.1/share/doc/hadoop/hadoop-streaming/images/icon_success_sml.gif: Cannot open: No such file or directory
hadoop-2.4.1/share/doc/hadoop/hadoop-streaming/images/apache-maven-project-2.png
tar: hadoop-2.4.1/share/doc/hadoop/hadoop-streaming/images/apache-maven-project-2.png: Cannot open: No such file or directory

I puzzled over it for a long time: the file and the command were identical, so why did it work on the first two machines but fail on the third? Having no idea, I tried splitting the extraction into two steps:
gunzip aa.tar.gz
tar xvf aa.tar
--
[root@localhost Hadoop_install_need]# gunzip hadoop-2.4.1.tar.gz 

gzip: hadoop-2.4.1.tar: No space left on device

So that was it: the disk was out of space. Check with df:
[root@localhost Hadoop_install_need]# df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/sda2             3.9G  3.9G     0 100% /
tmpfs                 495M   72K  495M   1% /dev/shm
/dev/sda1             985M   40M  895M   5% /boot
/dev/sda5             8.1G  147M  7.5G   2% /hadoop

Ugh. I hadn't planned the disk space properly, and now the root partition is full. What now?!
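Looking back, one possible workaround (a sketch I did not actually use; I reinstalled instead, see below) would have been to extract onto the /hadoop partition, which df shows still has about 7.5 GB free:

mv hadoop-2.4.1.tar.gz /hadoop/                      # move the tarball off the full root partition
tar -zxvf /hadoop/hadoop-2.4.1.tar.gz -C /hadoop/    # -C extracts into the target directory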




--- To fix the problem above (after a small detour), I reinstalled all three virtual machines. The configuration is the same as before, except this time I chose the minimal install everywhere.

Current status: the JDK is installed and Hadoop is extracted under /hd/hdinstall. Let's continue.

9: Disable the firewall

service iptables stop

10: Install Hadoop 2.4.1

Everything above was groundwork; now we get to the Hadoop installation itself.

First, extract the archive:

[root@hd1 hdsoft]# tar -zxvf hadoop-2.4.1.tar.gz 

Then copy it into the install directory:

[root@hd1 hdinstall]# pwd
/hd/hdinstall
[root@hd1 hdinstall]# ll
total 8
drwxr-xr-x. 10 67974 users 4096 Jul 21 04:45 hadoop-2.4.1
drwxr-xr-x.  8 uucp    143 4096 May  8 04:50 jdk1.7
[root@hd1 hdinstall]# 

Before configuring, create the following directories on hd0's local filesystem (a mkdir sketch follows the list):

/root/dfs/name

/root/dfs/data

/root/tmp
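A one-line sketch for creating them (on hd1 and hd2 you will likely want /root/dfs/data and /root/tmp as well, since the same configuration files are copied there later):

mkdir -p /root/dfs/name /root/dfs/data /root/tmp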


Modify the configuration files -- seven in all:

/hd/hdinstall/hadoop-2.4.1/etc/hadoop/hadoop-env.sh

/hd/hdinstall/hadoop-2.4.1/etc/hadoop/yarn-env.sh

/hd/hdinstall/hadoop-2.4.1/etc/hadoop/slaves

/hd/hdinstall/hadoop-2.4.1/etc/hadoop/core-site.xml

/hd/hdinstall/hadoop-2.4.1/etc/hadoop/hdfs-site.xml

/hd/hdinstall/hadoop-2.4.1/etc/hadoop/mapred-site.xml

/hd/hdinstall/hadoop-2.4.1/etc/hadoop/yarn-site.xml

Some of these files do not exist by default; create them by copying the corresponding .template file.
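In the 2.4.1 distribution, mapred-site.xml is the one usually missing; for example:

cd /hd/hdinstall/hadoop-2.4.1/etc/hadoop
cp mapred-site.xml.template mapred-site.xml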

Change them as follows.

Config file 1: hadoop-env.sh

Set the JAVA_HOME value (export JAVA_HOME=/hd/hdinstall/jdk1.7)

Config file 2: yarn-env.sh

Set the JAVA_HOME value (export JAVA_HOME=/hd/hdinstall/jdk1.7)
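Either edit the existing JAVA_HOME line in each file, or, as a quick sketch, append an overriding export to the end of both (the last assignment wins when the script is sourced):

cd /hd/hdinstall/hadoop-2.4.1/etc/hadoop
echo 'export JAVA_HOME=/hd/hdinstall/jdk1.7' >> hadoop-env.sh
echo 'export JAVA_HOME=/hd/hdinstall/jdk1.7' >> yarn-env.sh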

Config file 3: slaves (this file lists all the slave nodes)

[root@hd1 hadoop]#  more slaves 
hd1
hd2
[root@hd1 hadoop]# 
Config file 4: core-site.xml
[root@hd1 hadoop]# more core-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
 <property>

                                <name>fs.defaultFS</name>

                                <value>hdfs://hd0:9000</value>

                </property>

       <property>

                                <name>io.file.buffer.size</name>

                                <value>131072</value>

                </property>

       <property>

                                <name>hadoop.tmp.dir</name>

                                <value>file:/root/tmp</value>

                                <description>Abase for other temporary directories.</description>

                </property>

        <property>

               <name>hadoop.proxyuser.hduser.hosts</name>

               <value>*</value>

       </property>

                 <property>

               <name>hadoop.proxyuser.hduser.groups</name>

               <value>*</value>

       </property>
</configuration>
[root@hd1 hadoop]# 

Config file 5: hdfs-site.xml
[root@hd1 hadoop]# more hdfs-site.xml 
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

 <property>

                <name>dfs.namenode.secondary.http-address</name>

               <value>hd0:9001</value>

        </property>

         <property>

                  <name>dfs.namenode.name.dir</name>

                 <value>file:/root/dfs/name</value>

            </property>

           <property>

                    <name>dfs.datanode.data.dir</name>

                    <value>file:/root/dfs/data</value>

            </property>

            <property>

                     <name>dfs.replication</name>

                     <value>3</value>

             </property>

             <property>

                     <name>dfs.webhdfs.enabled</name>

                     <value>true</value>

         </property>
</configuration>
[root@hd1 hadoop]# 

Config file 6: mapred-site.xml
[root@hd1 hadoop]# more mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>

                                <name>mapreduce.framework.name</name>

                                <value>yarn</value>

                </property>

                <property>

                                <name>mapreduce.jobhistory.address</name>

                                <value>hd0:10020</value>

                </property>

                <property>

               <name>mapreduce.jobhistory.webapp.address</name>

               <value>hd0:19888</value>

       </property>

</configuration>
[root@hd1 hadoop]# 

Config file 7: yarn-site.xml
[root@hd1 hadoop]# more yarn-site.xml 
<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->
<property>

               <name>yarn.nodemanager.aux-services</name>

               <value>mapreduce_shuffle</value>

        </property>

                 <property>

               <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>

               <value>org.apache.hadoop.mapred.ShuffleHandler</value>

        </property>

        <property>

               <name>yarn.resourcemanager.address</name>

               <value>hd0:8032</value>

       </property>

                <property>

               <name>yarn.resourcemanager.scheduler.address</name>

               <value> hd0:8030</value>

               </property>

               <property>

                       <name>yarn.resourcemanager.resource-tracker.address</name>

                        <value>hd0:8031</value>

               </property>

               <property>

                       <name>yarn.resourcemanager.admin.address</name>

                        <value>hd0:8033</value>

               </property>

                <property>

               <name>yarn.resourcemanager.webapp.address</name>

               <value>hd0:8088</value>

       </property>

</configuration>
[root@hd1 hadoop]# 

Configure hd1 and hd2 the same way; you can simply copy the configuration files above over to them (a scp sketch is shown below).
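A sketch of that copy (assuming hd1 and hd2 use the same /hd/hdinstall layout as hd0):

scp /hd/hdinstall/hadoop-2.4.1/etc/hadoop/* root@hd1:/hd/hdinstall/hadoop-2.4.1/etc/hadoop/
scp /hd/hdinstall/hadoop-2.4.1/etc/hadoop/* root@hd2:/hd/hdinstall/hadoop-2.4.1/etc/hadoop/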

Configuration done; ready to set sail.

Go into the bin directory:
[root@hd0 bin]# pwd
/hd/hdinstall/hadoop-2.4.1/bin
[root@hd0 bin]# 

Format the NameNode:
./hdfs namenode -format

Start HDFS:
../sbin/start-dfs.sh

But this time it errored out with the following:
[root@hd0 bin]# ../sbin/start-dfs.sh
14/07/21 01:45:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /hd/hdinstall/hadoop-2.4.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
hd0]
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
sed: -e expression #1, char 6: unknown option to `s'
VM: ssh: Could not resolve hostname VM: Name or service not known
Java: ssh: Could not resolve hostname Java: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
You: ssh: Could not resolve hostname You: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
disabled: ssh: Could not resolve hostname disabled: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
try: ssh: Could not resolve hostname try: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
will: ssh: Could not resolve hostname will: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
guard: ssh: Could not resolve hostname guard: Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
The: ssh: Could not resolve hostname The: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
now.: ssh: Could not resolve hostname now.: Name or service not known
recommended: ssh: Could not resolve hostname recommended: Name or service not known
It's: ssh: Could not resolve hostname It's: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
-c: Unknown cipher type 'cd'
fix: ssh: Could not resolve hostname fix: Name or service not known
or: ssh: Could not resolve hostname or: Name or service not known
to: ssh: connect to host to port 22: Connection refused
that: ssh: Could not resolve hostname that: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
it: ssh: Could not resolve hostname it: No address associated with hostname
'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
highly: ssh: Could not resolve hostname highly: Name or service not known
The authenticity of host 'hd0 (192.168.1.205)' can't be established.
RSA key fingerprint is c0:a4:cb:1b:91:30:0f:33:82:92:9a:e9:ac:1d:ef:11.
Are you sure you want to continue connecting (yes/no)? '-z: ssh: Could not resolve hostname '-z: Name or service not known
<libfile>',: ssh: Could not resolve hostname <libfile>',: Name or service not known
link: ssh: Could not resolve hostname link: No address associated with hostname
you: ssh: Could not resolve hostname you: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
^Chd0: Host key verification failed.
hd1: starting datanode, logging to /hd/hdinstall/hadoop-2.4.1/logs/hadoop-root-datanode-hd1.out
hd2: starting datanode, logging to /hd/hdinstall/hadoop-2.4.1/logs/hadoop-root-datanode-hd2.out
Starting secondary namenodes [Java HotSpot(TM) 64-Bit Server VM warning: You have loaded library /hd/hdinstall/hadoop-2.4.1/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c <libfile>', or link it with '-z noexecstack'.
hd0]
sed: -e expression #1, char 6: unknown option to `s'
Java: ssh: Could not resolve hostname Java: Name or service not known
warning:: ssh: Could not resolve hostname warning:: Name or service not known
You: ssh: Could not resolve hostname You: Name or service not known
Server: ssh: Could not resolve hostname Server: Name or service not known
might: ssh: Could not resolve hostname might: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
64-Bit: ssh: Could not resolve hostname 64-Bit: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
disabled: ssh: Could not resolve hostname disabled: Name or service not known
have: ssh: Could not resolve hostname have: Name or service not known
HotSpot(TM): ssh: Could not resolve hostname HotSpot(TM): Name or service not known
which: ssh: Could not resolve hostname which: Name or service not known
guard.: ssh: Could not resolve hostname guard.: Name or service not known
will: ssh: Could not resolve hostname will: Name or service not known
loaded: ssh: Could not resolve hostname loaded: Name or service not known
guard: ssh: Could not resolve hostname guard: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
stack: ssh: Could not resolve hostname stack: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
recommended: ssh: Could not resolve hostname recommended: Name or service not known
now.: ssh: Could not resolve hostname now.: Name or service not known
The: ssh: Could not resolve hostname The: Name or service not known
VM: ssh: Could not resolve hostname VM: Name or service not known
you: ssh: Could not resolve hostname you: Name or service not known
to: ssh: connect to host to port 22: Connection refused
-c: Unknown cipher type 'cd'
try: ssh: Could not resolve hostname try: Name or service not known
It's: ssh: Could not resolve hostname It's: Name or service not known
highly: ssh: Could not resolve hostname highly: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
library: ssh: Could not resolve hostname library: Name or service not known
fix: ssh: Could not resolve hostname fix: Name or service not known
The authenticity of host 'hd0 (192.168.1.205)' can't be established.
RSA key fingerprint is c0:a4:cb:1b:91:30:0f:33:82:92:9a:e9:ac:1d:ef:11.
Are you sure you want to continue connecting (yes/no)? it: ssh: Could not resolve hostname it: No address associated with hostname
that: ssh: Could not resolve hostname that: Name or service not known
or: ssh: Could not resolve hostname or: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
the: ssh: Could not resolve hostname the: Name or service not known
with: ssh: Could not resolve hostname with: Name or service not known
<libfile>',: ssh: Could not resolve hostname <libfile>',: Name or service not known
link: ssh: Could not resolve hostname link: No address associated with hostname
'-z: ssh: Could not resolve hostname '-z: Name or service not known
'execstack: ssh: Could not resolve hostname 'execstack: Name or service not known
noexecstack'.: ssh: Could not resolve hostname noexecstack'.: Name or service not known
^Chd0: Host key verification failed.
14/07/21 01:52:15 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

With Google blocked, it took a while of searching on Baidu to sort this out: add the following lines to /etc/profile
export HADOOP_PREFIX=/hd/hdinstall/hadoop-2.4.1/
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_PREFIX}/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_PREFIX/lib"
Make it take effect:
. /etc/profile

Restart HDFS:

[root@hd0 sbin]# ./start-dfs.sh 
14/07/21 05:01:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hd0]
The authenticity of host 'hd0 (192.168.1.205)' can't be established.
RSA key fingerprint is c0:a4:cb:1b:91:30:0f:33:82:92:9a:e9:ac:1d:ef:11.
Are you sure you want to continue connecting (yes/no)? yes
hd0: Warning: Permanently added 'hd0,192.168.1.205' (RSA) to the list of known hosts.
hd0: starting namenode, logging to /hd/hdinstall/hadoop-2.4.1/logs/hadoop-root-namenode-hd0.out
hd1: datanode running as process 1347. Stop it first.
hd2: datanode running as process 1342. Stop it first.
Starting secondary namenodes [hd0]
hd0: starting secondarynamenode, logging to /hd/hdinstall/hadoop-2.4.1/logs/hadoop-root-secondarynamenode-hd0.out
14/07/21 05:02:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hd0 sbin]# 

Success.

At this point, the processes running on hd0 are: NameNode and SecondaryNameNode.

The process running on hd1 and hd2 is: DataNode.

hd0:

[root@hd0 sbin]# ps -ef|grep name
root      2076     1 12 05:01 ?        00:00:29 /hd/hdinstall/jdk1.7/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hd/hdinstall/hadoop-2.4.1/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hd/hdinstall/hadoop-2.4.1 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/hd/hdinstall/hadoop-2.4.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hd/hdinstall/hadoop-2.4.1/logs -Dhadoop.log.file=hadoop-root-namenode-hd0.log -Dhadoop.home.dir=/hd/hdinstall/hadoop-2.4.1 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hd/hdinstall/hadoop-2.4.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
root      2210     1 10 05:01 ?        00:00:23 /hd/hdinstall/jdk1.7/bin/java -Dproc_secondarynamenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hd/hdinstall/hadoop-2.4.1/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hd/hdinstall/hadoop-2.4.1 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/hd/hdinstall/hadoop-2.4.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hd/hdinstall/hadoop-2.4.1/logs -Dhadoop.log.file=hadoop-root-secondarynamenode-hd0.log -Dhadoop.home.dir=/hd/hdinstall/hadoop-2.4.1 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hd/hdinstall/hadoop-2.4.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.SecondaryNameNode
root      2349  1179  0 05:05 pts/1    00:00:00 grep name
hd1:

[root@hd1 name]# ps -ef|grep data
root      1347     1  4 04:45 ?        00:01:08 /hd/hdinstall/jdk1.7/bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hd/hdinstall/hadoop-2.4.1/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hd/hdinstall/hadoop-2.4.1 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/hd/hdinstall/hadoop-2.4.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hd/hdinstall/hadoop-2.4.1/logs -Dhadoop.log.file=hadoop-root-datanode-hd1.log -Dhadoop.home.dir=/hd/hdinstall/hadoop-2.4.1 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hd/hdinstall/hadoop-2.4.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
root      1571  1183  0 05:10 pts/1    00:00:00 grep data

hd2:

[root@hd2 dfs]# ps -ef|grep data
root      1342     1  2 04:45 ?        00:01:46 /hd/hdinstall/jdk1.7/bin/java -Dproc_datanode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hd/hdinstall/hadoop-2.4.1/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/hd/hdinstall/hadoop-2.4.1 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,console -Djava.library.path=/hd/hdinstall/hadoop-2.4.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/hd/hdinstall/hadoop-2.4.1/logs -Dhadoop.log.file=hadoop-root-datanode-hd2.log -Dhadoop.home.dir=/hd/hdinstall/hadoop-2.4.1 -Dhadoop.id.str=root -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/hd/hdinstall/hadoop-2.4.1/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -server -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=ERROR,RFAS -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.datanode.DataNode
root      1605  1186  0 05:45 pts/1    00:00:00 grep data
[root@hd2 dfs]# 

Start YARN: ./sbin/start-yarn.sh

[root@hd0 sbin]# ./start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /hd/hdinstall/hadoop-2.4.1//logs/yarn-root-resourcemanager-hd0.out
hd1: starting nodemanager, logging to /hd/hdinstall/hadoop-2.4.1/logs/yarn-root-nodemanager-hd1.out
hd2: starting nodemanager, logging to /hd/hdinstall/hadoop-2.4.1/logs/yarn-root-nodemanager-hd2.out

Now the processes running on hd0 are: NameNode, SecondaryNameNode, and ResourceManager.

The processes running on hd1 and hd2 are: DataNode and NodeManager.
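Instead of grepping ps output, jps (bundled with the JDK) lists the Java daemons more directly; roughly, you would expect:

/hd/hdinstall/jdk1.7/bin/jps
# on hd0:     NameNode, SecondaryNameNode, ResourceManager (plus Jps itself)
# on hd1/hd2: DataNode, NodeManager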


Check the cluster status:

[root@hd0 sbin]# cd ../bin/
[root@hd0 bin]# ./hdfs dfsadmin -report
14/07/21 05:10:55 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Configured Capacity: 10321133568 (9.61 GB)
Present Capacity: 8232607744 (7.67 GB)
DFS Remaining: 8232558592 (7.67 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0

-------------------------------------------------
Datanodes available: 2 (2 total, 0 dead)

Live datanodes:
Name: 192.168.1.206:50010 (hd1)
Hostname: hd1
Decommission Status : Normal
Configured Capacity: 5160566784 (4.81 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 1044275200 (995.90 MB)
DFS Remaining: 4116267008 (3.83 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.76%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Mon Jul 21 05:10:59 CST 2014


Name: 192.168.1.207:50010 (hd2)
Hostname: hd2
Decommission Status : Normal
Configured Capacity: 5160566784 (4.81 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 1044250624 (995.88 MB)
DFS Remaining: 4116291584 (3.83 GB)
DFS Used%: 0.00%
DFS Remaining%: 79.76%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Last contact: Mon Jul 21 05:11:02 CST 2014

View the HDFS web UI:    http://192.168.1.205:50070
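As a small smoke test (my own addition, not part of the original walkthrough; the paths are just examples), write a file into HDFS and read the listing back. The YARN ResourceManager web UI configured in yarn-site.xml is at http://192.168.1.205:8088.

cd /hd/hdinstall/hadoop-2.4.1/bin
./hdfs dfs -mkdir -p /test               # create a test directory in HDFS
./hdfs dfs -put /etc/hosts /test/hosts   # upload a small local file
./hdfs dfs -ls /test                     # the uploaded file should appear here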



--






Reference: http://blog.csdn.net/licongcong_0224/article/details/12972889
