
Distributed Configuration of Hadoop and HBase




Having set up Hadoop and HBase on a single machine, it is finally time to deploy them on a cluster. Working through the configuration, the distributed setup turns out to be quite similar to the single-node one; it mostly comes down to changing a few parameters in the configuration files for distributed operation. Of course this is only a simple deployment: building a real production system, where performance and stability matter, is nowhere near this simple. But today's goal is just to get a distributed Hadoop/HBase running, without worrying about performance tuning, and the cluster is tiny to begin with: one master and two slaves:
master: 30.0.0.69
node1: 30.0.0.161
node2: 30.0.0.162
The overall work breaks down into four parts: cluster prerequisites, the Hadoop configuration files, the HBase configuration files, and copying the installation to the other nodes. Before starting, make sure every node has a hadoop user (any user name works, as long as it is the same everywhere); either create it during system installation or add one with adduser. This user must be able to run sudo, so it is best to add it to the %admin group, editing /etc/sudoers if necessary; see http://blog.chinaunix.net/uid-26275986-id-3940725.html for details.
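A minimal sketch of the user setup, assuming Ubuntu's standard tools; adjust the group name to whatever your /etc/sudoers actually grants:

  # Run on every node as a user that already has root privileges.
  sudo adduser hadoop                 # skip if the installer already created the user
  sudo usermod -aG admin hadoop       # or the "sudo" group, depending on your /etc/sudoers
  # If neither group is listed in /etc/sudoers, edit it with visudo and add a line such as:
  #   %admin ALL=(ALL) ALL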


I. Cluster prerequisites
By prerequisites I mean the settings that let the cluster nodes talk to each other: the network interface configuration, the hosts file, and passwordless ssh.
1. Network interface configuration
All my machines run Ubuntu 12.04 desktop. On Ubuntu the interface can be configured with ifconfig or with the network applet in the top-right corner, but I prefer editing the configuration file (/etc/network/interfaces):


  auto lo
  iface lo inet loopback
  #auto eth0
  #iface eth0 inet dhcp
  auto eth0
  iface eth0 inet static
  address 30.0.0.69
  netmask 255.255.255.0
  gateway 30.0.0.254
  #dns-nameservers DNS-IP

The above is the network interface configuration file. Two things to note. First, changes made by editing this file only take effect immediately after running sudo /etc/init.d/networking restart; otherwise they apply after the next reboot. Second, on Ubuntu you cannot set DNS by editing /etc/resolv.conf directly, because it is regenerated on every reboot. To set DNS, either add a dns-nameservers line to the interface configuration (the last line above, commented out here because this example does not need DNS), or edit /etc/resolvconf/resolv.conf.d/base and then restart the networking service.
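To apply and check the settings without rebooting, something like the following should do (these are the stock Ubuntu 12.04 commands mentioned above):

  sudo /etc/init.d/networking restart   # reload /etc/network/interfaces
  ifconfig eth0                         # expect inet addr:30.0.0.69  Mask:255.255.255.0
  route -n                              # expect a default route via 30.0.0.254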

2. Hosts configuration
/etc/hosts maps each cluster node's name to its IP. So that the nodes can later reach each other correctly, set it up as follows, and do so on every node:


  127.0.0.1    localhost
  #127.0.0.1    hadoop
  #127.0.0.1 master
  30.0.0.69  master
  30.0.0.161 node1
  30.0.0.162 node2
  # The following lines are desirable for IPv6 capable hosts
  #::1 ip6-localhost ip6-loopback
  #fe00::0 ip6-localnet
  #ff00::0 ip6-mcastprefix
  #ff02::1 ip6-allnodes
  #ff02::2 ip6-allrouters

I do not use IPv6, so the corresponding entries are commented out. Once configured, this file can simply be copied to the other nodes.
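One way to push the finished hosts file from the master to the slaves; since passwordless ssh is not set up yet, expect password prompts, and /etc/hosts needs root to overwrite (a sketch, assuming the hadoop user already exists on both slaves):

  # Copy by IP, because the node names do not resolve on the slaves yet.
  for ip in 30.0.0.161 30.0.0.162; do
      scp /etc/hosts hadoop@"$ip":/tmp/hosts
      ssh -t hadoop@"$ip" 'sudo mv /tmp/hosts /etc/hosts'
  done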

3. Passwordless ssh
Hadoop's nodes communicate over ssh, so to avoid constant password prompts you need to set up passwordless ssh login. This is actually simple: the node being logged into just needs to hold the public key of the user logging in.
Step 1: ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa. This creates the public/private key pair (id_dsa and id_dsa.pub) under ~/.ssh.
Step 2: append the public key to the target machine's authorized keys file: cat id_dsa.pub >> authorized_keys
What needs to be done here is to append the master's public key to authorized_keys under .ssh on node1 and node2. I did run into a problem while testing: node2 could ssh to the other nodes, but they could not ssh back. After some digging it turned out the ssh versions on the nodes were inconsistent; node2 was running the version that shipped with the system, which apparently lacked the server component and needed updating. To keep things simple, just run sudo apt-get install ssh on every node at this step.
Once this is done, test ssh between the nodes; the first connection may still prompt you, but from the second time on the login is password-free.
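Put together, the whole key setup run from the master looks roughly like this (as the hadoop user; ssh-copy-id would do the same appending in one step):

  # Generate the key pair on the master (empty passphrase).
  ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
  # Let the master ssh to itself without a password as well.
  cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
  # Append the master's public key on each slave; passwords are still needed this one time.
  for node in node1 node2; do
      cat ~/.ssh/id_dsa.pub | ssh hadoop@"$node" 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
  done
  # Verify: these should print the slave hostnames without prompting.
  ssh node1 hostname
  ssh node2 hostname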


II. Hadoop distributed configuration files
The distributed setup has the same prerequisites as the single-node one; the JDK is of course required. Keep the hadoop user and the hadoop/hbase installation paths identical on every node, which saves a lot of trouble when copying later. Then download the Hadoop package, extract it, and start editing the configuration files:
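For reference, a sketch of the unpacking step using the version and paths that appear later in this post (Hadoop 1.0.3 under /home/hadoop/platform); the tarball itself is assumed to be downloaded already:

  mkdir -p /home/hadoop/platform
  tar -xzf hadoop-1.0.3.tar.gz -C /home/hadoop/platform
  cd /home/hadoop/platform/hadoop-1.0.3/conf    # all files edited below live in this directory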
1. Set the JDK path: edit hadoop-env.sh


  # Set Hadoop-specific environment variables here.
  # The only required environment variable is JAVA_HOME. All others are
  # optional. When running a distributed configuration it is best to
  # set JAVA_HOME in this file, so that it is correctly defined on
  # remote nodes.
  # The java implementation to use. Required.
  export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_35
  # Extra Java CLASSPATH elements. Optional.
  # export HADOOP_CLASSPATH=
  # The maximum amount of heap to use, in MB. Default is 1000.
  #export HADOOP_HEAPSIZE=1000
  # Extra Java runtime options. Empty by default.
  # export HADOOP_OPTS=-server
  # Command specific options appended to HADOOP_OPTS when specified
  export HADOOP_NAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_NAMENODE_OPTS"
  export HADOOP_SECONDARYNAMENODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_SECONDARYNAMENODE_OPTS"
  export HADOOP_DATANODE_OPTS="-Dcom.sun.management.jmxremote $HADOOP_DATANODE_OPTS"
  export HADOOP_BALANCER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_BALANCER_OPTS"
  export HADOOP_JOBTRACKER_OPTS="-Dcom.sun.management.jmxremote $HADOOP_JOBTRACKER_OPTS"
  # export HADOOP_TASKTRACKER_OPTS=
  # The following applies to multiple commands (fs, dfs, fsck, distcp etc)
  #export HADOOP_CLIENT_OPTS="-Xmx1024m $HADOOP_CLIENT_OPTS"
  # Extra ssh options. Empty by default.
  # export HADOOP_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HADOOP_CONF_DIR"
  # Where log files are stored. $HADOOP_HOME/logs by default.
  # export HADOOP_LOG_DIR=${HADOOP_HOME}/logs
  # File naming remote slave hosts. $HADOOP_HOME/conf/slaves by default.
  # export HADOOP_SLAVES=${HADOOP_HOME}/conf/slaves
  # host:path where hadoop code should be rsync'd from. Unset by default.
  # export HADOOP_MASTER=master:/home/$USER/src/hadoop
  # Seconds to sleep between slave commands. Unset by default. This
  # can be useful in large clusters, where, e.g., slave rsyncs can
  # otherwise arrive faster than the master can service them.
  # export HADOOP_SLAVE_SLEEP=0.1
  # The directory where pid files are stored. /tmp by default.
  # export HADOOP_PID_DIR=/var/hadoop/pids
  # A string representing this instance of hadoop. $USER by default.
  # export HADOOP_IDENT_STRING=$USER
  # The scheduling priority for daemon processes. See 'man nice'.
  # export HADOOP_NICENESS=10

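Since JAVA_HOME has to resolve to the same path on every node once the directory is copied out, a quick hedged sanity check (host names from /etc/hosts, JDK path from the file above):

  for host in master node1 node2; do
      ssh "$host" '/home/hadoop/platform/jdk1.6.0_35/bin/java -version'
  done
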
2. Configure the NameNode address: edit core-site.xml


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
     <property>
         <name>fs.default.name</name>
         <value>hdfs://master:9000</value>
     </property>
     <property>
         <name>hadoop.tmp.dir</name>
         <value>/home/hadoop/hdfs/tmp</value>
     </property>
  </configuration>

3. Configure HDFS: edit hdfs-site.xml:


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
     <property>
         <name>dfs.name.dir</name>
         <value>/home/hadoop/hdfs/name</value>
         <final>true</final>
     </property>
     <property>
         <name>dfs.data.dir</name>
         <value>/home/hadoop/hdfs/data</value>
         <final>true</final>
     </property>
     <property>
         <name>dfs.datanode.max.xcievers</name>
         <value>32768</value>
     </property>
     <property>
         <!-- Set this no higher than the number of DataNodes; each DataNode stores at most one replica. -->
         <name>dfs.replication</name>
         <value>2</value>
         <final>true</final>
     </property>
  </configuration>

Note that the final tag marks a setting that may not be dynamically modified or overridden later at runtime.
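The name, data and tmp directories above are just paths; Hadoop creates most of them when you format and start, but creating them up front on every node avoids ownership surprises (a sketch using the paths from core-site.xml and hdfs-site.xml):

  for host in master node1 node2; do
      ssh "$host" 'mkdir -p /home/hadoop/hdfs/name /home/hadoop/hdfs/data /home/hadoop/hdfs/tmp'
  done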
4. Configure MapReduce: edit mapred-site.xml:


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!-- Put site-specific property overrides in this file. -->
  <configuration>
     <property>
         <name>mapred.job.tracker</name>
         <value>30.0.0.69:9001</value>
     </property>
     <property>
         <!-- Size this to your machines; mine only have 1 GB of RAM. -->
         <name>mapred.child.java.opts</name>
         <value>-Xmx800m</value>
         <final>true</final>
     </property>
  </configuration>

5. Edit the masters and slaves files for your cluster: put the hostname of the master role (master) in the masters file, and the hostnames of the DataNode role (node1, node2) in the slaves file;
6. All of the above is done on the master; then copy the configured hadoop directory as-is to every other node (see the sketch after step 7). Make sure the hadoop user and the hadoop installation path are identical on each node, and do not forget to set up the hosts file on every machine;
7. Run a test:
Start Hadoop on the master, then check the running processes with jps; TaskTracker and DataNode should both be running on node1 and node2 (a sketch covering steps 6 and 7 follows below).
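A hedged sketch of steps 6 and 7, using the host names and paths assumed throughout this post (Hadoop 1.x keeps its control scripts under bin/):

  # Step 6: copy the configured installation to the slaves (same path on every node).
  scp -r /home/hadoop/platform/hadoop-1.0.3 hadoop@node1:/home/hadoop/platform/
  scp -r /home/hadoop/platform/hadoop-1.0.3 hadoop@node2:/home/hadoop/platform/

  # Step 7: on the master, format HDFS once, then start all daemons.
  cd /home/hadoop/platform/hadoop-1.0.3
  bin/hadoop namenode -format
  bin/start-all.sh

  # Check the running daemons.
  jps                                                    # master: NameNode, SecondaryNameNode, JobTracker
  ssh node1 /home/hadoop/platform/jdk1.6.0_35/bin/jps    # slaves: DataNode, TaskTracker
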
Then the HDFS web UI at http://30.0.0.69:50070 shows the filesystem status; Live Nodes should be 2.
To see the MapReduce job status, open http://30.0.0.69:50030.
Strangely, my Nodes count there was 0, even though the TaskTrackers were clearly running on the slave nodes. Searching around, this seems to be a common problem, and the usual suggestions are:
1. Take the NameNode out of safe mode: hadoop dfsadmin -safemode leave;
2. Reformat HDFS: delete the HDFS directories on every node, i.e. hadoop.tmp.dir from core-site.xml and the name/data directories from hdfs-site.xml, then format again;
3. Check the firewall: sudo ufw status
None of these helped. Then it occurred to me that perhaps no MapReduce job had ever run, so I quickly tested Hadoop:
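A hedged example of such a test, using the examples jar that ships in the root of the Hadoop 1.0.3 distribution (the input/output paths here are arbitrary):

  cd /home/hadoop/platform/hadoop-1.0.3
  bin/hadoop fs -mkdir /input
  bin/hadoop fs -put conf/*.xml /input              # any small text files will do
  bin/hadoop jar hadoop-examples-1.0.3.jar wordcount /input /output
  bin/hadoop fs -cat /output/part-r-00000 | head    # spot-check the result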
Sure enough, this time the page showed both nodes. So it was my own misunderstanding: this page only shows nodes for map-reduce tasks that are actually running:
That completes the basic distributed Hadoop configuration; next, HBase.


III. HBase configuration
Configuring HBase follows the same idea as Hadoop: set up the files on the HMaster first, and as long as the user and paths are identical on every node, the hbase installation directory can simply be copied over:
1. Configure the HBase runtime environment: edit hbase-env.sh:


  #
  #/**
  # * Copyright 2007 The Apache Software Foundation
  # *
  # * Licensed to the Apache Software Foundation (ASF) under one
  # * or more contributor license agreements. See the NOTICE file
  # * distributed with this work for additional information
  # * regarding copyright ownership. The ASF licenses this file
  # * to you under the Apache License, Version 2.0 (the
  # * "License"); you may not use this file except in compliance
  # * with the License. You may obtain a copy of the License at
  # *
  # * http://www.apache.org/licenses/LICENSE-2.0
  # *
  # * Unless required by applicable law or agreed to in writing, software
  # * distributed under the License is distributed on an "AS IS" BASIS,
  # * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  # * See the License for the specific language governing permissions and
  # * limitations under the License.
  # */
  # Set environment variables here.
  # The java implementation to use. Java 1.6 required.
  export JAVA_HOME=/home/hadoop/platform/jdk1.6.0_35
  export HBASE_HOME=/home/hadoop/platform/hbase-0.90.0
  export HADOOP_HOME=/home/hadoop/platform/hadoop-1.0.3
  # Extra Java CLASSPATH elements. Optional.
  # export HBASE_CLASSPATH=
  # The maximum amount of heap to use, in MB. Default is 1000.
  #export HBASE_HEAPSIZE=1000
  # Extra Java runtime options.
  # Below are what we set by default. May only work with SUN JVM.
  # For more on why as well as other possible settings,
  # see http://wiki.apache.org/hadoop/PerformanceTuning
  export HBASE_OPTS="$HBASE_OPTS -ea -XX:+HeapDumpOnOutOfMemoryError -XX:+UseConcMarkSweepGC -XX:+CMSIncrementalMode"
  # Uncomment below to enable java garbage collection logging.
  # export HBASE_OPTS="$HBASE_OPTS -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCDateStamps -Xloggc:$HBASE_HOME/logs/gc-hbase.log"
  # Uncomment and adjust to enable JMX exporting
  # See jmxremote.password and jmxremote.access in $JRE_HOME/lib/management to configure remote password access.
  # More details at: http://java.sun.com/javase/6/docs/technotes/guides/management/agent.html
  #
  # export HBASE_JMX_BASE="-Dcom.sun.management.jmxremote.ssl=false -Dcom.sun.management.jmxremote.authenticate=false"
  # export HBASE_MASTER_OPTS="$HBASE_MASTER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10101"
  # export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102"
  # export HBASE_THRIFT_OPTS="$HBASE_THRIFT_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10103"
  # export HBASE_ZOOKEEPER_OPTS="$HBASE_ZOOKEEPER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10104"
  # File naming hosts on which HRegionServers will run. $HBASE_HOME/conf/regionservers by default.
  # export HBASE_REGIONSERVERS=${HBASE_HOME}/conf/regionservers
  # Extra ssh options. Empty by default.
  # export HBASE_SSH_OPTS="-o ConnectTimeout=1 -o SendEnv=HBASE_CONF_DIR"
  # Where log files are stored. $HBASE_HOME/logs by default.
  export HBASE_LOG_DIR=${HBASE_HOME}/logs
  # A string representing this instance of hbase. $USER by default.
  # export HBASE_IDENT_STRING=$USER
  # The scheduling priority for daemon processes. See 'man nice'.
  # export HBASE_NICENESS=10
  # The directory where pid files are stored. /tmp by default.
  # export HBASE_PID_DIR=/var/hadoop/pids
  # Seconds to sleep between slave commands. Unset by default. This
  # can be useful in large clusters, where, e.g., slave rsyncs can
  # otherwise arrive faster than the master can service them.
  # export HBASE_SLAVE_SLEEP=0.1
  # Tell HBase whether it should manage it's own instance of Zookeeper or not.
  export HBASE_MANAGES_ZK=true

2. Configure HBase's use of HDFS: edit hbase-site.xml:


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!--
  /**
   * Copyright 2010 The Apache Software Foundation
   *
   * Licensed to the Apache Software Foundation (ASF) under one
   * or more contributor license agreements. See the NOTICE file
   * distributed with this work for additional information
   * regarding copyright ownership. The ASF licenses this file
   * to you under the Apache License, Version 2.0 (the
   * "License"); you may not use this file except in compliance
   * with the License. You may obtain a copy of the License at
   *
   * http://www.apache.org/licenses/LICENSE-2.0
   *
   * Unless required by applicable law or agreed to in writing, software
   * distributed under the License is distributed on an "AS IS" BASIS,
   * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   * See the License for the specific language governing permissions and
   * limitations under the License.
   */
  -->
  <configuration>
     <property>
         <name>hbase.rootdir</name>
         <value>hdfs://master:9000/hbase</value>
     </property>
     <property>
         <name>hbase.cluster.distributed</name>
         <value>true</value>
     </property>
     <property>
         <name>hbase.master</name>
         <value>30.0.0.69:60000</value>
     </property>
     <property>
         <name>hbase.zookeeper.quorum</name>
         <value>30.0.0.161,30.0.0.162</value>
     </property>
     <property>
         <name>zookeeper.znode.parent</name>
         <value>/hbase</value>   <!-- default location -->
     </property>
  </configuration>

This mainly sets HBase's root directory on HDFS and the location of the HMaster.
3. Copy the hbase-default.xml file from src/main/resources/ under the hbase directory, and modify two entries: hbase.rootdir, pointing it at the HDFS directory, and hbase.cluster.distributed:


  <?xml version="1.0"?>
  <?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
  <!--
  /**
   * Copyright 2009 The Apache Software Foundation
   *
   * Licensed to the Apache Software Foundation (ASF) under one
   * or more contributor license agreements. See the NOTICE file
   * distributed with this work for additional information
   * regarding copyright ownership. The ASF licenses this file
   * to you under the Apache License, Version 2.0 (the
   * "License"); you may not use this file except in compliance
   * with the License. You may obtain a copy of the License at
   *
   * http://www.apache.org/licenses/LICENSE-2.0
   *
   * Unless required by applicable law or agreed to in writing, software
   * distributed under the License is distributed on an "AS IS" BASIS,
   * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   * See the License for the specific language governing permissions and
   * limitations under the License.
   */
  -->
  <configuration>
    <property>
      <name>hbase.rootdir</name>
      <value>hdfs://master:9000/hbase</value>
      <description>The directory shared by region servers and into
      which HBase persists. The URL should be 'fully-qualified'
      to include the filesystem scheme. For example, to specify the
      HDFS directory '/hbase' where the HDFS instance's namenode is
      running at namenode.example.org on port 9000, set this value to:
      hdfs://namenode.example.org:9000/hbase. By default HBase writes
      into /tmp. Change this configuration else all data will be lost
      on machine restart.
      </description>
    </property>
    <property>
      <name>hbase.master.port</name>
      <value>60000</value>
      <description>The port the HBase Master should bind to.</description>
    </property>
    <property>
      <name>hbase.cluster.distributed</name>
      <value>true</value>
      <description>The mode the cluster will be in. Possible values are
        false for standalone mode and true for distributed mode. If
        false, startup will run all HBase and ZooKeeper daemons together
        in the one JVM.
      </description>
    </property>
    <property>
      <name>hbase.tmp.dir</name>
      <value>/tmp/hbase-${user.name}</value>
      <description>Temporary directory on the local filesystem.
      Change this setting to point to a location more permanent
      than '/tmp' (The '/tmp' directory is often cleared on
      machine restart).
      </description>
    </property>
    <property>
    ... (the rest of the file is unchanged and omitted here)

4. Similar to Hadoop's masters and slaves files, list the RegionServer hostnames (node1, node2) in HBase's conf/regionservers file, one name per line;
5. Copy the hbase directory to the same location on every other node (the same scp pattern used for Hadoop works here); at this point the HBase configuration is essentially done;
6. Run a test: start HBase with start-hbase.sh:
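A sketch of starting the cluster, assuming Hadoop is already running and using the HBASE_HOME path from hbase-env.sh above:

  cd /home/hadoop/platform/hbase-0.90.0
  bin/start-hbase.sh

  # With HBASE_MANAGES_ZK=true, expect roughly these daemons:
  jps                                                    # master: HMaster
  ssh node1 /home/hadoop/platform/jdk1.6.0_35/bin/jps    # slaves: HRegionServer, HQuorumPeer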
Create a table from the hbase shell:
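A minimal shell session to confirm the cluster works end to end; the table and column family names here are only examples:

  # run "bin/hbase shell" from the hbase directory, then:
  create 'testtable', 'cf'
  put 'testtable', 'row1', 'cf:greeting', 'hello'
  scan 'testtable'
  list
  exit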
View the HMaster through the web UI at http://30.0.0.69:60010


PS:
When configuring distributed hadoop/hbase, be absolutely clear about the IPs of the Master and the HMaster, and keep the configuration files free of mistakes, otherwise the services will not start properly. The overall approach is:
1. On every node: configure hosts and ssh, and install the JDK (which provides jps); create the same hadoop user and the same file paths;
2. Configure hadoop/hbase on the Master/HMaster;
3. Copy the hadoop/hbase directories straight to every node and test that everything works.

