Configuring Hadoop + ZooKeeper + HBase on Linux

Keywords: hadoop hbase configuration, hadoop zookeeper configuration, hbase zookeeper configuration

Environment Preparation

1. Install VMware on Windows.

2. Create three Fedora 14 Linux virtual machines with the following addresses:
m201 192.168.0.201 (NameNode)
s202 192.168.0.202 (DataNode)
s203 192.168.0.203 (DataNode)

3. On the Linux systems, download the required software:
jdk-6u23-linux-i586-rpm.bin
hadoop-0.20.2.tar.gz
zookeeper-3.3.3.tar.gz
hbase-0.90.2.tar.gz
Save the downloaded files to the /root/install directory.

Installing the JDK (perform the same steps on s202 and s203)

1. Run jdk-6u23-linux-i586-rpm.bin. The JDK will be installed in /usr/java/jdk1.6.0_23.
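For example, assuming the installer was saved to /root/install as described above:
cd /root/install
chmod +x jdk-6u23-linux-i586-rpm.bin
./jdk-6u23-linux-i586-rpm.bin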

2. Set the Java environment variables by editing /etc/profile. Add the following at the end of the file:
export JAVA_HOME=/usr/java/jdk1.6.0_23/
export JRE_HOME=/usr/java/jdk1.6.0_23/jre/
export CLASSPATH=.:$JAVA_HOME/lib:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
3. Apply the changes by sourcing /etc/profile.
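For example:
source /etc/profile
java -version (should report java version "1.6.0_23")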

Setting up SSH (so that m201 can access s202 and s203 without a password)

From the official documentation:

Now check that you can ssh to the localhost without a passphrase:
$ ssh localhost

If you cannot ssh to localhost without a passphrase, execute the following commands:
$ ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
$ cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys

Append the contents of m201's id_dsa.pub file to the authorized_keys file on both s202 and s203.
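One way to do this from m201 (a sketch assuming root logins; you will be prompted for each password once):
cat ~/.ssh/id_dsa.pub | ssh root@s202 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
cat ~/.ssh/id_dsa.pub | ssh root@s203 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'
Afterwards, verify with ssh s202 and ssh s203; no password should be requested.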

Installing Hadoop

1. Extract hadoop-0.20.2.tar.gz in the /root/install directory with: tar -zxvf hadoop-0.20.2.tar.gz. This creates the hadoop-0.20.2 directory.

2. Change to the /root/install/hadoop-0.20.2/conf directory.

3. Edit the masters file (defines the master's IP):
192.168.0.201

4. Edit the slaves file (defines the slaves' IPs):
192.168.0.202
192.168.0.203

5. Edit hadoop-env.sh (set the JDK path):
export JAVA_HOME=/usr/java/jdk1.6.0_23

6. Edit core-site.xml and add the following inside <configuration>:
<property>
<name>hadoop.tmp.dir</name>
<value>/hadoopdata</value>
<description>A base for other temporary directories.</description>
</property>

<property>
<name>fs.default.name</name>
<value>hdfs://m201:9000</value>
<description>The name of the default file system. A URI whose
scheme and authority determine the FileSystem implementation. The
uri's scheme determines the config property (fs.SCHEME.impl) naming
the FileSystem implementation class. The uri's authority is used to
determine the host, port, etc. for a filesystem.</description>
</property>

7. Edit hdfs-site.xml and add the following inside <configuration>:
<property>
<name>dfs.replication</name>
<value>1</value>
</property>

8. Edit mapred-site.xml and add the following inside <configuration>:
<property>
<name>mapred.job.tracker</name>
<value>m201:9001</value>
<description>The host and port that the MapReduce job tracker runs
at. If "local", then jobs are run in-process as a single map
and reduce task.
</description>
</property>

9. Set the environment variables by editing /etc/profile:
export HADOOP_HOME=/root/install/hadoop-0.20.2
export PATH=$HADOOP_HOME/bin:$PATH

Perform the same steps on s202 and s203.

Source /etc/profile to apply the changes.

10. Edit the /etc/hosts file and add:
192.168.0.201 m201
192.168.0.202 s202
192.168.0.203 s203
Perform the same steps on s202 and s203.
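You can verify name resolution from m201, for example:
ping -c 1 s202
ping -c 1 s203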

11. Copy the /root/install/hadoop-0.20.2 directory to s202 and s203.
You can use scp -r <source> <host>:<destination>.
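For example, from m201 (assuming root logins):
scp -r /root/install/hadoop-0.20.2 root@s202:/root/install/
scp -r /root/install/hadoop-0.20.2 root@s203:/root/install/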

12. Format the HDFS filesystem:
/root/install/hadoop-0.20.2/bin/hadoop namenode -format

13. Run /root/install/hadoop-0.20.2/bin/start-all.sh to start the services.
Run /root/install/hadoop-0.20.2/bin/stop-all.sh to stop them.
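After starting, you can check that the daemons are up with the JDK's jps command:
jps
On m201 this should list NameNode, SecondaryNameNode, and JobTracker; on s202 and s203 it should list DataNode and TaskTracker. Running hadoop dfsadmin -report also shows the live DataNodes.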

The Hadoop installation is now complete.
To check whether Hadoop is running, visit:
http://192.168.0.201:50070/dfshealth.jsp

Installing ZooKeeper (on m201)

1. Create a zookeeper directory under /root/install/hadoop-0.20.2:
cd /root/install/hadoop-0.20.2
mkdir zookeeper

2. Extract ZooKeeper in the /root/install directory:
cd /root/install
tar -zxvf zookeeper-3.3.3.tar.gz

3. Move the ZooKeeper files into the /root/install/hadoop-0.20.2/zookeeper directory (the tarball extracts to zookeeper-3.3.3):
cd /root/install/zookeeper-3.3.3
mv * /root/install/hadoop-0.20.2/zookeeper

4. Configure ZooKeeper.
1) Create the zoo.cfg file:
cd /root/install/hadoop-0.20.2/zookeeper/conf
cp zoo_sample.cfg zoo.cfg
2) Edit zoo.cfg. The complete content of the file is as follows:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.

dataDir=/root/install/hadoop-0.20.2/zookeeper/zookeeper-data # (added)

dataLogDir=/root/install/hadoop-0.20.2/zookeeper/logs # (added)

# the port at which the clients will connect
clientPort=2181

server.1=m201:2888:3888 # (added)

server.2=s202:2888:3888 # (added)

server.3=s203:2888:3888 # (added)
Add the entries marked "# (added)" above; the markers are annotations for this article, so leave them out when writing the actual file.

3) Create the zookeeper-data directory:
cd /root/install/hadoop-0.20.2/zookeeper/
mkdir zookeeper-data

4) Create the myid file:
cd /root/install/hadoop-0.20.2/zookeeper/zookeeper-data
vi myid
Write 1 as the content of the myid file, then save with :x.
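Equivalently, you can create the file in one step:
echo 1 > /root/install/hadoop-0.20.2/zookeeper/zookeeper-data/myid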

5. Copy the /root/install/hadoop-0.20.2/zookeeper directory to s202 and s203.
You can use scp -r <source> <host>:<destination>.
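For example:
scp -r /root/install/hadoop-0.20.2/zookeeper root@s202:/root/install/hadoop-0.20.2/
scp -r /root/install/hadoop-0.20.2/zookeeper root@s203:/root/install/hadoop-0.20.2/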

6. On s202, change the content of the myid file to: 2

7. On s203, change the content of the myid file to: 3

8. Start ZooKeeper (run the same command on m201, s202, and s203):
/root/install/hadoop-0.20.2/zookeeper/bin/zkServer.sh start
/root/install/hadoop-0.20.2/zookeeper/bin/zkServer.sh stop (to stop)
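Once ZooKeeper has been started on all three nodes, you can verify the ensemble with:
/root/install/hadoop-0.20.2/zookeeper/bin/zkServer.sh status
One node should report Mode: leader and the other two Mode: follower.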

Installing HBase (on m201)
1. Create an hbase directory under /root/install/hadoop-0.20.2:
cd /root/install/hadoop-0.20.2
mkdir hbase

2. Extract HBase in the /root/install directory:
cd /root/install
tar -zxvf hbase-0.90.2.tar.gz

3. Move the HBase files into the /root/install/hadoop-0.20.2/hbase directory:
cd /root/install/hbase-0.90.2
mv * /root/install/hadoop-0.20.2/hbase

4. Configure HBase.
1) Edit /etc/profile and add:
export HBASE_HOME=/root/install/hadoop-0.20.2/hbase
export PATH=$PATH:$HBASE_HOME/bin

Perform the same steps on s202 and s203.

Source /etc/profile to apply the changes.

2) Edit the hbase-site.xml file:
cd /root/install/hadoop-0.20.2/hbase/conf
vi hbase-site.xml

Add the following inside <configuration>:
<property>
<name>hbase.rootdir</name>
<value>hdfs://m201:9000/hasexx</value>
<description>The directory shared by region servers.</description>
</property>
<property>
<name>hbase.master.port</name>
<value>60000</value>
</property>
<property>
<name>hbase.cluster.distributed</name>
<value>true</value>
<description>The mode the cluster will be in. Possible values are
false: standalone and pseudo-distributed setups with managed Zookeeper
true: fully-distributed with unmanaged Zookeeper Quorum (see hbase-env.sh)
</description>
</property>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>hbase.zookeeper.property.dataDir</name>
<value>/root/install/hadoop-0.20.2/zookeeper</value>
<description>Property from ZooKeeper's config zoo.cfg.
The directory where the snapshot is stored.
</description>
</property>
<property>
<name>hbase.zookeeper.property.clientPort</name>
<value>2181</value>
<description>Property from ZooKeeper's config zoo.cfg.
The port at which the clients will connect.
</description>
</property>
<property>
<name>hbase.zookeeper.quorum</name>
<value>m201,s202,s203</value>
<description>Comma separated list of servers in the ZooKeeper Quorum.
For example, "host1.mydomain.com,host2.mydomain.com,host3.mydomain.com".
By default this is set to localhost for local and pseudo-distributed modes
of operation. For a fully-distributed setup, this should be set to a full
list of ZooKeeper quorum servers. If HBASE_MANAGES_ZK is set in hbase-env.sh
this is the list of servers which we will start/stop ZooKeeper on.
</description>
</property>

3) Edit hbase-env.sh and add:
export JAVA_HOME=/usr/java/jdk1.6.0_23/
export HBASE_CLASSPATH=/root/install/hadoop-0.20.2/conf
export HBASE_MANAGES_ZK=false
Setting HBASE_MANAGES_ZK=false tells HBase to use the separately managed ZooKeeper ensemble installed above instead of starting its own.

4) Copy ZooKeeper's zoo.cfg file to the /root/install/hadoop-0.20.2/conf directory:
cp /root/install/hadoop-0.20.2/zookeeper/conf/zoo.cfg /root/install/hadoop-0.20.2/conf/

5) Edit the regionservers file; its complete content is:
192.168.0.202
192.168.0.203

6) Copy Hadoop's hadoop-0.20.2-core.jar file into HBase's lib directory and delete the original hadoop-core-0.20-append-r1056497.jar file, so that HBase uses the same Hadoop version as the cluster.
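For example:
rm /root/install/hadoop-0.20.2/hbase/lib/hadoop-core-0.20-append-r1056497.jar
cp /root/install/hadoop-0.20.2/hadoop-0.20.2-core.jar /root/install/hadoop-0.20.2/hbase/lib/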

7) Copy the /root/install/hadoop-0.20.2/hbase directory to s202 and s203.
You can use scp -r <source> <host>:<destination>.
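For example:
scp -r /root/install/hadoop-0.20.2/hbase root@s202:/root/install/hadoop-0.20.2/
scp -r /root/install/hadoop-0.20.2/hbase root@s203:/root/install/hadoop-0.20.2/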

5. Start the services:
/root/install/hadoop-0.20.2/hbase/bin/start-hbase.sh
/root/install/hadoop-0.20.2/hbase/bin/stop-hbase.sh (to stop)
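To verify the installation, you can open the HBase shell and check the cluster status:
/root/install/hadoop-0.20.2/hbase/bin/hbase shell
hbase(main):001:0> status
The HBase master web UI should also be reachable at http://192.168.0.201:60010/ (60010 is the default master info port in this version).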
