These are the main steps I used at the time to set up hadoop + zookeeper + hbase + solr on three ECS nodes. The notes are unpolished, so please use them alongside other blog posts and remember to adjust the parameters to your own environment.
1. Unpack the HBase tarball to the chosen directory
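A minimal sketch of this step, assuming the downloaded tarball is named hbase-2.1.9-bin.tar.gz and the target directory is /opt/hbase (both assumptions, adjust to your own paths):
# creates /opt/hbase/hbase-2.1.9
tar -zxvf /opt/hbase/hbase-2.1.9-bin.tar.gz -C /opt/hbase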
2. vi /etc/profile
export HBASE_HOME=/opt/hbase/hbase-2.1.9
export PATH=.:${JAVA_HOME}/bin:${SCALA_HOME}/bin:${SPARK_HOME}/bin:${HBASE_HOME}/bin:$PATH
source /etc/profile
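To confirm the profile took effect, a quick check (hbase version is a standard subcommand of the hbase script):
echo $HBASE_HOME      # should print /opt/hbase/hbase-2.1.9
hbase version         # should report HBase 2.1.9 if PATH is set correctly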
3. vi /.../hbase-2.1.9/conf/hbase-env.sh
export JAVA_HOME=/opt/jdk/jdk1.8.0_191
export HADOOP_HOME=/opt/hadoop/hadoop-2.7.7
export HBASE_HOME=/opt/hbase/hbase-2.1.9
export HBASE_CLASSPATH=/opt/hadoop/hadoop-2.7.7/etc/hadoop/
export HBASE_PID_DIR=/opt/DonotDelete/hbasepid
export HBASE_MANAGES_ZK=false
###
export HBASE_CLASSPATH --> location of the Hadoop configuration files
HBASE_MANAGES_ZK=false --> do not use the ZooKeeper bundled with HBase; the external ZooKeeper cluster is used instead
export HBASE_PID_DIR --> where the pid files are stored; keeping them out of /tmp prevents them from being cleaned up, which would otherwise make it impossible to stop the processes with the scripts
See:
https://blog.csdn.net/xiao_jun_0820/article/details/35222699
https://www.cnblogs.com/qindongliang/p/4894572.html
https://www.cnblogs.com/weiyiming007/p/12018288.html
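Because HBASE_MANAGES_ZK=false, the external ZooKeeper ensemble on Gwj, Ssj and Pyf must be running before HBase is started. A minimal sketch, assuming ZooKeeper is installed under /opt/zookeeper/zookeeper-3.4.x (hypothetical path, adjust to your install):
# run on each of Gwj, Ssj, Pyf
/opt/zookeeper/zookeeper-3.4.x/bin/zkServer.sh start
/opt/zookeeper/zookeeper-3.4.x/bin/zkServer.sh status   # expect one leader and two followers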
Likewise, to keep Hadoop's pid files safe:
vi /opt/hadoop/hadoop-2.7.7/etc/hadoop/hadoop-env.sh
export HADOOP_PID_DIR=/opt/DonotDelete/hadooppid
and the same for Spark: vi ~/spark-env.sh
export SPARK_PID_DIR=/opt/DonotDelete/sparkpid
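These pid directories must exist on every node before the daemons start; a quick sketch (directory names taken from the exports above):
# run on each node (Gwj, Ssj, Pyf)
mkdir -p /opt/DonotDelete/hbasepid /opt/DonotDelete/hadooppid /opt/DonotDelete/sparkpid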
4. vi /.../hbase-2.1.9/conf/hbase-site.xml
Note: if the root directory is placed on HDFS, the port number must match the one configured in hdfs-site.xml.
<configuration>
  <property><name>hbase.rootdir</name><value>hdfs://Gwj:8020/hbase</value></property>
  <property><name>hbase.zookeeper.property.clientPort</name><value>2181</value></property>
  <property><name>zookeeper.session.timeout</name><value>120000</value></property>
  <property><name>hbase.master.maxclockskew</name><value>150000</value></property>
  <property><name>hbase.zookeeper.quorum</name><value>Gwj,Ssj,Pyf</value></property>
  <property><name>hbase.tmp.dir</name><value>/opt/hbase/temphbasedata</value></property>
  <property><name>hbase.cluster.distributed</name><value>true</value></property>
  <property><name>hbase.master</name><value>Gwj:60000</value></property>
</configuration>
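One way to check that the port in hbase.rootdir really matches the NameNode address HDFS is using (the address usually comes from fs.defaultFS in core-site.xml, or dfs.namenode.rpc-address if set in hdfs-site.xml), assuming the Hadoop config path from step 3:
grep -A1 "fs.defaultFS" /opt/hadoop/hadoop-2.7.7/etc/hadoop/core-site.xml
grep -A1 "rpc-address" /opt/hadoop/hadoop-2.7.7/etc/hadoop/hdfs-site.xml
# both should agree with the hdfs://Gwj:8020 used in hbase.rootdir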
5. vi /.../hbase-2.1.9/conf/regionservers
Ssj
Pyf
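The same HBase install and configuration also need to be present on Ssj and Pyf. One way to copy them over, assuming passwordless ssh as root between the nodes (both assumptions of this sketch):
# run on Gwj; copies the whole HBase directory to the two RegionServer nodes
scp -r /opt/hbase/hbase-2.1.9 root@Ssj:/opt/hbase/
scp -r /opt/hbase/hbase-2.1.9 root@Pyf:/opt/hbase/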
6. Start, stop, and check status
/opt/hbase/hbase-2.1.9/bin/start-hbase.sh
stop-hbase.sh
hbase shell    # then run: status   (HBase ships no status-hbase.sh; use the shell's status command, or jps on each node)
Processes expected on each node after a normal start:
HBase
Master---HMaster
Slave---HRegionServer
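A quick sketch of that check with jps, assuming you can ssh from Gwj to the other two nodes:
jps            # on Gwj: expect HMaster (plus the Hadoop/ZooKeeper processes)
ssh Ssj jps    # expect HRegionServer
ssh Pyf jps    # expect HRegionServer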