Install and configure Hadoop and the JDK
Configure the following four files under /usr/local/src/hadoop/etc/hadoop/. In each file, the properties listed below go inside the <configuration> element.
hdfs-site.xml
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/local/src/hadoop/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/local/src/hadoop/dfs/data</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
core-site.xml
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.150.81:9000</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/src/hadoop/tmp</value>
</property>
mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
</property>
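The two jobhistory addresses only respond once the JobHistory daemon is running. A minimal sketch of starting it later on the master as the hadoop user, assuming a Hadoop 2.x layout with $HADOOP_HOME/sbin on the PATH:
# start the MapReduce JobHistory server on master
mr-jobhistory-daemon.sh start historyserver
# its web UI should then answer at http://master:19888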
yarn-site.xml
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
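With all four files edited, the effective values can be sanity-checked on the master; a quick sketch, assuming $HADOOP_HOME/bin is on the PATH:
hdfs getconf -confKey fs.defaultFS       # expect hdfs://192.168.150.81:9000
hdfs getconf -confKey dfs.replication    # expect 3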
Other Hadoop configuration
In the /usr/local/src/hadoop/etc/hadoop/ directory:
vi masters
# add the master host's IP address
vi slaves
# slave1 host IP
# slave2 host IP
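A sketch of the resulting files; 192.168.150.81 is the master address from core-site.xml, while the slave entries are placeholders for the real addresses:
# masters
192.168.150.81
# slaves
<slave1-ip>
<slave2-ip>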
Create the required directories:
mkdir /usr/local/src/hadoop/tmp
mkdir -p /usr/local/src/hadoop/dfs/name
mkdir -p /usr/local/src/hadoop/dfs/data
Change the ownership of the directories:
chown -R hadoop:hadoop /usr/local/src/hadoop/
Copy the Hadoop directory (including the configuration) to the slave nodes:
scp -r /usr/local/src/hadoop/ root@slave1:/usr/local/src/
scp -r /usr/local/src/hadoop/ root@slave2:/usr/local/src/
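A quick check that the copy landed on both slaves; a sketch, assuming the same root SSH access the scp commands use:
ssh root@slave1 'ls /usr/local/src/hadoop/etc/hadoop'
ssh root@slave2 'ls /usr/local/src/hadoop/etc/hadoop'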
Set the Hadoop environment variables on each slave node.
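The variables themselves are not listed here; a minimal sketch, assuming the same layout as on the master and that they are appended to /etc/profile (the hadoop user's ~/.bashrc also works):
# on each slave
export HADOOP_HOME=/usr/local/src/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile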
Change the directory ownership on the slave nodes:
chown -R hadoop:hadoop /usr/local/src/hadoop/    # on slave1
chown -R hadoop:hadoop /usr/local/src/hadoop/    # on slave2