hadoop-2.8.3 Installation and Configuration

  • October 6, 2019
  • Notes

Tags: centos, hadoop, install


1. Environment

  • 1. Resource allocation

    hostname  ip               env            type
    node      192.168.100.199  jdk zk Hadoop  ResourceManager QuorumPeerMain NameNode SecondaryNameNode
    node1     192.168.100.101  jdk zk Hadoop  QuorumPeerMain DataNode NodeManager
    node2     192.168.100.102  jdk zk Hadoop  QuorumPeerMain DataNode NodeManager
    node3     192.168.100.103  jdk zk Hadoop  QuorumPeerMain DataNode NodeManager
  • 2. Keep the clocks of all nodes synchronized via the ntp service (a sketch follows this list).
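
A minimal sketch of the NTP sync on CentOS 7 (the version is assumed — the post only tags centos; pool.ntp.org is a placeholder, point it at your own time source). Run on every node:

    yum install -y ntp
    ntpdate -u pool.ntp.org        # one-off sync before the daemon takes over
    systemctl enable --now ntpd    # keep the clock synchronized from then on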

2. Configuration Files

  • 1. core-site.xml
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/data/hadoop_data/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
  • 2. hdfs-site.xml
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/data/hadoop_data/namenode</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
    </property>
    <!-- RPC address of the NameNode -->
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>node:8020</value>
    </property>
    <!-- HTTP address of the NameNode -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node:50070</value>
    </property>
    <!-- Number of HDFS replicas -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <!-- Local storage path of the DataNode -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/data/hadoop_data/datanode</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
  • 3. yarn-site.xml
    <!-- Which node hosts the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node</value>
    </property>
    <!-- Reducers fetch map output via mapreduce_shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>true</value>
    </property>
  • 4. mapred-site.xml
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node:19888</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>768</value>
    </property>
  • 5. masters
    node
  • 6. slaves
    node1
    node2
    node3
  • 7. zoo.cfg (ZooKeeper)

    tickTime=2000
    initLimit=10
    syncLimit=5
    # Each node must also create a myid file under dataDir, set to the matching
    # server id below (see the sketch after this list)
    dataDir=/opt/data/zookeeper_data/data
    dataLogDir=/opt/data/zookeeper_data/log
    clientPort=2181
    server.1=node:2888:3888
    server.2=node1:2888:3888
    server.3=node2:2888:3888
    server.4=node3:2888:3888
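
To match the zoo.cfg above, each ZooKeeper node needs a myid file under dataDir containing its own server.N id. A minimal sketch (run only the matching line on each node):

    mkdir -p /opt/data/zookeeper_data/data /opt/data/zookeeper_data/log
    echo 1 > /opt/data/zookeeper_data/data/myid   # on node
    echo 2 > /opt/data/zookeeper_data/data/myid   # on node1
    echo 3 > /opt/data/zookeeper_data/data/myid   # on node2
    echo 4 > /opt/data/zookeeper_data/data/myid   # on node3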

3. Startup Order

  1. On the first start only, format the NameNode: hadoop namenode -format
  2. ZooKeeper: zkServer.sh start / zkServer.sh status / zkServer.sh stop
  3. HDFS: start-dfs.sh / stop-dfs.sh
  4. YARN: start-yarn.sh / stop-yarn.sh
  5. Steps 3 and 4 can be replaced by start-all.sh / stop-all.sh
  • webUI
    • http://node:50070 (HDFS NameNode, as set in dfs.namenode.http-address)
    • http://node:8088 (YARN ResourceManager, default port)
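
Putting the steps above together, a minimal first-start sketch run from node (assuming the Hadoop and ZooKeeper bin directories are on PATH and passwordless SSH to the slaves is configured):

    hadoop namenode -format   # first start only -- wipes any existing HDFS metadata
    zkServer.sh start         # repeat on node1, node2 and node3
    start-dfs.sh              # NameNode, SecondaryNameNode and the DataNodes
    start-yarn.sh             # ResourceManager and the NodeManagers
    jps                       # verify the daemons listed in the section 1 table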

Written by bytebye. Unless noted as a repost with a source, articles on this site are original or translated here; please credit the author when republishing.