hadoop-2.8.3 Installation and Configuration

  • October 6, 2019
  • Notes

Tags: centos · hadoop · install


1. Environment

  • 1. Resource allocation

    hostname  ip               env              type
    node      192.168.100.199  jdk, zk, Hadoop  ResourceManager, QuorumPeerMain, NameNode, SecondaryNameNode
    node1     192.168.100.101  jdk, zk, Hadoop  QuorumPeerMain, DataNode, NodeManager
    node2     192.168.100.102  jdk, zk, Hadoop  QuorumPeerMain, DataNode, NodeManager
    node3     192.168.100.103  jdk, zk, Hadoop  QuorumPeerMain, DataNode, NodeManager
  • 2. Keep the nodes' clocks synchronized via the ntp service
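One common way to do this is to run ntpd on the master and point the workers at it; below is a minimal /etc/ntp.conf sketch for node1–node3, assuming node (192.168.100.199) is the time source (the local-clock fallback lines are a standard ntpd idiom, not something specified here):

```
# /etc/ntp.conf on node1/node2/node3: use the master "node" as the only
# upstream time source; "iburst" speeds up the initial synchronization.
server 192.168.100.199 iburst

# Fall back to the undisciplined local clock only if the master is unreachable.
server 127.127.1.0
fudge  127.127.1.0 stratum 10
```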

2. Configuration files

  • 1. core-site.xml
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/opt/data/hadoop_data/tmp</value>
        <description>A base for other temporary directories.</description>
    </property>
  • 2. hdfs-site.xml
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/data/hadoop_data/namenode</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
    </property>
    <!-- NameNode RPC address -->
    <property>
        <name>dfs.namenode.rpc-address</name>
        <value>node:8020</value>
    </property>
    <!-- NameNode HTTP address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node:50070</value>
    </property>
    <!-- HDFS replication factor -->
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <!-- DataNode storage path -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/opt/data/hadoop_data/datanode</value>
    </property>
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <property>
        <name>dfs.permissions.enabled</name>
        <value>false</value>
    </property>
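As a quick sanity check on the byte values used above: the dfs.blocksize of 268435456 is exactly 256 MiB (double the Hadoop 2.x default of 128 MiB), and the io.file.buffer.size of 131072 in core-site.xml is 128 KiB:

```python
# Verify the byte values used in the config files above.
blocksize = 268435456       # dfs.blocksize in hdfs-site.xml
buffer_size = 131072        # io.file.buffer.size in core-site.xml

assert blocksize == 256 * 1024 * 1024   # 256 MiB
assert buffer_size == 128 * 1024        # 128 KiB
print(blocksize // 2**20, "MiB;", buffer_size // 2**10, "KiB")
```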
  • 3. yarn-site.xml
    <!-- Which node runs the ResourceManager -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node</value>
    </property>
    <!-- Reducers fetch map output via mapreduce_shuffle -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>true</value>
    </property>
  • 4. mapred-site.xml
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node:19888</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.resource.mb</name>
        <value>768</value>
    </property>
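All four files share the same property/name/value layout, so it is easy to script a check of what a site file actually sets. A sketch using only Python's standard library (the read_properties helper is ours; it assumes the fragment is wrapped in a single &lt;configuration&gt; root, as Hadoop requires):

```python
import xml.etree.ElementTree as ET

def read_properties(xml_text):
    """Return {name: value} for every <property> under <configuration>."""
    root = ET.fromstring(xml_text)
    return {
        prop.findtext("name"): prop.findtext("value")
        for prop in root.iter("property")
    }

sample = """
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
"""
print(read_properties(sample))  # {'mapreduce.framework.name': 'yarn'}
```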
  • 5. masters
node
  • 6. slaves
node1
node2
node3
  • 7. zoo.cfg ( zookeeper )
tickTime=2000
initLimit=10
syncLimit=5
# Each node must also create a myid file under this directory,
# containing the id from its matching server.N line below
dataDir=/opt/data/zookeeper_data/data
dataLogDir=/opt/data/zookeeper_data/log
clientPort=2181
server.1=node:2888:3888
server.2=node1:2888:3888
server.3=node2:2888:3888
server.4=node3:2888:3888
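The myid file mentioned in the comment must contain nothing but the numeric id from that host's server.N line. A small sketch (the myid_for helper is ours) that derives the id from the config text, using the four cluster hosts from the table above:

```python
def myid_for(hostname, zoo_cfg):
    """Return the server id a host should write into its myid file."""
    for line in zoo_cfg.splitlines():
        line = line.strip()
        if line.startswith("server."):
            key, _, rest = line.partition("=")   # "server.3", "=", "node2:2888:3888"
            if rest.split(":")[0] == hostname:
                return int(key.split(".")[1])
    raise KeyError(hostname)

cfg = """server.1=node:2888:3888
server.2=node1:2888:3888
server.3=node2:2888:3888
server.4=node3:2888:3888"""
print(myid_for("node2", cfg))  # → 3
```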

3. Startup order

  1. On first startup only, format the NameNode: hadoop namenode -format
  2. ZooKeeper: zkServer.sh start / zkServer.sh status / zkServer.sh stop
  3. HDFS: start-dfs.sh / stop-dfs.sh
  4. YARN: start-yarn.sh / stop-yarn.sh
  5. Steps 3 and 4 can be replaced by start-all.sh / stop-all.sh
  • Web UI
    • node:50070 (HDFS NameNode, as set in hdfs-site.xml)
    • node:8088 (YARN ResourceManager)

Written by bytebye. Unless marked as a repost or otherwise attributed, articles on this site are original or translated; please credit the source when reposting.