Hadoop+Hive+HBase+Spark Cluster Deployment (Part 4)
- October 6, 2019
- Notes
3. Hive
Install the MySQL database (not covered in this article).
Create the metastore database and grant the hive user privileges on it:
create database metastore;
grant all on metastore.* to hive@'%' identified by 'hive';
grant all on metastore.* to hive@'localhost' identified by 'hive';
flush privileges;
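To confirm the grants took effect, you can log in as the new hive user and check that the metastore database is visible (a minimal check, assuming the MySQL client is available on this host and the password is hive as above):

```
# Log in as the hive user and list the databases it can see;
# the metastore database created above should appear.
mysql -uhive -phive -e "show databases;"
```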
Place the MySQL JDBC driver in Hive's lib directory. The download is a tarball, so extract it and copy the jar inside (named mysql-connector-java-5.1.43-bin.jar for this version) rather than the archive itself:
tar -zxf mysql-connector-java-5.1.43.tar.gz
cp mysql-connector-java-5.1.43/mysql-connector-java-5.1.43-bin.jar $HIVE_HOME/lib
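After copying, it is worth confirming the connector jar actually sits under $HIVE_HOME/lib (the exact file name depends on the Connector/J version you downloaded):

```
# List the MySQL Connector/J jar placed in Hive's lib directory
ls $HIVE_HOME/lib | grep -i mysql-connector
```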
Copy the template configuration files and rename them:
cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
cp hive-log4j2.properties.template hive-log4j2.properties
cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties
Edit hive-env.sh:
export JAVA_HOME=/usr/java/jdk1.8.0_162                    ## Java path
export HADOOP_HOME=/opt/soft/hadoop-2.8.3                  ## Hadoop installation path
export HIVE_HOME=/opt/soft/apache-hive-2.3.3-bin           ## Hive installation path
export HIVE_CONF_DIR=/opt/soft/apache-hive-2.3.3-bin/conf  ## Hive configuration directory
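This is not part of the original steps, but if the schematool and hive commands used later are not already on the PATH, you can export HIVE_HOME system-wide, for example by appending the following to /etc/profile (a sketch assuming the same installation path as above):

```
# Add Hive to the shell environment, then re-source the profile
export HIVE_HOME=/opt/soft/apache-hive-2.3.3-bin
export PATH=$PATH:$HIVE_HOME/bin
```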
Create the following directories in HDFS and grant permissions on them:
hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /user/hive/tmp
hdfs dfs -mkdir -p /user/hive/log
hdfs dfs -chmod -R 777 /user/hive/warehouse
hdfs dfs -chmod -R 777 /user/hive/tmp
hdfs dfs -chmod -R 777 /user/hive/log
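You can then verify that the three directories exist with the expected permissions:

```
# Confirm the warehouse, tmp, and log directories were created and are world-writable
hdfs dfs -ls /user/hive
```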
hive-site.xml
1. Add the following configuration to hive-site.xml:
<property>
  <name>hive.exec.scratchdir</name>
  <value>/user/hive/tmp</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/user/hive/log</value>
</property>

<!-- MySQL connection settings; note that & in the JDBC URL must be escaped as &amp; inside XML -->
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://nnode:3306/metastore?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
</property>
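A quick way to catch escaping mistakes such as an unescaped & in the JDBC URL is to validate the file with xmllint (assuming the xmllint tool is installed on the node):

```
# Validate hive-site.xml; no output means the XML is well formed
xmllint --noout $HIVE_HOME/conf/hive-site.xml
```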
2. Modify hive-site.xml:
- Replace every occurrence of `${system:java.io.tmpdir}` in the configuration file with /opt/data/hive_data (create this directory if it does not exist and grant it read/write permissions).
- Replace every occurrence of `${system:user.name}` with root (see the sed sketch after this list for both replacements).
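Rather than editing by hand, both replacements can be done with sed (a sketch, assuming hive-site.xml lives in $HIVE_HOME/conf and the paths above):

```
# Create the local scratch directory and open up its permissions
mkdir -p /opt/data/hive_data
chmod -R 777 /opt/data/hive_data

# Replace every ${system:java.io.tmpdir} and ${system:user.name} in hive-site.xml
sed -i 's#${system:java.io.tmpdir}#/opt/data/hive_data#g' $HIVE_HOME/conf/hive-site.xml
sed -i 's#${system:user.name}#root#g' $HIVE_HOME/conf/hive-site.xml
```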
3. Initialize the database:
[root@node ~]# schematool -initSchema -dbType mysql
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/soft/apache-hive-2.3.3-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/soft/hadoop-2.8.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://node:3306/metastore?createDatabaseIfNotExist=true&characterEncoding=UTF-8&useSSL=false
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       hive
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed
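If initialization succeeded, the metastore schema tables (DBS, TBLS, and so on) should now exist in MySQL; you can spot-check them with:

```
# List the metastore tables created by schematool
mysql -uhive -phive metastore -e "show tables;"
```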
Test whether the installation succeeded:
[root@node ~]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/soft/apache-hive-2.3.3-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/soft/hadoop-2.8.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/opt/soft/apache-hive-2.3.3-bin/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>
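Beyond opening the shell, a quick end-to-end check is to create and drop a scratch table non-interactively, which exercises both the MySQL metastore and the HDFS warehouse directory (smoke_test below is just a placeholder name):

```
# Create and drop a throwaway database and table via the hive CLI
hive -e "create database if not exists smoke_test;
create table smoke_test.t1 (id int, name string);
show tables in smoke_test;
drop table smoke_test.t1;
drop database smoke_test;"
```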