Hadoop + Hive + HBase + Spark Cluster Deployment (Part 4)

  • October 6, 2019
  • Notes

Tags: hadoop, hive, hbase, spark

3. Hive

Install a MySQL database (not covered in this article).

Create the metastore database and grant privileges on it:

create database metastore;
grant all on metastore.* to hive@'%' identified by 'hive';
grant all on metastore.* to hive@'localhost' identified by 'hive';
flush privileges;

Put the JDBC driver into Hive's lib directory. Note that the downloaded tarball must be extracted first; it is the jar inside it that goes into $HIVE_HOME/lib:

tar -zxvf mysql-connector-java-5.1.43.tar.gz
cp mysql-connector-java-5.1.43/mysql-connector-java-5.1.43-bin.jar $HIVE_HOME/lib

Copy the template configuration files and rename them:

cp hive-env.sh.template hive-env.sh
cp hive-default.xml.template hive-site.xml
cp hive-log4j2.properties.template hive-log4j2.properties
cp hive-exec-log4j2.properties.template hive-exec-log4j2.properties

Edit hive-env.sh:

export JAVA_HOME=/usr/java/jdk1.8.0_162                    ## Java home
export HADOOP_HOME=/opt/soft/hadoop-2.8.3                  ## Hadoop install path
export HIVE_HOME=/opt/soft/apache-hive-2.3.3-bin           ## Hive install path
export HIVE_CONF_DIR=/opt/soft/apache-hive-2.3.3-bin/conf  ## Hive configuration path

Create the following directories in HDFS and grant permissions on them:

hdfs dfs -mkdir -p /user/hive/warehouse
hdfs dfs -mkdir -p /user/hive/tmp
hdfs dfs -mkdir -p /user/hive/log
hdfs dfs -chmod -R 777 /user/hive/warehouse
hdfs dfs -chmod -R 777 /user/hive/tmp
hdfs dfs -chmod -R 777 /user/hive/log

hive-site.xml

1. Add the following configuration to hive-site.xml

<property>
    <name>hive.exec.scratchdir</name>
    <value>/user/hive/tmp</value>
</property>
<property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/user/hive/warehouse</value>
</property>
<property>
    <name>hive.querylog.location</name>
    <value>/user/hive/log</value>
</property>
<!-- MySQL connection settings -->
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://nnode:3306/metastore?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hive</value>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>hive</value>
</property>
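One detail worth noting: inside hive-site.xml, the `&` characters in the JDBC URL must be escaped as `&amp;`, or Hive will fail to parse the file at startup. A minimal sketch (using a throwaway file in /tmp, not part of the actual setup) shows the escaped value round-tripping through an XML parser back to the literal URL:

```shell
# Write a one-property fragment with the ampersands properly escaped.
cat > /tmp/jdbc-url-check.xml <<'EOF'
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://nnode:3306/metastore?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false</value>
</property>
EOF
# Parse it back: the &amp; entities decode to literal '&' in the URL.
python3 -c "import xml.etree.ElementTree as ET; \
print(ET.parse('/tmp/jdbc-url-check.xml').getroot().find('value').text)"
```

An unescaped `&` in the same value would make the parse fail with an "entity reference" error, which is a common cause of Hive refusing to start after this step.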

2. Edit hive-site.xml

  • Replace every occurrence of ${system:java.io.tmpdir} in the file with /opt/data/hive_data (create the directory if it does not exist) and give it read/write permissions
  • Replace every occurrence of ${system:user.name} with root
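The two replacements above can be scripted with sed instead of done by hand. The sketch below runs against a stand-in file in /tmp (a hypothetical one-line excerpt of the template); point the same sed command at conf/hive-site.xml on your node:

```shell
# Stand-in for a line from hive-site.xml containing both placeholders.
cat > /tmp/hive-site-demo.xml <<'EOF'
<value>${system:java.io.tmpdir}/${system:user.name}</value>
EOF
# Replace both placeholders in place, exactly as step 2 describes.
sed -i \
    -e 's#${system:java.io.tmpdir}#/opt/data/hive_data#g' \
    -e 's#${system:user.name}#root#g' \
    /tmp/hive-site-demo.xml
cat /tmp/hive-site-demo.xml   # prints: <value>/opt/data/hive_data/root</value>
```

Using `#` as the sed delimiter avoids having to escape the `/` characters in the replacement path.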

3. Initialize the database

[root@node ~]# schematool -initSchema -dbType mysql
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/soft/apache-hive-2.3.3-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/soft/hadoop-2.8.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Metastore connection URL:        jdbc:mysql://node:3306/metastore?createDatabaseIfNotExist=true&characterEncoding=UTF-8&useSSL=false
Metastore Connection Driver :    com.mysql.jdbc.Driver
Metastore connection User:       hive
Starting metastore schema initialization to 2.3.0
Initialization script hive-schema-2.3.0.mysql.sql
Initialization script completed
schemaTool completed

Test whether the installation succeeded:

[root@node ~]# hive
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/soft/apache-hive-2.3.3-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/soft/hadoop-2.8.3/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

Logging initialized using configuration in file:/opt/soft/apache-hive-2.3.3-bin/conf/hive-log4j2.properties Async: true
Hive-on-MR is deprecated in Hive 2 and may not be available in the future versions. Consider using a different execution engine (i.e. spark, tez) or using Hive 1.X releases.
hive>

Written by bytebye. Unless marked as a reprint or with a source, articles on this site are original or translated; please credit the author when reprinting.