03.01 Hive JDBC Operations

Part 1: Start Hadoop

1. Configure the proxy-user properties in core-site.xml

Pay special attention to the hadoop.proxyuser.&lt;username&gt;.hosts and hadoop.proxyuser.&lt;username&gt;.groups properties: the username segment is the login name on the machine where Hadoop runs, so fill it in according to your own login name. On my machine the username is mengday.

<code><configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/local/Cellar/hadoop/3.2.1/libexec/tmp</value>
  </property>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:8020</value>
  </property>
  <property>
    <name>hadoop.proxyuser.mengday.hosts</name>
    <value>*</value>
  </property>
  <property>
    <name>hadoop.proxyuser.mengday.groups</name>
    <value>*</value>
  </property>
</configuration>
</code>
2. Start Hadoop
<code>> cd /usr/local/Cellar/hadoop/3.2.1/sbin
> ./start-all.sh
> jps
</code>

After startup, check whether the DataNode process actually came up; it is common for the DataNode to fail to start.


Part 2: Configure hive-site.xml

Java connects to Hive through the same HiveServer2 endpoint that beeline uses. The most important prerequisite for getting beeline (and JDBC) working is a correctly configured hive-site.xml.

Since javax.jdo.option.ConnectionURL points at a database, it is best to drop the old metastore database, recreate it, and re-initialize the schema.

<code>mysql> create database metastore;
</code>
<code>> cd /usr/local/Cellar/hive/3.1.2/libexec/bin
> schematool -initSchema -dbType mysql
</code>

hive-site.xml

<code><configuration>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.metastore.uris</name>
    <value>thrift://localhost:9083</value>
    <description>Thrift URI for the remote metastore. Used by metastore client to connect to remote metastore.</description>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://localhost:3306/metastore?characterEncoding=UTF-8&amp;createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>root</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>root123</value>
  </property>
  <property>
    <name>hive.exec.local.scratchdir</name>
    <value>/tmp/hive</value>
  </property>
  <property>
    <name>hive.downloaded.resources.dir</name>
    <value>/tmp/hive</value>
  </property>
  <property>
    <name>hive.metastore.warehouse.dir</name>
    <value>/data/hive/warehouse</value>
  </property>
  <property>
    <name>hive.metastore.event.db.notification.api.auth</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.server2.active.passive.ha.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>hive.server2.transport.mode</name>
    <value>binary</value>
    <description>
      Expects one of [binary, http].
      Transport mode of HiveServer2.
    </description>
  </property>
  <property>
    <name>hive.server2.logging.operation.log.location</name>
    <value>/tmp/hive</value>
  </property>
  <property>
    <name>hive.hwi.listen.host</name>
    <value>0.0.0.0</value>
    <description>This is the host address the Hive Web Interface will listen on</description>
  </property>
  <property>
    <name>hive.server2.webui.host</name>
    <value>0.0.0.0</value>
    <description>The host address the HiveServer2 WebUI will listen on</description>
  </property>
</configuration>
</code>

Part 3: Start the metastore

Before starting beeline you must start hiveserver2, and before starting hiveserver2 you must start the metastore. The metastore listens on port 9083 by default.

<code>> cd /usr/local/Cellar/hive/3.1.2/bin
> hive --service metastore &
</code>

After starting it, be sure to verify that it actually started successfully.


Part 4: Start hiveserver2

<code>> cd /usr/local/Cellar/hive/3.1.2/bin
> hive --service hiveserver2 &
</code>

hiveserver2 listens on port 10000 by default. After starting it, always check that port 10000 is actually open: when the configuration is wrong, port 10000 usually fails to come up, and whether it is open is the key to whether beeline can connect.
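The port check can also be done programmatically. A minimal sketch (the host, ports, and timeout below are assumptions matching the defaults mentioned above) that reports whether a TCP port is accepting connections:

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if something is listening on host:port within timeoutMs.
    static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // 10000 = HiveServer2 default, 9083 = metastore default.
        System.out.println("hiveserver2 up: " + isOpen("localhost", 10000, 1000));
        System.out.println("metastore up:   " + isOpen("localhost", 9083, 1000));
    }
}
```

The same check works for the metastore's port 9083 and for the DataNode ports from Part 1.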


Part 5: Start beeline

<code>> cd /usr/local/Cellar/hive/3.1.2/bin
> beeline -u jdbc:hive2://localhost:10000/default -n mengday -p
</code>
  • -u: the connection URL, jdbc:hive2://&lt;host&gt;:&lt;port&gt;/&lt;database&gt;. The default port is 10000 and can be changed with hiveserver2 --hiveconf hive.server2.thrift.port=14000; default is the built-in database.
  • -n: the login account on the server where Hive runs. Here it is mengday, the login user on my Mac; it must match the mengday segment in hadoop.proxyuser.mengday.hosts and hadoop.proxyuser.mengday.groups in core-site.xml.
  • -p: the password for that user.

Seeing the prompt 0: jdbc:hive2://localhost:10000/default> means beeline connected successfully.
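The URL passed to -u has the same shape as the one used from Java later on. A trivial sketch of assembling it from its parts (the host, port, and database values are just the defaults from above):

```java
public class HiveUrl {
    // Builds a HiveServer2 JDBC URL: jdbc:hive2://<host>:<port>/<database>
    static String jdbcUrl(String host, int port, String database) {
        return "jdbc:hive2://" + host + ":" + port + "/" + database;
    }

    public static void main(String[] args) {
        System.out.println(jdbcUrl("localhost", 10000, "default"));
        // prints jdbc:hive2://localhost:10000/default
    }
}
```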


Part 6: Hive JDBC

1. Add the dependency
<code><dependency>
  <groupId>org.apache.hive</groupId>
  <artifactId>hive-jdbc</artifactId>
  <version>3.1.2</version>
</dependency>
</code>
2. Prepare the data

/data/employee.txt

<code>1,zhangsan,28,60.66,2020-02-01 10:00:00,true,eat#drink,k1:v1#k2:20,s1#c1#s1#1
2,lisi,29,60.66,2020-02-01 11:00:00,false,play#drink,k3:v3#k4:30,s2#c2#s1#2
</code>
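Each line maps onto the table's delimiters: fields split on ',', collection items on '#', and map keys on ':'. A minimal sketch (field positions are assumptions based on the CREATE TABLE statement in the Java code below) of how Hive interprets the tags and ext columns of the first row:

```java
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class RowParse {
    // Parses a Hive map column: '#' between entries, ':' between key and value.
    static Map<String, String> parseMap(String col) {
        Map<String, String> m = new LinkedHashMap<>();
        for (String kv : col.split("#")) {
            String[] parts = kv.split(":");
            m.put(parts[0], parts[1]);
        }
        return m;
    }

    public static void main(String[] args) {
        String line = "1,zhangsan,28,60.66,2020-02-01 10:00:00,true,eat#drink,k1:v1#k2:20,s1#c1#s1#1";

        // fields terminated by ','
        String[] fields = line.split(",");

        // tags array<string>: collection items terminated by '#'
        List<String> tags = Arrays.asList(fields[6].split("#"));

        // ext map: map keys terminated by ':'
        Map<String, String> ext = parseMap(fields[7]);

        System.out.println(tags); // [eat, drink]
        System.out.println(ext);  // {k1=v1, k2=20}
    }
}
```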
3. Java
<code>import java.sql.*;

public class HiveJdbcClient {
    private static String url = "jdbc:hive2://localhost:10000/default";
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";
    private static String user = "mengday";
    private static String password = "<the password for that user>";

    private static Connection conn = null;
    private static Statement stmt = null;
    private static ResultSet rs = null;

    static {
        try {
            Class.forName(driverName);
            conn = DriverManager.getConnection(url, user, password);
            stmt = conn.createStatement();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public static void init() throws Exception {
        stmt.execute("drop database if exists hive_test");
        stmt.execute("create database hive_test");
        rs = stmt.executeQuery("show databases");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }

        // Switch to the new database so the table is created inside it.
        stmt.execute("use hive_test");

        stmt.execute("drop table if exists employee");
        // Note: the struct field names below are illustrative; they match the
        // sample data layout s1#c1#s1#1.
        String sql = "create table if not exists employee(" +
                " id bigint, " +
                " username string, " +
                " age tinyint, " +
                " weight decimal(10, 2), " +
                " create_time timestamp, " +
                " is_test boolean, " +
                " tags array<string>, " +
                " ext map<string, string>, " +
                " address struct<street:string, city:string, state:string, zip:int> " +
                " ) " +
                " row format delimited " +
                " fields terminated by ',' " +
                " collection items terminated by '#' " +
                " map keys terminated by ':' " +
                " lines terminated by '\\n'";
        stmt.execute(sql);

        rs = stmt.executeQuery("show tables");
        while (rs.next()) {
            System.out.println(rs.getString(1));
        }

        rs = stmt.executeQuery("desc employee");
        while (rs.next()) {
            System.out.println(rs.getString(1) + "\t" + rs.getString(2));
        }
    }

    private static void load() throws Exception {
        // Load the data
        String filePath = "/data/employee.txt";
        stmt.execute("load data local inpath '" + filePath + "' overwrite into table employee");

        // Query the data
        rs = stmt.executeQuery("select * from employee");
        while (rs.next()) {
            System.out.println(rs.getLong("id") + "\t"
                    + rs.getString("username") + "\t"
                    + rs.getObject("tags") + "\t"
                    + rs.getObject("ext") + "\t"
                    + rs.getObject("address")
            );
        }
    }

    private static void close() throws Exception {
        if (rs != null) {
            rs.close();
        }
        if (stmt != null) {
            stmt.close();
        }
        if (conn != null) {
            conn.close();
        }
    }

    public static void main(String[] args) throws Exception {
        init();
        load();
        close();
    }
}
</code>