Original article: http://blog.csdn.net/nsrainbow/article/details/43243389 — follow the original author's blog for the latest lessons and a better reading experience.
Disclaimer:
- This article is based on CentOS 6.x + CDH 5.x
Hardware requirements
Impala has one hard hardware requirement: the CPU must support the SSSE3 instruction set (see the startup error discussed below).
Why Impala
- Use Hive for ETL jobs
- Use Impala for real-time, interactive (hot) queries
Installing Impala
Installation
Hive needs to be installed first — see Alex 的 Hadoop 菜鸟教程: 第10课 Hive 安装和使用教程.
Install Impalad on all the datanodes, i.e. the impala + impala-server packages below. On one of the machines, also install impala-state-store + impala-catalog. On host1, run:
$ sudo yum install impala # Binaries for daemons
$ sudo yum install impala-server # Service start/stop script
$ sudo yum install impala-state-store # Service start/stop script
$ sudo yum install impala-catalog # Service start/stop script
On host2, run:
$ sudo yum install impala # Binaries for daemons
$ sudo yum install impala-server # Service start/stop script
Installing impala checks many package dependencies; install whatever packages it prompts for. For example:
--> Finished Dependency Resolution
Error: Package: hadoop-libhdfs-2.5.0+cdh5.2.1+578-1.cdh5.2.1.p0.14.el6.x86_64 (cloudera-cdh5)
           Requires: hadoop-hdfs = 2.5.0+cdh5.2.1+578-1.cdh5.2.1.p0.14.el6
           Installed: hadoop-hdfs-2.5.0+cdh5.3.0+781-1.cdh5.3.0.p0.54.el6.x86_64 (@cloudera-cdh5)
               hadoop-hdfs = 2.5.0+cdh5.3.0+781-1.cdh5.3.0.p0.54.el6
           Available: hadoop-hdfs-2.5.0+cdh5.2.1+578-1.cdh5.2.1.p0.14.el6.x86_64 (cloudera-cdh5)
               hadoop-hdfs = 2.5.0+cdh5.2.1+578-1.cdh5.2.1.p0.14.el6
 You could try using --skip-broken to work around the problem
 You could try running: rpm -Va --nofiles --nodigest
When this happens, check your CDH repository first. In many cases the package is not missing — the version is wrong. I therefore recommend using a local repo, which keeps the dependencies between Hadoop and the other components consistent. Troubleshooting steps (a shell sketch follows the list):
- Check whether /etc/yum.repos.d/cloudera-cdh5.repo exists
- If it exists, check whether enabled=1
- If enabled=1, check whether the baseurl is reachable
- If all of the above are fine, it is a version mismatch. The error above, for instance, says my hadoop-hdfs came from the CDH 5.3 repo while my local CDH repo is 5.2 — surely because I installed hadoop-hdfs without the local repo back then. The only fixes are to rebuild the local repo as cdh5.3, or to fall back to the remote repo for now; I used the remote repo.
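A quick shell sketch of those checks (standard repo file path; the baseurl is whatever your repo file actually contains):
$ cat /etc/yum.repos.d/cloudera-cdh5.repo   # does it exist, and is enabled=1?
$ curl -sI $(awk -F= '/baseurl/{print $2}' /etc/yum.repos.d/cloudera-cdh5.repo)   # is the baseurl reachable?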
Configuration
Enable "short-circuit reads" by adding the following to hdfs-site.xml:
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.domain.socket.path</name>
  <value>/var/run/hdfs-sockets/dn._PORT</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout.millis</name>
  <value>10000</value>
</property>
Create the required directory on both machines:
[root@host1 run]# mkdir /var/run/hdfs-sockets/
[root@host1 run]# chown -R hdfs.hdfs /var/run/hdfs-sockets/
and add the impala user to the hadoop and hdfs groups:
usermod -a -G hadoop impala
usermod -a -G hdfs impala
Enable "block location tracking" (mandatory; Impala will not start without it)
Block location tracking lets Impala know exactly where each block file lives; if this setting is off, impalad refuses to start. Add to hdfs-site.xml:
<property>
<name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
<value>true</value>
</property>
Restart the datanodes after making these changes.
Enable JDBC support
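Going by the driver notes later in this article, on CentOS it should be enough to install the Hive JDBC driver package (a minimal sketch; see the "Choosing a JDBC driver" section below):
$ sudo yum install hive-jdbc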
Start the Impala services:
$ sudo service impala-state-store start
$ sudo service impala-catalog start
$ sudo service impala-server start
- If Impala fails to start, check the startup logs in /var/log/impala/
- Impala's logs can also be viewed at http://<hostname>:25000/logs
- Cloudera's official docs also very helpfully provide solutions for all sorts of Impala problems, such as slow queries and failed joins
A common error when Impala won't start:
E0202 08:01:24.944171 29251 cpu-info.cc:135] CPU does not support the Supplemental SSE3 (SSSE3) instruction set, which is required. Exiting if Supplemental SSE3 is not functional...
If you see this, then like me you are running your VM on an older CPU that lacks SSSE3 support. Running Impala has one hard requirement: the CPU must support SSSE3 or newer. There is really no way around it — a VM cannot emulate a different CPU — so you will have to find another machine to study on.
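To check in advance whether a machine's CPU supports SSSE3 (a quick sketch, assuming Linux):
$ grep -m1 -o ssse3 /proc/cpuinfo   # prints "ssse3" if supported; no output means the CPU lacks it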
Installing the shell
sudo yum install impala-shell
Configuring Impala
Edit the settings in /etc/default/impala so that the catalog service and the statestore both point at host1:
IMPALA_CATALOG_SERVICE_HOST=host1
IMPALA_STATE_STORE_HOST=host1
IMPALA_STATE_STORE_PORT=24000
IMPALA_BACKEND_PORT=22000
IMPALA_LOG_DIR=/var/log/impala
export IMPALA_STATE_STORE_ARGS=${IMPALA_STATE_STORE_ARGS:- \
-log_dir=${IMPALA_LOG_DIR} -state_store_port=${IMPALA_STATE_STORE_PORT}}
IMPALA_SERVER_ARGS=" \
-log_dir=${IMPALA_LOG_DIR} \
-catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
-state_store_port=${IMPALA_STATE_STORE_PORT} \
-use_statestore \
-state_store_host=${IMPALA_STATE_STORE_HOST} \
-be_port=${IMPALA_BACKEND_PORT}"
export ENABLE_CORE_DUMPS=${ENABLE_COREDUMPS:-false}
To enable core dumps, change this to
export ENABLE_CORE_DUMPS=${ENABLE_COREDUMPS:-true}
and core dump files will be generated.
Using impala-shell
A table-creation example
Step1
hdfs dfs -mkdir -p /user/cloudera/sample_data/tab1 /user/cloudera/sample_data/tab2
Create a text file tab1.csv locally:
1,true,123.123,2012-10-24 08:55:00
2,false,1243.5,2012-10-25 13:40:00
3,false,24453.325,2008-08-22 09:33:21.123
4,false,243423.325,2007-05-12 22:32:21.33454
5,true,243.325,1953-04-22 09:11:33
and tab2.csv:
1,true,12789.123
2,false,1243.5
3,false,24453.325
4,false,2423.3254
5,true,243.325
60,false,243565423.325
70,true,243.325
80,false,243423.325
90,true,243.325
Upload the csv files to HDFS:
$ hdfs dfs -put tab1.csv /user/cloudera/sample_data/tab1
$ hdfs dfs -ls /user/cloudera/sample_data/tab1
Found 1 items
-rw-r--r--   1 cloudera cloudera       192 2013-04-02 20:08 /user/cloudera/sample_data/tab1/tab1.csv
$ hdfs dfs -put tab2.csv /user/cloudera/sample_data/tab2
$ hdfs dfs -ls /user/cloudera/sample_data/tab2
Found 1 items
-rw-r--r--   1 cloudera cloudera       158 2013-04-02 20:09 /user/cloudera/sample_data/tab2/tab2.csv
Step2
DROP TABLE IF EXISTS tab1;

-- The EXTERNAL clause means the data is located outside the central location
-- for Impala data files and is preserved when the associated Impala table is dropped.
-- We expect the data to already exist in the directory specified by the LOCATION clause.
CREATE EXTERNAL TABLE tab1
(
   id INT,
   col_1 BOOLEAN,
   col_2 DOUBLE,
   col_3 TIMESTAMP
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/cloudera/sample_data/tab1';

DROP TABLE IF EXISTS tab2;

-- TAB2 is an external table, similar to TAB1.
CREATE EXTERNAL TABLE tab2
(
   id INT,
   col_1 BOOLEAN,
   col_2 DOUBLE
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
LOCATION '/user/cloudera/sample_data/tab2';

DROP TABLE IF EXISTS student;

CREATE TABLE student
(
   id INT,
   name STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';
- The first two are external tables backed by the files uploaded earlier.
- The third creates an empty table whose storage location is Hive's hive.metastore.warehouse.dir; it is still loaded with the LOAD DATA method (see the sketch after this list).
- Because Impala is an optimization of Hive, Impala's data is stored directly in Hive. Make sure impala has write permission on hive.metastore.warehouse.dir; if it doesn't, add the impala user to the hive group.
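A minimal sketch of loading the empty student table with LOAD DATA, assuming a comma-delimited file student.csv has already been uploaded to HDFS (the file name and path here are illustrative, not from the original walkthrough):
$ impala-shell -i localhost -q "LOAD DATA INPATH '/user/cloudera/sample_data/student.csv' INTO TABLE student"   # moves the file into the table's warehouse directory
Querying tab1 now returns the rows from tab1.csv: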
[xmseapp03:21000] > select * from tab1;
Query: select * from tab1
+----+-------+------------+-------------------------------+
| id | col_1 | col_2 | col_3 |
+----+-------+------------+-------------------------------+
| 1 | true | 123.123 | 2012-10-24 08:55:00 |
| 2 | false | 1243.5 | 2012-10-25 13:40:00 |
| 3 | false | 24453.325 | 2008-08-22 09:33:21.123000000 |
| 4 | false | 243423.325 | 2007-05-12 22:32:21.334540000 |
| 5 | true | 243.325 | 1953-04-22 09:11:33 |
+----+-------+------------+-------------------------------+
Fetched 5 row(s) in 6.91s
Impala can also execute a pre-written .sql file
Step1
Create a data file customer.dat locally:
1|AAAAAAAABAAAAAAA|980124|7135|32946|2452238|2452208|Mr.|Javier|Lewis|Y|9|12|1936|CHILE||Javier.Lewis@VFAxlnZEvOx.org|2452508|
2|AAAAAAAACAAAAAAA|819667|1461|31655|2452318|2452288|Dr.|Amy|Moses|Y|9|4|1966|TOGO||Amy.Moses@Ovk9KjHH.com|2452318|
3|AAAAAAAADAAAAAAA|1473522|6247|48572|2449130|2449100|Miss|Latisha|Hamilton|N|18|9|1979|NIUE||Latisha.Hamilton@V.com|2452313|
4|AAAAAAAAEAAAAAAA|1703214|3986|39558|2450030|2450000|Dr.|Michael|White|N|7|6|1983|MEXICO||Michael.White@i.org|2452361|
5|AAAAAAAAFAAAAAAA|953372|4470|36368|2449438|2449408|Sir|Robert|Moran|N|8|5|1956|FIJI||Robert.Moran@Hh.edu|2452469|
Then upload it to HDFS:
hdfs dfs -put customer.dat /user/hive/tpcds/customer/
Then write some SQL in a file named customer_setup.sql:
--
-- store_sales fact table and surrounding dimension tables only
--
create database tpcds;
use tpcds;

drop table if exists customer;
create external table customer
(
    c_customer_sk int,
    c_customer_id string,
    c_current_cdemo_sk int,
    c_current_hdemo_sk int,
    c_current_addr_sk int,
    c_first_shipto_date_sk int,
    c_first_sales_date_sk int,
    c_salutation string,
    c_first_name string,
    c_last_name string,
    c_preferred_cust_flag string,
    c_birth_day int,
    c_birth_month int,
    c_birth_year int,
    c_birth_country string,
    c_login string,
    c_email_address string,
    c_last_review_date string
)
row format delimited fields terminated by '|'
location '/user/hive/tpcds/customer';
Step2
impala-shell -i localhost -f customer_setup.sql
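For a one-off statement, the SQL can also be passed inline with -q instead of a file; a small usage sketch against the table just created:
$ impala-shell -i localhost -q "select count(*) from tpcds.customer"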
Impala can do everything Hive can do, so instead let me show something Hive cannot do but Impala can:
Inserting single rows into an external partitioned table
Step1
$ hdfs dfs -mkdir -p /user/impala/data/logs/year=2015/month=01/day=01/host=host1
$ hdfs dfs -mkdir -p /user/impala/data/logs/year=2015/month=02/day=22/host=host2
Prepare a text file a.txt:
1,jack
2,michael
and b.txt:
3,sara
4,john
Upload them into the partition directories:
hdfs dfs -put /root/a.txt /user/impala/data/logs/year=2015/month=01/day=01/host=host1
hdfs dfs -put /root/b.txt /user/impala/data/logs/year=2015/month=02/day=22/host=host2
Step2
create external table logs
(
    id int,
    name string
)
partitioned by (year string, month string, day string, host string)
row format delimited fields terminated by ','
location '/user/impala/data/logs';
Step3
Register the partitions so Impala knows about them:
alter table logs add partition (year="2015",month="01",day="01",host="host1");
alter table logs add partition (year="2015",month="02",day="22",host="host2");
Step4
Query the table:
select * from logs;
Insert a single row:
insert into logs partition (year="2015", month="01", day="01", host="host1") values (6,"ted");
Query again:
select * from logs;
Calling Impala from Java
Choosing a JDBC driver
- For Impala 2.0 and later, you choose between the Hive 0.13 JDBC driver and the Cloudera JDBC driver.
- If your project already uses the JDBC driver from an earlier Impala, you need to upgrade to one of the two drivers above: previously only the Hive 0.12 driver was available, and it is not compatible with Impala 2.0 and later.
- You can install the Hive JDBC driver with your Linux package manager (yum on CentOS); the package is called hive-jdbc.
- Alternatively, download the Cloudera JDBC 2.5 driver from Cloudera's website.
- Both drivers deliver a solid, reliable speedup (in other words, pick whichever you like! Which tells you nothing).
If you go with the Hive JDBC driver, add these dependencies to your pom.xml:
<dependency>
<groupId>org.apache.hive</groupId>
<artifactId>hive-jdbc</artifactId>
<version>0.14.0</version>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-common</artifactId>
<version>2.2.0</version>
</dependency>
In addition, there is an Impala Maven example project on GitHub worth borrowing from:
Cloudera-Impala-JDBC-Example
Example
The project built in Alex 的 Hadoop 菜鸟教程: 第11课 Hive的Java调用 can be reused directly. Change
Connection con = DriverManager.getConnection("jdbc:hive2://host1:10000/default", "hive", "");
to
Connection con = DriverManager.getConnection("jdbc:hive2://host1:21050/;auth=noSasl", "", "");
To keep the example simple, I trimmed ImpalaJdbcClient down to just the query part:
package org.crazycake.play_hive;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;
public class ImpalaJdbcClient {

    /**
     * Note: hive-server2 uses the org.apache.hive.* driver, while
     * hive-server uses org.apache.hadoop.hive.*
     */
    private static String driverName = "org.apache.hive.jdbc.HiveDriver";

    /**
     * @param args
     * @throws SQLException
     */
    public static void main(String[] args) throws SQLException {
        try {
            Class.forName(driverName);
        } catch (ClassNotFoundException e) {
            e.printStackTrace();
            System.exit(1);
        }

        // Impala's default JDBC port is 21050
        Connection con = DriverManager.getConnection("jdbc:hive2://xmseapp03:21050/;auth=noSasl", "", "");
        Statement stmt = con.createStatement();

        // select * query
        String sql = "select * from logs";
        System.out.println("Running: " + sql);
        ResultSet res = stmt.executeQuery(sql);
        while (res.next()) {
            System.out.println(String.valueOf(res.getInt(1)) + "\t" + res.getString(2));
        }
    }
}
Run it; the output is:
Running: select * from logs
3	sara
4	john
6	ted
1	jack
2	michael