Hudi and Spark integration
Note: the Hudi and Spark versions must match what the source code of your Hudi release expects; otherwise you will run into jar conflicts, mismatched SQL parser versions, and similar problems. If the environments differ, you have to troubleshoot issues one by one, so the simplest approach is to keep the versions consistent.
If we want to access tables over JDBC, we first integrate with Hive before integrating Hudi with Spark. This integration uses Hive to store the metadata of Hudi tables in MySQL (via the Hive metastore); of course, the table metadata is also visible on HDFS.
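For reference, pointing the Hive metastore at MySQL is done with properties like the following in hive-site.xml; the connection URL, driver, user, and password below are placeholder values, not ones taken from this setup:
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive_metastore?useSSL=false</value> <!-- placeholder MySQL URL -->
</property>
<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.cj.jdbc.Driver</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value> <!-- placeholder user -->
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value> <!-- placeholder password -->
</property>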
Hive and Spark integration
First copy Hive's configuration file hive-site.xml into Spark's conf directory, and set the Spark thrift server port to 10001 (Hive's default is 10000; the port is passed below via --hiveconf). Then start the server with a script:
/root/zxf/spark3/spark-3.1.2-bin-hadoop3.2/sbin/start-thriftserver.sh \
--master yarn \
--deploy-mode client \
--queue default \
--num-executors 4 \
--conf spark.driver.memory=2G \
--conf spark.executor.memory=2G \
--conf spark.executor.cores=4 \
--conf spark.scheduler.mode=FAIR \
--hiveconf hive.server2.thrift.port=10001
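Once the thrift server is up, the JDBC endpoint can be sanity-checked with beeline (the hostname and username here are assumptions for this environment):
beeline -u jdbc:hive2://localhost:10001 -n root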
Hudi and Spark integration
First complete the Hive and Spark integration above; it provides the metadata storage for the Hudi tables. Then download the Hudi source package and build it:
mvn clean package -DskipTests -Dscala-2.12 -Dspark3
The Spark bundle jar is produced under packaging/hudi-spark-bundle/target/ and is referenced by the launch scripts below.
Code-based integration
Launch script (note: --jars points at the locally built bundle jar, while --packages can pull the same bundle plus spark-avro from Maven; in practice one of the two is sufficient):
export SPARK_HOME=/root/zxf/spark3/spark-3.1.2-bin-hadoop3.2/
/root/zxf/spark3/spark-3.1.2-bin-hadoop3.2/bin/spark-shell \
--master yarn \
--deploy-mode client \
--executor-memory 2G \
--num-executors 3 \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
--jars /root/zxf/spark3/hudi-release-0.10.0/packaging/hudi-spark-bundle/target/hudi-spark3-bundle_2.12-0.10.0.jar \
--packages org.apache.hudi:hudi-spark3-bundle_2.12:0.10.0,org.apache.spark:spark-avro_2.12:3.1.2
Test case, following the official quickstart (https://hudi.apache.org/docs/0.10.0/quick-start-guide):
import org.apache.hudi.QuickstartUtils._
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceReadOptions._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._
import org.apache.hudi.common.model.HoodieRecord
val tableName = "hudi_trips_cow"
val basePath = "/root/zxf/hudi0.12/hudi_trips_cow"
val dataGen = new DataGenerator
val inserts = convertToStringList(dataGen.generateInserts(10)) // generate mock trip data with Hudi's built-in data generator
val df = spark.read.json(spark.sparkContext.parallelize(inserts, 2))
df.write.format("hudi").
options(getQuickstartWriteConfigs).
option(PRECOMBINE_FIELD_OPT_KEY, "ts").
option(RECORDKEY_FIELD_OPT_KEY, "uuid").
option(PARTITIONPATH_FIELD_OPT_KEY, "partitionpath").
option(TABLE_NAME, tableName).
mode(Overwrite).
save(basePath)
// snapshot query
val tripsSnapshotDF = spark.
read.
format("hudi").
load(basePath)
tripsSnapshotDF.createOrReplaceTempView("hudi_trips_snapshot")
spark.sql("select fare, begin_lon, begin_lat, ts from hudi_trips_snapshot where fare > 20.0").show()
spark.sql("select _hoodie_commit_time, _hoodie_record_key, _hoodie_partition_path, rider, driver, fare from hudi_trips_snapshot").show()
spark.sql("select count(*) from hudi_trips_snapshot").show()
SQL-based integration
Based on the thrift server
#!/bin/bash
export SPARK_HOME=/root/zxf/spark3/spark-3.1.2-bin-hadoop3.2
/root/zxf/spark3/spark-3.1.2-bin-hadoop3.2/sbin/start-thriftserver.sh \
--master yarn \
--deploy-mode client \
--queue default \
--num-executors 4 \
--conf spark.driver.memory=2G \
--conf spark.executor.memory=2G \
--conf spark.executor.cores=4 \
--conf spark.scheduler.mode=FAIR \
--conf 'spark.serializer=org.apache.spark.serializer.KryoSerializer' \
--conf 'spark.sql.extensions=org.apache.spark.sql.hudi.HoodieSparkSessionExtension' \
--jars /root/zxf/spark3/hudi-release-0.10.0/packaging/hudi-spark-bundle/target/hudi-spark3-bundle_2.12-0.10.0.jar \
--packages org.apache.hudi:hudi-spark3-bundle_2.12:0.10.0,org.apache.spark:spark-avro_2.12:3.1.2
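With the server running, the SQL below can be submitted through any JDBC client, for example beeline (hostname and username are assumptions for this environment):
beeline -u jdbc:hive2://localhost:10001 -n root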
Test case
create table hudi_cow_pt_tbl (
id bigint,
name string,
ts bigint,
dt string,
hh string
) using hudi
tblproperties (
type = 'cow',
primaryKey = 'id',
preCombineField = 'ts'
)
partitioned by (dt, hh)
location '/tmp/hudi/hudi_cow_pt_tbl'; -- this is the data storage directory on HDFS; a series of data directories (mostly partitions) will be created under it
insert into hudi_cow_pt_tbl select 1, 'tom', 1669001467, '2022-11-21', '11';
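To verify the write, the row can be read back; the _hoodie_* columns are metadata Hudi adds on write:
select _hoodie_commit_time, id, name, ts, dt, hh from hudi_cow_pt_tbl;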