Programming through an IDE such as IDEA is essentially the same as the spark-shell and spark-sql approaches shown earlier; everything else is ordinary Spark programming knowledge. The following example uses Scala: create a new Scala Maven project in IDEA.
Add the following dependencies to the pom file:
<modelVersion>4.0.0</modelVersion>
<groupId>cn.itxs</groupId>
<artifactId>hoodie-spark-demo</artifactId>
<version>1.0</version>
<dependency>
<groupId>org.scala-lang</groupId>
<artifactId>scala-library</artifactId>
<version>${scala.version}</version>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-core_${scala.binary.version}</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-sql_${scala.binary.version}</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.spark</groupId>
<artifactId>spark-hive_${scala.binary.version}</artifactId>
<version>${spark.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>${hadoop.version}</version>
<scope>provided</scope>
</dependency>
<dependency>
<groupId>org.apache.hudi</groupId>
<artifactId>hudi-spark3.3-bundle_${scala.binary.version}</artifactId>
<version>${hoodie.version}</version>
<scope>provided</scope>
</dependency>
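The `${...}` placeholders above imply a `<properties>` block that did not survive extraction. A plausible sketch is below; the exact version numbers are assumptions, inferred from the `hudi-spark3.3-bundle` artifact and the `spark-3.3.0-bin-hadoop3` installation path used later in this article:

```xml
<properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <scala.version>2.12.15</scala.version>
    <scala.binary.version>2.12</scala.binary.version>
    <spark.version>3.3.0</spark.version>
    <hadoop.version>3.3.1</hadoop.version>
    <hoodie.version>0.12.1</hoodie.version>
</properties>
```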
<build>
<plugins>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-compiler-plugin</artifactId>
<version>3.10.1</version>
<configuration>
<source>1.8</source>
<target>1.8</target>
<encoding>${project.build.sourceEncoding}</encoding>
</configuration>
</plugin>
<plugin>
<groupId>org.scala-tools</groupId>
<artifactId>maven-scala-plugin</artifactId>
<version>2.15.2</version>
<executions>
<execution>
<goals>
<goal>compile</goal>
<goal>testCompile</goal>
</goals>
</execution>
</executions>
</plugin>
<plugin>
<groupId>org.apache.maven.plugins</groupId>
<artifactId>maven-shade-plugin</artifactId>
<version>3.2.4</version>
<executions>
<execution>
<phase>package</phase>
<goals>
<goal>shade</goal>
</goals>
<configuration>
<filters>
<filter>
<artifact>*:*</artifact>
<excludes>
<exclude>META-INF/*.SF</exclude>
<exclude>META-INF/*.DSA</exclude>
<exclude>META-INF/*.RSA</exclude>
</excludes>
</filter>
</filters>
</configuration>
</execution>
</executions>
</plugin>
</plugins>
</build>
Create a constants object:
object Constant {
val HUDI_STORAGE_PATH = "hdfs://192.168.5.53:9000/tmp/"
}
Insert data into Hudi:
package cn.itxs
import org.apache.spark.sql.SparkSession
import org.apache.spark.SparkConf
import org.apache.hudi.QuickstartUtils._
import scala.collection.JavaConversions._
import org.apache.spark.sql.SaveMode._
import org.apache.hudi.DataSourceWriteOptions._
import org.apache.hudi.config.HoodieWriteConfig._
object InsertDemo {
def main(args: Array[String]): Unit = {
val sparkConf = new SparkConf()
.setAppName(this.getClass.getSimpleName)
.setMaster("local[*]")
.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
val sparkSession = SparkSession.builder()
.config(sparkConf)
.enableHiveSupport()
.getOrCreate()
val tableName = "hudi_trips_cow_idea"
val basePath = Constant.HUDI_STORAGE_PATH+tableName
val dataGen = new DataGenerator
val inserts = convertToStringList(dataGen.generateInserts(10))
val df = sparkSession.read.json(sparkSession.sparkContext.parallelize(inserts,2))
df.write.format("hudi").
options(getQuickstartWriteConfigs).
option(PRECOMBINE_FIELD.key(), "ts").
option(RECORDKEY_FIELD.key(), "uuid").
option(PARTITIONPATH_FIELD.key(), "partitionpath").
option(TBL_NAME.key(), tableName).
mode(Overwrite).
save(basePath)
sparkSession.close()
}
}
Because the dependencies are scoped as provided, check the option in the IDEA run configuration that adds provided-scope dependencies to the classpath.
Run the InsertDemo program to write data to Hudi.
Run the ReadDemo program to read the Hudi data.
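The ReadDemo source is not shown above. A minimal sketch of what such a read program could look like, mirroring the structure of InsertDemo (the class name, queried columns, and temp view name are assumptions based on the Hudi quickstart data generator's schema):

```scala
package cn.itxs

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession

object ReadDemo {
  def main(args: Array[String]): Unit = {
    val sparkConf = new SparkConf()
      .setAppName(this.getClass.getSimpleName)
      .setMaster("local[*]")
      .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
    val sparkSession = SparkSession.builder()
      .config(sparkConf)
      .getOrCreate()

    val tableName = "hudi_trips_cow_idea"
    val basePath = Constant.HUDI_STORAGE_PATH + tableName

    // Snapshot-read the Hudi table written by InsertDemo as a DataFrame
    val tripsDF = sparkSession.read.format("hudi").load(basePath)
    tripsDF.createOrReplaceTempView("hudi_trips_snapshot")

    // Query a few of the fields generated by QuickstartUtils' DataGenerator
    sparkSession.sql(
      "select uuid, partitionpath, rider, fare from hudi_trips_snapshot"
    ).show(false)

    sparkSession.close()
  }
}
```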
After packaging with mvn clean package, upload the jar and run it:
spark-submit \
  --class cn.itxs.ReadDemo \
  /home/commons/spark-3.3.0-bin-hadoop3/appjars/hoodie-spark-demo-1.0.jar
DeltaStreamer
The HoodieDeltaStreamer utility (part of hudi-utilities-bundle) provides ways to ingest from different sources such as DFS or Kafka, with the following capabilities: new events from Kafka, incremental imports from Sqoop, the output of HiveIncrementalPuller, or files under a DFS folder.
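As a hedged illustration of how such an ingestion might be launched (the jar path, version, target paths, and properties file are assumptions; the flag names follow the hudi-utilities-bundle DeltaStreamer CLI):

```shell
spark-submit \
  --class org.apache.hudi.utilities.deltastreamer.HoodieDeltaStreamer \
  /path/to/hudi-utilities-bundle_2.12-0.12.1.jar \
  --table-type COPY_ON_WRITE \
  --source-class org.apache.hudi.utilities.sources.JsonKafkaSource \
  --source-ordering-field ts \
  --target-base-path hdfs://192.168.5.53:9000/tmp/hudi_trips_stream \
  --target-table hudi_trips_stream \
  --props /path/to/kafka-source.properties
```

Here `--source-ordering-field` plays the same deduplication role as the precombine field in the InsertDemo program above.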