Out-of-Memory Errors Caused by Broadcast in Spark (SparkFatalException)

2024-01-09

Background

This article is based on:

Spark 3.1.1
OpenJDK 1.8.0_352

While troubleshooting a Spark job recently, I ran into a rather puzzling problem, which I'm recording here.

Symptoms

A Spark application with 5 GB of driver memory had been scheduled and running normally for a long time, until one day it suddenly failed with:

Caused by: org.apache.spark.sql.catalyst.errors.package$TreeNodeException: execute, tree:
Exchange hashpartitioning(user_lable_id#530L, 500), ENSURE_REQUIREMENTS, [id=#1564]
+- *(16) Project [xxx]
   +- *(16) BroadcastHashJoin 
      ...
            +- *(14) ColumnarToRow
             +- FileScan parquet xxx

	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:56)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.doExecute(ShuffleExchangeExec.scala:169)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
	at org.apache.spark.sql.execution.InputAdapter.inputRDD(WholeStageCodegenExec.scala:525)
	at org.apache.spark.sql.execution.InputRDDCodegen.inputRDDs(WholeStageCodegenExec.scala:453)
	at org.apache.spark.sql.execution.InputRDDCodegen.inputRDDs$(WholeStageCodegenExec.scala:452)
	at org.apache.spark.sql.execution.InputAdapter.inputRDDs(WholeStageCodegenExec.scala:496)
	at org.apache.spark.sql.execution.SortExec.inputRDDs(SortExec.scala:132)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:746)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
	at org.apache.spark.sql.execution.InputAdapter.doExecute(WholeStageCodegenExec.scala:511)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
	at org.apache.spark.sql.execution.joins.SortMergeJoinExec.inputRDDs(SortMergeJoinExec.scala:378)
	at org.apache.spark.sql.execution.ProjectExec.inputRDDs(basicPhysicalOperators.scala:50)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:746)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.inputRDD$lzycompute(ShuffleExchangeExec.scala:123)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.inputRDD(ShuffleExchangeExec.scala:123)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.shuffleDependency$lzycompute(ShuffleExchangeExec.scala:157)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.shuffleDependency(ShuffleExchangeExec.scala:155)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.$anonfun$doExecute$1(ShuffleExchangeExec.scala:172)
	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
	... 291 more
Caused by: java.util.concurrent.ExecutionException: org.apache.spark.util.SparkFatalException
	at java.util.concurrent.FutureTask.report(FutureTask.java:122)
	at java.util.concurrent.FutureTask.get(FutureTask.java:206)
	at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.doExecuteBroadcast(BroadcastExchangeExec.scala:199)
	at org.apache.spark.sql.execution.InputAdapter.doExecuteBroadcast(WholeStageCodegenExec.scala:515)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeBroadcast$1(SparkPlan.scala:193)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.SparkPlan.executeBroadcast(SparkPlan.scala:189)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareBroadcast(BroadcastHashJoinExec.scala:203)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.prepareRelation(BroadcastHashJoinExec.scala:217)
	at org.apache.spark.sql.execution.joins.HashJoin.codegenOuter(HashJoin.scala:497)
	at org.apache.spark.sql.execution.joins.HashJoin.codegenOuter$(HashJoin.scala:496)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenOuter(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.joins.HashJoin.doConsume(HashJoin.scala:352)
	at org.apache.spark.sql.execution.joins.HashJoin.doConsume$(HashJoin.scala:349)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.CodegenSupport.consume(WholeStageCodegenExec.scala:194)
	at org.apache.spark.sql.execution.CodegenSupport.consume$(WholeStageCodegenExec.scala:149)
	at org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:41)
	at org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:87)
	at org.apache.spark.sql.execution.CodegenSupport.consume(WholeStageCodegenExec.scala:194)
	at org.apache.spark.sql.execution.CodegenSupport.consume$(WholeStageCodegenExec.scala:149)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.consume(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.joins.HashJoin.codegenOuter(HashJoin.scala:542)
	at org.apache.spark.sql.execution.joins.HashJoin.codegenOuter$(HashJoin.scala:496)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.codegenOuter(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.joins.HashJoin.doConsume(HashJoin.scala:352)
	at org.apache.spark.sql.execution.joins.HashJoin.doConsume$(HashJoin.scala:349)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doConsume(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.CodegenSupport.consume(WholeStageCodegenExec.scala:194)
	at org.apache.spark.sql.execution.CodegenSupport.consume$(WholeStageCodegenExec.scala:149)
	at org.apache.spark.sql.execution.ProjectExec.consume(basicPhysicalOperators.scala:41)
	at org.apache.spark.sql.execution.ProjectExec.doConsume(basicPhysicalOperators.scala:87)
	at org.apache.spark.sql.execution.CodegenSupport.consume(WholeStageCodegenExec.scala:194)
	at org.apache.spark.sql.execution.CodegenSupport.consume$(WholeStageCodegenExec.scala:149)
	at org.apache.spark.sql.execution.InputAdapter.consume(WholeStageCodegenExec.scala:496)
	at org.apache.spark.sql.execution.InputRDDCodegen.doProduce(WholeStageCodegenExec.scala:483)
	at org.apache.spark.sql.execution.InputRDDCodegen.doProduce$(WholeStageCodegenExec.scala:456)
	at org.apache.spark.sql.execution.InputAdapter.doProduce(WholeStageCodegenExec.scala:496)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:95)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.InputAdapter.produce(WholeStageCodegenExec.scala:496)
	at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:54)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:95)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:41)
	at org.apache.spark.sql.execution.joins.HashJoin.doProduce(HashJoin.scala:346)
	at org.apache.spark.sql.execution.joins.HashJoin.doProduce$(HashJoin.scala:345)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doProduce(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:95)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.produce(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:54)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:95)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:41)
	at org.apache.spark.sql.execution.joins.HashJoin.doProduce(HashJoin.scala:346)
	at org.apache.spark.sql.execution.joins.HashJoin.doProduce$(HashJoin.scala:345)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.doProduce(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:95)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.joins.BroadcastHashJoinExec.produce(BroadcastHashJoinExec.scala:40)
	at org.apache.spark.sql.execution.ProjectExec.doProduce(basicPhysicalOperators.scala:54)
	at org.apache.spark.sql.execution.CodegenSupport.$anonfun$produce$1(WholeStageCodegenExec.scala:95)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.CodegenSupport.produce(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.CodegenSupport.produce$(WholeStageCodegenExec.scala:90)
	at org.apache.spark.sql.execution.ProjectExec.produce(basicPhysicalOperators.scala:41)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doCodeGen(WholeStageCodegenExec.scala:655)
	at org.apache.spark.sql.execution.WholeStageCodegenExec.doExecute(WholeStageCodegenExec.scala:718)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$execute$1(SparkPlan.scala:180)
	at org.apache.spark.sql.execution.SparkPlan.$anonfun$executeQuery$1(SparkPlan.scala:218)
	at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
	at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:215)
	at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:176)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.inputRDD$lzycompute(ShuffleExchangeExec.scala:123)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.inputRDD(ShuffleExchangeExec.scala:123)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.shuffleDependency$lzycompute(ShuffleExchangeExec.scala:157)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.shuffleDependency(ShuffleExchangeExec.scala:155)
	at org.apache.spark.sql.execution.exchange.ShuffleExchangeExec.$anonfun$doExecute$1(ShuffleExchangeExec.scala:172)
	at org.apache.spark.sql.catalyst.errors.package$.attachTree(package.scala:52)
	... 328 more
Caused by: org.apache.spark.util.SparkFatalException
	at org.apache.spark.sql.execution.exchange.BroadcastExchangeExec.$anonfun$relationFuture$1(BroadcastExchangeExec.scala:173)
	at org.apache.spark.sql.execution.SQLExecution$.$anonfun$withThreadLocalCaptured$1(SQLExecution.scala:190)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:750)

Note: for confidentiality reasons, the concrete physical plan has been redacted in this article.
For anyone who has spent years knocking around the big-data world, the first instinct is of course to follow the stack trace, which leads straight to the BroadcastExchangeExec class. But even after reading that code from top to bottom, nothing turns up: as sketched below, the broadcast build thread simply wraps whatever fatal error it hits, so the wrapper is all the stack trace shows.
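The following is a minimal, self-contained sketch of that wrapping pattern (SparkFatalException is stubbed as a local class here; this is not the verbatim Spark source): the broadcast relation is built on a separate driver-side thread, any fatal Throwable from that thread is re-thrown wrapped, and the caller blocking on future.get() only ever sees the wrapper, exactly the ExecutionException -> SparkFatalException pair in the trace above.

// Minimal sketch, NOT the verbatim Spark source: SparkFatalException stubbed locally.
import java.util.concurrent.{Callable, ExecutionException, Executors}

class SparkFatalException(cause: Throwable) extends Exception(cause) // stand-in class

object BroadcastWrappingSketch extends App {
  val pool = Executors.newSingleThreadExecutor()

  // "relationFuture": builds the broadcast hash relation on the driver, off the main thread.
  val relationFuture = pool.submit(new Callable[AnyRef] {
    override def call(): AnyRef =
      try {
        // Building the broadcast side can exhaust the driver heap; simulate that here.
        throw new OutOfMemoryError("Java heap space")
      } catch {
        // Fatal throwables are re-thrown wrapped so they survive the Future machinery.
        case t: Throwable => throw new SparkFatalException(t)
      }
  })

  // "doExecuteBroadcast": the planner thread blocks on the future and sees only the wrapper.
  try relationFuture.get()
  catch {
    case e: ExecutionException =>
      println(e.getCause)          // SparkFatalException
      println(e.getCause.getCause) // java.lang.OutOfMemoryError: Java heap space
  } finally pool.shutdown()
}

Note that the real trace above stops at SparkFatalException without printing a nested cause, so the stack by itself says nothing about what actually went fatal.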

Suddenly Turning Back

This problem ate up roughly two hours. I had read the context around the error more than ten times without finding a single clue. Then, perhaps by divine guidance, fewer than 50 lines away from the error, just about one screen away, I spotted the following:

53.024: [Full GC (Ergonomics) [PSYoungGen: 802227K->698101K(1191424K)] [ParOldGen: 3085945K->3085781K(3495424K)] 3888173K->3783883K(4686848K), [Metaspace: 135862K->135862K(1185792K)], 0.9651630 secs] [Times: user=25.51 sys=0.39, real=0.96 secs] 
53.990: [Full GC (Allocation Failure) [PSYoungGen: 698101K->698047K(1191424K)] [ParOldGen: 3085781K->3079721K(3495424K)] 3783883K->3777769K(4686848K), [Metaspace: 135862K->134900K(1185792K)], 0.6236139 secs] [Times: user=14.05 sys=0.28, real=0.63 secs] 
java.lang.OutOfMemoryError: Java heap space
Dumping heap to panda_dump ...
Heap dump file created [3938522340 bytes in 5.708 secs]
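For reference, GC lines and an automatic heap dump like the ones above are the kind of output the driver JVM produces with standard HotSpot options. A sketch of how they might be wired up, assuming the usual spark.driver.extraJavaOptions mechanism; the dump path is made up, and in practice these options have to be supplied at launch (for example via spark-submit --conf), not from inside a running application:

import org.apache.spark.SparkConf

object DriverGcLoggingSketch {
  // Listed here only to show the flags; normally passed on the spark-submit command line.
  val driverJavaOptions: String =
    "-XX:+PrintGCDetails -XX:+PrintGCTimeStamps " + // timestamped Full GC lines like those above
    "-XX:+HeapDumpOnOutOfMemoryError " +            // write a heap dump when the driver OOMs
    "-XX:HeapDumpPath=/tmp/driver-oom.hprof"        // hypothetical dump location

  val conf: SparkConf = new SparkConf().set("spark.driver.extraJavaOptions", driverJavaOptions)
}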

It was truly a case of "having searched for it a thousand times in the crowd, I suddenly turned around and there it was": who would have thought it was an OOM problem.
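Since the underlying OOM came from materializing the broadcast relation on a 5 GB driver, the usual workarounds (a hedged sketch, not a prescription) are either to stop the planner from picking broadcast hash joins for this query, or to give the driver more headroom; the 8g figure below is purely illustrative.

import org.apache.spark.sql.SparkSession

object BroadcastWorkaroundSketch {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("broadcast-oom-workaround")
      // Option 1: disable automatic broadcast joins; the planner falls back to
      // sort-merge joins instead of building the relation on the driver.
      .config("spark.sql.autoBroadcastJoinThreshold", "-1")
      .getOrCreate()

    // Option 2 (alternative): keep broadcast joins but raise driver memory at launch,
    // e.g. spark-submit --driver-memory 8g ... (cannot be changed from inside the app).

    spark.stop()
  }
}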

Conclusion

When tracking down an error, it really pays to scroll a few more pages through the surrounding context.
