ZooKeeper does not recover from crash when disk was full

2023-05-16

Description

The disk that ZooKeeper was using filled up. During a snapshot write, I got the following exception:

2013-01-16 03:11:14,098 - ERROR [SyncThread:0:SyncRequestProcessor@151] - Severe unrecoverable error, exiting
java.io.IOException: No space left on device
at java.io.FileOutputStream.writeBytes(Native Method)
at java.io.FileOutputStream.write(FileOutputStream.java:282)
at java.io.BufferedOutputStream.flushBuffer(BufferedOutputStream.java:65)
at java.io.BufferedOutputStream.flush(BufferedOutputStream.java:123)
at org.apache.zookeeper.server.persistence.FileTxnLog.commit(FileTxnLog.java:309)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.commit(FileTxnSnapLog.java:306)
at org.apache.zookeeper.server.ZKDatabase.commit(ZKDatabase.java:484)
at org.apache.zookeeper.server.SyncRequestProcessor.flush(SyncRequestProcessor.java:162)
at org.apache.zookeeper.server.SyncRequestProcessor.run(SyncRequestProcessor.java:101)

Then many subsequent exceptions like:

2013-01-16 15:02:23,984 - ERROR [main:Util@239] - Last transaction was partial.
2013-01-16 15:02:23,985 - ERROR [main:ZooKeeperServerMain@63] - Unexpected exception, exiting abnormally
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:558)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:577)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:543)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.init(FileTxnLog.java:529)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:504)
at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:341)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:130)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:259)
at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:386)
at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:138)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:112)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)

It seems to me that writing the transaction log should be fully atomic to avoid such situations. Is this not the case?

Answers

Short answer: deleting the partially written snapshot file solved it.

1. I was able to work around the issue by deleting the partially written snapshot file.

2. I believe the exception is being thrown while reading the snapshot, and the "partial transaction" message is not an indication of what is causing the crash. Trying a different snapshot sounds right, but according to the log messages you posted, the problem seems to be that we are not catching EOFException.

3. So these exceptions are thrown while ZooKeeper is running? I'm not sure why it's exiting so many times. Do you restart the ZK server when it dies?

4. We run ZooKeeper with runit, so yes, it is restarted when it dies. It ends up in a loop of:

  • No space left on device
  • Starting server
  • Last transaction was partial
  • Snapshotting: 0x19a3d to /opt/zookeeper-3.4.3/data/version-2/snapshot.19a3d
  • No space left on device

5. I thought you said it does not recover when the disk was full, but it looks like the disk is still full, no?

6. Here is the full sequence of events (sorry for the confusion):

  • Noticed disk was full
  • Cleaned up disk space
  • Tried zkCli.sh, got errors
  • Checked ZK log, loop of:

2013-01-16 15:01:35,194 - ERROR [main:Util@239] - Last transaction was partial.
2013-01-16 15:01:35,196 - ERROR [main:ZooKeeperServerMain@63] - Unexpected exception, exiting abnormally
java.io.EOFException
at java.io.DataInputStream.readInt(DataInputStream.java:375)
at org.apache.jute.BinaryInputArchive.readInt(BinaryInputArchive.java:63)
at org.apache.zookeeper.server.persistence.FileHeader.deserialize(FileHeader.java:64)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.inStreamCreated(FileTxnLog.java:558)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.createInputArchive(FileTxnLog.java:577)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.goToNextLog(FileTxnLog.java:543)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.next(FileTxnLog.java:625)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.init(FileTxnLog.java:529)
at org.apache.zookeeper.server.persistence.FileTxnLog$FileTxnIterator.<init>(FileTxnLog.java:504)
at org.apache.zookeeper.server.persistence.FileTxnLog.read(FileTxnLog.java:341)
at org.apache.zookeeper.server.persistence.FileTxnSnapLog.restore(FileTxnSnapLog.java:130)
at org.apache.zookeeper.server.ZKDatabase.loadDataBase(ZKDatabase.java:223)
at org.apache.zookeeper.server.ZooKeeperServer.loadData(ZooKeeperServer.java:259)
at org.apache.zookeeper.server.ZooKeeperServer.startdata(ZooKeeperServer.java:386)
at org.apache.zookeeper.server.NIOServerCnxnFactory.startup(NIOServerCnxnFactory.java:138)
at org.apache.zookeeper.server.ZooKeeperServerMain.runFromConfig(ZooKeeperServerMain.java:112)
at org.apache.zookeeper.server.ZooKeeperServerMain.initializeAndRun(ZooKeeperServerMain.java:86)
at org.apache.zookeeper.server.ZooKeeperServerMain.main(ZooKeeperServerMain.java:52)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.initializeAndRun(QuorumPeerMain.java:116)
at org.apache.zookeeper.server.quorum.QuorumPeerMain.main(QuorumPeerMain.java:78)

  • Stopped ZK
  • Listed ZK data directory

ubuntu@ip-10-78-19-254:/opt/zookeeper-3.4.3/data/version-2$ ls -lat
total 18096
drwxr-xr-x 2 zookeeper zookeeper     4096 Jan 16 06:41 .
-rw-r--r-- 1 zookeeper zookeeper        0 Jan 16 06:41 log.19a3e
-rw-r--r-- 1 zookeeper zookeeper   585377 Jan 16 06:41 snapshot.19a3d
-rw-r--r-- 1 zookeeper zookeeper 67108880 Jan 16 03:11 log.19a2a
-rw-r--r-- 1 zookeeper zookeeper   585911 Jan 16 03:11 snapshot.19a29
-rw-r--r-- 1 zookeeper zookeeper 67108880 Jan 16 03:11 log.11549
-rw-r--r-- 1 zookeeper zookeeper   585190 Jan 15 17:28 snapshot.11547
-rw-r--r-- 1 zookeeper zookeeper 67108880 Jan 15 17:28 log.1
-rw-r--r-- 1 zookeeper zookeeper      296 Jan 14 16:44 snapshot.0
drwxr-xr-x 3 zookeeper zookeeper     4096 Jan 14 16:44 ..

  • Removed log.19a3e and snapshot.19a3d

ubuntu@ip-10-78-19-254:/opt/zookeeper-3.4.3/data/version-2$ sudo rm log.19a3e
ubuntu@ip-10-78-19-254:/opt/zookeeper-3.4.3/data/version-2$ sudo rm snapshot.19a3d

  • Started ZK
  • Back to normal

1. Attaching zookeeper.log

2. FYI, this issue is a duplicate of ZOOKEEPER-1612 (curiously, a permutation of the last two digits, heh). I'd suggest closing 1612 as the dup instead, if possible.

3. I'll mark 1612 as a dup. Thanks for pointing that out, Edward.

4. Looks like the header was incomplete. Unfortunately, we do not handle a corrupt header, though we do handle corrupt txns later. I'm surprised that this happened twice in a row for two users. I'll upload a patch and test case.

5. Should FileTxnIterator.goToNextLog() return false if the header is corrupted/incomplete, or should it skip that log file and move on to the next one if it exists?

6. -1 overall. Here are the results of testing the latest attachment

http://issues.apache.org/jira/secure/attachment/12645856/ZOOKEEPER-1621.patch
against trunk revision 1596284.

+1 @author. The patch does not contain any @author tags.

+1 tests included. The patch appears to include 3 new or modified tests.

+1 javadoc. The javadoc tool did not generate any warning messages.

+1 javac. The applied patch does not increase the total number of javac compiler warnings.

+1 findbugs. The patch does not introduce any new Findbugs (version 1.3.9) warnings.

+1 release audit. The applied patch does not increase the total number of release audit warnings.

-1 core tests. The patch failed core unit tests.

+1 contrib tests. The patch passed contrib unit tests.

Test results: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2105//testReport/
Findbugs warnings: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2105//artifact/trunk/build/test/findbugs/newPatchFindbugsWarnings.html
Console output: https://builds.apache.org/job/PreCommit-ZOOKEEPER-Build/2105//console

This message is automatically generated.

1. Here's a different option: intuitively, once ZooKeeper fails to write to disk, continuing to operate normally violates its promise to users (that once a majority has acked, the data is always there, even across reboots). Once we realize the promise can't be kept, it may be better to crash the server at that point and violate liveness (no availability) rather than continue and risk coming up with a partial log at a later point, violating safety (inconsistent state, lost transactions, etc.).

2. I'm fine with Alex's suggestion. We should document how to recover manually when the server doesn't start because the log file doesn't contain a complete header.

3. I actually like Alexander Shraer's suggestion. However, if this is going to be the recommended way of recovering from a corrupt log file, there should be a script that does it for users: zk-recover.sh or some such. In this world of deployment automation, it's not a nice thing to say "go delete the most recent log segment from ZK's data dir". Much better for the application to handle it through a script or command.

4. Apart from these corrective measures, there should be some preventive measures as well. Can we have a disk space checker that periodically checks whether disk space is available and, if it is not, shuts down ZooKeeper gracefully?

5. You mean, like a dedicated ZK thread for this? What would the behavior be: shut down only if it's the leader?

6. Yes, a dedicated thread for this, like org.apache.zookeeper.server.DatadirCleanupManager.

  • Shut down in every case, because without disk space ZooKeeper cannot serve any purpose.
  • The idea is as follows (a sketch appears after this list):
    • Add two new ZooKeeper properties:
      diskspace.min.threshold=5% (values can be a percentage of the data directory's available space, or in GB)
      diskspace.check.interval=5 seconds (default: 5, min: 1, max: Long.MAX_VALUE)
    • Add a dedicated disk-check thread:
      • which runs every diskspace.check.interval seconds
      • and shuts down the ZooKeeper instance if available disk space falls below diskspace.min.threshold
  • Some clarifications:
    • Query: Suppose diskspace.check.interval=5 and the disk can be filled within 5 seconds by ZooKeeper or another process. How is that handled?
      Ans: Users should know their usage scenario and which other processes share the disk, and tune diskspace.check.interval accordingly.
    • Query: Say diskspace.check.interval=1, but the disk can be filled even within 1 second by ZooKeeper and other processes.
      Ans: Yes, it can still fill up if diskspace.min.threshold is too low; again, users need to tune diskspace.min.threshold based on their disk usage.
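
For illustration, a minimal sketch of what such a checker could look like. The property names come from the proposal above; the DiskSpaceChecker class, its constructor, and the shutdown callback are hypothetical, not part of ZooKeeper's actual codebase.

import java.io.File;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the proposed checker; not ZooKeeper API.
public class DiskSpaceChecker {
    private final File dataDir;
    private final long minFreeBytes;     // derived from diskspace.min.threshold
    private final long intervalSeconds;  // diskspace.check.interval
    private final Runnable shutdown;     // e.g. stops the ZooKeeper instance
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public DiskSpaceChecker(File dataDir, long minFreeBytes,
                            long intervalSeconds, Runnable shutdown) {
        this.dataDir = dataDir;
        this.minFreeBytes = minFreeBytes;
        this.intervalSeconds = intervalSeconds;
        this.shutdown = shutdown;
    }

    public void start() {
        scheduler.scheduleWithFixedDelay(() -> {
            // getUsableSpace() reports the bytes available to this JVM.
            if (dataDir.getUsableSpace() < minFreeBytes) {
                shutdown.run();       // stop before a partial write corrupts the log
                scheduler.shutdown(); // no further checks needed
            }
        }, intervalSeconds, intervalSeconds, TimeUnit.SECONDS);
    }
}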

7. Reviving this old thread. Alexander Shraer has a valid concern about trading off consistency for availability. However, for the specific issue being addressed here, we can have both.

The patch skips transaction logs with an incomplete header (the first 16 bytes). Skipping such files should not cause any loss of data, as the header is an internal bookkeeping write from ZooKeeper and does not contain any user data. This avoids the current behavior of ZooKeeper crashing on encountering an incomplete header, which compromises availability.

This has been a recurring problem for us in production because our app's operating environment occasionally causes a ZooKeeper server's disk to become full. After that, the server invariably runs into this problem; perhaps something else deterministically triggers a log rotation when the previous txn log throws an IOException due to the full disk?

That said, we can tighten the exception caught in Michi Mutsuzaki's patch from IOException to EOFException, to make sure the log we skip really contains only a partially written header and nothing else (in FileTxnLog.goToNextLog).

Additionally, I have written a test to verify that EOFException is thrown if and only if the header is truncated; ZooKeeper already ignores any other partially written transactions in the txn log. If that's useful, I can upload the test, thanks.
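
That test was never posted in the thread. As a rough sketch of what it could verify, assuming the 3.4-era jute-generated FileHeader class named in the stack traces above and JUnit 4: a header stream shorter than its 16 bytes should surface as an EOFException.

import java.io.ByteArrayInputStream;
import java.io.EOFException;
import org.apache.jute.BinaryInputArchive;
import org.apache.zookeeper.server.persistence.FileHeader;
import org.junit.Assert;
import org.junit.Test;

public class TruncatedHeaderTest {
    @Test
    public void truncatedHeaderThrowsEOF() throws Exception {
        // A complete header holds 16 bytes (magic, version, dbid); supply 3.
        byte[] partial = new byte[] {'Z', 'K', 'L'};
        BinaryInputArchive ia =
                BinaryInputArchive.getArchive(new ByteArrayInputStream(partial));
        try {
            new FileHeader().deserialize(ia, "fileheader");
            Assert.fail("expected EOFException for a truncated header");
        } catch (EOFException expected) {
            // the same failure mode seen when the disk filled up mid-write
        }
    }
}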

8. Agreed. Forcing users to clean up the partial/empty header manually in this scenario seems undesirable, and if we only catch EOFException instead of IOException, we shouldn't run into any correctness problems. Additionally, since this issue should only occur "legitimately" in the most recent txn log file, we can be even more conservative and continue only in that case.

9. Thanks, Meyer Kizner. Your suggestion of doing this only for the most recent txn log file is sound. Are you also suggesting that we delete the truncated txn log file? Because if we skip it without deleting it, newer txn log files will be created later, and the truncated file will no longer be the latest txn log when we run a purge afterwards. Deletion seems consistent with this approach as well as with PurgeTxnLog's behavior.

10. Yes, we would have to delete such a log file upon encountering it. I don't believe this would cause any problems, and it seems desirable to have the extra check this enables.
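
Putting comments 7 through 10 together, the recovery rule is: while scanning txn logs from oldest to newest, an EOF while reading the 16-byte header is tolerable only on the newest file, and that file gets deleted. A self-contained sketch follows (not the committed patch; the magic constant and file handling are assumptions), which is also roughly what the zk-recover.sh tool suggested earlier would need to do:

import java.io.DataInputStream;
import java.io.EOFException;
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;
import java.util.List;

public class TruncatedLogRecovery {
    // "ZKLG" as a big-endian int; assumed to match FileTxnLog's magic.
    private static final int TXNLOG_MAGIC = 0x5a4b4c47;

    // Returns false only when the stream ends inside the 16-byte header.
    static boolean headerIsComplete(File log) throws IOException {
        try (DataInputStream in = new DataInputStream(new FileInputStream(log))) {
            if (in.readInt() != TXNLOG_MAGIC) {
                throw new IOException("invalid magic number in " + log);
            }
            in.readInt();  // version
            in.readLong(); // dbid
            return true;
        } catch (EOFException e) {
            return false; // fewer than 16 header bytes were ever written
        }
    }

    // logs must be sorted oldest first.
    static void repair(List<File> logs) throws IOException {
        for (int i = 0; i < logs.size(); i++) {
            File log = logs.get(i);
            if (headerIsComplete(log)) {
                continue;
            }
            if (i != logs.size() - 1) {
                // A truncated header anywhere but the newest log is real
                // corruption, not a disk-full artifact; fail loudly.
                throw new IOException("corrupt header in non-latest log " + log);
            }
            // The newest log never got a complete header. It holds no user
            // data, so deleting it is safe (and keeps a later purge from
            // mistaking it for the latest log).
            if (!log.delete()) {
                throw new IOException("could not delete " + log);
            }
        }
    }
}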

11. The proposed fix makes sense to me.

Is it feasible to make a stronger guarantee for ZooKeeper's serialization semantics: that under no circumstances (disk full, power failure, hardware failure) would ZooKeeper generate invalid persistent files (for both snapshots and txn logs)? This might be possible by serializing to a swap file first and then, at some point, doing an atomic rename of the file. With a guarantee that the on-disk format is sane, the deserialization logic would be simplified, as there would not be many corner cases to consider beyond the existing basic checksum checks.

I can think of two potential drawbacks of this approach:

  • Performance: if we write to a swap file and then rename for every write, we make more syscalls per write, which might impact write performance/latency.
  • Potential data loss during recovery: to improve performance, we could batch writes and only rename at certain points (e.g., every 1000 writes). In case of a failure, part of the data might be lost, since data (possibly corrupted or partially serialized) living in the swap file will not be parsed by ZK during startup (we would only load and parse renamed files).

My feeling is the best approach might be a mix of efforts on both the serialization and deserialization sides:

  • When serializing, make a best effort to avoid generating corrupted files (e.g., through atomic writes to files).
  • When deserializing, make a best effort to detect corrupt files and recover conservatively. The success of recovery may be case by case; for this disk-full case the proposed fix sounds quite safe, while in other cases it may not be straightforward to tell which data is good and which is bad.
  • As a result, the expectation is that when things crash and files are corrupted, ZK should be able to recover later without manual intervention. This would be good for users.
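
As a concrete illustration of the swap-file idea, a minimal sketch using only the JDK, under the assumption that the temp file and target live on the same filesystem (this is not how ZooKeeper actually writes its files):

import java.io.IOException;
import java.io.OutputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public final class AtomicFileWriter {
    // Write contents to a temp file in the target's directory, then rename.
    // POSIX rename() within one filesystem is atomic, so a reader never
    // observes a half-written target file.
    public static void writeAtomically(Path target, byte[] contents)
            throws IOException {
        Path tmp = Files.createTempFile(target.getParent(), "swap-", ".tmp");
        try (OutputStream out = Files.newOutputStream(tmp)) {
            out.write(contents);
            // For durability across power failure, an fsync via
            // FileChannel.force(true) would also be needed; omitted here.
        }
        // ATOMIC_MOVE fails outright instead of degrading to copy+delete;
        // on POSIX filesystems the rename replaces an existing target.
        Files.move(tmp, target, StandardCopyOption.ATOMIC_MOVE);
    }
}

Batching, as noted above, would amortize the extra syscalls per write at the cost of losing whatever sits in the un-renamed swap file on a crash.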

For details, see: https://issues.apache.org/jira/browse/ZOOKEEPER-1621
