DVCon US 2022 Paper Collection

2023-11-15

The DVCon US 2022 proceedings comprise 55 papers, and the full set is now open for download. This post collects each paper's abstract and download link to make them easy to obtain and discuss; you can also send a private message via the official account to get them.


1. A Comparative Study of CHISEL and SystemVerilog Based on Logical Equivalent SweRV EL2 RISC-V Core

Abstract-

Download: https://dvcon-proceedings.org/document/a-comparative-study-of-chisel-and-systemverilog-based-on-logical-equivalent-swerv-el2-riscv-core

2. A Low-Maintenance Infrastructure to Jumpstart CPU Regression and Performance Correlation

Abstract- Surge is a low-maintenance RTL jumpstart mechanism that works on any architectural boundary for CPU simulation. Surge's design makes it resilient to frequent RTL rewrites and design changes. It has a small code base to maintain, supports multicore, and is compatible with all soft-IP configurations. Legacy RTL jumpstart mechanisms relied on a specialized SVTB module with a detailed mapping of every architectural state to its microarchitectural (µArch) counterpart. The legacy jumpstart module must understand every relevant signal path, register shadow copy, and array configuration; this results in monolithic code that frequently breaks due to µArch changes. Surge utilizes the CPU's functional high-level model and the legacy CPU power-state feature to abstract away the µArch dependencies. Under Surge, any test first runs very fast under the E-core [1] (formerly known as Intel Atom) functional model to the point of interest, then seamlessly migrates to RTL simulation. As a result, Intel E-core validation skips the RTL simulation cycles spent on test setup, and the E-core performance team can more reliably complete performance-correlation simulation.

Download: https://dvcon-proceedings.org/document/a-low-maintenance-infrastructure-to-jumpstart-cpu-regression-and-performance-correlation

3. A New Approach to Easily Resolve the Hidden Timing Dangers of False Path Constraints on Clock Domain Crossings

Abstract- The need for preventing data skew in digital designs is well known in the industry, and standard simulation or formal checks ensure that these functional bugs don’t reach silicon. However, by labelling Clock Domain Crossing (CDC) signals as “false paths,” different bits of a CDC bus or its qualifier may be routed with varying amounts of time delay. If the delay on certain bits is greater than 1 clock cycle, this can result in data skew or metastability in the sampling. Such issues can create silicon-killing functional bugs, and since these timing issues don’t appear in RTL simulation, designers need to spend a large amount of effort to identify and constrain these paths for the back-end. This paper will describe new and improved methodologies for detecting and preventing such timing-analysis CDC bugs that also remove the overhead from designers.

Download: https://dvcon-proceedings.org/document/a-new-approach-to-easily-resolve-the-hidden-timing-dangers-of-false-path-constraints-on-clock-domain-crossings

4. A UVM SystemVerilog Testbench for Analog/Mixed-Signal Verification: A Digitally-Programmable Analog Filter Example

Abstract- A simple way of extending the UVM framework to verify an analog/mixed-signal device-under-test (DUT) is presented. A digitally-programmable analog bandpass filter circuit serves as an example. The SystemVerilog UVM testbench presented checks the filter's transfer gain at randomly-chosen frequencies against the values predicted by SPICE. It collects the results in a UVM-based scoreboard. It uses SystemVerilog assertions to check the supply current and bias voltage levels during power-down mode. The test suite can adjust the number of transactions until the desired coverage is met. The proposed testbench uses standard UVM components as-is by encapsulating all the analog-specific contents into a fixture submodule. It contains the DUT itself, generates analog stimuli, and measures analog amplitude using XMODEL primitives. By seamlessly integrating XMODEL's ability to run fast and accurate analog simulations into this UVM testbench, an efficient SystemVerilog-based verification with SPICE-level accuracy is demonstrated.

Download: https://dvcon-proceedings.org/document/a-uvm-systemverilog-testbench-for-analogmixed-signal-verification-a-digitally-programmable-analog-filter-example

5. Accelerating Error Handling Verification of Complex Systems: A Formal Approach

Abstract- Error handling verification is one of the key phases in determining the reliability of any embedded system. It involves verifying that the system correctly detects and gracefully reports various errors. This is especially critical for Smart Network Interface Cards (NICs), as they are usually located in an isolated environment and need to be continuously online with little to no human interaction. Failure to report an error may expose security vulnerabilities such as denial of service. Due to technological advancements in recent years, the complexity of Smart NICs has increased, resulting in a greater number of error scenarios. This has made the task of error handling verification even more challenging using constraint-based random verification (CBRV). In this paper we demonstrate how leveraging Formal Property Verification (FPV) can address these challenges, using our work on error handling verification of a hardware (HW) Decompression IP.

Download: https://dvcon-proceedings.org/document/accelerating-error-handling-verification-of-complex-systems-a-formal-approach

6. Accelerating Performance, Power and Functional Validation of Computer Vision Use Cases on Next-Generation Edge Inferencing Products

Abstract- In this paper we present a methodology to accelerate three validation vectors targeted at Computer Vision use cases on AI edge inferencing products using an emulation platform. The vectors are functional, performance, and power validation.

Keywords: Vision, AI, Edge inferencing, Emulation, Validation, Performance, Power

Download: https://dvcon-proceedings.org/document/accelerating-performance-power-and-functional-validation-of-computer-vision-use-cases-on-next-generation-edge-inferencing-products

7. Adaptive Test Generation for Fast Functional Coverage Closure

Abstract- Coverage-driven test generation (CDG) is a well-studied approach to accelerate design verification. However, existing CDG methods rely on verification experts to extrapolate coverage-to-input relationships or use costly data-driven techniques to learn these dependencies, both of which introduce significant overhead to the verification process. In this work, we propose CDG4CDG, a low-overhead framework that automatically extracts the coverage dependency graph associated with a design and leverages the dependency information for adaptive test stimuli generation. The test generation is performed by a maximum likelihood estimation method in CDG4CDG that uses the dependency graph as prior information and adaptively changes the distribution of the random input stimuli to exercise the design more comprehensively towards higher coverage. We integrated CDG4CDG with a standard CRV flow. Our evaluations on open-source as well as large-scale real-world designs of a complex block in a tensor processing unit (TPU) show that our approach significantly speeds up the coverage convergence with no human effort required.
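The abstract does not publish CDG4CDG's algorithm, but the core idea of adaptively reweighting the input distribution using a coverage dependency graph can be sketched. The sketch below is a toy stand-in, not the paper's maximum-likelihood formulation: `dep_graph` maps each input value to the coverage bins it is known to influence, and values are weighted by how many of their bins remain unhit.

```python
import random

def adaptive_generate(dep_graph, covered, base_weights, trials=200, seed=0):
    """Adaptively reshape the stimulus distribution: each input value is
    weighted by how many of its dependent coverage bins (per the
    dependency graph) are still unhit, so generation drifts toward
    uncovered functionality."""
    rng = random.Random(seed)
    for _ in range(trials):
        w = {v: base_weights[v] * (1 + sum(b not in covered for b in bins))
             for v, bins in dep_graph.items()}
        value = rng.choices(list(w), weights=list(w.values()))[0]
        # In a real flow the DUT run would report which bins were hit;
        # here we assume every bin dependent on the value is exercised.
        covered |= dep_graph[value]
    return covered
```

As a bin gets covered, the weight contribution of the values feeding it drops, so later draws favor the still-uncovered parts of the space.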

Download: https://dvcon-proceedings.org/document/adaptive-test-generation-for-fast-functional-coverage-closure

8. Advanced Functional Verification for Automotive System-on-a-Chip

Abstract- The rapidly growing automotive semiconductor industry and the market requirements for advanced driver assistance systems (ADAS) are driving the development of larger and more complex automotive systems-on-chip (SoCs). This paper studies system-level functional verification for automotive SoCs. Specifically, it explains the automotive features that impose huge challenges on system-level verification in terms of completeness and efficiency, and presents advanced verification approaches to solve these challenges. Firstly, the complex power scheme in the automotive SoC is discussed as a verification completeness challenge, and the following approaches are provided as a solution: i) automatic coverage generation based on UPF, the specification of power intent; ii) extension of the functional coverage with system control events; iii) assertion-based low-power feature debugging methods. These approaches enhance metric-driven low-power design verification and unveil undetected design bugs. Issues identified through the proposed approaches are summarized in this paper so that they can be of practical help. Secondly, for the self-diagnosis features that guarantee high system reliability of an automotive SoC, the limitations of existing simulation approaches when running usage scenarios are explained as verification efficiency challenges. To deal with these challenges, a new testbench architecture and test platform for efficient use of emulators are proposed. The proposed platform significantly reduces the turn-around time (TAT), confirming that tests that were difficult to validate earlier can now be validated at the pre-silicon phase.

Download: https://dvcon-proceedings.org/document/advanced-functional-verification-for-automotive-system-on-a-chip

9. Advanced UVM Command-Line Processor for Central Maintenance and Randomization of Control Knobs

Abstract- This paper describes an advanced command-line processor, an extension of the uvm_cmdline_processor class, which enables users to pass a combination of ranges or specific values and to specify weights for the randomization of control knobs. The original functionality of the base class is kept intact, and additional features are developed on top of it. The source code is made available in the appendix of the paper.
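The paper's actual plusarg grammar is only available in its appendix; as a language-neutral illustration of the idea, here is a sketch that parses a hypothetical `+knob_name=value:weight,...` argument and performs a weighted random pick. The `+num_pkts=...` syntax and helper names are assumptions, not the paper's API.

```python
import random

def parse_knob(arg):
    """Parse a hypothetical '+name=v1:w1,v2:w2' plusarg into
    (name, values, weights); a missing weight defaults to 1."""
    name, _, spec = arg.lstrip('+').partition('=')
    values, weights = [], []
    for item in spec.split(','):
        v, _, w = item.partition(':')
        values.append(int(v))
        weights.append(int(w) if w else 1)
    return name, values, weights

def randomize_knob(arg, rng=None):
    """Resolve one knob by weighted random choice over its legal values."""
    rng = rng or random.Random()
    name, values, weights = parse_knob(arg)
    return name, rng.choices(values, weights=weights)[0]
```

The appeal of centralizing this in one processor is that every test can reuse the same parse-and-randomize path instead of duplicating per-knob constraint code.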

Download: https://dvcon-proceedings.org/document/advanced-uvm-command-line-processor-for-central-maintenance-and-randomization-of-control-knobs

10. Automatic Translation of Natural Language to SystemVerilog Assertions

Abstract-

Download: https://dvcon-proceedings.org/document/automatic-translation-of-natural-language-to-systemverilog-assertions

11. Avoiding Confounding Configurations: An RDC Methodology for Configurable Designs

Abstract- Requirements for asynchronous reset behavior extend the complexities of Clock Domain Crossings as designs add reset domains to meet power and functional requirements. Design configurability, common in IP development and in the construction of subsystems built from configurable elements, introduces challenges to the verification of Reset Domain Crossings (RDC). This paper outlines an approach to RDC verification that efficiently addresses these challenges.

Download: https://dvcon-proceedings.org/document/avoiding-confounding-configurations-an-rdc-methodology-for-configurable-designs

12. BatchSolve: A Divide-and-Conquer Approach to Solving the Memory Ordering Problem

Abstract- The Memory Ordering Problem (MOP) arises frequently in unit- and integration-level testbenches (TBs) where Bus Functional Models (BFMs) drive Memory Operations (MemOps) on various interfaces of the Design Under Test (DUT). Each MemOp has a set of attributes such as MemOpType (Reads/Writes/Atomics/Barrier), destination MemType (System/Video Memory), SourceType (CPU core, GPU SM), address, length, etc. Certain MemOps, like reads, non-posted writes, certain atomics, and barriers, receive a response (or acknowledgment) back. These responses/acks, their attributes, and their timings are collectively termed Resp-Attributes. Given the issue order of the MemOps and all information about the MemOp-Attributes and Resp-Attributes as observed on the DUT boundary, the problem is to determine whether there exists a Global Order (GO) of the MemOps such that: 1) the read data observed matches the last write data in GO to the same address, and 2) the order of MemOps in GO satisfies the various ordering rules in the Implementational and Architectural Specifications (IAS). Ordering rules can vary greatly across design units and link connectivity. For example, in a GPU TB where MemOps are going to CPU memory, the ordering rules can vary based on the type of link (PCIe vs. NVLink) over which the CPU is connected. MOP is similar to the Memory Consistency Problem (MCP), which is known to be NP-complete [1]. Existing techniques modify this problem under reasonable assumptions to make it tractable; we provide an overview of such techniques in the previous-work section. Such techniques either require high development and maintenance cost or are not flexible enough to allow users to write arbitrary ordering rules at a high level. In this paper, we introduce BatchSolve (BATS), a low-cost, scalable, portable, and flexible approach to solving MOP. Our formulation allows users to specify ordering rules as high-level SystemVerilog (SV) constraints. We leverage the power of the SV constraint solver to determine whether there exists a GO that satisfies the user-specified high-level ordering constraints and matches the results of an execution. BATS can be easily ported to another TB (horizontal reuse) and is immune to architectural modifications (vertical reuse across projects). To the best of our knowledge, this is the first SV-solver-based approach to solving the MOP.
(Figure 1: Inferring a Global Order based on execution and checking for the existence of a legal GO)
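The existence check described above can be sketched in miniature. BatchSolve delegates it to the SystemVerilog constraint solver; this stand-in brute-forces permutations instead, which is only viable for the small batches a divide-and-conquer split produces. The `memops` record layout and rule-as-predicate interface are illustrative assumptions.

```python
from itertools import permutations

def find_global_order(memops, rules):
    """memops: list of dicts like {'id', 'op' ('R'/'W'), 'addr', 'data'}.
    rules: predicates over a candidate order (a list of ids), standing in
    for the user's high-level ordering constraints. Returns a legal
    Global Order, or None if no order satisfies both the read-returns-
    last-write check and every ordering rule."""
    for order in permutations(memops):
        mem = {}
        ok = True
        for m in order:
            if m['op'] == 'W':
                mem[m['addr']] = m['data']
            elif mem.get(m['addr']) != m['data']:
                ok = False  # a read must return the last write in this order
                break
        ids = [m['id'] for m in order]
        if ok and all(rule(ids) for rule in rules):
            return ids
    return None
```

For example, if two writes to the same address must stay in issue order, a read that observed the first write's data forces any legal GO to place the read between the two writes.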

Download: https://dvcon-proceedings.org/document/batchsolve-a-divide-and-conquer-approach-to-solving-the-memory-ordering-problem

13. Caching Tool Run Results in Large-Scale RTL Development Projects

Abstract- Caching is a widely used technique to improve efficiency in both software and hardware, including EDA tools and compute/storage infrastructure. But there is significant untapped potential when it comes to sharing tool run results within a large-scale RTL development team. Members of such a large team often repeat various tool runs simply because there is no efficient way to share the results of previous runs among themselves. This wastes compute resources and imposes unnecessary wait time on the designers, which ultimately leads to long turn-around time (TAT) and higher cost of the SoC. In this paper, we discuss the challenges of sharing tool run results among the designers of such large teams and explore a few caching methods to address these challenges. We discuss the pros and cons of these methods, with particular focus on the one we found to be the most balanced solution. We present data from a recent SoC at Intel where this balanced method is saving thousands of hours of compute time as well as a significant amount of engineering effort.
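The paper does not reveal which caching method won out, but the general shape of result sharing is content addressing: key each run by a digest of the tool version plus all input contents, so any teammate's identical run is a cache hit. A minimal sketch under that assumption (a real deployment would back `_store` with shared storage):

```python
import hashlib
import json

class RunCache:
    """Shared cache of tool-run results keyed by a digest of the tool
    version and the (order-insensitive) set of inputs, so identical
    runs by different team members are reused rather than repeated."""
    def __init__(self):
        self._store = {}  # stand-in for a shared filer or object store

    @staticmethod
    def key(tool_version, inputs):
        blob = json.dumps({'tool': tool_version, 'inputs': sorted(inputs)},
                          sort_keys=True).encode()
        return hashlib.sha256(blob).hexdigest()

    def run(self, tool_version, inputs, runner):
        k = self.key(tool_version, inputs)
        if k not in self._store:           # cache miss: actually run the tool
            self._store[k] = runner(inputs)
        return self._store[k]
```

The trade-offs the paper weighs likely revolve around what goes into the key: too little and you get stale results, too much and nothing ever hits.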

Download: https://dvcon-proceedings.org/document/caching-tool-run-results-in-large-scale-rtl-development-projects

14. CAMEL: A Flexible Cache Model for Cache Verification

Abstract- With increasingly complex cache designs in recent years, cache verification has become more challenging. In this paper, we demonstrate a flexible cache model for cache verification. The cache model is built with a simple structure, but it can fulfill the most important verification criterion -- data correctness -- even in a complex design. Extra 'location' information is added to the model for more precise checking and stimulus generation.
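The abstract only hints at the model's structure; as a guess at the "simple structure plus location information" idea, here is a minimal reference model that tracks expected data and a location tag per address, so a scoreboard can check both what a read returns and where the line is believed to reside. Method names and the location labels are illustrative assumptions.

```python
class CacheModel:
    """Minimal reference model: per address, tracks the expected data and
    a 'location' tag (which level is believed to hold the line) so checks
    and stimulus can be made location-aware."""
    def __init__(self):
        self.mem = {}   # addr -> (data, location)

    def write(self, addr, data, location='L1'):
        self.mem[addr] = (data, location)

    def evict(self, addr, to='MEM'):
        data, _ = self.mem[addr]
        self.mem[addr] = (data, to)   # data unchanged, location updated

    def check_read(self, addr, observed_data):
        """Return (data_ok, expected_location) for a DUT read."""
        data, loc = self.mem.get(addr, (None, None))
        return observed_data == data, loc
```

Keeping location separate from data lets the checker stay correct about data even when its placement guess is only advisory.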

Download: https://dvcon-proceedings.org/document/camel-a-flexible-cache-model-for-cache-verification

15. Case Study: Successes and Challenges of Reuse

Abstract- At Intel, the IP validation team creates internal test content used to validate the functionality of various IPs in the SoC. This content is flexibly designed to support different pre-silicon and post-silicon environments, and has been used to validate successive generations of IP revisions. This content is also scalable and has been reused in SoC validation, where these IPs are ultimately integrated. This paper discusses the challenges of applying reuse from IP to SoC, and the key design and strategic decisions that we made to overcome them. We discuss our results, including headcount savings, RTL bugs found, and operational cost savings.

Keywords: Verification, Validation, Portable Stimulus, PSS, Reuse, IP, SoC, Manufacturing

Download: https://dvcon-proceedings.org/document/case-study-successes-and-challenges-of-reuse

16. Co-Developing IP and SoC Bring-up Firmware with PSS

Abstract- Runtime-configuration and operation of design IP is increasingly dependent on firmware. However, firmware for those IPs is often created too late, and in an unsuitable form, to be helpful in SoC-bringup tests. This presents an obstacle to creating SoC integration tests, and often results in late discovery of hardware/software interaction issues. This paper proposes an Accellera Portable Test and Stimulus (PSS) -enabled flow for co-developing and co-verifying design IP and firmware.

Download: https://dvcon-proceedings.org/document/co-developing-ip-and-soc-bring-up-firmware-with-pss

17. Confidently Sign off Any Low-Power Designs without Consequences

Abstract- Successful verification of low-power designs has always been a challenge in the industry. The IEEE 1801 (UPF) standard introduces low-power modeling concepts into today's complex designs. Various empirical studies show that 40% or more of engineering time and effort is typically spent on debug alone in any design verification project. For low-power design and verification, these debug challenges are further complicated, and more resource- and time-consuming, as a consequence of complicated power management architectures and the implications of their instrumentation on the actual design. With increasing SoC complexity, it is essential to understand the common low-power design issues so that they can be avoided or caught early in the SoC design cycle, saving debug time and effort or even a re-spin of the chip. Thus, a well-applied low-power verification methodology is important for signing off with confidence. In this paper, we provide an in-depth analysis of various low-power design issues that are faced on a daily basis. Through relevant examples and case studies, this paper demonstrates how these issues can be either avoided or solved during the RTL bring-up phase. Based on dynamic verification and coverage techniques, this paper presents an efficient low-power verification methodology which can be followed for signing off the chip without consequences.

Download: https://dvcon-proceedings.org/document/confidently-sign-off-any-low-power-designs-without-consequences

18. Emulation-Based Power and Performance Workloads on ML NPUs

Abstract— Modern SoCs are touching new heights in terms of size and complexity. With this, the demand for low-power devices has also increased. Hence, power verification has become essential, and it is important to keep the power numbers within limits. To achieve this, power calculations need to be performed from the early stages of the design cycle. Software simulators were used for this purpose, but for large and complex designs, simulators slow down. That is where emulators come into the picture. In emulation, verification is done on hardware that can be FPGA-based or processor-based, and hence is much faster than simulation. Emulators significantly help to reduce the turnaround time of performance and power coverage closure. In this paper, we discuss a design being migrated from a simulation platform to an emulation platform and the benefits of using emulation for functional verification and power verification.

Keywords— Emulation, Power Verification, Functional Verification

Download: https://dvcon-proceedings.org/document/emulation-based-power-and-performance-workloads-on-ml-npus

19. Enhanced Dynamic Hybrid Simulation Framework for Hardware-Software Verification

Abstract- The feasibility of hardware-software co-verification constitutes a significant challenge in System-on-Chip (SoC) development. A comprehensive co-simulation can take months, if not years, of CPU time. To solve this problem, a hybrid approach was introduced several years ago. The motivation to use it derives from its ability to address the contradiction between hardware and software development requirements. On the one hand, software development benefits from accelerated simulation and does not require high accuracy all the time. On the other hand, hardware development relies mainly on time-aware simulation and can only sacrifice precision occasionally. This paper presents an enhanced dynamic hybrid framework that can satisfy both requirements and may be used for hardware-software co-verification.

Download: https://dvcon-proceedings.org/document/enhanced-dynamic-hybrid-simulation-framework-for-hardware-software-verification

20. Evaluating the Feasibility of a RISC-V Core for Real-Time Applications Using a Virtual Prototype

Abstract- The replacement of a key component within industrial embedded systems usually requires huge verification and validation efforts. Especially the replacement of the MCU (microcontroller unit) core architecture normally entails significant changes to the HW/SW co-design and co-verification process, possibly including the purchase of costly design and verification IP. Our intended use case is a system redesign where an established MCU is replaced by a RISC-V (reduced instruction set computer) core. Since the complete redesign process requires a significant effort, a feasibility evaluation study helps to elaborate the system requirements and to detect possible issues early in the replacement process. Once feasibility has been demonstrated, hardware (re-)design may start. In this paper we propose a HW/SW co-verification methodology to evaluate the feasibility of an MCU core replacement based on a virtual prototype (VP), thereby saving time and cost in the redesign process. This methodology links the VP development process with the requirements management process to reuse the test cases.

Download: https://dvcon-proceedings.org/document/evaluating-the-feasibility-of-a-riscv-core-for-real-time-applications-using-a-virtual-prototype

21. Extension of the Power-Aware IP Reuse Approach to ESL

Abstract-

Download: https://dvcon-proceedings.org/document/extension-of-the-power-aware-ip-reuse-approach-to-esl

22. Finding a Needle in a Haystack: A Novel Log Analysis Method with Test Clustering in Distributed Systems

Abstract- As the diversity and complexity of digital designs increase, the cost of design verification also increases. In this trend, verification engineers struggle with repetitive and tedious processes, such as debugging common errors from different test cases. To address this, verification engineers can use their own domain knowledge to prioritize test cases to find different errors; however, it takes time and effort to do this manually. It is also not an easy task to understand different designs and testbenches with limited time and resources. In this paper, we propose a novel test case prioritization algorithm based on clustering. Components from terabytes of simulation logs are treated as features in the clustering algorithms. After clustering, the proposed algorithm reorders test cases by priority and recommends test cases that allow verification engineers to achieve a given rate of error detection as early as possible. We also propose a robust and scalable system architecture that can handle terabytes of data for practical use in the design verification industry.
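The paper's clustering pipeline operates on terabytes of distributed logs; the essence can be shown at toy scale. This sketch (an assumption, not the authors' algorithm) normalizes error lines into run-invariant signatures, clusters tests by signature, and interleaves the clusters so distinct failure modes surface as early as possible.

```python
import re
from collections import defaultdict

def signature(log_text):
    """Collapse run-specific details (hex values, decimals) in ERROR
    lines so logs failing for the same reason share one signature."""
    err_lines = [l for l in log_text.splitlines() if 'ERROR' in l]
    return tuple(re.sub(r'0x[0-9a-fA-F]+|\d+', '<N>', l) for l in err_lines)

def prioritize(tests):
    """tests: dict test_name -> log text. Reorder test names so each
    distinct failure signature appears as early as possible."""
    clusters = defaultdict(list)
    for name, log in tests.items():
        clusters[signature(log)].append(name)
    order, groups = [], list(clusters.values())
    while any(groups):
        for g in groups:
            if g:
                order.append(g.pop(0))  # round-robin: one test per cluster
    return order
```

Two tests that differ only in timestamps and addresses land in the same cluster, so an engineer triaging in the recommended order sees each root cause once before seeing any duplicates.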

Download: https://dvcon-proceedings.org/document/finding-a-needle-in-a-haystack-a-novel-log-analysis-method-with-test-clustering-in-distributed-system

23. Flattening the UVM Learning Curve: Automated Solutions for DSP Filter Verification

Abstract- By their nature, DSP filters are computationally intensive, predominantly structured as data paths with relatively simple control paths. Algorithm developers often develop filters using MATLAB®, with hardware engineers developing RTL to implement the MATLAB specification. Traditionally, the verification of filters at unit level has been limited, with the RTL designer developing a simple testbench comparing RTL outputs against pre-generated MATLAB golden vectors. That approach is now changing because the complexity of control paths is increasing, algorithms are being modified frequently to conform with evolving standards, and optimization is being done to minimize power consumption. These developments drive the need for more comprehensive verification methods. Considered as a standalone testbench, the DSP filter testbench is still quite small, but the number of such benches in a project is quite high. So, while we see the obvious benefits of implementing our testbenches in UVM, both the size and the number of testbenches make it prohibitive. This led us to explore "lite-UVM" testbenches with a high degree of automation. Our goal was to realize the benefits of UVM-based verification, but with minimal bring-up effort. In this paper, we document the various solutions we explored to develop more robust verification, spanning from home-brewed solutions to off-the-shelf industry tools. The intended audience for this presentation is DV engineers looking for a light UVM framework that verifies DSP filters with complex data paths but simple control logic. The paper is organized into four parts: Introduction, Related Work, Automation Strategies for DSP Filters, and Conclusion.

Keywords: DSP verification, UVM code generation, dpigen, uvmbuild

Download: https://dvcon-proceedings.org/document/flattening-the-uvm-learning-curve-automated-solutions-for-dsp-filter-verification

24. Fnob: Command-Line Dynamic Random Generator

Abstract- Constrained random has been a founding methodology for DV to close coverage within an affordable number of tests. However, there are several limitations in current constraint syntax implementations. For example, duplicated code needs to be created for similar tests with different values, introducing error-prone code that is hard to scale. Second, it takes several layers of abstraction to override the random behavior of the same variable across tests, which lengthens the schedule for hitting the same coverage goal. Finally, a re-compile is needed for every new random variation added through constraint syntax, which increases the turn-around time for test development. An alternative novel form of constrained random called "Fnob" is introduced in this paper to overcome the above shortfalls. By giving the user freedom to override both the random type and value from the command line and from in-line code without re-compiling, "Fnob" speeds up coverage closure through less error-prone constrained-random coding and faster test development.

Keywords— Constrained Random; SystemVerilog; Command-line Override; UVM Configuration Database
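The key mechanic claimed above, overriding both the random type and the value of a knob at the command line without recompiling, can be sketched as follows. The `rand[lo:hi]` / fixed-value spec grammar is a hypothetical illustration, not Fnob's actual syntax.

```python
import random
import re

def fnob(name, default_range, overrides, rng=None):
    """Resolve a knob at runtime: 'overrides' maps knob name to a
    command-line string such as '7' (fixed value) or 'rand[1:10]'
    (re-randomized range); absent knobs fall back to the compiled-in
    default range. No recompile is needed to change behavior."""
    rng = rng or random.Random()
    spec = overrides.get(name)
    if spec is None:
        lo, hi = default_range
        return rng.randint(lo, hi)          # compiled-in default
    m = re.fullmatch(r'rand\[(\d+):(\d+)\]', spec)
    if m:
        return rng.randint(int(m.group(1)), int(m.group(2)))
    return int(spec)                        # fixed-value override
```

Because the override is interpreted at runtime, the same compiled test image can sweep new value ranges across a regression, which is the turn-around-time saving the abstract describes.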

Download: https://dvcon-proceedings.org/document/fnob-command-line-dynamic-random-generator

25. Hierarchical UPF: Uniform UPF across FE-SD

Abstract- Power intent specification of a System-on-Chip (SoC) using IEEE 1801 Unified Power Format (UPF) is a complicated task and requires coordinated efforts from the RTL, validation, and Back End (BE) teams. Ideally, the same set of UPF files (hierarchical UPF) should be consumed across the different SoC domains; however, due to technological limitations in BE tools and flows, this has not been done until today. The tools, flows, and methodologies used in past projects dictated that the BE use merged UPF, which requires 70% extra effort for its generation and about 2 months for the initial bring-up, with multiple configurations and manual hacks needed in the Front End (FE). This also leads to a common scenario where verification is done on hierarchical UPF whereas the BE implementations are done on merged UPF. But the process of validating the logical equivalence of the merged UPF with the hierarchical UPF is cumbersome and never foolproof. Ultimately this results in higher turn-around time (TAT) and possible silicon escapes. With the advancement of tools and methodologies, the industry is now moving toward adopting hierarchical UPF for all tools and flows, while most projects in Intel still use hierarchical UPF for FE and merged UPF for BE implementations. So, in this paper, we describe the learnings from a proof of concept (POC) of using hierarchical UPF for the BE implementations. The recommendations/guidelines for the seamless consumption of hierarchical UPF, which are being followed by the current SoC, are also listed. These guidelines empowered the adoption of hierarchical UPF for the current SoC, allowing us to consume 3rd-party IP UPF files directly after integration, which was not possible earlier, and, in turn, reducing the manual work required for generating and validating the merged UPF files.

Keywords: Unified Power Format (UPF), hierarchical UPF, SoC, merged UPF

Download: https://dvcon-proceedings.org/document/hierarchical-upf-uniform-upf-across-fe-sd

26. Hopscotch: A Scalable Flow-Graph-Based Approach to Formally Specify and Verify Memory-Tagged Store Execution in Arm CPUs

Abstract-

Download: https://dvcon-proceedings.org/document/hopscotch-a-scalable-flowgraph-based-approach-to-formally-specify-and-verify-memorytagged-store-execution-in-arm-cpus

27. How to Avoid the Pitfalls of Mixing Formal and Simulation Coverage

Abstract- Driven by the need to objectively measure the progress of their verification efforts, and the relative contributions of different verification techniques, customers have adopted "coverage" as a metric. However, what exactly is being measured differs depending on the underlying verification technology in use. Consequently, simply merging coverage measurements from different sources -- in particular, blindly combining functional coverage from constrained-random simulation and formal analysis -- can fool the end user into believing that they have made more progress, and/or that they have observed behaviors of importance, when neither is the case. In this paper we first review what these forms of "coverage" are telling the user, and then show how to merge them in a manner that accurately reports status and expected behaviors.
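One concrete pitfall implied above is treating formal results as if they were simulation hits. A minimal sketch of a provenance-aware merge (my illustration, not the paper's recipe): bins proven unreachable by formal are removed from the denominator rather than counted as covered, so the merged score never inflates from conflating "proven" with "observed".

```python
def merge_coverage(sim_hits, formal_proven_unreachable, all_bins):
    """Merge per-bin results without conflating 'observed in simulation'
    with 'proven unreachable by formal': unreachable bins shrink the
    denominator instead of being marked covered."""
    reachable = set(all_bins) - set(formal_proven_unreachable)
    covered = set(sim_hits) & reachable
    score = 100.0 * len(covered) / len(reachable) if reachable else 100.0
    return {'covered': sorted(covered),
            'excluded_unreachable': sorted(formal_proven_unreachable),
            'score_pct': score}
```

Keeping the provenance explicit in the report also lets a reviewer ask the right follow-up question for each bin: "was this behavior witnessed, or merely shown impossible?"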

Download: https://dvcon-proceedings.org/document/how-to-avoid-the-pitfalls-of-mixing-formal-and-simulation-coverage

28. Hybrid Emulation: Accelerating Software-Driven Verification and Debug

Abstract- Time to market is one of the key factors in the semiconductor industry, not only for commercial success but also to meet the technology demands of today's world. It can be improved through a highly optimized verification strategy that includes early software development and testing. As the size and complexity of designs increase, it is no longer possible to load the whole design into the emulator, especially where OS boot and software applications need to be validated. This is where hybrid emulation fills the gap. Debugging hardware and software issues using real OS apps on the FPGA-based Juno platform was challenging due to low design visibility; hence, it took a significant amount of time to root-cause the underlying issues. The hybrid system consists of a virtual platform connected to the SoC sub-block (GPU) running in the emulator. It can run a full software stack with GPU graphics and compute content, with full design visibility. Now it is easier to reproduce, debug, and root-cause software or hardware issues reported on the FPGA platform. The hybrid platform also enables faster design bring-up and verification closure.

Download: https://dvcon-proceedings.org/document/hybrid-emulation-accelerating-software-driven-verification-and-debug

29. Innovative Uses of SystemVerilog Bind Statements within Formal Verification

Abstract- Bind statements in SystemVerilog are frequently used by simulation-based verification benches to add verification code to RTL design modules, interfaces, or compilation-unit scopes. With bind statements, verification code is separated from the RTL design, so design and verification teams can maintain their own files. In the context of formal verification, bind statements are also heavily used. In addition to the common usages, due to the unique nature of formal verification, bind statements become more powerful: an assumption can be used to constrain inputs; a snip point can be used to cut the driver logic of a signal; a floating net or undriven signal can be controlled by formal tools the same way as primary inputs. In this paper, the authors use real-life examples to demonstrate a few innovative uses of bind statements within formal verification.

Download: https://dvcon-proceedings.org/document/innovative-uses-of-systemverilog-bind-statements-within-formal-verification

30. Is-it-a-software-bug-It-is-a-hardware-bug

Abstract- Pre-silicon firmware and software (FW/SW) testing on emulation platforms drastically shortens time to market by shifting project schedules left. As a result, the FW/SW is integrated with the RTL at a much earlier stage, when the RTL code is less stable. When a test failure occurs, it is sometimes very hard to determine whether the issue originated in the software code or the hardware code. In this paper, we present our strategy for triaging the root cause of a bug, provide tips and tricks on implementing the software/hardware co-verification utilities used in our process, and include some pointers on how to reuse the emulation debug flow for post-silicon debug.

Download: https://dvcon-proceedings.org/document/is-it-a-software-bug-it-is-a-hardware-bug

31. Leaping-Left-Seamless-IP-to-SoC-Hand-off

Abstract- IP packaging and qualification are an integral part of IP integration into an SoC. High turn-around times (TAT) to integrate graphics IPs into SoCs have been a constant challenge in the past. Inefficiencies in package generation, undocumented hacks/workarounds, and a lack of automation are a few of the causes of low-quality IP drops. In addition, there were no efficient methodologies to check IP design quality against industry and SoC standards, which required substantial manual effort to generate an IP package and thoroughly test IPs before making an SoC delivery. In this paper, we introduce a novel Tools, Flows and Methodology (TFM) agnostic approach to automated IP packaging, along with a sophisticated Quality Assurance (QA) and Quality Checker (QC) infrastructure that aims to shift-left the identification of integration bugs using a Continuous Integration (CI) system. The methodology is based on robust, improved methods that generate fully qualified drops, helping reduce SoC integration TAT from 2-3 weeks to less than 3 days per milestone drop, as shown in Figure 1. Within a Product Life Cycle (PLC), these efforts have yielded around one quarter of savings and a 50% reduction in resourcing.

Download: https://dvcon-proceedings.org/document/leaping-left-seamless-ip-to-soc-hand-off

32. Left-Shift-Mechanism-to-Mitigate-Gate-Level-Asynchronous-Design-Challenges

Abstract- In Intel's IOTG (Internet of Things Group) SoC designs, it is often a challenge to sign off gate-level Clock Domain Crossing (CDC) or Reset Domain Crossing (RDC) verification because of the lack of IP/subsystem hierarchical collaterals, the lack of clock-reset architectural understanding in the back-end domain, waiver-importing issues, strict schedule pressure, etc. However, with the advent of complex clock-reset architectures and the increase in reset signaling complexity brought by multiple reset domains, gate-level clock and reset domain crossing verification has become an absolute need to ensure glitch-free reset assertions during various power states. Therefore, there is a tangible need for a methodology that ensures all CDC or RDC issues arising after low-power cell insertion, scan insertion, and synthesis optimization are left-shifted into front-end CDC or RDC verification. This paper details complex gate-level CDC and RDC challenges in IOTG SoC designs and proposes a few simple front-end verification techniques to address issues mainly related to implementation flows.

Download: https://dvcon-proceedings.org/document/left-shift-mechanism-to-mitigate-gate-level-asynchronous-design-challenges

33. Machine-Learning-Based-Verification-Planning-Methodology-Using-Design-and-Verification-Data

Abstract- As the complexity and integration of SoCs continue to increase, development time increases significantly and verification time increases even more dramatically. In a typical verification process, a large number of directed or randomized simulation test cases are generated and run as a regression. In a full-chip verification environment, this regression takes a significant amount of time to complete and can be a critical path in a project schedule. In this paper, we discuss a methodology to efficiently manage the regression and reduce its run time. We classify regression test cases using various machine learning algorithms and correlate them with the amount of design change and previously failed simulation results. During this process, we are able to predict, in the early regression stage, failing test cases that would otherwise only be found later. This buys additional debugging time and ultimately helps reduce total regression time. The proposed methodology is being applied to the latest mobile SoC projects.
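
A toy sketch of the kind of prediction the abstract describes might rank tests by features such as overlap with recent design changes and historical failure rate. The feature set, weights, and data below are illustrative assumptions, not the paper's actual model:

```python
# Hypothetical sketch: rank regression tests by predicted failure risk,
# combining overlap with recently changed design files and the test's
# historical failure rate. Weights are invented for illustration; a real
# flow would learn them from regression data.

def failure_risk(test, changed_files, history):
    # Fraction of the files this test exercises that were recently modified
    touched = test["covers"] & changed_files
    change_overlap = len(touched) / max(len(test["covers"]), 1)
    # Historical failure rate of this test (fails / runs)
    fails, runs = history.get(test["name"], (0, 1))
    fail_rate = fails / runs
    # Simple weighted score standing in for a trained classifier
    return 0.7 * change_overlap + 0.3 * fail_rate

def prioritize(tests, changed_files, history):
    # Run the riskiest tests first so failures surface early in regression
    return sorted(tests, key=lambda t: -failure_risk(t, changed_files, history))

tests = [
    {"name": "t_cache", "covers": {"cache.sv", "lsu.sv"}},
    {"name": "t_alu",   "covers": {"alu.sv"}},
]
history = {"t_cache": (4, 10), "t_alu": (0, 10)}
order = prioritize(tests, {"cache.sv"}, history)
```

With the invented data above, `t_cache` is scheduled first because it both touches a changed file and has failed before.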

Download: https://dvcon-proceedings.org/document/machine-learning-based-verification-planning-methodology-using-design-and-verification-data

34. Maximizing-Formal-ROI-through-Accelerated-IP-Verification-Sign-off

Abstract- Over the past few years, formal verification (FV) has become an essential piece of our verification sign-off methodology. We have successfully used FV to sign off several critical design blocks with zero escapes and have also set up a mature FV sign-off flow that is well integrated into our mainstream verification process. However, despite the great return on investment (ROI) generated on each block signed off using FV, the overall impact on previous projects was somewhat limited due to the scope of FV adoption. To unleash the full power of FV, we believe that 1) FV should be considered a primary method to achieve block-level verification sign-off and 2) FV should also be leveraged at the system level to uncover "superbugs" that are beyond the reach of the traditional simulation approach. This idea is supported by our management team. As a result, we started to deploy FV on a much larger scale to accelerate the verification sign-off of a brand-new IP. This paper shares our recent experience of how we upscaled FV usage to maximize formal ROI.

Download: https://dvcon-proceedings.org/document/maximizing-formal-roi-through-accelerated-ip-verification-sign-off

35. Metadata-Based-Testbench-Generation

Abstract- This paper introduces automated testbench generation techniques that use metadata derived from the design spec. It focuses on full-chip-level structural testbenches for register and interconnect verification. We demonstrate what content needs to be captured in the metadata and how to automate UVM (Universal Verification Methodology) testbench generation from it.
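
As a minimal illustration of the idea (not the paper's actual flow), metadata captured as plain records can drive template-based generation of testbench code. The register fields, template, and generated task names below are all invented for the sketch:

```python
# Hypothetical sketch of metadata-driven testbench generation: render a
# SystemVerilog register reset-value check from a machine-readable
# register spec. Schema and template are illustrative, not the paper's.

REGS = [
    {"name": "CTRL",   "offset": 0x00, "reset": 0x0001},
    {"name": "STATUS", "offset": 0x04, "reset": 0x0000},
]

TEMPLATE = """\
// Auto-generated register reset-value check
task check_{name}();
  reg_read(32'h{offset:04X}, rdata);
  if (rdata !== 32'h{reset:04X}) `uvm_error("REG", "{name} reset mismatch")
endtask
"""

def generate(regs):
    # One check task per register record in the metadata
    return "\n".join(TEMPLATE.format(**r) for r in regs)

sv_code = generate(REGS)
```

Because the testbench is rendered from metadata, a spec change only requires regenerating, not hand-editing, the checks.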

Download: https://dvcon-proceedings.org/document/metadata-based-testbench-generation

36. Mixed-Signal-Design-Verification-Leveraging-the-Best-of-AMS-and-DMS

Abstract- Analog and mixed-signal (AMS) design and verification methodology has existed since the very beginning of mixed-signal IC design practice, but its role has gradually become less clear with the emerging dominance of digital mixed-signal (DMS) methodology. In this paper, we analyze the individual roles of AMS and DMS, why both are necessary and complementary, and how we can take advantage of each flow's strengths to optimize verification resources and job efficiency.

Download: https://dvcon-proceedings.org/document/mixed-signal-design-verification-leveraging-the-best-of-ams-and-dms

37. Mixed-signal-Functional-Verification-Methodology-for-embedded-Non-volatile-Memory-using-ESP-simulation

Abstract- IP verification is one of the key features of the foundry business, because only sufficiently validated IPs can guarantee the robustness of system-level operation. However, mixed-signal IPs are difficult to verify, and most of them are not supported by EDA tools. In this paper, we introduce our mixed-signal IP verification method using netlist extraction, module conversion techniques, and ESP symbolic simulation. We used three types of memory to verify our methodology: eMRAM, a state-of-the-art emerging NVM, together with eFLASH and OTP memory, which are most commonly used in the foundry business. Our proposed method enables various types of mixed-signal IPs to be handled by existing EDA tools and provides an effective verification environment.

Download: https://dvcon-proceedings.org/document/mixed-signal-functional-verification-methodology-for-embedded-non-volatile-memory-using-esp-simulation

38. Modeling-Analog-Devices-using-SV-RNM

Download: https://dvcon-proceedings.org/document/modeling-analog-devices-using-sv-rnm

39. Modeling-Memory-Coherency-during-concurrentsimultaneous-accesses

Abstract- The presence of multiple actors that can concurrently modify and read memory complicates the problem of modeling and verifying memory coherency. In practice, it may not be feasible to precisely model the expected value of a memory location observed on a read when there are multiple concurrent writers and readers of the same location. In this paper, we develop a framework that can predict the multiple potential coherent values that could be observed on a read when there are concurrent/simultaneous reads and writes to memory. Using the concept of a window of uncertainty, we store all potential write values in our memory model and remove values over time that are known to be incoherent, thereby verifying coherency even in the presence of uncertain read outcomes.
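
The window-of-uncertainty idea can be sketched in a few lines: while writes are in flight, a scoreboard keeps every value a read could legally observe, and prunes values as writes complete. The class, method names, and the assumed reset value of 0 below are invented for illustration:

```python
# Hypothetical sketch of a "window of uncertainty" memory model: while
# writes to an address are in flight, a read may legally observe any of
# several values; the checker flags a read only when the observed value
# falls outside the candidate set.

class CoherencyModel:
    def __init__(self):
        self.candidates = {}  # addr -> set of currently legal values

    def write_start(self, addr, value):
        # A new in-flight write adds one more legally observable value
        # (0 stands in for the assumed reset value of the location)
        self.candidates.setdefault(addr, {0}).add(value)

    def write_complete(self, addr, value):
        # Once the write is globally visible, older values become
        # incoherent and are pruned from the window
        self.candidates[addr] = {value}

    def check_read(self, addr, observed):
        # Coherent iff the observed value is still inside the window
        return observed in self.candidates.get(addr, {0})

m = CoherencyModel()
m.write_start(0x100, 0xAA)            # two overlapping writers
m.write_start(0x100, 0xBB)
ok_old = m.check_read(0x100, 0xAA)    # either value is legal mid-window
m.write_complete(0x100, 0xBB)
ok_stale = m.check_read(0x100, 0xAA)  # stale value is now incoherent
```

The key design choice is that the checker never commits to a single expected value until ordering information makes older values provably incoherent.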

Download: https://dvcon-proceedings.org/document/modeling-memory-coherency-during-concurrentsimultaneous-accesses

40. Never-too-late-with-formal-Stepwise-guide-for-applying-formal-verification-in-post-silicon-phase-to-avoid-re-spins

Abstract- Formal Verification (FV) is a proven, efficient technique for exhaustively verifying complex hardware designs. While FV has been a preferred pre-silicon verification tool, it is also highly effective in assisting post-silicon debug and adding confidence in bug fixes. Enabling formal verification in the post-silicon phase poses a different set of challenges from the traditional pre-silicon mode. In this paper, we present a stepwise guide to applying formal verification in the post-silicon phase to reproduce post-silicon failures in pre-silicon, validate bug fixes, and build a comprehensive solution to uncover additional vulnerabilities, thereby guaranteeing higher confidence in the design. We also discuss two case studies that illustrate the effectiveness of the methodology on a successful re-spin.

Download: https://dvcon-proceedings.org/document/never-too-late-with-formal-stepwise-guide-for-applying-formal-verification-in-post-silicon-phase-to-avoid-re-spins

41. Novel-GUI-Based-UVM-Test-Bench-Template-Builder

Abstract- Adoption of the Universal Verification Methodology (UVM) is increasing across the industry, and the need to build new Verification Intellectual Property (VIP) or testbenches is in great demand. Writing an effective, structured UVM testbench from scratch is usually cumbersome, and following a standard structure with provision for reuse across projects is also challenging. What if the initial development cycle could be reduced from days to minutes with the help of a Graphical User Interface (GUI) that builds the verification component templates? This abstract presents an overview of the GUI used to develop individual UVM components or entire VIP templates, loaded with features to customize and configure them to user requirements.

Download: https://dvcon-proceedings.org/document/novel-gui-based-uvm-test-bench-template-builder

42. Optimizing-Turnaround-Times-In-Continuous-Integration-Using-Scheduler-Implementation

Abstract-

Download: https://dvcon-proceedings.org/document/optimizing-turnaround-times-in-continuous-integration-using-scheduler-implementation

43. Path-based-UPF-Strategies-Optimally-Manage-Power-on-your-Designs

Abstract- The UPF 3.0 and 3.1 LRMs together introduce path-based semantics for isolation, level-shifter, and repeater strategies, in conjunction with the -sink and -diff_supply_only TRUE commands. This feature explicitly defines the source-to-sink domain paths to which any of these strategies applies. Before path-based semantics were introduced, isolation, level-shifter, and repeater strategies relied on ad hoc methodologies, such as port splitting, which made power management very difficult. For example, when an isolation strategy is specified on a port whose fanout goes to multiple receiving logic supplies, port-based semantics would place isolation cells on all paths. Contrast this with path-based semantics, which place isolation cells only on paths that go to the sinks or receiving logic specified in the strategy. This paves the way to optimally managing power on any design. However, adopting the path-based methodology is not straightforward, as it depends on the contents of the strategies. For example, the isolation strategy options -location fanout, self, and parent add extra complexity when inferring a strategy according to expectations. This paper addresses the complexity of adopting path-based strategies through numerous examples and real designs. It will also help UPF users transition smoothly from a port-based ad hoc methodology to a path-based standard methodology and understand how isolation, level-shifter, and repeater strategies and cells are inferred between source and sink power domains. Keywords: Port-based, Path-based, Port-splitting, Net-splitting, Redundant, Optimal, Area, Power

Download: https://dvcon-proceedings.org/document/path-based-upf-strategies-optimally-manage-power-on-your-designs

44. Pre-Silicon-Validation-of-Production-BIOS-Software-Use-Cases-and-Accelerator-IP-Workloads-using-Hybrid-System-Level-Emulation-SoC-Platform

Abstract- In this paper we present a methodology for validating software (SW) collaterals and SW use cases using a custom Hybrid System Level Emulation (Hybrid SLE) pre-silicon SoC platform. SW collaterals include production BIOS, OS, firmware (FW), drivers, and production fuses. SW use cases include platform reset, power management, and SW applications for core and accelerator IP workloads. This pre-silicon validation helps achieve high quality for Intel products, target A0 PRQ (Production Release Qualification), and reduce time to market. Keywords- BIOS, OS, SW Use Case, FW, Drivers, Accelerator IP Workloads, Hybrid SLE, Emulation, Validation, VP Simics

Download: https://dvcon-proceedings.org/document/pre-silicon-validation-of-production-bios-software-use-cases-and-accelerator-ip-workloads-using-hybrid-system-level-emulation-soc-platform

45. Problematic-Bi-Directional-Port-Connections-How-Well-is-Your-Simulator-Filling-the-UPF-LRM-Void

Abstract- The UPF LRM does not describe in detail the handling of bi-directional (inout) HDL port connections, which leaves the handling of commonly used connections open to the interpretation of EDA vendors. In this paper we present two common bi-directional supply port connections that are often the source of unexpected corruption, then discuss three possible methods for working around them. We then examine how well each of the three main EDA simulators supports the initial scenarios and the proposed workarounds. Finally, possible future enhancements to the UPF LRM to ensure well-defined behavior across all tools are discussed.

Download: https://dvcon-proceedings.org/document/problematic-bi-directional-port-connections-how-well-is-your-simulator-filling-the-upf-lrm-void

46. PSS-action-sequence-modeling-using-Machine-Learning

Abstract- In general, one of the most difficult aspects of ML is collecting data for learning, because the more learning data there is, the higher the prediction accuracy of ML. In this regard, we expected that the combination of PSS, which can easily generate numerous tests, and ML, which finds regularity in collected data, could create powerful synergy. Through PSS and ML, we were able to dramatically increase verification coverage by freely creating concurrent functional events that were previously considered impossible to reach at the simulation level.

Download: https://dvcon-proceedings.org/document/pss-action-sequence-modeling-using-machine-learning

47. Raising-the-level-of-Formal-Signoff-with-End-to-End-Checking-Methodology

Abstract - The use of formal verification has been steadily increasing thanks to the widespread adoption of automatic formal, formal applications, and assertion-based formal checking. However, to continue finding bugs earlier in the design process, we must advance formal verification beyond focusing on a handful of localized functionalities toward completely verifying all block-level design behaviors. An end-to-end formal testbench methodology allows the RTL designer and formal verification engineer to work in parallel, finishing design and verification with all functionality formally signed off as bug-free. Given that today's formal tools cannot close the end-to-end checkers required to verify complex IP blocks, we must rely on methodology to tackle design complexity in a way that allows the formal tool to converge within project time. This paper aims to demystify the end-to-end formal testbench methodology and discusses how we can reduce design complexity with functional decomposition and abstraction techniques.

Download: https://dvcon-proceedings.org/document/raising-the-level-of-formal-signoff-with-end-to-end-checking-methodology

48. Successive-Refinement-An-approach-to-decouple-Front-End-and-Back-end-Power-Intent

Abstract- The IEEE 1801 UPF [1] format has a limitation: it does not entirely support decoupling front-end and back-end power intent files. As many SoC projects in Intel march toward ASIC products on different process technologies, it becomes all the more important for designers to code power intent in a process-agnostic way. Therefore, IEEE 1801-2015 UPF (UPF 3.1) [2] introduced a methodology called Successive Refinement that supports incremental specification. This methodology enables incremental design and verification of the power management architecture, and it is specifically designed to support the specification of power management requirements for IP components used in a low-power design. This incremental flow accelerates design and verification of the power management architecture by partitioning the power intent into constraints, configuration, and implementation. In this paper, we present the Successive Refinement methodology implemented for an IOTG SoC, in which power intent is specified in a technology-independent manner and verified abstractly before implementation.

Download: https://dvcon-proceedings.org/document/successive-refinement-an-approach-to-decouple-front-end-and-back-end-power-intent

49. Systematic-Constraint-Relaxation-SCR-Hunting-for-Over-Constrained-Stimulus

Abstract- Modern verification environments rely heavily on the SystemVerilog (SV) constraint solver to generate legal stimulus [1]. Verification engineers write constraints based on design specifications to carve out a feasible region for stimulus that a Design Under Test (DUT) can support. The quality of such constraints often decides the quality of the testing being done. The definition of legal stimulus changes over the course of a project as new features are added to the design. It also changes across projects when certain old features are selectively enabled or disabled for a particular chip. Constraints typically get added on top of one another across projects, creating a significant burden of legacy code. In unit and integration level testbenches (TBs), the number of such constraints can be anywhere from 1K to 20K (or sometimes even more). Constraints are also added temporarily to prevent tests from hitting known checker issues or known design bugs; these constraints are meant to be removed once the checker issue or RTL bug is fixed. What if these constraints were accidentally not removed? What happens if constraints were added with an incorrect understanding of the specifications? It is also possible that constraints were coded with a correct understanding of the specification, but the specification later changed and the constraints were not updated. Under these situations, a TB can contain over-constraints. Unlike under-constraints, over-constraints do not cause test failures; they silently degrade coverage and possibly impact simulation performance. The questions then arise: How do we know if there are constraints which are over-constraining the feasible space and degrading the quality of the stimulus? How do we identify redundant constraints which do not affect the stimulus but cause performance degradation?
In this paper, we propose Systematic Constraint Relaxation (SCR), a technique that can automatically identify such over-constraints with minimal engineering effort. Some of these over-constraints can even escape functional coverage analysis.
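
A toy model of the relaxation loop may help fix the idea (the paper's SCR operates on SV constraints inside a simulator; everything below is an illustrative stand-in): drop one constraint at a time, re-enumerate the feasible space, and classify each constraint by whether its removal changes that space:

```python
# Hypothetical sketch of systematic constraint relaxation: remove one
# constraint at a time, enumerate the feasible stimulus space, and
# classify the constraint as redundant (space unchanged) or restricting
# (space grows). Constraint names and the stimulus model are invented.

from itertools import product

def feasible(space, constraint_fns):
    # All stimuli satisfying every constraint
    return {s for s in space if all(c(s) for c in constraint_fns)}

def classify(space, constraints):
    base = feasible(space, constraints.values())
    report = {}
    for name in constraints:
        others = [f for n, f in constraints.items() if n != name]
        relaxed = feasible(space, others)
        report[name] = "redundant" if relaxed == base else "restricting"
    return report

# Stimulus: (burst_len, addr_aligned) pairs
space = list(product(range(1, 9), [True, False]))
constraints = {
    "len_le_8": lambda s: s[0] <= 8,   # never binds here: redundant
    "aligned":  lambda s: s[1],        # genuinely restricts the space
    "len_le_4": lambda s: s[0] <= 4,   # candidate over-constraint
}
report = classify(space, constraints)
```

A "restricting" verdict is only a flag for review: the engineer still decides whether the restriction is a real specification limit or a stale over-constraint.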

Download: https://dvcon-proceedings.org/document/systematic-constraint-relaxation-scr-hunting-for-over-constrained-stimulus

50. SystemC-Virtual-Prototype-Ride-the-earliest-train-for-Time-To-Market

Abstract- As design complexity increases, it is evident that initial software development and validation must get the necessary head start so that major software collateral can be developed much earlier in the IC design cycle to meet aggressive time-to-market windows. This paper focuses on how the time between design conception and actual development can be used to develop a virtual prototype of the design, achieving the dual goals of first-silicon success and significant time-to-market reduction. It highlights how embedded software can run on a virtual model and be used for early driver development for multiple peripherals, software validation infrastructure, and testing frameworks. Unlike conventional software development and validation methodologies such as FPGA prototyping and emulation, which are hardware dependent, a virtual prototype can be developed independently of hardware, giving teams the opportunity to begin software development well before hardware development has even begun.

Download: https://dvcon-proceedings.org/document/systemc-virtual-prototype-ride-the-earliest-train-for-time-to-market

51. Test-Parameter-Tuning-with-Blackbox-Optimization-A-Simple-Yet-Effective-Way-to-Improve-Coverage

Abstract-

Download: https://dvcon-proceedings.org/document/test-parameter-tuning-with-blackbox-optimization-a-simple-yet-effective-way-to-improve-coverage

52. Two-stage-framework-for-corner-case-stimuli-generation-Using-Transformer-and-Reinforcement-Learning

Abstract- Constrained-Random Verification (CRV) is a common method to achieve full functional coverage by generating test cases randomly under meaningful constraint settings. Verification of some corner cases relies on a deep understanding of the DUT and component design from experts, and constraint creation and tuning are usually time-consuming tasks. Furthermore, some corner cases are "difficult" corner cases, meaning the targets are hard to hit even after experts tune the constraints many times. In this paper, we address the difficult corner-case verification of the FIFO-full condition. We introduce a two-stage process whose framework combines CRV and machine learning, including an attention mechanism and Reinforcement Learning (RL). Our experimental results demonstrate that we improve the corner-case hit rate by up to 380x for FIFO verification.
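
A greatly simplified stand-in for the learning stage (the paper uses a Transformer plus RL; the sketch below is only a hill-climbing illustration of reward-guided stimulus tuning toward the FIFO-full condition, with all parameters invented):

```python
# Hypothetical sketch: tune the push probability of random stimulus so a
# FIFO-full corner case is hit, using peak occupancy as the reward signal.

import random

def run_episode(push_prob, depth=8, cycles=50, seed=0):
    rng = random.Random(seed)
    occ = peak = hits = 0
    for _ in range(cycles):
        if rng.random() < push_prob and occ < depth:
            occ += 1          # push accepted
        elif occ > 0:
            occ -= 1          # otherwise pop
        peak = max(peak, occ)
        hits += occ == depth  # count cycles spent at FIFO-full
    return peak, hits

def tune(depth=8, steps=20):
    # Start from unbiased stimulus; push harder whenever an episode
    # fails to reach the full condition (reward shaped by how far the
    # peak occupancy fell short of the FIFO depth)
    push_prob = 0.5
    for step in range(steps):
        peak, hits = run_episode(push_prob, depth, seed=step)
        if hits == 0:
            push_prob = min(0.95, push_prob + 0.05 * (depth - peak) / depth)
    return push_prob

p = tune()
peak, hits = run_episode(p, seed=99)
```

The point of the sketch is only the feedback loop: measured coverage of the target condition feeds back into the stimulus parameters, which is the role RL plays in the paper's framework.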

Download: https://dvcon-proceedings.org/document/two-stage-framework-for-corner-case-stimuli-generation-using-transformer-and-reinforcement-learning

53. Using-Portable-Stimulus-Standards-Hardware-Software-Interface-PSS-HSI-to-validate-4G5G-Forward-Error-Correction-EncoderDecoder-IP-in-emulatio

Abstract - For effective validation, it is critical to have a test framework which can seamlessly scale from IP level to subsystem level to SoC level test cases. The PSS-2.0 standard introduces the Hardware Software Interface (HSI) as part of the core library, which provides a representation of registers. The HSI layer enables users to write a single implementation of device driver logic. In this paper, we discuss how we used the HSI layer to enable vertical reuse of test content across platforms.

Download: https://dvcon-proceedings.org/document/using-portable-stimulus-standards-hardware-software-interface-pss-hsi-to-validate-4g5g-forward-error-correction-encoderdecoder-ip-in-emulatio

54. What-Does-the-Sequence-Say-Powering-Productivity-with-Polymorphism

Abstract- In a SystemVerilog UVM testbench, a UVM sequence is much like a program, a function call, or a test. Writing interesting sequences can help with productivity and coverage closure. On one hand a sequence is simply a list of instructions, but on the other hand how those instructions are built, or how they are used with other instructions, can improve the test. This paper demonstrates easy ways to incorporate new transactions and sequences into a SystemVerilog UVM testbench.

Download: https://dvcon-proceedings.org/document/what-does-the-sequence-say-powering-productivity-with-polymorphism

55. Why-not-Connect-using-UVM-Connect-Mixed-Language-communication-got-easier-with-UVMC

Abstract- Today's designs often involve mixed languages such as SystemVerilog (SV) and SystemC (SC). This paper describes an easy method for integrating the two languages, using TLM connections made via UVM Connect (UVMC). Using a UVMC example [1], this paper demonstrates how to build, connect, and execute a verification simulation with SystemVerilog and SystemC.

Download: https://dvcon-proceedings.org/document/why-not-connect-using-uvm-connect-mixed-language-communication-got-easier-with-uvmc
