
Coverage-Driven Verification Methodology



For the Easier UVM guidelines that relate to coverage-driven verification, see Functional Coverage.

Verification planning and management involves identifying the features of the DUT that need to be verified, prioritizing those features, measuring progress, and adjusting the allocation of verification resources so that verification closure can be reached on the required timescale. The mechanics of verification can be accomplished using static formal verification (also known as property checking), simulation, emulation, or FPGA prototyping. This discussion on coverage-driven verification in the context of UVM focusses on the simulation-based verification environment.

There are two contrasting approaches to coverage-driven verification in current use. "Classical" constrained random verification starts with random stimulus and gradually tightens the constraints until coverage goals are met, relying on the brute power of randomization and compute server farms to cover the state space. More recently, graph-based stimulus generation (also known as Intelligent Testbench) starts from an abstract description of the legal transitions between the high-level states of the DUT, and automatically enumerates the minimum set of tests needed to cover the paths through this state space. For many applications, graph-based stimulus generation is able to achieve high coverage in far fewer cycles than "classical" constrained random. UVM directly supports constrained random, whereas graph-based stimulus generation requires a separate, dedicated tool. Stimulus generated from the graph-based approach can be executed on a UVM verification environment.
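
The "classical" constrained random approach is built on SystemVerilog constraints attached to UVM sequence items. The sketch below shows the general idea; the class, field, and constraint names (bus_item, addr, kind, and so on) are purely illustrative and do not belong to any particular DUT or protocol.

    // Hypothetical sequence item used to illustrate constrained random stimulus
    class bus_item extends uvm_sequence_item;
      `uvm_object_utils(bus_item)

      typedef enum {READ, WRITE} kind_e;

      rand kind_e     kind;
      rand bit [31:0] addr;
      rand bit [31:0] data;

      // Constraints describe the legal stimulus space; individual tests can
      // layer further constraints on top to steer generation toward
      // particular areas of the state space
      constraint c_addr_range    { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }
      constraint c_mostly_writes { kind dist { WRITE := 3, READ := 1 }; }

      function new(string name = "bus_item");
        super.new(name);
      endfunction
    endclass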

Functional coverage and code coverage measure different things. Code coverage measures the execution of the actual RTL code (which must therefore exist before code coverage can be run at all). The collection of code coverage information, including statement and branch coverage, state coverage, and state transition coverage, is largely automatic. Code coverage can be considered as a quantitative measure of DUT code execution. Functional coverage, on the other hand, attempts to measure whether the features described in the verification plan have actually been executed by the DUT. The features to be measured have to be "brainstormed" from the specification and implementation of the design to create the verification plan, and so functional coverage can be considered as a qualitative measure of DUT code execution. The number of features to be hit by functional coverage depends upon the diligence and thoroughness of the humans who draw up the verification plan.

It is quite possible to achieve 100% code coverage but only 50% functional coverage, for example when the design is half complete. Equally, it is possible to have 50% code coverage but 100% functional coverage, which might indicate that the functional coverage model is missing some key features of the design. The two coverage approaches are complementary, and high quality verification will benefit from both.

It is best practice to create a verification plan that consists of a list of features to be tested, as opposed to a list of directed test descriptions. All stakeholders in the verification process should contribute to the identification and prioritization of features in the verification plan, since this feature set forms the foundation for the subsequent verification process. Any features missing from the verification plan will severely undermine verification quality.

Features listed in the verification plan should be cross-referenced to the original design specification or to the RTL implementation. Features should be identified as mission-critical, primary, secondary, optional, stretch goals, and so forth, and should be prioritized accordingly. This helps with requirements traceability and makes it easier to assess verification progress against broader product goals, for example by distinguishing between the verification of mission-critical features and that of secondary features.

This verification plan is then implemented by converting each feature into code to collect coverage information, otherwise known as the coverage model. This can be a combination of sample-based coverage (SystemVerilog covergroups), property-based coverage (SystemVerilog cover property statements), and code coverage. Coverage-driven verification requires a significant change in mindset and practice when compared to directed testing. Instead of writing tests to exercise specific features, the features to be tested are fully enumerated in the coverage model, and tests serve only to steer the constrained random stimulus generation toward filling any coverage holes. Using coverage as the measure of successful execution of the verification plan results in a close association between the original specification and the output from the functional verification process. This association improves requirements traceability and makes it easier to respond to specification changes.
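
As an illustration, the coverage model fragment below combines a sample-based covergroup with a property-based cover statement. It reuses the hypothetical bus_item class sketched earlier, and the interface name and signals (bus_if, clk, req, gnt) are likewise invented; in practice the coverpoints, bins, and properties would be derived directly from features in the verification plan.

    // Sample-based coverage: a covergroup with an explicit sample() argument,
    // so it can be sampled wherever a transaction becomes available
    covergroup bus_cg with function sample(bus_item t);
      cp_kind : coverpoint t.kind;
      cp_addr : coverpoint t.addr {
        bins low   = {[32'h0000_0000 : 32'h0000_00FF]};
        bins high  = {[32'h0000_FF00 : 32'h0000_FFFF]};
        bins other = default;
      }
      // Feature: every kind of transfer observed in every address region
      kind_x_addr : cross cp_kind, cp_addr;
    endgroup

    // Property-based coverage inside a SystemVerilog interface:
    // feature "a request held for two cycles is granted within three further cycles"
    interface bus_if (input logic clk);
      logic req, gnt;
      cover property (@(posedge clk) req ##1 req ##[1:3] gnt);
    endinterface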

Effective verification planning and management relies on a combination of functional coverage, code coverage, and automated checking. Agent-specific functional coverage, implemented within subscribers connected to individual agents, can collect coverage data for individual interfaces. Coverage collector components that receive transactions from multiple agents can cross data from multiple DUT interfaces. In either case, coverage data collection can be triggered by the arrival of incoming transactions at subscriber and scoreboard components.
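
Such a coverage collector might look like the sketch below, which wraps the hypothetical bus_cg covergroup from the previous example in a uvm_subscriber so that sampling is triggered by transactions arriving from a monitor's analysis port. All names are illustrative.

    // Agent-level coverage collector: connected to a monitor's analysis port,
    // it samples the coverage model each time a transaction arrives
    class bus_coverage extends uvm_subscriber #(bus_item);
      `uvm_component_utils(bus_coverage)

      bus_cg m_cov;  // the covergroup sketched above

      function new(string name, uvm_component parent);
        super.new(name, parent);
        m_cov = new();
      endfunction

      // Called automatically whenever the connected monitor broadcasts a transaction
      function void write(bus_item t);
        m_cov.sample(t);
      endfunction
    endclass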

The coverage model should be formally reviewed against the verification plan and the verification plan formally reviewed against the design specification before coverage collection starts.

A feature can only be considered covered when the design works correctly: failed tests should not contribute toward the cumulative coverage record. Self-checking verification environments are critical to the coverage-driven verification methodology. Checks can be performed at all levels in the verification environment including assertions in SystemVerilog interfaces, end-to-end checking in scoreboards, and checks integrated with coverage collection.
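
For example, a protocol rule can be checked with a concurrent assertion placed inside a SystemVerilog interface, independently of any end-to-end checking performed in a scoreboard. The fragment below would sit inside the illustrative bus_if interface sketched earlier; the rule itself is invented for illustration.

    // Protocol check: a grant must never appear without a request having
    // been asserted on the previous clock cycle
    assert_gnt_has_req : assert property (@(posedge clk) gnt |-> $past(req))
      else $error("Grant asserted without a preceding request");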

Code for the stimulus generation, drivers, monitors, the coverage model and the checkers should be complete and debugged before starting to accumulate functional coverage information: coverage information collected in the absence of comprehensive checking is meaningless.

During the verification process the same verification environment is executed repeatedly using different tests until the coverage goals are met. One test can differ from another by configuring or constraining the verification environment in a different way or merely by starting simulation with a different random seed. Tests can be graded according to a variety of criteria including functional coverage, bugs detected, and simulation run time. The best-performing tests, according to some combination of these criteria, can then be used as the basis for the regression test suite: you would typically select a set of tests that achieve the maximum coverage in the minimum run time, and discard any tests that do not add to the functional coverage achieved by the best set of tests. Towards the end of the verification process it is possible to arrive at a highly optimized regression test suite.
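
For instance, one test might simply re-run an existing test with a different random seed, while another might reshape the stimulus by using the factory to substitute a more tightly constrained variant of the illustrative bus_item class. The sketch below shows the latter; base_test is assumed to exist and, like the other names, is hypothetical.

    // A constrained subclass that confines stimulus to one corner of the address space
    class high_addr_item extends bus_item;
      `uvm_object_utils(high_addr_item)
      constraint c_high_only { addr inside {[32'h0000_FF00 : 32'h0000_FFFF]}; }
      function new(string name = "high_addr_item");
        super.new(name);
      endfunction
    endclass

    // A test that differs from the base test only in the stimulus it generates
    class high_addr_test extends base_test;
      `uvm_component_utils(high_addr_test)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        // Every bus_item created via the factory now obeys the extra constraint
        bus_item::type_id::set_type_override(high_addr_item::get_type());
      endfunction
    endclass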

It is usual to run multiple tests in parallel on compute server farms. The coverage results from individual tests need to be merged together and annotated back onto the verification plan to form a cumulative record of progress. This needs to be done efficiently in order to provide up-to-date information for verification management. Coverage results from failing tests should be excluded from the merge.

Sufficient information should be captured from each test run to allow the simulation run to be reproduced. This information will include the random seed, the revision identifiers of the RTL code and the verification code, and all relevant software version numbers.

Even after extensive random testing, features in the verification plan can remain uncovered for a number of reasons, and those reasons need to be analyzed. Features could be unimplemented, unused or disabled in the product, inadvertently forgotten about, or just very hard to reach with random tests. After analysis, features that still remain uncovered can be reached by adding further specific tests. Where practical, it is preferable to adjust the knobs used in the constraints to tighten up the random distributions rather than resorting to very specific directed tests.
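
One common way to do this is to expose a knob on the stimulus rather than writing a hard-coded directed test. In the hypothetical sketch below, a weight read from the configuration database biases the distribution of the kind field of the illustrative bus_item, so a test can skew the random stimulus toward an uncovered area while keeping everything else random.

    // Illustrative sequence whose distribution is controlled by a knob
    class biased_seq extends uvm_sequence #(bus_item);
      `uvm_object_utils(biased_seq)

      int read_weight = 1;  // knob: higher values produce proportionally more READs

      function new(string name = "biased_seq");
        super.new(name);
      endfunction

      task body;
        // A test can override the knob, e.g.
        //   uvm_config_db #(int)::set(null, "*", "read_weight", 10);
        void'(uvm_config_db #(int)::get(null, get_full_name(), "read_weight", read_weight));
        repeat (20) begin
          req = bus_item::type_id::create("req");
          start_item(req);
          if (!req.randomize() with { kind dist { READ := read_weight, WRITE := 1 }; })
            `uvm_error(get_type_name(), "randomize failed")
          finish_item(req);
        end
      endtask
    endclass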

Functional coverage helps identify:

  • Which features in the verification plan have been tested successfully
  • Which features in the verification plan have not yet been tested and thus require further work
  • What proportion of the features have been tested and thus how close the verification process is to completion
  • The set of tests that provide maximum coverage using the minimum number of CPU cycles
  • Confidence that previously fixed design bugs have not been reintroduced into the design

(This can be contrasted with a traditional directed testing methodology in which the absence of further bugs being detected is taken as evidence that verification is nearing completion. Such a methodology can result in an over-optimistic view of the true state of the verification process.)

The essential steps in the coverage-driven verification process are as follows:

  1. Create the verification plan with the involvement of stakeholders
  2. Create the coverage model from the verification plan
  3. Debug the verification environment, checkers, and coverage model
  4. Run tests with multiple random seeds until cumulative coverage flattens off
  5. Annotate coverage results back onto the verification plan
  6. Run further tests with modified stimulus constraints to close coverage holes
  7. Analyze and prioritize any unverified features and allocate resources accordingly
  8. Run directed tests for particularly hard-to-reach coverage holes



Links

Easier UVM Coding Guidelines
Introduction to the Easier UVM Coding Guidelines
Summary of the Easier UVM Coding Guidelines
Detailed Explanation of the Easier UVM Coding Guidelines
Easier UVM Glossary
Easier UVM Coding Guidelines - Download

Easier UVM - Deeper Explanations
Coverage-Driven Verification Methodology
Requests, Responses, Layered Protocols and Layered Agents
How to Access a Parameterized SystemVerilog Interface from UVM

Easier UVM Code Generator
Easier UVM Code Generator - Download
Easier UVM Code Generator - Tutorial Part 1: Getting Started
Easier UVM Code Generator - Tutorial Part 2: Adding User-Defined Code
Easier UVM Code Generator - Tutorial Part 3: Adding the Register Layer
Easier UVM Code Generator - Tutorial Part 4: Hierarchical Verification Environments
Easier UVM Code Generator - Tutorial Part 5: Split Transactors
Easier UVM Code Generator - Frequently Asked Questions (FAQ)
Easier UVM Code Generator - Reference Guide

Easier UVM Video Tutorial
Introducing Easier UVM
Easier UVM - The Big Picture
Key Concepts of the Easier UVM Code Generator
Running Easier UVM in EDA Playground
Easier UVM - Components and Phases
Easier UVM - Configuration
TLM Connections in UVM
Easier UVM - Transaction Classes
Easier UVM - Sequences
Easier UVM - Tests
Easier UVM - Reporting
Easier UVM - Register Layer
Easier UVM - Parameterized Interfaces
Easier UVM - Scoreboards
The Finer Points of UVM Sequences (Recorded Webinar)
UVM Run-Time Phasing (Recorded Webinar)

A YouTube playlist with all the above videos and more

Easier UVM Paper and Poster
Easier UVM - Coding Guidelines and Code Generation - as presented at DVCon 2014

Easier UVM Q&A Forum
Easier UVM Google Group

Easier UVM Examples Ready-to-Run on EDA Playground
Minimal example with driver
Minimal example with coverage in a subscriber as well as driver and monitor
Minimal example with register sequence and register block
Example with four interfaces/agents, two of which use a register model
Minimal example with dual-top modules and split transactors
Minimal example showing a UVM sequence getting information from the config database
Minimal example showing features of objections and the command line processor
Minimal example showing the reporting features of UVM
Example that drops an objection when coverage exceeds some threshold
Example that sends a response transaction from the driver back to the uvm_reg_adapter
Example that uses a frontdoor sequence to pass a response object back to the register sequence that called read/write
Example of a parameterized interface generated from an Easier UVM interface template file
Example that pulls in a user-defined parameterized interface
Example of a reference model with the Syosil scoreboard

