Coverage-Driven Verification Methodology

For the Easier UVM guidelines that relate to coverage-driven verification, see Functional Coverage.

Verification planning and management involves identifying the features of the DUT that need to be verified, prioritizing those features, measuring progress, and adjusting the allocation of verification resources so that verification closure can be reached on the required timescale. The mechanics of verification can be accomplished using static formal verification (also known as property checking), simulation, emulation, or FPGA prototyping. This discussion on coverage-driven verification in the context of UVM focuses on the simulation-based verification environment.

There are two contrasting approaches to coverage-driven verification in current use. "Classical" constrained random verification starts with random stimulus and gradually tightens the constraints until coverage goals are met, relying on the brute power of randomization and compute server farms to cover the state space. A more recent alternative, graph-based stimulus generation (also known as Intelligent Testbench), starts from an abstract description of the legal transitions between the high-level states of the DUT and automatically enumerates the minimum set of tests needed to cover the paths through this state space. For many applications, graph-based stimulus generation is able to achieve high coverage in far fewer cycles than "classical" constrained random. UVM directly supports constrained random, whereas graph-based stimulus generation requires a separate, dedicated tool, although stimulus generated by the graph-based approach can be executed on a UVM verification environment.
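As a minimal sketch of the constrained random style, the hypothetical sequence item below (all names are invented for illustration) declares baseline constraints that describe the legal stimulus space; a test can later tighten the distribution simply by extending the class:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class bus_item extends uvm_sequence_item;
    `uvm_object_utils(bus_item)

    rand bit [31:0]   addr;
    rand int unsigned burst_len;

    // Baseline constraints describing all legal stimulus
    constraint c_addr  { addr inside {[32'h0000_0000 : 32'h0000_FFFF]}; }
    constraint c_burst { burst_len inside {[1:16]}; }

    function new(string name = "bus_item");
      super.new(name);
    endfunction
  endclass

  // Tighter constraints layered on by inheritance, steering stimulus
  // toward one corner of the state space
  class corner_bus_item extends bus_item;
    `uvm_object_utils(corner_bus_item)

    constraint c_corner { addr >= 32'h0000_FF00; burst_len >= 12; }

    function new(string name = "corner_bus_item");
      super.new(name);
    endfunction
  endclass

A test could then swap corner_bus_item in for bus_item using a UVM factory type override, tightening the stimulus without touching the rest of the verification environment.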

Functional coverage and code coverage measure different things. Code coverage measures the execution of the actual RTL code (which must therefore exist before code coverage can be run at all). The collection of code coverage information, including statement and branch coverage, state coverage, and state transition coverage, is largely automatic. Code coverage can be considered as a quantitative measure of DUT code execution. Functional coverage, on the other hand, attempts to measure whether the features described in the verification plan have actually been exercised in the DUT. The features to be measured have to be "brainstormed" from the specification and implementation of the design to create the verification plan, and so functional coverage can be considered as a qualitative measure of DUT code execution. The number of features to be hit by functional coverage depends upon the diligence and thoroughness of the humans who draw up the verification plan.

It is quite possible to achieve 100% code coverage but only 50% functional coverage, for example when the design is half complete. Equally, it is possible to have 50% code coverage but 100% functional coverage, which might indicate that the functional coverage model is missing some key features of the design. The two coverage approaches are complementary, and high quality verification will benefit from both.

It is best practice to create a verification plan that consists of a list of features to be tested, as opposed to a list of directed test descriptions. All stakeholders in the verification process should contribute to the identification and prioritization of features in the verification plan, since this feature set forms the foundation for the subsequent verification process. Any feature missing from the verification plan severely undermines verification quality.

Features listed in the verification plan should be cross-referenced to the original design specification or to the RTL implementation. Features should be identified as being mission-critical, primary, secondary, optional, stretch goals, and so forth, and should be prioritized. This helps with requirements traceability and makes it easier to assess verification progress against broader product goals, for example by distinguishing the verification of mission-critical features from that of secondary features.

This verification plan is then implemented by converting each feature into code to collect coverage information, otherwise known as the coverage model. This can be a combination of sample-based coverage (SystemVerilog covergroups), property-based coverage (SystemVerilog cover property statements), and code coverage. Coverage-driven verification requires a significant change in mindset and practice when compared to directed testing. Instead of writing tests to exercise specific features, the features to be tested are fully enumerated in the coverage model, and tests serve only to steer the constrained random stimulus generation toward filling any coverage holes. Using coverage as the measure of successful execution of the verification plan results in a close association between the original specification and the output from the functional verification process. This association improves requirements traceability and makes it easier to respond to specification changes.
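As a hedged sketch of how individual features might be converted into coverage code, the fragment below pairs sample-based coverage (a covergroup) with property-based coverage (a cover property); the package, interface, signal names, and bins are all invented, standing in for features from a hypothetical verification plan:

  package bus_cov_pkg;
    // Sample-based coverage: bins derived directly from verification plan features
    covergroup cg_bus_features with function sample(bit [31:0] addr,
                                                    int unsigned burst_len);
      cp_addr : coverpoint addr {
        bins low  = {[32'h0000_0000 : 32'h0000_7FFF]};
        bins high = {[32'h0000_8000 : 32'h0000_FFFF]};
      }
      cp_burst : coverpoint burst_len { bins len[] = {[1:16]}; }
      // Feature: every burst length observed in both halves of the address map
      addr_x_burst : cross cp_addr, cp_burst;
    endgroup
  endpackage

  // Property-based coverage, placed in an interface (or bound to the DUT)
  interface bus_cov_if (input logic clk, input logic req, input logic grant);
    // Feature: a request is granted within 1 to 3 clock cycles
    cover_grant : cover property (@(posedge clk) req ##[1:3] grant);
  endinterface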

Effective verification planning and management relies on a combination of functional coverage, code coverage, and automated checking. Agent-specific functional coverage, implemented within subscribers connected to individual agents, can collect coverage data for individual interfaces. Coverage collector components that receive transactions from multiple agents can cross data from multiple DUT interfaces. In either case, coverage data collection can be triggered by the arrival of incoming transactions at subscriber and scoreboard components.
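For example, an agent-specific coverage collector might look like the sketch below, which reuses the invented bus_item transaction from the earlier fragment and follows the common idiom of copying the incoming transaction into a class member before sampling:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class bus_coverage extends uvm_subscriber #(bus_item);
    `uvm_component_utils(bus_coverage)

    bus_item m_item;  // Most recent transaction, referenced by the coverpoints

    covergroup cg_bus;
      cp_addr : coverpoint m_item.addr {
        bins low  = {[32'h0000_0000 : 32'h0000_7FFF]};
        bins high = {[32'h0000_8000 : 32'h0000_FFFF]};
      }
      cp_burst : coverpoint m_item.burst_len { bins len[] = {[1:16]}; }
    endgroup

    function new(string name, uvm_component parent);
      super.new(name, parent);
      cg_bus = new;  // An embedded covergroup must be constructed in new()
    endfunction

    // Called through the monitor's analysis port for every observed transaction
    function void write(bus_item t);
      m_item = t;
      cg_bus.sample();
    endfunction
  endclass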

The coverage model should be formally reviewed against the verification plan and the verification plan formally reviewed against the design specification before coverage collection starts.

A feature can only be considered covered when the design works correctly: Failed tests should not contribute toward the cumulative coverage record. Self-checking verification environments are critical to the coverage-driven verification methodology. Checks can be performed at all levels in the verification environment including assertions in SystemVerilog interfaces, end-to-end checking in scoreboards, and checks integrated with coverage collection.
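For instance, a signal-level protocol check could be written as a concurrent assertion inside a SystemVerilog interface, as in this sketch (the signals and the protocol rule itself are invented for illustration):

  interface bus_if (input logic clk, input logic reset_n);
    logic req;
    logic grant;

    // Protocol rule: once asserted, req must be held until grant is seen
    property p_req_held_until_grant;
      @(posedge clk) disable iff (!reset_n)
        req && !grant |=> req;
    endproperty

    a_req_held : assert property (p_req_held_until_grant)
      else $error("req deasserted before grant was received");
  endinterface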

Code for the stimulus generation, drivers, monitors, the coverage model and the checkers should be complete and debugged before starting to accumulate functional coverage information: Coverage information collected in the absence of comprehensive checking is meaningless.

During the verification process the same verification environment is executed repeatedly using different tests until the coverage goals are met. One test can differ from another by configuring or constraining the verification environment in a different way or merely by starting simulation with a different random seed. Tests can be graded according to a variety of criteria including functional coverage, bugs detected, and simulation run time. The best-performing tests, according to some combination of these criteria, can then be used as the basis for the regression test suite: typically, one selects the set of tests that achieves maximum coverage in minimum run time and discards any test that adds nothing to the coverage already achieved by that set. Towards the end of the verification process it is thus possible to arrive at a highly optimized regression test suite.

It is usual to run multiple tests in parallel on compute server farms. The coverage results from individual tests need to be merged together and annotated back onto the verification plan to form a cumulative record of progress. This needs to be done efficiently in order to provide up-to-date information for verification management. Coverage results from failing tests should be excluded from the merge.

Sufficient information should be captured from each test run to allow the simulation run to be reproduced. This information will include the random seed, the revision identifiers of the RTL code and the verification code, and all relevant software version numbers.

Even after extensive random testing, features in the verification plan can remain uncovered for a number of reasons, and the reasons need to be analyzed. Features could be unimplemented, unused or disabled in the product, inadvertently forgotten about, or just very hard to reach with random tests. After analysis, features that still remain uncovered can be reached by adding further specific tests. Where practical, it is preferable to adjust the knobs used in the constraints to tighten up random distributions rather than resorting to very specific directed tests, as sketched below.
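As an illustration of this knob-based approach, the hypothetical sequence below (again reusing the invented bus_item) biases, rather than forces, stimulus toward an assumed uncovered corner of the address space:

  import uvm_pkg::*;
  `include "uvm_macros.svh"

  class hole_filling_seq extends uvm_sequence #(bus_item);
    `uvm_object_utils(hole_filling_seq)

    // Knob: percentage of items weighted toward the uncovered corner
    int unsigned corner_weight = 80;

    function new(string name = "hole_filling_seq");
      super.new(name);
    endfunction

    task body();
      bus_item item;
      repeat (20) begin
        item = bus_item::type_id::create("item");
        start_item(item);
        // The dist weighting tightens the distribution while the baseline
        // constraints in bus_item continue to apply
        if (!item.randomize() with {
              addr dist { [32'h0000_FF00 : 32'h0000_FFFF] :/ corner_weight,
                          [32'h0000_0000 : 32'h0000_FEFF] :/ (100 - corner_weight) };
            })
          `uvm_error(get_type_name(), "randomize failed")
        finish_item(item);
      end
    endtask
  endclass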

Functional coverage helps identify:

  • Which features in the verification plan have been tested successfully
  • Which features in the verification plan have not yet been tested and thus require further work
  • What proportion of the features have been tested and thus how close the verification process is to completion
  • The set of tests that provides maximum coverage using the minimum number of CPU cycles
  • Whether previously fixed design bugs have been reintroduced into the design

(This can be contrasted with a traditional directed testing methodology in which the absence of further bugs being detected is taken as evidence that verification is nearing completion. Such a methodology can result in an over-optimistic view of the true state of the verification process.)

The essential steps in the coverage-driven verification process are as follows:

  1. Create the verification plan with the involvement of stakeholders
  2. Create the coverage model from the verification plan
  3. Debug the verification environment, checkers, and coverage model
  4. Run tests with multiple random seeds until cumulative coverage flattens off
  5. Annotate coverage results back onto the verification plan
  6. Run further tests with modified stimulus constraints to close coverage holes
  7. Analyze and prioritize any unverified features and allocate resources accordingly
  8. Run directed tests for particularly hard-to-reach coverage holes

 

Links

Easier UVM Coding Guidelines

Easier UVM - Deeper Explanations

Easier UVM Code Generator

Easier UVM Video Tutorial

Easier UVM Paper and Poster

Easier UVM Examples Ready-to-Run on EDA Playground

 
