
UVM Verification Primer

John Aynsley, Doulos, June 2010

True to the spirit of UVM, this tutorial was created by taking an existing tutorial on OVM and replacing the letters "OVM" with "UVM" throughout. Please let us know if you find any inconsistencies!

Introduction

UVM is a methodology for functional verification using SystemVerilog, complete with a supporting library of SystemVerilog code. The letters UVM stand for the Universal Verification Methodology. UVM was created by Accellera based on the OVM (Open Verification Methodology) version 2.1.1. The roots of these methodologies lie in the application of the languages IEEE 1800™ SystemVerilog, IEEE 1666™ SystemC, and IEEE 1647™ e.

UVM is a methodology for the functional verification of digital hardware, primarily using simulation. The hardware or system to be verified would typically be described using Verilog, SystemVerilog, VHDL or SystemC at any appropriate abstraction level. This could be behavioral, register transfer level, or gate level. UVM is explicitly simulation-oriented, but it can also be used alongside assertion-based verification, hardware acceleration or emulation.

If you currently run RTL simulations in Verilog or VHDL, you can think of UVM as replacing whatever framework and coding style you use for your testbenches. But UVM testbenches are more than traditional HDL testbenches, which might wiggle a few pins on the design-under-test (DUT) and rely on the designer to inspect a waveform diagram to verify correct operation. UVM testbenches are complete verification environments, composed of reusable verification components and used as part of an overarching methodology of constrained random, coverage-driven verification.

This tutorial outlines the basics of constrained random, coverage-driven verification.

If you are already familiar with these topics, you can jump straight to the next tutorial.

Turning Simulation into Verification

Simulation might be caricatured as the process of poking test vectors into a model of the design-under-test and observing how that model behaves. A traditional Verilog or VHDL testbench might contain processes to read raw vectors or commands from a file, use those to change the values of the wires connected to the DUT over time, and perhaps collect output from the DUT and dump it to another file. This is fine as far as it goes, but this process does not scale up well to support the reliable verification of very complex systems.
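
To make the caricature concrete, here is a minimal sketch of that traditional style, assuming a hypothetical DUT named dut with clk, in_data and out_data ports, and a raw vector file named vectors.txt:

    module tb;
      logic clk = 0;
      logic [7:0] in_data;
      wire  [7:0] out_data;

      dut u_dut (.clk(clk), .in_data(in_data), .out_data(out_data));  // hypothetical DUT

      always #5 clk = ~clk;

      logic [7:0] vectors [0:255];
      integer i, f;

      initial begin
        $readmemh("vectors.txt", vectors);   // read raw vectors from a file
        f = $fopen("results.txt", "w");
        for (i = 0; i < 256; i++) begin
          in_data = vectors[i];              // wiggle the pins of the DUT
          @(posedge clk);
          $fdisplay(f, "%h -> %h", in_data, out_data);  // dump output to another file
        end
        $fclose(f);
        $finish;
      end
    endmodule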

A good verification methodology starts with a statement of the function the DUT is intended to perform. From this is derived a verification plan, broken down feature-by-feature, and agreed in advance by all those with a specific interest in creating a working product. This verification plan is the basis for the whole verification process. Verification is only complete when every item on the plan has been tested to an acceptable level, where the meaning of "acceptable" and the priorities assigned to testing the various features have also been agreed in advance and are continually reviewed during the project.

Verification of complex systems should not be reliant on manual inspection of detailed waveforms and vector sets. Functional checking must be automated if the process is to scale well, as must the collection of verification metrics such as the coverage of features in the verification plan and the number of bugs found by each test. Along with the verification plan, automated checking and functional coverage collection and analysis are cornerstones of any good verification methodology, and are explicitly addressed by SystemVerilog and UVM. Checkers and a functional coverage model, linked back to the verification plan, take engineering time to create but result in much improved quality of verification.

All simulation-based verification suffers from the issue that you can never run enough test vectors to exhaustively test the whole design, or even any significant part of a complex design. One way to address this issue is using constrained random stimulus. The use of random stimulus brings two very significant benefits. Firstly, random stimulus is great for uncovering unexpected bugs, because given enough time and resources it can allow the entire state space of the design to be explored free from the selective biases of a human test writer. Secondly, random stimulus allows compute resources to be maximally utilised, with tests running in parallel on compute farms and overnight. Of course, pure random stimulus would be nonsensical, so adding constraints to make random stimulus legal is an important part of the verification process, and is explicitly supported by SystemVerilog and UVM.
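
To give a flavour of what constrained random stimulus looks like in SystemVerilog, the sketch below declares a transaction class with random fields; the fields and the legality rules are invented for illustration:

    class bus_tx;
      rand bit [7:0]  addr;
      rand bit [31:0] data;
      rand bit        write;

      // Constraints make the random stimulus legal: here, addresses above
      // 8'hF0 are assumed reserved, and reads from address 0 disallowed.
      constraint legal_addr { addr <= 8'hF0; }
      constraint legal_rd0  { !write -> addr != 0; }
    endclass

    module top;
      initial begin
        bus_tx tx = new;
        repeat (10) begin
          if (!tx.randomize()) $fatal(1, "randomization failed");
          $display("addr=%h data=%h write=%b", tx.addr, tx.data, tx.write);
        end
      end
    endmodule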

The best way to approach the verification process is to start with simple directed (non-random) tests to bring up the design, then move to fully random tests to explore the state space in a broad fashion and flush out as many bugs as possible with minimum human effort devoted to test writing. This will typically achieve much less than 100% functional coverage, and the remainder of the verification process is spent defining a series of tests, each of which constrains and shapes the random stimulus in a different way to push the design into interesting corner cases. The state space of a typical design is so vast that random stimulus alone is not enough to explore all the key use cases, yet directed or highly constrained tests can be too narrow to give good overall coverage. Constrained random stimulus is a compromise between the two extremes, but effective usage comes down to making a series of good engineering judgements. The solution is to use the priorities set in the verification plan to direct verification resources to the key areas.

Checkers, Coverage and Constraints

Constrained random verification relies on Checkers, Coverage and Constraints. Each of these "three C's" plays a key role in the verification process and is supported by explicit features of the SystemVerilog language.

Firstly, checkers ensure functional correctness. Nothing is gained by throwing more and more random stimulus into a design to take functional coverage to ever higher levels unless the design-under-test is being checked automatically for functional correctness. Checkers can be implemented using SystemVerilog assertions or using regular procedural code. Assertions can be embedded within the design-under-test, placed on the external interfaces, or can be part of the verification environment. UVM provides mechanisms and guidelines for building checkers into the verification environment and for logging reports.
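
Both styles can be sketched briefly. The protocol rule and signal names below are hypothetical, and the procedural checker assumes a simple transaction class such as the bus_tx sketched earlier:

    // A concurrent assertion on an interface: every req must be
    // followed by ack within 1 to 3 clock cycles.
    interface bus_if (input bit clk);
      logic req, ack;
      assert property (@(posedge clk) req |-> ##[1:3] ack)
        else $error("ack did not follow req within 3 cycles");
    endinterface

    // A procedural checker inside a UVM component, logging reports
    // through the UVM macros.
    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_checker extends uvm_component;
      `uvm_component_utils(my_checker)
      function new(string name, uvm_component parent);
        super.new(name, parent);
      endfunction
      function void check_tx(bus_tx actual, bus_tx expected);
        if (actual.data !== expected.data)
          `uvm_error("CHK", $sformatf("data mismatch: got %h, expected %h",
                                      actual.data, expected.data))
      endfunction
    endclass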

Secondly, coverage provides a measure of the functional completeness of the testing, and tells you when you've met the goals set out in the verification plan, and thus when you have finished simulating. SystemVerilog offers two separate mechanisms for functional coverage collection: property-based coverage (cover directives) and sample-based coverage (covergroups). Both can be used in a UVM verification environment. The specification and execution of the coverage model are intimately tied to the verification plan, and many simulation tools are able to annotate coverage information onto the verification plan document, facilitating tight management control.
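
The sketch below shows the two mechanisms side by side; the signals and bins are invented for illustration:

    module cov_example (input bit clk, input logic req,
                        input logic [7:0] addr, input logic write);

      // Property-based coverage: count occurrences of back-to-back requests.
      cover property (@(posedge clk) req ##1 req);

      // Sample-based coverage: bins over the address range, crossed
      // with the read/write direction.
      covergroup bus_cg @(posedge clk);
        coverpoint addr {
          bins low  = { [8'h00 : 8'h3F] };
          bins mid  = { [8'h40 : 8'hBF] };
          bins high = { [8'hC0 : 8'hFF] };
        }
        coverpoint write;
        dir_x_addr: cross addr, write;
      endgroup

      bus_cg cg = new;   // a covergroup must be instantiated before it samples
    endmodule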

Thirdly, constraints provide the means to reach coverage goals by shaping the random stimulus to push the design-under-test into interesting corner cases. Without shaping, random stimulus alone may be insufficient to exercise many of the deeper states of the design-under-test. Constrained random stimulus is still random, but the statistical distribution of the vectors is shaped to ensure that interesting cases are reached. SystemVerilog has dedicated language features for expressing constraints, and UVM goes further by providing mechanisms that allow constraints to be written as part of a test rather than embedded within dedicated verification components. This and other features of UVM facilitate the creation of reusable verification components.
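
One common UVM idiom for this, sketched below with invented names, is for a test to supply a subclass of a generic sequence item that adds a constraint, then install it through a factory type override, leaving the reusable component's source code untouched:

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class bus_item extends uvm_sequence_item;
      `uvm_object_utils(bus_item)
      rand bit [7:0] addr;
      function new(string name = "bus_item"); super.new(name); endfunction
    endclass

    // The test supplies a subclass that narrows the stimulus to a corner case.
    class corner_item extends bus_item;
      `uvm_object_utils(corner_item)
      constraint near_top { addr inside {[8'hE0 : 8'hF0]}; }
      function new(string name = "corner_item"); super.new(name); endfunction
    endclass

    class corner_test extends uvm_test;
      `uvm_component_utils(corner_test)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        // Every bus_item the environment creates will now be a corner_item.
        bus_item::type_id::set_type_override(corner_item::get_type());
      endfunction
    endclass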

Tests and Coverage

The features enumerated in the verification plan should be captured as a set of coverage statements that together form an executable coverage model. With many simulation tools, the verification plan will include references to the corresponding coverage statements, and as simulation runs, coverage data is back-annotated from the simulator onto the verification plan feature-by-feature. This provides direct feedback on the effectiveness of any given test. Holes in the coverage goals can be plugged by writing further tests. The verification plan itself is not part of UVM proper, but is a vital element in the verification process. UVM provides guidance on how to collect coverage data in a reusable manner.
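
One common convention, sketched below with invented feature names, is to label each coverpoint after a feature in the plan and to use the covergroup's built-in get_coverage() method to track progress toward the goal:

    module plan_cov (input bit clk, input logic [3:0] burst_len,
                     input logic [1:0] resp_kind);
      covergroup plan_cg @(posedge clk);
        option.goal = 100;                     // percent required for closure
        feature_burst : coverpoint burst_len;  // maps to a plan feature
        feature_resp  : coverpoint resp_kind;  // maps to a plan feature
      endgroup
      plan_cg cg = new;

      final $display("plan coverage = %0.1f%%", cg.get_coverage());
    endmodule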

With directed testing, tests are written with the purpose of pushing the design into specific states and exercising specific cases. With constrained random testing, the role of the tests shifts slightly. Although a constrained random test may be written with specific coverage goals in mind, it is not assumed before-the-fact that any particular test will actually test one feature rather than another. The constrained random test is run, and the coverage model is used to measure empirically which features the test did in fact exercise. Tests can be graded after-the-fact using the coverage data, and the most effective tests, that is, those that achieve the highest coverage in the fewest cycles, can be used to form the basis of a regression test set.

Engineering Effort

With constrained random verification, the focus of the engineering effort changes from writing directed tests to building automated checkers and an executable coverage model. Random stimulus then enables compute resources to be fully utilised in the pursuit of hitting coverage goals. The total number of man-hours dedicated to verification will not necessarily decrease, but verification quality will be dramatically improved, and the verification process will become far more transparent and predictable, both to the verification team itself and to outside observers. Automated coverage collection gives accurate feedback on the progress of the verification effort, and the emphasis on verification planning ensures that resources are focussed on achieving agreed goals.

Verification Reuse

UVM facilitates the construction of verification environments and tests, both by providing reusable machinery in the form of a library of SystemVerilog classes, and also by providing a set of guidelines for best practice when using SystemVerilog for verification.

Verification productivity can be enhanced by reusing verification components, and this is an important objective of UVM. Verification reuse is enabled by having a modular verification environment where each component has clearly defined responsibilities, by allowing flexibility in the way in which components are configured and used, by having a mechanism to allow imported components to be customized to the application at hand, and by having well-defined coding guidelines to ensure consistency.

The architecture of UVM has been designed to encourage modular and layered verification environments, where verification components at all layers can be reused in different environments. Low-level driver and monitor components can be reused across multiple designs-under-test. The whole verification environment can be reused by multiple tests and configured top-down by those tests. Finally, test scenarios can be reused from application to application. This degree of reuse is enabled by UVM verification components being configurable in a very flexible way without modification to their source code. This flexibility is built into the UVM class library.
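
The UVM configuration database is one of the mechanisms behind this flexibility. In the sketch below (with invented names), a test reconfigures a component buried deep in the environment without touching that component's source code:

    import uvm_pkg::*;
    `include "uvm_macros.svh"

    class my_driver extends uvm_component;
      `uvm_component_utils(my_driver)
      int unsigned num_txns = 10;   // default, overridable from above
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        // Pick up a value if one has been set anywhere above this component.
        void'(uvm_config_db#(int unsigned)::get(this, "", "num_txns", num_txns));
      endfunction
    endclass

    class long_test extends uvm_test;
      `uvm_component_utils(long_test)
      function new(string name, uvm_component parent); super.new(name, parent); endfunction
      function void build_phase(uvm_phase phase);
        super.build_phase(phase);
        // Configure every matching component beneath the test, top-down.
        uvm_config_db#(int unsigned)::set(this, "*", "num_txns", 1000);
      endfunction
    endclass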

Next Steps

In the next tutorial, you will be taken through a very simple, complete example of a UVM application.

Next: From OVM to UVM: Getting Started with UVM
