Building a verification test plan: trading brute force for finesse
Main Authors:
Format: Conference Paper
Language: English
Subjects:
Online Access: Order full text
Abstract: The increasing complexity of today's designs has only intensified the pain of functional verification. Any strategy for success must include a verification test plan, one that trades brute force for finesse. In so doing, not only is the pain reduced, but additional benefits quickly follow, such as greater predictability, more aggressive innovation, and the ability to make late-stage spec changes with confidence. Verification teams need to know, while a design is being verified, whether their efforts are progressing on schedule. To declare verification done, the team must be able to effectively assess the risk of bug escapes. Teams typically use a verification coverage plan to address these issues, but increasing complexity and IP integration place rising demands on verification teams. In the past, test plans described the scenarios under which a device needed to be tested. With the changes imposed by emerging technologies, test plans have had to expand and evolve. While coverage is at the core of functional verification, it remains a topic of much discussion and debate. What coverage metrics are in use today, and do they properly address the functional verification challenge? How do you begin to integrate multiple verification processes into a single test plan? This panel attempts to address these important questions. As chip complexity and IP integration have increased, companies have begun testing at higher levels of abstraction, moving more and more to ESL techniques to better understand overall system functionality. Industry-leading companies have embraced formal verification, emulation, and acceleration in the drive for greater quality and schedule predictability. But what value do these new verification processes bring, and what usage and integration challenges do they pose? In this panel, users and suppliers debate the optimal mix of formal verification, simulation, hardware acceleration, and emulation, examining ways to ensure new features aren't dropped pre-tapeout due to "inadequate verification". It is no longer just a test plan discussion, but an examination of how test plans will need to change given emerging technologies.
ISSN: 0738-100X
DOI: 10.1109/DAC.2006.229328