Electronic Design

SoC Designers: Learn The What, Why, And How Of Transactions

System-on-a-chip (SoC) platforms are heterogeneous entities. They typically contain at least one processing element, such as a microprocessor or DSP, along with peripherals, random logic, embedded memory, communication infrastructure, and external interface components like sensors and actuators. These diverse design platforms are moving the design focus and tradeoff analysis toward communication aspects.

Because the functional units in an SoC often communicate through several standard and proprietary bus-based protocols, understanding interblock communication has become a key aspect of verification. This shift toward communication-infrastructure design also promotes the adoption of mixed-level modeling and debug technologies. These techniques promise to empower designers to fully embrace the move from RTL to the higher transaction level, and to do so without disrupting current functional-verification methodologies.

Given the complexity of modern protocols, understanding synchronization interactions at the detailed signal level is hard and time-consuming. Moreover, because different teams and individuals tackle the design as it's refined from specification to implementation, there's a need for a common frame of reference among and within design teams.

The representation must be flexible enough to accommodate multiple application domains. It also must be amenable to both abstraction and refinement, so it can evolve with the design as it's developed top-down or configured bottom-up. Transaction-level modeling (TLM) serves as this intermediate modeling abstraction stage, bridging the gap between the top and bottom layers.

Why And What Of Transactions
The current SoC design flow is a mix of top-down creation from specification to implementation, and bottom-up integration of design and verification intellectual property (IP) from external providers or reused in-house blocks. Transactions can be employed as a signoff refinement specification that bridges the gap between the various design modeling layers. These range from untimed, purely functional modeling, typically done in a high-level language; through the functional level with approximate timing from architectural estimation; down to implementation-level, cycle-accurate RTL.

In addition, TLM can serve as a common description medium between system engineers and specialized block developers, one that cuts across the different languages suited to any particular design or verification activity. Transactions then become a formalism in which architectural exploration and tradeoff analysis can be performed. They're also a means of automating the design understanding and debug process by analyzing system functional validity and performance metrics such as overall throughput and block- and memory-interaction latencies.

Figure 1 shows the abstraction layers from the algorithmic down to the implementation levels. Several functional modeling and verification languages (and standards) are typically used in design, including SystemC (IEEE P1666), SystemVerilog (IEEE P1800), and e (IEEE P1647). Also, leading methodology best practices, such as those distilled in the reference verification methodology (RVM) guidelines of OpenVera and the e reuse methodology (eRM) recommendations of e, provide constructs for algorithmic, architectural, and transaction-level modeling.

TLM is quite a natural exercise: it takes every functional thread of the design and describes it. The focus is on the how, particularly of the communication interaction, not the what of the function. Transactions provide a temporal abstraction and spatial encapsulation of implementation details, an embodiment of initially focusing on the communication infrastructure rather than the functional components. The advantage of such abstract modeling is validation efficiency. Indeed, the uses of transactions are becoming increasingly extensive, varied, and mainstream.

Transactions capturing the synchronized transfers between and among blocks are also destined to become the workhorse of tradeoff analysis. Therefore, modeling and recording transactions are crucial for leveraging the advanced transaction-based verification and debug techniques needed to improve development efficiency and design quality.

How To Model In And Use Transactions
High-level languages (HLLs) such as SystemC, hardware verification languages (HVLs) such as OpenVera and e, and combined testbench and design languages such as SystemVerilog have varying degrees of native support for transactions. SystemC (www.systemc.org) leads the pack: through the SCV library, users can create transactions within the modeling language and record them into a database, which can be the same one sc_trace() writes to. The SCV library offers a collection of useful predefined classes, including the following three major classes of the recording facility:

  • scv_tr_db: The transaction database object that permits users to control the recording. This object is generic and database-format independent. Third-party recording API providers can map its underlying services to their own database scheme.
  • scv_tr_stream: The transaction stream modeling object. A stream is an abstract communication medium on which transactions, including overlapped ones, can occur, such as a memory stream with read and write transactions. A stream can therefore be thought of as an abstract signal, where transactions are abstract values taken by the signal; for example, the address or data stream of a bus.
  • scv_tr_generator: The object that surrounds a particular transaction type and allows for its creation and buildup of attributes, which can be anything from design signals and messages to generic payload data.
The code snippet below shows how transactions can be created in a relatively straightforward manner using SCV. The comments before each segment indicate the purpose of the statements that follow. Transactions can be recorded into the database seamlessly, without direct user intervention: tool vendors can register callbacks through the registration mechanism provided in the three classes above to implement the recording function. The user need only add some initialization calls.

    // Inside sc_main() or some other context
    
    // SCV startup
    scv_startup();
    
    // Initialization
    API_vendor_initialization(); // set SCV callbacks here
    scv_tr_db db("my_db");
    scv_tr_db::set_default_db(&db);
    
    // Define a stream and a generator
    // (tr_data is an instance of a user-defined record type tr_data_t)
    scv_tr_stream mem_stream("memory", "transactor");
    scv_tr_generator<tr_data_t> read_gen("read", mem_stream, "mem");
    scv_tr_handle tr_handle;
    
    // Modeling code here
    // ...
    
    // Transaction begin with a tr_data attribute
    tr_data.addr = addr_signal;
    tr_data.data = data_signal;
    tr_handle = read_gen.begin_transaction(tr_data);
    
    // Transaction end
    read_gen.end_transaction(tr_handle);
    
    // Other modeling code here
    // ...
    

SCV also has many other classes; for instance, scv_tr_relation creates a relationship between different transactions. Relationships are quite useful in analysis and debug when determining causality (predecessor-successor), hierarchy (parent-child), and aggregation (composition).

OpenVera (www.open-vera.org), by virtue of being an object-oriented (OO) modeling language, can easily accommodate the encapsulation concepts of TLM. The language doesn't currently have built-in transaction classes like those of SCV, yet it's possible to create classes for this purpose, such as the following minimal set:

  • trans_db: For the database.
  • trans_stream: The transaction stream modeling object.
  • trans_type (or generator): For creating transactions and their attributes.
  • trans_handle: For easy manipulation of handles.
The following simple code snippet models the transactor shown in Figure 2. The classes also record the transaction data into the transaction database, shown below as trans_db instance dump_file. The output is displayed in the rightmost part of Figure 3.

    // Inside program or some other OpenVera context
    trans_db       dump_file;
    trans_stream   stream1;
    trans_type     mem_read;
    trans_handle   h1; 
        
    // open a database file
    dump_file=new("test");
        
    // create the memory stream under the test.duv.bus scope
    stream1=new(dump_file, "test.duv.bus", "memory");
        
    // create the read transaction type in the memory stream
    mem_read=new(stream1, "Read");
        
    // define 2 attributes in the read transaction type
    mem_read.create_attr("Addr", INTEGER_DT);
    mem_read.create_attr("Data", INTEGER_DT);
    
    delay(10);
    // begin a memory read transaction at 10
    h1=mem_read.begin_now();
    h1.log_integer_attr("Addr", 170);
    h1.log_integer_attr("Data", 123);
    
    delay(20);
    // end h1 transaction at 20
    h1.end_now();
    
    // close the database file
    dump_file.close();

Of course, as mentioned earlier for SCV, another useful class is trans_relation, which accounts for relationships between and among the different transactions. The code snippet below shows how the class is exercised; here, trans_relation is unidirectional. An analysis and visualization tool also can have predefined relationships to get a particular representation for analysis and display.

    trans_relation r1;
    trans_relation r2;
    trans_handle h1;
    trans_handle h2;
    
    r1=new(dump_file, "parent");
    r2=new(dump_file, "child");
    ...
    h1.add_relation(r1, h2); // h2 is the parent of h1
    h2.add_relation(r2, h1); // h1 is the child of h2

To make the recording possible, the classes must wrap a transaction trace-and-record API that hides the database recording details from the user. The API must be open, to foster a communal, value-added culture between engineers and vendors alike. It also must be amenable to use across languages when creating, generating, and recording transactions, giving designers a high-value utility. And it must be simple and basic, yet broad enough to cover the essential elements of transaction recording. Further specialization of the API can be built on top for particular (modeling-language-specific) incarnations.

The authors developed such an API, called the Open Transaction Interface (OTI). It's a layer built on top of Novas' FSDB writer interface. The API is written in C for broad portability. In fact, the classes listed earlier are part of an OpenVera transaction library written in-house. They form a wrapper around the OTI and interface with OpenVera using DirectC. Users can call the OTI directly from any location in their code using DirectC. But, in general, it's a good idea to supply a shell around it that hides the interfacing requirements of the modeling language, so users can focus on the modeling and ignore the details of the database recording.

APIs similar to OTI can be made readily available to designers who wish to develop and record transaction data in any language that can interface to C. This includes the other popular HVLs, such as e (www.ieee1647.org). In fact, the API also can be used in HDLs with appropriate wrapper system tasks, and with software C/C++ code to directly dump transaction data from the implementation. Figure 4 shows a system with all of these components.

SystemVerilog (version 3.1a) (www.systemverilog.org) is an up-and-coming design modeling and testbench language. It currently doesn't have built-in transaction classes as SystemC does, yet its very nature revolves around abstracting and encapsulating Verilog wire and reg data into more meaningful grouped data. This warrants a look at how to model and then record transaction data. In fact, there are plenty of ways to do this within SystemVerilog:

a) Wait for standardization efforts to add built-in classes. This might happen eventually, but SystemVerilog is available today. So why wait for the construct, and then again for the vendors to support it?
b) Create your own transaction classes and methods (tasks and functions), then perhaps donate them back to the community as a sharable library.
c) Integrate with SystemC through the direct programming interface (DPI) or the programming language interface (PLI).
d) Call system tasks to do the work at opportune places in the modeling.

We believe option (d) is a very good starting point for designers. It covers many of the demands of the interface and gets the job done in a simple, straightforward, and quite flexible manner. Designers can call SystemVerilog tasks or C/C++ routines through the PLI or, better yet, through the DPI. Like DirectC, the DPI is a mechanism designed for easy interfacing with external untimed blocks written in C or C++.

So, two questions arise. What are the transaction objects? And how do designers create, generate, and record transactions and develop the API? Our answer is, again, a simple API facility, such as the OTI shown in Figure 3. It hides the implementation particulars and offers a sound and complete foundation for transaction recording.

More Automation To Come
The transaction recording discussed so far is quite useful and efficient. But it's still effectively manual: users must execute the transaction modeling and invoke the database recording. As standards mature and tool vendors come together, the community can develop additional automated generation and recording. For example, users can, and already do, use the transactor classes within Vera RVM (similarly, sequences in e) for transaction generation.

Therefore, the immediate automation would be to use the callback facilities within these constructs, so that users needn't model the transactions separately. They can simply extend the provided base class and automatically get the required recording routines.

Furthermore, we find that SoCs typically comprise many blocks, including legacy IP. The modeling languages for both design and verification also are quite varied, as shown in Figure 3. So, a complete SoC forms a hard-to-decipher soup of data. In this case, it would be great if there were some way for engineers to extract transactions out of the available data to better understand the system operation. We'll leave that for a future discussion.

Modeling, verification, and debug require a unified notation and framework for architects and implementers to work jointly on the design and development of complex SoCs. TLM is the ideal model for this educated analysis. Designers should dig deeper into the details of transaction modeling and use it to reap the benefits of both efficient cosimulation and productive analysis and debug.

Acknowledgements: Many thanks go to Luke Lin from Novas R&D, who developed the OpenVera transaction recording library OTI wrapper.
