Source: EETOP Forum
Link: http://bbs.eetop.cn/thread-662712-1-1.html
Asked by user "Like water like smoke" on the EETOP forum verification sub-forum:
From what angle should a verification engineer read the spec?
During development, the designer and the verifier clearly focus on different things, especially when it comes to understanding the spec; verifiers often need to form their own independent understanding. When you get the spec, how should you, as a verifier, distill the function points and turn them into a corresponding reference model, so that you can cross-check the detailed design? What experience can you share?
The following are the replies from other forum users:
jimbo1006:
I think when verifiers look at the feature points in the spec, they need to focus on the inputs, the outputs, and the time it takes to get from input to output.
First, the "time it takes to get from input to output", i.e. the latency inside the RTL: I think this is the biggest difficulty in building the reference model, because even if you ask the person who wrote the spec, he most likely doesn't know. At that point we go ask the designer or read the RTL code, but then we are likely to be influenced by the designer's thinking, and the ref model we build may share the same mistakes as the RTL, so that the verification environment may never find the bug.
Then there is the mapping from input to output, which is like a truth table; all we have to do is design and constrain the stimulus according to a randomization strategy. But as the logic gets more complex, this truth table grows so large that it becomes hard to write out in full. For a large module we can then split the big truth table into several small ones. That is easy to say, but the workload here grows roughly exponentially with logic complexity. If you really want so-called cross-validation, it would be better to find another designer to design the same module independently, compare the two results, and then iron out the delays between them; no verifier would be needed.
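A minimal sketch of the "constrain the stimulus instead of enumerating the whole truth table" idea above. The fields and ranges here are illustrative, not taken from any spec discussed in this thread.

```systemverilog
// Hypothetical stimulus item: constraints stand in for the legal rows of the
// truth table, and a dist biases generation toward the corner rows.
class op_txn;
  rand bit [3:0]  opcode;
  rand bit [15:0] a, b;

  // keep the stimulus inside the legal rows of the truth table...
  constraint c_legal  { opcode inside {[0:9]}; }
  // ...and bias the operands toward corner values worth hitting often
  constraint c_corner {
    a dist { 16'h0000 := 1, 16'hFFFF := 1, [16'h0001:16'hFFFE] :/ 8 };
  }
endclass

// usage: op_txn t = new();  if (!t.randomize()) $error("randomize failed");
```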
Finally, as a digression: when I did design work in college, I first wrote the reference model (in C++, run in software to see the effect), then designed the module from that reference model, and finally ran the module on an FPGA. If you add a verification step to that flow, you can use the reference model for checking directly. So I think the most reasonable process is: first there is a reference model, and then there is an RTL that needs to be verified against it. In real work, however, the RTL comes first and the ref model afterwards; the OP's company is probably the same, otherwise this question would not have been asked. The catch is that the ref model, although written from the spec, has to take its internal delays from the RTL, and we then use a ref model containing logic derived from the RTL to verify that same RTL. If you are not careful, bugs slip through, because normally the auto_check in our scoreboard consumes the output of the ref model directly and never checks its internal logic.
zxm92:
1. The OP's heavy focus on the reference model suggests he is approaching verification more from the designer's point of view; at the beginning I was also obsessed with the reference model, and with the great satisfaction of getting automatic comparison working for the first time. 2. Regarding the input-to-output timing checks mentioned above: I think the reference model should focus more on comparing data streams, while the timing checks can be done with assertions.
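A minimal sketch of keeping timing out of the reference model and in an assertion instead. The signal names (req, ack) and the MAX_LAT bound are assumptions for illustration; the checker would be bound to the DUT interface.

```systemverilog
module req_ack_latency_checker #(parameter int MAX_LAT = 8)
  (input logic clk, rst_n, req, ack);

  // Once req rises, ack must follow within 1..MAX_LAT clocks.
  a_req_to_ack: assert property (
    @(posedge clk) disable iff (!rst_n)
      $rose(req) |-> ##[1:MAX_LAT] ack)
    else $error("ack did not follow req within %0d cycles", MAX_LAT);
endmodule
```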
Personally, I think designers and verifiers read the spec differently:
1. The designer usually looks at it from the perspective of implementing the function, while the verifier should look at it more from the user's point of view, i.e. how the finished chip will actually be used.
2. How the chip is used determines your stimulus; your stimulus determines the situations the chip faces. If the chip stays correct under every possible condition, then its quality is assured, so how you design your stimulus is the core of verification.
jimbo1006 replied to zxm92:
I have not been working long, and right now I am quite confused about the reference model. It feels like if I write one, part of the judgement logic is handed over to the ref model, and if that logic happens to be the same as the DUT's, it can lead to a very bad situation.
I agree that the reference model should focus more on comparing data streams. I have verified a UART module, and for feature points like that the results mainly show up as register values, so the ref model is easy to build. Later, reading the UVM Cookbook, I found that this kind of data-stream comparison does not even need a ref model: it is enough to build the transaction in the slave agent and compare it directly in the scoreboard against the transaction from the master agent.
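A minimal sketch of that UVM Cookbook style comparison: no reference model, just compare the item observed by the slave agent's monitor against the one from the master agent. The uart_tx transaction class is a placeholder for whatever item both monitors publish.

```systemverilog
import uvm_pkg::*;
`include "uvm_macros.svh"

class uart_scoreboard extends uvm_scoreboard;
  `uvm_component_utils(uart_scoreboard)

  uvm_tlm_analysis_fifo #(uart_tx) exp_fifo;  // fed by the master agent monitor
  uvm_tlm_analysis_fifo #(uart_tx) act_fifo;  // fed by the slave agent monitor

  function new(string name, uvm_component parent);
    super.new(name, parent);
    exp_fifo = new("exp_fifo", this);
    act_fifo = new("act_fifo", this);
  endfunction

  task run_phase(uvm_phase phase);
    uart_tx exp, act;
    forever begin
      exp_fifo.get(exp);   // blocking: wait for the next expected item
      act_fifo.get(act);   // blocking: wait for the next observed item
      if (!exp.compare(act))
        `uvm_error("SB", {"mismatch: exp=", exp.sprint(), " act=", act.sprint()})
    end
  endtask
endclass
```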
I also tried using assertions to check timing, but eventually gave up, because to write an assertion I need to know the time between two events. The logic is very complex and the DUT has many internal signals, so for one feature point the path can be seen as "input -> internal signal a -> internal signal b -> ... -> output". The input and output we can see from the spec, but the time from input to output is something even the person who wrote the spec probably does not know. When I set out to write the assertion, I could not trust the internal signals and could only try to pin down the input-to-output time directly; when I asked the designer, I found he also worked it out from those same internal signals, so I could only confirm it from the simulation waveforms. But with such a large volume you cannot rely on waveforms for everything, so I constructed the expected output waveform according to my own understanding and compared it against the DUT's actual output waveform; as soon as UVM reports a mismatch, I go to the systems engineer and the designer to confirm. At that point I found that even SV assertions could not meet my requirements, and I had to build the automatic-check logic in SV or even C++.
I agree that stimulus is important in verification, but I don't feel it is the core. For the UVM methodology, I think the core is how to judge whether the result of a given stimulus is correct, plus the design of the coverage. The coverage design is like an outline, and the stimulus just follows that outline step by step (ideally split between two people), and SV is really well suited to this.
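A minimal sketch of "coverage as the outline": a covergroup for a hypothetical UART configuration (divisor, parity, stop bits); stimulus is then written, and rewritten, until every bin and cross is hit.

```systemverilog
covergroup uart_cfg_cg with function sample(int unsigned div,
                                            bit parity_en,
                                            bit two_stop);
  cp_div    : coverpoint div {
    bins slow = {[1:16]};
    bins mid  = {[17:256]};
    bins fast = {[257:$]};
  }
  cp_parity : coverpoint parity_en;
  cp_stop   : coverpoint two_stop;
  x_cfg     : cross cp_div, cp_parity, cp_stop;
endgroup

// usage, e.g. in the scoreboard:
//   uart_cfg_cg cg = new();
//   cg.sample(cfg.div, cfg.parity_en, cfg.two_stop);
```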
Like water like smoke:
Everyone clearly has deep experience here, and you can tell we are all feeling our way along the road of verification.
I agree more with the poster above: black-box verification is undoubtedly less laborious, since you only need to look at inputs and outputs and can spend most of your energy on judging right from wrong and building stimulus against the coverage. But the root of these problems really lies in understanding the spec, and besides good methods I suspect this still comes down to accumulated experience; we do not have much good experience to offer here.
1. When reading the spec, think carefully about every sentence after you read it; put yourself in the customer's shoes and think about what the customer will think when they see this sentence, and how they will use it.
2. Consider each function point carefully and think about it from multiple angles. For example, when an enable signal is turned on, a certain output is 1. But when that enable is turned off, what is the output: 0, 1, or don't care? If the spec does not make this explicit, we need to check with the systems engineer (the person who wrote the spec). (See the checker sketch after this list.)
3. During the module design stage, the systems engineer and the designer, having worked together for a long time, are likely to have built up some tacit understanding. For example, after a switch is turned on, the DUT needs several clocks before it actually samples the signal, while the spec describes the ideal case.
This tacit understanding may make module design more efficient, but it is a big problem for us verifiers, because every sentence of the spec may hide such an understanding. It affects not only the efficiency of verification but possibly its reliability as well.
For example, I wrote the automatic comparison code for a feature point according to the idealised spec, found after simulation that the result was wrong, and was told by the systems engineer that such a tacit understanding existed. I could only work through the auto-comparison code slowly, find every place this point was involved, and change it. But there is a real risk here: suppose my auto-comparison code misses those clocks somewhere, so that under a certain configuration it keeps outputting the default value 0 (where the correct output should be 1), and the DUT under that configuration also happens to be wrongly outputting 0; then I conclude that both the comparison and the DUT are correct, and the bug is missed.
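A minimal sketch for point 2 above: if the spec only says "the output is 1 when the enable is on", also pin down what it must be when the enable is off and check it. The names (en, dout) and the assumed off-value of 0 are illustrative and would need confirming with the systems engineer.

```systemverilog
module en_off_checker (input logic clk, rst_n, en, dout);
  // With the enable off, the output must hold the agreed default (0 here),
  // not drift into "don't care".
  a_dout_when_disabled: assert property (
    @(posedge clk) disable iff (!rst_n) !en |-> (dout == 1'b0));
endmodule
```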
zxm92:
1. "To write an assertion I need to know the time between two events" --> I think this should instead read: the spec does not define the time between the two events. If there is no standard defining the time and you still want to check it, it is not that the assertion cannot be written, it is that nothing at all can be done. Suppose I am worried that the input-to-output response time is too long and the spec says nothing: I would record the input times, record the output times, pair each input with its corresponding output, compute the latency of each pair, take the maximum, and then consider whether that maximum is too large. (See the sketch after this list.)
2. "Construct the expected output waveform according to my own understanding, compare it with the DUT's actual output waveform, and whenever UVM reports a mismatch go to the systems engineer and the designer to confirm" --> I don't understand how you construct the expected output waveform. If any input-to-output time in the range 3us to 5us is correct, how do you design the expected waveform?
3. "The coverage design is like an outline, and the stimulus just follows that outline step by step"
--> If you mean functional coverage, I don't think coverage and stimulus should be written by the same person; the person who sets the exam should not also sit it. Stimulus and coverage are two manifestations of the same scenarios: it is not that coverage comes first and stimulus is then written to cover it, nor that stimulus comes first and coverage is written around it. Suppose the DUT has ten functions: must they run serially, or can they run in parallel? Is there an ordering restriction when serial? What synchronisation is needed when parallel? Errors often show up precisely in the scenarios nobody thought of.
4. "The core should be how to judge whether the result of a given stimulus is correct"
--> To judge the result of a stimulus, there must first be the stimulus, then the result, and only then the judgement. If the stimulus is not complete enough, a correct judgement does not mean the DUT is correct.
5. "When reading the spec, think carefully about every sentence; put yourself in the customer's shoes and think about what the customer will think when they see this sentence and how they will use it."
--> This sentence means the same thing as what I said before: "the designer usually looks at the spec from the perspective of implementing the function, while the verifier should look at it more from the user's point of view, i.e. how the finished chip will be used".
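A minimal sketch of the bookkeeping described in point 1, with hypothetical in_valid / out_valid strobes at the DUT boundary: timestamp every input, pair it with its output, and keep the worst-case latency so it can be reviewed against whatever limit is eventually agreed. It assumes outputs come back in the same order as their inputs.

```systemverilog
module latency_tracker (input logic clk, rst_n, in_valid, out_valid);
  realtime in_q[$];        // timestamps of inputs still waiting for an output
  realtime max_lat = 0;

  always @(posedge clk) begin
    if (!rst_n) begin
      in_q.delete();
      max_lat = 0;
    end
    else begin
      if (in_valid) in_q.push_back($realtime);
      if (out_valid && in_q.size() > 0) begin : pair
        realtime lat;
        lat = $realtime - in_q.pop_front();
        if (lat > max_lat) max_lat = lat;
      end
    end
  end

  final $display("worst observed input-to-output latency: %0t", max_lat);
endmodule
```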
jimbo1006 replied to zxm92's post #7:
I'm glad most of our views are the same or similar, because my main purpose in coming to the forum is to test my own methods and ideas. Where we disagree, I think it mainly comes down to "automatic comparison". As you said in post #3, for a first-time verifier the temptation of automatic comparison is too great for me; my current view is that automatic comparison is the main value of today's, or at least tomorrow's, verification engineer, and the romance of our trade. I have only verified a few modules so far, and most of each one was checked by automatic comparison. I have found plenty of bugs that amazed the designers and systems engineers, and when they ask me at the review meeting how I managed to hit such a particular case, the sense of accomplishment and satisfaction is something I cannot pull myself away from. If you tried my verification environments with automatic comparison, you might accept some of my points.
The following points correspond one by one to the points in your post #7.
1. I need to know when the two events occur because, when designing the automatic comparison code, I found it hard to account for all of the DUT's outputs without the help of the DUT's internal signals or intermediate signals of my own design. This is the same as a designer splitting a large module into smaller ones; I have to do the same when designing the automatic comparison. I design intermediate points, and because there are many sets of inputs and corresponding outputs, input-to-middle-to-output is not a purely linear structure but a mesh. That mesh structure forces me to know, at least approximately, the time to each of these intermediate points under every configuration. Of course, if the systems engineer could give me a table listing all the input combinations, the corresponding outputs, and the times between them, there would be no problem. But they certainly cannot, and even if they did, feeding several configurations into the DUT one after another in different combinations may change the corresponding output times.
2. Automatic comparison of output waveforms is what I did when verifying a motor-control design (made up of many PWM-related modules). I designed a collection component hooked to the slave agent's monitor; based on the DUT's output signal and the corresponding output-enable (oe) signal it builds a transaction that records the value of the output waveform (0 or 1), how long that value lasts (the output signal is sampled at system clock / 2 + 1 to confirm it), whether the output is high-impedance (oe = 0), and so on. I then pass this transaction to the scoreboard, while the corresponding input combination is passed to the scoreboard through the master agent's monitor. In the scoreboard I do the so-called automatic comparison of the ideal waveform against the output waveform. Using fork/join I set up three concurrent paths: the first waits a random time, the second compares the ideal and actual output waveforms in real time under the current input configuration, and the third adjusts parameters as needed, such as the sampling frequency. For the "3us to 5us is correct" range you mention, there are many solutions within my structure. To keep the structure regular and reusable, I can nest two more fork/joins under the second path, each with two concurrent branches: the first branch is a delay of 3us in one and 5us in the other (flag1 is raised after the 5us), the second branch compares the waveform (with flag2 raised at the end of the 3us case), and at the end a final piece of code checks all the flags that were set. (A sketch of this kind of windowed check is at the end of this post.)
3. Coverage and stimulus really should not be written by one person, and I feel the same. But at the moment I am the only one at our company studying UVM verification, so where do I find someone else to write the stimulus? Even if we hire people later, it is not feasible for now to put two people on verifying one module; after all, understanding the spec takes a lot of time. Then think about it: if I really design the coverage and the automatic comparison code, I can find a few fresh graduates and let them try every means they can; as long as they meet the coverage targets and pass my automatic comparison, it is fine. That saves a lot of time and labour cost. Having the stimulus writer and the coverage writer check each other is certainly ideal, but on careful analysis the real decision actually lies with the person who writes the coverage. As for the serial/parallel problem with ten functions that you raised, I ran into exactly that when verifying the MCM module; my solution is as in point 2.
4. As I said in point 3, as long as I design the coverage, you can always check the coverage results (I use VCS + Verdi) and see directly which points have not been covered, and the person writing the stimulus can redesign or add stimulus accordingly. Of course I am not saying that writing stimulus is unimportant; imperfect stimulus can be fed back through the coverage percentage, but if the coverage design itself is imperfect, who feeds that back? (Code coverage and functional coverage can cross-check each other, but the reliability of that is really not high.)
5. I am very pleased that we agree on this point.
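A minimal sketch of the windowed check discussed in point 2: after a stimulus is applied, the output edge must arrive no earlier than 3us and no later than 5us. The names and bounds are illustrative, and the fork/join_any racing the edge against the upper bound stands in for the flag1/flag2 scheme described above; the task would live in the scoreboard or a checker module.

```systemverilog
task automatic check_output_window(ref logic out_sig,
                                   input realtime t_min = 3us,
                                   input realtime t_max = 5us);
  realtime t_start = $realtime;
  bit seen = 0;
  fork
    begin
      @(posedge out_sig);   // the event being timed
      seen = 1;
    end
    #(t_max);               // upper bound of the legal window
  join_any
  disable fork;

  if (!seen)
    $error("output did not toggle within %0t", t_max);
  else if ($realtime - t_start < t_min)
    $error("output toggled too early, after only %0t", $realtime - t_start);
endtask
```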