• The meaning of code coverage in VHDL

    From Maciej Sobczak@21:1/5 to All on Tue Sep 24 00:42:59 2019
    Hi, my first post here, I'm glad to join the group.

I'm a software engineer with interests in embedded designs, and I have decided to learn VHDL as a way to broaden my perspective on the world of programmable devices. My understanding, confirmed by authors of several books that I have seen so far, is that
    VHDL programming is a software experience. This in turn suggests that with respect to quality and verification in industrial practice, the software standards should be relevant. All is well until we reach the point where the easy analogies break
    apart - one such point is the notion of code coverage.

The software world has a pretty good understanding of code coverage and of techniques to measure it. That is, my 100% object code coverage has a precise meaning, and I know how to get the evidence that my test suite actually exercises the given design to that
    extent. This is how I can convince myself, the customers, the certification authorities or whoever else, that the software was done right.

But what is the analogy of code coverage in VHDL? I can imagine that when we limit the discussion to simulation only, there are no new problems, because the design can be instrumented or the debugger instructed to gather the coverage traces, and this is
    all a software exercise. But the synthesis seems to be a black box, further concealed by the IP concerns of the toolset and chip vendor. That is, because the synthesized structures do not necessarily mirror the VHDL structures, my simulation coverage
    traces are not necessarily indicative of the final coverage in the programmed chip.
In short, I'm not testing what I'm selling. I can still run my tests on the final hardware to keep some level of confidence, but the coverage evidence is gone. This seems to be against some software quality standards.

    Is this considered to be an issue?

    --
    Maciej Sobczak * http://www.inspirel.com

    --- SoupGate-Win32 v1.05
    * Origin: fsxNet Usenet Gateway (21:1/5)
  • From HT-Lab@21:1/5 to Maciej Sobczak on Tue Sep 24 09:30:05 2019
    On 24/09/2019 08:42, Maciej Sobczak wrote:
    Hi, my first post here, I'm glad to join the group.

    I'm a software engineer with interests in embedded designs and I have decided to learn VHDL as a way to broaden my perspective on the world of programmable devices. My understanding, confirmed by authors of several books that I have seen so far, is
    that VHDL programming is a software experience.


I would argue against this and would say it is more of a hardware
    experience; as many engineers will tell you, you have to think hardware
    when you write your RTL. I would say that only languages like untimed
    C/C++ can be considered more of a software than a hardware experience.


    This in turn suggests that with respect to quality and verification in
    the industrial practice, the software standards should be relevant. All
    is well until we reach the point where the easy analogies break apart -
one such point is the notion of code coverage.

    Software world has a pretty good understanding of code coverage and of techniques to measure it.

The same applies in the hardware world; Code Coverage has been used for
    many decades. My first introduction to Code Coverage was nearly 25 years
    ago with the VN tools.

    That is, my 100% object code coverage has a precise meaning and I know
    how to get the evidence that my test suite actually exercises the given
    design to that extent. This is how I can convince myself, the customers,
    the certification authorities or whoever else, that the software was
    done right.

Are you sure? I have never used Code Coverage in the software world, but
    Code Coverage is a measure of how well your testbench/testfixture is
    stimulating your design; it tells you nothing about whether your design
    is working correctly. In the hardware world you have to use functional
    coverage to answer that question.
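
    To illustrate the difference, a functional check ties the test to a
    requirement rather than to executed lines. A minimal sketch of such a
    check (the signals clk, rst and count are invented for illustration,
    count is assumed to be an integer, and the usual ieee.std_logic_1164
    context is assumed):

    ```vhdl
    -- Requirement-level check: after reset is released, the counter
    -- must read zero, regardless of which RTL lines happened to run.
    check_reset : process
    begin
      wait until rising_edge(clk) and rst = '1';  -- reset asserted
      wait until rising_edge(clk) and rst = '0';  -- first clock after release
      assert count = 0
        report "requirement violated: counter not cleared by reset"
        severity error;
      wait;  -- check once, then suspend
    end process;
    ```

    Code coverage could report the counter's RTL as fully exercised and
    this check could still fail if the reset logic is wrong - code coverage
    alone would never tell you that.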


    But what is the analogy of code coverage in VHDL? I can imagine that when we limit the discussion to simulation only, there are no new problems, because the design can be instrumented or the debugger instructed to gather the coverage traces and this is
    all software exercise.

This can also be done on a synthesised/P&R'd design or even on chip.

    But the synthesis seems to be a black box, further concealed by the IP
    concerns of the toolset and chip vendor. That is, because the
    synthesized structures do not necessarily mirror the VHDL structures, my simulation coverage traces are not necessarily indicative of the final
    coverage in the programmed chip.

There are lots of answers and subtleties to all the points you have
    raised. However, I would say in general that if you have fully validated
    your synchronous RTL design, you can assume that after synthesis and P&R
    the produced netlist is equivalent to your RTL. If you want to validate
    the netlist you can use an equivalence checker. This is especially
    important if additional logic such as TMR, BIST etc. is added.

    One important aspect which might be different from the software world is
    that 100% Code Coverage is not always achievable. I also expect that
    there are more Code Coverage metrics in the hardware world than in the
    software world. We have metrics like path and toggle coverage to name a
    few.

    Regards,
    Hans
    www.ht-lab.com


In short, I'm not testing what I'm selling. I can still run my tests on the final hardware to keep some level of confidence, but the coverage evidence is gone. This seems to be against some software quality standards.

    Is this considered to be an issue?


  • From Richard Damon@21:1/5 to Maciej Sobczak on Tue Sep 24 07:45:15 2019
    On 9/24/19 3:42 AM, Maciej Sobczak wrote:
    Hi, my first post here, I'm glad to join the group.

    I'm a software engineer with interests in embedded designs and I have decided to learn VHDL as a way to broaden my perspective on the world of programmable devices. My understanding, confirmed by authors of several books that I have seen so far, is
    that VHDL programming is a software experience. This in turn suggests that with respect to quality and verification in the industrial practice, the software standards should be relevant. All is well until we reach the point where the easy analogies break
apart - one such point is the notion of code coverage.

    Software world has a pretty good understanding of code coverage and of techniques to measure it. That is, my 100% object code coverage has a precise meaning and I know how to get the evidence that my test suite actually exercises the given design to
    that extent. This is how I can convince myself, the customers, the certification authorities or whoever else, that the software was done right.

    But what is the analogy of code coverage in VHDL? I can imagine that when we limit the discussion to simulation only, there are no new problems, because the design can be instrumented or the debugger instructed to gather the coverage traces and this is
    all software exercise. But the synthesis seems to be a black box, further concealed by the IP concerns of the toolset and chip vendor. That is, because the synthesized structures do not necessarily mirror the VHDL structures, my simulation coverage
    traces are not necessarily indicative of the final coverage in the programmed chip.
In short, I'm not testing what I'm selling. I can still run my tests on the final hardware to keep some level of confidence, but the coverage evidence is gone. This seems to be against some software quality standards.

    Is this considered to be an issue?


My first thought on this is that, fundamentally, the concept of code
    coverage does carry over to HDLs: ultimately it would be great if you
    could verify that all 'paths' are exercised and tested.

The big issue is that the HDL languages are very different from
    traditional software, as HDLs are, at their core, parallel execution
    versus the sequential execution of traditional software.

    This makes coverage metrics harder to work with. Part of the issue is
    that it isn't enough that you have tested every execution 'path' in the
    form of statements, but that the execution path becomes combinatorial
    (have you tested all combinations of statements being active).

    In some ways the testing answer is the same as large software projects,
    you break it down into smaller pieces which you can build tests for, and
    verify functionality, then you combine the pieces and perform
    integration tests.

  • From Charles Bailey@21:1/5 to Richard Damon on Tue Sep 24 12:36:48 2019
    On 2019-09-24 06:45, Richard Damon wrote:
    On 9/24/19 3:42 AM, Maciej Sobczak wrote:
    Hi, my first post here, I'm glad to join the group.

    I'm a software engineer with interests in embedded designs and I have decided to learn VHDL as a way to broaden my perspective on the world of programmable devices. My understanding, confirmed by authors of several books that I have seen so far, is
    that VHDL programming is a software experience. This in turn suggests that with respect to quality and verification in the industrial practice, the software standards should be relevant. All is well until we reach the point where the easy analogies break
apart - one such point is the notion of code coverage.

    Software world has a pretty good understanding of code coverage and of techniques to measure it. That is, my 100% object code coverage has a precise meaning and I know how to get the evidence that my test suite actually exercises the given design to
    that extent. This is how I can convince myself, the customers, the certification authorities or whoever else, that the software was done right.

    But what is the analogy of code coverage in VHDL? I can imagine that when we limit the discussion to simulation only, there are no new problems, because the design can be instrumented or the debugger instructed to gather the coverage traces and this
    is all software exercise. But the synthesis seems to be a black box, further concealed by the IP concerns of the toolset and chip vendor. That is, because the synthesized structures do not necessarily mirror the VHDL structures, my simulation coverage
    traces are not necessarily indicative of the final coverage in the programmed chip.
In short, I'm not testing what I'm selling. I can still run my tests on the final hardware to keep some level of confidence, but the coverage evidence is gone. This seems to be against some software quality standards.

    Is this considered to be an issue?


My first thought on this is that, fundamentally, the concept of code
    coverage does carry over to HDLs: ultimately it would be great if you
    could verify that all 'paths' are exercised and tested.

The big issue is that the HDL languages are very different from
    traditional software, as HDLs are, at their core, parallel execution
    versus the sequential execution of traditional software.

    This makes coverage metrics harder to work with. Part of the issue is
    that it isn't enough that you have tested every execution 'path' in the
    form of statements, but that the execution path becomes combinatorial
    (have you tested all combinations of statements being active).

    In some ways the testing answer is the same as large software projects,
    you break it down into smaller pieces which you can build tests for, and verify functionality, then you combine the pieces and perform
    integration tests.

    Richard Damon gave some good answers to your questions, but I'll add a
    little bit to it.

    Most commercial logic simulators have a function to give you a
    first-level coverage report of the thoroughness of your logic
    simulations. That is, they can give you a report of what percentage of
    the signals in your circuit switched to both the '1' and '0' logic state
    at some point in the simulation.

But then, as Richard pointed out, you have the question of all possible combinations of those signals. This is usually fairly straightforward
    for purely combinatorial logic, such as decoders, Galois field adders
    and multipliers, address decoders, etc. For these you typically write a testbench that instantiates your logic as a Circuit Under Test. The
    testbench applies all possible combinations of input signals to your
    circuit, computes what the outputs should be, and compares the expected
    with the actual results.
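
    As a minimal sketch of that style (the entity name and2 and its ports
    a, b, y are invented for illustration; it is assumed to be a simple
    two-input AND gate already compiled into the work library):

    ```vhdl
    library ieee;
    use ieee.std_logic_1164.all;
    use ieee.numeric_std.all;

    entity and2_tb is
    end entity;

    architecture sim of and2_tb is
      signal ab : std_logic_vector(1 downto 0);
      signal y  : std_logic;
    begin
      -- Circuit Under Test
      cut : entity work.and2 port map (a => ab(1), b => ab(0), y => y);

      stim : process
      begin
        for i in 0 to 3 loop                        -- all input combinations
          ab <= std_logic_vector(to_unsigned(i, 2));
          wait for 10 ns;                           -- let the outputs settle
          assert y = (ab(1) and ab(0))              -- expected vs actual
            report "mismatch for input " & integer'image(i)
            severity error;
        end loop;
        wait;                                       -- end of test
      end process;
    end architecture;
    ```

    For wider inputs the same loop scales until exhaustive application
    becomes impractical, which is where constrained-random stimulus and
    coverage metrics take over.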

    Once you are satisfied that your HDL is functioning correctly, the
    mapping to the actual hardware implementation by a synthesis tool is
    usually not a big concern, at least for logic functionality. There are equivalence checkers that can verify that the two are equivalent. There
    are also testability analyzers that can verify how thoroughly the logic
    can be tested and identify redundancies.

    Charles Bailey

  • From Maciej Sobczak@21:1/5 to All on Wed Sep 25 00:20:04 2019
    Most commercial logic simulators have a function to give you a
    first-level coverage report of the thoroughness of your logic
    simulations.

    Yes, this is what I expected - but as already noted, as long as work is kept in the software domain, it is a well-understood software exercise, where everything can be instrumented. It's the transition to hardware that is generating most of the question
    marks.

    Once you are satisfied that your HDL is functioning correctly, the
    mapping to the actual hardware implementation by a synthesis tool is
    usually not a big concern, at least for logic functionality. There are equivalence checkers that can verify that the two are equivalent.

And this is the part that I hoped you would confirm. Thanks for your replies; they were very instructive.

    --
    Maciej Sobczak * http://www.inspirel.com

  • From Rick C@21:1/5 to Maciej Sobczak on Wed Sep 25 09:12:55 2019
    On Wednesday, September 25, 2019 at 3:20:07 AM UTC-4, Maciej Sobczak wrote:
    Most commercial logic simulators have a function to give you a
    first-level coverage report of the thoroughness of your logic
    simulations.

    Yes, this is what I expected - but as already noted, as long as work is kept in the software domain, it is a well-understood software exercise, where everything can be instrumented. It's the transition to hardware that is generating most of the
    question marks.

    Once you are satisfied that your HDL is functioning correctly, the
    mapping to the actual hardware implementation by a synthesis tool is usually not a big concern, at least for logic functionality. There are equivalence checkers that can verify that the two are equivalent.

And this is the part that I hoped you would confirm. Thanks for your replies; they were very instructive.

The mapping to hardware does not limit further testing to actual hardware. The components inferred from the HDL can be netlisted and simulated as well. Any code coverage tool that is used on simulated code should be able to handle the post-synthesis code.


    What tools are you working with?

My designs are not so complex that code coverage tools are required or particularly useful. I perform functional simulation, which verifies the code is doing the job it is asked to do. When I took a class in program management, the testing was to verify
    that the requirements are being addressed. If those tests do not cover some of the code, I would ask why that code is in the code base. Can it not be removed and still pass the requirements testing, or was there a failure in the test design?

    --

    Rick C.

    - Get 2,000 miles of free Supercharging
    - Tesla referral code - https://ts.la/richard11209

  • From HT-Lab@21:1/5 to Rick C on Wed Sep 25 17:55:57 2019
    On 25/09/2019 17:12, Rick C wrote:
    On Wednesday, September 25, 2019 at 3:20:07 AM UTC-4, Maciej Sobczak wrote:
    Most commercial logic simulators have a function to give you a
    first-level coverage report of the thoroughness of your logic
    simulations.

    Yes, this is what I expected - but as already noted, as long as work is kept in the software domain, it is a well-understood software exercise, where everything can be instrumented. It's the transition to hardware that is generating most of the
    question marks.

    Once you are satisfied that your HDL is functioning correctly, the
    mapping to the actual hardware implementation by a synthesis tool is
    usually not a big concern, at least for logic functionality. There are
    equivalence checkers that can verify that the two are equivalent.

And this is the part that I hoped you would confirm. Thanks for your replies; they were very instructive.

    The mapping to hardware does not limit further testing to actual hardware. The components inferred in the HDL can be netlisted and simulated as well. Any code coverage tool that is used on simulated code should be able to handle the post synthesis
    code.

There is absolutely no point in running code coverage on a netlist with primitives; the results are meaningless. Perhaps toggle coverage might
    have some use for estimating power consumption, but even that is generally
    not recommended (custom tools/functions are more accurate).


    What tools are you working with?

    My designs are not so complex that code coverage tools are required or particularly useful.


I find Code Coverage very useful even on small designs. Any tool that analyses your design (incl. synthesis) can tell you something new about
    your design. Code Coverage is quick and easy to run, so why not use it?
    I have spoken to many engineers who claimed their testbench was perfect,
    and when they ran Code Coverage for the first time they always found
    holes, even on small designs. It is very difficult to look at a piece of
    code and say, yep, my testbench has fully tested that.

The only problem with Code Coverage is that the free tools don't support
    it, which I think is a shame.

    Hans
    www.ht-lab.com


    I perform functional simulation which verifies the code is doing the job it is asked to do. When I took a class in program management the testing was to verify that the requirements are being addressed. If those tests do not cover some of the code I
    would ask why that code is in the code base? Can it not be removed and still pass the requirements testing or was there a failure in the test design?


  • From Rick C@21:1/5 to HT-Lab on Wed Sep 25 13:45:40 2019
    On Wednesday, September 25, 2019 at 12:55:59 PM UTC-4, HT-Lab wrote:
    On 25/09/2019 17:12, Rick C wrote:
    On Wednesday, September 25, 2019 at 3:20:07 AM UTC-4, Maciej Sobczak wrote:
    Most commercial logic simulators have a function to give you a
    first-level coverage report of the thoroughness of your logic
    simulations.

    Yes, this is what I expected - but as already noted, as long as work is kept in the software domain, it is a well-understood software exercise, where everything can be instrumented. It's the transition to hardware that is generating most of the
    question marks.

    Once you are satisfied that your HDL is functioning correctly, the
    mapping to the actual hardware implementation by a synthesis tool is
usually not a big concern, at least for logic functionality. There are equivalence checkers that can verify that the two are equivalent.

And this is the part that I hoped you would confirm. Thanks for your replies; they were very instructive.

    The mapping to hardware does not limit further testing to actual hardware. The components inferred in the HDL can be netlisted and simulated as well. Any code coverage tool that is used on simulated code should be able to handle the post synthesis
    code.

There is absolutely no point in running code coverage on a netlist with primitives; the results are meaningless. Perhaps toggle coverage might
    have some use for estimating power consumption, but even that is generally
    not recommended (custom tools/functions are more accurate).

Why? The same issue exists: how much of the design does the test simulation actually test? If it is useful to know how much of the HDL is covered in simulation, it would also be useful to know how much of the design is covered in post-synthesis simulation.


    What tools are you working with?

    My designs are not so complex that code coverage tools are required or particularly useful.


I find Code Coverage very useful even on small designs. Any tool that analyses your design (incl. synthesis) can tell you something new about
    your design. Code Coverage is quick and easy to run, so why not use it?
    I have spoken to many engineers who claimed their testbench was perfect,
    and when they ran Code Coverage for the first time they always found
    holes, even on small designs. It is very difficult to look at a piece of
    code and say, yep, my testbench has fully tested that.

The only problem with Code Coverage is that the free tools don't support
    it, which I think is a shame.

    As I said, I use a different test technique. Code coverage tells you if the tests detect the correct operation of a line of code. It doesn't tell you if the test is the right test or not. That is the part I care about.

    No point in testing a line of code if that line of code is not doing something called for in the requirements.

We find it much more important to know the coverage of tests performed on hardware to verify it is working. In that case the design has already been verified to be a correct design. Now we want to know how much of the design is actually tested by the
    tests. Not at all the same thing as code coverage, since there is not a direct correspondence between hardware and lines of code.

    When you find a mismatch between the code and the test coverage of that code, which do you fix, the code or the tests?

    --

    Rick C.

    + Get 2,000 miles of free Supercharging
    + Tesla referral code - https://ts.la/richard11209

  • From Maciej Sobczak@21:1/5 to All on Thu Sep 26 00:43:48 2019
    When you find a mismatch between the code and the test coverage of that code, which do you fix, the code or the tests?

Both code and tests should be written based on some requirements. If you end up with any kind of mismatch between them, then one of them, or even both, might need to be fixed, but the only way to tell is to analyze the issue on a case-by-case basis. No
    up-front rules are possible here, unless you *arbitrarily* decide to treat one of these artifacts as primary, for whatever reason.
    Still, the information about whether there is any mismatch is worth having.

    --
    Maciej Sobczak * http://www.inspirel.com

  • From HT-Lab@21:1/5 to Rick C on Thu Sep 26 10:27:19 2019
    On 25/09/2019 21:45, Rick C wrote:
    On Wednesday, September 25, 2019 at 12:55:59 PM UTC-4, HT-Lab wrote:
    On 25/09/2019 17:12, Rick C wrote:
    On Wednesday, September 25, 2019 at 3:20:07 AM UTC-4, Maciej Sobczak wrote: ..

There is absolutely no point in running code coverage on a netlist with
    primitives; the results are meaningless. Perhaps toggle coverage might
    have some use for estimating power consumption, but even that is generally
    not recommended (custom tools/functions are more accurate).

    Why? The same issue exists, how much of the design does the test simulation actually test? If you want to know how much it covers in simulation of the HDL it would be useful to know how much of the design is covered in post synthesis simulation.

I think you have never used Code Coverage before, but that is OK.
    Consider a simple statement of the form:

    if (a = '1' or b = '1')

    Next assume Conditional Coverage tells you that only ab="10" and ab="11"
    have been recorded. This might be correct or not, in which case you change
    your testbench or seed to hit the "00" and "01" test cases. Statement
    Coverage might give you 100% coverage even if "b" is stuck at 1; toggle
    coverage might tell you that "a" went from 0 to 1 but never back to 0
    again, etc.
    Now synthesise your design, which converts the above RTL into LUTs. What
    information do you expect to extract with Code Coverage? And if you do
    get, say, 66% coverage, what does this tell you?
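
    To close the missing bins in simulation, directed stimulus is usually
    enough; a sketch along these lines (assuming the testbench drives a and
    b directly) would hit all four condition bins and exercise both toggle
    directions:

    ```vhdl
    stim : process
    begin
      a <= '0'; b <= '0'; wait for 10 ns;  -- condition bin "00"
      a <= '0'; b <= '1'; wait for 10 ns;  -- condition bin "01"
      a <= '1'; b <= '0'; wait for 10 ns;  -- condition bin "10"
      a <= '1'; b <= '1'; wait for 10 ns;  -- condition bin "11"
      a <= '0'; b <= '0'; wait for 10 ns;  -- 1->0 toggles for both a and b
      wait;
    end process;
    ```

    Note that this only improves the coverage figures; whether the design
    responds correctly to each combination still has to be checked
    separately.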



    What tools are you working with?

    My designs are not so complex that code coverage tools are required or particularly useful.


    ..

The only problem with Code Coverage is that the free tools don't support
    it, which I think is a shame.

    As I said, I use a different test technique. Code coverage tells you if the tests detect the correct operation of a line of code. It doesn't tell you if the test is the right test or not. That is the part I care about.

    You are repeating my words from my original reply.

    No point in testing a line of code if that line of code is not doing something called for in the requirements.

We find it much more important to know the coverage of tests performed on hardware to verify it is working. In that case the design has already been verified to be a correct design. Now we want to know how much of the design is actually tested by the
    tests. Not at all the same thing as code coverage, since there is not a direct correspondence between hardware and lines of code.

You are confusing Functional Coverage with Code Coverage; they are
    different methods. Code Coverage not only checks the quality of your
    testbench, it also makes you better understand your design.
    I would suggest you find somebody/a client who has access to Code Coverage
    and use it on one of your designs which you think you have fully tested.
    I will guarantee you will find statements/expressions/conditions/toggles/branches which have not been stimulated. This does not mean your design is faulty, but it does mean
    your testbench was not complete and you have been lucky.


    When you find a mismatch between the code and the test coverage of that code, which do you fix, the code or the tests?

    If anybody asks me that question I would politely suggest they hand the
    design back to the designer and let him run the Code Coverage.

    Regards,
    Hans.
    www.ht-lab.com

  • From HT-Lab@21:1/5 to Maciej Sobczak on Thu Sep 26 10:44:31 2019
    On 26/09/2019 08:43, Maciej Sobczak wrote:
    When you find a mismatch between the code and the test coverage of that code, which do you fix, the code or the tests?

    Both code and tests should be written based on some requirements.

My experience is that companies do not write requirements for Code
    Coverage; what they tend to do is specify a single coverage value.
    For DO-254 this is normally 100% (which is a real pain and might
    require expensive formal tools and lots of waivers if the design is
    complex). Other companies just say you need at least 100% Statement (not
    line) and Branch coverage, which I find quite sensible.

    Regards,
    Hans.
    www.ht-lab.com

If you end up with any kind of mismatch between them, then one of them,
    or even both, might need to be fixed, but the only way to tell is to
    analyze the issue on a case-by-case basis. No up-front rules are
    possible here, unless you *arbitrarily* decide to treat one of these
    artifacts as primary, for whatever reason.
    Still, the information about whether there is any mismatch is worth having.

