Hi, my first post here, I'm glad to join the group.

I'm a software engineer with interests in embedded designs and I have decided to learn VHDL as a way to broaden my perspective on the world of programmable devices. My understanding, confirmed by the authors of several books that I have seen so far, is that VHDL programming is a software experience. This in turn suggests that with respect to quality and verification in industrial practice, the software standards should be relevant. All is well until we reach the point where the easy analogies break.

The software world has a pretty good understanding of code coverage and of techniques to measure it. That is, my 100% object code coverage has a precise meaning, and I know how to get the evidence that my test suite actually exercises the given design to that extent. This is how I can convince myself, the customers, the certification authorities or whoever else that the software was done right.

But what is the analogy of code coverage in VHDL? I can imagine that when we limit the discussion to simulation only, there are no new problems, because the design can be instrumented or the debugger instructed to gather the coverage traces, and this is all a software exercise. But the synthesis seems to be a black box, further concealed by the IP concerns of the toolset and chip vendor. That is, because the synthesized structures do not necessarily mirror the VHDL structures, my simulation coverage does not carry over to the synthesized design.

In short, I'm not testing what I'm selling. I can still run my tests on the final hardware to keep some level of confidence, but the coverage evidence is gone. This seems to be against some software quality standards.

Is this considered to be an issue?
On 9/24/19 3:42 AM, Maciej Sobczak wrote:
> [original post snipped]
My first thought on this is that, fundamentally, the concept of code
coverage does carry over to HDLs: ultimately it would be great if you
could verify that all 'paths' are exercised and tested.

The big issue is that HDLs are very different from traditional
software. At their core, HDLs describe parallel execution, versus the
sequential execution of traditional software.

This makes coverage metrics harder to work with. Part of the issue is
that it isn't enough to have tested every execution 'path' in the
form of statements; the execution paths become combinatorial
(have you tested all combinations of statements being active?).
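As an illustration (a hypothetical sketch, with the entity and signal
names invented for the example): in the process below, a test set of
(a='1', b='0') followed by (a='0', b='1') achieves 100% statement and
branch coverage, yet the combinations (a='1', b='1') and
(a='0', b='0') are never exercised.

  library ieee;
  use ieee.std_logic_1164.all;

  entity cover_demo is
    port (clk, a, b : in  std_logic;
          x, y      : out std_logic);
  end entity;

  architecture rtl of cover_demo is
  begin
    process (clk)
    begin
      if rising_edge(clk) then
        -- Two independent branches in one process: statement and
        -- branch coverage say nothing about which *combinations*
        -- of branches were ever active together.
        if a = '1' then x <= '1'; else x <= '0'; end if;
        if b = '1' then y <= '1'; else y <= '0'; end if;
      end if;
    end process;
  end architecture;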
In some ways the testing answer is the same as for large software
projects: you break the design down into smaller pieces that you can
build tests for and verify, then you combine the pieces and perform
integration tests.
Most commercial logic simulators have a function to give you a
first-level coverage report of the thoroughness of your logic
simulations.
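For instance, with a Questa/ModelSim-class simulator the flow looks
roughly like this (a sketch only; the exact switches and metric
letters are an assumption, so check your own tool's documentation):

  # compile with statement, branch and condition coverage enabled
  vlib work
  vcom +cover=sbc cover_demo.vhd tb_cover_demo.vhd
  # simulate with coverage collection turned on
  vsim -coverage work.tb_cover_demo
  run -all
  # print a first-level coverage summary
  coverage report -details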
Once you are satisfied that your HDL is functioning correctly, the
mapping to the actual hardware implementation by a synthesis tool is
usually not a big concern, at least for logic functionality. There are equivalence checkers that can verify that the two are equivalent.
> Most commercial logic simulators have a function to give you a
> first-level coverage report of the thoroughness of your logic
> simulations.

Yes, this is what I expected - but as already noted, as long as the work is kept in the software domain, it is a well-understood software exercise, where everything can be instrumented. It's the transition to hardware that is generating most of the question marks.

> Once you are satisfied that your HDL is functioning correctly, the
> mapping to the actual hardware implementation by a synthesis tool is
> usually not a big concern, at least for logic functionality. There are
> equivalence checkers that can verify that the two are equivalent.

And this is the part that I hoped you would confirm. Thanks for your replies, they were very instructive.
On Wednesday, September 25, 2019 at 3:20:07 AM UTC-4, Maciej Sobczak wrote:
> It's the transition to hardware that is generating most of the
> question marks.
> [snip]
The mapping to hardware does not limit further testing to actual hardware. The components inferred from the HDL can be netlisted and simulated as well. Any code coverage tool that is used on simulated code should be able to handle the post synthesis code.

What tools are you working with?

My designs are not so complex that code coverage tools are required or particularly useful. I perform functional simulation, which verifies the code is doing the job it is asked to do. When I took a class in program management, the testing was to verify that the requirements are being addressed. If those tests do not cover some of the code, I would ask why that code is in the code base. Can it not be removed and still pass the requirements testing, or was there a failure in the test design?
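As a rough sketch of what such requirement-driven checks can look
like (reusing the hypothetical cover_demo entity from earlier in the
thread; the requirement tags are invented for the example):

  library ieee;
  use ieee.std_logic_1164.all;

  entity tb_cover_demo is
  end entity;

  architecture sim of tb_cover_demo is
    signal clk, a, b : std_logic := '0';
    signal x, y      : std_logic;
  begin
    dut : entity work.cover_demo
      port map (clk => clk, a => a, b => b, x => x, y => y);

    stim : process
    begin
      -- REQ-1: x shall follow a after a rising clock edge.
      a <= '1'; b <= '0';
      clk <= '0'; wait for 5 ns;
      clk <= '1'; wait for 5 ns;
      assert x = '1' report "REQ-1 failed" severity error;

      -- REQ-2: y shall follow b after a rising clock edge.
      a <= '0'; b <= '1';
      clk <= '0'; wait for 5 ns;
      clk <= '1'; wait for 5 ns;
      assert y = '1' report "REQ-2 failed" severity error;

      -- Both requirements pass and statement/branch coverage is
      -- 100%, yet the combinations a=b='1' and a=b='0' were never
      -- tested - the combinatorial gap discussed above.
      wait;
    end process;
  end architecture;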
On 25/09/2019 17:12, Rick C wrote:
> The mapping to hardware does not limit further testing to actual
> hardware. The components inferred from the HDL can be netlisted and
> simulated as well. Any code coverage tool that is used on simulated
> code should be able to handle the post synthesis code.
There is absolutely no point in running code coverage on a netlist
with primitives; the results are meaningless. Perhaps toggle coverage
might have some use for estimating power consumption, but even that
is generally not recommended (custom tools/functions are more
accurate).
> What tools are you working with?
>
> My designs are not so complex that code coverage tools are required
> or particularly useful.
I find Code Coverage very useful even on small designs. Any tool that
analyses your design (including synthesis) can tell you something new
about your design. Code Coverage is quick and easy to run, so why not
use it? I have spoken to many engineers who claimed their testbench
was perfect, and when they ran Code Coverage for the first time they
always found holes, even on small designs. It is very difficult to
look at a piece of code and say "yep, my testbench has fully tested
that".

The only problem with Code Coverage is that the free tools don't
support it, which I think is a shame.
On Wednesday, September 25, 2019 at 12:55:59 PM UTC-4, HT-Lab wrote:
> There is absolutely no point in running code coverage on a netlist
> with primitives; the results are meaningless. Perhaps toggle coverage
> might have some use for estimating power consumption, but even that
> is generally not recommended (custom tools/functions are more
> accurate).

Why? The same issue exists: how much of the design does the test simulation actually test? If it is useful to know how much of the HDL is covered in simulation, it would be just as useful to know how much of the design is covered in post synthesis simulation.

> I find Code Coverage very useful even on small designs.
> [snip]
> The only problem with Code Coverage is that the free tools don't
> support it, which I think is a shame.

As I said, I use a different test technique. Code coverage tells you if the tests detect the correct operation of a line of code. It doesn't tell you if the test is the right test or not. That is the part I care about. No point in testing a line of code if that line of code is not doing something called for in the requirements.

We find it much more important to know the coverage of tests performed on hardware, to verify it is working. In that case the design has already been verified to be a correct design. Now we want to know how much of the design is actually tested by the tests. Not at all the same thing as code coverage, since there is not a direct correspondence between hardware and lines of code.

When you find a mismatch between the code and the test coverage of that code, which do you fix, the code or the tests?
> When you find a mismatch between the code and the test coverage of
> that code, which do you fix, the code or the tests?

Both the code and the tests should be written based on some requirements. Still, the information on whether there is any mismatch is something that is worth having.