Generate dashboard using test results #1137
base: main
Conversation
Force-pushed fb61c27 to 72a3d9d
Force-pushed 72a3d9d to 66f3470
It would be nice to integrate some of that metric generation with https://github.com/hdl/bazel_rules_hdl and https://github.com/google/xls/tree/main/xls/build_rules so that we can easily generate / configure those reports from
Yes, it would be nice. However, can we consider adding more metrics in subsequent PRs?
Force-pushed 66f3470 to f5b7896
Force-pushed f5b7896 to 4e670a4
Curious if you've considered integrating with:
remove cpython and embedded_python_interpreter

Signed-off-by: Pawel Czarnecki <pczarnecki@antmicro.com>

Generate GDS for process technologies:
* ASAP7
* SKY130

Signed-off-by: Pawel Czarnecki <pczarnecki@antmicro.com>

Add GDS write examples for RLE encoder and decoder for process technologies:
* ASAP7
* SKY130

Signed-off-by: Pawel Czarnecki <pczarnecki@antmicro.com>
Internal-tag: [#46586]
Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
The library contains the XLSChannel, XLSChannelDriver and XLSChannelMonitor classes.

* XLSChannel - wraps all signals related to an XLS channel into one object
* XLSChannelDriver - may be used to send data to an XLS channel
* XLSChannelMonitor - may be used to monitor transactions taking place on an XLS channel

Internal-tag: [#46586]
Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
…ation

This commit adds a simple DSLX module that sends back the information received on the input channel. The example contains tests written in DSLX to verify the IR, as well as tests that use the Cocotb framework to validate the behavior of the generated Verilog sources.

Internal-tag: [#46586]
Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
- `jsonschema` is used to validate that the data provided to the dashboard generation script is valid Dashboard JSON, as defined by a dedicated schema
- `mdutils` is used to generate markdown from the obtained Dashboard JSON data
- `mkdocs` and `mkdocs-material` are used to generate an HTML website out of the markdown files previously generated from the Dashboard JSON

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
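As a rough sketch of the `jsonschema`-based validation step — the schema fields below are invented for illustration and are not the actual Dashboard JSON schema, which lives in `validate_dashboard_json.py`:

```python
import jsonschema

# Hypothetical, simplified stand-in for the Dashboard JSON schema.
DASHBOARD_SCHEMA = {
    "type": "object",
    "properties": {
        "name": {"type": "string"},
        "entries": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "test": {"type": "string"},
                    "passed": {"type": "boolean"},
                },
                "required": ["test", "passed"],
            },
        },
    },
    "required": ["name", "entries"],
}

def is_valid_dashboard_json(data):
    """Return True if `data` matches the (hypothetical) schema."""
    try:
        jsonschema.validate(instance=data, schema=DASHBOARD_SCHEMA)
        return True
    except jsonschema.ValidationError:
        return False
```

In the real script, a validation failure would presumably be reported to the user rather than swallowed into a boolean.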
- `dashboard.py` is the main script responsible for generating the dashboard. It uses the rest of the scripts as utilities.
- `run_and_parse.py` contains functions for running tests and parsing their output into the Dashboard JSON format
- `validate_dashboard_json.py` contains a function for validating that the provided JSON is in the Dashboard JSON format
- `json_to_markdown.py` converts the Dashboard JSON to a markdown document
- `mkdocs_creator.py` converts the markdown to HTML using mkdocs

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
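The division of labor above can be sketched roughly as follows; the function bodies and the Dashboard JSON fields (`name`, `entries`, `test`, `passed`) are assumptions for illustration, not the actual implementations:

```python
def run_and_parse(run_test, parse_log):
    """Sketch of the run-and-parse step: run a test (here a callable
    returning its log) and parse the log into Dashboard JSON data.
    In the real flow, the test and parser are external commands."""
    return parse_log(run_test())

def json_to_markdown(dashboard):
    """Simplified stand-in for json_to_markdown.py: render the
    (assumed) Dashboard JSON fields as a markdown page."""
    lines = ["# " + dashboard["name"]]
    for entry in dashboard["entries"]:
        status = "PASS" if entry["passed"] else "FAIL"
        lines.append("- {}: {}".format(entry["test"], status))
    return "\n".join(lines)

def generate_dashboard(run_test, parse_log):
    """Mirror of dashboard.py's control flow: run, parse, convert.
    (The real script also validates the JSON and runs mkdocs.)"""
    return json_to_markdown(run_and_parse(run_test, parse_log))
```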
This commit adds three parsers that the user can use to extract the data for creating a dashboard:

- `cocotb_results_xml_parser.py` extracts information about successful and failed cocotb tests from the result.xml saved by the test
- `dslx_test_parser.py` extracts information about successful and failed DSLX tests from the test log
- `generic_parser.py` retrieves Dashboard JSON data dumped directly to the log within special delimiters. To dump data in this format, one can use a dedicated function contained in `utils.py`.

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
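A minimal sketch of the delimiter-based extraction that `generic_parser.py` performs; the delimiter strings here are invented for illustration (the real ones are defined in `utils.py`):

```python
import json

# Hypothetical delimiters; the real ones live in utils.py.
BEGIN = "=== DASHBOARD JSON BEGIN ==="
END = "=== DASHBOARD JSON END ==="

def extract_dashboard_json(log):
    """Pull the JSON payload dumped between special delimiters in a log."""
    start = log.index(BEGIN) + len(BEGIN)
    end = log.index(END, start)
    return json.loads(log[start:end])
```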
Internal-tag: #[46111] Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
Internal-tag: #[46111] Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
The test checks the correctness of the encoding and measures both the delay and the performance of the core.

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
The dashboard contains results of DSLX and cocotb tests as well as delay and performance measurements obtained in the cocotb test.

Internal-tag: #[46111]
Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
Internal Tag: [#47739] Signed-off-by: Pawel Czarnecki <pczarnecki@antmicro.com>
Internal-tag: [#47739] Signed-off-by: Robert Winkler <rwinkler@antmicro.com>
Internal-tag: [#46111] Signed-off-by: Pawel Czarnecki <pczarnecki@antmicro.com>
Force-pushed 4e670a4 to f68952a
What do you think of moving this (alongside #1160) into a separate repo, similar to https://github.com/antmicro/xls-cosimulation-demonstrator?
This commit adds an initial version of the dashboard generation feature to XLS. To demonstrate the dashboard generation process, two examples are provided: a simple dashboard for a passthrough example and a more complex dashboard for a basic RLE encoder design.
The dashboard is generated from data extracted from tests or their output. To extract the data, users may use parsers provided in this PR or their custom scripts, which are responsible for extracting the information and saving it in a Dashboard JSON format. Custom parsers should read the log from stdin and write the extracted information to stdout.
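A custom parser following that contract could look roughly like this; the log format and the `name`/`entries` fields are assumptions for illustration, not the actual Dashboard JSON schema:

```python
import json
import sys

def parse_log(log):
    """Parse a (hypothetical) test log into Dashboard JSON data.

    Assumed log format: one "<test name> PASSED" or "<test name> FAILED"
    per line; any other line is ignored as noise.
    """
    entries = []
    for line in log.splitlines():
        parts = line.split()
        if len(parts) == 2 and parts[1] in ("PASSED", "FAILED"):
            entries.append({"test": parts[0], "passed": parts[1] == "PASSED"})
    return {"name": "custom", "entries": entries}

def main():
    # Per the contract above: read the log from stdin, write JSON to stdout.
    json.dump(parse_log(sys.stdin.read()), sys.stdout)
```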
The dashboard generation is done in a few steps. The whole process is controlled by the main `dashboard.py` script. Generating a dashboard works as follows:

1. The `dashboard.py` script is invoked with a set of parameters that specify two types of operations: parsing the output of a test (`-p` option) or parsing output files generated by the test (`-f` option). The arguments contain information about the test to run, the parser to use, and the potential file from which the data should be extracted. The script parses the arguments and saves them internally in a more convenient form.
2. The tests are then executed and parsed using the `run_and_parse` function from the `run_and_parse.py` file. For the `-p` option, the parser is run on the log produced by the test; for the `-f` option, the parser is run on the output file created by the test. The output files are assumed to be located in `$TEST_UNDECLARED_OUTPUTS_DIR`, which is the default location for output files produced by Bazel tests.
3. The parsed data in JSON format is collected and verified against the Dashboard JSON schema. Both the verification mechanism and the schema are available in the `validate_dashboard_json.py` file.
4. Next, the relevant information in the Dashboard JSON format is sorted and converted to a markdown file by the `json_to_markdown` function from the Python file of the same name.
5. Finally, the utilities from `mkdocs_creator.py` are used to produce an HTML website using `mkdocs`.

Since the dashboard relies on tests, it cannot be produced at build time. To generate the dashboard, one has to provide a path to which the HTML should be saved. For example:
Here is a screenshot of the RLE Dashboard:
Relies on changes from #1031
Resolves #1058
CC @proppy