
[FR] Test statistics #5995

Closed
2 tasks done
martonmiklos opened this issue Nov 27, 2023 · 31 comments · Fixed by #7164
Labels
build Build orders enhancement This is a suggested enhancement or new feature inactive Indicates lack of activity
Milestone

Comments

@martonmiklos
Contributor

Please verify that this feature request has NOT been suggested before.

  • I checked and didn't find a similar feature request

Problem statement

It would be great if statistics could easily be generated from test results.
I am not only thinking of the per-part perspective suggested in #821, but also of other, higher-level perspectives (per test template, for example) or in general (see later).

What I am thinking about:

Test failure type analysis:
While reading the test reporting documentation I realized that we use the test result concept somewhat differently from how it was originally designed:
We run test sequences (composed of several test steps). If any test step fails, the sequence terminates immediately and is logged in the following manner: I store the name of the failed step in the value field. If all steps pass, a passed result is logged without any value.
I do not know whether this way of using InvenTree should be supported, but logging each step as a separate test result would not make much sense in our case (we have looped test steps with a few thousand iterations, for example).
Anyway: it would be good if test results could be grouped and counted by failure reason. Whether we use the value field for this calculation or introduce a dedicated failure reason field for the test run is up for discussion. The calculation could be filtered by test run date (time interval), part type or test template.
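
To make it concrete, this is roughly the query I have in mind (only a sketch; the result/value/date fields are how I understand the current StockItemTestResult model, so treat the details as placeholders):

from datetime import timedelta

from django.db.models import Count
from django.utils import timezone

from stock.models import StockItemTestResult

# Sketch: count failed results per "value" (which we use for the failed step name)
# over the last 60 days. A dedicated "failure reason" field would replace "value" here.
end_date = timezone.now()
start_date = end_date - timedelta(days=60)

failure_counts = (
    StockItemTestResult.objects
    .filter(result=False, date__range=(start_date, end_date))
    .values('value')
    .annotate(count=Count('pk'))
    .order_by('-count')
)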

Retest statistics
Sometimes it is important to know how "smoothly" testing is going on the manufacturing floor. It would be great if the test results could include a retest field, which would be false for the first test result saved for a specific stock item and test template, and would be set to true if the test is executed again for the same stock item. From this retest flag we could derive some statistics about retest rates to see how stable testing is on the floor.

Test equipment statistics
It would be great if test equipment information (test station number, for example) could be logged with each test result, and some metrics could be created to see how heavily the specific stations are utilized. (See next point.)

Execution time statistics
There are tests which require operator intervention, which affects the capacity of the testing process. It would be great if the test execution time (start/stop) could be stored for each test result, and some statistics (min, max, average, distribution of the test time) could be created per test template (with filtering capability on time intervals or specific build orders). It would also be useful to accumulate the test times grouped by test station to see the utilization of each station.
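
Roughly what I mean, as a sketch (the started/finished fields do not exist today - they are exactly the new data I am proposing to store, and the template filter is likewise an assumption):

from django.db.models import Avg, DurationField, ExpressionWrapper, F, Max, Min

from stock.models import StockItemTestResult


def test_timing_stats(template):
    """Min/max/average execution time for one test template (sketch only)."""
    # "started" and "finished" are hypothetical new fields on the test result
    duration = ExpressionWrapper(
        F('finished') - F('started'), output_field=DurationField()
    )

    return (
        StockItemTestResult.objects
        .filter(template=template)  # assumes a link to the test template
        .annotate(duration=duration)
        .aggregate(
            min_time=Min('duration'),
            max_time=Max('duration'),
            avg_time=Avg('duration'),
        )
    )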

Suggested solution

First I would like to discuss with the fellow developers whether storing more data (test time, station info, failure step) makes sense. I know that our "InvenTree test adoption" went in a somewhat different direction from how InvenTree is used elsewhere, but I think these additions would make sense for many other users as well.
If we agree upon something in this area, I think this is something I can help to move forward.

I know that the timing for new UI features is not the best at the moment (in the middle of the UI transition), however for my own usage I do not need the web UI at first: I would like to create a daily/weekly test statistics digest e-mail, generated by an external tool using the InvenTree API.

The API design/discussion/implementation is something I think I could help with.

Describe alternatives you've considered

Examples of other systems

I am not aware of any.

Do you want to develop this?

  • I want to develop this.
@martonmiklos martonmiklos added enhancement This is a suggested enhancement or new feature triage:not-checked Item was not checked by the core team labels Nov 27, 2023
@SchrodingersGat SchrodingersGat added build Build orders and removed triage:not-checked Item was not checked by the core team labels Dec 6, 2023
@SchrodingersGat
Member

@martonmiklos some very interesting ideas here, it aligns somewhat with our proposed expansion of MES functionality.

Some of the particulars here would be good as a plugin, but there is certainly room for expansion and improvement on the existing functionality. The stock item testing framework has received very little development over the last couple of years.

First I would like to discuss with the fellow developers whether storing more data (test time, station info, failure step) makes sense.

This could be added in the metadata for the test result itself, as there may be a wide range of data that different companies want to record. However, if there is some generic data which we can agree would be generally useful, I'm happy to add that in.
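
Something like this (just a sketch from memory - the exact endpoint paths and field names would need to be checked against the current API):

import requests

API = "https://inventree.example.com/api"  # placeholder instance
HEADERS = {"Authorization": "Token <api-token>"}

# Record the test result as usual...
result = requests.post(
    f"{API}/stock/test/",
    headers=HEADERS,
    json={
        "stock_item": 1234,
        "test": "Functional Test",
        "result": False,
        "value": "step_17_overvoltage",
    },
).json()

# ...then attach any company-specific data (station, duration, etc) as metadata.
# Assumes the standard per-model metadata endpoint is exposed for test results.
requests.patch(
    f"{API}/stock/test/{result['pk']}/metadata/",
    headers=HEADERS,
    json={"metadata": {"station": "ATE-03", "duration_s": 42.5}},
)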

It would be great if the test results could include a retest field, which would be false for the first test result saved for a specific stock item and test template, and would be set to true if the test is executed again for the same stock item.

The existing implementation keeps all recorded test results, so you can already determine how many pass / fail results there are against a particular test case. Unless I am misreading this?


In general, I am very keen to expand functionality here, and providing some measure of statistical analysis would be a great feature. If you have some specific implementation ideas that you want to suggest (in terms of fields, API access, etc) then I'd suggest making a PR proposal and we can work from there?

@martonmiklos
Contributor Author

martonmiklos commented Dec 6, 2023

The existing implementation keeps all recorded test results, so you can already determine how many pass / fail results there are against a particular test case. Unless I am misreading this?

Yes, but I would like to "cache" the retest flag to be able to query it efficiently, i.e. if I want to produce retest statistics for, say, the past 2 months (10 parts, 3000 test results), then it is just a matter of a count in SQL. If it is not cached, we would need to go through each stock item and count the individual results per test template, etc...
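
For comparison, a sketch of what the uncached calculation looks like versus the cached flag (field names are as I understand the current model, so treat them as approximate):

from datetime import timedelta

from django.db.models import Count
from django.utils import timezone

from stock.models import StockItemTestResult

start_date = timezone.now() - timedelta(days=60)

# Uncached: group results per (stock item, test) pair; anything beyond the
# first result in a group is a retest.
groups = (
    StockItemTestResult.objects
    .filter(date__gte=start_date)
    .values('stock_item', 'test')
    .annotate(runs=Count('pk'))
)
retest_count = sum(g['runs'] - 1 for g in groups)

# Cached: with a stored (hypothetical) retest flag, the same number is a single
# indexed count:
# retest_count = StockItemTestResult.objects.filter(
#     date__gte=start_date, retest=True).count()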

If you have some specific implementation ideas that you want to suggest (in terms of fields, API access, etc) then I'd suggest making a PR proposal and we can work from there?

Sounds like a plan!

I missed one thing that was on my mind previously: a test status view for a build output:

This would be a view where the test templates are listed for the part, and for each test template the tested/passed/failed/retested counts are shown for a given build order. With this view we could easily get an overview of how the build outputs are progressing through testing.
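
As a sketch, the data behind such a view could be gathered something like this (assuming the stock item's build link and the current field names; the real relations may need more care):

from django.db.models import Count, Q

from stock.models import StockItemTestResult


def build_test_matrix(build):
    """Pass/fail counts per test, across all outputs of one build order (sketch)."""
    return (
        StockItemTestResult.objects
        .filter(stock_item__build=build)
        .values('test')
        .annotate(
            total=Count('pk'),
            passed=Count('pk', filter=Q(result=True)),
            failed=Count('pk', filter=Q(result=False)),
        )
    )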

Contributor

github-actions bot commented Feb 5, 2024

This issue seems stale. Please react to show this is still important.

@github-actions github-actions bot added the inactive Indicates lack of activity label Feb 5, 2024
@martonmiklos
Contributor Author

Not stale

@github-actions github-actions bot removed the inactive Indicates lack of activity label Feb 6, 2024
Contributor

github-actions bot commented Apr 6, 2024

This issue seems stale. Please react to show this is still important.

@github-actions github-actions bot added the inactive Indicates lack of activity label Apr 6, 2024
@martonmiklos
Contributor Author

I am still working on it

@github-actions github-actions bot removed the inactive Indicates lack of activity label Apr 8, 2024
@martonmiklos
Contributor Author

martonmiklos commented Apr 11, 2024

Hi @SchrodingersGat @matmair

I started to implement new API endpoints for generating a pass/fail statistics table broken down by part, build and test template.

As the test results belong under stock, I added the following API endpoints:
https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1765

I added stock/test-statistics/[by-part/by-build/by-test-template] endpoints.

What do you think about this approach?
Shall I move the first two API endpoints to part/build?

On the other hand, I am struggling with the filtering: I would like the test statistics to be filterable by date.
I tried everything here:
https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1360

But the filtering UI did not become available in the REST UI:
[screenshot]
Nor did the URL filtering work.

I do not even know whether the approach I am trying is the correct way to implement this.
The actual "result matrix" generation is done here:
https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/models.py#L2421

I appreciate any feedback!

@SchrodingersGat
Member

@martonmiklos I have not had time to review this fully but a few points I'd like to make initially:

  • Rather than expose the same API view multiple times, can you do the same thing by exposing a single endpoint and providing different query parameters to achieve different outputs?
  • For filtering, look at how we use the FilterSet approach - it greatly simplifies things!

But the filtering UI did not become available in the REST UI:

You need to use FilterSet - anything you add in the filter_queryset method is hidden from introspection by the Django REST framework.
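
Something along these lines (a minimal sketch only - the model, serializer and date field names are placeholders to show the shape of the FilterSet approach):

from django_filters import rest_framework as rest_filters
from rest_framework import generics

from stock.models import StockItemTestResult
from stock.serializers import StockItemTestResultSerializer


class TestStatisticsFilter(rest_filters.FilterSet):
    """Declares the date range as real query parameters, so DRF can introspect them."""

    class Meta:
        model = StockItemTestResult
        fields = []

    finished_after = rest_filters.DateFilter(field_name='date', lookup_expr='gte')
    finished_before = rest_filters.DateFilter(field_name='date', lookup_expr='lte')


class TestStatisticsList(generics.ListAPIView):
    """Filters attached via filterset_class show up in the browsable API and schema."""

    queryset = StockItemTestResult.objects.all()
    serializer_class = StockItemTestResultSerializer
    filter_backends = [rest_filters.DjangoFilterBackend]
    filterset_class = TestStatisticsFilter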

@martonmiklos
Contributor Author

* Rather than expose the same API view multiple times, can you do the same thing by exposing a single endpoint and providing different query parameters to achieve different outputs?

Done, I changed it to use the api/stock/test-statistics/by-part and api/stock/test-statistics/by-build API base URLs.

* For filtering, look at how we use the [FilterSet](https://github.com/inventree/InvenTree/blob/40e867896bb8a4fb8fce84a0003bdac323ff8666/src/backend/InvenTree/part/api.py#L922) approach - it greatly simplifies things!

But the filtering UI did not become available in the REST UI:

You need to use FilterSet - anything you add in the filter_queryset method is hidden from introspection by the Django REST framework.

Yes, I tried to replicate that: I created a very basic FilterSet with two additional filter fields on the finished date:
https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1312

And attached it via filterset_class:
https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1336

However it is not shown in the API UI or in the schema.
What did I do wrong?

@SchrodingersGat
Member

@martonmiklos that looks correct to me - but it does not show up in the DRF browsable API?

@martonmiklos
Contributor Author

@SchrodingersGat yes:
[screenshot]

I am under the impression that I am going against Django's queryset/filterset paradigm.

I expose the StockItemTestResult queryset here:
https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/api.py#L1333

However, what I do "back in the model" after TestStatistic::get -> TestStatisticSerializer::to_representation -> StockItemTestResult::part/build_test_statistics is the following:

I run queries by part/build to retrieve StockItemTestResults and then count them with a manually built query:
https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/InvenTree/stock/models.py#L2423

If I understand correctly, I should somehow expose these queries via TestStatistic::queryset and then only apply the passed/failed/part/build filters there.

I think I could work around the whole issue by flipping everything upside down and moving most of the logic to the client side: query the test templates for a part, then query the passed/failed results by part for each template.

@SchrodingersGat
Member

I'm at a loss - your implementation looks fine, not sure why it is not working as is :|

@martonmiklos
Contributor Author

Well, I could access the query parameters and "smuggle" them down to the model method, but it would still be good to expose them in the API. Originally I wanted to create a "weekly test digest email" with an external tool, and for that it would be good if these parameters were part of the generated API schema.

@martonmiklos
Contributor Author

Okay, I dug a bit deeper: the issue is caused by "awkward" request handling.

If I bypass the serializer, the filters appear:
[screenshot]

However, I do not know how this could be solved. Maybe @matmair has a recommendation.

@matmair
Member

matmair commented Apr 23, 2024

On a quick look through this seems to be correct, I will try to debug it before the release window closes

@martonmiklos
Contributor Author

On a quick look through this seems to be correct, I will try to debug it before the release window closes

It is not that urgent, I have not even opened a PR yet as I wanted to include a few more related things:

  • PUI support
  • Add date filters in the view for quick filtering on last week, last month

@martonmiklos
Contributor Author

Ah folks you won't believe it...

The reason it did not work is that I returned the data from the model in a dict, not in an array.
[screenshot]

After wrapping the data in an array it works like a charm:
[screenshot]

@martonmiklos
Contributor Author

@SchrodingersGat , @matmair
We have reached the most important question:
[screenshot]

What icon shall we put to the "Test statistics" menu?

@matmair
Member

matmair commented Apr 29, 2024

@martonmiklos TBH I think the current icon is fine? Do we really need another one?

@martonmiklos
Contributor Author

@martonmiklos TBH I think the current icon is fine? Do we really need another one?

@matmair having the same icon as the Test templates menu item above felt somewhat strange to me. But I am fine with that.

@matmair
Member

matmair commented Apr 29, 2024

I have no preference either way

@SchrodingersGat
Member

@martonmiklos for the existing user interface you could use something like:

Icon               Image
fas fa-vials       [image]
fas fa-chart-line  [image]

For the React interface, which uses tabler.io icons, perhaps:

Icon                     Image
<IconReportAnalytics />  [image]

@martonmiklos
Contributor Author

@martonmiklos for the existing user interface you could use something like:

@SchrodingersGat Cool, thanks, I will use the last two.

One more technical question: I just discovered that the response fields do not get propagated to the API descriptor:
[screenshot]

Can you give some guidance on how to do it?

I saw some @extend_schema magic like this elsewhere in the code:

@extend_schema_view(
    post=extend_schema(
        responses={200: OpenApiResponse(description='User successfully logged out')}
    )
)

but I have not found an example for adding fields. Maybe I am thinking in the wrong direction?

@wolflu05
Contributor

wolflu05 commented May 1, 2024

You can do this like this:

@extend_schema(
    responses={200: MachineSerializers.MachineSettingSerializer(many=True)}
)

@martonmiklos
Contributor Author

Hi @matmair , @SchrodingersGat

May I ask for some React/mantine datatable help?
For the test statistics I would need dynamic columns, however I have not found any example in the wild for that use case with mantine datatable.

I have tried to alter the table's columns in the dataFormatter callback, but if I do that I get an error:
[screenshot]
which I have no clue how to track down (I have virtually zero experience with TypeScript/VSCode).
If I use static columns (as committed right now) the table loads fine.

@wolflu05
Contributor

wolflu05 commented May 4, 2024

Can you please show us how you tried to modify the table columns? I can only guess you tried to mutate the return value of the useMemo, but that should be treated as immutable; you would need to put the columns into a useState and use the second returned setter function. And regarding your error, I think that's something on the mantine datatable side, so don't worry. It only affects you if you loop over a list to render elements and do not provide a unique key prop for the component you render in the loop.

Edit: you may also want to take a look at how the parametric part table is done, because it has dynamic columns.

@martonmiklos
Contributor Author

Can you please show us how you tried to modify the table columns?

@wolflu05 many thanks for your quick response, I really appreciate it!

I forgot to include the link to my code, here it is; I have not used any useState/useMemo:
https://github.com/martonmiklos/InvenTree/blob/add_test_statistics/src/frontend/src/tables/stock/TestStatisticsTable.tsx#L106

Edit: you may also want to take a look at how the parametric part table is done, because it has dynamic columns.

Which table do you mean exactly?
This one?
[screenshot]

@wolflu05
Contributor

wolflu05 commented May 5, 2024

Have a look at https://github.com/inventree/InvenTree/blob/master/src/frontend/src/tables/part/ParametricPartTable.tsx

And yes, the way you are doing it will not work. That's not how React works. But the linked example should pretty much cover what you are trying to do.

@martonmiklos
Contributor Author

And yes, the way you are doing it will not work. That's not how React works. But the linked example should pretty much cover what you are trying to do.

Yeah, I am trying to get into react/typescript in "monkey style" and it is hard sometimes. 🤣

Anyway, I got it rolling:
[screenshot]

I will do some polishing and then I will open a PR.

martonmiklos added a commit to martonmiklos/InvenTree that referenced this issue May 5, 2024
martonmiklos added a commit to martonmiklos/InvenTree that referenced this issue May 5, 2024
martonmiklos added a commit to martonmiklos/InvenTree that referenced this issue May 8, 2024
martonmiklos added a commit to martonmiklos/InvenTree that referenced this issue May 16, 2024
martonmiklos added a commit to martonmiklos/InvenTree that referenced this issue May 22, 2024
martonmiklos added a commit to martonmiklos/InvenTree that referenced this issue May 24, 2024
martonmiklos added a commit to martonmiklos/InvenTree that referenced this issue Jun 24, 2024
martonmiklos added a commit to martonmiklos/InvenTree that referenced this issue Jun 24, 2024
martonmiklos added a commit to martonmiklos/InvenTree that referenced this issue Jul 3, 2024
Contributor

github-actions bot commented Jul 5, 2024

This issue seems stale. Please react to show this is still important.

@github-actions github-actions bot added the inactive Indicates lack of activity label Jul 5, 2024
@SchrodingersGat
Member

Not stale

@SchrodingersGat SchrodingersGat added this to the 0.16.0 milestone Jul 5, 2024
matmair added a commit to matmair/InvenTree that referenced this issue Jul 31, 2024
* fix style issue (inventree#7768)

* Pin OAS action (inventree#7767)

* [PUI] Bug fix for API forms (inventree#7758)

* [PUI] Bug fix for API forms

- Ensure that "blank" data does not get overriden with previous fields
- Better logic for form data fetching

* Playwright test fixes

* Add extra fields to BOM export (inventree#7775)

* Add extra fields to BOM export

* Bump API version

* Fix permissions for importing data (inventree#7776)

* Added test statistics (inventree#7164)

* Added test statistics

Fixed inventree#5995

* Bump API version

* Fix javascript varible scopes

* Fix javascript exports

* Remove duplicated import

* Add files modified by the pre-commit scripts

* Move test statistics API urls to a separate endpoint

* Merge test-statistics urls

* Undo unrelated changes

* Formatting fix

* Fix API urls for test statistics in PUI

* Fix prefixing in test statistic functions

---------

Co-authored-by: Oliver <[email protected]>

* Add more names

* split build and publish

* add attestation and SBOM

* format file

* Add toplevel permissions

---------

Co-authored-by: Oliver <[email protected]>
Co-authored-by: Miklós Márton <[email protected]>