Comments (6)
I agree we need to make a few changes.
I think we should keep all the test cases we have for the processor but move them to blackbox testing, so we only test the output of the generated models, since that tells us whether something is wrong. These tests have saved me a few times, so I don't think there is any way around them. The problem is that if you change something in the model generator you still need to update them, but IMO this is still the better alternative.
from modelina.
> I think we should keep all the test cases we have for the processor but move them to blackbox testing, so we only test the output of the generated models, since that tells us whether something is wrong. These tests have saved me a few times, so I don't think there is any way around them. The problem is that if you change something in the model generator you still need to update them, but IMO this is still the better alternative.
But then you have the same problem I described. You change something slightly inside the implementation (like a new simplifier) and then you probably have to update all the tests. In my approach, you test everything like this (maybe this is what you have in mind):
expect(models[0].type).toEqual(...)
At the moment we have tests that look like manually created snapshots, which is very bad. If it is already bad for us, think about what people will feel when contributing. It took me 2 hours to implement the simplifier for names; the remaining 2 days went into correcting the tests.
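The assertion style suggested above could be sketched as follows. This is a minimal, self-contained sketch: the `Model` interface and `generateModels` function are simplified stand-ins, not Modelina's real `CommonModel` or processor API.

```typescript
// Simplified stand-in for a processed model; the real Modelina
// CommonModel carries many more fields.
interface Model {
  $id: string;
  type: string;
}

// Placeholder generator so the sketch runs on its own; in a real test
// this would be Modelina's input processor.
function generateModels(schema: { type: string }): Model[] {
  return [{ $id: 'Root', type: schema.type }];
}

// Targeted assertion: only the field under test is checked, so an
// unrelated internal change (e.g. a new simplifier) does not force
// rewriting the whole expected output. In a Jest suite this line
// would be: expect(models[0].type).toEqual('object');
const models = generateModels({ type: 'object' });
console.assert(models[0].type === 'object');
```

The point of the style is that each test pins down one behavior, so refactoring the generator's internals only breaks the tests whose observable behavior actually changed.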
To me this is not a solution. Imagine you change the structure of the output, or remove a field? Then you are back to having to update all the tests, and with this change, adapting the tests will IMO be far worse than it is now. But I agree that we should move away from testing the simplifier output, so we don't test JSON output but actual files in the generator. And then use a snapshot system in Jest if that provides a better way to see changes.
Also, changing the outcomes can be done using a debugger: you simply copy the entire actual output, paste it into the expected output, see what changed in git, and then commit it. Now I agree that testing the input-processing output is far from perfect, but it is necessary to test at least the expected output from the generator.
We need to know how changes to the library affect the outcome of all kinds of situations and combinations of schema files. Of course most unit tests should catch something like this, but some things you just can't catch in unit testing.
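The blackbox approach described here (testing actual generated files, not intermediate JSON) could look roughly like this. The fixture content and file names are illustrative assumptions, not Modelina's real layout:

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Blackbox check: compare the generator's rendered output against a
// committed fixture file instead of asserting on intermediate JSON.
function matchesFixture(rendered: string, fixturePath: string): boolean {
  return fs.readFileSync(fixturePath, 'utf8') === rendered;
}

// Demo with a throwaway fixture; in the repo this would be a committed
// file that the generator's output is diffed against. The fixture
// content here is a made-up example.
const dir = fs.mkdtempSync(path.join(os.tmpdir(), 'blackbox-'));
const fixturePath = path.join(dir, 'Root.expected.ts');
fs.writeFileSync(fixturePath, 'export interface Root { id: string; }\n');
const ok = matchesFixture('export interface Root { id: string; }\n', fixturePath);
console.assert(ok === true);
```

In Jest, `toMatchSnapshot()` automates the same update-and-diff loop: the first run records the output, later runs diff against it, and `jest --updateSnapshots` rewrites it so the change shows up in git review.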
> To me this is not a solution. Imagine you change the structure of the output, or remove a field? Then you are back to having to update all the tests.
You have to do the same when tests are based on raw JSON data, as they are currently, so this problem exists in both my idea and yours, and we cannot avoid it.
> Also, changing the outcomes can be done using a debugger: you simply copy the entire actual output, paste it into the expected output, see what changed in git, and then commit it.
And then what? :) People will probably paste the output into the `expected` folder and the tests will pass, and then we also have a requirement in the contribution guide to have additional plugins for Jest, etc. Also, snapshot-based tests were never a good option in a previous project, because someone updated a snapshot without checking the changes (the reviewer didn't either) and we had broken production. When we switched to unit and small integration tests as described below, and someone removed some tests (because we changed the logic), the reviewer saw it.
> We need to know how changes to the library affect the outcome of all kinds of situations and combinations of schema files. Of course most unit tests should catch something like this, but some things you just can't catch in unit testing.
I agree with that, but I would prefer to have one big integration test that does something similar to what we have currently, not 20-30 tests checking `originalInput`, `models` and the unused `customization` field :) Also, this integration test should be split into smaller parts. For example, if we have:
{
  "models": {
    "ArrayType": {},
    "UnionType": {},
    "Tuple": {},
    "Extend": {}
  }
}
then you can have separate `expected` models written in files and test them this way:
expect(models["ArrayType"]).toEqual(...path)
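The split-fixture idea above could be sketched like this: one expected file per model kind, so a change to one simplifier only touches one fixture. The directory layout and file names are hypothetical:

```typescript
import * as fs from 'fs';
import * as os from 'os';
import * as path from 'path';

// Load the expected model for one kind ("ArrayType", "UnionType", ...)
// from its own JSON file.
function loadExpected(dir: string, modelName: string): unknown {
  const file = path.join(dir, `${modelName}.json`);
  return JSON.parse(fs.readFileSync(file, 'utf8'));
}

// Demo: write one throwaway fixture and read it back. In a Jest test
// this would become:
//   expect(models['ArrayType']).toEqual(loadExpected(fixtureDir, 'ArrayType'));
const fixtureDir = fs.mkdtempSync(path.join(os.tmpdir(), 'modelina-fixtures-'));
fs.writeFileSync(
  path.join(fixtureDir, 'ArrayType.json'),
  JSON.stringify({ type: 'array' }),
);
const expected = loadExpected(fixtureDir, 'ArrayType') as { type: string };
console.assert(expected.type === 'array');
```

A failure then names the exact model kind that regressed, instead of producing one huge diff over the whole output.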
> And then what? :) People will probably paste the output into the `expected` folder and the tests will pass, and then we also have a requirement in the contribution guide to have additional plugins for Jest, etc. Also, snapshot-based tests were never a good option in a previous project, because someone updated a snapshot without checking the changes (the reviewer didn't either) and we had broken production. When we switched to unit and small integration tests as described below, and someone removed some tests (because we changed the logic), the reviewer saw it.
Hmm, yeah, I can see this being a thing; I just fear that one big integration test won't catch anything, or at least will catch very few problems.
I don't see how we can rewrite them without creating more clutter, but if you have a clear vision of how to do it, feel free to do so 😄
This issue has been automatically marked as stale because it has not had recent activity 😴
It will be closed in 60 days if no further activity occurs. To unstale this issue, add a comment with detailed explanation.
Thank you for your contributions ❤️