
Comments (6)

jonaslagoni commented on June 9, 2024

I agree we need to make a few changes.

I think we should keep all the test cases we have for the processor but move them to black-box testing, so we only test the output of the generated models, since that tells us whether something is wrong. These tests have saved me a few times, so I don't think there is any way around them. The problem is that if you change something in the model generator you still need to update them, but IMO this is still the better alternative.

from modelina.

magicmatatjahu commented on June 9, 2024

> I think we should keep all the test cases we have for the processor but move them to black-box testing, so we only test the output of the generated models, since that tells us whether something is wrong. These tests have saved me a few times, so I don't think there is any way around them. The problem is that if you change something in the model generator you still need to update them, but IMO this is still the better alternative.

But then you have the same problem I described: you change something slightly inside the implementation (like a new simplifier) and then you probably have to update the whole test suite. In my approach, you test individual properties, like this (maybe this is what you have in mind):

expect(models[0].type).toEqual(...)

At the moment our tests look like manually created snapshots, which is very bad. If it is already painful for us, think what contributors will feel. It took me 2 hours to implement the simplifier for names; the remaining 2 days went into fixing the tests.
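The contrast being drawn here can be sketched roughly as follows. The model shape below is hypothetical (not Modelina's actual interface), and Node's `deepStrictEqual`/`strictEqual` stand in for Jest's `toEqual`/`toBe`:

```typescript
import { deepStrictEqual, strictEqual } from "assert";

// Hypothetical shape of a generated model -- NOT Modelina's real interface.
interface CommonModel {
  type: string;
  properties: Record<string, { type: string }>;
  originalInput: unknown; // internal detail that shifts with every simplifier tweak
}

const model: CommonModel = {
  type: "object",
  properties: { name: { type: "string" } },
  originalInput: { raw: "schema" },
};

// Manual-snapshot style: asserts the entire structure, so it breaks
// whenever any internal field changes, even ones the test doesn't care about.
deepStrictEqual(model, {
  type: "object",
  properties: { name: { type: "string" } },
  originalInput: { raw: "schema" },
});

// Targeted style: asserts only the behavior under test, so an unrelated
// change to originalInput leaves these assertions untouched.
strictEqual(model.type, "object");
deepStrictEqual(model.properties.name, { type: "string" });
```

The targeted assertions keep passing when an internal field like `originalInput` changes shape; the full deep comparison has to be rewritten every time.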


jonaslagoni commented on June 9, 2024

To me this is not a solution. Imagine you change the structure of the output, or remove a field; then you are back to having to update all the tests, and with that change, adapting the tests will IMO be far worse than it is now. But I agree that we should move on from testing the simplifier output, so we don't test JSON output but actual files from the generator, and then use Jest's snapshot system if that provides a better way to see changes.
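For reference, a minimal sketch of how a Jest-style snapshot flow behaves (the `matchSnapshot` helper and `generate` function here are hypothetical stand-ins; Jest's real `toMatchSnapshot()` does this bookkeeping for you):

```typescript
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

// First run records the output to disk; later runs compare against the file.
function matchSnapshot(name: string, actual: string, dir: string): boolean {
  const file = path.join(dir, `${name}.snap`);
  if (!fs.existsSync(file)) {
    fs.writeFileSync(file, actual); // first run: record the snapshot
    return true;
  }
  return fs.readFileSync(file, "utf8") === actual; // later runs: compare
}

const dir = fs.mkdtempSync(path.join(os.tmpdir(), "snap-"));
// Hypothetical generator output -- not Modelina's actual API.
const generate = (): string => "class Root { private name?: string; }";

const firstRun = matchSnapshot("root-model", generate(), dir);     // records, passes
const unchanged = matchSnapshot("root-model", generate(), dir);    // same output, passes
const drifted = matchSnapshot("root-model", "class Changed {}", dir); // drifted, fails
console.log(firstRun, unchanged, drifted);
```

The upside is that updating expectations is one command away; the downside is exactly the one raised below: the update is so easy that nobody reviews the diff.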

Also, updating the expected outcomes can be done with a debugger: simply copy the entire actual output, paste it into the expected output, review what changed in git, and commit it. I agree that testing the input-processing output is far from perfect, but it is necessary to test at least the expected output from the generator.

We need to know how changes to the library affect the outcome for all kinds of situations and combinations of schema files. Most unit tests should catch something like this, but some things you just can't catch in unit testing.


magicmatatjahu commented on June 9, 2024

> To me this is not a solution. Imagine you change the structure of the output, or remove a field; then you are back to having to update all the tests.

You must do the same when tests are based on raw JSON data, as they are currently, so this is a problem in both my idea and yours, and we cannot avoid it.

> Also, updating the expected outcomes can be done with a debugger: simply copy the entire actual output, paste it into the expected output, review what changed in git, and commit it.

And then what? :) People will probably just paste the output into the expected folder and the tests will pass, and then we also have a requirement in the contribution guide to install additional Jest plugins, etc. Also, snapshot-based tests were never a good option in a previous project: someone updated a snapshot without checking the changes (the reviewer didn't check either) and we had a broken production. When we switched to unit and small integration tests as described below, if someone removed some tests (because we changed logic), the reviewer saw it.

> We need to know how changes to the library affect the outcome for all kinds of situations and combinations of schema files. Most unit tests should catch something like this, but some things you just can't catch in unit testing.

I agree with that, but I would prefer to have one big integration test that does something similar to what we have now, not 20-30 tests checking originalInput, models, and an unused customization field :) Also, this integration test should be split into smaller parts. For example, if we have:

{
  "models": {
    "ArrayType": {},
    "UnionType": {},
    "Tuple": {},
    "Extend": {}
  }
}

then you can keep each expected model in a separate file and test them like this:

expect(models["ArrayType"]).toEqual(...path)
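A rough sketch of that split, assuming a hypothetical `models` map keyed by model name and one committed expectation file per model (the paths and model shapes are illustrative only; here the expected files are written by the script itself so the sketch is self-contained):

```typescript
import { deepStrictEqual } from "assert";
import * as fs from "fs";
import * as os from "os";
import * as path from "path";

const expectedDir = fs.mkdtempSync(path.join(os.tmpdir(), "expected-"));

// Stand-in for the generator's output, keyed like the JSON above.
const models: Record<string, object> = {
  ArrayType: { type: "array", items: { type: "string" } },
  UnionType: { type: ["string", "number"] },
};

// In a real repo these files would be committed fixtures, one per model.
for (const name of Object.keys(models)) {
  fs.writeFileSync(
    path.join(expectedDir, `${name}.json`),
    JSON.stringify(models[name], null, 2)
  );
}

// One small, reviewable assertion per model instead of one giant
// raw-JSON comparison covering every internal field at once.
for (const name of Object.keys(models)) {
  const expected = JSON.parse(
    fs.readFileSync(path.join(expectedDir, `${name}.json`), "utf8")
  );
  deepStrictEqual(models[name], expected);
}
console.log("all model expectations passed");
```

Because each model has its own fixture file, a change to `ArrayType` handling shows up in git as a diff to exactly one small file, which is easier to review than a diff inside one large combined snapshot.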


jonaslagoni commented on June 9, 2024

> And then what? :) People will probably just paste the output into the expected folder and the tests will pass, and then we also have a requirement in the contribution guide to install additional Jest plugins, etc. Also, snapshot-based tests were never a good option in a previous project: someone updated a snapshot without checking the changes (the reviewer didn't check either) and we had a broken production. When we switched to unit and small integration tests as described below, if someone removed some tests (because we changed logic), the reviewer saw it.

Hmm, yeah, I can see this being a thing; I just fear that one big integration test won't catch anything, or at least will catch very few problems.

I don't see how we can rewrite them without creating more clutter, but if you have a clear vision of how to do it, feel free to do so 😄


github-actions commented on June 9, 2024

This issue has been automatically marked as stale because it has not had recent activity 😴
It will be closed in 60 days if no further activity occurs. To unstale this issue, add a comment with detailed explanation.
Thank you for your contributions ❤️

