ppb-vector's Issues

Flaky tests

Found some more flakiness by running Hypothesis for 6 hours (max_examples set to half a million):

=========================================== FAILURES ============================================
_____________________________________ test_trig_invariance ______________________________________

    @given(angle=angles(), n=st.integers(min_value=0, max_value=1e5))
>   def test_trig_invariance(angle: float, n: int):

tests/test_vector2_rotate.py:111: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.venv/lib/python3.7/site-packages/hypothesis/core.py:593: in execute
    % (test.__name__, text_repr[0])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <hypothesis.core.StateForActualGivenExecution object at 0x7f52d5e7ffd0>
message = 'Hypothesis test_trig_invariance(angle=82.80876812840495, n=44343) produces unreliable results: Falsified on the first call but did not on a subsequent one'

    def __flaky(self, message):
        if len(self.falsifying_examples) <= 1:
>           raise Flaky(message)
E           hypothesis.errors.Flaky: Hypothesis test_trig_invariance(angle=82.80876812840495, n=44343) produces unreliable results: Falsified on the first call but did not on a subsequent one

../.venv/lib/python3.7/site-packages/hypothesis/core.py:761: Flaky
------------------------------------------ Hypothesis -------------------------------------------
Falsifying example: test_trig_invariance(angle=82.80876812840495, n=44343)
δcos: -2.0268564604464245e-11
rel_max = 0.12518140611629808
diff = 2.0268564604464245e-11 = 1.619135399839974e-10 * rel_max
δsin: 2.557398737224048e-12
rel_max = 0.9921338697816041
diff = 2.557398737224048e-12 = 2.5776750649456224e-12 * rel_max
Unreliable test timings! On an initial run, this test took 643.91ms, which exceeded the deadline of 200.00ms, but on a subsequent run it took 0.08 ms, which did not. If you expect this sort of variability in your test timings, consider turning deadlines off for this test by setting deadline=None.
______________________________________ test_scalar_linear _______________________________________

    @given(scalar=floats(), x=vectors(), y=vectors())
>   def test_scalar_linear(scalar: float, x: Vector2, y: Vector2):

tests/test_vector2_scalar_multiplication.py:24: 
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _
../.venv/lib/python3.7/site-packages/hypothesis/core.py:593: in execute
    % (test.__name__, text_repr[0])
_ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _ _

self = <hypothesis.core.StateForActualGivenExecution object at 0x7f52cee241d0>
message = 'Hypothesis test_scalar_linear(scalar=4.539659888108493e+74, x=Vector2(4.539659888108493e+74, 0.0), y=Vector2(3.272143794426157e+74, -0.0)) produces unreliable results: Falsified on the first call but did not on a subsequent one'

    def __flaky(self, message):
        if len(self.falsifying_examples) <= 1:
>           raise Flaky(message)
E           hypothesis.errors.Flaky: Hypothesis test_scalar_linear(scalar=4.539659888108493e+74, x=Vector2(4.539659888108493e+74, 0.0), y=Vector2(3.272143794426157e+74, -0.0)) produces unreliable results: Falsified on the first call but did not on a subsequent one

../.venv/lib/python3.7/site-packages/hypothesis/core.py:761: Flaky
------------------------------------------ Hypothesis -------------------------------------------
Falsifying example: test_scalar_linear(scalar=4.539659888108493e+74, x=Vector2(4.539659888108493e+74, 0.0), y=Vector2(3.272143794426157e+74, -0.0))
Unreliable test timings! On an initial run, this test took 784.57ms, which exceeded the deadline of 200.00ms, but on a subsequent run it took 0.04 ms, which did not. If you expect this sort of variability in your test timings, consider turning deadlines off for this test by setting deadline=None.
=========================== 2 failed, 265 passed in 21859.03 seconds ============================
python3 -m pytest --hypothesis-profile ridiculous -v tests/test_vector2_*  21834.05s user 8.80s system 99% cpu 6:04:19.95 total

Need to investigate tomorrow.

Addition too strict

Addition should not type-check its argument; it should just try to treat it as a vector-like object.

try:
    # Prefer attribute access for objects exposing .x / .y.
    x = other.x
    y = other.y
except AttributeError:
    try:
        # Fall back to index access for sequences such as tuples and lists.
        x = other[0]
        y = other[1]
    except (IndexError, KeyError, TypeError):
        ...

There's not a good way to do this without getattr, dict.get, or index access.
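
As a rough illustration, here is a minimal sketch of such a duck-typed unpacking helper. The name _unpack and the exact set of exceptions caught are assumptions, not part of the current API:

    def _unpack(other):
        """Hypothetical helper: extract (x, y) from any vector-like object."""
        try:
            # Objects exposing .x / .y attributes (e.g. another vector class).
            return other.x, other.y
        except AttributeError:
            pass
        try:
            # Sequences (tuples, lists) or mappings keyed by 0 and 1.
            return other[0], other[1]
        except (LookupError, TypeError):
            pass
        try:
            # Mappings keyed by "x" and "y".
            return other["x"], other["y"]
        except (LookupError, TypeError):
            raise TypeError(f"{other!r} is not vector-like")

__add__ could then call this helper and return NotImplemented when it raises, instead of checking the type up front.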

Testing parameter names should be improved for readability

It took me a bit to realize l in #64 was length, and if I can't read it, newer devs who don't have domain knowledge definitely can't.

Let's avoid single character variable names everywhere we can. Exceptions include x and y inside much of the code.

Subtraction too strict

The current implementation would not work with an older version of the same class. It should not check the type, just attempt subtraction on vector-like objects.

Test Vector Subtraction

Add tests for vector subtraction between two Vector2s, and between a Vector2 and vector-like objects: tuples, lists, dicts, and other objects that support indexing or key access.
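
A sketch of what such tests could look like, assuming pytest is available and that ppb_vector is the import path (both assumptions; adjust to the actual test layout):

    import pytest
    from ppb_vector import Vector2  # assumed import path

    @pytest.mark.parametrize("other", [
        Vector2(1, 1),      # another Vector2
        (1, 1),             # tuple
        [1, 1],             # list
        {"x": 1, "y": 1},   # dict with "x"/"y" keys
    ])
    def test_subtraction_vector_like(other):
        assert Vector2(3, 3) - other == Vector2(2, 2)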

Add property-based testing

I think it would make sense, in many cases, to run tests not against hardcoded inputs (and expected outputs), but against random inputs (and properties/equations that should hold).
For instance, x.angle(x.rotate(alpha)) ≃ alpha for any vector x and angle alpha.
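
A minimal sketch of such a test with Hypothesis; the import path, strategy bounds, and tolerance are illustrative assumptions chosen to stay away from degenerate inputs:

    from hypothesis import given
    import hypothesis.strategies as st

    from ppb_vector import Vector2  # assumed import path

    # Deliberately conservative bounds for illustration: no zero-length
    # vectors, no huge magnitudes.
    coords = st.floats(min_value=1, max_value=1e3)

    @given(x=st.builds(Vector2, coords, coords),
           alpha=st.floats(min_value=-179, max_value=179))
    def test_rotation_angle(x: Vector2, alpha: float):
        # Rotating by alpha and measuring the angle back should recover alpha.
        assert abs(x.angle(x.rotate(alpha)) - alpha) < 1e-2

In practice, shared vectors() and angles() strategies (like those visible in the failure logs above) would be the natural building blocks for such properties.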

Given that ppb-vector is a math library, and written in an effects-free style, it seems to be a great fit for property-based testing, and I can recommend the Hypothesis property-based testing library.

It would probably make sense to first add some automation to track line- or path-coverage of the testsuite, ideally ran automatically in CI, but this isn't a strong blocker either.

Add type hinting

This does not need to happen all at once, but it's a reminder that we could use type hinting across the board.
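
For example, the public methods could carry signatures along these lines. The VectorLike alias is mentioned elsewhere in the project; its exact definition here is only an illustrative guess:

    from typing import Sequence, Union

    VectorLike = Union['Vector2', Sequence[float]]  # illustrative definition

    class Vector2:
        x: float
        y: float

        def __add__(self, other: VectorLike) -> 'Vector2': ...
        def rotate(self, degrees: float) -> 'Vector2': ...
        def angle(self, other: VectorLike) -> float: ...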

Add reflect method

A reflect method would be useful for managing bouncing physics.

Expected usage:

vector = Vector2(1, 1)
surface_normal = Vector2(-1, 0)
reflected_vector = vector.reflect(surface_normal)
assert reflected_vector == Vector2(-1, 1)
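
A minimal sketch of the method, assuming `*` performs the dot product between two vectors and scalar multiplication with a float, and that normalize() exists (as referenced in the projection issue below):

    def reflect(self, surface_normal: 'Vector2') -> 'Vector2':
        """Reflect self across a surface with the given normal: r = v - 2(v·n)n."""
        n = surface_normal.normalize()
        return self - n * (2 * (self * n))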

Drop Py3.6 support?

Now that Python 3.8 is out, and Python 3.7 is available starting in Debian 10 “Buster” (current stable) and Ubuntu 19.04 “Disco Dingo”, should we consider dropping Python 3.6 support?

Python 3.6 support currently relies on the dataclasses backport, which appears to have been essentially unmaintained since November 2018 and lacks important bugfixes. Those bugs block API improvements like #168.

Improve docstrings & type signatures

From #106 :

That's indeed not very nice. OTOH, help() is currently useless on Vector2 because:

  • Type constructs like VectorLike are expanded into an illegible mess.
  • Description & documentation are in the README rather than in docstrings.

I think it would be worth fixing, but it's way out of scope for this PR.

Move CI away from Travis

Since its acquisition, Travis CI fired most of its core engineering team, and the free service has become unbearably slow (dozens of minutes before a job even starts).

Given the impact on development velocity and QA (a couple of PRs were just merged without waiting for CI), and given the apparent lack of future for travis-ci.org, I suggest we move somewhere else.

I seem to recall @astronouth7303 suggesting Cirrus CI, which has native support for Linux, macOS, Windows and FreeBSD. As a plus, having something container-based means we can stop piling up hacks to deal with Travis' environments.

Test Vector addition

Add tests for vector addition between two Vector2s, and between a Vector2 and vector-like objects: tuples, lists, dicts, and other objects that support indexing or key access.

Make Vector hashable

Allow Vector to be used as dictionary keys.

While I don't expect to do much indexing by Vector, this is useful if you want to map Vector -> value for some reason (e.g., in ppb-oneko, mapping directional vectors to their animation).
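
A minimal sketch of what this could look like, assuming equality already compares the x and y coordinates:

    def __hash__(self) -> int:
        # Hash the coordinate pair so equal vectors hash equally.
        return hash((self.x, self.y))

Since hashable values should not mutate, this pairs naturally with keeping Vector2 immutable (for instance, as a frozen dataclass).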

Add Projection

Add a projection method.

The simplest implementation seems to be (a * b.normalize()) * b.normalize().
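
A minimal sketch of that implementation, under the same assumption that `*` is both the dot product between vectors and scalar multiplication with a float:

    def projection(self, other: 'Vector2') -> 'Vector2':
        """Project self onto other: (a · b̂) * b̂."""
        b_hat = other.normalize()
        return (self * b_hat) * b_hat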

lint.sh should check that requirements are fulfilled

I just noticed that #144 was passing lints locally but not in CI because I had forgotten to (re)install requirements-lint.txt.

Would it be possible (& easy) to check in lint.sh whether the necessary tools and plugins are installed?

Tracking test coverage and performance

I know we kinda-discussed that in #59, but I thought it would be useful to resume that discussion & track it in a separate issue. (And if you feel it's inappropriate for me to bring it up again, please let me know and close the issue :3)

I think it would be pretty nice to have coverage and performance tracking, if only because we could answer questions like “how bad is the slowdown of #89” or “is this adequately tested” without having to reinvent a new way to get that data.

I totally agree with @pathunstrom that we should minimise the amount of tooling a user has to interact with, so it should happen automatically for them. I'd like to suggest doing it during CI, and automatically posting a message to the PR (if appropriate) with:

  • a link to the full report;
  • if coverage changed significantly, say so (and by how much), and congratulate the contributor on a positive change;
  • same for performance.

I would happily do the tooling & integration work, if there's consensus on it being desirable (and how it should behave). :)

Cirrus CI: The allow_failure parameter is incorrect

Looks like $CIRRUS_TASK_NAME doesn't include the matrix build modifier, so it's just Linux or Windows. As such, setting allow_failures: $CIRRUS_TASK_NAME =~ '.*-rc-.*' doesn't work (failures are never allowed).

Unfortunately, Cirrus does not document an environment variable that does depend on the container image's name (or anything else we could use to distinguish the CPython 3.8-rc builds).

Add __neg__

Currently, we can't use -vector to negate a vector; this was discovered due to a failure in #51. This is part of the fix required for #45.
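
A minimal sketch of the method, assuming it lives on Vector2:

    def __neg__(self) -> 'Vector2':
        # Negate both components, equivalent to scaling by -1.
        return Vector2(-self.x, -self.y)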

Round all calculations like in rotate

For now, all rounding can happen to 5 digits.

Line 115 looks the way it does specifically because of rounding errors. If we clean this up, the code becomes cleaner and slightly easier to manage.
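
As a standalone illustration of the proposed behaviour (not the project's actual rotate implementation), rounding the result of a rotation to 5 digits would look like:

    import math

    def rotate_rounded(x: float, y: float, degrees: float) -> tuple:
        """Rotate (x, y) counterclockwise and round each component to 5 digits."""
        r = math.radians(degrees)
        return (round(x * math.cos(r) - y * math.sin(r), 5),
                round(x * math.sin(r) + y * math.cos(r), 5))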

Test Vector2 member access

Vector2 members can be accessed via:

  • dot access: Vector2.x, Vector2.y
  • index: Vector2[0], Vector2[1]
  • key: Vector2["x"], Vector2["y"]

Update __repr__

Use f-strings! i.e. f"MyClass({arg1}, {arg2})"

Make it subclass-friendly: use type(self) instead of hard-coding the class name in the string.
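
A minimal sketch combining both points:

    def __repr__(self) -> str:
        # type(self).__name__ keeps the repr correct for subclasses.
        return f"{type(self).__name__}({self.x}, {self.y})"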

Fix FreeBSD

There's something up with the FreeBSD CI.

Fix it.

Test dot product

Test dot product using the * operator between Vector2s and vector-like objects.
