
ideas's Issues

Make this work with pytest?

Take an example (ideas was installed from git):

$ cat conftest.py 
from ideas.examples import fractions_ast
fractions_ast.add_hook()
$ cat test_fractions.py
import fractions


def test_not_float():
    assert isinstance(1/2, fractions.Fraction)
$ pytest -q test_fractions.py
.                                                                                                     [100%]
1 passed in 0.01s

Great!

But pytest does some magic AST transformation on assert statements. Thus, for the following:

$ cat test_fractions.py 
import fractions


def test_not_float():
    assert isinstance(1/2, fractions.Fraction)


def test_arith():
    assert 1/2 + 3/5 == fractions.Fraction(11, 12)  # wrong!

the new test blows up:

$ pytest -q test_fractions.py 
.F                                                                                                                                                     [100%]
========================================================================== FAILURES ==========================================================================
_________________________________________________________________________ test_arith _________________________________________________________________________

Traceback (most recent call last):
  File "/home/sk/venv/ideas/bin/pytest", line 8, in <module>
    sys.exit(console_main())
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/config/__init__.py", line 185, in console_main
    code = main()
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/config/__init__.py", line 162, in main
    ret: Union[ExitCode, int] = config.hook.pytest_cmdline_main(
  File "/usr/lib/python3/dist-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/usr/lib/python3/dist-packages/pluggy/manager.py", line 92, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/usr/lib/python3/dist-packages/pluggy/manager.py", line 83, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 208, in _multicall
    return outcome.get_result()
  File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 80, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 187, in _multicall
    res = hook_impl.function(*args)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/main.py", line 316, in pytest_cmdline_main
    return wrap_session(config, _main)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/main.py", line 304, in wrap_session
    config.hook.pytest_sessionfinish(
  File "/usr/lib/python3/dist-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/usr/lib/python3/dist-packages/pluggy/manager.py", line 92, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/usr/lib/python3/dist-packages/pluggy/manager.py", line 83, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 203, in _multicall
    gen.send(outcome)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/terminal.py", line 813, in pytest_sessionfinish
    self.config.hook.pytest_terminal_summary(
  File "/usr/lib/python3/dist-packages/pluggy/hooks.py", line 286, in __call__
    return self._hookexec(self, self.get_hookimpls(), kwargs)
  File "/usr/lib/python3/dist-packages/pluggy/manager.py", line 92, in _hookexec
    return self._inner_hookexec(hook, methods, kwargs)
  File "/usr/lib/python3/dist-packages/pluggy/manager.py", line 83, in <lambda>
    self._inner_hookexec = lambda hook, methods, kwargs: hook.multicall(
  File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 208, in _multicall
    return outcome.get_result()
  File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 80, in get_result
    raise ex[1].with_traceback(ex[2])
  File "/usr/lib/python3/dist-packages/pluggy/callers.py", line 182, in _multicall
    next(gen)  # first yield
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/terminal.py", line 828, in pytest_terminal_summary
    self.summary_failures()
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/terminal.py", line 1011, in summary_failures
    self._outrep_summary(rep)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/terminal.py", line 1030, in _outrep_summary
    rep.toterminal(self._tw)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/reports.py", line 87, in toterminal
    longrepr_terminal.toterminal(out)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/_code/code.py", line 995, in toterminal
    element[0].toterminal(tw)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/_code/code.py", line 1025, in toterminal
    entry.toterminal(tw)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/_code/code.py", line 1119, in toterminal
    self._write_entry_lines(tw)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/_code/code.py", line 1101, in _write_entry_lines
    tw._write_source(source_lines, indents)
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/_io/terminalwriter.py", line 192, in _write_source
    new_lines = self._highlight(source).splitlines()
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/_pytest/_io/terminalwriter.py", line 201, in _highlight
    from pygments.formatters.terminal import TerminalFormatter
  File "/usr/lib/python3/dist-packages/pygments/formatters/__init__.py", line 18, in <module>
    from pygments.formatters._mapping import FORMATTERS
  File "<frozen importlib._bootstrap>", line 1007, in _find_and_load
  File "<frozen importlib._bootstrap>", line 986, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 680, in _load_unlocked
  File "/home/sk/venv/ideas/lib/python3.9/site-packages/ideas/import_hook.py", line 144, in exec_module
    with open(self.filename, mode="r+b") as f:
PermissionError: [Errno 13] Permission denied: '/usr/lib/python3/dist-packages/pygments/formatters/_mapping.py'
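
The last two frames suggest what goes wrong: exec_module opens the intercepted file with mode="r+b", and the "+" requests write access, which a regular user does not have on files under /usr/lib/python3/dist-packages. A small demonstration of the difference (opening with plain "rb" is a plausible fix, assuming the hook only reads the file):

```python
import os
import stat
import tempfile

def can_open(path, mode):
    """Return True if open(path, mode) succeeds, False on PermissionError."""
    try:
        with open(path, mode):
            return True
    except PermissionError:
        return False

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, stat.S_IREAD)  # read-only, like files under /usr/lib/...

print(can_open(path, "rb"))   # True: reading needs no write permission
print(can_open(path, "r+b"))  # False for a regular user: "+" requests write access
os.remove(path)
```

(Running as root bypasses file permissions, so the second call can succeed there.)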

Add all explanation in docstring of example modules

For the documentation of the examples, instead of having a separate .rst file with content (other than an automodule directive), include all the required text in the module docstring. This will make it possible for people perusing the code to have all the relevant information in a single place.

Deprecate (or modify) switch.py

switch.py should be deprecated, or at least modified, because of the match statement that debuted in Python 3.10.

Relevant PEPs: PEP 634 (specification), PEP 635 (motivation and rationale), PEP 636 (tutorial).

match statements look like this:

from random import randint
match randint(0, 10):
  case 0 | 1 | 2 | 3:
    print("Terrible!")
  case 4:
    print("My favorite number!")
  case 5 | 6:
    print("Average.")
  case 7 | 8:
    print("Good!")
  case 9 | 10:
    print("Terrific!")
  case _: # default
    print("Uh oh, something has happened.")

My idea is to introduce default: as an alias of case _: (which also means disallowing default from being an identifier). If reserving default is not an option, just use else.
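
A first sketch of the proposed default: alias, as a simple source transform (regex-based; a real version should work on tokens so that strings and comments containing the word default are left alone):

```python
import re

def default_to_case(source):
    """Rewrite a `default:` clause as `case _:`.

    A sketch only: this blindly matches any line consisting of
    `default:`, without checking that it sits inside a match block.
    """
    return re.sub(r"^(\s*)default\s*:", r"\1case _:", source, flags=re.MULTILINE)

print(default_to_case("  default:\n    print('Uh oh')"))
```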

Using poetry for requirements and changing supported Python versions to 3.7+

Use poetry to keep track of requirements, both for the release and the development version. Doing so means moving to supporting only the latest versions of tools, which means that Python 3.6 will no longer be supported. This does not mean that ideas will not work with Python 3.6; however, it will no longer be tested with it.

Todo

  • Include codec example
    • Contrast use with import hook
    • Add code for easy creation of custom codecs
  • Include ast example
  • Include bytecode example
  • Include summary for each example
  • Include description of all examples (built from summaries?)
  • Review constants example; see if a custom module might be possible
  • Do switch statement example
  • Add call with frames removed, and a debug option to disable it.

Add "comprehensive builder" example

Generalize experimental_syntax_encoding.py, but use an import hook, to build a list of transformers. Perhaps use something like the following:

from ideas-examples import ...  # from ideas.examples
from experimental-ideas import ...  # from anywhere on sys.path

Change the default prompt

I should probably change the prompt, either inspired by Julia's:

ideas> begin here
  ...  continue here

or using IPython's style instead of CPython's.

PEP 505 -- None-aware operators

PEP 505 proposes to use ? and ?? as new syntax elements to create compact expressions dealing with identifiers that could have a value of None.

maybe() from sorcery as well as maybe() from pymaybe appear to have implemented a similar functionality through function calls instead of the proposed and currently invalid Python syntax described in PEP 505.

It might be possible to write an import hook that would convert the proposed syntax into calls to one of the existing maybe() functions, thus allowing one to experiment with the proposed syntax.

Implicit multiplication

When writing equations on paper, one does not usually use a symbol to represent multiplication. For example, instead of writing

a = b * c + d * e

one might simply write

a = bc + de

However, without additional information, this cannot be parsed properly by a computer. One possible solution is to add spaces between quantities that are to be multiplied:

a = b c + d e

It might be interesting to see if an "implicit multiplication" mode could be written allowing something like this. Furthermore, by having a variable declaration, one could disambiguate between function calls and multiplication as in:

variables a, b, c, d

a = b(c + d)
a = f(c + d)

Refer to
https://mail.python.org/archives/list/[email protected]/message/52DLME5DKNZYFEETCTRENRNKWJ2B4DD5/
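
A sketch of such a mode using only the standard tokenize module (the variables set stands in for the proposed variables declaration; spacing in the output is whatever untokenize produces):

```python
import io
import tokenize

def implicit_multiplication(source, variables):
    """Insert an explicit `*` between adjacent declared variables,
    so that `b c + d e` becomes `b * c + d * e`."""
    new_tokens = []
    prev_was_variable = False
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        is_variable = tok.type == tokenize.NAME and tok.string in variables
        if is_variable and prev_was_variable:
            new_tokens.append((tokenize.OP, "*"))
        new_tokens.append((tok.type, tok.string))
        prev_was_variable = is_variable
    return tokenize.untokenize(new_tokens)

transformed = implicit_multiplication("a = b c + d e", {"a", "b", "c", "d", "e"})
print(transformed)
```

Because names not in the declared set never trigger an insertion, f(c + d) would remain a function call under this scheme.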

python as decimal calculator?

Python supports floating point arithmetic of fixed precision (usually something like 53 bits in the mantissa). So, "big enough" floating point literals are silently truncated:

>>> import ast
>>> s = 'x = 3.1111111111111112'
>>> t = ast.parse(s)
>>> v = t.body[0].value
>>> ast.dump(v)
'Constant(value=3.111111111111111)'

But we could recover all digits!

>>> s[v.col_offset:v.end_col_offset]
'3.1111111111111112'

Given this, we could transform arbitrary floating point literals to Decimal instances and then do arithmetic. A similar AST transformation could be useful for arbitrary-precision floating point libraries in Python (e.g. mpmath) or a CAS (e.g. diofant).

Edit: see diofant/diofant@27ecabc as an implementation example.
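
A sketch of that transformation with ast.NodeTransformer (single-line literals only; Decimal is injected into the namespace by hand here rather than via an injected import):

```python
import ast
from decimal import Decimal

class FloatToDecimal(ast.NodeTransformer):
    """Replace every float literal by Decimal("<original text>"),
    recovering all typed digits via col_offset/end_col_offset."""
    def __init__(self, source):
        self.lines = source.splitlines()

    def visit_Constant(self, node):
        if isinstance(node.value, float):
            # Works for literals that do not span multiple lines.
            text = self.lines[node.lineno - 1][node.col_offset:node.end_col_offset]
            call = ast.Call(
                func=ast.Name(id="Decimal", ctx=ast.Load()),
                args=[ast.Constant(value=text)],
                keywords=[],
            )
            return ast.copy_location(call, node)
        return node

source = "x = 3.1111111111111112 + 1.1"
tree = ast.fix_missing_locations(FloatToDecimal(source).visit(ast.parse(source)))
ns = {"Decimal": Decimal}
exec(compile(tree, "<decimal-calculator>", "exec"), ns)
print(ns["x"])  # all typed digits preserved: 4.2111111111111112
```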

Improve error handling

There are a few places in the code base with unhandled exceptions that could make it harder for users to understand errors.
Examples:

  • experimental_syntax_encoding.py, line 52 -> transform_source could be None. We could either raise a better error here or replace None with lambda s: s.
  • Same file, line 15 -> search_function can return None instead of an encoding. Even though it's incredibly unlikely, mypy/pylance won't like it, and in the future we might encounter cases where utf-8 is not available. In that case, adding a single check at the beginning can help users in that rare case. Additionally, less red text is good :)

I am not touching on examples because they obviously need a bit less error handling.

Operate on encapsulated phrases

It would be useful to operate on encapsulated phrases. For example:

a = b + operators.mul(3, 7 + 2)

As far as I can tell, we can parse the tokens in order; but there are specific whole contexts. operators.mul() is a single phrase (imagine trying to skip over a function call, which may have parentheses, which may in turn appear inside quotes). Inside it are 3 and 7 + 2. You might say the whole b + operators.mul(3, 7 + 2) is one phrase: a single, whole valid syntax that can contain nested syntax.

Work on a newer version

Goals for new version:

  • Support for IPython/Jupyter
  • Remove argument passing to transformers for showing original or transformed code, and rely on configuration settings.
  • Revise the documentation to show the command line option as the default, instead of importing, adding hook, etc.
  • Add an example showing how to load a custom transformer from the command line.
  • Add info about adding **kwargs for future-proofing transformers.
  • Separate out encodings example, reducing the emphasis.
  • See if it is possible to adapt/update the failing examples for Python 3.8+
  • See if it is possible to show the transformed code for AST-based transformations on Python 3.9+, as ast includes unparse by default.

Import code from database

Create an example where we import code from a database (perhaps using sqlite3) as an example of importing code from a non-file object.
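
A minimal version of that idea with the standard library only (the table layout and the module name are made up for the demonstration):

```python
import importlib.abc
import importlib.util
import sqlite3
import sys

class DbLoader(importlib.abc.SourceLoader):
    """Load module source from a sqlite table instead of a file."""
    def __init__(self, conn):
        self.conn = conn

    def get_filename(self, fullname):
        return f"<db:{fullname}>"

    def get_data(self, path):
        # path looks like "<db:modname>"; strip the wrapper.
        row = self.conn.execute(
            "SELECT source FROM modules WHERE name = ?", (path[4:-1],)
        ).fetchone()
        if row is None:
            raise OSError(path)
        return row[0].encode("utf-8")

class DbFinder(importlib.abc.MetaPathFinder):
    def __init__(self, conn):
        self.conn = conn

    def find_spec(self, fullname, path=None, target=None):
        found = self.conn.execute(
            "SELECT 1 FROM modules WHERE name = ?", (fullname,)
        ).fetchone()
        if found is None:
            return None
        return importlib.util.spec_from_loader(fullname, DbLoader(self.conn))

# Store a module in the database, then import it like any other module.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE modules (name TEXT PRIMARY KEY, source TEXT)")
conn.execute("INSERT INTO modules VALUES ('db_module', 'ANSWER = 42')")
sys.meta_path.insert(0, DbFinder(conn))

import db_module
print(db_module.ANSWER)  # 42
```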

Using parsers for transforming

It would be very useful to implement the search and processing of fragments of transformable code using parsers.

For example, using lark-parser, we could make a transformation like this:

# specifying a fragment grammar with lark syntax (EBNF-like)
# such a grammar could be intended for a fragment like:
# 5 $ 10
grammar = """
    new_operation: NUMBER "$" NUMBER
    %import common.SIGNED_NUMBER -> NUMBER
"""

@detect(grammar)
def transform(parsed_tree, detected_source, full_source, **kwargs):
    # parsed_tree will be like:
    #
    # new_operation:
    # .... 5
    # .... 10

    return do_smth_with_tree(parsed_tree)
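
A pure-Python sketch of the proposed API, with the grammar reduced to a regular expression (lark itself is not used here, and the detect decorator and new_op helper are hypothetical names):

```python
import re

def detect(pattern):
    """Turn a fragment-rewriting function into a source transformer
    that rewrites every occurrence of `pattern` in the source."""
    def decorator(func):
        def transform_source(source, **kwargs):
            return re.sub(
                pattern, lambda match: func(match, source, **kwargs), source
            )
        return transform_source
    return decorator

@detect(r"(\d+)\s*\$\s*(\d+)")
def new_operation(match, full_source, **kwargs):
    # Rewrite `5 $ 10` into a call to a (hypothetical) runtime helper.
    return f"new_op({match.group(1)}, {match.group(2)})"

print(new_operation("x = 5 $ 10"))  # x = new_op(5, 10)
```

A lark-based version would replace the regular expression with a real parse tree, as the grammar above suggests.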

Consider using poetry for packaging

With poetry, it's slightly easier to start contributing to ideas. It also simplifies publishing, is a bit more readable, and is really popular in the community right now.

Add a contributor's guide or automate it somehow

It wasn't too bad to contribute, but I faced a lot of unknown issues and guessed my way through (for example, with pre-commit + pytest + black, and with a black-click incompatibility). It's a good thing I know which tools are necessary in general and how to use them (or solve issues with them). But I am afraid that a less experienced programmer would not be able to figure these things out without giving up.

Add entry point

Instead of writing

python -m ideas ...

make it possible to write

ideas ...

Note that instead of python, on some systems the user might have to write python3 or py; this requires additional information in the documentation, which might be confusing. By having an entry point, we do not have to deal with this issue.

To support the equivalent of

python -im ideas ...

a new -i option will have to be added.
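
For reference, a console-script entry point is declared roughly like this (the ideas.__main__:main target below is an assumption about the actual module layout, not verified against the code base):

```python
# Fragment to pass as entry_points=... to setuptools.setup()
# (or the equivalent [project.scripts] table in pyproject.toml).
entry_points = {
    "console_scripts": [
        "ideas = ideas.__main__:main",  # hypothetical module:function
    ],
}
```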

Addition to "else" clause in "for" and "while" loops

https://mail.python.org/archives/list/[email protected]/thread/WNKNETPRGQ3MPQOVG4KG2QV6L7KAPNWM/#N65ZZJPLN6LBPQOURDKJNXWGT64T3ZZK is a discussion on various additions to the else clause in for and while loops.

One of the problems with the suggested syntax is that the use of an if clause can, at first glance, create an ambiguity: is it a stand-alone if clause or part of a loop? Something that could help reduce the ambiguity would be using elif.

Thus, using only existing keywords, it might be interesting to implement something like:

for item in iterable:
    # code block 1
elif pass:
    # code block 2
elif not break:
    # code block 3

which would be translated, using the current syntax, as

looped = False
for item in iterable:
    looped = True
    # code block 1
else:
    if not looped:
        # code block 2
    else:
        # code block 3

The simpler case, corresponding to the existing else clause would be written as

for item in iterable:
    # code block 1
elif not break:
    # code block 2
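
The translation shown above can be exercised directly; here it is wrapped in a function, with an arbitrary condition standing in for break:

```python
def classify(iterable):
    """The proposed for/elif construct, written with today's syntax."""
    outcome = "broke out"
    looped = False
    for item in iterable:
        looped = True
        if item < 0:  # arbitrary condition standing in for `break`
            break
    else:  # runs only when no break occurred
        if not looped:
            outcome = "never looped"    # the proposed `elif pass:` block
        else:
            outcome = "ran to the end"  # the proposed `elif not break:` block
    return outcome

print(classify([]))       # never looped
print(classify([1, 2]))   # ran to the end
print(classify([1, -1]))  # broke out
```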

Extract token_utils as separate package

Currently, token_utils exists as a separate project. However, the development version is what's included in ideas, and the separate project is updated by copying the version from ideas and uploading it to PyPI when needed.

token_utils should be removed from ideas and added as a dependency, so that issues related only to token_utils can be more easily tracked, and so that token_utils can be better documented, since some of its features might not be needed by ideas.

New example: support a wider range of unicode identifiers

Python does NFKC normalization while parsing identifiers. That disallows some fancy unicode identifiers like ℕ (it will be N for Python), see e.g. this. Other languages that support unicode identifiers usually lack this "feature" and/or use a different normalization, like Julia. E.g. Scheme:

$ scheme
MIT/GNU Scheme running under GNU/Linux
...
1 ]=> (define ℕ 1)

;Value: ℕ

1 ]=> (define N 2)

;Value: n

1 ]=> ℕ

;Value: 1

1 ]=> N

;Value: 2

It's possible to "patch" this unfortunate feature with a transform_source-based transformation: parse the source to an ast tree, then "fix" the normalized identifiers (using lineno/col_offset/etc.) into something like N_1 instead of N in the original source. This might look tricky, but I think it will fit nicely into your collection of examples: it combines ast parsing and some parsing of the original source string (i.e. with tokenize) to get the disallowed symbols back.
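
The normalization is easy to observe, and tokenize indeed still sees the original characters; a sketch of the first step (the N_1 renaming scheme is simply the one suggested above):

```python
import ast
import io
import tokenize
import unicodedata

# CPython NFKC-normalizes identifiers at parse time, so ℕ becomes N:
assert unicodedata.normalize("NFKC", "ℕ") == "N"
assert ast.parse("ℕ = 1").body[0].targets[0].id == "N"

def find_normalized_names(source):
    """Map each identifier whose NFKC form differs from its typed form
    to a fresh name such as N_1, for use by a later renaming pass."""
    renames = {}
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.NAME:
            normalized = unicodedata.normalize("NFKC", tok.string)
            if normalized != tok.string and tok.string not in renames:
                renames[tok.string] = f"{normalized}_{len(renames) + 1}"
    return renames

print(find_normalized_names("ℕ = 1\nN = 2\nℕ + N"))  # {'ℕ': 'N_1'}
```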

Cannot combine function_keyword and repeat

This might be a more general bug. After doing a quick test while working on issue #17, I found that I could not combine function_keyword and repeat; only one was allowed at a time, at least in the console.

More readable transformed source code display

Currently, if the option to show the transformed source is active, the code is shown with something like this:

========Transformed=========
new code here
----------------------------

This is ok when the source spans multiple lines, but not so much for a single line.

In this case, I think that the following would be preferable

## single new line of new code here

Furthermore, the option to show the original code should only be active when not using the interpreter.

Implement literals for units

See if https://quantiphy.readthedocs.io/en/stable/ could be implemented as literals.

As seen in this discussion https://mail.python.org/archives/list/[email protected]/thread/JDYXM3AH3ESOL7L6ALPRZOOURL3ZLRHP/#DXCF42FJNXEBDMPJPGZCYPIYOH6UZBY3

See, in particular, this post https://mail.python.org/archives/list/[email protected]/message/4FP4RRDPODVRMALKPRZKGVEVM7YOP4GP/ and have a look at https://pypi.org/project/units/

Idea: perhaps use the [unit] notation and convert it to that of units, or use the "unit enhanced" syntax of project units.

Actually, a better choice might be https://docs.astropy.org/en/stable/units/index.html
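
A toy sketch of what the [unit] notation could expand to (Quantity is a minimal stand-in defined here; a real implementation would target units or astropy.units instead):

```python
import re

class Quantity:
    """Tiny stand-in for a real units library, just enough to show
    what a `3[m]` literal could expand to."""
    def __init__(self, value, unit):
        self.value, self.unit = value, unit

    def __add__(self, other):
        if self.unit != other.unit:
            raise ValueError("incompatible units")
        return Quantity(self.value + other.value, self.unit)

    def __repr__(self):
        return f"{self.value} {self.unit}"

def units_transform(source):
    # Rewrite `3[m]` into `Quantity(3, "m")` -- a crude regex sketch
    # that only handles integer and simple decimal literals.
    return re.sub(r"(\d+(?:\.\d+)?)\[(\w+)\]", r'Quantity(\1, "\2")', source)

ns = {"Quantity": Quantity}
exec(units_transform("total = 3[m] + 4[m]"), ns)
print(ns["total"])  # 7 m
```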
