Comments (3)
I do not understand what the problem is.
ideas provides an easy framework for creating your own import hooks, pre-processing files as you wish, either as text files and/or after an abstract syntax tree has been created, compiling them (into bytecode) and possibly transforming the result. All of the text/AST/bytecode parsing and transformations are left to the user; implementing them is outside the scope of this project.
For example, the simplest example https://github.com/aroberge/ideas/blob/master/ideas/examples/function_simplest.py shows how a function named transform_source receives the entire source read from a file. You are free to process the source as you wish. I gave some examples doing the parsing phase using Python's standard tokenization, only because I am familiar with this approach; these are just examples.
Unless I am very much mistaken, what you are asking for is implementing a different kind of parsing, which you can certainly do.
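In rough outline, that example boils down to something like the sketch below (simplified; see the linked file for the actual code, and note that the exact keyword arguments accepted by import_hook.create_hook may vary between versions of ideas):

from ideas import import_hook


def transform_source(source, **_kwargs):
    # Receives the entire source of the module being imported, as text.
    # Modify and return it however you like; here it is returned unchanged.
    print("transform_source received:")
    print(source)
    return source


def add_hook(**_kwargs):
    # Register the transformation so that subsequent imports go through it.
    return import_hook.create_hook(transform_source=transform_source)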
from ideas.
Fair enough. I'm trying to do just that, from the implicit multiplier example. It's proving troublesome.
import tokenize as py_tokenize  # standard library tokenize, for the token type constants
import token_utils  # the separate token-utils package used by ideas


def add_reverse_polish(source):
    tokens = token_utils.tokenize(source)
    if not tokens:
        return tokens
    prev_token = tokens[0]
    new_tokens = [prev_token]
    sub_phrase = False
    sub_paren = 0
    token_stack = []
    for token in tokens[1:]:
        # The code has been written in a way to demonstrate that this type of
        # transformation could be done as the source is tokenized by Python.
        if (
            (
                # doesn't have is_op
                token.type == py_tokenize.OP
                and token == "("
            )
            or sub_paren > 0
        ):
            # Skip initial open parenthesis
            if token != "(" and token != ")":
                token_stack.append(token)
            if token == "(":
                token_stack.append(token)
                print("U 0x %s" % token_utils.untokenize([token]))
                print("U 0 %s" % token_utils.untokenize([token_stack[0]]))
            # track enveloping parentheses
            if token.type == py_tokenize.OP:
                if token == "(":
                    if sub_paren > 0:
                        token_stack.append(token)
                    sub_paren += 1
                elif token == ")":
                    sub_paren -= 1
                    if sub_paren > 0:
                        token_stack.append(token)
            # At this point, if sub_paren is zero, we're done.
            # The ( and ) are added here
            if sub_paren == 0:
                s = add_reverse_polish(token_utils.untokenize(token_stack) + "\n")
                s = "(" + s + ")"
                token_stack.clear()
                s_tokens = token_utils.tokenize(s)
                new_tokens.extend(s_tokens)
        else:
            new_tokens.append(token)
        prev_token = token
    return token_utils.untokenize(new_tokens)
The problem I'm having right now is that when I feed it print(a+(2-1)), it doesn't register print as a separate token, and I get oddness:
~>> print(a+(2-1))
U 0x print(
U 0 print(
...
Somehow token == "(" and yet token_utils.untokenize([token]) == "print(".
Aside from that, the above code almost works: it grabs what's inside ( ), then grabs what's inside the ( ) nested within that, and so on. I haven't yet figured out how I'm going to handle the , token, and I'm not handling telling 1 + 1 apart from a = 1 + 1. It appears to me that if you have foo(), then the tokens are ["foo", "("] but the untokenized tokens are ["foo", "foo("].
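To reproduce this, here is roughly what I'm checking (a quick sketch; I'm assuming token_utils tokens expose their own text through .string, which I believe they do):

import token_utils

for tok in token_utils.tokenize("foo()\n"):
    # Compare the token's own string with what untokenize produces
    # when given just that one token.
    print(repr(tok.string), "->", repr(token_utils.untokenize([tok])))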
Maybe that's a bug I should put up on token_utils.
from ideas.
As I mentioned on the token_utils issue, this might be more easily done using Python's own tokenize module (https://docs.python.org/3/library/tokenize.html), as it is meant to work with individual tokens. token_utils is intended to preserve the context in which tokens are found, so that the original source can be recreated exactly with all the original spacing, which Python's tokenize module does not do.
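For instance, something along these lines (a quick sketch, not taken from the example files) shows that Python's own tokenize module reports print and ( as separate tokens for the expression discussed above:

import io
import tokenize

source = "print(a+(2-1))\n"
for tok in tokenize.generate_tokens(io.StringIO(source).readline):
    # Each token carries only its own text in tok.string,
    # so 'print' and '(' show up as separate NAME and OP tokens.
    print(tokenize.tok_name[tok.type], repr(tok.string))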
from ideas.