
micropython's Introduction


The MicroPython project


This is the MicroPython project, which aims to put an implementation of Python 3.x on microcontrollers and small embedded systems. You can find the official website at micropython.org.

WARNING: this project is in beta stage and is subject to changes of the code-base, including project-wide name changes and API changes.

MicroPython implements the entire Python 3.4 syntax (including exceptions, with, yield from, etc., plus the async/await keywords from Python 3.5 and some select features from later versions). The following core datatypes are provided: str (including basic Unicode support), bytes, bytearray, tuple, list, dict, set, frozenset, array.array, collections.namedtuple, classes and instances. Builtin modules include os, sys, time, re, and struct. Select ports have support for the _thread module (multithreading), socket and ssl for networking, and asyncio. Note that only a subset of Python 3 functionality is implemented for the data types and modules.
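As a quick illustration, the datatypes listed above behave as in CPython for the implemented subset, so a snippet like the following runs unchanged under both:

```python
# Core datatypes provided by MicroPython (also valid CPython 3).
from collections import namedtuple
import array

s = "héllo"                      # str with basic Unicode support
b = bytes([1, 2, 3])             # bytes
ba = bytearray(b"abc")           # mutable byte buffer
t = (1, 2)                       # tuple
d = {"k": [1, 2]}                # dict holding a list
fs = frozenset({1, 2, 3})        # frozenset
arr = array.array("i", [10, 20]) # typed numeric array
Point = namedtuple("Point", ("x", "y"))
p = Point(3, 4)                  # namedtuple instance

print(p.x + p.y, arr[1], len(s))  # → 7 20 5
```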

MicroPython can execute scripts in textual source form (.py files) or from precompiled bytecode (.mpy files), in both cases either from an on-device filesystem or "frozen" into the MicroPython executable.

MicroPython also provides a set of MicroPython-specific modules to access hardware-specific functionality and peripherals such as GPIO, Timers, ADC, DAC, PWM, SPI, I2C, CAN, Bluetooth, and USB.

Getting started

See the online documentation for API references and information about using MicroPython and how it is implemented.

We use GitHub Discussions as our forum, and Discord for chat. These are great places to ask questions, get advice from the community, or discuss your MicroPython-based projects.

For bugs and feature requests, please raise an issue and follow the templates there.

For information about the MicroPython pyboard, the officially supported board from the original Kickstarter campaign, see the schematics and pinouts and documentation.

Contributing

MicroPython is an open-source project and welcomes contributions. To be productive, please be sure to follow the Contributors' Guidelines and the Code Conventions. Note that MicroPython is licensed under the MIT license, and all contributions should follow this license.

About this repository

This repository contains the following components:

  • py/ -- the core Python implementation, including compiler, runtime, and core library.
  • mpy-cross/ -- the MicroPython cross-compiler which is used to turn scripts into precompiled bytecode.
  • ports/ -- platform-specific code for the various ports and architectures that MicroPython runs on.
  • lib/ -- submodules for external dependencies.
  • tests/ -- test framework and test scripts.
  • docs/ -- user documentation in Sphinx reStructuredText format. This is used to generate the online documentation.
  • extmod/ -- additional (non-core) modules implemented in C.
  • tools/ -- various tools, including the pyboard.py module.
  • examples/ -- a few example Python scripts.

"make" is used to build the components, or "gmake" on BSD-based systems. You will also need bash, gcc, and Python 3.3+ available as the command python3 (if your system only has Python 2.7 then invoke make with the additional option PYTHON=python2). Some ports (rp2 and esp32) additionally use CMake.

Supported platforms & architectures

MicroPython runs on a wide range of microcontrollers, as well as on Unix-like (including Linux, BSD, macOS, WSL) and Windows systems.

Microcontroller targets can be as small as 256kiB flash + 16kiB RAM, although devices with at least 512kiB flash + 128kiB RAM allow a much more full-featured experience.

The Unix and Windows ports allow both development and testing of MicroPython itself, as well as providing a lightweight alternative to CPython on these platforms (in particular on embedded Linux systems).

The "minimal" port provides an example of a very basic MicroPython port and can be compiled as both a standalone Linux binary and for ARM Cortex-M4. Start with this if you want to port MicroPython to another microcontroller. Additionally, the "bare-arm" port is an example of the absolute minimum configuration, and is used to keep track of the code size of the core runtime and VM.

In addition, the following ports are provided in this repository:

  • cc3200 -- Texas Instruments CC3200 (including PyCom WiPy).
  • esp32 -- Espressif ESP32 SoC (including ESP32S2, ESP32S3, ESP32C3).
  • esp8266 -- Espressif ESP8266 SoC.
  • mimxrt -- NXP i.MX RT (including Teensy 4.x).
  • nrf -- Nordic Semiconductor nRF51 and nRF52.
  • pic16bit -- Microchip PIC 16-bit.
  • powerpc -- IBM PowerPC (including Microwatt).
  • qemu-arm -- QEMU-based Arm emulated target (for testing).
  • qemu-riscv -- QEMU-based RISC-V emulated target (for testing).
  • renesas-ra -- Renesas RA family.
  • rp2 -- Raspberry Pi RP2040 (including Pico and Pico W).
  • samd -- Microchip (formerly Atmel) SAMD21 and SAMD51.
  • stm32 -- STMicroelectronics STM32 family (including F0, F4, F7, G0, G4, H7, L0, L4, WB).
  • webassembly -- Emscripten port targeting browsers and NodeJS.
  • zephyr -- Zephyr RTOS.

The MicroPython cross-compiler, mpy-cross

Most ports require the MicroPython cross-compiler to be built first. This program, called mpy-cross, is used to pre-compile Python scripts to .mpy files which can then be included (frozen) into the firmware/executable for a port. To build mpy-cross use:

$ cd mpy-cross
$ make

External dependencies

The core MicroPython VM and runtime has no external dependencies, but a given port might depend on third-party drivers or vendor HALs. This repository includes several submodules linking to these external dependencies. Before compiling a given port, use

$ cd ports/name
$ make submodules

to ensure that all required submodules are initialised.


micropython's Issues

mbed HAL

Some of the work from the mbed HAL (Hardware Abstraction Layer) could be leveraged by the Micro Python project.

https://github.com/mbedmicro/mbed/tree/master/libraries/mbed/targets/hal

https://github.com/mbedmicro/mbed/tree/master/libraries/mbed/hal

The mbed C API supports basic functionality for GPIO, UART, SPI, I2C, ADC, DAC, RTC, PWM, Ethernet and other I/O peripherals. Drivers have been implemented for ARM Cortex-M MCUs from NXP, Freescale and ST Micro. All code is freely available and open source (using an Apache 2.0 license). ARM Ltd has significantly increased the resources available to mbed as part of its Internet of Things strategy. Several external developers are also involved with new mbed ports and extensions. Using the mbed C APIs would leverage these efforts and simplify porting Micro Python to different ARM Cortex-M MCUs.

Would there be interest in using the mbed C API as an alternative I/O layer? Micromint worked on the mbed port to the NXP LPC43xx MCUs. I could start work on a preliminary implementation when I'm back from the holidays (Jan 2). Some mbed developers may also get involved. At ARM TechCon they expressed interest in promoting the use of the mbed framework as a low-level I/O foundation for Python, Basic, and other interpreters.

STM Makefile python2: Command not found

The stm/Makefile calls python2 after building, to run the dfu script (I assume the script uses Python 2 syntax?). Anyway, there's no python2 link by default, at least not on my machine, so a link to python2.6 or python2.7 has to be created first. A workaround might be to call a wrapper script that looks for a python2.x and then uses it to run dfu.py.

python2 ../tools/dfu.py -b 0x08000000:build/flash0.bin -b 0x08020000:build/flash1.bin build/flash.dfu
make: python2: Command not found
make: *** [build/flash.dfu] Error 127

List addition and multiplication not working

The standard Python 3 operations of list addition and multiplication are not functioning in the current version of micropython. In CPython we have the following:

>>> a = [0, 1, 2]
>>> a + a
[0, 1, 2, 0, 1, 2]
>>> a * 3
[0, 1, 2, 0, 1, 2, 0, 1, 2]

In the current version of micropython we get:

>>> a = [0, 1, 2]
>>> a + a
TypeError: unsupported operand type for binary operator: 'list'
>>> a * 3
TypeError: unsupported operand type for binary operator: 'list'

The use of + and * for concatenation and replication of lists is a very common idiom and this really needs to be implemented for existing code to stand a good chance of running on micropython.

unix build fails on MacOSX - error: unknown directive

sekrier-mac:unix sekrier$ make
mkdir -p build
gcc -I. -I../py -Wall -Werror -ansi -std=gnu99 -Os -DUSE_READLINE -c -o build/main.o main.c
gcc -I. -I../py -Wall -Werror -ansi -std=gnu99 -Os -DUSE_READLINE -c -o build/nlrx86.o ../py/nlrx86.S
gcc -I. -I../py -Wall -Werror -ansi -std=gnu99 -Os -DUSE_READLINE -c -o build/nlrx64.o ../py/nlrx64.S
../py/nlrx64.S:9:5: error: unknown directive
.type nlr_push, @function
^
../py/nlrx64.S:25:5: error: unknown directive
.size nlr_push, .-nlr_push
^
../py/nlrx64.S:29:5: error: unknown directive
.type nlr_pop, @function
^
../py/nlrx64.S:35:5: error: unknown directive
.size nlr_pop, .-nlr_pop
^
../py/nlrx64.S:39:5: error: unknown directive
.type nlr_jump, @function
^
../py/nlrx64.S:58:5: error: unknown directive
.size nlr_jump, .-nlr_jump
^
../py/nlrx64.S:60:5: error: unknown directive
.local nlr_top
^
make: *** [build/nlrx64.o] Error 1

sekrier-mac:build sekrier$ gcc -v
Configured with: --prefix=/Applications/Xcode.app/Contents/Developer/usr --with-gxx-include-dir=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.9.sdk/usr/include/c++/4.2.1
Apple LLVM version 5.0 (clang-500.2.79) (based on LLVM 3.3svn)
Target: x86_64-apple-darwin13.0.0
Thread model: posix

PA13 used as USR_SW conflicts with SW debug

When the software initializes PA13 as an input for USR_SW, its use as SWDIO ends. The chip must be erased before SW debug can be used again. This makes debugging with JTAG or SWD tools like the ST-LINK/V2 difficult. Can the PCB be changed? One alternative to changing the PCB would be to initialize the pin only when needed, by calling some py function.

The unix version crashes

The following code results in segmentation fault:

gen = (i for i in range(10))
for i in gen: print(i)

Compiling with debugging symbols (-g) and running though gdb gives this:

Current directory is /home/alex/micropython/unix/
GNU gdb (GDB) 7.5.91.20130417-cvs-ubuntu
Copyright (C) 2013 Free Software Foundation, Inc.
License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law. Type "show copying"
and "show warranty" for details.
This GDB was configured as "x86_64-linux-gnu".
For bug reporting instructions, please see:
http://www.gnu.org/software/gdb/bugs/...
Reading symbols from /home/alex/micropython/unix/py...done.
(gdb) run
Starting program: /home/alex/micropython/unix/py

gen = (i for i in range(10))
gen.next()

Program received signal SIGSEGV, Segmentation fault.
0x000000000041acbf in mp_obj_print_helper (print=0x41abcd <printf_wrapper>, env=0x0, o_in=0x64fe30) at ../py/obj.c:40
(gdb) where
#0  0x000000000041acbf in mp_obj_print_helper (print=0x41abcd <printf_wrapper>, env=0x0, o_in=0x64fe30) at ../py/obj.c:40
#1  0x000000000041ad2c in mp_obj_print (o_in=0x64fe30) at ../py/obj.c:49
#2  0x000000000041e82b in mp_builtin___repl_print__ (o=0x64fe30) at ../py/builtin.c:45
#3  0x000000000041c534 in fun_native_call_n (self_in=0x635030, n_args=1, args=0x7fffffffe350) at ../py/objfun.c:35
#4  0x0000000000419cea in rt_call_function_n (fun_in=0x635030, n_args=1, args=0x7fffffffe350) at ../py/runtime.c:698
#5  0x0000000000421148 in mp_execute_byte_code_2 (ip_in_out=0x7fffffffe300, fastn=0x7fffffffe310, sp_in_out=0x7fffffffe2f8) at ../py/vm.c:435
#6  0x000000000041f9a5 in mp_execute_byte_code (code=0x6502c0 "", args=0x0, n_args=0, n_state=2) at ../py/vm.c:56
#7  0x000000000041c8a8 in fun_bc_call_n (self_in=0x652580, n_args=0, args=0x0) at ../py/objfun.c:145
#8  0x0000000000419cea in rt_call_function_n (fun_in=0x652580, n_args=0, args=0x0) at ../py/runtime.c:698
#9  0x0000000000419c1a in rt_call_function_0 (fun=0x652580) at ../py/runtime.c:672
#10 0x0000000000400fef in do_repl () at main.c:94
#11 0x00000000004011ec in main (argc=1, argv=0x7fffffffe5c8) at main.c:215
(gdb) 

Improve memory stats

With the fix to #2, it's now possible to fix alloc stats collection, but instead of throwing a random patch out there, I'd like to be sure there's a good understanding of what we can/want to measure.

  1. We can measure total allocation size. Why? It's useful to know the memory throughput of an app. This is what uPy appears to measure currently with total_bytes_allocated, except that the realloc case is not correct: memory usage grows not by new_num_bytes, but by the difference between the new and old size. And since by its logic this metric is monotonically growing, the case (new_num_bytes - old_num_bytes) < 0 should be handled separately (specifically, ignored).
  2. Besides total memory allocation, it makes sense to know current memory allocation (which in particular tells how much is free). That takes another var, updated also on m_free (and on realloc, without any conditions). So, this metric is expected to be 0 at app end after a proper interpreter shutdown.
  3. Finally, it's useful to know the peak memory usage achieved. This answers the question whether a particular app can run on a particular heap (== on particular hardware). This would be just max(current usage).

Hopefully, all these metrics are useful (well, I'd say that 2&3 are more useful than 1). If you agree, I can implement them.
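The three metrics in the list above can be sketched as counters around the allocator. This is a pure-Python model of the proposed C-level bookkeeping; total_bytes_allocated comes from the issue, while the class and method names here are hypothetical:

```python
class AllocStats:
    """Sketch of metrics 1-3: total, current, and peak allocation."""
    def __init__(self):
        self.total = 0    # metric 1: monotonically growing throughput
        self.current = 0  # metric 2: bytes currently allocated
        self.peak = 0     # metric 3: max(current) over the run

    def on_malloc(self, size):
        self.total += size
        self.current += size
        self.peak = max(self.peak, self.current)

    def on_free(self, size):
        self.current -= size

    def on_realloc(self, old_size, new_size):
        delta = new_size - old_size
        if delta > 0:          # only growth counts toward total (metric 1)
            self.total += delta
        self.current += delta  # current tracks realloc unconditionally
        self.peak = max(self.peak, self.current)

stats = AllocStats()
stats.on_malloc(100)
stats.on_realloc(100, 60)   # shrink: total unchanged, current drops
stats.on_malloc(50)
stats.on_free(60)
print(stats.total, stats.current, stats.peak)   # → 150 50 110
```

Note how the shrinking realloc is ignored for the total (per point 1) but still adjusts the current count unconditionally (per point 2).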

Cache in stm/storage.c loses data

The cache in storage.c is used only when writing. Thus when data is read from the last page that was written to, before that page is saved to flash, the data returned is wrong. Test by using the following MAIN.C:

f1 = open("TEST1", "w")
f2 = open("TEST2", "w")
f1.write("12345")
f2.write("67890")
f1.write("abcde")
f2.write("fghij")
f1.close()
f2.close()

Store the file, reset. After reset, the content of the second file is wrong.

Micropython uses stpcpy(3) and strcat(3)

stpcpy(3) and strcat(3) are considered unsafe.

gcc -o py build/main.o build/nlrx86.o build/nlrx64.o build/nlrthumb.o build/malloc.o build/qstr.o build/vstr.o build/unicode.o build/lexer.o build/lexerunix.o build/parse.o build/scope.o build/compile.o build/emitcommon.o build/emitpass1.o build/emitcpy.o build/emitbc.o build/asmx64.o build/emitnx64.o build/asmthumb.o build/emitnthumb.o build/emitinlinethumb.o build/runtime.o build/map.o build/obj.o build/objbool.o build/objboundmeth.o build/objcell.o build/objclass.o build/objclosure.o build/objcomplex.o build/objdict.o build/objexcept.o build/objfloat.o build/objfun.o build/objgenerator.o build/objinstance.o build/objlist.o build/objnone.o build/objrange.o build/objset.o build/objstr.o build/objtuple.o build/objtype.o build/builtin.o build/vm.o build/showbc.o build/repl.o -lreadline -lm -ltermcap
build/objstr.o(.text+0x422): In function `str_binary_op':
: warning: stpcpy() is dangerous GNU crap; don't use it
build/compile.o(.text+0x3da0): In function `compile_atom_string':
: warning: strcat() is almost always misused, please use strlcat()

regex library to implement "re" module

CPython uses (possibly patched) PCRE library http://www.pcre.org/ (BSD license). Nothing would preclude its usage in MicroPython, but:

  1. Worst-case performance is awful.
  2. Nobody would ever say PCRE is optimized for code size/memory usage.

So, PCRE would definitely fit the "unix" port, to be completely compatible with the CPython "re" module. But I'm not sure how usable it would be for MCU ports.

So, what I imagine is being able to support 2 (or more?) regex libraries - PCRE for complete compatibility, and another library (or libraries) for unbloatness, at the expense of a reduced feature set. Note that the "reduced" subset can live in a separate module (say, "remini").

This ticket would be a good place to discuss that and collect candidates for alternative re lib.

qstr uniqueness handling is overkill!

Well, checking each string using strcmp() against the intern pool obviously has quadratic complexity - on creation of each string, which is frequent in a dynamic language. That's not a good design choice, not even for small systems, not even as an initial implementation to be optimized later. Please don't follow the Espruino way, where dead-simple algorithms make program speed depend on the number of spaces in the source.

So, there definitely should be hashing. Here's my suggestion:

  1. Wasting much memory on the hash is not good either. A single byte is enough to make a great difference. Yes, that will limit the efficiency of very large hash tables, but well, the talk is about handling the real-world case and scaling down; for the real world and scaling up there's CPython after all. But see below anyway.
  2. I'd suggest taking the chance to store the length with the string. That's obviously useful for len(), for comparison, and for hash tables too, effectively adding more bits (maybe not as uniformly spread as the hash). It's harder to figure out how to store it. It's definitely not good to store 4 bytes of it (using 32-bit speak), and arbitrary limits on string length are not good either. Variable-length encoding is then the only good choice - and it will still be much faster than strlen().

Now, I can imagine one of the reasons hashing, etc. wasn't added right away. It's definitely nice to be able to do qstr_from_str_static("foo") and know this doesn't waste a single byte more than absolutely needed. But now hash and length would need to be part of the string, or otherwise they would have to be computed at runtime and stored in RAM (oh no!).

With C++11, it might be possible to deal with that at compile time automagically using constexprs. But C macros obviously won't help here. So, the general way to solve it would be a custom preprocessor which replaces "macros" like HSTR(hash, len, "foo") with HSTR("\xff", "\x03", "foo").

Are you brave enough to go for something like that? ;-)

(Extra note: as you also use C99, I tried to look for a way to (ab)use compound literals, but there doesn't seem to be a way to create a static compound literal).
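The hashing scheme proposed above - a small hash plus the stored length as a bucket key, with strcmp() only inside one bucket - can be sketched in Python. All names here are hypothetical, and the 1-byte djb2 truncation is an illustrative assumption, not the issue's choice:

```python
def byte_hash(s):
    # Hypothetical 1-byte hash: djb2 truncated to 8 bits.
    h = 5381
    for c in s.encode():
        h = (h * 33 + c) & 0xFF
    return h

class InternPool:
    """Sketch: bucket strings by (1-byte hash, length) so that the
    expensive full comparison runs only within one small bucket,
    instead of against every interned string."""
    def __init__(self):
        self.buckets = {}  # (hash, len) -> list of interned strings

    def intern(self, s):
        key = (byte_hash(s), len(s))
        bucket = self.buckets.setdefault(key, [])
        for existing in bucket:   # strcmp-equivalent, bucket-local
            if existing == s:
                return existing
        bucket.append(s)
        return s

pool = InternPool()
a = pool.intern("foo")
b = pool.intern("foo")
print(a is b)   # → True: the second intern found the first object
```

The length doubles as extra discriminating bits in the key, which is exactly the argument in point 2 above for storing it alongside the hash.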

wiki docs

I've expanded the wiki stubs crudely. Damien, do you want any of this info here?
Specifically I am thinking:

  • only a short overview.
  • Concentration on the pyb module and/or other aspects like that?
  • how to handle different architectures and chip variants.
    e.g. if we did the '407' we might add camera capabilities. For the monster 386-based SoC I pointed to the other day - well, it's almost a completely different codebase.

Good name for interpreter binary

I wanted to write that the "py" name is a bit too generic, and that if every Python-related project seized that name, there would be conflicts. Well, there already are conflicts; from http://docs.python.org/3/whatsnew/3.3.html :

The Python 3.3 Windows installer now includes a py launcher application that can be used to launch Python applications in a version independent fashion.

More info: http://docs.python.org/3/using/windows.html#launcher

So, it would be a good idea to come up with a less clashy name. IMHO, "upython" would be consistent, clear to users, and unique - based on the idea that "u" is a common way to represent the Greek letter "mu" for "micro"; "mpython" would mean "minipython". But even "mpython" is better than the current "py".

And this of course borders on the question raised in the Kickstarter comments - which project shortname/nickname to use in contexts where the full "MicroPython" is too long.

Dealing with Python3 string-like types zoo

Epigraph:

>  * str is now unicode => unicode is no longer a pain in the a****

True. Now byte strings are a pain in the arse.

- https://mail.python.org/pipermail/python-list/2010-June/580107.html

So, one of the changes in Python3 with respect to Python2 is that strings are Unicode by default. And that's actually the change which is not friendly to constrained environments. It may sound progressive and innovative to have Unicode by default, but the conservative approach says that Unicode is an extra, advanced, special-purpose feature, and forcing it as the "default" is hardly wise. You can get far with just 8bit-transparent byte strings; for example, you can write fairly advanced web applications which just accept input from the user, store it, and then render it back - without trying to interpret the string contents. That's exactly how the Python2 string type worked. And not only did Python3 force Unicode on everyone, it also killed the unobtrusive Python2 8-bit strings. Instead it introduced "byte strings" (the bytes type). But they're not Python2 strings:

$ python3.3
>>> b"123"[0]
49

So, if you had good times with Python2, with Python3 you either need to thrash your heap (all 8Kb of it), or write code which looks more complicated than Python2 ("if s[0] == ord('0')"?) and is not compatible with it.

So, how to deal with this madness in MicroPython? First of all, let's look at what we have now:

$ ./py
>>> "123"[0]
49

Ahem, so unlike what the "// XXX a massive hack!" comment says, it's not a hack - it's just that uPy so far implements byte strings. But:

$ ./py
>>> b"123"[0]
code 0x8b21fac, byte code 0x17 not implemented
py: ../py/vm.c:477: mp_execute_byte_code_2: Assertion `0' failed.
Aborted

So, what to do with "default Unicode" strings in uPy? It goes without saying that the in-memory representation for them should be utf8 - we simply don't have a wealth of memory to waste on 2- or 4-byte representations. Of course, using utf8 means expensive random access, so it would be nice to have a special (but oh-so-common) case for ASCII-only strings to support fast random access. Here the Python2 lover says that special-case 1-byte strings are, well, just Python2 strings. Outlawed by Python3, they are still useful for optimizing MCU-specific apps. And while 2- or 4-byte representations don't scale for MCUs, they are not so mad for the POSIX build.
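The random-access cost mentioned above can be illustrated in plain Python: indexing a UTF-8 buffer by code point requires a linear scan over continuation bytes (which match the bit pattern 0b10xxxxxx), whereas an ASCII-only string permits direct byte indexing. The helper name is made up for illustration:

```python
def utf8_char_at(data: bytes, index: int) -> str:
    """Return the code point at `index` by scanning the UTF-8 buffer.
    Continuation bytes have the form 0b10xxxxxx (top two bits == 10)."""
    i = 0
    for _ in range(index):            # skip `index` code points: O(n)
        i += 1
        while i < len(data) and (data[i] & 0xC0) == 0x80:
            i += 1
    j = i + 1                         # find the end of this code point
    while j < len(data) and (data[j] & 0xC0) == 0x80:
        j += 1
    return data[i:j].decode()

s = "aµb"                  # 'µ' occupies two bytes in UTF-8
data = s.encode("utf-8")
print(len(data), utf8_char_at(data, 1), utf8_char_at(data, 2))  # → 4 µ b
```

For an ASCII-only string, every code point is one byte, so `data[index]` works directly - which is exactly the fast path argued for above.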

Let's also backtrack to byte strings - they have a mutable counterpart too, bytearray.

So, here're all potential string types which are nice to support:

  1. bytes
  2. bytearray
  3. utf8
  4. ASCII/8bit string
  5. 16bit string
  6. 32bit string

And don't forget about interned strings, which are apparently utf8, but of course with ascii optimization:
7. interned utf8
8. interned ascii

We can also remember array.array - it's very important for uPy, but can stay an extension type.

So, there are clearly no free tag bits in object pointers to store the string type. But #8 proposes to add a string header with a hash and variable-length size encoding. Well, as it's variable-length, we can steal a few bits from the initial size byte to hold the type, still keeping only a 2-byte overhead for short strings.
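One possible shape for such a header is sketched below. All the widths are illustrative assumptions, not the issue's design: 3 bits of the first byte hold a made-up type code, 4 bits hold the low length bits, and one continuation bit extends the length varint-style:

```python
# Hypothetical header layout: first byte = [type:3][cont:1][len_low:4],
# followed by 7-bit varint continuation bytes for longer strings.
TYPE_BYTES, TYPE_UTF8, TYPE_ASCII = 0, 1, 2   # made-up type codes

def encode_header(str_type: int, length: int) -> bytes:
    first = (str_type << 5) | (length & 0x0F)
    length >>= 4
    if length:
        first |= 0x10                  # continuation bit
    out = bytearray([first])
    while length:
        b = length & 0x7F
        length >>= 7
        out.append(b | (0x80 if length else 0))  # high bit = more bytes
    return bytes(out)

def decode_header(data: bytes):
    str_type = data[0] >> 5
    length = data[0] & 0x0F
    i, shift = 1, 4
    if data[0] & 0x10:
        while True:
            length |= (data[i] & 0x7F) << shift
            shift += 7
            if not (data[i] & 0x80):
                break
            i += 1
    return str_type, length

print(decode_header(encode_header(TYPE_UTF8, 5)))      # 1-byte header
print(decode_header(encode_header(TYPE_ASCII, 1000)))  # 2-byte header
```

A string up to 15 bytes pays only a single header byte here, matching the "2-byte overhead for short strings" goal (one for type+length, one for the hash) while leaving string length unbounded.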

Thoughts?

malloc.h is obsolete

Building on OpenBSD I get:

wilfred:unix> gmake
gcc -I. -I../py -Wall -Werror -ansi -std=gnu99 -Os -DUSE_READLINE  -c -o build/main.o main.c
In file included from main.c:4:
/usr/include/malloc.h:4:2: error: #warning "<malloc.h> is obsolete, use <stdlib.h>"
Makefile:73: recipe for target 'build/main.o' failed
gmake: *** [build/main.o] Error 1

Indeed, using stdlib.h silences this.

Adding musl c library to micro python - musl branch

Hi All,

I've created a new branch 'musl' to import the musl c library

http://www.musl-libc.org

The musl c library has an MIT license, which works nicely with incorporating it into micro python. I'll be looking at adding some (though obviously not all) of the code for features like printf, UTF-8, math, regex, etc. to micro python.

I've imported the entire library (including parts that probably won't be used), including history and placed it in a directory called musl. You should be able to track changes back in time in the musl project with

git log --follow

The --follow flag is needed because I moved all the files from the root to the musl directory.

Anyways, lots of changes will be made; this branch will probably be unstable at times. If there are any other features that you think might be important, let me know. Feel free to comment, hack away or review as this little adventure begins!

Hagen

Python module support is not yet implemented

Not trying to cause any hassle, but rather to have the fact recorded, and possibly to get some comments regarding planning and issues related to the implementation.

So, the "import" statement (well, the __import__() builtin func) is currently not implemented (it just dumps its args), and mp_module_new() has the following comment:

// temporary way of making C modules
// hack: use class to mimic a module

I understand that full-fledged module support is probably not top-priority for the MCU port - being able to create C modules and use just a single main Python app already allows one to do a lot of things. But sooner or later it will be needed - that's what we all expect from Python: being able to easily reuse 3rd-party modules, right? (Actually #7 already touched on that.) And the "unix" port is pretty orphaned without it.

So, any planning/ETA for when this might be implemented? Any blockers on the road? For example, I don't know if all the needed things on the core side are there, but I can imagine there are many "boring" questions like module search paths, differences between search paths/methods for ports (MCU vs unix), support for precompiled bytecode files, etc.

`list.pop` should default to -1.

In python,

>>> a = [1, 2, 3]
>>> a.pop()
3

In µpy,

>>> a = [1, 2, 3]
>>> a.pop()
TypeError: function takes 2 positional arguments but 1 were given

Need to work out license of STM libraries

The code in stm/lib comes from the STM32F4xx standard peripheral library, v1.1.0. I'm not sure about the license for this, and whether or not we can have it in the repo. We need to work out the situation and how to deal with it.

how do we associate contribs

As we add people, they will want to write python modules that work in micropython. (I know I do :))
We could have a contrib folder, but it'd probably be a food fight for changes.

  1. If we don't have a contrib folder and instead allow people to make new projects that will work on micropython - how do we want everyone to be able to find them ?
    Recommend they start with mupython- or micropython-... ?
  2. I think we should have a pyb module stub which people can include to get their projects working.
    Initially with only a few definitions (pyb.led, pyb.i2c, ...) but eventually stabilising to some release as the exposed interface.
    Where should this go ?
    How should it be handled ?

If we can work out how to do it, I will document it in the Wiki so people can get started...

Create list of supported HW

A list of supported (tested) components like displays, ethernet, wifi or bluetooth modules, etc. would be nice. It should definitely contain a commented example of use for every non-trivial component which was ever tested and worked with the MicroPython board. Maybe create a community-based section on micropython.org? Or two sections: "compatible hardware" and "code examples" (the second one could contain examples for trivial components too).

micropython organization needs a gravatar

I think a squared-up photo of the board would work well. A cropped, skewed pic from the official website might look good too, but I'm not sure.

Otherwise, maybe you could have the community submit designs as a contest (with a board as the prize, or something else cool).

Why is qstr passed by handle (instead of pointer)?

Are there any special design decisions behind qstrs being referenced by index into a table instead of by direct pointers? Having an extra indirection of course affects performance, especially on cached architectures.

My first thought was about limiting the value domain, but the "qstr" type is still defined as uint, so it generally takes the same size as a pointer. Though I found that qstrs are stored in parse nodes as MP_PARSE_NODE_LEAF_ARG(pn), which takes 4 bits for the kind. For a pointer that would mean qstrs must be 16-byte aligned - probably too harsh for strings in a constrained environment indeed. But on the other hand, if this is only required while parsing source code...

So, I wonder: are there more tricks/optimizations which assume that qstrs have a limited value domain, or is that the only one?

Consider memory alloc API with explicit size param for m_free()

When dealing with interpreters, object size is either implicitly known (for example, the basic representation of an object has a fixed size, like 8 or 32 bytes), or the size is stored explicitly at the interpreter level (for example, an array size needs to be stored in the object header anyway). This means it may be possible to optimize the low-level memory allocation system by not storing the memory chunk size (thus saving memory), instead relying on higher levels to pass the size explicitly.

Quick look at current sources doesn't show that MicroPython is ideally suited for such optimization, but that's why I write - to propose to add that as a (non-immediate) design goal.

First steps towards that can be simple: give m_free() the signature m_free(ptr, size), and m_realloc(ptr, old_size, new_size); then, while going over the code to adjust call sites, see if what to pass as the "size" param is obvious. In case it's not, well, pass 0 and leave larger refactors to whoever really implements alternative allocators. Ultimately, there are good reasons for having an explicit size field for all variable-length objects, IMHO.
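As a rough model of the proposed API, the sketch below shows why passing the size explicitly lets the allocator drop per-chunk headers. It is pure Python for illustration; the m_malloc/m_free/m_realloc names follow the issue, everything else is hypothetical:

```python
class SizedAllocator:
    """Model of an allocator that stores NO per-chunk size header:
    the caller, who already knows the size, passes it back on free."""
    def __init__(self, heap_size):
        self.free_bytes = heap_size

    def m_malloc(self, size):
        assert size <= self.free_bytes, "out of memory"
        self.free_bytes -= size
        return object()          # stand-in for a C pointer

    def m_free(self, ptr, size):
        # Proposed signature m_free(ptr, size): the caller supplies
        # the size, so no hidden chunk header is needed.
        self.free_bytes += size

    def m_realloc(self, ptr, old_size, new_size):
        # Proposed signature m_realloc(ptr, old_size, new_size).
        self.m_free(ptr, old_size)
        return self.m_malloc(new_size)

heap = SizedAllocator(1024)
p = heap.m_malloc(100)
p = heap.m_realloc(p, 100, 40)
heap.m_free(p, 40)
print(heap.free_bytes)   # → 1024: every byte accounted for explicitly
```

The bookkeeping balances only because every call site passed the correct size - which is exactly the refactoring burden the issue describes.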

Replacing stm/string0.c with code from the musl c library

In the musl branch, cherry-pick these commits:

7741268
7990eba
347280b
69b723e
18a48f8
2ae7f77

Summary of commits

The copyright/readme from musl was copied to micromusl, then a small edit to the README was made to describe the micromusl directory.

The original make system that musl uses has some steps that build header files and such, so some header files were copied over to the micromusl directory to satisfy the build system.

Some changes were made to the Makefile to include the same CFLAGS that stm uses to build.

Then a select handful of C files from src/string were brought over; these are the functions that micromusl replaces from stm/string0.c.

Finally the stm/Makefile was altered to remove string0.c and to link against the libc.a archive in micromusl.

How to build/test (assumes the cross compiler is in your $PATH):

cd micromusl
export CROSS_COMPILE=arm-none-eabi-
./configure --disable-shared --prefix=build
make
make install
cd ../stm
make clean
make

This should build a binary that can be uploaded with dfu-util.

Quickly tested here. Comments and questions welcome.

Note: this is a first run preliminary test, not ready to merge with master, until fully tested.

Discussion - STM port & GCC Runtime Library Exception

It looks like it is possible to link to libgcc.a in the stm port of MicroPython without affecting the MIT license, because GCC's runtime library is covered by the GCC Runtime Library Exception.

References:
http://gcc.gnu.org/onlinedocs/gccint/Libgcc.html
http://www.gnu.org/licenses/gcc-exception-3.1.html

Not being an expert on the GNU licenses, let's discuss whether this is correct.

By changing the linker from ld to gcc and then linking with libgcc.a, a lot of work can be eliminated. For example, in main.c there are two stub functions, __aeabi_f2d and __aeabi_d2f, that can be eliminated by using the built-in functions in libgcc.a.

Changes to be made:

diff --git a/stm/Makefile b/stm/Makefile
index 33d738d..f465b9da 100644
--- a/stm/Makefile
+++ b/stm/Makefile
@@ -7,10 +7,10 @@ DFU=../tools/dfu.py

 AS = arm-none-eabi-as
 CC = arm-none-eabi-gcc
-LD = arm-none-eabi-ld
+LD = arm-none-eabi-gcc
 CFLAGS_CORTEX_M4 = -mthumb -mtune=cortex-m4 -mabi=aapcs-linux -mcpu=cortex-m4 -mfpu=fpv4-sp-d16 -mfloat-abi=hard -fsingle-precision-co
 CFLAGS = -I. -I$(PYSRC) -I$(FATFSSRC) -I$(STMSRC) -Wall -ansi -std=gnu99 -Os -DNDEBUG $(CFLAGS_CORTEX_M4)
-LDFLAGS = --nostdlib -T stm32f405.ld
+LDFLAGS = -nostdlib -T stm32f405.ld

 SRC_C = \
        main.c \
@@ -147,7 +147,7 @@ $(BUILD)/flash1.bin: $(BUILD)/flash.elf
        arm-none-eabi-objcopy -O binary -j .text -j .data $^ $@

 $(BUILD)/flash.elf: $(OBJ)
-       $(LD) $(LDFLAGS) -o $@ $(OBJ)
+       $(LD) $(LDFLAGS) -o $@ $(OBJ) -lgcc
        arm-none-eabi-size $@

and

diff --git a/stm/main.c b/stm/main.c
index 8c2c96a..36ca775 100644
--- a/stm/main.c
+++ b/stm/main.c
@@ -1268,15 +1268,6 @@ soft_reset:
     goto soft_reset;
 }

-double __aeabi_f2d(float x) {
-    // TODO
-    return 0.0;
-}
-
-float __aeabi_d2f(double x) {
-    // TODO
-    return 0.0;
-}

 double sqrt(double x) {
     // TODO

MicroPython UNIX does not compile on FreeBSD

Hello,

Tried compiling MicroPython UNIX on FreeBSD 9.2 (amd64), and although the majority of the code compiles fine, I had the following problems:

a) Need to use 'gmake' rather than 'make' (this just needs a note in the docs)

b) malloc.h is deprecated and should be replaced with stdlib.h (I think this is true on Linux as well):

gcc -I. -I../py -Wall -Werror -ansi -std=gnu99 -Os -DUSE_READLINE -c -o build/main.o main.c
In file included from main.c:4:
/usr/include/malloc.h:3:2: error: #error "<malloc.h> has been replaced by <stdlib.h>"
cc1: warnings being treated as errors
main.c: In function 'do_repl':
main.c:76: warning: implicit declaration of function 'free'
gmake: *** [build/main.o] Error 1

Replacing malloc.h with stdlib.h fixes the problem and allows most of the rest of the code to compile cleanly.

c) mp_map_t is redefined

gcc -I. -I../py -Wall -Werror -ansi -std=gnu99 -Os -DUSE_READLINE -c -o build/runtime.o ../py/runtime.c
In file included from ../py/runtime.c:17:
../py/map.h:18: error: redefinition of typedef 'mp_map_t'
../py/obj.h:117: error: previous declaration of 'mp_map_t' was here
gmake: *** [build/runtime.o] Error 1

I haven't been able to work out the logic of the includes to fix this yet.

OSX: Build fails with array-bounds errors

When building in the unix dir, I get the following errors in vm.c:

../py/vm.c:81:40: error: array index -1 is before the beginning of the array [-Werror,-Warray-bounds]
    machine_uint_t *volatile exc_sp = &exc_stack[-1]; // stack grows up, exc_sp points to top of stack
                                       ^         ~~
../py/vm.c:80:5: note: array 'exc_stack' declared here
    machine_uint_t exc_stack[8]; // on the exception stack we store (ip, sp | X) for each block, X = pre...
    ^
../py/vm.c:451:43: error: array index -1 is before the beginning of the array [-Werror,-Warray-bounds]
                        assert(exc_sp == &exc_stack[-1]);
                                          ^         ~~
/usr/include/assert.h:93:25: note: expanded from macro 'assert'
(__builtin_expect(!(e), 0) ? __assert_rtn(__func__, __FILE__, __LINE__, #e) : (void)0)
                        ^
../py/vm.c:80:5: note: array 'exc_stack' declared here
machine_uint_t exc_stack[8]; // on the exception stack we store (ip, sp | X) for each block, X = pre...
    ^
2 errors generated.
make: *** [build/vm.o] Error 1

This is when building with Apple's clang-based gcc-alike.

structure and naming: i2c python interface vs arduino

The Arduino interface is documented here:

keywords in here:

It uses a different set of function names than MicroPython currently does.

There is a small reason (existing users) to follow their lead, or to make similar wrapped interfaces to help users migrating from Arduino, although we clearly do not want to be slavish about this. The Pythonic approach suits us better, but for low-level interfaces there is the advantage of familiarity where applicable.
What do we want to do? Here, and in general?

(We could also make an arduino module which wraps MicroPython calls to look similar to existing Arduino libraries?)

Notes:
Specifically micropython (current) vs arduino i2cdevlib:
(note that i2cdevlib also wraps the Wire library which inherits from Stream)
pyb_I2C
i2c_obj_print
i2c_obj_start
i2c_obj_write
i2c_obj_read
i2c_obj_readAndStop
i2c_obj_stop

vs:
from Wire:
begin([addr])
requestFrom(address, quantity, [stop])
beginTransmission(address)
endTransmission([stop])
write(value, [stop]) (where value may be byte, str, array)
available()
read()
onReceive(handler)
onRequest(handler)

From I2Cdev
readBit
readBitW
readBits
readBitsW
readByte
readBytes
readWord
readWords
writeBit
writeBitW
writeBits
writeBitsW
writeByte
writeBytes
writeWord
writeWords

Where - for example - readWords supports timeouts
int8_t I2Cdev::readWords(uint8_t devAddr, uint8_t regAddr, uint8_t length, uint16_t *data, uint16_t timeout)
The Arduino lib uses its Wire and Fastwire libraries to also support one-wire and two-wire devices, and deals with requests for reads longer than the built-in buffer...

Limitations in Wire include only allowing 7bit i2c addresses.

Extra info (maybe I should put this on a wiki page?):
One-wire on Arduino:

Stream lib

SPI lib

Forked repo adding all members.

I don't know about you guys, but I have received a bunch of emails about being added to everyone's forked repos. I don't want to be added to everyone's fork. I don't know if this is an automatic feature of github, but please figure it out!

machine_int_t and int dichotomy

I don't do 64 bits. 32 bits should be enough, if not for everyone, then at least for peaceful users, right? So I never paid much attention to the fact that Linux uses the LP64 memory model, where int is still 32 bits. Now, uPy wants sizeof(integer type) == sizeof(void*) and defines machine_int_t; that's what is stored in uPy objects. But uPy still uses int everywhere, which is arguably incorrect, as it may lead to overflow (unless it's handled in some covert, careful manner which I missed from a random grep). OK, so the solution would seem simple: just use machine_int_t everywhere. But! libc functions still take/return ints, so they don't allow the full range of uPy values.

What are guidelines on dealing with this?

Compiling with "-O0 -ggdb" Overflows the FLASH_TEXT Section

Using "-O0 -ggdb" to debug, the resulting binary overflows the FLASH_TEXT section:

arm-none-eabi-ld: build/flash.elf section `.text' will not fit in region `FLASH_TEXT'
arm-none-eabi-ld: region `FLASH_TEXT' overflowed by 110372 bytes

Looking at the linker script, it seems that only one 128KB sector is allocated for the FLASH_TEXT section:

FLASH_ISR (rx)  : ORIGIN = 0x08000000, LENGTH = 0x004000 /* sector 0, 16 KiB */
FLASH_TEXT (rx) : ORIGIN = 0x08020000, LENGTH = 0x020000 /* sector 5, 128 KiB */

I'm not sure I completely understand the flash layout. Why does it start at sector 5 specifically? And why are sectors 1-4 and the rest of the flash memory not used? If the scripts are stored after sector 5, can't we use sectors 1-4 for the application?

Have platform-dependent code in only one place

Have a "platform" folder with subdirectories for each platform containing their platform-dependent code (i.e., no assembler code in the core), like PyMite does. This would make it clearer where to look and how to port to other platforms, and would also make it easier to have a single central makefile (just define which platform you want to compile as a param).

RFC: Configurability and parametrization

When developing a full-fledged language implementation for a constrained environment, there are a number of choices, compromises and optimizations to make. For different environments, different choices make sense. So, it's important to support (compile-time) configuration, and in this ticket I'd like to discuss:

  1. Amount of configurability we want to allow.
  2. Specific guidelines on how overall configuration and individual parameters are handled.

Let's start by acknowledging the risk of adding too much configurability: it may be hard to maintain the project and test individual optimizations, and even harder to test various combinations of them. With that in mind, I still think that having localized (i.e. not interdependent) optimizations and a good testsuite should make advanced configurability feasible. As an example, I imagine configurability down to the level of individual methods of builtin types (so, for example, if a particular application never uses the str.join() method, MicroPython can be built without it, which will allow it to run in a more flash-constrained environment, or to fit more useful functionality in flash instead).

Regarding configuration params, they are currently handled by a per-port (essentially, per-build-variant) mpconfig.h file, like:

#define MICROPY_ENABLE_FLOAT        (1)
#define MICROPY_EMIT_CPYTHON        (0)
#define MICROPY_EMIT_X64            (1)
#define MICROPY_EMIT_THUMB          (0)
#define MICROPY_EMIT_INLINE_THUMB   (0)

That seems like a good approach, because passing a gazillion potential options via the compiler command line doesn't scale. To avoid requiring each and every option to be specified in mpconfig.h, it makes sense to have a py/default_config.h which uses #ifndef guards to set sensible defaults for options. Otherwise, mpconfig sets the convention of a "MICROPY_" prefix for config options, and of using #if (not #ifdef) to test options in source code.

I myself already introduced the "USE_READLINE" config param, which doesn't follow the convention above, but it's kinda special, as it affects only the "unix" port (not the language core) and relates to licensing (readline pulls the resulting binary under the GPL).

Thoughts?

list.clear is unimplemented

Python:

>>> x = [1, 2, 3, 4]
>>> x.clear()
>>> x
[]

µpy:

>>> x = [1, 2, 3, 4]
>>> x.clear()
AttributeError: 'list' object has no attribute 'clear'
