ponylang / rfcs
RFCs for changes to Pony
Home Page: https://ponylang.io/
This was originally opened as ponyc issue 60.
If we can prove a match is exhaustive, we can give an error if an else branch is present, and we can avoid unioning None with the expression type (from an else branch that will never be taken).
If we can prove that a match case is unreachable, we can give an error.
Specifically:
As previously discussed, since values are matched by calling eq(), we absolutely cannot determine whether a set of matches is exhaustive if values are matched and any of the types involved provide their own eq().
However, we can do checks when matching is based only on type. I think this would be well worth doing to aid cases such as switching on an enum.
See original issue for full details.
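As a sketch of the intent (the union type and names here are illustrative, not from the original issue): with exhaustiveness checking, a match covering every member of a closed union would need no else branch, and the result type would not pick up None from an implicit else.

```pony
primitive Red
primitive Green
primitive Blue
type Colour is (Red | Green | Blue)

actor Main
  new create(env: Env) =>
    env.out.print(name(Red))

  // With exhaustiveness checking, no `else` is needed and the
  // return type stays String (no None from an implicit else branch).
  fun name(c: Colour): String =>
    match c
    | Red => "red"
    | Green => "green"
    | Blue => "blue"
    end
```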
The "no operator precedence" rule was quite new to me, but I am getting used to it.
What I would suggest, however, is to soften the rule by introducing groups of infix operators: operators of the same group could be chained without needing parentheses around them, just as you can already do with repeated uses of a single operator:
a + b - c + d
a * b / c * d
The above examples would then be valid, and the unsugared versions would look like this:
a.add(b).sub(c).add(d)
a.mul(b).div(c).mul(d)
The groups would be:
`+` (add) and `-` (sub)
`*` (mul) and `/` (div)
`<<` (lshift) and `>>` (rshift)
Now, to why I am making this proposal: from a mathematical standpoint, `a + b - c` and `a - c + b` are essentially the same thing: at the end, you have added `a` and `b` and have subtracted `c` at some point; all operations are done from left to right, with no concept of precedence at play.
When writing code, one could think "add, then subtract, then add" their values, as in a big summation, and would so write `a + b - c + d`. However, Pony's rule forces them to put parentheses around the code: `((a + b) - c) + d` or `(a + b + d) - c`, essentially changing what they were thinking first ("first add, then subtract from the result, then add to the result" or "add the three, then subtract from the result").
Surely, they could have written `a + b + -c + d`, which is also mathematically correct, but this would not be possible with the other group(s) of operators.
Implementing this would not break any existing code, as it only allows more to be done by softening the rules.
These rules would look like this:
Repeated use of a single operator, or of operators of the same group, is fine:
`1 + 2 + 3`
<mention operator groups>
`1 + 2 - 3`
That was it for the infix operators grouping.
This is just a simple edge case: doing `not match ... end` results in an error:
syntax error: expected expression after not
The same error is raised when trying to do `not if ... end`. It is easily fixed by putting parentheses around the match or the if expression. It bugs me a little, since matches and ifs are said to be expressions.
Here is an example:
https://playground.ponylang.io/?gist=2652feb86d95f77db66c4b1589358beb
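To make the workaround concrete, here is a small sketch of my own (not the linked gist):

```pony
actor Main
  new create(env: Env) =>
    let b = true
    // `not match b | true => false else true end` is a syntax error today.
    // Wrapping the match expression in parentheses works:
    let x = not (match b | true => false else true end)
    env.out.print(x.string())
```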
Let me know if I should move this issue to another github issue.
Thanks for reading up to here, if you did, and have a nice day!
Optionally allow docstrings for class/actor fields below or above the field definition.
class MyClass
  """class docstring"""
  let number: U32 = 1
  """field docstring below"""

  fun method(arg: String): None =>
    """method docstring"""
    None

class MyClass
  """class docstring"""

  """field docstring above"""
  let another_number: U32 = 1

  fun method(arg: String): None =>
    """method docstring"""
    None
I would like your opinion on this. Maybe a kind of vote: No, Above, or Below?
As discussed on today's sync call, due to @Theodus finding a standard library breakage in the planned fix for https://github.com/ponylang/ponyc/issues/2182#issuecomment-335878405, I think that adding syntax to our constraint language to fix that issue is a higher priority than previously thought.
Specifically, we need an RFC for new syntax that can be used in a type parameter constraint to require that the type parameter is safe to put in a match expression against some other type.
So, for ListNode, it might look like this (with a more elegant syntax):
class ListNode[A: canbematchedagainst None]
This would imply that a match expression containing the right-side operand type and the type argument would not be subject to the error "this match violates capabilities".
What I would love is to be able to know which features are implemented without reading closed pull requests.
So, I think it would be good if the RFC repository were marked as a release.
And it would be nice if a changelog* showed all implemented features.
That would be nice. I hope I'm not alone.
(*) I recommend the name implemented(.md)
with the following structure (also only a recommendation from me); [...] means optional
/some space/
I'm creating an Issue based on the comments in #131 to remind myself to draft an RFC for static collection initializers.
let collection = { 0, 2, 4, 8 }
Compiler crash on following code:
trait T
interface I
class C

actor Main
  fun f[A, B](x: (A | B)): (A | B) =>
    match x
    | let a: A => a
    | let b: B => b
    end

  new create(env: Env) =>
    f[I32, I](1) // NG
    // f[I32, T](1) // OK
    // f[I32, C](1) // OK
    // f[ReadSeq[I32], I](Array[I32]) // NG
    // f[ReadSeq[I32], T](Array[I32]) // NG
    // f[ReadSeq[I32], C](Array[I32]) // OK
Backtrace(lldb):
Building builtin -> /home/ta3ta1/Code/ponyc/packages/builtin
Building /home/ta3ta1/Code/issue/ -> /home/ta3ta1/Code/issue
Generating
Reachability
Selector painting
Data prototypes
Data types
Function prototypes
Functions
Process 1671 stopped
* thread ponylang/ponyc#1: tid = 1671, 0x00007ffff58e4230 libLLVM-3.9.so.1`llvm::BasicBlock::getContext() const, name = 'ponyc', stop reason = signal SIGSEGV: invalid address (fault address: 0x8)
frame #0: 0x00007ffff58e4230 libLLVM-3.9.so.1`llvm::BasicBlock::getContext() const
libLLVM-3.9.so.1`llvm::BasicBlock::getContext:
-> 0x7ffff58e4230 <+0>: movq 0x8(%rdi), %rax
0x7ffff58e4234 <+4>: movq (%rax), %rax
0x7ffff58e4237 <+7>: retq
0x7ffff58e4238: nopl (%rax,%rax)
(lldb) bt
* thread ponylang/ponyc#1: tid = 1671, 0x00007ffff58e4230 libLLVM-3.9.so.1`llvm::BasicBlock::getContext() const, name = 'ponyc', stop reason = signal SIGSEGV: invalid address (fault address: 0x8)
* frame #0: 0x00007ffff58e4230 libLLVM-3.9.so.1`llvm::BasicBlock::getContext() const
frame ponylang/ponyc#1: 0x00007ffff595758f libLLVM-3.9.so.1`llvm::BranchInst::BranchInst(llvm::BasicBlock*, llvm::Instruction*) + 31
frame ponylang/ponyc#2: 0x00007ffff590ae07 libLLVM-3.9.so.1`LLVMBuildBr + 55
frame ponylang/ponyc#3: 0x00005555555fa174 ponyc`gen_match(c=0x00007fffffffdc30, ast=0x00007ffff00c5980) + 837 at genmatch.c:816
frame ponylang/ponyc#4: 0x00005555555ecdb9 ponyc`gen_expr(c=0x00007fffffffdc30, ast=0x00007ffff00c5980) + 485 at genexpr.c:86
frame ponylang/ponyc#5: 0x00005555555fa404 ponyc`gen_seq(c=0x00007fffffffdc30, ast=0x00007ffff00c5e00) + 78 at gencontrol.c:22
frame ponylang/ponyc#6: 0x00005555555ecc5c ponyc`gen_expr(c=0x00007fffffffdc30, ast=0x00007ffff00c5e00) + 136 at genexpr.c:26
frame ponylang/ponyc#7: 0x0000555555624daa ponyc`genfun_fun(c=0x00007fffffffdc30, t=0x00007ffff2a65800, m=0x00007ffff00a5e80) + 424 at genfun.c:436
frame ponylang/ponyc#8: 0x00005555556261f6 ponyc`genfun_method_bodies(c=0x00007fffffffdc30, t=0x00007ffff2a65800) + 340 at genfun.c:895
frame ponylang/ponyc#9: 0x00005555555f0ae8 ponyc`gentypes(c=0x00007fffffffdc30) + 1084 at gentype.c:884
frame ponylang/ponyc#10: 0x000055555560b7f0 ponyc`genexe(c=0x00007fffffffdc30, program=0x00007ffff304dd00) + 559 at genexe.c:409
frame ponylang/ponyc#11: 0x00005555555e9f5c ponyc`codegen(program=0x00007ffff304dd00, opt=0x00007fffffffe0d0) + 298 at codegen.c:1043
frame ponylang/ponyc#12: 0x000055555559c8c2 ponyc`generate_passes(program=0x00007ffff304dd00, options=0x00007fffffffe0d0) + 53 at pass.c:301
frame ponylang/ponyc#13: 0x000055555559b8f9 ponyc`compile_package(path="/home/ta3ta1/Code/issue/", opt=0x00007fffffffe0d0, print_program_ast=false, print_package_ast=false) + 147 at main.c:255
frame ponylang/ponyc#14: 0x000055555559bdc6 ponyc`main(argc=2, argv=0x00007fffffffe248) + 1208 at main.c:398
frame ponylang/ponyc#15: 0x00007ffff390d2b1 libc.so.6`__libc_start_main + 241
frame ponylang/ponyc#16: 0x000055555559b6ba ponyc`_start + 42
(lldb) fr sel 3
frame ponylang/ponyc#3: 0x00005555555fa174 ponyc`gen_match(c=0x00007fffffffdc30, ast=0x00007ffff00c5980) + 837 at genmatch.c:816
813 if(is_matchtype(match_type, pattern_type, c->opt) != MATCHTYPE_ACCEPT)
814 {
815 // If there's no possible match, jump directly to the next block.
-> 816 LLVMBuildBr(c->builder, next_block);
817 } else {
818 // Check the pattern.
819 ok = static_match(c, match_value, match_type, pattern, next_block);
(lldb) fr va
(compile_t *) c = 0x00007fffffffdc30
(ast_t *) ast = 0x00007ffff00c5980
(bool) needed = true
(ast_t *) type = 0x00007ffff00c36c0
(ast_ptr_t) match_expr = 0x00007ffff00c4d80
(ast_ptr_t) cases = 0x00007ffff00c8280
(ast_ptr_t) else_expr = 0x00007ffff00c6b40
(LLVMTypeRef) phi_type = 0x0000555555929f30
(ast_t *) match_type = 0x00007fffefdd3980
(LLVMValueRef) match_value = 0x000055555594f170
(LLVMBasicBlockRef) pattern_block = 0x00005555559b75b0
(LLVMBasicBlockRef) else_block = 0x0000000000000000
(LLVMBasicBlockRef) post_block = 0x00005555559b74b0
(LLVMBasicBlockRef) next_block = 0x0000000000000000
(LLVMValueRef) phi = 0x00005555559b7538
(ast_t *) the_case = 0x00007ffff00ca940
(ast_t *) next_case = 0x0000000000000000
(ast_ptr_t) pattern = 0x00007ffff00c9040
(ast_ptr_t) guard = 0x00007ffff00c9380
(ast_ptr_t) body = 0x00007ffff00cb5c0
(ast_t *) pattern_type = 0x00007ffff00ca480
(bool) ok = true
NG patterns give same crash.
Compiler version:
0.18.0-96e96187 [debug]
compiled with: llvm 3.9.1 -- cc (Debian 6.3.0-18) 6.3.0 20170516
-*- coding: utf-8 -*-
When file encodings are not uniform, the editor often mangles the entire file on save, resulting in rewritten code.
I think we should unify the source encoding. The default encoding of Visual Studio is not necessarily UTF-8 (it is sometimes GBK), but GCC defaults to UTF-8. When we switch compiler tools, the encodings often end up in disorder.
Sort(@{ (key) => key}, array)
In this issue I define an arithmetic error as a computation that does not yield the result it would from a purely mathematical point of view (like underflow or overflow), or a computation that yields a result when none is defined (division by 0). From a user's point of view, these kinds of errors are almost never made on purpose and will surely result in a program that doesn't have the desired behavior (at least locally). As such, I would like integer arithmetic errors to somehow be caught by Pony and "dealt" with.
As a reference, I stumbled on the puffs project, which can be a great source of insight. Among other things, the project tries to provide a solution to integer arithmetic overflows. Moreover, its README compares it to other languages (C, Rust, D and Swift) regarding how they approach the problem.
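As a point of comparison (my addition, not from the original issue), Pony's integer types already expose overflow-detecting arithmetic such as addc, which returns the result together with an overflow flag; the request here is essentially for such errors to be harder to silently ignore:

```pony
actor Main
  new create(env: Env) =>
    let x: U8 = 250
    // addc returns (result, overflow_flag); 250 + 10 overflows U8
    (let sum, let overflowed) = x.addc(10)
    if overflowed then
      env.out.print("overflow detected")
    else
      env.out.print(sum.string())
    end
```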
TL;DR: Explanation of the issue with extracting fields from a consumed reference, and an initial proposal of a syntax.
When handling viewpoint adaptation for field extraction (especially with iso origins), there is no way to safely extract multiple let fields, or to extract var fields without reallocating them. Examples:
class Foo
  let x: String iso
  let y: String iso
  new iso create(x': String iso, y': String iso) =>
    x = consume x'; y = consume y'

class Bar
  var x: String iso
  var y: String iso
  new iso create(x': String iso, y': String iso) =>
    x = consume x'; y = consume y'

class Baz
  let arr: Array[String iso]
  new iso create(x': String iso, y': String iso) =>
    arr = [consume x'; consume y']

class Qux
  let tup: (String iso, String iso) // or "var tup: ..."
  new iso create(x': String iso, y': String iso) =>
    tup = (consume x', consume y')

actor Main
  var _x: String iso = recover String end
  var _y: String iso = recover String end

  new create(env: Env) =>
    let x = "x"; let y = "y"
    let foo: Foo iso = Foo(x.clone(), y.clone())
    let bar: Bar iso = Bar(x.clone(), y.clone())
    let baz: Baz iso = Baz(x.clone(), y.clone())
    let qux: Qux iso = Qux(x.clone(), y.clone())

    // Doesn't work, since you can only pick one field
    _x = (consume foo).x
    //_y = (consume foo).y // "can't use a consumed local in an expression"

    // Works, but only for `var` and requires potentially unnecessary allocations
    _x = bar.x = recover String end; _y = bar.y = recover String end

    // Works, but has runtime issues and requires an error block
    try
      let baz': Baz = consume baz
      _x = baz'.arr.shift()?
      _y = baz'.arr.shift()?
    end

    // Works, but data needs to be in a well-specified tuple
    (_x, _y) = (consume qux).tup
The solution in Qux seems to be the best way to do this currently, but it is still error-prone if the elements are incorrectly rearranged (which is less likely for fields with different types), and it can get complicated if the receiver needs to read a few fields of a huge tuple.
I specifically mentioned viewpoint adaptation earlier as this should be valid for other reference capabilities, but iso and maybe trn origins would be the most relevant cases.
I would expect something like this to work, but it violates capabilities as expected:
// Error: iso! is not a subcap of iso
(_x, _y) = recover
  let foo': Foo ref = consume foo
  (foo'.x, foo'.y)
end
And consuming from fields directly is not allowed, either:
// Error: Consume expressions must specify a single identifier
_x = consume (foo.x); _y = consume (foo.y)
Some sort of syntax that "consumes a reference and returns a tuple of its fields, specified by the programmer" would make sense to respect capabilities. For now, I thought of a "destructor" (as in the reverse of a constructor, or the de-structuring of the origin object) syntax:
(_x, _y) = destruct (foo' = consume foo) (foo'.x, foo'.y) end
When checking refcaps, if foo is of type Any A, foo' would be Any A^ in both tuple calls.
As far as I can tell, since foo' only lives inside the "destruct" block, no other refcaps could leak, so long as no other values, fields or functions can exist on the right side (the tuple expression) of the block. Alternatively, foo' isn't necessary for purposes other than readability. Some alternative syntaxes:
(_x, _y) = destruct (_ = consume foo) (_.x, _.y) end
(_x, _y) = destruct (consume foo) (x, y) end
(_x, _y) = destruct (consume foo) (.x, .y) end
(_x, _y) = destruct (consume foo) (_.x, _.y) end
// etc.
Do note that the consume keyword should still be used, as we might not need to consume our original variable when working with refcaps that alias as themselves -- in which case the syntax is not necessary at all outside of generics.
I'll turn this into a PR if nobody mentions this being unsound, or proposes a better syntax.
While fiddling with Pony 0.24.4, I've discovered that a suspended/hung/machine crashed TCP peer can interfere with the Pony runtime's shutdown. "Interfere" means blocking the runtime's timely shutdown due to network sockets remaining partially open and thus causing the ASIO subsystem to continue being noisy.
General steps to reproduce:
1. Suspend the TCP peer process with SIGSTOP.
2. Shut the Pony program down: dispose() of the socket (and any others), stop all remaining timers from firing, etc.
3. The peer is never resumed (SIGCONT), is killed, or the peer's host crashes.
A demo program is at https://gist.github.com/slfritchie/558f44bcef5a29ad4ae9eaf208723bbc. Use as follows:
nc -l 8888
Ticker, dispose socket
The hang-bug program will exit 5 seconds after starting if the netcat process's execution is not interfered with.
AFAICT, this delay is a feature of the runtime. TCP sockets are implemented by actors, and reads, writes and dispose() requests on sockets involve async messaging like any other Pony actor. In keeping with the synchronous socket behavior of a quick sequence of several writes followed by a close (as something written in C for a POSIX OS would do), if the TCP socket isn't closed prematurely, we expect all written bytes to be sent prior to the close. Any bytes not written due to flow control would be signalled by the return value of the write/writev/send/etc. system calls.
Pony's async messaging doesn't give the sending actor direct feedback of the system call return status; the TCPConnection actor is responsible for buffering not-yet-sent data and managing yet-to-be-read bytes from the socket.
- While data remains pending in the TCPConnection actor's _pending_writev array, the socket will not be closed, and the ASIO subsystem will remain noisy.
- TCPConnection needs to observe a read of 0 bytes from the socket to trigger its final closing logic. If the remote peer is suspended/hung/crashed, that event is delayed for an unknown period of time.
- TCPConnection will use the hard close path if dispose() is called and the actor is in muted state. However, if the actor is in throttled state, the hard close path is not taken. I think there's a good argument to make that a hard close is appropriate when in throttled state as well.
Possible remedies might include:
a. Adding a hard_close() behavior to give a "close the socket NOW" option to socket users.
b. Adding an optional, per-socket configurable timer that starts when dispose() is called. If the timer fires and the socket isn't yet fully closed, then the socket goes down the hard close path.
When a lambda is assigned to a reference with a definite type, or passed as an argument, we want to infer the capability of the resulting object literal from the left-hand side, instead of inferring it from the content/structure of the lambda.
See ponylang/ponyc#415 for the discussion that led to this conclusion.
They don't work as we want. They are a sneaky source of bugs etc:
See: ponylang/ponyc#1893
This is a concept, not a final draft; it will almost certainly break everything if you try to implement it. In fact, don't try at all: it probably won't even work.
A Module is a version of a Package that 'plugs in' to its parent code. The parent does not know types from the child (but the child could be depended upon by other modules in the code, which CAN access another module's types).
A Module only has to provide one actor, its own Main, which provides create().
The program that is the module's parent is given a list of some sort that contains all the Module actors. These actors will, in your average use case, probably implement multiple different traits and interfaces.
Modules can pass around their own internal structs, even if the main program doesn't actually recognize them, because they are found and noted at compile time, where this can be done easily and efficiently, and the whole thing can be treated as one large program.
If you call a primitive method without parentheses, the call is automatically wrapped as a lambda:
primitive Str
  // Tweak whitespace
  fun trim(s: String): String =>
    ...

actor Main
  new create(env: Env) =>
    Seqs.map([" a "; " b "; " c "], Str.trim) // Result: ["a"; "b"; "c"]
If you try to register signal handlers multiple times for the same signal, the last one wins. This can lead to undefined behaviour if libraries use signal handlers.
Moving this from the ponyc repo issues.
We need an RFC that proposes a way to embed string literals inside of string literals. This would allow docstrings to have example code that is full and complete, in that the example code could itself contain docstrings, as "real" Pony code would.
You can see an example of the problem as it currently exist here: https://github.com/ponylang/ponyc/pull/615/files
Any solution should maintain LL(1) parsing if at all possible. "Less pretty" solutions that maintain LL(1) are strongly preferred.
On today's sync call, @sylvanc started exploring the idea of a system that would allow code to alias an iso into a modified capability that could not be consumed or sent to another actor.
The purpose of this ticket is to explore this idea a bit more, figure out what it would look like, how it could be used, and if it is sound.
The baseline logging in Pony is incredibly simple, perhaps too much so. In order to support RFCs like RFC 67 (#175), we may need to expand the interface to be more flexible in passing information beyond the simple string computations of the current logger design.
Pony's standard library is not meant to be "batteries-included", implementing all features. Nevertheless, a standard interface for logging would allow more code to interoperate in the future, and ensure critically important consistency in output. There are many existing, mature logging facades in other languages which we may use for reference, such as SLF4J, Python's logging module, and competing standards such as Flogger.
We'd welcome an RFC based on the discussion that was started here: ponylang/ponyc#317.
As discussed on the sync call, we want to audit the Seq interface, as it currently contains a lot of methods that make extra assumptions about what kind of sequence is being dealt with: whether it has a fixed size, whether the underlying pointer can be resized, etc.
In the upcoming value-dependent type changes, we will have a Vector type with a fixed size that won't be compatible with Seq as it currently stands.
I would suggest reducing the number of methods in Seq and possibly adding other interfaces to cover Seqs whose size isn't fixed and that can reserve more space in the pointed-to buffer.
There was already an audit performed in ponylang/ponyc#1131, so this just needs an RFC.
The correct way in Pony to iterate over items seems to be through the for control structure, which uses the Iterator interface. However, if I want to iterate over a subset, I have to use Range from collections and use that to index into the array.
Changing .keys(), .values() and .pairs() to take (from: USize, to: USize) would make this a lot easier. The change seems quite small, but where I'm unsure is how to handle from > .size() and to < from. Some might say that these should cause an error (making these functions partial) and some might argue that they should produce an empty set / iterator.
I propose that the respective Iterator classes should change their default constructor to:
new create(array: B, from: USize = 0, to: USize = USize.max_value()) =>
  _array = array
  _i = from
  _end = to.min(array.size())
Another issue is arrays that somehow become shorter than the iterator while it is in use, since the signatures of the array iterators are e.g. ArrayKeys[A, B: Array[A] #read], where only #read is required.
We'd welcome an RFC based on the discussion that was started here: ponylang/ponyc#701
This should probably be written up as a change proposal but I've opened up this issue first to get feedback.
There is a simple gotcha with Pony use strings. If a file wants to use some library "xyz" and that file is next to a directory xyz, then the string will always resolve to that location, even if there is no Pony code within that directory. This might seem like an intended trade-off, but it can make certain cases in multi-language projects a bit more difficult when directory names and layout collide.
As I see it, there are two reasonable fixes that come to mind. We could only resolve to a package directory if there is at least one Pony file in that path, or we could make local resolution explicitly relative in all cases. The latter seems a little clearer, like C does with #include "xyz.h" vs #include <xyz.h>.
The impact is minor but widespread, since there is a lot of code which would now require a ./ prefix on the path. Pony could warn when something should be prefixed as a local path before making this kind of change.
Following the discussion in this issue, there was agreement that the String class has some pain points that I myself ran into.
One issue is that the String class is not encoding-aware and exposes the underlying byte sequence. So a user could start with a valid string in a known encoding (UTF-8, for instance) and end up with an invalid one.
As a user, I would like to have a String class that does not expose the inner byte array. As such it should have the following properties:
Here is an example that illustrates a few of the problems:
use "format"

actor Main
  new create(env: Env) =>
    let sentence = " this is a pony -> ๐"
    try
      let index_of_pony = sentence.find("๐")?
      (let char_u32: U32, let nb_bytes_of_char: U8) = sentence.utf32(index_of_pony)?
      env.out.print(recover val String.from_utf32(char_u32) end) // correct
      env.out.print(sentence.substring(index_of_pony, index_of_pony + 1)) // incorrect
      env.out.print(sentence.substring(index_of_pony, index_of_pony + 4)) // correct
    end

    var sentence2: String ref = "hi".clone() // so far valid UTF-8
    let invalid_code_point = U32(0x11FFF) // 0x11FFF is not a valid codepoint; see http://unicode.org/faq/utf_bom.html#gen6
    sentence2.push_utf32(invalid_code_point)
    env.out.print(recover val sentence2.clone() end)
    try
      (let char_u32: U32, let nb_bytes: U8) = sentence2.utf32(2)?
      env.out.print(
        "char as codepoint is " + Format.int[U32](char_u32, FormatHex) +
        " takes " + nb_bytes.string() + " bytes")
    end
As you can see in the first try block, finding a character in a string is actually error-prone. Naively calling the substring function can return an invalid UTF-8 string even if the original one is valid. And the input string is only valid UTF-8 if the code sample is itself encoded in UTF-8. In the second try block, we see that we can insert into a valid UTF-8 string a codepoint that is not defined by the Unicode standard (or so it seems).
The power operator is used to raise a number to a given power.
Pony, however, lacks this rather important operator.
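For context (my note, not the original author's): the floating-point types do provide a pow method today, so this request is largely about having operator syntax for it; a rough sketch:

```pony
actor Main
  new create(env: Env) =>
    // Today: a method call on floating-point types
    env.out.print(F64(2).pow(3).string())
    // A power operator might instead read something like `2 ** 3`
    // (hypothetical syntax, not valid Pony today).
```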
Say you have two actors, Bar and Foo. Bar keeps a list of Foo actors.
Foo has one function, tick(b: Bar tag), and Bar has one function, tickallfoo().
Bar needs to pass a tag of itself to all the Foo actors, but it can't do so.
Simple concept, potentially big impact (I don't know the internals)
If this actually already exists, then feel free to prod me about it, because I overlooked it pretty hard. (I spent an hour looking)
This request to create an RFC comes from issue 440 on the ponyc repo.
Currently some APIs are unusable via FFI because they require a pointer to a function to use as a callback. Full details are in the original issue and should be addressed in the RFC.
Lately a lot of my time has been spent on the Go programming language. Go has lots and lots of things that are very nice; one is the context package, which is based on this interface:
type Context interface {
    Deadline() (deadline time.Time, ok bool)
    Done() <-chan struct{}
    Err() error
    Value(key interface{}) interface{}
}
This can be used for anything; in web programming, every request has a context embedded, and it can be used to cancel the whole request chain on timeout or if the client disconnects. It can also be used to share values along the request chain.
I'm new to Pony and don't know how this could be included in the language, or whether it should be. Being in the stdlib encourages others to use the same pattern and produces a unified way to control concurrency.
More about the context package on Go: https://blog.golang.org/context
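As a purely hypothetical sketch (every name here is invented; nothing like this exists in Pony's stdlib), a Pony analogue might be expressed as an interface that actors share by tag:

```pony
primitive Timeout
primitive ClientGone
type CancelReason is (Timeout | ClientGone)

// Hypothetical cancellation context; a tag capability
// lets any actor hold a reference to it and signal it.
interface tag Context
  be cancel(reason: CancelReason)
  be subscribe(cb: {(CancelReason)} val)
```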
The pipe operator |> passes the result of an expression as the first parameter of another expression.
The pipe operator |>> passes the result of an expression as the last parameter of another expression.
primitive Str
  fun trim(s: String): String =>
    ...
  fun upper(s: String): String =>
    ...
  fun print(s: String, env: Env) =>
    env.out.print(s)

primitive Trace
  fun print(env: Env, s: String) =>
    env.out.print(s)

actor Main
  new create(env: Env) =>
    let s = " I Love Pony "
    s |> Str.trim |> Str.upper |> Str.print(env) // "I LOVE PONY"
    s |>> Str.trim |>> Str.upper |>> Trace.print(env) // "I LOVE PONY"
This was originally opened as issue 139 on the ponyc repo:
There is a detailed list of requirements and some discussion.
The idea of field type inference was raised in ponyc issue 418.
If anyone is interested in this feature and wants to write up an RFC, we would welcome it.
However, there are some serious concerns raised in the original issue that would need to be addressed.
I'm opening this issue as more of a question about which option would be best to go forward with. I see two options for improving the syntax for chaining together operations that are not methods on classes.
To show the differences between the two options here is a simple class that we want to work with:
// package a
class Foo[A]
  fun foo[B](): Foo[B] => ...
  fun bar() => ...
Option 1 would be to add something similar to the pipeline operator:
// package b
primitive Baz[A]
  fun baz(foo: Foo[A]) => ...

actor Main
  new create(env: Env) =>
    Foo[A]
      .foo[B]()
      .> bar()
      |> Baz[B].baz()
Option 2 is to allow extending a class, interface, or trait from outside of the package in which it is defined. These extensions would not have access to the private members of the type that they extend:
// package b
extension Foo[A]
  fun baz() => ...

actor Main
  new create(env: Env) =>
    Foo[A]
      .foo[B]()
      .>bar()
      .>baz()
The first option seems better to me, unless we could extend an algebraic data type:
type Option[A] is (A | None)

extension Option[A]
  fun map[B](f: {(A): B^}): Option[B] => ...
This is the new home of the ponyc issue 87.
The reflection API should be mirror-based and capabilities-secure.
If you'd like to take on writing this RFC, we'd welcome working with you.
Hey,
first of all: the command line parser in the cli package is great! But, I do have a suggestion for minor changes:
Sylvan shared an example of an alternative POC approach that he came up with that would allow env.root to be something besides just AmbientAuth | None: https://playground.ponylang.io/?gist=f0dca3abd61d28d2c9ad806393d3c025
We discussed on a recent sync call the idea of being able to load struct definitions from a C header, so that Pony code could potentially depend on platform dependent struct definitions.
This came up in discussion of ponylang/ponyc#1513, in which an openssl dependency was added to the pony runtime in order to put ponyint functions that use the SSL_CTX type there. We discussed that it would be better if we could avoid that by writing those accessors in Pony. But to do that, we'd need Pony to be able to load the struct defs from a header when compiling.
This would probably require a libclang dependency for ponyc to be able to read C header files.
This idea needs more discussion to flesh out the details and any feasibility issues.
This is a repost of the issue mentioned in #139.
Doing not match ... end and not if ... end results in an error:
syntax error: expected expression after not
As the documentation states, if ... end and match ... end are expressions, so this edge case should not return this particular error.
This issue is only minor, as putting parentheses around the match or the if expression fixes it.
I would like to open a discussion about the ability to create an alias for a set of constraints in Pony, for instance:
type Readable[A] = (Equatable[A] #read & Any #send)
Currently, not being able to create aliases for type constraints leads to duplication of potentially complex type expressions across the traits/classes/actors interacting with that type:
class Reader[A: (Equatable[A] #read & Any #send)]
actor Writer[A: (Equatable[A] #read & Any #send)]
trait ParsingCombinator[A: (Equatable[A] #read & Any #send)]
actor Word[A: (Equatable[A] #read & Any #send)] is ParsingCombinator[A]
...
This duplication of type expressions is potentially problematic, for instance because equivalent constraints can be written in different, hard-to-compare forms (Any #send & Equatable[A] #read vs Equatable[A] #read & Any #send).
The benefits of an alias would be:
Current issues with collections.Range
that hurt its usability:
negative Ranges (where min > max
) is only expressable for signed integers:
// this is not possible, as the inc parameter is also U8
Range[U8](10, 0, -1)
The default inc
should only be 1
if min <= max
, otherwise it should be -1
to ensure that we iterate into the expected direction:
// this should infer a default step of -1
Range[U8](10, 0)
// this should infer a default step of 1
Range[U64](0, 10)
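The direction-inferring default step asked for above can be sketched in Python (a hypothetical `make_range` helper for illustration only; Pony's `collections.Range` behaves differently today):

```python
def make_range(start, stop, step=None):
    # Infer the step direction when none is given: count down when
    # start > stop, otherwise count up -- the behaviour the proposal
    # asks for instead of a fixed default of 1.
    if step is None:
        step = 1 if start <= stop else -1
    values = []
    i = start
    while (step > 0 and i < stop) or (step < 0 and i > stop):
        values.append(i)
        i += step
    return values
```

With this, `make_range(10, 0)` counts down from 10 to 1 and `make_range(0, 10)` counts up from 0 to 9, with no explicit step at either call site.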
This issue tries to spark discussion around 1. the need for asynchronous file IO, 2. possible implementations thereof, and 3. the look and feel of such an asynchronous file API for Pony. The new asynchronous file IO could be added alongside the existing blocking file IO APIs.
Current file operations in Pony use standard POSIX file operations like write/writev, read, etc., all of which can block. This means that performing such an operation on a file blocks one scheduler thread for the duration of the operation, which can be a serious performance problem. This is the reason I am bringing this up.
This is the actually tricky part. As far as I know, ASIO, which is used for all other networking, pipe, and stdstream IO, will not work on regular files. Windows has some kind of asynchronous file IO which I know nothing about; if anyone could shed some light on this, that would be great. POSIX offers the aio_* APIs, which basically offload file IO to a separate threadpool in userland. This API, I think, is a good candidate due to cross-platform compatibility. Another option would be libuv, which is completely cross-platform and offers async name resolution as well. It does file IO in a manner conceptually similar to the aio API, in that it uses blocking file APIs but executes them on a separate threadpool. It seems a bit overkill for the problem at hand, and possibly it would make more sense to move all IO operations to libuv completely instead of adding it alongside ASIO.
The Reader class of the buffered package contains functions that are useful for reading common basic types (such as F32, F64, U32, etc.) from a byte array. Unfortunately, those functions are not stateless and cannot be used in some wider contexts (my use case was an `Array[U8 val] ref`). It would be great to add stateless functions that Reader can be built on and that Pony users can reuse too. They would have a signature similar to this one:
primitive BigEndian
primitive LittleEndian
type EndianMode is (BigEndian | LittleEndian)
fun box peek_u32(array: Array[U8] box, endianMode: EndianMode, offset: USize) : U32 ?
I suggest it would also be a great addition to add endianness-detecting functions that automatically pick one of the two methods depending on the architecture they are running on, to avoid code like this:
let x =
  if is_running_le() then
    peek_u32(my_array, LittleEndian, 0)?
  else
    peek_u32(my_array, BigEndian, 0)?
  end
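The proposed pair of functions can be sketched in Python with the `struct` module (hypothetical names mirroring the signature above; the real functions would live in the buffered package):

```python
import struct
import sys

def peek_u32(array, endian_mode, offset=0):
    # Stateless read of a 32-bit unsigned integer at `offset`,
    # with the byte order chosen explicitly by the caller.
    fmt = "<I" if endian_mode == "little" else ">I"
    return struct.unpack_from(fmt, bytes(array), offset)[0]

def peek_u32_native(array, offset=0):
    # The proposed convenience: pick the host byte order
    # automatically instead of branching at every call site.
    return peek_u32(array, sys.byteorder, offset)
```

Callers that care about a wire format keep the explicit variant; callers reading host-native data use the automatic one.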
Essentially, add a compile-time way of modularly adding code to the program.
Say we have the following program:
main.pony
src/
foo-module
bar-module
importthemodules.pony
dothings.pony
'importthemodules.pony' serves one purpose in this example: to `use` all the modules and run their init function on a class we'll call 'baz'.
It could, with my idea, potentially be structured to look like this:
main.pony
src/
modules/
foo
bar
dothings.pony
What's happening here? Well, in main.pony, we are simply `use`ing all the modules at once and setting each one up in a modular fashion, without even explicitly mentioning the modules (just giving each its own folder).
This is just a simple concept, completely unpolished; please consider it. But remember, absolutely none of this is a final idea. It's an unpolished, probably dangerous concept.
Currently there is no way in Pony to map directly to C unions. Since unions are very similar to struct types, we could just add an annotation, for example:
struct \union\ MyUnion
i: I32
f: I64
LLVM does not have union types, so allocation could be done using the size of the biggest union field. Accessing fields could then be done by casting to the appropriate type.
Here is an example of the LLVM IR generated for union types. This C code:
union {
struct {
float f1;
double d1;
};
int i;
long long l;
} u1;
union {
struct {
int i;
void* ptr;
unsigned long long ull;
};
char c1;
} u2;
int
main(void)
{
u1.i = 0;
char c = u2.c1;
return 0;
}
results in the following generated IR:
%union.anon = type { %struct.anon }
%struct.anon = type { float, double }
%union.anon.0 = type { %struct.anon.1 }
%struct.anon.1 = type { i32, i8*, i64 }
@u1 = common global %union.anon zeroinitializer, align 8
@u2 = common global %union.anon.0 zeroinitializer, align 8
; Function Attrs: nounwind uwtable
define i32 @main() #0 {
%1 = alloca i32, align 4
%c = alloca i8, align 1
store i32 0, i32* %1, align 4
store i32 0, i32* bitcast (%union.anon* @u1 to i32*), align 8
%2 = load i8, i8* bitcast (%union.anon.0* @u2 to i8*), align 8
store i8 %2, i8* %c, align 1
ret i32 0
}
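The allocate-the-biggest-member, reinterpret-on-access scheme can be illustrated with Python's `struct` module (an illustration of the idea only, not of ponyc's actual codegen; field names and formats are made up for the example):

```python
import struct

# Union members: field name -> struct format character
# ("i" = 32-bit int, "q" = 64-bit int, "f" = 32-bit float).
fields = {"i": "i", "l": "q", "f": "f"}

# Allocation: one buffer sized to the biggest member,
# as proposed for the LLVM lowering above.
size = max(struct.calcsize(fmt) for fmt in fields.values())
storage = bytearray(size)

def union_set(name, value):
    # All members share offset 0; writing one overwrites the others.
    struct.pack_into(fields[name], storage, 0, value)

def union_get(name):
    # Access = reinterpret the shared bytes as the member's type,
    # the moral equivalent of the bitcasts in the IR above.
    return struct.unpack_from(fields[name], storage, 0)[0]
```

Here `size` comes out to 8 (the 64-bit member), matching how the IR allocates the union as its largest-layout struct.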
Extend the standard library with BigDecimal and BigInteger. I have started working on this, and I would like to submit it as a pull request soon.
This request for an RFC comes from the now-closed ponyc issue 489.
Currently the ponyc codebase uses a number of hardcoded paths containing `/`, which break on Windows, where `\` is needed. We need a way to seamlessly support platform path differences.
This RFC doesn't have to address all platform differences, just the path differences; however, keeping an eye on how it would interact with other "platform normalizations" is important and should be addressed in the RFC. There are subtle issues that would affect any solution, so trying out solutions before creating the RFC is highly advised.
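As a sketch of the kind of normalization needed (in Python's `pathlib`, purely for illustration; ponyc itself is written in C and would need its own helper):

```python
from pathlib import PurePosixPath, PureWindowsPath

# One logical path, built from components instead of a hardcoded
# string, rendered with each platform's separator.
parts = ["packages", "builtin", "string.pony"]

posix_path = PurePosixPath(*parts)      # packages/builtin/string.pony
windows_path = PureWindowsPath(*parts)  # packages\builtin\string.pony
```

The point is that path components, not separator-laden strings, are the portable representation; the separator is a rendering detail chosen per platform.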
We need someone to come up with a less confusing name for the MaybePointer
type, and champion that in an RFC.
See ponylang/ponyc#1248 for more background.
We're looking for someone to make a proposal for a type language for changing generated code based on type parameter constraints, based on the discussion here: ponylang/ponyc#683
This would make conditional compilation possible, as we currently have for the platform (windows, darwin, linux, ...), but for different Pony versions, between which there might be breaking changes in the stdlib or the compiler itself.
This would make it possible to maintain compatibility between different pony versions within one codebase.
With the rise of Rust, it has become more obvious now than ever before that memory-safety can be guaranteed without runtime overhead.
Pony has most of what it needs (if not everything) to implement this and completely remove GC: strong lexical scoping information (in fact, Pony's capability system likely makes this easier than it is in Rust).
This would not be a small change, but I would love to see it be considered.