
problem-solving's Introduction

🦋 Problem Solving

The Raku/problem-solving repository is used for working on all issues that require discussion and/or consensus. This document describes the process in more detail.

The Problem-Solving Process

Step 1: Reporting a problem

Anyone is welcome to report a problem by creating a new issue. The issue should only contain the description of the actual problem (the X of the XY problem). The issue you create should start with a short description of the problem, followed by any additional details, if needed.

This repository has broad scope, but not all problems belong here – only problems where consensus needs to exist but doesn't yet. For example, if everyone already agrees on the proper approach to solving a problem, then that issue isn't appropriate for this repo (even if a few technical questions about the implementation still need to be worked out). As a more concrete example, bugs in the Rakudo compiler should be reported in rakudo repository.

If someone opens an issue that doesn't belong in Problem Solving, then anyone with sufficient GitHub permissions to do so should close the issue and either direct the user to the correct place to address this issue (e.g., the Rakudo repo) or open an issue themselves.

Step 2: Initial proposed solution

Anyone (including the person who opened the issue) can submit an initial proposal as a comment in reply to the issue.

To do so, start a comment with “Initial proposal:” and provide a short and clear description of the solution you're suggesting. Include just enough details to paint the general picture and refrain from writing too much (which will be required for the next step, but not now).

Good solutions may resolve more than one problem: if so, link all other related problems that will be affected by your solution.

By proposing a solution, you are typically volunteering to implement that solution. If you know that you won't be able to work on a full solution, please say so when proposing that solution. Having a hero who will carry a solution through to implementation is often essential, but it is also important to hear out good heroless ideas.

Step 3: Discussion of the proposal

After the initial proposal, everyone is encouraged to discuss the idea and point out any flaws, implementation details, suggested changes, etc. that they might see. (Of course, both before and during step 3, people can discuss the problem generally rather than a particular solution.)

In step 3, anyone can comment, but two people have an especially important role to play: the person who wrote the original proposal and the assignee for the issue.

Assignees

Issues are assigned to corresponding devs (see the list). The assignee's job is to help drive the process, provide vision, and assist everyone involved. Assignees can provide feedback, ask for clarifications, suggest changes and so on as they see fit.

Assignees can provide feedback on an initial solution, provide guidance about what the full solution should include (e.g. what should be covered in the document, additional requirements, etc). They can also reject initial proposals at step 3 (though this shouldn't happen often).

If an assignee is pleased with the initial proposal, they can ask the proposer to submit a full-blown solution. Alternatively, if the person who suggested the solution feels they have received enough feedback, they can decide to submit a full solution. Either way, submitting a solution moves us to the next step.

Step 4: Submitting a full solution

After a problem/solution has been discussed sufficiently, someone should submit a Pull Request with a detailed proposal that would solve the problem.
This PR should add a document to the "solutions" directory in this repo. Typically, the solution will come after a user has officially proposed an initial solution (as described in Step 2, above) and, typically, the PR would be written by the same user who wrote that initial solution.

However, neither part of the above is absolutely required. For example, if someone writes an initial solution but doesn't have time (or no longer has time) to work on implementing the solution, it's fine for someone who does have time to submit the PR. Similarly, if an initial solution has been discussed without anyone formally writing an "initial solution" comment, someone who was involved in that discussion can step forward and draft a PR. (This can sometimes happen when one user has a partial solution and other users add to it without anyone drafting a formal initial solution.) That said, it's still better to have a written initial solution and for the PR to be written by the same person.

The PR should act as documentation for the solution and should provide all details that are required to implement the solution. Keep your document consistent with other files in the solutions directory (naming, directory structure, markup and so on).

At this point, anyone can provide feedback on the full solution and, if desired, the PR author can revise the PR based on that feedback.

Step 5: Solution resolution

Once someone has submitted a PR, it can be resolved in one of 4 ways:

  1. Consensus acceptance
  2. Speedy acceptance
  3. Acceptance without consensus
  4. Non-acceptance

Consensus acceptance

The most common way for a proposal to be accepted is for the discussion to proceed to a point where a consensus exists in favor of the solution. A consensus does not require unanimity, but it should be clear that the Raku community as a whole supports the solution. In particular, if 14 days have passed since the PR (or its latest revision, if it was edited) was proposed and none of the reviewers (listed below) have objected, then the proposer of the PR can use their judgement to determine whether consensus exists. However, if any of the reviewers objects to the PR's solution, or 14 days have not yet passed, then consensus does not support that solution.

Speedy acceptance

If all reviewers approve a solution, then it can be accepted even if 14 days have not passed.

Acceptance without consensus

In very rare cases, a PR's solution can be accepted even when the Raku community cannot reach consensus. When the Problem Solving process was first adopted, the way Raku solved problems when consensus couldn't be reached was by an action by the Benevolent Dictator for Life (Larry Wall). After the Raku Steering Council Code was adopted, the process for resolving deadlock shifted to the RSC as described in that document. In either case, this power should be exercised only as a last resort and in extremely exceptional cases.

Non-acceptance

If a solution is not accepted by any of the methods described above, then it is not accepted. Arguably, this isn't a resolution at all – the problem remains unsolved and the issue stays open. But that particular solution has failed to gain acceptance (though, of course, a different solution or even a revised version of the same solution could later be accepted).

Edge cases and other notes

  • If any of the merged solutions needs an adjustment, the process should start from the beginning. That is, an issue should be filed stating the problem with the current solution, and the process continues as normal. PRs are allowed to change, modify and shadow existing solutions.
  • Assignees are allowed to call for a “shortcut” to any problem, in which case the solution is applied directly without going through the whole process.
  • Non-functional changes to existing solutions (typos, grammar, formatting, etc.) automatically go through a shortcut; just submit a PR right away.
  • If a shortcut receives any criticism from the corresponding development team or other affected parties, it can be reverted and the full formal process should begin.
  • Passing the assignee status is allowed provided that the receiving party agrees. One-time assignees are allowed through this process.
  • People are allowed to be assigned to their own PRs.

Labels and responsible devs

File a meta issue if you want to create a new label or if you want to be added as a responsible dev.

  • meta – changes to the problem-solving repo and this document
  • language – changes to the Raku language
  • rakudo – big changes to Rakudo
  • moarvm – big changes to MoarVM
  • documentation – big changes to Raku documentation and other learning resources
  • unicode – Unicode and encoding/decoding
  • infrastructure – servers, hosting, cloud, monitoring, backup and automation
  • fallback – if no other label fits

Reviewers

File a meta issue if you want to be added to this list.

problem-solving's People

Contributors

alexdaniel, altai-man, codesections, coke, fco, jj, jnthn, lizmat, moritz, patzim, rba, ugexe, vrurg


problem-solving's Issues

%0 and &0 should probably be syntax errors

This started off with:

$ perl6 -e 'my $width = 4; printf("%0{$width}d", 42)'
Use of uninitialized value element of type Any in string context.
Methods .^name, .perl, .gist, or .say can be used to stringify it to something meaningful.
  in block <unit> at -e line 1
Your printf-style directives specify 0 arguments, but 1 argument was supplied
  in block <unit> at -e line 1

which, in my view, is a completely legit way of specifying a variable width printf format.

Turns out, this does not do what you mean. Apparently, this is short for %($/[0]).

$ perl6 -e '"abcd" ~~ m/(\w)+/; say %0'
{a => 「b」, c => 「d」}

which looks nice, but with an odd number of captures, becomes:

$ perl6 -e '"abc" ~~ m/(\w)+/; say %0'
Odd number of elements found where hash initializer expected:
Found 3 (implicit) elements:
Last element seen: Match.new(list => (), from => 2, hash => Map.new(()), made => Any, pos => 3, orig => "abc")
  in block <unit> at -e line 1

It even gets weirder with &0:

$ perl6 -e '"abc" ~~ m/(\w)+/; say &0'
[「a」 「b」 「c」]

which, by the way, is also what @0 and $0 give.

I propose that %N and &N (where N is in 0..Inf) do not interpolate, and become syntax errors outside of interpolation.

Editor modes for Raku

I'm afraid the state of editor modes for Perl 6 is not very good.

  • Support for VSCode was updated in February last year. 3 issues, also years old.
  • Support for Atom is here, but it has not changed for the last two years. It's failing lately on my Atom installation. There are 45 issues on the repo. And that's on us, since it's in the Perl 6 org.
  • Support for vim is one year old. I don't know how it works, and for the time being it does not have a license (they voted on it last year).
  • Support for emacs is, well, 3 years old today. Most stuff does not work; the newest issue is 2 years old.
  • Comma is OK, in general.

So the situation is that we have a mode (Atom) that is community (not) maintained, and 3 others that are author-maintained. Most of them have varying degrees of obsolescence and bitrot. Shouldn't we do something about this?

require with non-sigiled arguments

Followup to rakudo/rakudo#2983.

Currently require with arguments is supposed to import specified symbols into its lexical scope. The thing which is not specified in the documentation is that the symbols are treated as variables. What happens then is that if a role is requested for import:

require ("module.pm") <a_role>;

rakudo installs a same-name variable a_role. I wonder if it's a legit behavior or not? On one hand, we could allow importing only sigiled entities. But this would effectively cut off constants. Though they're not correctly imported now anyway.

Another way is to allow sigilless symbols to be required. But this needs installing a symbol stub which could be resolved at run-time by REQUIRE_IMPORT. So far, I haven't found a way this could be done.

`infrastructure` label and a corresponding subject-matter expert

Looking at #9, it is clear that we need a new infrastructure label in this repository. However, it is unclear who would act as a subject-matter expert for that label.

If you want to volunteer, please leave a relatively short comment describing your expertise, the amount of effort/time you are ready to provide for resolving issues, and your initial plan as to what has to be changed.

For more info, see some docs here: https://github.com/perl6/infrastructure-doc/, as well as a discussion on #9.

Language revision dependent spectests: change the approach.

Preamble

Correct me if I'm wrong, but from docs and from the code it looks like currently each language revision needs its own individual spectest.data.6.<rev> which would largely duplicate the default spectest.data file. Besides, what is tested is dependent on the content of the VERSION file which belongs to roast and somewhat bound to Rakudo versioning. The latter is my personal opinion based on versioning doc.

Proposal

I would like to base my proposal on the following assumptions:

  1. Rakudo is possibly not the only one implementation.
  2. Other implementations might not catch up with new revision releases
  3. Information about what revision a test belongs to must be a part of roast because it defines the language.

What I propose:

  1. spectest.data* files are moving from Rakudo to roast.

  2. spectest.data defines all revision-independent tests, i.e. those which must pass on an implementation of any revision.

  3. spectest.data.6.<rev> must contain only tests which won't pass on lower revisions. I.e. spectest.data.6.d tests won't pass with use v6.c;.

  4. Revision-specific tests are cumulative. For example, 6.e must also run tests from 6.d and 6.c. To ensure correct testing each revision-specific test must have the version pragma.

    This statement is based on the assumption that old revisions are never dropped and any future v6.t release will handle the use v6.c; pragma as expected.

  5. In exceptional cases, a test from an earlier release must never be run on a compiler implementing a newer release. To allow such an exception, the test could be marked as invalid in a revision-specific spectest.data.

  6. What tests are to be run is up to the particular compiler's build subsystem. Rakudo would determine the set of tests based on the tools/templates/PERL6_SPECS file where supported revisions are listed.

With the proposed scheme, the VERSION file from roast becomes obsolete. I consider it redundant.
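As a minimal sketch of point 4 above, a revision-specific test (one that would be listed in, say, spectest.data.6.d) could declare its revision via the version pragma; the exact check shown here is illustrative only:

use v6.d;   # the version pragma from point 4: this test belongs to 6.d and above
use Test;
plan 1;
ok $*PERL.version >= v6.d, 'running on a compiler implementing 6.d or later';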

New named parameters to .classify

I'd like to suggest adding:

  • :&reduce
  • :&produce
  • :$initial-value

so,

^10.classify: * %% 2, :reduce(&[+])

would return

{True => 20, False => 25}

while

^10.classify: * %% 2, :produce(&[+])

would return

({}, {True => 0}, {True => 0, False => 1}, {True => 2, False => 1}, {True => 2, False => 4}, {True => 6, False => 4}, {True => 6, False => 9}, {True => 12, False => 9}, {True => 12, False => 16}, {True => 20, False => 16}, {True => 20, False => 25}).Seq

and

^5.classify: * %% 2, :produce(-> %agg, $i { %agg{$i} = <a e i o u>[$i]; %agg }), :initial-value{}

would return

{True => {0 => "a", 2 => "I", 4 => "u"}, False => {1 => "e", 3 => "o"}}

It would also be interesting to add :$initial-value (under this or any other name) to .reduce and .produce.
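For comparison, a rough sketch of how the :reduce(&[+]) example can be approximated today (the phrasing is mine, not part of the proposal):

my %sums = (^10).classify(* %% 2).map({ .key => .value.sum });
say %sums;   # {False => 25, True => 20}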

Ecosystem: should we strive for integration with MetaCPAN

This came up in a discussion between @niner and me at the PTS: @niner thinks it would be a unique selling point for MetaCPAN and Perl in general (both 5 and 6) if Perl 6 modules could be found on MetaCPAN with Inline::Perl6 as a dependency when searching for Perl 5 modules, and Perl 5 modules could be listed with an Inline::Perl5 dependency when searching for Perl 6 modules.

Before taking this to the MetaCPAN team, I think the Perl 6 leadership will have to make up its mind about it.

perl6-infra: rules and guidelines

We would like to decide transparently about upcoming infrastructure changes together.

I therefore propose making changes at the level of a service or a group of services. A service could be, for example, "hosting the perl6.org static website", and an example of a group of services could be "DNS hosting".

There will always be a proposed solution. If there is no better proposal in the comments, we will start implementing the proposed solution a week after opening the issue.

Here is how we would like to handle the Perl 6 infrastructure. Feel free to comment.

Rules and guidelines

  1. Automate everything
  2. Everything is a service
  3. Categorize the service and add additional attributes (monitored, backuped, static, dynamic, redundant, CDN)
    1. hack
    2. build
    3. run
  4. Use top level domains perl6.org, rakudo.org, moarvm.org
  5. Use subdomains to separate services
  6. Make sure every service has at least two admins and every core member has access
  7. All technical usernames and passwords are stored securely in either a password tool or at least in an encrypted document
  8. Where possible, add the admins to 3rd-party services and give them authorization. For services that support only a single user, create a technical user (e.g. perl6-infra).
  9. Use what's already there, and operate our own service where needed (DNS services instead of running bind ourselves; GitHub instead of gitolite on a server, etc.)
  10. Choose free or sponsored services wherever possible
  11. Keep infrastructure documentation updated

Semantics of coercion type on an "rw" parameter

Currently, if writing:

sub foo(Num() $n is rw) { $n++ }

Then we can call it successfully like this:

my $x = 1e0;
foo($x);
dd $x;    # Num $x = 2e0

However, if the coercion is applied, such as in this case:

my $x = 1;
foo($x);

Then it binds the result of the coercion to $n in foo, resulting in an error since ++ is being done on an immutable value. This is almost certainly the result of not having considered how this interaction should work.
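Until the intended semantics are decided, one workaround sketch under the current behaviour is to coerce explicitly at the call site, so the parameter is bound to a container that already holds a Num:

sub foo(Num() $n is rw) { $n++ }
my $x = 1;
my Num $y = $x.Num;   # explicit coercion into a mutable Num container
foo($y);
dd $y;                # Num $y = 2e0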

take single letter CLI options with one leading hyphen, such as `-lwc`

When wanting to implement clones of common Unix utilities, I came across the fact that if you want to take single-character options, they need to be run as

> ./wc.p6 -l -w -c

but a common way of doing that in unix tools is

> wc -lwc

I think this would fit well as another option for %*SUB-MAIN-OPTS.
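For context, a minimal sketch of the script side; the bundled form in the last comment is the hypothetical part:

sub MAIN(Bool :$l, Bool :$w, Bool :$c) {
    say "lines=$l words=$w chars=$c";
}
# ./wc.p6 -l -w -c    # works today
# ./wc.p6 -lwc        # the proposed bundling, e.g. via a new %*SUB-MAIN-OPTS entry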

Inconsistent handling of too-large array index between match variables and regular arrays

say $/[9999999999999999999999999999999999999999999999] gives «Nil␤»
my @a = 1,2; say @a[9999999999999999999999999999999999999999999999] gives «(exit code 1) Cannot unbox 153 bit wide bigint into native integer␤ in block <unit> at /tmp/ldjuzMCrIl line 1␤␤»

I have two versions of a patch for the Match variable case: one throws X::Syntax::Variable::Match (could be a different type though), the other dies with ===SORRY!=== Cannot unbox 203 bit wide bigint into native integer.

Thoughts on what's best?

Some useful math/statistics functions are missing

Some examples of things that are missing:

  • clamp or clip https://stackoverflow.com/questions/55250700/is-there-a-clamp-method-sub-for-ranges-num-etc-in-perl6
  • “One final observation about Perl 6 and math: although Perl 6 has all the usual functions from math.h, it could certainly use a few more.” https://www.evanmiller.org/statistical-shortcomings-in-standard-math-libraries.html
    • double incbet(double a, double b, double x); # Regularized incomplete beta function
    • double incbi(double a, double b, double y); # Inverse of incomplete beta integral
    • double igam(double a, double x); # Regularized incomplete gamma integral
    • double igamc(double a, double x); # Complemented incomplete gamma integral
    • double igami(double a, double p); # Inverse of complemented incomplete gamma integral
    • double ndtr(double x); # Normal distribution function
    • double ndtri(double y); # Inverse of Normal distribution function
    • double jv(double v, double x); # Bessel function of non-integer order
  • prod. It's easy to do it yourself but if we have sum then why not have prod too (for example, numpy has both)
  • mean
  • median
  • mode ?
  • peak-to-peak (range) – (numpy example)
  • standard-deviation
  • histogram
  • and so on…
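To make the scope concrete, here is a quick user-space sketch of two of these; the sub names are assumptions, not a proposed API:

sub mean(*@x)   { @x.sum / @x.elems }
sub median(*@x) {
    my @s = @x.sort;
    my $n = @s.elems;
    $n %% 2 ?? (@s[$n div 2 - 1] + @s[$n div 2]) / 2
            !! @s[$n div 2];
}
say mean(1, 2, 3, 4);   # 2.5
say median(1, 3, 2);    # 2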

There's a huge PR/issue deficit in the Rakudo repo

Only one issue is closed for every 6 open (roughly), and one PR for every two. There are too many PRs, and that will discourage people from sending new PRs. Someone with a stale PR from two years ago will probably never contribute again to the community.
So, can you think about some way to deal with this? Clear the backlog of PRs and try to get a bit more up to date with issues?

Branches are also a problem. There are more than two hundred. That's probably better left to another issue here.

So, some possible ideas to solve that:

  • Devote part of the time every release to address and clear old PRs. Maybe devote exclusively a release cycle (maybe the one in the summer?) to review and accept/comment/close old PRs.
  • Deal with new issues ASAP: at least label them, vet them (for adequacy, relatedness) and close them if they are not going to be addressed
  • Devote one squashathon to them?

Any new idea? What would be the best way to deal with this?

How should Proc::Async.new, run and shell call cmd.exe?

On Windows it is not possible to call cmd.exe with complex arguments. Calling .bat files is affected as well. Rakudo bug r#2005 also reports this.

Short problem description

On Windows the API to start programs (CreateProcess) does not take an array, but a single string. There is a convention nearly all programs adhere to of how to turn an array of arguments into a single string (a well defined quoting). This convention is what rakudo currently implements (via libuv, which does the actual quoting).
Not all programs adhere to this convention. Most prominently cmd.exe and as a result .bat scripts.
When trying to call a .bat file rakudo applies the usual quoting to the commandline arguments and as a result makes it impossible to pass some arguments.

Perl 6 functionality affected: Proc::Async, shell and run.

What could an API look like that, in addition to the current behaviour, also allows calling cmd.exe and any other program not adhering to the command-line quoting convention?
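One possible shape, purely as an illustration (the flag name is hypothetical, not an agreed API): a named argument that skips the default quoting and passes the command line through verbatim.

# Hypothetical: :win-verbatim-args would skip libuv's quoting entirely,
# leaving the caller responsible for building a cmd.exe-compatible string.
my $proc = Proc::Async.new: 'cmd.exe', '/c my.bat "arg one" arg2', :win-verbatim-args;
await $proc.start;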

perl6-infra: group of services: DNS hosting

DNS hosting

Category: run
Attributes/tags: monitored, redundant

Domains:

  1. perl6.org
  2. rakudo.org
  3. moarvm.org

Optional: p6c.org

Next steps

Registrar and Owners don’t need to be changed.

  1. New DNS hosting (separated from the Registrar)
  2. Transfer the zones or quickly add the entries manually to the new DNS hosting.
  3. Ask the domain owner to change to the new DNS servers at their registrar's website

Proposed solution

Use a combination of free DNS services to get servers in North America, Europe and Asia.
Master server to change the DNS entries: https://www.metanet.ch/email-domains/dns-hosting
Slaves:

Admins

If you have additional location preferences, please add them as a comment.

If someone knows of other free or sponsored services that could be recommended, please let me know.

Please beware of eliminating (or changing) documented features

This has probably happened several times, but I just found out about this one. Clearly the compiler has to evolve and change, but the people in charge of the documentation can't check each and every commit for changes or, as might be the case, the elimination of features. Our only way of knowing what has happened and needs to be documented is looking at the change log of releases, or maybe issues. I'm aware that the particular case I'm going to point to need not necessarily be in the documentation, but it does not address an issue, refer to one, or appear to be discussed anywhere other than the commit itself. In any case, it would be really good to at least check whether something is documented and raise an issue in the doc repo. Or do something, anything, really, about it.

Case in point: Compiler.build-date was eliminated here and is documented here. It does not seem to be in any issue. It really makes sense, but it implies that, somewhere down the line, someone might discover that that particular thing does not work and might be disappointed to see it in the documentation.

What we are doing now is parsing the release documents of every release: see Raku/doc#2632 and Raku/doc#2673. But out-of-schedule changes (I don't know whether this is such a case, so all this might be really premature) are easily missed, and we have found many cases of source being changed or going missing where we can't tell whether it happened within the normal schedule or was simply changed.

sprintf is a mess

When Perl 6 got designed, it was decided that things that needed to be broken, were broken.

I feel that sprintf somehow escaped that scrutiny.

The fact is that the current nqp implementation has a number of bugs and a number of inconsistencies.

One way forward would be to make sprintf completely consistent, without special cases for the value 0 or the "o" format. But this will break a number of spectests.

Another way forward would be to just fix the bugs in the nqp implementation, making it match as much as possible to what Perl 5 is doing.

A third way forward would be to do both, but expose the logic with another name.

A fourth option would be all of the above, but with a named parameter to indicate that you want either the new consistent behaviour, or the old inconsistent behaviour.

Or perhaps there are other options still.

Why am I asking this? Because I'm working on re-implementing sprintf in HLL Perl 6, which makes it faster on repeated calls with the same format.

Removed Syntactic Feature (-i flag for in-place file editing)

While digging into Perl 6, I try to focus on users - those who still use Perl 5 for some reason - and how to win them over.

I recently found Perl Is Still The Goddess For Text Manipulation and walked through it. Unfortunately, -i has been removed (see https://github.com/perl6/specs/blob/master/S19-commandline.pod), because:

  -i *extension*
   Modify files in-place. Haven't thought about it enough to add yet, but I'm certain it has a strong following. {{TODO review decision here}}

If Perl 6 is still a Perl, this is a mandatory feature.
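For reference, a rough sketch of what users have to write by hand today instead of -i (the substitution and file handling are illustrative only):

# Hypothetical stand-in for `perl6 -i -pe 's/foo/bar/' file ...`
for @*ARGS -> $file {
    my $edited = $file.IO.slurp.subst('foo', 'bar', :g);
    $file.IO.spurt: $edited;
}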

Maybe Any:D.await should return self?

Currently, it is an error to await on something that is not an Awaitable. But I wonder if that makes sense.

Maybe it makes more sense to have it simply return the invocant on concrete objects? I mean, we're already assuming that each scalar is a single-element list; why not also assume that each defined value (well, maybe except for Failures) is a Promise that has been kept.

This would make it easier to mix synchronous and asynchronous code, case in point:

my %cache;
method foo($bar) {
    with %cache{$bar} {
        return $_
    }
    else {
        return start { %cache{$bar} = something expensive }
    }
}

You could then:

    my $value = await $obj.foo("bar");

without having to know whether a Promise from the start was actually returned. And if the cached value is already there, you wouldn't need to wrap it in a Promise just to avoid dying because await only handles something Awaitable.

Most likely I'm missing some concurrent issue that would make such an approach unwise.
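For comparison, what a caller has to do today under the current rules (a minimal sketch, continuing the example above):

my $result = $obj.foo("bar");
my $value  = await ($result ~~ Awaitable ?? $result !! Promise.kept($result));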

Ecosystem issues and a corresponding dev

There are three tickets related to the ecosystem that are currently labeled with fallback (a label for tickets that don't fall into any other existing category):

As I mentioned before, I think we need a new label ecosystem as well as a dev who will be looking after the tickets and driving the progress in that area (current list).

Personally I think @ugexe is the best candidate, and I highly recommend them to apply. Meanwhile others can also do so if they feel strongly about it.

rt.perl.org is shutting down

According to Robert Spier, they are planning on shutting down rt.perl.org this summer. We have to decide if (and if so, how) we're going to migrate the tickets from rt to (presumably) github.

@toddr is leading the charge from the perl5 side, so we can coordinate with him on how to do a migration.

I have a 55.2MB zip file which contains a .json for each ticket (6692 of them); let me know where I can upload a copy.

I've inlined the smallest json file in the comments in case anyone wants to poke at structure.

Metaop semantics with QuantHashes

Basically this situation, expanded to all QuantHashes:

my %d is SetHash = ^10;
dd %d;  # SetHash.new(6,2,9,0,8,5,4,3,1,7)
%d := %d (-) %d.grep: *.key %% 2;
dd %d;  # SetHash.new(9,5,3,1,7)

Note that to get this result, you need to bind the result of (-) to %d. If, however, you assign the result, you wind up with a SetHash that contains a SetHash:

my %d is SetHash = ^10;
dd %d;  # SetHash.new(6,2,9,0,8,5,4,3,1,7)
%d = %d (-) %d.grep: *.key %% 2;
dd %d;  # SetHash.new(SetHash.new(9,5,7,1,3))

Which implies that if you want to use a metaop for this:

my %d is SetHash = ^10;
dd %d;  # SetHash.new(6,2,9,0,8,5,4,3,1,7)
%d (-)= %d.grep: *.key %% 2;
dd %d;  # SetHash.new(SetHash.new(9,5,7,1,3))

you wind up with a result that people find to be unexpected. But it is entirely up to spec, as you should be able to put QuantHashes inside of QuantHashes. And in fact, I've used that approach for a recent Perl Weekly Challenge.

On the other hand, if we just had used hashes:

my %d = ^10 Z=> True xx *;
dd %d;  # Hash %d = {"0" => Bool::True, "1" => Bool::True, "2" => Bool::True, "3" => Bool::True, "4" => Bool::True, "5" => Bool::True, "6" => Bool::True, "7" => Bool::True, "8" => Bool::True, "9" => Bool::True}
%d (-)= %d.grep: *.key %% 2;
dd %d;  # Hash %d = {"1" => Bool::True, "3" => Bool::True, "5" => Bool::True, "7" => Bool::True, "9" => Bool::True}

it does work as some people expect. So I guess there's something to be said for that semantic as well, since we're supposed to assume that QuantHashes are just object Hashes with a limitation on what they can have as a value.

Making metaops special case QuantHashes, feels like a bad idea to me.

Since I fear that there is code out there using QuantHashes with the current semantics, if we should decide to give QuantHashes the same semantics as object Hashes in this situation, I think we will need to make it version dependent.

Implement Perl 6 Academy for use as call to action on marketing pieces

Have all applicable marketing materials reference an "academy lesson" that teaches the topic discussed on that particular marketing piece.

The academy lesson would be a page that gives a brief description of the feature and why it's awesome, trains the user on the basics of the usage of the feature, and gives an in-page code evaler for user to play around with what they learned.

The status of PREVIEW modifier.

I didn't plan this for today but the discussion we've got on IRC is probably requiring continuation.

It is all about the question of how we handle the PREVIEW modifier. I proposed deprecating 6.d.PREVIEW soon after the next Rakudo release. Deprecation means that any use of the version in code will produce a compile-time warning; no other side effects. In a couple of release cycles the deprecated modifier can be dropped, and that is when its use would result in an error.
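For reference, the modifier in question is used like this:

use v6.d.PREVIEW;   # opts into not-yet-released 6.d features; under this proposal
                    # its use would first warn, and later error, once 6.d ships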

I also drafted my vision in Building Rakudo paper.

What is my point behind having PREVIEW dropped over time? Some proposed features, for one reason or another, may not make it into the release. Their implementations are not to be kept in the core forever (unless their acceptance has been postponed until the next release). Deprecation of PREVIEW is a signal for those still using it to reconsider their code and refactor it to conform to supported releases. Whether or not PREVIEW is removed after that is not that important anymore. But as long as no experimental code bound to it is left in the core – what is the point of keeping it?

This is just one point of view. A short talk on IRC has revealed a few more and I hope will see them here.

where blocks vs sub signatures

Originally discovered in rt#123596 then further discussed in PR#535.

Rakudo currently evaluates where clauses before unpacking sub-signatures. This allows one to verify a parameter before unpacking it. The documentation states this behaviour clearly.

> multi car($x, [$y, @ys] where $x == $y) {1}; say car 1, [1, [2, 3]];
Type check failed in binding to parameter '<anon>'; expected Any but got Mu (Mu)
  in sub car at <unknown file> line 1
  in block <unit> at <unknown file> line 1

> multi car2($x, [$y, @ys where $x == $y]) {1}; say car2 1, [1, [2, 3]];
1

rakudo/rakudo#535 moves the where processing after the sub-signature unpacking, effectively making both of the above work, but losing the ability to verify a parameter before unpacking.

As far as I can see only one of the two behaviours can be had.
Which one should it be?

Defining custom coercions from extant types

I remember this being discussed a while back, and I may throw a prototype together at some point but figured I'd propose it here first.

Right now, classes can define how they coerce into other classes which is particularly useful in signatures, but also in other general cases.

class A {
  method Str { ... } # coerces to string 
  method Int { ... } # coerces to int
}

This coercion for many built ins is two way:

"123".Int.Str # from string to integer and back again

Suppose our class A can be created from a Str:

class A { 
   multi method new (Str $foo) { … }
   method Str { … }
}

It is possible to round trip only one way:

my $a = A.new;
A.new($a.Str); # good
$a.Str.A; # error, Str has no method A

The chaining method structure (A.new.Str.A) is not available because that would require augmenting Str (something that's possible, but highly discouraged). While fairly easy to work around in general code, it does prevent us from passing a Str to a sub whose signature allows for A(). Even where we can define each class in full, because of single-pass parsing, it makes doing something like the following quite difficult

class A {
  method B { … }
}
class B {
  method A { ... } 
}

I would propose having a standardized method akin to ACCEPTS by which classes can define how other classes coerce into them. Thus the above cyclic problem could be at least partially solved by only needing one of them to define the other:

class A { ... }
class B { 
  method A { A.new: … }
  method FROM(A:D $a) { self.bless: … }
}

When type checking a signature like sub foo (B() $b) { … }, the process would then flow as:

  1. Is $b a B? If so, successful signature check.
  2. If not, does $b have the method .B? If so, $b = $b.B and successful signature check.
  3. If not, does B have the method FROM(Type:D) where Type is any of the types in $b.^mro? If so, $b = B.FROM($b) and successful signature check.
  4. Fail signature check.

While this would work fantastically for signatures, I'm not sure it is something that should be automatically tried in method chaining (e.g. A.new.B might want to fail), but a quick sub similar to the following would handle it whenever there were any question:

sub coerce ($from, $to) {
  $from.?"{$to.^name}"() 
    // $to.?FROM($from) 
    // die "Cannot coerce $from into {$to.WHAT}";
}

(Obviously, it'd be a bit more complex than that, as if a coercion fails, you'd want to simply try the next one along the ^mro chain).

The utility of such a method would be pretty tangible, particularly whenever a custom class can easily be obtained from anything of type Cool. In quite a few of my Intl modules, a LanguageTag object is one of the argument types. A lot of times it's more convenient for people to pass in a Str directly that is then coerced into a LanguageTag object. The result is a lot of multis to handle the coercion, which the Type() $foo signature structure is supposed to help avoid.

perl6-infra: service: Password handling

Password handling

Category: run
Attributes/tags: backuped

As many of the DNS hostings are only a "single user" solution, we need a place to put the infrastructure passwords.

Proposed solution

I would give https://www.gopass.pw/ a try. It seems to be similar to https://www.passwordstore.org/ with some tweaks for multiple people.

Options

  • A simple gpg encrypted txt file
  • https://www.passbolt.com/ – but it needs a server to keep it running. More software, more possible security holes.

Admins

Issues with security and reliability of our infrastructure

See matrix-org/matrix.org#371.

Also maybe:

Basically, there's some Perl 6 infrastructure that is used to host a bunch of stuff, including Rakudo tarballs and MSIs. I guess it's just a matter of time before things get hacked? There's no hardening of any sort that I'm aware of, and definitely no policies to make things more secure. Also, last time I looked I saw a bunch of ssh keys of people who were no longer actively involved in the project, and at least one key of someone who is no longer alive.

I think a lot can be learned from matrix-org/matrix.org#371.

Also, I don't think that fixing a few things will cut it. IMO we need to be taking steps with much broader scope when it comes to security.

`once` block can be executed more than once

Say you have this piece of code:

sub foo { once { say "once!" } };
foo; foo; foo

Output:

once!

So far so good. Then you decide to add a loop into the sub body:

sub foo { for ^5 { once { say "once!" } } };
foo; foo; foo

Output:

once!
once!
once!

So a once block can be executed more than once, which intuitively hardly makes any sense.

Looking at the ecosystem, implementing logic for things that should only be done once is better done with a variable:
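(The ecosystem examples originally linked here are omitted; the following is an illustrative sketch of that flag-variable pattern, not taken from any specific module.)

my $done = False;
sub foo {
    for ^5 {
        unless $done {
            say "once!";
            $done = True;
        }
    }
}
foo; foo; foo;   # prints "once!" exactly once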

Ecosystem content, quality and fragmentation issues.

Currently the ecosystem is very messy. To name some problems:

IRC discussion: https://colabti.org/irclogger/irclogger_log/perl6?date=2019-06-14#l399

I don't know what would be the best solution. There are different ideas floating around, like creating a new ecosystem with a better policy, or trying to clean the current one by adding some sort of purgatory for modules. I think we'd benefit from having a person who'd be able to focus on the ecosystem stuff and come up with some solution.

.WHEN

Just to brainstorm.

Would it be possible to fill out the .WHAT, .WHERE, .WHY, .HOW, with a .WHEN?

.WHEN would store the DateTime when the variable was last defined.

Would this incur a significant amount of change in CORE.setting?
How much of a performance penalty would this incur?
How much more memory would this use?

Does this really need a use-case defined? If so, I can provide a simple one:

User is tracking entries via a hash. Wants to only consider entries made within the last 5 minutes. Instead of creating ANOTHER hash entry, can always use %h<Entry>.WHEN

Is this enough to jumpstart a conversation?
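A sketch of that use case with the hypothetical feature (.WHEN does not exist today, so the syntax is purely illustrative):

my %h;
%h<Entry> = 42;
# ... later ...
if DateTime.now - %h<Entry>.WHEN < 300 {   # hypothetical .WHEN; 300 seconds = 5 minutes
    say "entry was made within the last 5 minutes";
}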

Metadata licenses should be required before adding new modules to ecosystem

Since Perl 6 is a language for the future, and our ecosystem is not a permanent solution, we need to ensure that we have a metadata license category, and ensure that it is a permissive license which is suitable for redistribution by a wide variety of other projects.

See: https://www.freedesktop.org/software/appstream/docs/chap-Quickstart.html#sect-Quickstart-DesktopApps

From freedesktop.org:

Recommended metadata file contents

<metadata_license/>
The <metadata_license/> tag is indicating the content license that you are releasing the one metainfo file under. This is not typically the same as the project license. Omitting the license value can result in your data not being incorporated into the distribution metadata (so this is a required tag).
A permissive license ensures your data can be combined with arbitrary other data in one file, without license conflicts (this means copyleft licenses like the GPL are not suitable as metadata license). Possible license identifiers include:
FSFAP
CC0-1.0
CC-BY-3.0
CC-BY-SA-3.0
GFDL-1.3
MIT
The license codes correspond to the identifiers found at the SPDX OpenSource License Registry. Take a look at <metadata_license/> for more details about this tag.

Proposal

I propose a metadata-license tag, and that it be required for new additions. They should use one of the permissive licenses on this list (so not the GPL ones):
https://wiki.debian.org/DFSGLicenses (this includes the Artistic 2.0 license, by the way).

OK Metadata licenses:

Artistic 2.0
FSFAP
CC0-1.0
CC-BY-3.0
CC-BY-SA-3.0
GFDL-1.3
MIT

This is entirely separate from the license of the project itself. The metadata files need a more permissive license to ensure as wide a distribution as possible, now and in the future. The project can be licensed however the project's creator chooses, but the metadata file itself must be under a permissive license OK for redistribution.
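A hypothetical META6.json fragment showing how this could look (the metadata-license field name is illustrative, not part of the current META6 spec):

{
    "name"             : "My::Module",
    "version"          : "0.0.1",
    "license"          : "GPL-3.0-or-later",
    "metadata-license" : "CC0-1.0"
}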

Moving tickets between rakudo and problem-solving repos is impossible

It used to be that feature requests and proposed language changes were filed in rakudo/rakudo, but now we have this repo. However, because of a GitHub limitation, it's not possible to move tickets between different orgs.

Is there any good reason for https://github.com/rakudo/rakudo/ and https://github.com/MoarVM/MoarVM repos to be in separate orgs?

Also, there are some inconsistencies. For example:

Wouldn't it be easier to move everything into perl6 org?

Note that the CLA restriction can be kept in place, and it can be implemented with GitHub's Teams feature.

Ambiguity in slicing with Ranges / WhateverCodes

On the surface, these two pieces of code do exactly the same thing:

$ perl6 -e 'my @a = ^10; dd @a[^Inf]'
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

$ perl6 -e 'my @a = ^10; dd @a[^*]'
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

But under the hood, they do very different things.

In the first case, the slice is produced from a Range 0..^Inf, so it will just produce values for the slice until the source is exhausted.

In the second case, the slice is produced from a WhateverCode:

$ perl6 -e 'my $a = ^*; dd $a.^name; dd $a(10).list'
"WhateverCode"
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

Specifically, ^* codegens as { ^$_ }, which is effectively the same as ^(*-0).

So, what does one need to do if one wants to have a slice of all but the last values of an Iterable? Well, this does not do the right thing:

$ perl6 -e 'my @a = ^10; dd @a[^*]'
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

because that's the equivalent of doing:

$ perl6 -e 'my @a = ^10; dd @a[^10]'
(0, 1, 2, 3, 4, 5, 6, 7, 8, 9)

The next thing one could do, is to use the *-1 syntax:

$ perl6 -e 'my @a = ^10; say @a[^*-1]'
Effective index out of range. Is: -1, should be in 0..^Inf

This does not work, because:

$ perl6 -e 'dd (^*-1)(10)'
-1..^9

To make this work, one needs to use parentheses:

$ perl6 -e 'my @a = ^10; dd @a[^(*-1)]'
(0, 1, 2, 3, 4, 5, 6, 7, 8)

Issue rakudo/rakudo#3010 indicates that a warning would need to be in place. I'm not sure that that is the correct solution to this situation.

I think part of the underlying issue is the difference in codegen for:

$ perl6 -e 'dd 0..^*'
0..^Inf
$ perl6 -e 'dd ^*'
{ ... }

Perhaps we need to change the codegen for ^*. In any case, any changes here are part of potentially very hot code, so any additional checks will slow things down for all. In that vein, I also think that rakudo/rakudo@35b69f0 should probably be reverted.

Specify rounding mode in CORE

add HALF-[UP|DOWN|EVEN|ODD] | [TO|FROM]-ZERO...

Rounding modes were something requested (because of IEEE standards); discussion should be continued here from rakudo/rakudo#2831. I don't take issue with this being merged, though some say it should be in a Stats module (I don't take issue with that, either), but that door swings both ways and perhaps all rounding should be done in a stats module? The line is blurry.
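For context, the current behaviour, plus a purely illustrative sketch of how a mode could be specified (the :mode adverb is hypothetical):

say 2.5.round;      # 3
say (-2.5).round;   # -2   (current behaviour: halves round towards +Inf)
# say 2.5.round(:mode<HALF-EVEN>);   # hypothetical: would give 2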

Anyway, discussion should continue here. If I don't hear much back in the way of arguments against, then I will assume that it is OK to push the merge.

Notifying @coke @ugexe @AlexDaniel (because you participated in the original thread).

Inconsistency of container descriptor default value type for nominalizable types.

May I propose an easy problem to start with?

The type of the container descriptor's default attribute depends on what particular nominalizable type was used to declare a variable/attribute. For a definite type it would be the base type; for a subset, the subset itself:

my Int:D $a = 0; 
say $a.VAR.default.^name; # Int
my Int $a where {True};
say $a.VAR.default.^name, " of ", $a.VAR.default.HOW.^name; # <anon> of Perl6::Metamodel::SubsetHOW

I would suggest that in both cases the implicit default value must be the base type, i.e. Int in the example above. More precisely, the declaration type of the variable must be nominalized and the resulting type object used as the default. That would mean:

my Int:D $a where {True} = 0;
say $a.VAR.default.^name; # Int
