
problem-specifications's Issues

Definition of "sublist" in sublist exercise

I recently submitted a pull request to xpython regarding what I thought "sublist" should mean in the context of this problem, and @sjakobi redirected me here.

In my reading of the problem, a list is essentially a set that allows repetition - order doesn't inherently matter. The way the tests are written treats lists more like words, where order does matter.

Given that there's this ambiguity in the concept of lists, can we change the problem specification to clarify this? I don't have an opinion on which way to change things, but it should go one of two ways:

  • Clarify that order doesn't matter in a list, and change test_spread_sublist and its equivalents so that multiples_of_15 is a sublist of multiples_of_3
  • Clarify that order does matter in a list, and keep the tests as they are.

On a related note, I've only had any real experience with Python. Are lists handled in a different way in other languages such that it's less of an issue?
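To make the ambiguity concrete, here is a rough Python sketch of the two readings; the helper names and the sample lists are illustrative, not taken from the test suite:

from collections import Counter

def is_ordered_sublist(small, big):
    # Order-sensitive reading: small must appear as a contiguous run inside big.
    n = len(small)
    return any(big[i:i + n] == small for i in range(len(big) - n + 1))

def is_multiset_sublist(small, big):
    # Order-insensitive reading: big contains at least as many of each element as small.
    return not (Counter(small) - Counter(big))

multiples_of_3 = [3, 6, 9, 12, 15, 18, 21, 24, 27, 30]
multiples_of_15 = [15, 30]
print(is_ordered_sublist(multiples_of_15, multiples_of_3))   # False: order and contiguity matter
print(is_multiset_sublist(multiples_of_15, multiples_of_3))  # True: only membership and counts matter

The current tests follow the first reading; the question is whether the README should say so explicitly.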

New About Language Section

Hello @exercism/track-maintainers

There is a new about section for tracks. Please add a description of your track's language and suggest what that language is best used for.

This section will appear on exercism.io language info pages (http://exercism.io/languages/[track name]). Once a description is available in the about section for your language, it will appear under the header '[Language Name] About:'.

cc: @kytrinyx @sguermond

meetup, what does "-teenth" mean?

Hi there!

I can't make any sense of the "-teenth" stuff mentioned in the exercise… Perhaps someone could clarify what it means?

Side note: various translators also did not recognize the term "monteenth" or anything similar…
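For anyone else who lands here: in the meetup exercise, the "teenth" days are the 13th through the 19th of a month, so the "Monteenth" is the Monday that falls in that range. A minimal sketch of that reading (the function name is chosen for illustration):

from datetime import date

def teenth_weekday(year, month, weekday):
    # weekday follows Python's convention: Monday == 0 ... Sunday == 6.
    # Exactly one day in 13..19 falls on each weekday, so this always finds a match.
    for day in range(13, 20):
        if date(year, month, day).weekday() == weekday:
            return date(year, month, day)

print(teenth_weekday(2013, 5, 0))  # the Monteenth of May 2013: 2013-05-13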

Remove spurious reference to other exercise in linked-list metadata

Copied from exercism/exercism#1440 reported by @patbl


Maybe "simple-linked-list" was left out of the JavaScript track by accident? Here are the instructions for Linked List:

In simple-linked-list we created a push-down stack using a purely
functional linked list, but if we allow mutability and add another
pointer we can build a very fast deque data structure (Double-Ended
queue).

[...]

Under the hood we'll use the same Element class from
simple-linked-list, but there should be a @next and @prev attributes
that are both writable. @prev should point to the previous Element in
the list.
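For readers who haven't seen the exercise, the quoted instructions are describing a doubly linked node. A rough Python sketch of that shape (the names are illustrative; the actual API differs per track):

class Element:
    # A mutable node with writable links in both directions.
    def __init__(self, value):
        self.value = value
        self.next = None  # the next Element in the list
        self.prev = None  # the previous Element in the list

head = Element(1)
tail = Element(2)
head.next, tail.prev = tail, head  # wire the two nodes into a tiny deque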

A place for track specific metadata in the .yml files

If you look at the ideas that have been floating around regarding exercise quality, quite a few of them involve classifying exercises in some way. While there's no clear idea yet about how best to go about that, I'd like to propose using the .yml files for track-specific metadata that could be processed programmatically (for example, to generate overviews).

The technical mechanism I'm imagining would look somewhat like this:

blurb: "Write a program that, given a number, can find the sum of all the multiples of 3 or 5 up to but not including that number."
source: "A variation on Problem 1 at Project Euler"
source_url: "http://projecteuler.net/problem=1"
common:
  type: practice
  topics: [loops]
go:
  topics: [loops, "anonymous functions"]

The new bits are the common and go mappings. The idea is that common contains attributes that apply to all tracks and can be overridden on a track-specific basis, as the go mapping does here for the topics attribute.

If I understand x-api/lib/x-api/readme.rb correctly, adding extra entries to the top-level mapping shouldn't cause any problems, so any change based on this proposal should be backwards compatible.
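To make the proposal concrete, here is a hypothetical sketch of how a tool might merge the common mapping with a track override (assumes PyYAML; the file name is made up):

import yaml

def track_metadata(path, track):
    # Start from the shared attributes, then overlay any track-specific overrides.
    with open(path) as f:
        data = yaml.safe_load(f)
    merged = dict(data.get("common", {}))
    merged.update(data.get(track, {}))
    return merged

print(track_metadata("example.yml", "go"))
# {'type': 'practice', 'topics': ['loops', 'anonymous functions']}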

largest-series-product: add common data?

I feel like this problem is a pretty good candidate for having common test data.

Well, I'm sure a lot of other problems are too, but this one in particular caught my attention when reviewing code.

Things to think about:

  • Any invalid inputs? largest_series_product("1234a5", 2)? largest_series_product("12345", -1)?
  • I'd really like tests for largest_series_product("123", 0) (should return 1 for empty product) and largest_series_product("", 1) (should error). Currently most tracks only test largest_series_product("", 0) == 1 which is reasonable since it's a boundary case, but in a two-input function isn't it right to test cases where only one input is a boundary as well as both?
  • I'd really like a test for largest_series_product("99099", 3) - this should be 0 (but some people's code says 1 because they assume the minimum is 1... it's not!). See the sketch after this list.
  • Different languages handle error cases differently, so be careful. Example: Ruby raises exceptions, Go returns (int, error), Clojure just... doesn't? (I feel like this should change...). But this is a problem for each track to handle individually.
  • Obviously making each track actually use the common data is a bigger question, but that's a problem for each track to handle individually - let's at least get the process started by having common data.
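A minimal Python sketch of the behaviour described in the bullets above (error handling is collapsed to ValueError for brevity; as noted, each track will map errors to its own idiom):

from functools import reduce

def largest_series_product(digits, span):
    if span < 0 or span > len(digits) or (digits and not digits.isdigit()):
        raise ValueError("invalid input")
    return max(
        reduce(lambda acc, d: acc * int(d), digits[i:i + span], 1)
        for i in range(len(digits) - span + 1)
    )

print(largest_series_product("123", 0))    # 1: the empty product
print(largest_series_product("99099", 3))  # 0, not 1: every window contains a zero
largest_series_product("", 1)              # raises: span longer than the input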

tracks with this problem: http://x.exercism.io/problems/largest-series-product

I intend to do this myself, but I can't do it immediately, so this issue will serve as a reminder for when I can.

I'll survey the test data being used by the various tracks in a bit and see if I see any interesting features as well.

README for Custom-Set is too ambiguous

I just reviewed a submission on the EcmaScript track in which the submitter chose to implement his Custom Set by extending Set!

I don't think this is the intention of the exercise, but the README doesn't explicitly forbid using the built-in Set class the way that, for example, the Strain README says 'keep your hands off filter/reduce...'

I know this would be an appropriate caveat in Java, JS, and ES -- what about the other tracks?

Is there a way to specify a pattern for test files?

The test framework we're using in Lua expects test files to include "_spec" in the name, but exercism seems to expect test files to include "_test". This mismatch causes exercism to fail to pull up test files from Lua exercises when the "Test Suite" button is clicked while viewing an exercise.

We can include configuration in each exercise that instructs the test framework to look for files named with "_test" instead of "_spec", but since that means maintaining a bunch of duplicate configurations I'm hoping to avoid that.

Is there an existing mechanism for indicating to exercism which files should be considered part of the test suite? If not, is this something worth adding? If someone can point me to the right place I'd be happy to take a crack at it.

Should rikki- submit all the hello worlds?

Right now, if you submit a solution to a hello-world problem, rikki- automatically gives a little bit of feedback.

I think this is really useful: it's not just potential documentation, it's documentation that shows up in the right place at the right time.

If rikki- submits all of the reference solutions to hello-world, then rikki-'s comment will have the little </> icon (which used to be the rocket ship).

[screenshot: their-solution]

This would be an opportunity to say "hey, when someone comments, you can go see their solution by clicking the </>".

Tangentially, maybe we should add the hello-world to the "exercises" list, even though people aren't expected to comment on them. This would make it so that rikki- also could say: "go look at other people's solutions by clicking on 'Exercises' in the top menu".

Exercise classification

(Not quite sure if this should go here or on exercism/exercism.io, but since it's track/exercise related this seems to be the best fit.)

This is a bit of a brainstorm. I've been thinking about how to improve the quality of the tracks and it seems that the first step would be to get some kind of shared classification to apply to exercises, a shorthand to quickly get an idea of where an exercise fits.

The main categorization I've been able to come up with is focus/practice/challenge.

  • Focus exercises are simple and intended to make the user familiar with specific features of a language. For example, leap teaches people about if-statements or boolean logic (depending on how they implement it), while Hamming is all about looping over a collection/string. The key point of a good focus exercise is that it steers the user in a particular direction and has little to distract from that.
  • Practice exercises take multiple concepts and let the user experiment with different approaches. Practice exercises are complex enough to enable multiple approaches but simple enough that the user doesn't have to think hard to find at least one possible approach. Bob is a good practice exercise: it involves conditionals, string operations, and some fairly simple logic, but there are quite a few different ways to implement it (regexes, splitting into different functions, etc.).
  • Challenge exercises go beyond practice exercises in that challenge exercises require the user to come up with a non-trivial algorithm or require the user to think of lots of edge cases. These are usually longer exercises, though I would rate prime-factors as challenge as well (since it's a bit tricky to do efficiently).

On top of this there seems to be a kind of "natural" flow, roughly in three phases.

  • Phase 1 uses numbers, conditionals and loops. No classes, higher order functions or anything like that. Functions are used as the way to structure code. Few standard library functions/classes are used.
  • Phase 2 introduces "natural" collections, the basic collections supported by the language, such as lists, maps/dicts and possibly sets. It also introduces some higher organizational concepts like classes or higher order functions, depending on what fits with the language. Core standard library functions/classes are used.
  • Phase 3 brings the full power of the language with features like threads/goroutines, generators, continuations, etc. Less well-known standard library functions/classes can be used.

These phases are very vague unfortunately and they differ a bit for each track since what's basic in one track (anonymous functions in Haskell) may be more advanced in another track (anonymous functions in Go).

Does this make sense as a classification system? Is this a useful way to think about exercises?

Exercises without a test suite?

If you look at BDD an essential part of that is that you write both the code and the test suite at the same time. You can't do red-green-refactor with an existing test suite because you'd only end up at green when the whole job is done.

Would it make sense to have a few exercises that don't have a test suite at all, and that are explicitly about applying TDD/BDD principles? Such exercises would of course need READMEs that are very clear about the requirements, but that's quite doable.

Add 1800 to the leap year inputs

I saw a solution recently which passed the tests but had a bug:

year&3 == 0 && !(year%25 == 0 && year&5 != 0)

That last bitwise and should be year&15 not year&5.
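A quick Python check of why 1800 catches the bug (parentheses added because & binds more loosely than == in Python; the trick works because 400 = 16 * 25, so "divisible by 400" is "divisible by 25 and by 16"):

def is_leap_buggy(year):
    return (year & 3) == 0 and not ((year % 25) == 0 and (year & 5) != 0)

def is_leap_fixed(year):
    return (year & 3) == 0 and not ((year % 25) == 0 and (year & 15) != 0)

print(is_leap_buggy(1800), is_leap_fixed(1800))  # True False -- 1800 is not a leap year
print(is_leap_buggy(2000), is_leap_fixed(2000))  # True True  -- 2000 is a leap year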

Insights - what data would be helpful in improving the tracks?

The question has come up a few times, most recently in exercism/DEPRECATED.javascript#103 by @ZacharyRSmith.

The ordering in the tracks is mostly arbitrary when starting out. It would be incredibly helpful to see where people are falling off so that we can move harder exercises farther down the list, or improve boring ones. Eventually, the idea of paths or clusters or topics would be a big improvement (see #63), but that might be a ways off, as I'm trying to do work around the onboarding user experience at the moment.

If we were to create a quick metrics-dashboard, what data should we have?

Add Bracket-Push

@ginna-baker I completely forgot that we need metadata for this so that we can generate a readme.

Would you post a short description of what the instructions for Bracket Push should be?

I would be happy to put together the actual metadata for you if you don't have time to do it.

It doesn't have to be fancy. Here's an example of how it's done for the clock problem:

Clock exercise improvements

I've been reviewing a lot of submissions for the Go implementation of Clock, and I'm seeing a few interesting edge cases:

  1. Sometimes the mutation happens in the display method instead of in add, so you might have two copies of a clock that are identical, add 1440 minutes to one and 2880 minutes to the other, and now they display the same value but are not equal.
  2. Some solutions only account for one wrap-around adjustment in either direction, meaning that if someone adds 1,000,000 minutes, the clock does not end up with a valid display time.

I think that we should have test cases that go much farther outside the valid values, and I also think that we should test equality after adding values.
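A sketch of the kind of test being proposed, with a throwaway Clock included so the snippet stands alone (it is not any track's reference implementation):

class Clock:
    # Minimal illustrative clock that normalizes on construction.
    def __init__(self, hour, minute):
        total = (hour * 60 + minute) % (24 * 60)
        self.hour, self.minute = divmod(total, 60)

    def add(self, minutes):
        return Clock(self.hour, self.minute + minutes)

    def __eq__(self, other):
        return (self.hour, self.minute) == (other.hour, other.minute)

    def __str__(self):
        return f"{self.hour:02d}:{self.minute:02d}"

a = Clock(10, 0).add(1440)   # one full day later
b = Clock(10, 0).add(2880)   # two full days later
assert str(a) == str(b) == "10:00"
assert a == b                # catches implementations that only normalize in the display path

c = Clock(0, 0).add(1_000_000)   # far outside a single wrap in either direction
assert 0 <= c.hour < 24 and 0 <= c.minute < 60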

Find tracks that implement a particular exercise?

I find it useful to look at existing implementations of an exercise when adding a problem to a track. This is particularly useful for exercises without a .json file of tests and/or exercises where the description isn't particularly clear as to the author's intent.

Given this, is there an easy way to determine which tracks implement a particular problem? Right now I just peruse the couple of tracks that I know are most popular (and hence most likely to have a wide array of exercises).

Move language-specific documentation into language-specific repositories

At the moment the docs site (http://help.exercism.io) is a bit hard to use, and people don't really know to go there. Let's move all of the language-specific documentation into each language track repository.

We can create a docs/ directory in the language repository (remember to add it as "ignored" in the config.json file).

Let's standardize on the following files (not all languages will have all sections):

  • ABOUT.md - a friendly, conversational introduction about the language. What problems does it solve really well? What is it typically used for? Why should I learn it?
  • INSTALLATION.md - should contain details about installing the language itself and any dependencies or configuration necessary to work on the exercises in that language.
  • TESTS.md - should contain details about how to run the test suite for each exercise (described generically). Sometimes this will just be a single command, sometimes it will be more involved with instructions about a build system or how to do it via an IDE.
  • LEARNING.md - If you're new to the language, what are some good resources to learn it from scratch?
  • RESOURCES.md - links to useful resources on the web or elsewhere.

The problems API can deliver documentation as easily as test suites and READMEs. We could add a simple /docs/:language endpoint to x-api that serves up this content in its separate sections.

This can then be consumed by the language landing pages that are being planned for the exercism.io site itself (discussion: exercism/exercism#2299).
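If the endpoint goes ahead, consuming it from a client could be as simple as the sketch below (the URL and the response shape are assumptions about the proposal, not an existing API; assumes the requests library):

import requests

def fetch_docs(language):
    # Hypothetical route on x-api; the real path and payload may differ.
    response = requests.get(f"http://x.exercism.io/docs/{language}")
    response.raise_for_status()
    return response.json()  # e.g. {"ABOUT": "...", "INSTALLATION": "...", "TESTS": "..."}

docs = fetch_docs("lua")
print(docs.get("ABOUT", "")[:80])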

Also, we can use this for the track-level README:

Later, we can add more files to this directory (e.g. CONTRIBUTING.md for ways in which you can contribute to that language track).

Thoughts? Counter-suggestions? Objections?

Is there a way to add sanity checks for a track?

For the Lua track I keep a local script that makes sure every example passes all the tests for its problem. Long term I may update it to check for obvious style or formatting problems. I'd really like this not to live only on my machine, so that other contributors can use it.

So...

  • is there a common way to plug this into Travis to ensure that pull requests don't break anything?
  • is there some other common method that track maintainers are using that covers this?
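As a straw man, the local script described above might look roughly like this (the one-directory-per-exercise layout and the test command are assumptions; swap in whatever the track actually uses):

import pathlib
import subprocess
import sys

TEST_COMMAND = ["busted", "."]  # assumed test runner invocation for the Lua track

failures = []
for exercise in sorted(p for p in pathlib.Path(".").iterdir() if p.is_dir()):
    # Run each exercise's test suite against its example solution.
    if subprocess.run(TEST_COMMAND, cwd=exercise).returncode != 0:
        failures.append(exercise.name)

if failures:
    sys.exit("examples failing their tests: " + ", ".join(failures))
print("all examples pass")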

Subtracks, 2.0 exercises, tags

Related to this gitter convo

Also maybe related to #63

I've been thinking about the possibility of interest tracks or subtracks:

Example Case: There's a lot of interest in logic programming and one of the popular things I see people doing is working through the exercises in The Reasoned Schemer in Clojure core.logic. Now that we have a Scheme track, making those exercises available in appropriate languages, with appropriate lib deps, would be a great way to work through them with others.

I also like the idea of iterative approaches to problems in the above-cited gitter convo. I think this might be orthogonal to subtracks, but I figured this issue would be a good place to tease out those distinctions.

As for #63 - right after the discussion about subtracks, it occurred to me that a tagging system might be a really useful thing. Classification like focus/practice/challenge is one area, but tags could also cover topical areas problems address, like string manipulation, list comprehension, iteration/recursion, etc.

Tags could also cover exercises that fall outside CS instruction justification; e.g., I was thinking about how differently I would implement ROT13 in Ruby, Scheme, C, etc. ROT13 is of course, at this point, a historical curiosity, but I think there are probably other things that would make for an interesting exercise with odd trivia in the README, e.g.:

Back in the olden days of USENET, it was considered courteous to obfuscate potentially 
offensive posts with ROT13 "encryption", which simply rotated characters 13 places so 
A becomes N, B becomes O, C becomes P, etc.

Implement ROT13 cipher for alphabetic characters, preserving case.

It's not useful, but could involve character encoding, modulo arithmetic, etc., depending on the implementation. Things like this could be tagged historical or curiosity.
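For what it's worth, the exercise described above is only a handful of lines in Python, but it does touch character codes and modular arithmetic:

def rot13(text):
    result = []
    for ch in text:
        if ch.isascii() and ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            # Rotate within the 26-letter alphabet; 13 is its own inverse.
            result.append(chr((ord(ch) - base + 13) % 26 + base))
        else:
            result.append(ch)  # leave punctuation, digits, and spaces untouched
    return "".join(result)

print(rot13("Hello, USENET!"))         # Uryyb, HFRARG!
print(rot13(rot13("Hello, USENET!")))  # round-trips back to the original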

Anyhow, just a few ideas on evolving both the granularity and scope of exercises on exercism.

Thoughts?

Exercises about refactoring

I recently used the Gilded Rose refactoring exercise in a group I meet with locally and it was a big hit. I'd like to add it as an exercise here, but I'm not sure how it will fit. The problem is given as an existing (and very ugly) code base for a store's inventory system and it needs to be updated to support a new item type.

The user is supposed to refactor away from a convoluted set of if/else statements to an object oriented solution, then add the new item type. The repository I linked to above has versions in many, many languages.

I have two questions:

First, are there any issues with adding this exercise to exercism.io? I've ported the code to a new language, so I won't be directly importing someone else's work verbatim, but I'm also not the original author.

Second, because the exercise takes the form of an existing code base, it's not enough to download a test suite (in fact, not having a test suite to start from makes it a better exercise). Does the current system support this kind of problem?

wordy: clarify README

In exercism/ruby#116 @dalexj correctly points out that there is an operation that is either blatantly wrong, or the instructions are too ambiguous.

What is -3 plus 7 multiplied by -2?
=> -8

By the usual rules of order of operations, that should be -17.

When I was in primary school we would be asked to solve word problems out loud, and the adult would say it like this: What is -3 plus 7 multiplied by -2?, clarifying that the order of operations was always to be taken left to right.

When the word problems are written down, however, this is not necessarily the obvious choice.
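For reference, here is what the strict left-to-right reading implied by the current test data looks like, next to normal operator precedence (the helper is illustrative, not any track's parser):

import operator

OPS = {"plus": operator.add, "minus": operator.sub,
       "multiplied by": operator.mul, "divided by": operator.floordiv}

def answer_left_to_right(value, *steps):
    # Apply each (operation, operand) pair strictly in the order given.
    for op, operand in steps:
        value = OPS[op](value, operand)
    return value

print(answer_left_to_right(-3, ("plus", 7), ("multiplied by", -2)))  # -8: left to right
print(-3 + 7 * -2)                                                   # -17: usual precedence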

Should we clarify the README to say that order of operations is left-to-right? Or should we fix the answer to reflect the correct order of operations?

Robot Name Requirements

I'm finding the uniqueness requirements of the robot name task unclear.

Random names mean a risk of collisions. In some exercism language tracks there are tests to ensure that the same name is never used twice.

Does this indicate that collisions should be avoided in every environment or just in those with tests specifically disallowing collisions?
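Where a track does test for uniqueness, the simplest reading is that the generator must remember every name it has already issued. A rough sketch (the two-letters-three-digits format is an assumption based on common track implementations):

import random
import string

_used = set()

def robot_name():
    # Keep drawing random names until we find one that has never been issued.
    while True:
        name = ("".join(random.choices(string.ascii_uppercase, k=2))
                + "".join(random.choices(string.digits, k=3)))
        if name not in _used:
            _used.add(name)
            return name

print(robot_name(), robot_name())  # e.g. RX837 QK004 -- never repeats within a run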

bracket-push: Improve exercise description

Ensure that all the curly braces and square brackets are matched correctly, and nested correctly.

bracket-push.md

Contrary to this description, the Go tests check not only for curly and square brackets but also for round brackets/parentheses. In addition, the Go implementation returns an 'unknown bracket' error if it discovers any character that is not one of the three bracket types, even whitespace.

While the README is often just a high-level description of the problem and the test suite provides the details, I think this makes it unnecessarily hard to implement the exercise consistently across the different tracks.

... at least the README for bracket-push should mention whether the input string can contain other characters and how to treat them.
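For comparison, here is a stack-based Python sketch that mirrors the Go behaviour described above (parentheses included, any other character rejected); whichever way the README goes, it should spell this out:

PAIRS = {")": "(", "]": "[", "}": "{"}

def brackets_balanced(text):
    stack = []
    for ch in text:
        if ch in "([{":
            stack.append(ch)
        elif ch in PAIRS:
            if not stack or stack.pop() != PAIRS[ch]:
                return False
        else:
            raise ValueError(f"unknown bracket: {ch!r}")  # mirrors the Go track's strictness
    return not stack  # anything left open is unbalanced

print(brackets_balanced("{[()]}"))  # True
print(brackets_balanced("{[)]"))    # False: closed in the wrong order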

"Hello, World!" spec and implemented problems are inconsistent with the traditional output

The spec currently says Hello, world!, whereas the traditional output is Hello, World! (capitalized W).

http://en.wikipedia.org/wiki/%22Hello,_World!%22_program

We should fix both the spec and the individual problems across the board.

If you're fairly new to programming, open source, or exercism, there's some context about this feature here: https://github.com/exercism/rossconf/issues/12 and here: https://github.com/exercism/rossconf/issues/2

Also: ask questions if anything is unclear; I'd love to fix any documentation that would make it easier to get started contributing to exercism.

Duplicate Hamming distance exercises

While adding exercises to the C# track, I found that the default list includes both the hamming and the point-mutations exercises. However, upon closer inspection, I believe these two exercises are duplicates. Shouldn't one of the two be removed/deprecated?

date-duration arithmetic exercise

In the Go track the birthday thing was super distracting. I've removed it in exercism/go#222

Have you noticed anything in your respective tracks? It seems like it might work in some languages and not others. /shrug.

rna-transcription -- weird non-biological thing slipped into this problem

I can't remember how it happened, but we've ended up with some tracks that do the "transcription" both ways: DNA to RNA and also RNA to DNA. Since, biologically speaking, transcription only goes from DNA to RNA, we should probably figure out which language tracks have the backwards direction implemented and make a note to fix it.
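For reference, the biologically correct direction is just a character mapping from DNA to its RNA complement; a Python sketch:

DNA_TO_RNA = str.maketrans("GCTA", "CGAU")

def to_rna(dna_strand):
    # Transcribe each DNA nucleotide to its RNA complement: G->C, C->G, T->A, A->U.
    return dna_strand.translate(DNA_TO_RNA)

print(to_rna("ACGTGGTCTTAA"))  # UGCACCAGAAUU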

Duplicate: bottles and beer-song

bottles and beer-song have the same md and yml.

beer-song has more implementations than bottles, so I propose that the sole bottles implementation (in xgo) be moved and adapted to beer-song.
The bottles directory should then be removed from x-common.

Boundary conditions for exercises

I've been thinking about exercises that fall at the two ends of the spectrum; seeing the linked-list exercise got me thinking: the majority of exercises fall into either the "clever puzzle" category or the "Project Euler" category, both of which are great.

But I'd love to see more fundamental CS-type exercises, as they're critical to understanding data structures, algorithms, Big O complexity, etc. - for instance, "implement insertion sort" or quicksort, maybe Dijkstra's shortest-path algorithm.
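To give a sense of scale, insertion sort is only about a dozen lines of Python yet still exercises loops, invariants, and in-place mutation (a sketch, not a proposed test suite):

def insertion_sort(items):
    # Grow a sorted prefix one element at a time, shifting larger items to the right.
    for i in range(1, len(items)):
        current = items[i]
        j = i - 1
        while j >= 0 and items[j] > current:
            items[j + 1] = items[j]
            j -= 1
        items[j + 1] = current
    return items

print(insertion_sort([5, 2, 4, 6, 1, 3]))  # [1, 2, 3, 4, 5, 6]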

Then I started thinking about knapsacks. Do we want NP-complete problems in the exercise set? Obviously a chess solver would be out of bounds as chess is in EXP, but what about problems in NP?

What about writing tests that are smart enough to determine the space/time complexity of the solution? (Easier in languages with reflection, I know, but just spitballing here.)

What's the lower bound (I think linked-list sets it pretty low) and what's the upper bound (NP-hard, NP-complete, beyond?), or is it just "try it; people can skip it"?

accumulate: confusion and unfortunate naming

I was reminded about this issue in a recent discussion in the python track: exercism/python#245

The accumulate problem describes what is typically called map or collect, I think, in many other languages.
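Concretely, the exercise is asking for something equivalent to the following, usually with the built-in forbidden so that people write the loop themselves (the exact restriction wording varies by track):

def accumulate(collection, operation):
    # Apply operation to every element, preserving order -- i.e. what most languages call map.
    return [operation(item) for item in collection]

print(accumulate([1, 2, 3], lambda x: x * x))  # [1, 4, 9]
print(list(map(lambda x: x * x, [1, 2, 3])))   # the built-in equivalent, for comparison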

I think we should rename this across the board, but I'm not sure what we should rename it to. Thoughts?

Sieve of Eratosthenes: Inclusive versus Exclusive

It was brought up in xpython that the README for this problem is vague. I feel the README should be updated to explicitly state whether the range is inclusive or exclusive of the limit, i.e. [2, limit) vs. [2, limit].

Here is a quote from the README:

Create your range, starting at two and ending at the given limit.

"Ending" could mean finish after considering the limit, or break once we get to the limit.

I don't think it matters much whether we choose inclusive or exclusive. I think we should just pick one and stick with it to be consistent. At the moment I lean towards inclusive, because I have the feeling this would be more natural for readers when considering the problem.
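For comparison, here is what the inclusive reading looks like in Python; only the range bounds change between the two interpretations:

def primes_up_to(limit):
    # Sieve of Eratosthenes over the inclusive range [2, limit].
    is_prime = [True] * (limit + 1)
    for candidate in range(2, int(limit ** 0.5) + 1):
        if is_prime[candidate]:
            for multiple in range(candidate * candidate, limit + 1, candidate):
                is_prime[multiple] = False
    return [n for n in range(2, limit + 1) if is_prime[n]]

print(primes_up_to(10))  # [2, 3, 5, 7]
print(primes_up_to(11))  # [2, 3, 5, 7, 11] -- 11 appears only under the inclusive reading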

Does anyone else have an opinion on this?
