
Exercism Go Track

Exercism exercises in Go.

Issues/Feedback

⚠️ Please be aware that this repository currently does not accept community contributions. This blog post explains the details.

If you have any feedback or experience problems, you can bring them up in the Go section of the Exercism forum.

Development setup

If you work on this repository, you should follow some standard Go development practices. You should have a recent version of Go installed, ideally the current release or the previous one.

You will need a GitHub account and you will need to fork exercism/go to your account. See GitHub Help if you are unfamiliar with the process. Clone your fork with the command git clone https://github.com/<you>/go. Test your clone by changing into the go directory and running bin/fetch-golangci-lint and then bin/run-tests. You should see tests pass for all exercises.

Note that unlike most other Go code, it is not necessary to clone this to your GOPATH. This is because this repo only imports from the standard library and isn't expected to be imported by other packages.

Your Go code should be formatted using the gofmt tool.

There is a misspelling tool. You can install it and run it occasionally to find low-hanging typo problems (see #570). It is not part of CI since it could give false positives.

Contributing Guide

Please be familiar with the contributing guide in the docs repository. This describes some great ways to get involved. In particular, please read the Pull Request Guidelines before opening a pull request.

Exercism Go style

Let's walk through an example (non-existent) exercise, which we'll call fizzbuzz, to see what could be included in its implementation.

Exercise configuration

An exercise is configured via an entry in the exercises array in the config.json file. If fizzbuzz is an optional exercise, it would have an entry below the core exercises that might look like:

{
  "slug": "fizzbuzz",
  "uuid": "mumblety-peg-whatever",
  "core": false,
  "unlocked_by": "two-fer",
  "difficulty": 1,
  "topics": ["conditionals"]
}

See Exercism Docs: config.json for more info.

Exercise files: Overview

For any exercise you may see a number of files present in a directory under exercises/:

~/go/exercises/fizzbuzz
$ tree -a
.
├── cases_test.go
├── example.go
├── fizzbuzz.go
├── fizzbuzz_test.go
├── .meta
│   ├── description.md
│   ├── gen.go
│   ├── hints.md
│   └── metadata.yml
└── README.md

This list of files can vary across exercises; not all exercises use all of these files. Exercises take their test data and README text from the Exercism problem-specifications repository, which collects common information for all exercises across all tracks. However, should track-specific documentation need to be included with the exercise, files in an exercise's .meta/ directory can be used to override or augment the exercise's README.

Let's briefly describe each file:

  • cases_test.go - Contains generated test cases, using test data sourced from the problem-specifications repository. Present only in some exercises; automatically generated by .meta/gen.go.

  • example.go - An example solution for the exercise used to verify the test suite. Ignored by the exercism fetch command. See also ignored files.

  • fizzbuzz.go - A stub file, present in some early exercises to give users a starting point.

  • fizzbuzz_test.go - The main test file for the exercise.

  • .meta/ - Contains files not to be included when a user fetches an exercise. See also ignored files.

  • .meta/description.md - Used to generate a track-specific description of the exercise in the exercise's README.

  • .meta/gen.go - Generates cases_test.go when present. See also synchronizing exercises with problem specifications.

  • .meta/hints.md - Used to add track-specific information to the README in addition to the exercise's generic problem-specifications description.

  • .meta/metadata.yml - Track-specific exercise metadata; overrides the exercise metadata from the problem-specifications repository.

In some exercises there can be extra files; for instance, the series exercise contains extra test files.

Ignored files

When a user fetches an exercise, they do not need to get all the files within an exercise directory: for instance, the example.go files that contain an example solution, or the gen.go files used to generate an exercise's test cases. Certain files and directories are therefore ignored when an exercise is fetched. These are:

  • The .meta directory and anything within it.
  • Any file that matches the ignore_pattern defined in the config.json file. This currently matches any filename that contains the word example, unless it is followed by the word test, with any number of characters in between (see the sketch below).
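
For illustration only, the effect of that rule can be sketched in Go roughly like this. The function below is hypothetical; it is not the actual pattern used in config.json.

package main

import (
	"fmt"
	"strings"
)

// isIgnored sketches the rule described above: a filename is ignored if it
// contains "example" with no "test" appearing later in the name.
func isIgnored(name string) bool {
	i := strings.Index(name, "example")
	if i < 0 {
		return false
	}
	return !strings.Contains(name[i:], "test")
}

func main() {
	fmt.Println(isIgnored("example.go"))       // true: ignored when fetching
	fmt.Println(isIgnored("example_test.go"))  // false: "test" follows "example"
	fmt.Println(isIgnored("fizzbuzz_test.go")) // false: no "example" in the name
}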

Example solutions

example.go is a reference solution. It is a valid solution that the CI (continuous integration) service can run tests against. Files with "example" in the file name are skipped by the exercism fetch command. Because of this, there is less need for this code to be a model of style, expression and readability, or to use the best algorithm. Examples can be plain, simple, concise, even naïve, as long as they are correct.

Stub files

Stub files, such as leap.go, are a starting point for solutions. Not all exercises need to have a stub file, only exercises early in the syllabus. By convention, the stub file for an exercise with slug exercise-slug must be named exercise_slug.go. This is because CI needs to delete stub files to avoid conflicting definitions.
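
As an illustration, a stub for our hypothetical fizzbuzz exercise might look like this; the exact comments and panic message vary between exercises.

// Package fizzbuzz is a hypothetical stub illustrating the convention above.
package fizzbuzz

// FizzBuzz should return the fizzbuzz word (or the number itself) for n.
func FizzBuzz(n int) string {
	panic("Please implement the FizzBuzz function")
}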

Tests

The test file is fetched for the solver and deserves attention for consistency and appearance.

The leap exercise makes use of data-driven tests: test cases are defined as data, and a test function iterates over that data. In this exercise the test cases are generated, so they live in the cases_test.go file, while the test function that iterates over them is defined in the leap_test.go file. The cases_test.go file is generated from information found in problem-specifications using generators. To add additional test cases (e.g. test cases that only make sense for Go), add them to <exercise>_test.go. An example of using additional test cases can be found in the two-bucket exercise.
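
A minimal sketch of the data-driven pattern, using made-up leap-like names rather than the track's actual code, looks like this:

package leap

import "testing"

// In a real exercise these cases would be generated into cases_test.go.
var testCases = []struct {
	description string
	year        int
	expected    bool
}{
	{"year not divisible by 4", 2015, false},
	{"year divisible by 4", 2016, true},
}

// The test function iterates over the data and fails fast on the first mismatch.
func TestLeapYears(t *testing.T) {
	for _, tc := range testCases {
		if got := IsLeapYear(tc.year); got != tc.expected {
			t.Fatalf("IsLeapYear(%d) = %t, want %t (%s)", tc.year, got, tc.expected, tc.description)
		}
	}
}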

Identifiers within the test function appear in actual-expected order as described at Useful Test Failures. Here the identifier observed is used instead of actual. That's fine. More common are the words got and want; they are clear and short. Note that Useful Test Failures is part of Code Review Comments; we like most of the advice on that page.

In Go we generally have all tests enabled and do not ask the solver to edit the test program (to enable progressive tests, for example). t.Fatalf(), as seen in the leap_test.go file, stops the tests at the first failure encountered, so the solver is not faced with too many failures at once.

Testable examples

Some exercises can contain Example tests that document the exercise API. These examples are run alongside the standard exercise tests and will verify that the exercise API is working as expected. They are not required by all exercises and are not intended to replace the data-driven tests. They are most useful for providing examples of how an exercise's API is used. Have a look at the example tests in the clock exercise to see them in action.
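
For instance, an Example test for a clock-like API could look roughly like the sketch below. The New/String API and the output format are illustrative assumptions; see the actual clock exercise for the real thing.

package clock

import "fmt"

// ExampleNew documents and verifies basic API usage. The New/String API
// sketched here mirrors, but may not exactly match, the clock exercise.
func ExampleNew() {
	c := New(8, 30)
	fmt.Println(c.String())
	// Output: 08:30
}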

Errors

We like errors in Go. It's not idiomatic Go to ignore invalid data or have undefined behavior. Sometimes our Go tests require an error return where other language tracks don't.
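
To sketch what that means in practice, here is a hypothetical parsing function (not taken from any particular exercise) that reports an error for invalid input rather than ignoring it:

package binary

import "fmt"

// ParseBinary is a hypothetical illustration: invalid input produces an
// error instead of undefined behavior.
func ParseBinary(s string) (int64, error) {
	if s == "" {
		return 0, fmt.Errorf("parse binary: empty input")
	}
	var n int64
	for _, r := range s {
		if r != '0' && r != '1' {
			return 0, fmt.Errorf("parse binary: invalid digit %q", r)
		}
		n = n*2 + int64(r-'0')
	}
	return n, nil
}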

Benchmarks

In most test files there will also be benchmark tests, as can be seen at the end of the leap_test.go file. In Go, benchmarking is a first-class citizen of the testing package. We throw in benchmarks because they're interesting, and because it is idiomatic in Go to think about performance. There is no critical use for these though. Usually they will just bench the combined time to run over all the test data rather than attempt precise timings on single function calls. They are useful if they let the solver try a change and see a performance effect.
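
Continuing the leap-like sketch from the Tests section, a benchmark in this style just runs the solution over the whole set of test data:

// BenchmarkLeapYears benches the combined time over all test cases, as
// described above; the names are illustrative.
func BenchmarkLeapYears(b *testing.B) {
	for i := 0; i < b.N; i++ {
		for _, tc := range testCases {
			IsLeapYear(tc.year)
		}
	}
}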

Synchronizing exercises with problem specifications

Some problems that are implemented in multiple tracks use the same inputs and outputs to define the test suites. Where the problem-specifications repository contains a canonical-data.json file with these inputs and outputs, we can generate the test cases programmatically. The problem-specifications repo also defines the instructions for the exercises, which are also shared across tracks and must also be synchronized.

Test structure

See the gen.go file in the leap exercise for an example of how this can be done.

Test case generators are named gen.go and are kept in a special .meta directory within each exercise that uses a test case generator. This .meta directory is ignored when a user fetches an exercise.

Whenever the shared JSON data changes, the test cases will need to be regenerated. The generator will first look for a local copy of the problem-specifications repository. If there isn't one, it will attempt to get the relevant JSON data for the exercise from the problem-specifications repository on GitHub. The generator uses the GitHub API to find some information about exercises when it cannot find a local copy of problem-specifications. This can cause throttling issues when working with a large number of exercises. We therefore recommend using a local copy of the repository when possible (remember to keep it current 😄).

To use a local copy of the problem-specifications repository, make sure that it has been cloned into the same parent-directory as the go repository.

$ tree -L 1 .
.
├── problem-specifications
└── go

Adding a test generator to an exercise

For some exercises, a test generator is used to generate the cases_test.go file with the test cases based on information from problem-specifications. To add a test generator to an exercise, the following steps are needed:

  1. Create the file gen.go in the directory .meta of the exercise
  2. Add the following template to gen.go:
package main

import (
	"log"
	"text/template"

	"../../../../gen"
)

func main() {
	t, err := template.New("").Parse(tmpl)
	if err != nil {
		log.Fatal(err)
	}
	j := map[string]interface{}{
		"property_1": &[]Property1Case{},
		"property_2": &[]Property2Case{},
	}
	if err := gen.Gen("<exercise-name>", j, t); err != nil {
		log.Fatal(err)
	}
}
  3. Insert the name of the exercise into the call of gen.Gen
  4. Add all values of the property field in canonical-data.json to the map j. canonical-data.json can be found at problem-specifications/exercises/<exercise-name>
  5. Create the structs needed for storing the test cases from canonical-data.json (you can, for example, use JSON-to-Go to convert the JSON to a struct)

NOTE: In some cases, the struct of the data in the input/expected fields is not the same for all test cases of one property. In those situations, an interface{} has to be used to represent the values for these fields. These interface{} values then need to be handled by the test generator. A common way to handle these cases is to create methods on the test case structs that perform type assertions on the interface{} values and return something more meaningful. These methods can then be referenced/called in the tmpl template variable. Examples of this can be found in the exercises forth or bowling.
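
A rough sketch of that approach, with entirely hypothetical field names and property, might look like this:

// bowlingCase is a hypothetical test-case struct whose Expected field can
// hold either a number (success) or an error object, as described above.
type bowlingCase struct {
	Description string      `json:"description"`
	Expected    interface{} `json:"expected"`
}

// Valid reports whether the expected value is a plain numeric score.
// JSON numbers unmarshal into float64, hence the type assertion.
func (c bowlingCase) Valid() bool {
	_, ok := c.Expected.(float64)
	return ok
}

// Score returns the expected score; only meaningful when Valid is true.
func (c bowlingCase) Score() int {
	score, _ := c.Expected.(float64)
	return int(score)
}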

  6. Add the variable tmpl to gen.go. This template will be used to create the cases_test.go file.

Example:

var tmpl = `package <package of exercise>

{{.Header}}

var testCases = []struct {
	description string
	input       int
	expected    int
}{ {{range .J.<property>}}
{
	description: {{printf "%q" .Description}},
	input:       {{printf "%d" .Score}},
	expected:    {{printf "%d" .Score}},
},{{end}}
}
`
  7. Synchronize the test cases using the exercise generator (as described in Synchronizing tests and instructions)
  8. Check the validity of the generated cases_test.go
  9. Use the generated test cases in the <exercise>_test.go file
  10. Check that .meta/example.go passes all tests

Synchronizing tests and instructions

The file .meta/tests.toml is used by configlet to keep track of which tests are implemented by the exercise.

To synchronize the exercise with problem-specifications and to regenerate the tests, navigate into the go directory and perform the following steps:

  1. Synchronize your exercise with exercism/problem-specifications using configlet:
$ configlet sync --update -e <exercise>

configlet synchronizes the following parts, if an update is needed:

  • docs: .docs/instructions.md, .docs/introduction.md
  • metadata: .meta/config.json
  • tests: .meta/tests.toml
  • filepaths: .meta/config.json

For further instructions check out configlet.

  2. Run the test case generator to update <exercise>/cases_test.go:
$ GO111MODULE=off go run exercises/practice/<exercise>/.meta/gen.go

NOTE: If you see the error json: cannot unmarshal object into Go value of type []gen.Commit when running the generator, you have probably been rate limited by GitHub. Try providing a GitHub access token with the flag -github_token="<Token>". Using the token will result in a higher rate limit. The token does not need any specific scopes, as it is only used to fetch information about commits.

You should see that some/all of the above files have changed. Commit the changes.

Synchronizing all exercises with generators

$ ./bin/run-generators <GitHub Access Token>

NOTE: If you see the error json: cannot unmarshal object into Go value of type []gen.Commit when running the generator, you have probably been rate limited by GitHub. Make sure you provided the GitHub access token as the first argument to the script as shown above. Using the token will result in a higher rate limit. The token does not need any specific scopes, as it is only used to fetch information about commits.

Managing the Go version

To easily manage the Go version in the go.mod file of every exercise, we can use gomod-sync, a tool written in Go that lives in the gomod-sync/ folder.

To update all go.mod files according to the config file (gomod-sync/config.json) run:

$ cd gomod-sync && go run main.go update

To check that all exercise go.mod files specify the correct Go version, run:

$ cd gomod-sync && go run main.go check

Pull requests

Pull requests are welcome. You forked, cloned, coded and tested, and you have something good? Awesome! Use git to add, commit, and push to your repository. Check out your repository on the web now. You should see your commit and the invitation to submit a pull request!

Click on that big green button. You have a chance to add more explanation to your pull request here, then send it. Looking at the exercism/go repository now instead of your own, you see this.

That inconspicuous orange dot is important! Hover over it (no, not on this image, on a real page) and you can see it's indicating that a CI build is in progress. After a few minutes (usually) that dot will turn green indicating that tests passed. If there's a problem, it comes up red:

This means you've still got work to do. Click on "details" to go to the CI build details. Look over the build log for clues. Usually error messages will be helpful and you can correct the problem.

Direction

Directions are unlimited. This code is fresh and evolving. Explore the existing code and you will see some new directions being tried. Your fresh ideas and contributions are welcome. ✨

Go icon

The Go logo was designed by Renée French, and has been released under the Creative Commons 3.0 Attributions license.


Issues

Make Hamming conform to official definition

From issue exercism/exercism#1867

Wikipedia says the Hamming distance is not defined for strings of different length.

I am not saying the problems cannot be different, but for such a well-defined concept it would make sense to stick to one definition, especially when the READMEs provide so little information about what is expected from the implementation.

Let's clean this up so that we're using the official definition.

Delete configlet binaries from history?

I made a really stupid choice a while back to commit the cross-compiled
binaries for configlet (the tool that sanity-checks the config.json
against the implemented problems) into the repository itself.

Those binaries are HUGE, and every time they change the entire 4 or 5 megs get
recommitted. This means that cloning the repository takes a ridiculously long
time.

I've added a script that can be run on travis to grab the latest release from
the configlet repository (bin/fetch-configlet), and travis is set up to run
this now instead of using the committed binary.

I would really like to thoroughly delete the binaries from the entire git
history, but this will break all the existing clones and forks.

The commands I would run are:

# ensure this happens on an up-to-date master
git checkout master && git fetch origin && git reset --hard origin/master

# delete from history
git filter-branch --index-filter 'git rm -r --cached --ignore-unmatch bin/configlet-*' --prune-empty

# clean up
rm -rf .git/refs/original/
git reflog expire --all
git gc --aggressive --prune

# push up the new master, force override existing master branch
git push -fu origin master

If we do this everyone who has a fork will need to make sure that their master
is reset to the new upstream master:

git checkout master
git fetch upstream master
git reset --hard upstream/master
git push -fu origin master

We can at-mention (@) all the contributors and everyone who has a fork here in this
issue if we decide to do it.

The important question though, is: Is it worth doing?

Do you have any other suggestions of how to make sure this doesn't confuse people and break their
repository if we do proceed with this change?

.travis.yml needs an update

I think the Go developers aren't going to maintain the "release" tag anymore. (Git might not handle moving a tag to a new commit the way Mercurial did.) So I think we'll have to maintain two numbered releases (which currently should be 1.3.3 and 1.4.1) in addition to tip.

update: Oh wait, I see we weren't even using "release". And it looks like gvm is smart enough to take the latest point release. Anyway, it's time for a version update; it should be 1.3 and 1.4.

Paasio issues

So I've been looking at the paasio test and it strikes me as an odd one. Take for example this bit from the example, one of the core functions:

func (wc *writeCounter) Write(p []byte) (int, error) {
    m, err := wc.w.Write(p)
    wc.wrl.Lock()
    wc.nwr += int64(m)
    wc.nwrops++
    wc.wrl.Unlock()
    return m, err
}

The test code calls this with a bytes.Buffer to write to (i.e. wc.w is a bytes.Buffer). Writing to that is not thread safe, which strongly suggests that io.Writer is not thread safe (otherwise bytes.Buffer would be buggy). But if wc.w.Write is not thread-safe then the whole Write function is not thread-safe and the use of a mutex in it is useless. Furthermore if this function is going to have race problems they're most likely going to be in the buffer write since incrementing an integer is so quick and costs so little time relatively to the buffer write that you need really bad luck to hit it.

Putting the wc.w.Write within the critical section is possible, but as I understand it, the point of the created code is to provide io.Reader and io.Writer wrappers that themselves satisfy io.Reader and io.Writer respectively. In that respect it's a bit odd that the Read and Write methods would be thread-safe if the interface doesn't require it.

Another issue I see with this code is that it's about mutexes and those are somewhat discouraged in Go. They're around for when you need them but you shouldn't use them unless you have a good reason for why channels won't do the job. So is this something we want to teach to newbies?

So I would like to ask, what do you think about this exercise?

new exercise: paasio

You are building a PaaS. And you need a way to bill customers based on network
and filesystem usage.

Create a wrapper for network connections and files that can report IO
statistics.

Hamming and Point mutations

Well this is sad. The way to implement point mutations is to import hamming and we can't do that yet. I would like this to be possible some day but I'm thinking there are lots of considerations and it's not a high priority currently. I propose postponing point mutations until we can import.

How to set up a local dev environment

See issue exercism/exercism#2092 for an overview of operation welcome contributors.


Provide instructions on how to contribute patches to the exercism test suites
and examples: dependencies, running the tests, what gets tested on Travis-CI,
etc.

The contributing document
in the x-api repository describes how all the language tracks are put
together, as well as details about the common metadata, and high-level
information about contributing to existing problems, or adding new problems.

The README here should be language-specific, and can point to the contributing
guide for more context.

From the OpenHatch guide:

Here are common elements of setting up a development environment you’ll want your guide to address:

Preparing their computer
Make sure they’re familiar with their operating system’s tools, such as the terminal/command prompt. You can do this by linking to a tutorial and asking contributors to make sure they understand it. There are usually great tutorials already out there - OpenHatch’s command line tutorial can be found here.
If contributors need to set up a virtual environment, access a virtual machine, or download a specific development kit, give them instructions on how to do so.
List any dependencies needed to run your project, and how to install them. If there are good installation guides for those dependencies, link to them.

Downloading the source
Give detailed instructions on how to download the source of the project, including common missteps or obstacles.

How to view/test changes
Give instructions on how to view and test the changes they’ve made. This may vary depending on what they’ve changed, but do your best to cover common changes. This can be as simple as viewing an html document in a browser, but may be more complicated.

Installation will often differ depending on the operating system of the contributor. You will probably need to create separate instructions in various parts of your guide for Windows, Mac and Linux users. If you only want to support development on a single operating system, make sure that is clear to users, ideally in the top-level documentation.

Add broad range of inputs for leap benchmark?

Right now the benchmark for the leap uses the actual test cases. This is not a very representative sample of years, and could skew the benchmark. How about using a range of numbers like 1000-2000 to get a good spread of values?

binary: should return an error or be removed

Pretty much any parsing function in Go should return an error along with the parsed result. The exception to this rule would be if every string can be parsed as valid input.

The standard library provides strconv.ParseInt which provides the functionality required by the exercise. It may just be a good idea to remove the exercise.

A secondary issue is that the function name is entirely inaccurate. Internally, all numbers are binary. "Decimal" pertains to the string representation of a number. ToDecimal would imply that it returns a string.

The function should be called "Parse" or "ParseInt".

gigasecond_test.go should be improved

People read the test and blindly make a silly 1e18 constant. Instead the first test should do something like if time.Duration(Gigasecond) == 1e9 * time.Second. The cast allows them to use a typed or untyped constant any way they want as long as it is convertible to a time.Duration and has the correct value. E.g. const Gigasecond = 1e18, const Gigasecond float64 = 1e18, const Gigasecond = 1e12 * time.Millisecond, etc would all pass.

The second test is also poor. Some people look at it and try to implement AddGigasecond using time.Unix. Instead it should do something that verifies that time's location hasn't been mucked with.
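
The first suggested check might be sketched like this (illustrative only, not the actual test file):

package gigasecond

import (
	"testing"
	"time"
)

// TestGigasecondValue sketches the suggested test: convert whatever constant
// the solver defined and compare it against a known time.Duration value.
func TestGigasecondValue(t *testing.T) {
	if time.Duration(Gigasecond) != 1e9*time.Second {
		t.Fatalf("Gigasecond = %v, want %v", time.Duration(Gigasecond), 1e9*time.Second)
	}
}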

gigasecond: use times (not dates) for inputs and outputs

A duration of a gigasecond should be measured in seconds, not
days.

The gigasecond problem has been implemented in a number of languages,
and this issue has been generated for each of these language tracks.
This may already be fixed in this track, if so, please make a note of it
and close the issue.

There has been some discussion about whether or not gigaseconds should
take daylight savings time into account, and the conclusion was "no", since
not all locations observe daylight savings time.

Meetup - 5th Monday

There is an interesting edge case in the meetup problem:
some months have five Mondays.

March of 2015 has five Mondays (the fifth being March 30th), whereas
February of 2015 does not, and so should produce an error.


Thanks, @JKesMc9tqIQe9M for pointing out the edge case.
See exercism.io#2142.

Clock: Improve messages and docs.

  1. Add comments to test program about value comparison, with helpful links.
  2. In test program, if == fails but DeepEqual succeeds, give message referring to comments.

Discussion at PR #121

Weird order of tests in config.json

config.json starts with leap instead of the traditional bob, but also has parallel-letter-frequency before bob. Bob is a trivial exercise while parallel-letter-frequency is somewhat more advanced (it requires using channels and goroutines).

Is there a particular logic to the order of exercises in this track?

Int type used in largest-series-product may be too small

The largest-series-product test has this in its test cases:

var tests = []struct {
    digits  string
    span    int
    product int
    ok      bool
}{
...
    {pe1k, 13, 23514624000, true}, // new PE problem
}

The problem here is that the number 23514624000 doesn't fit in an int32. In Go, the type int may be implemented as int32 or int64, most likely based on the machine word size. That means this will work on a 64-bit platform but will likely break at compile time on a 32-bit platform (since you're not allowed to have overflowing constants).

I'm not quite sure how to best fix this though. Changing the result size to int64 would make the most sense but would break user code.

Implement Go exercises

Copied from exercism/exercism#237


To bring the Go path up to the same standard as the Ruby Path the following exercises need to be converted:

  • bob
  • word-count
  • anagram
  • beer-song
  • nucleotide-count
  • rna-transcription
  • point-mutations superseded by hamming
  • phone-number
  • hamming
  • grade-school
  • robot-name
  • leap
  • etl
  • meetup
  • space-age Just multiplying constants. No metaprogramming or anything (like in Ruby).
  • grains
  • gigasecond Not worth it in Go?
  • triangle
  • scrabble-score
  • roman-numerals
  • binary
  • prime-factors
  • raindrops
  • allergies
  • strain
  • atbash-cipher
  • accumulate Not a good problem in Go
  • crypto-square
  • trinary
  • sieve
  • simple-cipher
  • octal
  • luhn
  • pig-latin
  • pythagorean-triplet
  • series
  • difference-of-squares
  • secret-handshake
  • linked-list go stlib has a doubly linked list
  • wordy
  • hexadecimal
  • largest-series-product
  • kindergarden-garden
  • binary-search-tree
  • matrix
  • robot (robot-movement has no example, robot-pivots has no example)
  • nth-prime
  • palindrome-products
  • pascals-triangle
  • say
  • sum-of-multiples
  • queen-attack
  • saddle-points
  • ocr-numbers

Add stub files to problems that have a testVersion

The testVersion is not documented anywhere, and is a bit tricky for people who are new to Go to figure out.

Since this is a book-keeping thing more than an idiomatic Go / interesting problem solving thing, I think it would make sense to provide a stub file for each of these, defining the test version (see discussion in #145)

  • bob
  • circular-buffer
  • clock
  • crypto-square
  • custom-set
  • food-chain
  • gigasecond
  • hamming
  • leap
  • ledger
  • paasio
  • tournament

Consider creating new Unicode exercise

Previous discussion at #195. The Unicode test case was removed from ledger to limit the scope of that exercise. But maybe a new exercise could be created around the idea of that test case. I imagine the original intent of the test case was to show the difference between the bytes of a string and the runes of a string. This is a good start. The Rosetta Code task String length introduces the additional concept of the graphemes of a string. Formatting text in a fixed-pitch font (the subject of ledger) requires yet another width concept ("cell" width, I think) when full-width asian characters are used. And there's more, but that's probably enough for a good exercise.

The given type in etl

The use of a given type in the tests of the etl exercise prevents introducing a new type in the production code:

cannot use tt.input (type given) as type legacy in function argument

Right now the only reasonable option is to use the underlying type map[int][]string.

Maybe the test should be modified?

A tiny bit of automated feedback in the Go track

I've been doing a lot of code reviews in the Go track over the past few weeks, and some things come up over and over again. A few of these things are pretty easy to detect mechanically, and so I started implementing code smell detectors in rikki-.

At the moment rikki is detecting three things:

  • the code is incorrectly formatted
  • the stub comments haven't been deleted
  • the build constraint is still there

Rikki submits a maximum of one comment per iteration so we don't overwhelm people.

I'm going to look at implementing other things as well, but my weekend is over, so it will have to wait.

Here are three screenshots of reviews that were submitted automatically:
build-constraint
gofmt
stub

Clock: New must return a pointer

The "Clock" exercise wants New() to return non pointer value and it's wrong in my opinion.
New() must return a pointer or a value which represents pointer like reflect.Value
So if:

c1 := New()
c2 := New()

c1 cannot be equal to c2 because they are different pointers.

ledger: handling unicode

While doing the ledger exercise, I noticed that both the example solution and initial implementation have a defect in how they count lines that are too long. The "Description" column of the ledger has width 25, so any string that has more than 25 characters should display the first 22 characters followed by "..."

And yet, the example solution and initial implementation both check for length greater than 27 characters, and cut at index 24 instead of 22.
https://github.com/exercism/xgo/blob/master/ledger/example.go#L44
https://github.com/exercism/xgo/blob/master/ledger/ledger.go#L98

You may ask, is it something to do with the padding in the string " | " that separates each column? No, it's not even that because there is in fact a test that tests for proper cutoffs: https://github.com/exercism/xgo/blob/master/ledger/ledger_test.go#L128

In fact it's because that is the only test that tests too-long descriptions and it happens to have two unicode codepoints that take up two bytes each, the two ö (C3 B6 in utf-8). Therefore, for that particular string, cutting at the 24th byte is cutting at the 22nd rune, and everything is correct.

I'm not sure if I'm comfortable leaving that code in example.go, because it's very much not generic - only works on strings with two more bytes than runes. I'm very tempted to add tests that more thoroughly test cutoffs, such as master...petertseng:ledger-unicode and then fix the code. Both the example and initial implementations currently fail:

--- FAIL: TestFormatLedgerSuccess (0.00s)
    ledger_test.go:347: FormatLedger for input named "overlong ascii descriptions" was expected to return...
        Date_______|_Description_______________|_Change
        01/01/2015_|_This_description_is_wa..._|________$1.00_

        ...but returned...
        Date_______|_Description_______________|_Change
        01/01/2015_|_This_description_is_way_..._|________$1.00_

(They cut the string at the wrong place and misalign the columns)

Thing is, if I add the tests, that means the initial implementation has to pass the tests too, and that means I have to also write bad code that happens to pass the tests. I'll consider whether that's a thing I want to do.

So while I think about that, I wanted to open an issue and ask what y'all think about it as well. Worth adding the tests? And ideas on how to write an intentionally-bad implementation that nevertheless passes tests?

As for a good implementation that works correctly, despite there being https://golang.org/pkg/unicode/utf8/#RuneCountInString , I'm not aware of something in Go that lets you substring at a certain rune count instead of a byte count. So that means I may have to iterate over the first 25 runes of the string (j := 0; for i, _ := range description { if (j == 25) { ... } ... j++ } or similar)
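
A sketch of that rune-based cutoff (illustrative, not the exercise's actual code) could look like this:

package ledger

import "unicode/utf8"

// truncateDescription is a hypothetical helper: if s is longer than width
// runes, keep the first width-3 runes and append "...".
func truncateDescription(s string, width int) string {
	if utf8.RuneCountInString(s) <= width {
		return s
	}
	count := 0
	for i := range s { // i is the byte offset at the start of each rune
		if count == width-3 {
			return s[:i] + "..."
		}
		count++
	}
	return s
}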

Tests for secret-handshake fails yet correct output

Hi,
I'm currently working on the exercism "secret handshake". Inputs 3 and 19 should have the same output ("double blink", "wink"), however according to the struct below the output is different. Is the test for 3 below correct?

var tests = []struct {
    code int
    h    []string
}{
    {1, []string{"wink"}},
    {2, []string{"double blink"}},
    {4, []string{"close your eyes"}},
    {8, []string{"jump"}},
    {3, []string{"wink", "double blink"}},
    {19, []string{"double blink", "wink"}},
    {31, []string{"jump", "close your eyes", "double blink", "wink"}},
    {0, nil},
    {-1, nil},
    {32, nil},
    {33, []string{"wink"}},
}

React adds extra requirement

This issue continues discussion that started at #188.

When I tried solving this exercise I was bothered by the final requirement of the test program, that a callback on a cell only be called once when both of the cell's inputs change. My issue is that my reading of the readme not only does not make this requirement, but even worse suggests an implementation that does not work with this requirement. The readme says "...allow a change to one value to automatically propagate to other values..." and then ..."allow for registering callbacks which get called when the value of a cell changes." This to me indicates an implementation where changes are propagated by callbacks.

Working the exercise, when I saw the final test fail I understood its intent but I felt I had been led down a garden path. I felt like the readme had indicated that the problem was to be implemented a certain way. But then after coding what the readme indicated, the test program says at the very end, "sorry you did it the way the readme suggested, now here's an extra requirement tacked on that invalidates all that, so you must now code something completely different." I felt cheated and the problem stopped being fun at that point. Now, this happens a lot in the real world of course. Stakeholders give you a set of requirements, then QA demands something that expands the scope of the project and that you feel was not within the intent of the original requirements. It struck me as devious, so I wondered if that was deliberate here, following the theme of the site.

As far as surprising behavior, if I had defined a cell with two inputs and I knew both inputs were changing, I would be alarmed to get only a single update. It's a difference of assumptions and the readme doesn't suggest either assumption.

My preferences in order would be

  1. Remove the test program requirement that a callback be called only once when both inputs change.
  2. Update the readme to be more explicit and less potentially misleading.
  3. Leave the problem devious.

And with this choices 2 or 3, perhaps add a new test case where changes to single inputs would produce a different result but changes to both inputs leave the result the same. Presumably you would not want a callback in this case. Like with b = a+1, c = a-1, d = b-c, d should always be 2 and a callback attached to d should not be called when a changes.

phone_number / example.go incorrect

In the phone-number example.go file, if a number has more than 11 digits but the first digit is a 1, it still parses as a valid number. The tests don't check for this condition, so it passes the tests; however, based on the description of the problem, this should not parse as valid.

Suggest changing the example.go as well as the test file.

Bonus solutions in examples or tests?

I implemented robot-name, and also did a bonus version and bonus test. Do we ever put these in source control? Is there a naming convention for them?

accumulate: should be removed

The function described is a bad pattern in Go. The mailing list will tell you repeatedly that writing 'map' and 'reduce' type functions in Go results in more obscure code with worse performance.

Idiomatic error handling

Several exercises demand very clunky and unidiomatic error handling; examples are binary, octal, and phone-number, but there are more.
This is a problem because these exercises teach users bad practices.

Would patches to improve these exercises be accepted?

Or should these exercises rather stay close to the original Ruby versions?

Tests for triangle can be solved by an empty implementation.

Hi,

I started working on this problem (triangle), only to notice that after I added stubs for entities in my code, all tests passed. Basically, my "solution" was this:

package triangle

func KindFromSides(a, b, c float64) Kind {
    return Kind{}
}

type Kind struct {
}

var Equ = Kind{} // equilateral
var Iso = Kind{} // isosceles
var Sca = Kind{} // scalene
var NaT = Kind{} // not a triangle

and all of the tests which I ran locally passed. I presume this is incorrect?

provide implementation file

As a new user, I found it confusing that I didn't have a blank file with the package name and the basic API to implement.

package problem_test

What do Gophers think of changing the package line on tests from package problem to package problem_test? It would force the good practice of exporting the API on both problem solvers and test writers. It validates that the exported API is usable. It also lets test code read like application code, somewhat testing the readability of the API. For example if a package defines just one major type and that type has a constructor function, it's idiomatic to name the function New. Test code thus currently has code like r := New() that might make you wonder if it should be more descriptive. If it said r := robot.New() though, you would feel more at ease. (If after changing the package declaration though, you discover you must write r := robotname.New() when really you are constructing a robot and not a name, then you notice the problem in the readability of the API.)

I think our current problem set is very close to exporting things properly already so changes would be minor.
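
A sketch of what a test in the external test package style might look like; the import path, constructor, and its return type are assumptions for illustration:

package robotname_test

import (
	"testing"

	"robotname" // illustrative import path; the real layout may differ
)

// With an external test package the test can only use the exported API, so
// calls read like application code: robotname.New() rather than a bare New().
func TestNew(t *testing.T) {
	if robotname.New() == nil { // assumes New returns a pointer (hypothetical)
		t.Fatal("robotname.New() returned nil")
	}
}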

Consider moving exercises to a subdirectory

This is not specific to xgo, but I noticed that there are a few subdirectories mingling with the exercises that probably should be separated from the actual exercises. Moving them into a /problems or /exercises or similar subdir could be a solution, but this will affect all the tracks.

bin
docs
error-handling
gen
img

Something to think about.

Issue using exercism tool to get the go exercises

I'm receiving the following error:
2015/11/19 14:21:15 error parsing API response - json: cannot unmarshal string into Go value of type []api.SubmissionInfo

after performing:
exercism fetch go leap

Common test data

I've been tossing around ideas for common test data that could be used across languages. We first need a place to put it. After that, if nothing else people could refer to it rather than a specific language (often Ruby) when implementing an exercise in a new language. Picking a format though would allow people to write programs to parse and use the test cases. X-common/stub already has stub.yml with a few pieces of information. I was thinking test cases could go right in that file.

To experiment with parsing the standard data, I modified a couple of Go exercises to store their test cases in a .yml file, convert the .yml to a .go file, and test against that. These experiments are in branches https://github.com/exercism/xgo/tree/bob-yml and https://github.com/exercism/xgo/tree/gigasecond-yml. The data file is example_gen.yml. If I remember, the client program does not transfer filenames starting with "example". The conversion program is example_gen.go. This program parses the .yml with a third-party library and generates Go data structures in an ad-hoc way. Its output is cases_test.go. This file, not starting with "example", does get copied by the client. It builds with the test program, providing test cases.

It seems to me other languages could do something similar. If it proves reasonable, we could put the yml in x-common and then begin to see benefits of common test data. The data should improve with "more eyes" and test programs in one language can benefit from lessons learned in another. For example I created the bob yml from Ruby test cases and was surprised when Go failed one of the tests. It seems that Go was written to check for spaces as silence and then at some later point Ruby began allowing tabs as silence as well.

There are details but that's the basic idea.

Add helpful information to the SETUP.md

The contents of the SETUP.md file get included in the README.md that gets delivered when a user runs the exercism fetch command from their terminal.

At the very minimum, it should contain a link to the relevant
language-specific documentation on
help.exercism.io.

It would also be useful to explain in a generic way how to run the tests.
Remember that this file will be included with all the problems, so it gets
confusing if we refer to specific problems or files.

Some languages have very particular needs in terms of the solution: nested
directories, specific files, etc. If this is the case here, then it would be
useful to explain what is expected.


Thanks, @tejasbubane for suggesting that we add this documentation everywhere.
See exercism.io#2198.

roman-numerals: should return an error

AFAIK there is no 'best' behavior for representing large numbers. The README admits this. And then there's zero. And negative numbers.

The signature should have an error as a second return value.

Test fail messages

Not a bug, but there's an interesting thread on go-nuts relevant for us where we want to display where returned results differ from wanted results. We use a mix of ad-hoc techniques currently. At some point we might want to try something like what they are talking about in this thread, maybe for consistent output, maybe to improve messages, maybe to help test program correctness or readability.

go test on clock fails on TestVersion

± go test
# _/home/ar/work/exercism/go/clock
./clock_test.go:32: undefined: TestVersion
./clock_test.go:33: undefined: TestVersion
FAIL    _/home/ar/work/exercism/go/clock [build failed]

go 1.4rc2

custom_set_test.go doesn't test Equal properly

Someone had s1.String() == s2.String(), which doesn't work since Go maps are unordered. However, the test appears to only ever try Equal on 0- or 1-item sets and didn't notice this.
