
TIL

Today I Learned


Go projects structure

Putting all .go files at the same level as main.go (the package main file)

Found in confd and mostly used in command-line tool repos where all top-level files belong to package main. This structure lets you build your binary with a simple go build . (note the trailing dot). If you go run main.go instead, you will see errors because go cannot find the variables or funcs scattered across the other top-level package main files. To get around it, do go run main.go lib1.go lib2.go args.
Ref: http://stackoverflow.com/questions/21293000/go-build-works-fine-but-go-run-fails
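For illustration, a minimal two-file package main might look like this (hypothetical example; greet exists only to show the point):

// main.go
package main

import "fmt"

func main() {
    // greet is defined in lib1.go, in the same package main.
    fmt.Println(greet("world"))
}

// lib1.go
package main

func greet(name string) string { return "hello " + name }

go build . compiles both files into one binary, while go run main.go alone fails with an "undefined: greet" error unless you also pass lib1.go.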

Some better structures

See https://medium.com/@benbjohnson/structuring-applications-in-go-3b04be4ff091.

Remove a pushed commit from history (dangerous!!!)

Ref: http://sethrobertson.github.io/GitFixUm/fixup.html#remove_deep


Loud WARNING

Rewriting commit history is VERY BAD and should generally never be done, but if you wish to proceed, tell your team (and everyone who might have pulled the history) that history was rewritten, so they can git pull --rebase and do a bit of history rewriting of their own if they branched or tagged from the now-outdated history.

Also be warned: if some of the commits between SHA and the tip of your branch are merge commits, git rebase -p may be unable to properly recreate them. Inspect the resulting merge topology (gitk --date-order HEAD ORIG_HEAD) and contents to ensure that git did what you wanted. If it did not, there is not really any automated recourse: you can reset back to the commit before the SHA you want to get rid of, then cherry-pick the normal commits and manually re-merge the "bad" merges. Or you can just live with the inappropriate topology (perhaps creating fake merges with git merge --ours otherbranch so that subsequent development work on those branches will be properly merged in with the correct merge-base).

How

  • Find the unwanted commit SHA
git log --graph --decorate --oneline  # beautiful
  • Rebase away the commit
git rebase -p --onto 5697c2a^ 5697c2a
  • Force push to the repository
git push -f

P.S. This can be prevented by protecting certain branches from force pushes; most Git hosting sites such as GitHub and GitLab have this feature (called protected branches).

Mocking sqlx pkg for testing

It was pretty straightforward.

mockDB, mock, err := sqlmock.New()
if err != nil {
    t.Fatalf("sqlmock.New: %s", err)
}
defer mockDB.Close()
sqlxDB := sqlx.NewDb(mockDB, "sqlmock")

Later on I used the sqlmock function:

mock.ExpectExec("INSERT INTO baskets").WillReturnResult(sqlmock.NewResult(newID, 1))

for the sqlx query:

sqlxDB.Exec("INSERT INTO baskets (user_id, name, created_at, updated_at) VALUES (?, ?, ?, ?)", basket.UserID, basket.Name, timeNow, timeNow)

and I used the sqlmock function:

rows := sqlmock.NewRows([]string{"id", "user_id", "name", "created_at", "updated_at"}).
            AddRow(1, userID, name, timeNow, timeNow)
mock.ExpectPrepare("^SELECT (.+) FROM baskets WHERE").ExpectQuery().WithArgs(userID).WillReturnRows(rows)

for the sqlx query:

sqlxDB.PrepareNamed("SELECT id, user_id , name, created_at, updated_at FROM baskets WHERE user_id = :user_id")
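Putting the pieces together, a complete test might look like this sketch (the baskets table and its columns come from the queries above; the concrete values, test name, and the DATA-DOG/go-sqlmock import path are assumptions):

package baskets

import (
    "testing"
    "time"

    "github.com/DATA-DOG/go-sqlmock"
    "github.com/jmoiron/sqlx"
)

func TestInsertBasket(t *testing.T) {
    mockDB, mock, err := sqlmock.New()
    if err != nil {
        t.Fatalf("sqlmock.New: %s", err)
    }
    defer mockDB.Close()
    sqlxDB := sqlx.NewDb(mockDB, "sqlmock")

    timeNow := time.Now()
    // Expect the INSERT and return a fake last-insert ID of 1, 1 row affected.
    mock.ExpectExec("INSERT INTO baskets").
        WillReturnResult(sqlmock.NewResult(1, 1))

    _, err = sqlxDB.Exec(
        "INSERT INTO baskets (user_id, name, created_at, updated_at) VALUES (?, ?, ?, ?)",
        42, "groceries", timeNow, timeNow)
    if err != nil {
        t.Errorf("Exec: %s", err)
    }

    // Make sure every expectation registered on the mock was actually hit.
    if err := mock.ExpectationsWereMet(); err != nil {
        t.Errorf("unmet expectations: %s", err)
    }
}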

Go - "range variable i captured by func literal"

When launching an anonymous function (goroutine) inside a for loop, you might see the vet warning range variable i captured by func literal.

for i, e := range itemIDs {
    // This works as expected: ii and ee are fresh copies for each iteration
    ii, ee := i, e
    go func() {
        defer wg.Done()
        if err := GetStoryByID(ee, &items[ii]); err != nil {
            log.Fatalf("Error: %s", err)
        }
    }()

    // This will NOT work as expected: i and e are shared across iterations
    go func() {
        defer wg.Done()
        if err := GetStoryByID(e, &items[i]); err != nil {
            log.Fatalf("Error: %s", err)
        }
    }()
}

Each go func().. statement starts a new goroutine. Goroutines run concurrently, which means they do not run one after the other in an orderly fashion. In theory they could run one after the other, or all at the same time (in parallel), or maybe the "last one" runs first and the "first one" last... you get the point: the order is unpredictable, so by the time a goroutine reads i and e, the loop may already have moved on to later values.

Ref: http://oyvindsk.com/writing/common-golang-mistakes-1
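Another common fix is to pass the loop variables as arguments to the function literal, so each goroutine gets its own copies. A sketch, assuming itemIDs is a slice of ints and wg has been set up as in the snippet above:

for i, id := range itemIDs {
    go func(i, id int) {
        defer wg.Done()
        if err := GetStoryByID(id, &items[i]); err != nil {
            log.Fatalf("Error: %s", err)
        }
    }(i, id) // arguments are evaluated here, per iteration
}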

Get current func name

See http://stackoverflow.com/a/10743805/4328963


package main

import "fmt"
import "runtime"

func main() {
    fmt.Println("Name of function: " + funcName())
    x()
}

// Where the magic happens
func funcName() string {
    pc, _, _, _ := runtime.Caller(1)
    return runtime.FuncForPC(pc).Name()
}

func x() {
    fmt.Println("Name of function: " + funcName())
}

Output:

Name of function: main.main
Name of function: main.x

Fix Firefox slow scrolling on Ubuntu

Check your video card driver

http://askubuntu.com/a/578578/438116

You might need to install your video card's drivers. By default Ubuntu usually installs the open-source version, which sometimes doesn't work well.

You can find out whether this is the case by typing Additional Drivers into the launcher search.

Select the most recent NVIDIA driver version (for my GTX 750 Ti it was nvidia-361), then click Apply Changes and reboot when finished.

If the NVIDIA driver is not installed, see the section below.

Install Nvidia driver

  • Add the graphics-drivers PPA
$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt-get update
  • Purge any existing nvidia-related packages you have installed
$ sudo apt-get purge nvidia*
  • Check which drivers are available for your system
$ ubuntu-drivers devices
  • Install the recommended driver
$ sudo apt-get install nvidia-361
  • Restart your system
$ sudo reboot

Ref: http://askubuntu.com/questions/451221/ubuntu-14-04-install-nvidia-driver/700613#700613

As a last resort, try disabling Use smooth scrolling under Firefox > Preferences > Advanced.

Table Driven Testing

Ref: https://github.com/golang/go/wiki/TableDrivenTests


Introduction

If you ever find yourself using copy and paste when writing a test, think about whether refactoring into a table-driven test or pulling the copied code out into a helper function might be a better option.

Given a table of test cases, the actual test simply iterates through all table entries and for each entry performs the necessary tests. The test code is written once and amortized over all table entries, so it makes sense to write a careful test with good error messages.

Example of a table driven test

var flagtests = []struct {
    in  string
    out int
}{
    {"/", 404},
    {"/about", 200},
}

func TestAPIs(t *testing.T) {
    for _, tt := range flagtests {
        r, err := http.Get("http://10.88.102.47:8080" + tt.in)
        if err != nil {
            t.Error(err)
            continue // no response to inspect
        }
        s := r.StatusCode
        r.Body.Close()

        if s != tt.out {
            t.Errorf("Get %q => %v, want %v", tt.in, s, tt.out)
        }
    }
}

Return values in bash script

$(command) captures the text sent to stdout by the command contained within.
return does NOT output to stdout.
$? contains the result code of the last command.

  • To capture return value of a function, use $?
  • To capture output of a function (echo), use $(command)

Inside a function, beware of combining local with $(command): $? then reflects the exit status of the local builtin (which succeeds), not the return code of the command being captured:

function fun1(){
  return 34
}

function fun2(){
  local res=$(fun1)
  echo $?  # <-- Always echoes 0, because it is the exit status of 'local'.

  res=$(fun1)
  echo $?  # <-- Echoes 34, fun1's return code.
}

Ref:

Go's net/http Client & Server timeout

TL;DR

By default, http.Server and http.Client are initialized with no timeouts at all, which leads to major issues (hung connections, leaked resources) when overlooked.

Do this instead:

// for Server
srv := &http.Server{
    Addr:           listenAddr,
    Handler:        handler,
    ReadTimeout:    30 * time.Second,
    WriteTimeout:   30 * time.Second,
    MaxHeaderBytes: 1 << 20,
}
srv.ListenAndServe()

// for Client
netClient := &http.Client{
    Timeout: 10 * time.Second,
}
response, _ := netClient.Get(url)

Auto-launching ssh-agent on Git for Windows

https://help.github.com/articles/working-with-ssh-key-passphrases/#auto-launching-ssh-agent-on-git-for-windows


Put this in your ~/.profile:

env=~/.ssh/agent.env

agent_load_env () { test -f "$env" && . "$env" >| /dev/null ; }

agent_start () {
    (umask 077; ssh-agent >| "$env")
    . "$env" >| /dev/null ; }

agent_load_env

# agent_run_state: 0=agent running w/ key; 1=agent w/o key; 2= agent not running
agent_run_state=$(ssh-add -l >| /dev/null 2>&1; echo $?)

if [ ! "$SSH_AUTH_SOCK" ] || [ $agent_run_state = 2 ]; then
    agent_start
    ssh-add
elif [ "$SSH_AUTH_SOCK" ] && [ $agent_run_state = 1 ]; then
    ssh-add
fi

unset env

If your ssh-agent has not loaded yet:

eval $(ssh-agent)

My Jenkins container's run command

$ docker run --name jenkins -d \
    -p 50000:50000 -p 8080:8080 \
    -v /var/run/docker.sock:/var/run/docker.sock \
    -v /var/jenkins_home:/var/jenkins_home \
    -v /etc/localtime:/etc/localtime:ro \
    -e JAVA_OPTS="-Duser.timezone=ICT -Xmx1024m -Dhudson.model.DirectoryBrowserSupport.CSP=\"default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline';\"" \
    --privileged \
    --restart=unless-stopped \
    gnhuy91/jenkins-dockerize
  • /var/run/docker.sock:/var/run/docker.sock and --privileged allows Jenkins container to spawn containers - #10
  • /etc/localtime:/etc/localtime:ro and -Duser.timezone=ICT configures Jenkins's timezone display
  • -Xmx1024m configures memory heap size
  • -Dhudson.model.DirectoryBrowserSupport.CSP=\"default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline';\" allows Jenkins to display third-party HTML & JavaScript reports; otherwise it won't display your reports in build statuses.
  • https://github.com/gnhuy91/jenkins-dockerize

List commits between 2 commit hashes

  • Show commit date
git show -s --format=%ci <commit>
  • Get logs between 2 commits (this includes both commits in the logs)
git log --since="<date of commit1>" --until="<date of commit2>" 
  • If you're too lazy to look up the dates
git log \
--since="$(git show -s --format=%ci <commit1>)" \
--until="$(git show -s --format=%ci <commit2>)"
  • Format for nicer output; also pipe through head -n -1 to omit <commit1>'s entry from the output
git log \
--pretty=format:"%h - %an, %ar : %s" \
--since="$(git show -s --format=%ci <commit1>)" \
--until="$(git show -s --format=%ci <commit2>)" \
| head -n -1

Ref: http://stackoverflow.com/questions/18679870/list-commits-between-2-commit-hashes-in-git

Github - Get latest release URL

GitHub has an https://api.github.com/repos/:owner/:repo/releases/latest endpoint that returns JSON with useful information:

$ export REPO=docker/compose
$ export URL=https://api.github.com/repos/$REPO/releases/latest

Latest tag:

$ curl -sL $URL | grep tag_name | cut -d '"' -f 4
1.8.0

Download url:

$ curl -sL $URL | grep browser_download_url | grep $(uname -s)-$(uname -m) | head -n 1 | cut -d '"' -f 4
https://github.com/docker/compose/releases/download/1.7.1/docker-compose-Linux-x86_64

Ref: https://developer.github.com/v3/repos/releases/#get-the-latest-release

Things to check with PostgreSQL

Taken shamelessly from http://smarp.breezy.hr/p/4ca8a44f3036-backend-engineer.


  • Tips to optimize SQL queries
  • Optimize all queries using EXPLAIN
  • The differences between IN (VALUES (…), (…), …) and = ANY (ARRAY[…]) (and possibly other similar constructs) in PostgreSQL
  • The differences between NOT IN, EXCEPT and NOT EXISTS (and possibly other similar keywords) in PostgreSQL
  • The differences between = NULL and IS NULL (and possibly other similar keywords) in PostgreSQL
  • The differences between subquery, CTE and temporary table (and possibly other similar keywords) in PostgreSQL in terms of usage, readability, performance, and other relevant aspects

Docker Registry TLS

Ref: https://docs.docker.com/registry/insecure/


  • Generate your own certificate, be sure to use the name myregistrydomain.com as a Common Name:
$ mkdir -p certs && openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
  • Use the result to start your registry with TLS enabled:
$ docker run -d -p 5000:5000 --restart=always --name registry \
  -v $(pwd)/certs:/certs \
  -e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
  -e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
  registry:2
  • Instruct every docker daemon to trust that certificate:
$ sudo cp certs/domain.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt
  • Don’t forget to restart the Engine daemon: sudo service docker restart.
  • Now you can play with your Registry:
$ docker pull hello-world && docker tag hello-world localhost:5000/hello-world
$ docker run --rm localhost:5000/hello-world
  • The final docker-compose.yml file may look like this:
registry:
  restart: unless-stopped
  image: registry:2
  ports:
    - 443:5000
  environment:
    REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
    REGISTRY_HTTP_TLS_KEY: /certs/domain.key
  volumes:
    - ./data:/var/lib/registry
    - ./certs:/certs

Port 443 is mapped to 5000 so you can do docker run localhost/hello-world without specifying port 5000.

Bonus

Here is something to automate. Assuming 10.88.102.47:8443 is your Registry's URI:

  • Generate new key with [SAN] so you can pull docker images from other machines inside your network
openssl req -newkey rsa:4096 -nodes -sha256 \
    -keyout certs/domain.key -x509 -days 365 \
    -out certs/domain.crt \
    -reqexts SAN -config <(cat /etc/ssl/openssl.cnf \
    <(printf "[SAN]\nsubjectAltName=IP:10.88.102.47")) \
    -subj "/CN=10.88.102.47"

See http://www.shellhacks.com/en/HowTo-Create-CSR-using-OpenSSL-Without-Prompt-Non-Interactive for automating openssl keygen.

  • Then move cert file to /etc/docker/certs.d/
sudo chown -R $USER /etc/docker
sudo mkdir -p /etc/docker/certs.d/10.88.102.47:8443
sudo cp certs/domain.crt /etc/docker/certs.d/10.88.102.47:8443/ca.crt
sudo chmod +r /etc/docker/certs.d/10.88.102.47:8443/ca.crt
sudo service docker restart
docker-compose up -d
  • Copy cert file to other machines
sudo chown -R $USER /etc/docker
mkdir -p /etc/docker/certs.d/10.88.102.47:8443
ssh user@my-pc cat /etc/docker/certs.d/10.88.102.47:8443/ca.crt > /etc/docker/certs.d/10.88.102.47:8443/ca.crt
sudo service docker restart

bash - [[ ]] vs [ ]

[[ has more features, fewer ‘surprises’ and is generally safer to use. But it is not portable: POSIX doesn’t specify what it does and only some shells support it (besides bash, ksh supports it too). For example, you can do

[[ -e $b ]]

to test whether a file exists. But with [, you have to quote $b, because [ splits the argument and expands things like "a*" (whereas [[ takes it literally). This is also related to how [ can be an external program that receives its arguments just like every other program (it can also be a builtin, but even then it gets no special parsing).

[[ also has some other nice features, like regular expression matching with =~, along with operators like those in C-like languages. Good pages about it: What is the difference between test, [ and [[ ? and Bash Tests.

Ref:

Getting started with Kubernetes

Kubernetes - also called K8s.

Running an image (deployment / service):

kubectl run nginx --image=nginx --port=80 --expose=true
  • have to explicitly provide port number
  • have to explicitly expose the port with --expose=true

After the above command, K8s will:

  • create a deployment named nginx
  • create a service named nginx (because --expose=true)

Remove deployments / services

If you delete a pod or the ReplicaSet of a deployment, a new one will be created right away due to the famous RestartPolicy.
By default, if unspecified, a pod's RestartPolicy is set to Always, so if you want to remove a created pod for good, you have to kubectl delete deployment nginx.

After deleting the deployment with kubectl delete deployment nginx, try kubectl get svc and you will notice the nginx service is still there.
To delete both the deployment and the service, do:

kubectl delete deployment,svc nginx

Making Docker images smaller

https://www.dajobe.org/blog/2015/04/18/making-debian-docker-images-smaller/

TL;DR

  1. Use one RUN to prepare, configure, make, install and cleanup.
  2. Cleanup with apt-get remove --purge -y $INSTALL_PACKAGES $(apt-mark showauto) && rm -rf /var/lib/apt/lists/*

For Alpine:

ENV INSTALL_PACKAGES="git ca-certificates go"
RUN apk add --update --no-cache $INSTALL_PACKAGES \
    # Do things here
    && echo "do things" \
    # Cleanup
    && apk del --purge $INSTALL_PACKAGES \
    && rm -rf /var/cache/apk/*

To sum up, install whatever packages you need to perform build tasks, then remove those packages (or files) if they are not needed at runtime, e.g. install wget to download a binary for the entrypoint, then remove wget afterwards because the binary can run without it.

Check out my minimal images:

Make your container spawn containers

How

Do these to your containers

  1. Share the docker socket -v /var/run/docker.sock:/var/run/docker.sock
  2. --privileged flag
  3. Install Docker inside your container (with curl -fsSL https://get.docker.com/ | sh)
  4. Optional - install docker-compose

Why

The idea is simple: instead of doing the crazy "Docker inside Docker" stuff, we simply share the Docker socket from the host machine with our containers.

Steps 1 + 2 ensure that your container connects to the correct socket and has extended privileges.

Step 3 is important: your container now has access to the Docker socket on your host, which means it has the right to list, start, stop and remove containers just like your host machine. However, it needs an interface to talk to the Docker daemon to perform those operations - the docker client.

References

Bonus

Check out my Jenkins container which can spawn other containers: https://github.com/gnhuy91/jenkins-dockerize and its docker run command: #9.

Cloud Foundry's UAA basic mechanism

  1. Create a UAA instance with an admin password.
  2. Use the uaac tool to log in to the UAA instance.
    1. Create a client with uaac client add.
    2. Fill in the client id & secret (password).
  3. The client gets an access_token by POSTing its client_id & client_secret to the uaa/oauth/token endpoint.
  4. The client uses the returned access_token in the request header (Authorization: Bearer + access_token) to send HTTP requests to (our) web service.
  5. The web service grabs the access_token from the request header and makes a POST request to the uaa/check_token endpoint (request body: token=access_token, Authorization: BasicAuth with the UAA instance's admin username & password) - see the sketch below.
  6. The uaa/check_token endpoint then returns status 200 if the client's access_token is valid, 400 if the token is invalid (expired/not authorized/etc.).
  7. The web service only processes the client's request if the UAA instance returns 200 (valid access_token).
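A minimal sketch of step 5 in Go (the uaaURL and admin credentials are assumed to come from configuration; error handling trimmed to the essentials):

package main

import (
    "net/http"
    "net/url"
    "strings"
)

// tokenIsValid POSTs the client's access_token to uaa/check_token using the
// UAA admin username & password as BasicAuth, and treats HTTP 200 as valid.
func tokenIsValid(uaaURL, adminUser, adminPass, accessToken string) (bool, error) {
    form := url.Values{"token": {accessToken}}
    req, err := http.NewRequest("POST", uaaURL+"/check_token", strings.NewReader(form.Encode()))
    if err != nil {
        return false, err
    }
    req.Header.Set("Content-Type", "application/x-www-form-urlencoded")
    req.SetBasicAuth(adminUser, adminPass)

    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return false, err
    }
    defer resp.Body.Close()
    // 200 => valid token, 400 => invalid/expired/not authorized.
    return resp.StatusCode == http.StatusOK, nil
}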

stdin: `<(...)` and `psub` is awesome

Instead of:

echo requirepass ${REDIS_PASSWORD} > /tmp/redis.conf && redis-server /tmp/redis.conf

Simply:

redis-server <(echo requirepass ${REDIS_PASSWORD})

or this if using fish instead of bash

redis-server (echo requirepass $REDIS_PASSWORD | psub)

<(cmd) expands to the location (a file path) of cmd's output. In the above example, echo requirepass ${REDIS_PASSWORD} produces a string, and <(echo requirepass ${REDIS_PASSWORD}) expands to the location of that string, which redis-server then reads as its config file.

This is extremely useful when chaining more commands, e.g.

bash <(curl -fsSL https://raw.github.com/gnhuy91/dotfiles/master/bin/dotfiles) -y

This first downloads the script, then passes it to bash as the 1st argument, with -y as the 2nd argument.

Try cat <(echo hello world) or ls -l <(echo hello world) to get more understanding.

Keep Env variables when using SUDO

Ref: http://stackoverflow.com/a/8636711/4328963

Quick & dirty way

$ export HTTPS_PROXY=foof
$ sudo -E bash -c 'echo $HTTPS_PROXY'

Quote from sudo man page:

-E, --preserve-env
             Indicates to the security policy that the user wishes to preserve their
             existing environment variables.  The security policy may return an error
             if the user does not have permission to preserve the environment.

Preferred way

The trick is to add the environment variables to the sudoers file: run sudo visudo and add these lines:

Defaults  env_keep += "http_proxy"
Defaults  env_keep += "https_proxy"
Defaults  env_keep += "HTTP_PROXY"
Defaults  env_keep += "HTTPS_PROXY"

Nginx - Add trailing slash to urls

See: http://stackoverflow.com/questions/645853/add-slash-to-the-end-of-every-url-need-rewrite-rule-for-nginx/3912675#3912675

The regular expression translates to: "rewrite all URIs without any '.' in them that don't end with a '/' to the URI + '/'". Or simply: "if the URI doesn't contain a period and does not end with a slash, add a slash to the end".
Rewriting only URIs without dots means files with an extension (your images, CSS, JavaScript, etc.) don't get rewritten, and it prevents possible redirect loops if you use a PHP framework that does its own rewrites.

rewrite ^([^.]*[^/])$ $1/ permanent;

NOTE

Currently the above rewrite turns POST requests into GET requests, so I am not using it; see:

Why one needs RWMutex.RLock()

I don't understand RLock - why would one want to lock for reading? Reading doesn't mutate data, so concurrent reads are safe, aren't they?

Luna Duclos [3:29 PM]
No, it isn't
Reading while mutating data is not safe
You can't read at all while anyone is writing
hence the concept of an RLock
RLock will allow concurrent reads
But will not allow any reads while a write is going on
So all RLocks() will block while a Lock() has been taken
and continue on once it's been released
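To make this concrete, here is a small illustrative sketch (not from the thread above): a counter map guarded by a sync.RWMutex, where reads take RLock and writes take Lock.

package main

import (
    "fmt"
    "sync"
)

type Counter struct {
    mu sync.RWMutex
    m  map[string]int
}

// Get takes a read lock: many Get calls may run concurrently,
// but none can run while Inc holds the write lock.
func (c *Counter) Get(key string) int {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return c.m[key]
}

// Inc takes the write lock: it waits for all readers and writers
// to finish, and blocks new ones until it is done.
func (c *Counter) Inc(key string) {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.m[key]++
}

func main() {
    c := &Counter{m: make(map[string]int)}
    var wg sync.WaitGroup
    for i := 0; i < 10; i++ {
        wg.Add(2)
        go func() { defer wg.Done(); c.Inc("hits") }()
        go func() { defer wg.Done(); _ = c.Get("hits") }()
    }
    wg.Wait()
    fmt.Println(c.Get("hits")) // 10
}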

CNTLM - Auto authorize corporate proxy

http://stackoverflow.com/a/23962313/4328963
http://stackoverflow.com/questions/9181637/how-to-fill-proxy-information-in-cntlm-config-file
http://cntlm.sourceforge.net/


  • Install cntlm
  • Type cntlm -H -d your_domain -u your_username (-d is optional).
    It will ask for your password. Enter it and cntlm will give you some hashes, something like this:
$ cntlm -H -d your_domain -u your_username
Password:
PassLM          4E9C185900C7CF0B6FFCB2044F81920C
PassNT          6E9F120B83EEA0E875CE8E6F9730EC9A
PassNTLMv2      2A0B7C2457FB7DD8DA4EB737C4FA224F

Now you have the password hashes. Save them in a text editor.

  • Type cntlm -M http://www.google.com -u <your_username> <proxy_host>[:]<proxy_port> to test your credentials. Enter your password again. It will give you something like this:
$ cntlm -M http://www.google.com -u <your_username> <proxy_host>[:]<proxy_port>
Password:
Config profile  1/4... Credentials rejected
Config profile  2/4... OK (HTTP code: 302)
----------------------------[ Profile  1 ]------
Auth            NTLM
PassNT          6E9F120B83EEA0E875CE8E6F9730EC9A
PassLM          4E9C185900C7CF0B6FFCB2044F81920C
------------------------------------------------

Now you can see that profile 2 is the successful one, because it says OK. It may be different on your system.

  • Save the above hashes to ~/cntlm.ini:
#
# Cntlm Authentication Proxy Configuration File
#

Username yourusername
Domain yourdomain

Auth NTLM
PassLM          4E9C185900C7CF0B6FFCB2044F81920C
PassNT          6E9F120B83EEA0E875CE8E6F9730EC9A
PassNTLMv2      2A0B7C2457FB7DD8DA4EB737C4FA224F

Workstation yourhostname.yourdomain

# Most probably proxy.yourdomain:8080
Proxy  yourProxyIP:yourProxyPort

NoProxy  localhost, 127.0.0.*, 10.*, 192.168.*

Listen  3132

Gateway yes
  • Run cntlm -c ~/cntlm.ini
  • Now you can use your computer's IP address and port 3132 as a proxy (export https_proxy=http://localhost:3132).

Docker Registry strategy - tagging builds for Jenkins

We (at Yelp) also use the git sha as a unique tag for images, but that's mostly for convenience (it's easy to figure out what code is running in the container). There are lots of other options for a unique tag.

Since you're using Jenkins, $BUILD_TAG is a good option. It should always be unique, and it lets you track the image back to the job that built it.

We would use image name and tag to identify the state of each image. During the first docker build step:

docker build -t ${package}:${env.BUILD_TAG} .
docker tag ${package}:${env.BUILD_TAG} ${package}:unstable

Pass the ${BUILD_TAG} value along to the following jobs in the Jenkins pipeline, so they know which unique ID to deploy and test. After the tests pass:

docker tag ${package}:${env.BUILD_TAG} ${package}:stable

After deployment succeeds:

docker tag ${package}:${env.BUILD_TAG} ${package}:live

That way you can operate on the unique id, and you also get labels for the "latest" image that has passed each phase of the pipeline. If you need more than latest, I suppose you could use :${env.BUILD_TAG}-stable, :${env.BUILD_TAG}-live, etc, to keep track of state.

Random string generation

Ref: http://stackoverflow.com/questions/2257441/random-string-generation-with-upper-case-letters-and-digits-in-python


''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(N))

A more secure version; see http://stackoverflow.com/a/23728630/2213647:

''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(N))

Using random.SystemRandom() instead of just random uses /dev/urandom on *nix machines and CryptGenRandom() in Windows. These are cryptographically secure PRNGs. Using random.choice instead of random.SystemRandom().choice in an application that requires a secure PRNG could be potentially devastating, and given the popularity of this question, I bet that mistake has been made many times already.

In details, with a clean function for further reuse:

>>> import string
>>> import random
>>> def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
...    return ''.join(random.choice(chars) for _ in range(size))
...
>>> id_generator()
'G5G74W'
>>> id_generator(3, "6793YUIO")
'Y3U'

Template to String

TL;DR - If your text is short & simple, run away and use simple string formatting like Printf and Sprintf.

package main

import (
    "bytes"
    "fmt"
    "os"
    "text/template"
)

// DBInfo is the data passed to the template when executing it
type DBInfo struct {
    Protocol string
    Username string
    Password string
    Host     string
    DBName   string
}

func main() {
    dbinfo := DBInfo{
        "postgres",
        os.Getenv("POSTGRES_USER"),
        os.Getenv("POSTGRES_PASSWORD"),
        os.Getenv("POSTGRES_HOST"),
        os.Getenv("POSTGRES_DB")}

    // Define the template
    tmpl, err := template.New("dbinfo").Parse("{{.Protocol}}://{{.Username}}:{{.Password}}@{{.Host}}/{{.DBName}}")
    if err != nil {
        panic(err)
    }

    // Execute the template into a bytes.Buffer
    var b bytes.Buffer
    err = tmpl.Execute(&b, dbinfo)
    if err != nil {
        panic(err)
    }

    // Convert the Buffer to a string
    dbURL := b.String()
    fmt.Println(dbURL)
}

Middleware on the whole router

A great way to use middleware (alice style):

r.Get("/hello/{name}", alice.New(middleware.Junk).Then(http.HandlerFunc(HelloServer)))

chain := alice.New(middleware.Auth).Then(r)
http.ListenAndServe(":12345", chain)
  • middleware.Auth will be called on every request
  • middleware.Junk will only be called on /hello/{name}
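For reference, an alice-compatible middleware is just a func(http.Handler) http.Handler. A hypothetical middleware.Auth could look like the sketch below (the header check is purely illustrative):

package middleware

import "net/http"

// Auth rejects requests without an Authorization header and passes
// everything else on to the next handler in the chain.
func Auth(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        if r.Header.Get("Authorization") == "" {
            http.Error(w, "unauthorized", http.StatusUnauthorized)
            return
        }
        next.ServeHTTP(w, r)
    })
}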

Go channels

Ref: https://talks.golang.org/2016/applicative.slide


  1. A pipe that accepts values of a specific type.
  2. You send values to a channel and receive values from it - sends and receives block until both the sender and receiver are ready.

This func searches in parallel AND waits for all Search functions to finish, since []Result{<-c, <-c, <-c} receives all 3 values that were sent to the channel earlier.
Receiving from a channel blocks until a value arrives. This lets us wait at the end of our program/function for the channel values without having to use any other synchronization.

func SearchParallel(query string) ([]Result, error) {
    c := make(chan Result)
    go func() { c <- Web(query) }()
    go func() { c <- Image(query) }()
    go func() { c <- Video(query) }()

    return []Result{<-c, <-c, <-c}, nil
}

This func also searches in parallel, but instead of waiting for every Search to complete, each result is sent to the channel as soon as its goroutine finishes. select waits on the result channel and the timer simultaneously, appending each value to results as soon as it arrives, and giving up once the timeout fires.

func SearchTimeout(query string, timeout time.Duration) ([]Result, error) {
    timer := time.After(timeout)
    c := make(chan Result, 3)
    go func() { c <- Web(query) }()
    go func() { c <- Image(query) }()
    go func() { c <- Video(query) }()

    var results []Result
    for i := 0; i < 3; i++ {
        select {
        case result := <-c:
            results = append(results, result)
        case <-timer:
            return results, errors.New("timed out")
        }
    }
    return results, nil
}
