TIL
Today I Learned
License: MIT
Multiple `.go` files at the same level as `main.go` (or the `package main` file): found in confd and mostly used in command-line tool repos where all top-level files belong to `package main`. This structure allows you to simply build your binary using `go build .`. If you run only your `main.go`, you will see errors because Go cannot find the variables or funcs scattered across those other top-level `package main` files. To get around it, do `go run main.go lib1.go lib2.go args`.
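A minimal illustration (hypothetical file names):

```go
// main.go
package main

func main() {
	greet() // defined in lib1.go, also package main
}
```

```go
// lib1.go
package main

import "fmt"

func greet() {
	fmt.Println("hello from lib1.go")
}
```

With both files in the same directory, `go build .` compiles them together, while `go run main.go` alone fails with `undefined: greet`.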
Ref: http://stackoverflow.com/questions/21293000/go-build-works-fine-but-go-run-fails
See https://medium.com/@benbjohnson/structuring-applications-in-go-3b04be4ff091.
Ref: http://sethrobertson.github.io/GitFixUm/fixup.html#remove_deep
Rewriting commit history is VERY BAD and we should never do it at all, but if you wish to proceed: tell your team (or everyone who might have pulled the history) that history was rewritten, so they can `git pull --rebase` and do a bit of history rewriting of their own if they branched or tagged from the now-outdated history.
Also, be warned: if some of the commits between SHA and the tip of your branch are merge commits, it is possible that `git rebase -p` will be unable to properly recreate them. Inspect the resulting merge topology with `gitk --date-order HEAD ORIG_HEAD` and its contents to ensure that git did what you wanted. If it did not, there is not really any automated recourse. You can reset back to the commit before the SHA you want to get rid of, then cherry-pick the normal commits and manually re-merge the "bad" merges. Or you can just suffer with the inappropriate topology (perhaps creating fake merges with `git merge --ours otherbranch` so that subsequent development work on those branches will be properly merged in with the correct merge-base).
git log --graph --decorate --oneline # beautiful
git rebase -p --onto 5697c2a^ 5697c2a
git push -f
P.S. This can be prevented by protecting certain branches from force pushes; most Git hosting sites like Github and Gitlab have this feature (called protected branches).
Remove the first 2 lines from input:
docker images --no-trunc --format '{{.ID}} {{.Tag}}' \
| grep jenkins-foo \
| cut -d " " -f 1 \
| awk 'NR > 2 { print }' \
| xargs --no-run-if-empty docker rmi
The above scans for Docker images whose tag contains `jenkins-foo` and removes them, except the latest 2 images (`docker images` lists newest first, so `awk 'NR > 2'` skips the first 2 lines and those images are kept).
So you won't need global vars for db connections / global config stuff (this works with the default `http` package, so it should work with other packages too, i.e. `mux`).
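Here is a minimal sketch of the idea, assuming an `Env` struct that holds the dependencies (the names are hypothetical):

```go
package main

import (
	"database/sql"
	"fmt"
	"net/http"
)

// Env holds shared dependencies, so handlers don't rely on globals.
type Env struct {
	DB *sql.DB
}

// Handlers are methods on Env and reach the db via e.DB.
func (e *Env) helloHandler(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintln(w, "hello")
}

func main() {
	env := &Env{DB: nil} // wire up your real *sql.DB here
	http.HandleFunc("/hello", env.helloHandler)
	http.ListenAndServe(":8080", nil)
}
```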
Using sqlmock with sqlx was pretty straightforward.
mockDB, mock, err := sqlmock.New()
if err != nil {
	panic(err) // or t.Fatal(err) inside a test
}
defer mockDB.Close()
sqlxDB := sqlx.NewDb(mockDB, "sqlmock")
Later on I used the sqlmock function:
mock.ExpectExec("INSERT INTO baskets").WillReturnResult(sqlmock.NewResult(newID, 1))
for the sqlx query:
sqlxDB.Exec("INSERT INTO baskets (user_id, name, created_at, updated_at) VALUES (?, ?, ?, ?)", basket.UserID, basket.Name, timeNow, timeNow)
and I used the sqlmock function:
rows := sqlmock.NewRows([]string{"id", "user_id", "name", "created_at", "updated_at"}).
AddRow(1, userID, name, timeNow, timeNow)
mock.ExpectPrepare("^SELECT (.+) FROM baskets WHERE").ExpectQuery().WithArgs(userID).WillReturnRows(rows)
for the sqlx query:
sqlxDB.PrepareNamed("SELECT id, user_id , name, created_at, updated_at FROM baskets WHERE user_id = :user_id")
When using an anonymous function inside a `for` loop, you might see the warning `range variable i captured by func literal`.
var wg sync.WaitGroup
wg.Add(2 * len(itemIDs)) // both example goroutines below start per item
for i, e := range itemIDs {
// This works as expected: ii and ee are per-iteration copies
ii, ee := i, e
go func() {
defer wg.Done()
if err := GetStoryByID(ee, &items[ii]); err != nil {
log.Fatalf("Error: %s", err)
}
}()
// This WON'T work as expected: i and e are reused across iterations
go func() {
defer wg.Done()
if err := GetStoryByID(e, &items[i]); err != nil {
log.Fatalf("Error: %s", err)
}
}()
}
With each `go func()` statement we start a new goroutine. They run concurrently, meaning they do not run one after the other in an orderly fashion. They could in theory run one after the other, or they could all run at the same time (in parallel). Or maybe the "last one" runs first, then the "first one", and the "second one" runs last, or maybe... I think you get the point; it's unpredictable.
See http://stackoverflow.com/a/10743805/4328963
package main
import "fmt"
import "runtime"
func main() {
fmt.Println("Name of function: " + funcName())
x()
}
// Where the magic happens
func funcName() string {
pc, _, _, _ := runtime.Caller(1)
return runtime.FuncForPC(pc).Name()
}
func x() {
fmt.Println("Name of function: " + funcName())
}
Output:
Name of function: main.main
Name of function: main.x
http://askubuntu.com/a/578578/438116
You might need to install your video card's drivers. By default Ubuntu usually installs the open source version, which sometimes doesn't work well. You can find out if this is the case by typing Additional Drivers into the launcher search.
Select the most recent NVIDIA driver version (for my GTX 750 Ti it was `nvidia-361`), then click Apply Changes and reboot when finished.
If the NVIDIA driver is not installed at all, see the section below.
$ sudo add-apt-repository ppa:graphics-drivers/ppa
$ sudo apt-get update
$ sudo apt-get purge nvidia*
$ ubuntu-drivers devices
$ sudo apt-get install nvidia-361
$ sudo reboot
Ref: http://askubuntu.com/questions/451221/ubuntu-14-04-install-nvidia-driver/700613#700613
Last resort: try disabling "Use smooth scrolling" in Firefox > Advanced.
Ref: https://github.com/golang/go/wiki/TableDrivenTests
If you ever find yourself using copy and paste when writing a test, think about whether refactoring into a table-driven test or pulling the copied code out into a helper function might be a better option.
Given a table of test cases, the actual test simply iterates through all table entries and for each entry performs the necessary tests. The test code is written once and amortized over all table entries, so it makes sense to write a careful test with good error messages.
var flagtests = []struct {
	in  string
	out int
}{
	{"/", 404},
	{"/about", 200},
}

func TestAPIs(t *testing.T) {
	for _, tt := range flagtests {
		r, err := http.Get("http://10.88.102.47:8080" + tt.in)
		if err != nil {
			t.Error(err)
			continue
		}
		s := r.StatusCode
		r.Body.Close()
		if s != tt.out {
			t.Errorf("Get %q => %v, want %v", tt.in, s, tt.out)
		}
	}
}
go func() {
ticker := time.Tick(30 * time.Minute)
for range ticker {
// do something every 30 minutes
}
}()
http://stackoverflow.com/a/18060545/4328963
Delete all rows in `someTable` and reset the auto-increment column's counter:
TRUNCATE TABLE someTable RESTART IDENTITY;
Highlight PASS lines in `go test` output (the empty alternation after `|` matches every line, so nothing gets filtered out):
go test -v | egrep --color 'PASS|'
- `$(command)` captures the text sent to stdout by the command contained within.
- `return` does NOT output to stdout.
- `$?` contains the result code of the last command.

To capture the `return` value of a function, use `$?`. To capture output (i.e. `echo`), use `$(command)`.

In a `function` block, `local` variables that use `$(command)` to capture command output will NOT surface the command's exit status: `$?` reflects the `local` builtin itself (which succeeds), not the captured command:
function fun1(){
return 34
}
function fun2(){
local res=$(fun1)
echo $? # <-- Always echoes 0, since the 'local' command itself succeeds.
res=$(fun1)
echo $? # <-- Outputs 34
}
By default `http.Server` and `http.Client` are initialized with no timeout, and this leads to major issues when overlooked.
Do this instead:
// for Server
srv := &http.Server{
Addr: listenAddr,
Handler: handler,
ReadTimeout: 30 * time.Second,
WriteTimeout: 30 * time.Second,
MaxHeaderBytes: 1 << 20,
}
srv.ListenAndServe()
// for Client
netClient := &http.Client{
Timeout: 10 * time.Second,
}
response, _ := netClient.Get(url)
Use slashes to avoid having to escape quotes:
def goTestCmd = /'go test -v | go-junit-report > ${xunitReportFileName}'/
// output: 'go test -v | go-junit-report > report.xml'
See http://mrhaki.blogspot.com/2009/08/groovy-goodness-string-strings-strings.html
http://foo-o-rama.com/vagrant--stdin-is-not-a-tty--fix.html
Simply:
config.vm.provision "fix-no-tty", type: "shell" do |s|
s.privileged = false
s.inline = "sudo sed -i '/tty/!s/mesg n/tty -s \\&\\& mesg n/' /root/.profile"
end
Put this to your ~/.profile
:
env=~/.ssh/agent.env
agent_load_env () { test -f "$env" && . "$env" >| /dev/null ; }
agent_start () {
(umask 077; ssh-agent >| "$env")
. "$env" >| /dev/null ; }
agent_load_env
# agent_run_state: 0=agent running w/ key; 1=agent w/o key; 2= agent not running
agent_run_state=$(ssh-add -l >| /dev/null 2>&1; echo $?)
if [ ! "$SSH_AUTH_SOCK" ] || [ $agent_run_state = 2 ]; then
agent_start
ssh-add
elif [ "$SSH_AUTH_SOCK" ] && [ $agent_run_state = 1 ]; then
ssh-add
fi
unset env
If your ssh-agent
has not loaded yet:
eval $(ssh-agent)
$ docker run --name jenkins -d \
-p 50000:50000 -p 8080:8080 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /var/jenkins_home:/var/jenkins_home \
-v /etc/localtime:/etc/localtime:ro \
-e JAVA_OPTS="-Duser.timezone=ICT -Xmx1024m -Dhudson.model.DirectoryBrowserSupport.CSP=\"default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline';\"" \
--privileged \
--restart=unless-stopped \
gnhuy91/jenkins-dockerize
- `/var/run/docker.sock:/var/run/docker.sock` and `--privileged` allow the Jenkins container to spawn containers - #10
- `/etc/localtime:/etc/localtime:ro` and `-Duser.timezone=ICT` configure Jenkins's timezone display
- `-Xmx1024m` configures the memory heap size
- `-Dhudson.model.DirectoryBrowserSupport.CSP=\"default-src 'self'; script-src 'self' 'unsafe-inline' 'unsafe-eval'; style-src 'self' 'unsafe-inline';\"` allows Jenkins to display third-party HTML & JavaScript reports; otherwise it won't display your reports in build statuses.

By default a Docker container uses the UTC timezone; to make it aware of and use the host machine's timezone, use a shared volume:
-v /etc/timezone:/etc/timezone:ro
Get a commit's date:
git show -s --format=%ci <commit>

List commits between two dates:
git log --since="<date of commit1>" --until="<date of commit2>"

Combined:
git log \
--since="$(git show -s --format=%ci <commit1>)" \
--until="$(git show -s --format=%ci <commit2>)"
Use `head -n -1` to omit the `<commit1>` log entry from the output:
git log \
--pretty=format:"%h - %an, %ar : %s" \
--since="$(git show -s --format=%ci <commit1>)" \
--until="$(git show -s --format=%ci <commit2>)" \
| head -n -1
Ref: http://stackoverflow.com/questions/18679870/list-commits-between-2-commit-hashes-in-git
Github has a https://api.github.com/repos/:owner/:repo/releases endpoint that returns JSON with useful information:
$ export REPO=docker/compose
$ export URL=https://api.github.com/repos/$REPO/releases/latest
Latest tag:
$ curl -sL $URL | grep tag_name | cut -d '"' -f 4
1.8.0
Download url:
$ curl -sL $URL | grep browser_download_url | grep $(uname -s)-$(uname -m) | head -n 1 | cut -d '"' -f 4
https://github.com/docker/compose/releases/download/1.7.1/docker-compose-Linux-x86_64
Ref: https://developer.github.com/v3/repos/releases/#get-the-latest-release
Taken shamelessly from http://smarp.breezy.hr/p/4ca8a44f3036-backend-engineer:
- `EXPLAIN`
- `IN ANY VALUES(…), (…)…` and `IN ANY ARRAY[…]` (and possibly other similar keywords) in PostgreSQL
- `NOT IN`, `EXCEPT` and `NOT EXISTS` (and possibly other similar keywords) in PostgreSQL
- `= NULL` and `IS NULL` (and possibly other similar keywords) in PostgreSQL

Ref: https://docs.docker.com/registry/insecure/
Generate a self-signed certificate with `myregistrydomain.com` as a Common Name:
$ mkdir -p certs && openssl req -newkey rsa:4096 -nodes -sha256 -keyout certs/domain.key -x509 -days 365 -out certs/domain.crt
$ docker run -d -p 5000:5000 --restart=always --name registry \
-v $(pwd)/certs:/certs \
-e REGISTRY_HTTP_TLS_CERTIFICATE=/certs/domain.crt \
-e REGISTRY_HTTP_TLS_KEY=/certs/domain.key \
registry:2
$ sudo cp certs/domain.crt /etc/docker/certs.d/myregistrydomain.com:5000/ca.crt
Then restart the Docker daemon with sudo service docker restart, and test:
$ docker pull hello-world && docker tag hello-world localhost:5000/hello-world
$ docker run --rm localhost:5000/hello-world
A docker-compose.yml file may look like this:
registry:
restart: unless-stopped
image: registry:2
ports:
- 443:5000
environment:
REGISTRY_HTTP_TLS_CERTIFICATE: /certs/domain.crt
REGISTRY_HTTP_TLS_KEY: /certs/domain.key
volumes:
- ./data:/var/lib/registry
- ./certs:/certs
Port `443` is mapped to `5000` so you can do `docker run localhost/hello-world` without specifying port `5000`.
Here is something to automate. Assuming 10.88.102.47:8443
is your Registry's URI:
Generate the cert with a Subject Alternative Name (`[SAN]`) so you can pull docker images from other machines inside your network:
openssl req -newkey rsa:4096 -nodes -sha256 \
-keyout certs/domain.key -x509 -days 365 \
-out certs/domain.crt \
-reqexts SAN -config <(cat /etc/ssl/openssl.cnf \
<(printf "[SAN]\nsubjectAltName=IP:10.88.102.47")) \
-subj "/CN=10.88.102.47"
See http://www.shellhacks.com/en/HowTo-Create-CSR-using-OpenSSL-Without-Prompt-Non-Interactive for automating openssl keygen.
Copy the cert into /etc/docker/certs.d/:
sudo chown -R $USER /etc/docker
sudo mkdir -p /etc/docker/certs.d/10.88.102.47:8443
sudo cp certs/domain.crt /etc/docker/certs.d/10.88.102.47:8443/ca.crt
sudo chmod +r /etc/docker/certs.d/10.88.102.47:8443/ca.crt
sudo service docker restart
docker-compose up -d
On other machines that need to pull from this registry, copy the cert over and restart Docker:
sudo chown -R $USER /etc/docker
mkdir -p /etc/docker/certs.d/10.88.102.47:8443
ssh user@my-pc cat /etc/docker/certs.d/10.88.102.47:8443/ca.crt > /etc/docker/certs.d/10.88.102.47:8443/ca.crt
sudo service docker restart
`[[` has more features, fewer 'surprises' and is generally safer to use. But it is not portable: POSIX doesn't specify what it does and only some shells support it (besides bash, I heard ksh supports it too). For example, you can do

[[ -e $b ]]

to test whether a file exists. But with `[`, you have to quote `$b`, because `[` splits the argument and expands things like `"a*"` (where `[[` takes them literally). That also has to do with how `[` can be an external program that receives its arguments just like every other program (although it can also be a builtin, it still lacks this special handling).
`[[` also has some other nice features, like regular expression matching with `=~`, along with operators like those known in C-like languages. Here is a good page about it: What is the difference between test, [ and [[ ? and Bash Tests.
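A quick demo of the quoting difference (a sketch; the filename is made up):

```bash
#!/usr/bin/env bash
b="file with spaces.txt"
touch "$b"

# [[ does not word-split the unquoted variable:
[[ -e $b ]] && echo "[[ sees it"

# [ word-splits $b into three words and complains
# ("[: too many arguments") unless you quote it:
[ -e "$b" ] && echo "[ sees it (quoted)"

rm "$b"
```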
Kubernetes - also called K8s.
kubectl run nginx --image=nginx --port=80 --expose=true

After the above command, K8s will:
- create a deployment named nginx
- create a service named nginx (because --expose=true)

If you delete the pod or ReplicaSet of a deployment, a new one will be created right away due to the famous RestartPolicy.
By default if unspecified, a pod's RestartPolicy is set to Always, so if you want to remove a created pod, you have to `kubectl delete deployment nginx`.
After deleting the deployment with `kubectl delete deployment nginx`, try `kubectl get svc` and you will notice the service nginx is still there.
To delete both the deployment and the service, do:
kubectl delete deployment,svc nginx
This will copy git untracked files to another folder with the same folder structure, using `xargs -I{}`, where `{}` is later placed wherever you want the piped input to go.
$ git ls-files . --exclude-standard --others | xargs -I{} cp {} ~/my/dir/{}
https://www.dajobe.org/blog/2015/04/18/making-debian-docker-images-smaller/
TL;DR
- Use one RUN to prepare, configure, make, install and cleanup.
- Cleanup with apt-get remove --purge -y $INSTALL_PACKAGES $(apt-mark showauto) && rm -rf /var/lib/apt/lists/*
For Alpine:
ENV INSTALL_PACKAGES="git ca-certificates go"
RUN apk add --update --no-cache $INSTALL_PACKAGES \
# Do things here
&& echo "do things" \
# Cleanup
&& apk del --purge $INSTALL_PACKAGES \
&& rm -rf /var/cache/apk/*
To sum up: install whatever packages you need to perform your tasks, then remove those packages (or files) if they are not needed at runtime. E.g. install `wget` to download a binary for the entrypoint, then remove `wget` afterwards because the binary can run without it.
Check out my minimal images:
1. Mount the docker socket: -v /var/run/docker.sock:/var/run/docker.sock
2. Run the container with the --privileged flag
3. Install the docker client inside the container (curl -fsSL https://get.docker.com/ | sh)
The idea is simple: instead of doing the crazy "docker inside docker" stuff, we simply share the docker socket from the host machine with our containers.
Steps 1 + 2 ensure that your container connects to the correct socket and has extended privileges. Step 3 is important: your container now has access to the docker socket on your host, which means it has the right to list, start, stop and remove containers just like your host machine. However, it needs an interface to talk to the Docker daemon to perform those operations: the docker client.
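For a quick test of the socket-sharing idea, the official `docker` image (which ships the client, so step 3 is already done) can list the host's containers; using that particular image is an assumption here, any image with the docker client works:

```
docker run --rm \
  -v /var/run/docker.sock:/var/run/docker.sock \
  docker docker ps
```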
Check out my Jenkins container which can spawn other containers: https://github.com/gnhuy91/jenkins-dockerize and its `docker run` command: #9.
1. Use the `uaac` tool to log in to the UAA instance.
2. Register a Client with `uaac client add`.
3. The Client obtains an `access_token` by POSTing its client_id & client secret to the `uaa/oauth/token` endpoint.
4. The Client puts the `access_token` in the request header (`Authorization: Bearer <access_token>`) to send HTTP requests to (our) Web service.
5. (Our) Web service extracts the `access_token` from the request header and makes a POST request to the `uaa/check_token` endpoint (request body: `token=<access_token>`, Authorization: BasicAuth with the UAA instance's admin username & password).
6. The `uaa/check_token` endpoint then returns status 200 if the Client's `access_token` is valid, 400 if the token is invalid (expired / not authorized / etc.).
7. (Our) Web service allows or denies the request based on the validity of the `access_token`.
).Instead of:
echo requirepass ${REDIS_PASSWORD} > /tmp/redis.conf && redis-server /tmp/redis.conf
Simply:
redis-server <(echo requirepass ${REDIS_PASSWORD})
or this, if using `fish` instead of `bash`:
redis-server (echo requirepass $REDIS_PASSWORD | psub)
The `<(cmd)` construct returns the location of `cmd`'s result. In the above example, `echo requirepass ${REDIS_PASSWORD}` returns a string, and `<(echo requirepass ${REDIS_PASSWORD})` returns the location of that string.
This is extremely useful when chaining more commands, e.g.:
bash <(curl -fsSL https://raw.github.com/gnhuy91/dotfiles/master/bin/dotfiles) -y
This will first download a file, then hand it to bash as the 1st argument, along with `-y` as the 2nd argument.
Try `cat <(echo hello world)` or `ls -l <(echo hello world)` to get a better understanding.
Ref: http://stackoverflow.com/a/8636711/4328963
$ export HTTPS_PROXY=foof
$ sudo -E bash -c 'echo $HTTPS_PROXY'
Quote from the `sudo` man page:

-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing environment variables. The security policy may return an error if the user does not have permission to preserve the environment.
The trick is to add environment variables to the `sudoers` file: run `sudo visudo` and add these lines:

Defaults env_keep += "http_proxy"
Defaults env_keep += "https_proxy"
Defaults env_keep += "HTTP_PROXY"
Defaults env_keep += "HTTPS_PROXY"
The regular expression translates to: "rewrite all URIs without any '.' in them that don't end with a '/' to the URI + '/'". Or simply: "if the URI doesn't have a period and does not end with a slash, add a slash to the end".

The reason for only rewriting URIs without dots in them is so that any file with a file extension (for example your images, css, javascript, etc.) doesn't get rewritten, which also prevents possible redirect loops if you use some php framework that does its own rewrites.
rewrite ^([^.]*[^/])$ $1/ permanent;
Currently the above `rewrite` turns POST requests into GET requests, so I am not using it.
I don't understand `RLock`, why would one want to lock for reading? Reading doesn't mutate data, so concurrent reads are safe, isn't it?

Luna Duclos [3:29 PM]
No, it isn't
Reading while mutating data is not safe
You can't read at all while anyone is writing
hence the concept of an RLock
RLock will allow concurrent reads
But will not allow any reads while a write is going on
So all RLocks() will block while a Lock() has been taken
and continue on once it's been released
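A small runnable sketch of those semantics (the `Counter` type here is made up for illustration):

```go
package main

import (
	"fmt"
	"sync"
)

// Counter guards its state with a RWMutex.
type Counter struct {
	mu sync.RWMutex
	n  int
}

// Get takes a read lock: many Gets may run concurrently.
func (c *Counter) Get() int {
	c.mu.RLock()
	defer c.mu.RUnlock()
	return c.n
}

// Inc takes the write lock: it waits for all readers to release
// their RLocks, then excludes both readers and writers.
func (c *Counter) Inc() {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.n++
}

func main() {
	var c Counter
	var wg sync.WaitGroup
	for i := 0; i < 10; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			c.Inc()
			_ = c.Get()
		}()
	}
	wg.Wait()
	fmt.Println(c.Get()) // always 10, never a torn read
}
```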
http://stackoverflow.com/a/23962313/4328963
http://stackoverflow.com/questions/9181637/how-to-fill-proxy-information-in-cntlm-config-file
http://cntlm.sourceforge.net/
Generate password hashes with `cntlm -H -d your_domain -u your_username` (`-d` is optional). cntlm will prompt for your password and give you some hashes, something like this:
$ cntlm -H -d your_domain -u your_username
Password:
PassLM 4E9C185900C7CF0B6FFCB2044F81920C
PassNT 6E9F120B83EEA0E875CE8E6F9730EC9A
PassNTLMv2 2A0B7C2457FB7DD8DA4EB737C4FA224F
Now you have the password hashes. Save them somewhere, e.g. in a text editor.
Use `cntlm -M` to test your credentials; enter your password again when prompted. It will give you something like this:
$ cntlm -M http://www.google.com -u <your_username> <proxy_host>[:]<proxy_port>
Password:
Config profile 1/4... Credentials rejected
Config profile 2/4... OK (HTTP code: 302)
----------------------------[ Profile 1 ]------
Auth NTLM
PassNT 6E9F120B83EEA0E875CE8E6F9730EC9A
PassLM 4E9C185900C7CF0B6FFCB2044F81920C
------------------------------------------------
Now you can see that profile 2 is successful, because it says OK for profile 2. It may be different on your system.
Save the configuration to ~/cntlm.ini:
#
# Cntlm Authentication Proxy Configuration File
#
Username yourusername
Domain yourdomain
Auth NTLM
PassLM 4E9C185900C7CF0B6FFCB2044F81920C
PassNT 6E9F120B83EEA0E875CE8E6F9730EC9A
PassNTLMv2 2A0B7C2457FB7DD8DA4EB737C4FA224F
Workstation yourhostname.yourdomain
# Most probably proxy.yourdomain:8080
Proxy yourProxyIP:yourProxyPort
NoProxy localhost, 127.0.0.*, 10.*, 192.168.*
Listen 3132
Gateway yes
Start cntlm with `cntlm -c ~/cntlm.ini`, then point your applications at it (e.g. `export https_proxy=http://localhost:3132`).
).We (at Yelp) also use the git sha as a unique tag for images, but that's mostly for convenience (it's easy to figure out what code is running in the container). There are lots of other options for a unique tag.
Since you're using Jenkins, $BUILD_TAG is a good option. It should always be unique, and it lets you track the image back to the job that built it.

We would use image name and tag to identify the state of each image. During the first docker build step:

docker build -t ${package}:${env.BUILD_TAG} .
docker tag ${package}:${env.BUILD_TAG} ${package}:unstable

Pass the ${BUILD_TAG} value along to the following jobs in the jenkins pipeline, so they know which unique id to deploy and test. After the tests pass:
After deployment succeeds:
docker tag ${package}:${env.BUILD_TAG} ${package}:live
That way you can operate on the unique id, and you also get labels for the "latest" image that has passed each phase of the pipeline. If you need more than latest, I suppose you could use :${env.BUILD_TAG}-stable, :${env.BUILD_TAG}-live, etc., to keep track of state.
Other languages' switch-case feature does not exist in the Python world; we can use either an if/elif block or a dictionary:
def numbers_to_strings(argument):
return {
0: "zero",
1: "one",
2: "two"
}.get(argument, "nothing")
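For example, using the function above:

```python
>>> numbers_to_strings(1)
'one'
>>> numbers_to_strings(42)
'nothing'
```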
''.join(random.choice(string.ascii_uppercase + string.digits) for _ in range(N))
A more secure version; see http://stackoverflow.com/a/23728630/2213647:
''.join(random.SystemRandom().choice(string.ascii_uppercase + string.digits) for _ in range(N))
Using
random.SystemRandom()
instead of justrandom
uses/dev/urandom
on *nix machines andCryptGenRandom()
in Windows. These are cryptographically secure PRNGs. Usingrandom.choice
instead ofrandom.SystemRandom().choice
in an application that requires a secure PRNG could be potentially devastating, and given the popularity of this question, I bet that mistake has been made many times already.
In detail, with a clean function for further reuse:
>>> import string
>>> import random
>>> def id_generator(size=6, chars=string.ascii_uppercase + string.digits):
... return ''.join(random.choice(chars) for _ in range(size))
...
>>> id_generator()
'G5G74W'
>>> id_generator(3, "6793YUIO")
'Y3U'
By default a container's timezone will be UTC+0. In case we want to honor the localtime of the host machine, say GMT+8, simply include this:
-v /etc/localtime:/etc/localtime:ro
In case you're rocking Jenkins in a docker container, you should also include
-e JAVA_OPTS="-Duser.timezone=ICT"
See https://wiki.jenkins-ci.org/display/JENKINS/Change+time+zone.
TL;DR - If your text is short & simple, run away and use simple string formatting like `Printf` and `Sprintf`.
// Templates expect a struct as input for parsing
type DBInfo struct {
Protocol string
Username string
Password string
Host string
DBName string
}
dbinfo := DBInfo{
"postgres",
os.Getenv("POSTGRES_USER"),
os.Getenv("POSTGRES_PASSWORD"),
os.Getenv("POSTGRES_HOST"),
os.Getenv("POSTGRES_DB")}
// Define the template
tmpl, err := template.New("dbinfo").Parse("{{.Protocol}}://{{.Username}}:{{.Password}}@{{.Host}}/{{.DBName}}")
if err != nil {
panic(err)
}
// Execute the template into a bytes.Buffer
var b bytes.Buffer
err = tmpl.Execute(&b, dbinfo)
if err != nil {
panic(err)
}
// Convert the Buffer to String
dbURL := b.String()
fmt.Println(dbURL)
A great way to use middleware (alice style):
r.Get("/hello/{name}", alice.New(middleware.Junk).Then(http.HandlerFunc(HelloServer)))
chain := alice.New(middleware.Auth).Then(r)
http.ListenAndServe(":12345", chain)
- `middleware.Auth` will be called on every request
- `middleware.Junk` will only be called on `/hello/{name}`
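`middleware.Junk` and `middleware.Auth` above are placeholders; an alice-compatible middleware is just a `func(http.Handler) http.Handler`. A made-up example that would plug into the snippet above:

```go
// Junk wraps a handler and runs before it on every matched request.
func Junk(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("X-Junk", "1") // hypothetical behavior
		next.ServeHTTP(w, r)
	})
}
```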
Ref: https://talks.golang.org/2016/applicative.slide
This func will `Search` in parallel AND wait for all `Search` functions to finish, since `[]Result{<-c, <-c, <-c}` receives all 3 values that were sent to the channel earlier.
Receiving from a channel blocks until a value arrives. This allows us to wait at the end of our program/function for the channel values without having to use any other synchronization.
func SearchParallel(query string) ([]Result, error) {
c := make(chan Result)
go func() { c <- Web(query) }()
go func() { c <- Image(query) }()
go func() { c <- Video(query) }()
return []Result{<-c, <-c, <-c}, nil
}
This func will also `Search` in parallel, but instead of waiting for all `Search` calls to complete, each result is sent to the channel as soon as its goroutine finishes. Using `select`, we wait for all values simultaneously, appending each value to `results` as soon as it arrives, or give up when the timeout fires.
func SearchTimeout(query string, timeout time.Duration) ([]Result, error) {
timer := time.After(timeout)
c := make(chan Result, 3)
go func() { c <- Web(query) }()
go func() { c <- Image(query) }()
go func() { c <- Video(query) }()
var results []Result
for i := 0; i < 3; i++ {
select {
case result := <-c:
results = append(results, result)
case <-timer:
return results, errors.New("timed out")
}
}
return results, nil
}
$ bash <(curl -fsSL https://raw.github.com/gnhuy91/dotfiles/master/bin/dotfiles) -y
Ref: https://help.ubuntu.com/community/VirtualBox/Installation
sudo sh -c "echo 'deb http://download.virtualbox.org/virtualbox/debian '$(lsb_release -cs)' contrib non-free' > /etc/apt/sources.list.d/virtualbox.list" \
&& wget -q http://download.virtualbox.org/virtualbox/debian/oracle_vbox.asc -O- \
| sudo apt-key add - \
&& sudo apt-get update \
&& sudo apt-get install virtualbox-5.0