microsoft/go
The Microsoft build of the Go toolset
License: BSD 3-Clause "New" or "Revised" License
It can be a little tedious to see that the tests failed and then scroll up in the log to find the cause(s), particularly with verbosity turned up.
If we can get the Go tests to show up in the AzDO UI, it might be easier to tell what happened at a glance. It might also be easier to track test results over time. (E.g. are certain tests flaky?)
I know there's a `go test -json` command, but I haven't looked into its output. We might need to modify `go tool dist test` to accept a parameter that passes `-json` to its `go test` subcommands. (Similar to how there isn't an existing flag to pass `-v` to subcommands.)
There are several places the os_arch is repeated:
go/microsoft/azure-pipelines.yml
Lines 41 to 42 in 79333bc
go/microsoft/azure-pipelines.yml
Lines 59 to 62 in 79333bc
go/microsoft/azure-pipelines.yml
Lines 68 to 69 in 79333bc
It would be bad to add more platforms the same way. I think this job should be turned into a template that accepts a list of `os_arch` values as a parameter. (That would also take the clutter out of the root `azure-pipelines.yml` file.)
This should be done before (or along with) windows_amd64 (#28).
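A sketch of what that template could look like (the file name, parameter name, and job naming here are hypothetical):

```yaml
# Hypothetical template file: microsoft/jobs.yml
parameters:
- name: platforms    # list of os_arch values, e.g. [linux_amd64, windows_amd64]
  type: object
  default: []

jobs:
- ${{ each platform in parameters.platforms }}:
  - job: Build_${{ platform }}
    # ...the current job's steps, parameterized on ${{ platform }}...

# The root azure-pipelines.yml would then shrink to:
# jobs:
# - template: microsoft/jobs.yml
#   parameters:
#     platforms: [linux_amd64, windows_amd64]
```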
I noticed "git-codereview gofmt checks and tests" mentioned in the .gitattributes, justifying blocking autocrlf in the repo:
Lines 1 to 14 in e3168d7
Maybe this is already running in the existing set of tests, but I'm not sure. Seems worth investigating to get to parity with upstream tests.
Long tests probably need to run as root, but others might not. Upstream's Linux builders seem to all run as root (except WSL), but we can try changing it.
Right now you can technically look for the URL that the build gets published to, but it's not easy. It seems worthwhile to add a step that prints out a URL of each artifact to make it easier to grab from a given build.
From #32 (comment)
I think that we should have a bot that automatically submits a PR to update Go when there's a new release.
I'm not sure if there's a nice way to do this--what comes to mind is a bot that polls https://golang.org/dl/ periodically and scrapes data. I haven't looked very far for a better source of this data, though. We could also manually trigger the bot when we hear about a release on golang-announce if the polling's slow.
Currently there are some bits we aren't reusing from .NET infrastructure:
- `dotnet` in the name.
- `dotnet-buildtools-prereqs-docker`.
- `.dotnet`.

We are reusing:
Filing this issue to keep track of which bits we are/aren't reusing and why. (May want to turn this into a checked-in doc at some point.)
Broken off from original issue tracking full coverage vs. upstream: #1
Something we can do to be a little more sure our tests are running properly is to look at the verbose logs, with a modification to make them more verbose and show `SKIP`s:
Here's what I did to make tests show skips:
diff --git a/src/cmd/dist/test.go b/src/cmd/dist/test.go
index 0c8e2c56bc..1248d9145b 100644
--- a/src/cmd/dist/test.go
+++ b/src/cmd/dist/test.go
@@ -271,7 +271,7 @@ func short() string {
 // defaults as later arguments in the command line.
 func (t *tester) goTest() []string {
 	return []string{
-		"go", "test", "-short=" + short(), "-count=1", t.tags(), t.runFlag(""),
+		"go", "test", "-v", "-short=" + short(), "-count=1", t.tags(), t.runFlag(""),
 	}
 }
@@ -350,6 +350,7 @@ func (t *tester) registerStdTest(pkg string, useG3 bool) {
 	}
 	args := []string{
 		"test",
+		"-v",
 		"-short=" + short(),
 		t.tags(),
 		t.timeout(timeoutSec),
@@ -388,6 +389,7 @@ func (t *tester) registerRaceBenchTest(pkg string) {
 	ranGoBench = true
 	args := []string{
 		"test",
+		"-v",
 		"-short=" + short(),
 		"-race",
 		t.timeout(1200), // longer timeout for race with benchmarks

~500 results for 'SKIP': https://gist.github.com/dagood/2bc08e37da295c2022b9196572d378a4
Results with 1 line of context (skip reason): https://gist.github.com/dagood/8105bad6aff4803794a4ed42c265a4fc
500 lines isn't small, but it isn't impossible. There are duplicate skip reasons, which helps.
Unfortunately, there is no baseline from upstream to check against. We need to reason out whether each one is a problem.
This is based on a golang-dev thread: https://groups.google.com/g/golang-dev/c/PNzwZXOe7bQ/m/M43Gl9mVDAAJ.
Running `src/run.bash` uses `go tool dist test`, which by default runs a bunch of `go test` commands with `-short` included on the command line. This skips a significant number of tests.
The goal seems to be running the same tests as `amd64 longtest` on https://build.golang.org. (More about `longtest` at golang/go#12508, https://go-review.googlesource.com/c/build/+/113436/)
Setting `GO_TEST_SHORT=false` disables `-short` and lets the full set of tests run, but there are some extra dependencies required and it doesn't work on my machine/container. (Mercurial, permissions, ...?)
A few common skip patterns are:
testing.Short() && os.Getenv("GO_BUILDER_NAME") == ""
testing.Short() && testenv.Builder() == ""
and they show up in helpers:
func MustHaveExternalNetwork(t testing.TB) {
if runtime.GOOS == "js" {
t.Skipf("skipping test: no external network on %s", runtime.GOOS)
}
if testing.Short() {
t.Skipf("skipping test: no external network in -short mode")
}
}
Running `go test` with `-v` makes it show `--- SKIP:` lines for these (and a lot of other output, too). It looks like you have to modify `dist/test` itself to get it to pass `-v`. (No existing feature like `GO_TEST_SHORT`.)
We need to put together the Windows prereqs and build Go on Windows.
We need `clang` or `gcc`, according to https://golang.org/doc/install/source#ccompiler.
Go distributes windows-amd64 binaries as an MSI and as a zip on https://golang.org/dl/.
We need to figure out how to maintain forked branches (and/or patches). We should find an approach that:
There are a bunch of ways to go; here are two extremes:
- A `microsoft/patches/*.patch` file that only gets applied during the build. All patches are disabled by a build arg when we want a baseline build.

Patch files are not great for dev workflows: viewing a diff between two versions of a patch file is difficult (`++`, `+-`, `-+`, `--`), and to work on a patch, you need to apply it, make changes, and extract it again. Merge conflicts with another dev would be particularly tricky. (Although with a low number of contributors, this is not a huge risk.)
We plan to start with `.patch` files. They are a simple way to start, and we have company: Debian uses patch files in `debian/patches/*.patch` extensively to fix bugs. We can relatively easily move to a different strategy later if the dev workflow ends up being totally unreasonable when the changes are more than bug fixes.
This showed up once in `linux-amd64-racecompile` in https://dev.azure.com/dnceng/internal/_build/results?buildId=1078273&view=results, then when I removed `racecompile`, it showed up in `linux-amd64-regabi` in https://dev.azure.com/dnceng/internal/_build/results?buildId=1078855&view=logs&j=17f0ed56-45c3-5f4e-2883-c1105f3261d7&t=be2aae34-cdee-5b9f-80ac-cc542ed93061&l=352:
--- FAIL: TestLookupGmailTXT (0.00s)
lookup_test.go:257: got [globalsign-smime-dv=CDYX+XFHUw2wml6/Gb8+59BsH31KzUr6c1l2BPvqKX8=]; want a record containing spf, google.com
lookup_test.go:257: got [globalsign-smime-dv=CDYX+XFHUw2wml6/Gb8+59BsH31KzUr6c1l2BPvqKX8=]; want a record containing spf, google.com
FAIL
FAIL net 12.192s
(In #52.)
It looks like these are network tests that request DNS info from `gmail.com`. In some cases these are only run during `longtest`, but it looks like that behavior is explicitly overridden here, enabling them (not skipping them) if it's running in any builder:
Lines 990 to 998 in bc0c82c
Related issues upstream: golang/go#29698, maybe golang/go#22857, golang/go#29722
When I try to run the tests on Windows on my work machine, I get this error:
--- FAIL: TestGroupIds (0.00s)
user_test.go:144: &{Uid:[...] Gid:[...] Username:NORTHAMERICA\dagood Name:Davis Goodin HomeDir:C:\Users\dagood}.GroupIds(): The user name could not be found.
FAIL
FAIL os/user 1.819s
Someone else hit this in 2018, and it ended up seeming to be a configuration issue on their machine. Another machine also using AAD worked: golang/go#26041. It doesn't look like they were able to figure out the underlying issue.
To check for AAD: https://stackoverflow.com/a/51852296
C:\Users\dagood>dsregcmd /status
+----------------------------------------------------------------------+
| Device State |
+----------------------------------------------------------------------+
AzureAdJoined : YES
EnterpriseJoined : NO
DomainJoined : NO
Device Name : dagood-3
[...]
For now, Sign and Publish are separate jobs that run in sequence. Sign runs on a specific build agent queue that's capable of signing, and Publish runs on a generic image provided by AzDO that has `az` installed.
We could combine the jobs and run both on the signing-capable machines. However, those machines don't have `az` installed, which `AzureCLI@2` needs. We would either need to upload to blob storage in some other way (via Arcade, an MSBuild task?), or get `az` installed on the signing build machines.
Broken off from original issue tracking full coverage vs. upstream: #1
For upstream, builders usually run on Debian stretch, but there are several other builders for some specific distros.
We may want to include them for parity with upstream to be even more sure about the compatibility of our patches before trying to upstream.
We should get a FIPS-compliant (or ideally certified) Ubuntu image in CI for good end-to-end coverage with FIPS work.
Normally, Go builds without PIE (Position Independent Executable) or other C-style security measures because these attacks are dealt with at a language/runtime level: https://groups.google.com/g/golang-nuts/c/Jd9tlNc6jUE. However, Go compiles to native binaries, so our SDL tooling (`binskim`) treats it like any other binary and scans for these security measures.
Cgo and `unsafe` may make it worthwhile to apply the C-style security measures to Go, but this is debatable.
- `-buildmode=pie`: https://golang.org/cmd/go/#hdr-Build_modes
- `relro` is automatically enabled by the `pie` build mode, since https://go-review.googlesource.com/c/go/+/22687/
- `-fstack-protector`: golang/go#21871

Go uses WiX in the release tool to make it: https://github.com/golang/build/blob/9b76b0884b63b8c7298cd24147f863f13df739f4/cmd/release/releaselet.go#L84
The signing infrastructure uses checked-in versions of MicroBuild and SignTool. We should figure out if we need to auto-update them, and how.
If we separate out the signing build pipeline from the Go pipeline, it makes auto-updates easier: #13.
To match upstream, the primary artifact from our official builds is a tar.gz file. We should also produce a Deb package, sign it, and host it in a package repository for more convenient installs and automatic upgrading.
The tool I'm most familiar with to do this is FPM (https://github.com/jordansissel/fpm), because .NET [Core] uses it to produce some Linux packages. This would make it easy to make RPM packages, too.
Also consider using a Go module to create the packages. It might be better to consistently use Go libraries rather than use a Ruby gem.
While this is a private repo, we need to use the `internal` project in our AzDO instance for auth reasons. The `internal` project needs to have a service connection that connects to `microsoft` repos to get the web hooks set up.
Once the repo is public, there's an existing "Microsoft" service connection in the `public` AzDO project we can use.
Fill this out:
Lines 1 to 10 in 0e8523d
Installing packages from the distro during CI can be slow and sometimes unreliable, so we should use an image with dependencies already installed. It should be able to run the full set of tests (`longtest`, see #1), the image should be hosted in a public ACR, and it should be based on Ubuntu (our first target).
We might want to set up a repo like https://github.com/dotnet/dotnet-buildtools-prereqs-docker. If we do, we should consider having it auto-update sometimes, for base image changes and updates to any packages we use from the distro.
Currently, there are no PDB files for Go. Debug information is in DWARF format:
BinSkim requires a PDB for each library, and has no way to turn this off:
Someone tried to extract a PDB file from a binary built on Windows with MinGW, but that didn't work:
Running BinSkim on our windows-amd64 build gets this bunch of errors for each `exe` file:
error ERR997.ExceptionLoadingPdb : BA2002 : 'go.exe' was not evaluated for check 'DoNotIncorporateVulnerableDependencies' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2006 : 'go.exe' was not evaluated for check 'BuildWithSecureTools' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2007 : 'go.exe' was not evaluated for check 'EnableCriticalCompilerWarnings' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2011 : 'go.exe' was not evaluated for check 'EnableStackProtection' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2013 : 'go.exe' was not evaluated for check 'InitializeStackProtection' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2014 : 'go.exe' was not evaluated for check 'DoNotDisableStackProtectionForFunctions' because its PDB could not be loaded.
error ERR997.ExceptionLoadingPdb : BA2024 : 'go.exe' was not evaluated for check 'EnableSpectreMitigations' because its PDB could not be loaded.
That happens even when I use a `--config` file to configure all those rules (`BA*`) not to run. It seems that you can turn off rules, but this is an "error" with no way to ignore it.
<Properties Key="BA2002.DoNotIncorporateVulnerableDependencies.Options" Type="PropertiesDictionary">
<Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
</Properties>
<Properties Key="BA2006.BuildWithSecureTools.Options" Type="PropertiesDictionary">
<Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
</Properties>
<Properties Key="BA2007.EnableCriticalCompilerWarnings.Options" Type="PropertiesDictionary">
<Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
</Properties>
<Properties Key="BA2011.EnableStackProtection.Options" Type="PropertiesDictionary">
<Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
</Properties>
<Properties Key="BA2013.InitializeStackProtection.Options" Type="PropertiesDictionary">
<Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
</Properties>
<Properties Key="BA2014.DoNotDisableStackProtectionForFunctions.Options" Type="PropertiesDictionary">
<Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
</Properties>
<Properties Key="BA2024.EnableSpectreMitigations.Options" Type="PropertiesDictionary">
<Property Key="RuleEnabled" Value="Disabled" Type="Driver.RuleEnabledState" />
</Properties>
I think I can entirely disable scanning the `exe` files with BinSkim to get around it temporarily, but this is obviously not acceptable in the long run.
Options that come to mind:
- Make `ERR997` ignorable.
- Check whether disabling `BA2002`, `BA2006`, ... `BA2024` stops BinSkim from trying and failing to load the PDB.

Right now, the auto-sync script force pushes to the PR branch, then tries to create a PR, then (if PR creation fails) tries to find the PR so it can re-enable auto-merge.
It should either do nothing if the PR exists, or detect whether or not it makes sense to push the new commit to the old PR.
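A sketch of the "check before pushing/creating" flow, using GitHub's pulls-list endpoint (`GET /repos/{owner}/{repo}/pulls?state=open&head={owner}:{branch}`); the owner/repo/branch values below are placeholders:

```go
// Sketch: have the auto-sync script look for an existing open PR before
// force-pushing and creating a new one. Owner/repo/branch are placeholders.
package main

import (
	"encoding/json"
	"fmt"
	"io"
	"net/http"
)

type pullRequest struct {
	Number int `json:"number"`
}

// firstPRNumber decodes the JSON array returned by the pulls-list endpoint
// and returns the first open PR's number, or 0 if there is none.
func firstPRNumber(body []byte) (int, error) {
	var prs []pullRequest
	if err := json.Unmarshal(body, &prs); err != nil {
		return 0, err
	}
	if len(prs) == 0 {
		return 0, nil
	}
	return prs[0].Number, nil
}

func main() {
	const owner, repo, branch = "microsoft", "go", "auto-sync-branch" // placeholders
	url := fmt.Sprintf("https://api.github.com/repos/%s/%s/pulls?state=open&head=%s:%s",
		owner, repo, owner, branch)
	resp, err := http.Get(url)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	num, err := firstPRNumber(body)
	if err != nil {
		panic(err)
	}
	if num != 0 {
		fmt.Println("PR already open:", num, "- skip push/create, ensure auto-merge")
		return
	}
	fmt.Println("no open PR - push branch and create one")
}
```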
The build script currently downloads, verifies, and extracts the Go linux-amd64 binary release 1.16 to `~/.go-stage-0/1.16`. This is used in CI, and it can be used locally to get a build going on a machine with minimal dependencies.
There are a few things to improve:
- It should detect if Go is already installed and (optionally) use that.
- It should download Go somewhere under the repository directory, not `$HOME`, so it gets cleaned up by `git clean -xdf`.
- The extraction directory could simply be named `go`.
--- FAIL: TestAllDependencies (6.36s)
--- FAIL: TestAllDependencies/std(quick) (1.80s)
moddeps_test.go:63: /work/bin/go list -mod=vendor -deps ./...: exit status 1
package std/bytes
bytes/bytes.go:10:2: use of internal package internal/bytealg not allowed
[...]
moddeps_test.go:64: (Run 'go mod vendor' in /work/microsoft/.go-stage-0/go/src to ensure that dependencies have been vendored.)
FAIL
FAIL cmd/internal/moddeps 6.405s
This issue is referenced in source.
We plan to mirror branches from https://go.googlesource.com/go to this repo with their original names. Forked branches will begin with `microsoft/` to avoid collisions.
This issue tracks creating automation to push branch updates to this repo from upstream.
The CLA bot should already be running because this is in the Microsoft org (https://docs.opensource.microsoft.com/tools/cla.html), but we should make sure.
The official build (internal build that produces artifacts) doesn't need to run tests, because the commit has been validated already. It should do the minimum work to produce artifacts to improve turnaround time.
If we don't have rolling builds, we might not want to do this. There's a chance that two PRs merged closely together conflict in a way that the tests catch but Git doesn't. We should have some way to detect this. Maybe running `-short` tests is sufficient?
If we call `run.bash -json`, it passes args through to `dist test` (`"$@"`), making it emit JSON test events for `gotestsum` to parse in CI:
Line 56 in 6985bbf
`run.bat` doesn't do this, so `-json` (and therefore `gotestsum`) is ineffective:
Line 43 in 6985bbf
As of writing, I'm going to make CI call `go tool dist test -json` directly rather than use `src/run.bat` at all. Filing this issue to track context and to link in a code comment.
When developing on upstream, you'd normally use `src/run.bat` to build/test on Windows. If we ever make any changes to our repo that don't work with `src/run.bat`, we might not detect it until we try to upstream the changes. I don't think this is a big risk: it seems unlikely to me that our changes would have this kind of effect, and it's not hard to try it out locally before submitting if it seems like a change would cause a problem.
If we end up wanting to fix this, we could either use `src/run.bat` as-is and give up `gotestsum` results for the `windows-amd64-devscript` job, or we could patch `src/run.bat`.
I've started running `golint` manually on new scripts I put in `microsoft/`, but it would be nice to have this as a PR acceptance test. This could make it nicer to run locally, too.
The current plan is that the Go files will continue with the existing BSD-style license, and any changes we make to those files will be under the BSD-style license. Infrastructure-only files checked into the `microsoft/` dir will be licensed under the MIT license.
This should be clearly explained in the project docs.
Similar content to linux-amd64 tar.gz.
Building for windows-amd64 is tracked by #27.
Applying SDL tasks is WIP on an internal branch, as of writing.
I'm preemptively filing this issue to track the combination of the two: make sure the windows-amd64 assets are processed properly by the SDL tasks. There may be different requirements/issues that come up for Windows vs. Linux.
This builder is very special: it installs (compiles) the compiler and linker packages with `-race`, then rebuilds Go again using those race-detecting compiler bits, instead of running any ordinary tests. This is done to test concurrent compilation for races.
https://github.com/golang/build/blob/83a8520724285855120f774cc4a7b57540a1d50b/dashboard/builders.go
Name: "linux-amd64-racecompile",
HostType: "host-linux-jessie",
tryBot: nil, // TODO: add a func to conditionally run this trybot if compiler dirs are touched
CompileOnly: true,
SkipSnapshot: true,
StopAfterMake: true,
InstallRacePackages: []string{"cmd/compile", "cmd/link"}, ...
func (c *BuildConfig) GoInstallRacePackages() []string {
if c.InstallRacePackages != nil {
return append([]string(nil), c.InstallRacePackages...)
}
if c.IsRace() {
return []string{"std"}
}
return nil
}
func (gb GoBuilder) RunMake(ctx context.Context, bc *buildlet.Client, w io.Writer) (remoteErr, err error) {
...
// Need to run "go install -race std" before the snapshot + tests.
if pkgs := gb.Conf.GoInstallRacePackages(); len(pkgs) > 0 {
sp := gb.CreateSpan("install_race_std")
remoteErr, err = bc.Exec(ctx, path.Join(gb.Goroot, "bin/go"), buildlet.ExecOpts{
Output: w,
ExtraEnv: append(gb.Conf.Env(), "GOBIN="),
Debug: true,
Args: append([]string{"install", "-race"}, pkgs...),
})
...
if gb.Name == "linux-amd64-racecompile" {
return gb.runConcurrentGoBuildStdCmd(ctx, bc, w)
}
// runConcurrentGoBuildStdCmd is a step specific only to the
// "linux-amd64-racecompile" builder to exercise the Go 1.9's new
// concurrent compilation. It re-builds the standard library and tools
// with -gcflags=-c=8 using a race-enabled cmd/compile and cmd/link
// (built by caller, RunMake, per builder config).
// The idea is that this might find data races in cmd/compile and cmd/link.
func (gb GoBuilder) runConcurrentGoBuildStdCmd(ctx context.Context, bc *buildlet.Client, w io.Writer) (remoteErr, err error) {
The outputs of the build need to be signed.
What we're able to do right now is produce a detached signature file (`.sig`) that can be checked against the `.tar.gz` file using `gpg --verify`/`gpgv`.
There are some other approaches out there. For example, "IMA appraisal" verifies a signature in xattr (filesystem extended attributes). I don't know if the signing infra we have access to can support this, or if IMA appraisal will actually be used for our Go builds. These can be tracked separately, but we need info on who would use one of these approaches.
...
97 123M 97 120M 0 0 36.5M 0 0:00:03 0:00:03 --:--:-- 44.1M
curl: (18) transfer closed with 2912825 bytes remaining to read
...
Filing this issue to keep track in case it ends up happening often.
A long-term fix is to pre-install Go into a build prereqs Dockerfile: #5.
go/microsoft/azure-pipelines.yml
Line 39 in fb31570
`$(Build.SourceBranchName)` is misleading: with the ref name `refs/heads/microsoft/main`, it gives only the last segment, `main`. (This is documented AzDO behavior.) I would call the branch's name `microsoft/main`.
This can cause overlap if someone runs a dev build of a branch called e.g. `dev/dagood/do-foo/main`. I sometimes would do this if I planned to develop the same change for multiple branches (`dev/dagood/do-foo/release-branch.go1.16`). The `main` overlap would mean that my build output shows up indistinguishable from a `microsoft/main` build, if you only look at blob storage.
We handled this in .NET (runtime/setup) with a step like this:
- powershell: |
$prefix = "refs/heads/"
$branch = "$(Build.SourceBranch)"
$branchName = $branch
if ($branchName.StartsWith($prefix))
{
$branchName = $branchName.Substring($prefix.Length)
}
Write-Host "For Build.SourceBranch $branch, FullBranchName is $branchName"
Write-Host "##vso[task.setvariable variable=FullBranchName;]$branchName"
displayName: Find true SourceBranchName
Looks like Roslyn uses a one-liner that assumes it'll always have a `refs/heads/` prefix:
- powershell: Write-Host "##vso[task.setvariable variable=SourceBranchName]$('$(Build.SourceBranch)'.Substring('refs/heads/'.Length))"
displayName: Setting SourceBranchName variable
condition: succeeded()
As of writing, our build scripts are standalone Go files. They have no dependencies and don't live in a module. This is convenient, but means we have some code that would probably be better to replace with a library. (Git command interactions, GitHub API.) If we put the scripts into a module, we could add dependencies, with nice tooling around downloading them on demand and verifying them.
I tried it for #71 (using `gotestsum` as a dependency to convert `go tool dist test` output to a format that works in AzDO), but hit a few problems with tests:
- There's a `TestAllDependencies` test that scans the repo for all modules (including the `microsoft` dir) and enforces that there are no dependencies that would require downloading. The goal is that someone can build the Go repo offline. Fixable by running `go mod vendor`, which copies the source code of the dependencies into the repo: ~400 files. (The test failure message had this as a suggestion.)
- `TestDependencyVersionsConsistent` requires that every `go.mod` file in the repo depend on the same version of each library, if present. `gotestsum` has some transitive dependencies with different versions from the others in the repo, so it failed.
These fixes would cause a fork maintenance problem... we'd need to update our pins and the vendored code every time upstream updates the version numbers, which seems fairly frequent:
https://github.com/golang/go/blame/master/src/go.mod
https://github.com/golang/go/blame/master/src/cmd/go.mod
We would hit test failures in sync PRs that bring in a change to these dependencies.
Alternative fixes:
- `gotestsum` brings in a lot of dependencies we don't use; we only need a very small bit of what it has to offer. We could probably write our own simplified `go test -json` -> JUnit XML converter.
- Install `gotestsum` as a command-line tool on the build machine and use it that way. (This is how I had the PR set up initially.) This is super easy to include conditionally; we can dynamically turn it off when we don't want an online dependency.
I prefer removing dependencies to keep the project as simple as possible. Storing everything in one Git tree is great to avoid managing cross-repo dependencies.
While trying to build the `sync` util, the build failed:
# github.com/microsoft/go/_util/cmd/sync
cmd/sync/sync.go:77:2: undefined: flag.Func
note: module requires Go 1.16
##[error]Bash exited with code '2'.
Our stage0 is 1.16, but the hosted agents actually already have a few versions of Go on them. The one we get on `PATH` is old: I ran `go version` on a hosted agent and got `go1.15.13 linux/amd64`.
ee905cb caused the error by changing a direct dependency on our stage0 (1.16) Go:
go/microsoft/sync/sync-pipeline.yml
Lines 38 to 40 in bb95be4
into a soft dependency that prefers to use an existing Go on `PATH` first (1.15.13):
Lines 68 to 74 in ee905cb
There's no good reason for this. `run-util.sh` should be changed to use the stage0 Go no matter what.
It looks like the download doesn't make progress for a while (20 seconds), then fails when checking the checksum. I assume the file's truncated. I'm not sure why `curl` itself isn't reporting an error:
Downloading stage 0 Go compiler and extracting to '/home/vsts_azpcontainer/.go-stage-0/1.16' ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 73 100 73 0 0 1216 0 --:--:-- --:--:-- --:--:-- 1216
0 0 0 0 0 0 0 0 --:--:-- 0:00:01 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:02 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:03 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:04 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:05 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:06 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:07 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:08 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:09 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:10 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:11 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:12 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:13 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:14 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:15 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:16 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:17 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:18 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:19 --:--:-- 0
0 0 0 0 0 0 0 0 --:--:-- 0:00:20 --:--:-- 0
100 1613 100 1613 0 0 78 0 0:00:20 0:00:20 --:--:-- 369
/home/vsts_azpcontainer/.go-stage-0/1.16/go.tar.gz: FAILED
sha256sum: WARNING: 1 computed checksum did NOT match
For comparison, a healthy download from another job in that build (completed in 5 seconds):
Downloading stage 0 Go compiler and extracting to '/home/vsts_azpcontainer/.go-stage-0/1.16' ...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
100 73 100 73 0 0 646 0 --:--:-- --:--:-- --:--:-- 646
96 123M 96 118M 0 0 119M 0 0:00:01 --:--:-- 0:00:01 119M
100 123M 100 123M 0 0 120M 0 0:00:01 0:00:01 --:--:-- 129M
/home/vsts_azpcontainer/.go-stage-0/1.16/go.tar.gz: OK
Done extracting stage 0 Go compiler to '/home/vsts_azpcontainer/.go-stage-0/1.16'
If it's simply flaky, this will be fixed by using a Docker container: #5.
We can move signing out of the main build pipeline into an independent pipeline.
Mostly internal procedures. At the end, we'll need to update the version of MicroBuild we use to a new one that knows our cert exists, and change the cert name in our signing csproj.
There are some outerloop `windows-amd64` builders that upstream runs:
https://build.golang.org/, as of writing:
We should automatically (on some schedule) create PRs that merge changes from upstream into our forked branches. These PRs should be auto-merged.
Related: #4 tracks mirroring branches from upstream with no changes.
We expect some users will have Dockerfiles based on https://hub.docker.com/_/golang/, and need to switch over to our Go build. This could be as easy as changing the `BASE`, if we provide Docker images.
Can we change the title of merge PRs from upstream so that they are more filterable in email?
Filtering these can be tricky because there really isn't anything other than the title to filter on, and Outlook only allows substring matching. Most of our other repositories use a title that has `auto merge` and the repository name next to each other for easy filtering. For example: `[AutoMerge] Roslyn`.
The current merge emails have the pattern `[microsoft/go] [microsoft/main] Merge upstream`. This means we have to write a rule per target branch.