The Go Blog

Go 1.13 is released

Andrew Bonventre
3 September 2019

Today the Go team is very happy to announce the release of Go 1.13. You can get it from the download page.

Some of the highlights include:

  • The go command now downloads and authenticates modules using the module mirror and checksum database by default
  • Improvements to number literals
  • Error wrapping
  • TLS 1.3 on by default
  • Improved modules support

For the complete list of changes and more information about the improvements above, see the Go 1.13 release notes.

We want to thank everyone who contributed to this release by writing code, filing bugs, providing feedback, and/or testing the beta and release candidates. Your contributions and diligence helped to ensure that Go 1.13 is as stable as possible. That said, if you notice any problems, please file an issue.

We hope you enjoy the new release!

Module Mirror and Checksum Database Launched

Katie Hockman
29 August 2019

We are excited to share that our module mirror, index, and checksum database are now production ready! The go command will use the module mirror and checksum database by default for Go 1.13 module users. See the privacy policy for information about how these services handle data, and the go command documentation for configuration details, including how to disable the use of these servers or use different ones. If you depend on non-public modules, see the documentation for configuring your environment.

This post describes these services and the benefits of using them, and summarizes some of the points from the Go Module Proxy: Life of a Query talk at GopherCon 2019. See the recording if you are interested in the full talk.

Module Mirror

Modules are sets of Go packages that are versioned together, and the contents of each version are immutable. That immutability provides new opportunities for caching and authentication. When go get runs in module mode, it must fetch the module containing the requested packages, as well as any new dependencies introduced by that module, updating your go.mod and go.sum files as needed. Fetching modules from version control can be expensive in terms of latency and storage in your system: the go command may be forced to pull down the full commit history of a repository containing a transitive dependency, even one that isn’t being built, just to resolve its version.

The solution is to use a module proxy, which speaks an API that is better suited to the go command’s needs (see go help goproxy). When go get runs in module mode with a proxy, it will work faster by only asking for the specific module metadata or source code it needs, and not worrying about the rest. Below is an example of how the go command may use a proxy with go get by requesting the list of versions, then the info, mod, and zip file for the latest tagged version.
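As a simplified sketch of that request pattern, the helper below builds the four endpoint URLs described in go help goproxy for one module version. The encodePath step is the proxy protocol's case-encoding (uppercase letters become '!' plus the lowercase letter, for case-insensitive file systems); the proxy host and module path here are placeholders, not real services.

```go
package main

import (
	"fmt"
	"strings"
)

// encodePath applies the GOPROXY case-encoding: each uppercase letter
// in a module path is replaced by '!' followed by its lowercase form.
func encodePath(path string) string {
	var b strings.Builder
	for _, r := range path {
		if 'A' <= r && r <= 'Z' {
			b.WriteByte('!')
			b.WriteRune(r - 'A' + 'a')
		} else {
			b.WriteRune(r)
		}
	}
	return b.String()
}

// proxyURLs returns the endpoints the go command fetches for one
// version: the version list, then the info, mod, and zip files.
func proxyURLs(proxy, mod, version string) []string {
	enc := encodePath(mod)
	return []string{
		proxy + "/" + enc + "/@v/list",
		proxy + "/" + enc + "/@v/" + version + ".info",
		proxy + "/" + enc + "/@v/" + version + ".mod",
		proxy + "/" + enc + "/@v/" + version + ".zip",
	}
}

func main() {
	for _, u := range proxyURLs("https://proxy.invalid", "github.com/Azure/azure-sdk-for-go", "v1.0.0") {
		fmt.Println(u)
	}
}
```

Because each response is immutable, every one of these URLs is trivially cacheable, which is what makes a mirror effective.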

A module mirror is a special kind of module proxy that caches metadata and source code in its own storage system, allowing the mirror to continue to serve source code that is no longer available from the original locations. This can speed up downloads and protect you from disappearing dependencies. See Go Modules in 2019 for more information.

The Go team maintains a module mirror, served at, which the go command will use by default for module users as of Go 1.13. If you are running an earlier version of the go command, then you can use this service by setting GOPROXY= in your local environment.

Checksum Database

Modules introduced the go.sum file, which is a list of SHA-256 hashes of the source code and go.mod files of each dependency when it was first downloaded. The go command can use the hashes to detect misbehavior by an origin server or proxy that gives you different code for the same version.
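To make the go.sum format concrete, here is a minimal sketch of producing an "h1:"-style value: a base64-encoded SHA-256 hash. This is an illustration only; the real go command hashes a manifest of files (the dirhash scheme), not raw bytes, so the values below will not match real go.sum entries.

```go
package main

import (
	"crypto/sha256"
	"encoding/base64"
	"fmt"
)

// hashGoMod mimics the shape of a go.sum "/go.mod" entry. The real go
// command hashes a one-line manifest containing the file's SHA-256;
// this simplified sketch hashes the contents directly.
func hashGoMod(contents []byte) string {
	sum := sha256.Sum256(contents)
	return "h1:" + base64.StdEncoding.EncodeToString(sum[:])
}

func main() {
	gomod := []byte("module example.com/hello\n\ngo 1.12\n")
	fmt.Printf("example.com/hello v1.0.0/go.mod %s\n", hashGoMod(gomod))
}
```

Because the hash is deterministic, anyone downloading the same bytes computes the same line, and any difference in the bytes shows up as a go.sum mismatch.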

The limitation of this go.sum file is that it works entirely by trust on your first use. When you add a version of a dependency that you’ve never seen before to your module (possibly by upgrading an existing dependency), the go command fetches the code and adds lines to the go.sum file on the fly. The problem is that those go.sum lines aren’t being checked against anyone else’s: they might be different from the go.sum lines that the go command just generated for someone else, perhaps because a proxy intentionally served malicious code targeted to you.

Go's solution is a global source of go.sum lines, called a checksum database, which ensures that the go command always adds the same lines to everyone's go.sum file. Whenever the go command receives new source code, it can verify the hash of that code against this global database to make sure the hashes match, ensuring that everyone is using the same code for a given version.

The checksum database is built on a Transparent Log (or “Merkle tree”) of hashes, backed by Trillian. The main advantage of a Merkle tree is that it is tamper-evident: its properties don’t allow misbehavior to go undetected, which makes it more trustworthy than a simple database. The go command uses this tree to check “inclusion” proofs (that a specific record exists in the log) and “consistency” proofs (that the tree hasn’t been tampered with) before adding new go.sum lines to your module’s go.sum file.

The checksum database supports a set of endpoints used by the go command to request and verify go.sum lines. The /lookup endpoint provides a “signed tree head” (STH) and the requested go.sum lines. The /tile endpoint provides chunks of the tree called tiles which the go command can use for proofs. Below is an example of how the go command may interact with the checksum database by doing a /lookup of a module version, then requesting the tiles required for the proofs.

This checksum database allows the go command to safely use an otherwise untrusted proxy. Because there is an auditable security layer sitting on top of it, a proxy or origin server can’t intentionally, arbitrarily, or accidentally start giving you the wrong code without getting caught. Even the author of a module can’t move their tags around or otherwise change the bits associated with a specific version from one day to the next without the change being detected.

If you are using Go 1.12 or earlier, you can manually check a go.sum file against the checksum database with gosumcheck:

$ go get
$ gosumcheck /path/to/go.sum

In addition to verification done by the go command, third-party auditors can hold the checksum database accountable by iterating over the log looking for bad entries. They can work together and gossip about the state of the tree as it grows to ensure that it remains uncompromised, and we hope that the Go community will run them.

Module Index

The module index is a public feed of new module versions as they become available through the module mirror. This is particularly useful for tool developers who want to keep their own cache of what’s available, or to keep up to date on some of the newest modules that people are using.

Feedback or bugs

We hope these services improve your experience with modules, and encourage you to file issues if you run into problems or have feedback!

Migrating to Go Modules

Jean de Klerk
21 August 2019


This post is part 2 in a series. See part 1 — Using Go Modules.

Go projects use a wide variety of dependency management strategies. Vendoring tools such as dep and glide are popular, but they have wide differences in behavior and don't always work well together. Some projects store their entire GOPATH directory in a single Git repository. Others simply rely on go get and expect fairly recent versions of dependencies to be installed in GOPATH.

Go's module system, introduced in Go 1.11, provides an official dependency management solution built into the go command. This article describes tools and techniques for converting a project to modules.

Please note: if your project is already tagged at v2.0.0 or higher, you will need to update your module path when you add a go.mod file. We'll explain how to do that without breaking your users in a future article focused on v2 and beyond.

Migrating to Go modules in your project

A project might be in one of three states when beginning the transition to Go modules:

  • A brand new Go project.
  • An established Go project with a non-modules dependency manager.
  • An established Go project without any dependency manager.

The first case is covered in Using Go Modules; we'll address the latter two in this post.

With a dependency manager

To convert a project that already uses a dependency management tool, run the following commands:

$ git clone
$ cd project
$ cat Godeps/Godeps.json
{
    "ImportPath": "",
    "GoVersion": "go1.12",
    "GodepVersion": "v80",
    "Deps": [
        {
            "ImportPath": "",
            "Comment": "v0.2.0-1-g545cabd",
            "Rev": "545cabda89ca36b48b8e681a30d9d769a30b3074"
        },
        {
            "ImportPath": "",
            "Comment": "v0.2.0-1-g545cabd",
            "Rev": "545cabda89ca36b48b8e681a30d9d769a30b3074"
        }
    ]
}
$ go mod init
go: creating new go.mod: module
go: copying requirements from Godeps/Godeps.json
$ cat go.mod

go 1.12

require v0.2.1-0.20190524193500-545cabda89ca

go mod init creates a new go.mod file and automatically imports dependencies from Godeps.json, Gopkg.lock, or a number of other supported formats. The argument to go mod init is the module path, the location where the module may be found.

This is a good time to pause and run go build ./... and go test ./... before continuing. Later steps may modify your go.mod file, so if you prefer to take an iterative approach, this is the closest your go.mod file will be to your pre-modules dependency specification.

$ go mod tidy
go: downloading v0.2.1-0.20190524193500-545cabda89ca
go: extracting v0.2.1-0.20190524193500-545cabda89ca
$ cat go.sum
v0.2.1-0.20190524193500-545cabda89ca h1:FKXXXJ6G2bFoVe7hX3kEX6Izxw5ZKRH57DFBJmHCbkU=
v0.2.1-0.20190524193500-545cabda89ca/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=

go mod tidy finds all the packages transitively imported by packages in your module. It adds new module requirements for packages not provided by any known module, and it removes requirements on modules that don't provide any imported packages. If a module provides packages that are only imported by projects that haven't migrated to modules yet, the module requirement will be marked with an // indirect comment. It is always good practice to run go mod tidy before committing a go.mod file to version control.

Let's finish by making sure the code builds and tests pass:

$ go build ./...
$ go test ./...

Note that other dependency managers may specify dependencies at the level of individual packages or entire repositories (not modules), and generally do not recognize the requirements specified in the go.mod files of dependencies. Consequently, you may not get exactly the same version of every package as before, and there's some risk of upgrading past breaking changes. Therefore, it's important to follow the above commands with an audit of the resulting dependencies. To do so, run

$ go list -m all
go: finding v0.2.1-0.20190524193500-545cabda89ca
v0.2.1-0.20190524193500-545cabda89ca

and compare the resulting versions with your old dependency management file to ensure that the selected versions are appropriate. If you find a version that wasn't what you wanted, you can find out why using go mod why -m and/or go mod graph, and upgrade or downgrade to the correct version using go get. (If the version you request is older than the version that was previously selected, go get will downgrade other dependencies as needed to maintain compatibility.) For example,

$ go mod why -m
$ go mod graph | grep
$ go get
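The upgrade and downgrade behavior described above follows from minimal version selection, the algorithm modules use to pick versions: the build uses the maximum of the versions required across the build graph, and nothing newer. A drastically simplified sketch, assuming plain "vMAJOR.MINOR.PATCH" versions with no pre-release or pseudo-version parts:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// semverLess does a simplified numeric comparison of "vMAJOR.MINOR.PATCH"
// versions (no pre-release handling), enough to illustrate the rule.
func semverLess(a, b string) bool {
	pa := strings.Split(strings.TrimPrefix(a, "v"), ".")
	pb := strings.Split(strings.TrimPrefix(b, "v"), ".")
	for i := 0; i < 3; i++ {
		na, _ := strconv.Atoi(pa[i])
		nb, _ := strconv.Atoi(pb[i])
		if na != nb {
			return na < nb
		}
	}
	return false
}

// selectVersion applies the core of minimal version selection: the
// selected version is the maximum of the versions that modules in the
// build graph require.
func selectVersion(required []string) string {
	max := required[0]
	for _, v := range required[1:] {
		if semverLess(max, v) {
			max = v
		}
	}
	return max
}

func main() {
	// If your module requires v1.2.0 but a dependency requires v1.4.1,
	// the build uses v1.4.1 -- possibly newer than your old lock file.
	fmt.Println(selectVersion([]string{"v1.2.0", "v1.4.1", "v1.3.0"}))
}
```

This is why `go list -m all` can show versions newer than your previous dependency manager selected: another module in the graph may require them.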

Without a dependency manager

For a Go project without a dependency management system, start by creating a go.mod file:

$ git clone
$ cd blog
$ go mod init
go: creating new go.mod: module
$ cat go.mod

go 1.12

Without a configuration file from a previous dependency manager, go mod init will create a go.mod file with only the module and go directives. In this example, we set the module path to the project's custom import path. Users may import packages with this path, and we must be careful not to change it.

The module directive declares the module path, and the go directive declares the expected version of the Go language used to compile the code within the module.

Next, run go mod tidy to add the module's dependencies:

$ go mod tidy
go: finding latest
go: finding latest
go: finding latest
go: finding latest
go: downloading v1.1.1
go: downloading v0.0.0-20190813214729-9dba7caff850
go: downloading v0.0.0-20190813141303-74dc4d7220e7
go: extracting v1.1.1
go: extracting v0.0.0-20190813141303-74dc4d7220e7
go: downloading v2.0.0-20161208151619-d5d1b5820637
go: extracting v2.0.0-20161208151619-d5d1b5820637
go: extracting v0.0.0-20190813214729-9dba7caff850
go: downloading v0.0.0-20190809153340-86a7442ada7c
go: extracting v0.0.0-20190809153340-86a7442ada7c
$ cat go.mod

go 1.12

require (
	v1.1.1
	v0.0.0-20190813141303-74dc4d7220e7
	v0.3.2
	v0.0.0-20190813214729-9dba7caff850
	v0.0.0-20190809153340-86a7442ada7c
	v2.0.0-20161208151619-d5d1b5820637
)
$ cat go.sum
v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
v0.0.0-20181218151757-9b75e4fe745a/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=

go mod tidy added module requirements for all the packages transitively imported by packages in your module and built a go.sum with checksums for each library at a specific version. Let's finish by making sure the code still builds and tests still pass:

$ go build ./...
$ go test ./...
ok    0.335s
?    [no test files]
ok    0.040s
?    [no test files]
?    [no test files]
?    [no test files]
?    [no test files]

Note that when go mod tidy adds a requirement, it adds the latest version of the module. If your GOPATH included an older version of a dependency that subsequently published a breaking change, you may see errors in go mod tidy, go build, or go test. If this happens, try downgrading to an older version with go get, or take the time to make your module compatible with the latest version of each dependency.

Tests in module mode

Some tests may need tweaks after migrating to Go modules.

If a test needs to write files in the package directory, it may fail when the package directory is in the module cache, which is read-only. In particular, this may cause go test all to fail. The test should copy files it needs to write to a temporary directory instead.

If a test relies on relative paths (../package-in-another-module) to locate and read files in another package, it will fail if the package is in another module, which will be located in a versioned subdirectory of the module cache or a path specified in a replace directive. If this is the case, you may need to copy the test inputs into your module, or convert the test inputs from raw files to data embedded in .go source files.
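One lightweight form of that conversion (predating the embed directive, which did not exist at the time of this post) is to store small test inputs as string constants in a .go file; the names here are illustrative:

```go
package main

import "fmt"

// Instead of reading ../other-module/testdata/golden.txt via a
// relative path, keep small fixtures directly in a Go source file,
// where they travel with the module and survive the versioned layout
// of the module cache.
const goldenInput = `first line
second line
`

func main() {
	fmt.Print(goldenInput)
}
```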

If a test expects go commands within the test to run in GOPATH mode, it may fail. If this is the case, you may need to add a go.mod file to the source tree to be tested, or set GO111MODULE=off explicitly.

Publishing a release

Finally, you should tag and publish a release version for your new module. This is optional if you haven't released any versions yet, but without an official release, downstream users will depend on specific commits using pseudo-versions, which may be more difficult to support.

$ git tag v1.2.0
$ git push origin v1.2.0

Your new go.mod file defines a canonical import path for your module and adds new minimum version requirements. If your users are already using the correct import path, and your dependencies haven't made breaking changes, then adding the go.mod file is backwards-compatible — but it's a significant change, and may expose existing problems. If you have existing version tags, you should increment the minor version.

Imports and canonical module paths

Each module declares its module path in its go.mod file. Each import statement that refers to a package within the module must have the module path as a prefix of the package path. However, the go command may encounter a repository containing the module through many different remote import paths: for example, a vanity import path and the underlying repository host may both resolve to the same repository. The go.mod file in that repository declares exactly one module path, and only that path corresponds to a valid module.

Go 1.4 provided a mechanism for declaring canonical import paths using // import comments, but package authors did not always provide them. As a result, code written prior to modules may have used a non-canonical import path for a module without surfacing an error for the mismatch. When using modules, the import path must match the canonical module path, so you may need to update your import statements to use the module's canonical path.

Another scenario in which a module's canonical path may differ from its repository path occurs for Go modules at major version 2 or higher. A Go module with a major version above 1 must include a major-version suffix in its module path: for example, version v2.0.0 must have the suffix /v2. However, import statements may have referred to the packages within the module without that suffix. For example, non-module users of a package at v2.0.1 may have imported it by its unsuffixed path, and will need to update the import path to include the /v2 suffix.
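The rule can be expressed mechanically. A hedged sketch (it assumes a well-formed "vN.N.N" version and ignores gopkg.in-style paths, which use their own convention; the paths are placeholders):

```go
package main

import (
	"fmt"
	"strings"
)

// modulePathForVersion returns the module path a given semantic
// version requires: v0 and v1 use the bare path, while v2 and above
// must carry a matching /vN suffix.
func modulePathForVersion(basePath, version string) string {
	major := strings.SplitN(strings.TrimPrefix(version, "v"), ".", 2)[0]
	if major == "0" || major == "1" {
		return basePath
	}
	return basePath + "/v" + major
}

func main() {
	fmt.Println(modulePathForVersion("example.com/mod", "v1.5.2")) // bare path
	fmt.Println(modulePathForVersion("example.com/mod", "v2.0.1")) // /v2 suffix
}
```

Existing importers of a v2+ module must update their import paths accordingly, since the suffixed and unsuffixed paths name different modules.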


Converting to Go modules should be a straightforward process for most users. Occasional issues may arise due to non-canonical import paths or breaking changes within a dependency. Future posts will explore publishing new versions, v2 and beyond, and ways to debug strange situations.

To provide feedback and help shape the future of dependency management in Go, please send us bug reports or experience reports.

Thanks for all your feedback and help improving modules.

Contributors Summit 2019

Carmen Andoh and contributors
15 August 2019


For the third year in a row, the Go team and contributors convened the day before GopherCon to discuss and plan for the future of the Go project. The event included self-organizing into breakout groups, a town-hall style discussion about the proposal process in the morning, and afternoon break-out roundtable discussions based on topics our contributors chose. We asked five contributors to write about their experience in various discussions at this year’s summit.

(Photo by Steve Francia.)

Compiler and Runtime (report by Lynn Boger)

The Go contributors summit was a great opportunity to meet and discuss topics and ideas with others who also contribute to Go.

The day started out with a time to meet everyone in the room. There was a good mix of the core Go team and others who actively contribute to Go. From there we decided what topics were of interest and how to split the big group into smaller groups. My area of interest is the compiler, so I joined that group and stayed with them for most of the time.

At our first meeting, a long list of topics was brought up, and as a result the compiler group decided to keep meeting throughout the day. I had a few topics of interest that I shared, and many that others suggested were also of interest to me. Not all items on the list were discussed in detail; here is my list of the topics which had the most interest and discussion, followed by some brief comments that were made on other topics.

Binary size. There was a concern expressed about binary size, especially that it continues to grow with each release. Some possible reasons were identified such as increased inlining and other optimizations. Most likely there is a set of users who want small binaries, and another group who wants the best performance possible and maybe some don’t care. This led to the topic of TinyGo, and it was noted that TinyGo was not a full implementation of Go and that it is important to keep TinyGo from diverging from Go and splitting the user base. More investigation is required to understand the need among users and the exact reasons contributing to the current size. If there are opportunities to reduce the size without affecting performance, those changes could be made, but if performance were affected some users would prefer better performance.

Vector assembly. How to leverage vector assembly in Go was discussed for a while and has been a topic of interest in the past. I have split this into three separate possibilities, since they all relate to the use of vector instructions, but the ways they are used are different, starting with the topic of vector assembly. This is another case of a compiler trade-off.

For most targets, there are critical functions in standard packages such as crypto, hash, math and others, where the use of assembly is necessary to get the best possible performance; however having large functions written in assembly makes them difficult to support and maintain and could require different implementations for each target platform. One solution is to make use of macro assembly or other high-level generation techniques to make the vector assembly easier to read and understand.

Another side to this question is whether the Go compiler can directly generate SIMD vector instructions when compiling a Go source file, by enhancing the Go compiler to transform code sequences to “simdize” the code to make use of vector instructions. Implementing SIMD in the Go compiler would add complexity and compile time, and might not always result in code that performs better. The way the code is transformed could in some cases depend on the target platform so that would not be ideal.

Another way to leverage vector instructions in Go is to provide a way to make it easier to use vector instructions from within Go source code. Topics discussed were intrinsics, or implementations that exist in other compilers like Rust. In GCC, some platforms provide inline asm, and Go could possibly provide this capability, but I know from experience that intermixing inline asm with Go code adds complexity to the compiler in terms of tracking register use and debugging. It also allows the user to do things the compiler might not expect or want, and it could be inserted in places that are not ideal.

In summary, it is important to provide a way to leverage the available vector instructions, and make it easier and safer to write. Where possible, functions use as much Go code as possible, and potentially find a way to use high level assembly. There was some discussion of designing an experimental vector package to try and implement some of these ideas.

New calling convention. Several people were interested in the topic of the ABI changes to provide a register based calling convention. The current status was reported with details. There was discussion on what remained to be done before it could be used. The ABI specification needs to be written first and it was not clear when that would be done. I know this will benefit some target platforms more than others and a register calling convention is used in most compilers for other platforms.

General optimizations. Certain optimizations that are more beneficial on platforms other than x86 were discussed. In particular, loop optimizations such as hoisting of invariants and strength reduction could be done and would provide more benefit on some platforms. Potential solutions were discussed, and implementation would probably be up to the targets that find those improvements important.

Feedback-directed optimizations. This was discussed and debated as a possible future enhancement. In my experience, it is hard to find meaningful programs to use for collecting performance data that can later be used to optimize code. It increases compile time and takes a lot of space to save the data which might only be meaningful for a small set of programs.

Pending submissions. A few members in the group mentioned changes they had been working on and plan to submit soon, including improvements to makeslice, and a rewrite of rulegen.

Compile time concerns. Compile time was discussed briefly. It was noted that phase timing was added to the GOSSAFUNC output.

Compiler contributor communication. Someone asked if there was a need for a Go compiler mailing list. It was suggested that we use golang-dev for that purpose, adding compiler to the subject line to identify it. If there is too much traffic on golang-dev, then a compiler-specific mailing list can be considered at some later point in time.

Community. I found the day very beneficial in terms of connecting with people who have been active in the community and have similar areas of interest. I was able to meet many people who I’ve only known by the user name appearing in issues or mailing lists or CLs. I was able to discuss some topics and existing issues and get direct interactive feedback instead of waiting for online responses. I was encouraged to write issues on problems I have seen. These connections happened not just during this day but while running into others throughout the conference, having been introduced on this first day, which led to many interesting discussions. Hopefully these connections will lead to more effective communication and improved handling of issues and code changes in the future.

Tools (report by Paul Jolly)

The tools breakout session during the contributor summit took an extended form, with two further sessions on the main conference days organized by the golang-tools group. This summary is broken down into two parts: the tools session at the contributor workshop, and a combined report from the golang-tools sessions on the main conference days.

Contributor summit. The tools session started with introductions from ~25 folks gathered, followed by a brainstorming of topics, including: gopls, ARM 32-bit, eval, signal, analysis, go/packages api, refactoring, pprof, module experience, mono repo analysis, go mobile, dependencies, editor integrations, compiler opt decisions, debugging, visualization, documentation. A lot of people with lots of interest in lots of tools!

The session focused on two areas (all that time allowed): gopls and visualizations. Gopls (pronounced: “go please”) is an implementation of the Language Server Protocol (LSP) server for Go. Rebecca Stambler, the gopls lead author, and the rest of the Go tools team were interested in hearing people’s experiences with gopls: stability, missing features, editor integrations, etc.? The general feeling was that gopls was in really good shape and working extremely well for the majority of use cases. Integration test coverage needs to be improved, but this is a hard problem to get “right” across all editors. We discussed a better means of users reporting gopls errors they encounter via their editor, telemetry/diagnostics, and gopls performance metrics, all subjects that got more detailed coverage in golang-tools sessions that followed on the main conference days (see below). A key area of discussion was how to extend gopls, e.g., in the form of additional go/analysis vet-like checks, lint checks, refactoring, etc. Currently there is no good solution, but it’s actively under investigation. Conversation shifted to the very broad topic of visualizations, with a demo-based introduction from Anthony Starks (who, incidentally, gave an excellent talk about Go for information displays at GopherCon 2018).

Conference days. The golang-tools sessions on the main conference days were a continuation of the monthly calls that have been happening since the group’s inception at GopherCon 2018. Full notes are available for the day 1 and day 2 sessions. These sessions were again well attended with 25-30 people at each session. The Go tools team was there in strength (a good sign of the support being put behind this area), as was the Uber platform team. In contrast to the contributor summit, the goal from these sessions was to come away with specific action items.

Gopls. Gopls “readiness” was a major focus for both sessions. The answer effectively boiled down to determining when it makes sense to tell editor integrators “we have a good first cut of gopls” and then compiling a list of “blessed” editor integrations/plugins known to work with gopls. Central to this “certification” of editor integrations/plugins is a well-defined process by which users can report problems they experience with gopls. Performance and memory are not blockers for this initial “release”. The conversation about how to extend gopls, started in the contributor summit the day before, continued in earnest. Despite the many obvious benefits and attractions to extending gopls (custom go/analysis checks, linter support, refactoring, code generation…), there isn’t a clear answer on how to implement this in a scalable way. Those gathered agreed that this should not be seen as a blocker for the initial “release”, but should continue to be worked on. In the spirit of gopls and editor integrations, Heschi Kreinick from the Go tools team brought up the topic of debugging support. Delve has become the de facto debugger for Go and is in good shape; now the state of debugger-editor integration needs to be established, following a process similar to that of gopls and the “blessed” integrations.

Go Discovery Site. The second golang-tools session started with an excellent introduction to the Go Discovery Site by Julie Qiu from the Go tools team, along with a quick demo. Julie talked about the plans for the Discovery Site: open sourcing the project, what signals are used in search ranking, how the existing documentation site will ultimately be replaced, how submodules should work, and how users can discover new major versions.

Build Tags. Conversation then moved to build tag support within gopls. This is an area that clearly needs to be better understood (use cases are currently being gathered in issue 33389). In light of this conversation, the session wrapped up with Alexander Zolotov from the JetBrains GoLand team suggesting that the gopls and GoLand teams should share experience in this and more areas, given GoLand has already gained lots of experience.

Join Us! We could easily have talked about tools-related topics for days! The good news is that the golang-tools calls will continue for the foreseeable future. Anyone interested in Go tooling is very much encouraged to join: the wiki has more details.

Enterprise Use (report by Daniel Theophanes)

Actively asking after the needs of less vocal developers will be the largest challenge, and greatest win, for the Go language. There is a large segment of programmers who don’t actively participate in the Go community. Some are business associates, marketers, or quality assurance who also do development. Some will wear management hats and make hiring or technology decisions. Others just do their job and return to their families. And lastly, many times these developers work in businesses with strict IP protection contracts. Even though most of these developers won’t end up directly participating in open source or the Go community proposals, their ability to use Go depends on both.

The Go community and Go proposals need to understand the needs of these less vocal developers. Go proposals can have a large impact on what is adopted and used. For instance, the vendor folder and later the Go modules proxy are incredibly important for businesses that strictly control source code and typically have fewer direct conversations with the Go community. Having these mechanisms allows these organizations to use Go at all. It follows that we must not only pay attention to current Go users, but also to developers and organizations who have considered Go but decided against it. We need to understand their reasons.

Similarly, if the Go community paid attention to “enterprise” environments, it would unlock many additional organizations that could utilize Go. By ensuring Active Directory authentication works, users who would otherwise be forced into a different ecosystem can keep Go on the table. By ensuring WSDL just works, a section of users can pick Go up as a tool. No one suggested blindly making changes to appease non-Go users. Rather, we should be aware of untapped potential and unrecognized hindrances in the Go language and ecosystem.

While several possibilities for actively soliciting this information from the outside were discussed, this is a problem we fundamentally need your help with. If you are in an organization that considered Go but doesn’t use it, let us know why Go wasn’t chosen. If you are in an organization where Go is used for only a subset of programming tasks, why isn’t it used for more? Are there specific blockers to adoption?

Education (report by Andy Walker)

One of the roundtables I was involved in at the Contributors Summit this year was on the topic of Go education, specifically what kind of resources we make available to the new Go programmer, and how we can improve them. Present were a number of very passionate organizers, engineers and educators, each of whom had a unique perspective on the subject, either through tools they’d designed, documents they’d written or workshops they’d given to developers of all stripes.

Early on, talk turned to whether or not Go makes a good first programming language. I wasn’t sure, and advocated against it. Go isn’t a good first language, I argued, because it isn’t intended to be. As Rob Pike wrote back in 2012, “the language was designed by and for people who write—and read and debug and maintain—large software systems”. To me, this guiding ethos is clear: Go is a deliberate response to perceived flaws in the processes used by experienced engineers, not an attempt to create an ideal programming language, and as such a certain basic familiarity with programming concepts is assumed.

This is evident in the official documentation at, which jumps right into how to install the language before passing the user on to the tour, which is geared towards programmers who are already familiar with a C-like language. From there, they are taken to How to Write Go Code, which provides a very basic introduction to the classic non-module Go workspace, before moving immediately on to writing libraries and testing. Finally, we have Effective Go and a series of references including the spec, rounded out by some examples. These are all decent resources if you’re already familiar with a C-like language, but they still leave a lot to be desired, and there’s nothing for the raw beginner or even someone coming directly from a language like Python.

As an accessible, interactive starting point, the tour is a natural first target for making the language more beginner friendly, and I think a lot of headway can be made targeting that alone. First, it should be the first link in the documentation, if not the first link in the bar at the top of, front and center. We should encourage the curious user to jump right in and start playing with the language. We should also consider including optional introductory sections on coming from other common languages, and the differences they are likely to encounter in Go, with interactive exercises. This would go a long way toward helping new Go programmers map the concepts they are already familiar with onto Go.

For experienced programmers, an optional, deeper treatment should be given to most sections in the tour, allowing them to drill down into more detailed documentation or interactive exercises enumerating the design decisions and principles of good architecture in Go. They should find answers to questions like:

  • Why are there so many integer types when I am encouraged to use int most of the time?
  • Is there ever a good reason to pick a value receiver?
  • Why is there a plain int, but no plain float?
  • What are send- and receive-only channels, and when would I use them?
  • How do I effectively compose concurrency primitives, and when would I not want to use channels?
  • What is uint good for? Should I use it to restrict my user to positive values? Why not?

The tour should be someplace they can revisit upon finishing the first run-through to dive more deeply into some of the more interesting choices in language design.

But we can do more. Many people seek out programming as a way to design applications or scratch a particular itch, and they are most likely to want to target the interface they are most familiar with: the browser. Go does not have a good front-end story yet. JavaScript is still the only language that really provides both a frontend and a backend environment, but WASM is fast becoming a first-order platform, and there are many places we could go with that. We could provide something like vecty in The Go Play Space, or perhaps Gio, targeting WASM, for people to get started programming in the browser right away, inspiring their imagination, and providing them a migration path out of our playground, into a terminal, and onto GitHub.

So, is Go a good first language? I honestly don’t know, but it’s certainly true there are a significant number of people entering the programming profession with Go as their starting point, and I am very interested in talking to them, learning about their journey and their process, and shaping the future of Go education with their input.

Learning Platforms (report by Ronna Steinberg)

We discussed what a learning platform for Go should look like and how we can combine global resources to effectively teach the language. We generally agreed that teaching and learning is easier with visualization and that a REPL is very gratifying. We also overviewed some existing solutions for visualization with Go: templates, Go WASM, GopherJS as well as SVG and GIFs generation.

Compiler errors that make no sense to a new developer were also brought up, and we considered ideas for how to handle them, perhaps a bank of common errors and explanations. One idea was a wrapper for the compiler that explains your errors to you, with examples and solutions.

A new group convened for a second round later, and we focused more on what UX the Go learning platform should have, and whether and how we can take existing materials (talks, blog posts, podcasts, etc.) from the community and organize them into a program people can learn from. Should such a platform link to those external resources? Embed them? Cite them? We agreed that a portal-like solution (of external links to resources) makes navigation difficult and takes away from the learning experience, which led us to the conclusion that such contribution cannot be passive: contributors will likely have to opt in to have their material on the platform. There was then much excitement around the idea of adding a voting mechanism to the platform, effectively turning the learners into contributors, too, and incentivizing the contributors to put their materials on the platform.

(If you are interested in helping in educational efforts for Go, please email Carmen Andoh.)

Thank You!

Thanks to all the attendees for the excellent discussions on contributor day, and thanks especially to Lynn, Paul, Daniel, Andy, and Ronna for taking the time to write these reports.

Experiment, Simplify, Ship

Russ Cox
1 August 2019


This is the blog post version of my talk last week at Gophercon 2019.

We are all on the path to Go 2, together, but none of us know exactly where that path leads or sometimes even which direction the path goes. This post discusses how we actually find and follow the path to Go 2. Here’s what the process looks like.

We experiment with Go as it exists now, to understand it better, learning what works well and what doesn’t. Then we experiment with possible changes, to understand them better, again learning what works well and what doesn’t. Based on what we learn from those experiments, we simplify. And then we experiment again. And then we simplify again. And so on. And so on.

The Four R’s of Simplifying

During this process, there are four main ways that we can simplify the overall experience of writing Go programs: reshaping, redefining, removing, and restricting.

Simplify by Reshaping

The first way we simplify is by reshaping what exists into a new form, one that ends up being simpler overall.

Every Go program we write serves as an experiment to test Go itself. In the early days of Go, we quickly learned that it was common to write code like this addToList function:

func addToList(list []int, x int) []int {
    n := len(list)
    if n+1 > cap(list) {
        big := make([]int, n, (n+5)*2)
        copy(big, list)
        list = big
    }
    list = list[:n+1]
    list[n] = x
    return list
}

We’d write the same code for slices of bytes, and slices of strings, and so on. Our programs were too complex, because Go was too simple.

So we took the many functions like addToList in our programs and reshaped them into one function provided by Go itself. Adding append made the Go language a little more complex, but on balance it made the overall experience of writing Go programs simpler, even after accounting for the cost of learning about append.

Here’s another example. For Go 1, we looked at the very many development tools in the Go distribution, and we reshaped them into one new command.

5a      8g
5g      8l
5l      cgo
6a      gobuild
6cov    gofix         →     go
6g      goinstall
6l      gomake
6nm     gopack
8a      govet

The go command is so central now that it is easy to forget that we went so long without it and how much extra work that involved.

We added code and complexity to the Go distribution, but on balance we simplified the experience of writing Go programs. The new structure also created space for other interesting experiments, which we’ll see later.

Simplify by Redefining

A second way we simplify is by redefining functionality we already have, allowing it to do more. Like simplifying by reshaping, simplifying by redefining makes programs simpler to write, but now with nothing new to learn.

For example, append was originally defined to read only from slices. When appending to a byte slice, you could append the bytes from another byte slice, but not the bytes from a string. We redefined append to allow appending from a string, without adding anything new to the language.

var b []byte
var more []byte
b = append(b, more...) // ok

var b []byte
var more string
b = append(b, more...) // ok later

Simplify by Removing

A third way we simplify is by removing functionality when it has turned out to be less useful or less important than we expected. Removing functionality means one less thing to learn, one less thing to fix bugs in, one less thing to be distracted by or use incorrectly. Of course, removing also forces users to update existing programs, perhaps making them more complex, to make up for the removal. But the overall result can still be that the process of writing Go programs becomes simpler.

An example of this is when we removed the boolean forms of non-blocking channel operations from the language:

ok := c <- x  // before Go 1, was non-blocking send
x, ok := <-c  // before Go 1, was non-blocking receive

These operations were also possible to do using select, making it confusing to need to decide which form to use. Removing them simplified the language without reducing its power.

Simplify by Restricting

We can also simplify by restricting what is allowed. From day one, Go has restricted the encoding of Go source files: they must be UTF-8. This restriction makes every program that tries to read Go source files simpler. Those programs don’t have to worry about Go source files encoded in Latin-1 or UTF-16 or UTF-7 or anything else.

Another important restriction is gofmt for program formatting. Nothing rejects Go code that isn’t formatted using gofmt, but we have established a convention that tools that rewrite Go programs leave them in gofmt form. If you keep your programs in gofmt form too, then these rewriters don’t make any formatting changes. When you compare before and after, the only diffs you see are real changes. This restriction has simplified program rewriters and led to successful experiments like goimports, gorename, and many others.

Go Development Process

This cycle of experiment and simplify is a good model for what we’ve been doing the past ten years, but it has a problem: it’s too simple. We can’t only experiment and simplify.

We have to ship the result. We have to make it available to use. Of course, using it enables more experiments, and possibly more simplifying, and the process cycles on and on.

We shipped Go to all of you for the first time on November 10, 2009. Then, with your help, we shipped Go 1 together in March 2012. And we’ve shipped twelve Go releases since then. All of these were important milestones, to enable more experimentation, to help us learn more about Go, and of course to make Go available for production use.

When we shipped Go 1, we explicitly shifted our focus to using Go, to understand this version of the language much better before trying any more simplifications involving language changes. We needed to take time to experiment, to really understand what works and what doesn’t.

Of course, we’ve had twelve releases since Go 1, so we have still been experimenting and simplifying and shipping. But we’ve focused on ways to simplify Go development without significant language changes and without breaking existing Go programs. For example, Go 1.5 shipped the first concurrent garbage collector and then the following releases improved it, simplifying Go development by removing pause times as an ongoing concern.

At Gophercon in 2017, we announced that after five years of experimentation, it was again time to think about significant changes that would simplify Go development. Our path to Go 2 is really the same as the path to Go 1: experiment and simplify and ship, towards an overall goal of simplifying Go development.

For Go 2, the concrete topics that we believed were most important to address are error handling, generics, and dependencies. Since then we have realized that another important topic is developer tooling.

The rest of this post discusses how our work in each of these areas follows that path. Along the way, we’ll take one detour, stopping to inspect the technical detail of what will be shipping soon in Go 1.13 for error handling.

Errors
It is hard enough to write a program that works the right way in all cases when all the inputs are valid and correct and nothing the program depends on is failing. When you add errors into the mix, writing a program that works the right way no matter what goes wrong is even harder.

As part of thinking about Go 2, we want to understand better whether Go can help make that job any simpler.

There are two different aspects that could potentially be simplified: error values and error syntax. We’ll look at each in turn, with the technical detour I promised focusing on the Go 1.13 error value changes.

Error Values

Error values had to start somewhere. Here is the Read function from the first version of the os package:

export func Read(fd int64, b *[]byte) (ret int64, errno int64) {
    r, e := syscall.read(fd, &b[0], int64(len(b)));
    return r, e
}

There was no File type yet, and also no error type. Read and the other functions in the package returned an errno int64 directly from the underlying Unix system call.

This code was checked in on September 10, 2008 at 12:14pm. Like everything back then, it was an experiment, and code changed quickly. Two hours and five minutes later, the API changed:

export type Error struct { s string }

func (e *Error) Print() { … } // to standard error!
func (e *Error) String() string { … }

export func Read(fd int64, b *[]byte) (ret int64, err *Error) {
    r, e := syscall.read(fd, &b[0], int64(len(b)));
    return r, ErrnoToError(e)
}

This new API introduced the first Error type. An error held a string and could return that string and also print it to standard error.

The intent here was to generalize beyond integer codes. We knew from past experience that operating system error numbers were too limited a representation, that it would simplify programs not to have to shoehorn all detail about an error into 64 bits. Using error strings had worked reasonably well for us in the past, so we did the same here. This new API lasted seven months.

The next April, after more experience using interfaces, we decided to generalize further and allow user-defined error implementations, by making the os.Error type itself an interface. We simplified by removing the Print method.

For Go 1 two years later, based on a suggestion by Roger Peppe, os.Error became the built-in error type, and the String method was renamed to Error. Nothing has changed since then. But we have written many Go programs, and as a result we have experimented a lot with how best to implement and use errors.

Errors Are Values

Making error a simple interface and allowing many different implementations means we have the entire Go language available to define and inspect errors. We like to say that errors are values, the same as any other Go value.

Here’s an example. On Unix, an attempt to dial a network connection ends up using the connect system call. That system call returns a syscall.Errno, which is a named integer type that represents a system call error number and implements the error interface:

package syscall

type Errno int64

func (e Errno) Error() string { ... }

const ECONNREFUSED = Errno(61)

    ... err == ECONNREFUSED ...

The syscall package also defines named constants for the host operating system’s defined error numbers. In this case, on this system, ECONNREFUSED is number 61. Code that gets an error from a function can test whether the error is ECONNREFUSED using ordinary value equality.

Moving up a level, in package os, any system call failure is reported using a larger error structure that records what operation was attempted in addition to the error. There are a handful of these structures. This one, SyscallError, describes an error invoking a specific system call with no additional information recorded:

package os

type SyscallError struct {
    Syscall string
    Err     error
}

func (e *SyscallError) Error() string {
    return e.Syscall + ": " + e.Err.Error()
}

Moving up another level, in package net, any network failure is reported using an even larger error structure that records the details of the surrounding network operation, such as dial or listen, and the network and addresses involved:

package net

type OpError struct {
    Op     string
    Net    string
    Source Addr
    Addr   Addr
    Err    error
}

func (e *OpError) Error() string { ... }

Putting these together, the errors returned by operations like net.Dial can format as strings, but they are also structured Go data values. In this case, the error is a net.OpError, which adds context to an os.SyscallError, which adds context to a syscall.Errno:

c, err := net.Dial("tcp", "localhost:50001")

// "dial tcp [::1]:50001: connect: connection refused"

err is &net.OpError{
    Op:   "dial",
    Net:  "tcp",
    Addr: &net.TCPAddr{IP: ParseIP("::1"), Port: 50001},
    Err: &os.SyscallError{
        Syscall: "connect",
        Err:     syscall.Errno(61), // == ECONNREFUSED
    },
}

When we say errors are values, we mean both that the entire Go language is available to define them and also that the entire Go language is available to inspect them.

Here is an example from package net. It turns out that when you attempt a socket connection, most of the time you will get connected or get connection refused, but sometimes you can get a spurious EADDRNOTAVAIL, for no good reason. Go shields user programs from this failure mode by retrying. To do this, it has to inspect the error structure to find out whether the syscall.Errno deep inside is EADDRNOTAVAIL.

Here is the code:

func spuriousENOTAVAIL(err error) bool {
    if op, ok := err.(*OpError); ok {
        err = op.Err
    }
    if sys, ok := err.(*os.SyscallError); ok {
        err = sys.Err
    }
    return err == syscall.EADDRNOTAVAIL
}

A type assertion peels away any net.OpError wrapping. And then a second type assertion peels away any os.SyscallError wrapping. And then the function checks the unwrapped error for equality with EADDRNOTAVAIL.

What we’ve learned from years of experience, from this experimenting with Go errors, is that it is very powerful to be able to define arbitrary implementations of the error interface, to have the full Go language available both to construct and to deconstruct errors, and not to require the use of any single implementation.

These properties—that errors are values, and that there is not one required error implementation—are important to preserve.

Not mandating one error implementation enabled everyone to experiment with additional functionality that an error might provide, leading to many packages, such as github.com/pkg/errors, and more.

One problem with unconstrained experimentation, though, is that as a client you have to program to the union of all the possible implementations you might encounter. A simplification that seemed worth exploring for Go 2 was to define a standard version of commonly-added functionality, in the form of agreed-upon optional interfaces, so that different implementations could interoperate.

Unwrap
The most commonly-added functionality in these packages is some method that can be called to remove context from an error, returning the error inside. Packages use different names and meanings for this operation, and sometimes it removes one level of context, while sometimes it removes as many levels as possible.

For Go 1.13, we have introduced a convention that an error implementation adding removable context to an inner error should implement an Unwrap method that returns the inner error, unwrapping the context. If there is no inner error appropriate to expose to callers, either the error shouldn’t have an Unwrap method, or the Unwrap method should return nil.

// Go 1.13 optional method for error implementations.

interface {
    // Unwrap removes one layer of context,
    // returning the inner error if any, or else nil.
    Unwrap() error
}

The way to call this optional method is to invoke the helper function errors.Unwrap, which handles cases like the error itself being nil or not having an Unwrap method at all.

package errors

// Unwrap returns the result of calling
// the Unwrap method on err,
// if err’s type defines an Unwrap method.
// Otherwise, Unwrap returns nil.
func Unwrap(err error) error

We can use the Unwrap method to write a simpler, more general version of spuriousENOTAVAIL. Instead of looking for specific error wrapper implementations like net.OpError or os.SyscallError, the general version can loop, calling Unwrap to remove context, until either it reaches EADDRNOTAVAIL or there’s no error left:

func spuriousENOTAVAIL(err error) bool {
    for err != nil {
        if err == syscall.EADDRNOTAVAIL {
            return true
        }
        err = errors.Unwrap(err)
    }
    return false
}

This loop is so common, though, that Go 1.13 defines a second function, errors.Is, that repeatedly unwraps an error looking for a specific target. So we can replace the entire loop with a single call to errors.Is:

func spuriousENOTAVAIL(err error) bool {
    return errors.Is(err, syscall.EADDRNOTAVAIL)
}

At this point we probably wouldn’t even define the function; it would be equally clear, and simpler, to call errors.Is directly at the call sites.

Go 1.13 also introduces a function errors.As that unwraps until it finds a specific implementation type.

If you want to write code that works with arbitrarily-wrapped errors, errors.Is is the wrapper-aware version of an error equality check:

err == target


errors.Is(err, target)

And errors.As is the wrapper-aware version of an error type assertion:

target, ok := err.(*Type)
if ok {
    ...
}


var target *Type
if errors.As(err, &target) {
    ...
}

To Unwrap Or Not To Unwrap?

Whether to make it possible to unwrap an error is an API decision, the same way that whether to export a struct field is an API decision. Sometimes it is appropriate to expose that detail to calling code, and sometimes it isn’t. When it is, implement Unwrap. When it isn’t, don’t implement Unwrap.

Until now, fmt.Errorf has not exposed an underlying error formatted with %v to caller inspection. That is, the result of fmt.Errorf has not been possible to unwrap. Consider this example:

// errors.Unwrap(err2) == nil
// err1 is not available (same as earlier Go versions)
err2 := fmt.Errorf("connect: %v", err1)

If err2 is returned to a caller, that caller has never had any way to open up err2 and access err1. We preserved that property in Go 1.13.

For the times when you do want to allow unwrapping the result of fmt.Errorf, we also added a new printing verb %w, which formats like %v, requires an error value argument, and makes the resulting error’s Unwrap method return that argument. In our example, suppose we replace %v with %w:

// errors.Unwrap(err4) == err3
// (%w is new in Go 1.13)
err4 := fmt.Errorf("connect: %w", err3)

Now, if err4 is returned to a caller, the caller can use Unwrap to retrieve err3.

It is important to note that absolute rules like “always use %v (or never implement Unwrap)” or “always use %w (or always implement Unwrap)” are as wrong as absolute rules like “never export struct fields” or “always export struct fields.” Instead, the right decision depends on whether callers should be able to inspect and depend on the additional information that using %w or implementing Unwrap exposes.

As an illustration of this point, every error-wrapping type in the standard library that already had an exported Err field now also has an Unwrap method returning that field, but implementations with unexported error fields do not, and existing uses of fmt.Errorf with %v still use %v, not %w.

Error Value Printing (Abandoned)

Along with the design draft for Unwrap, we also published a design draft for an optional method for richer error printing, including stack frame information and support for localized, translated errors.

// Optional method for error implementations
type Formatter interface {
    Format(p Printer) (next error)
}

// Interface passed to Format
type Printer interface {
    Print(args ...interface{})
    Printf(format string, args ...interface{})
    Detail() bool
}

This one is not as simple as Unwrap, and I won’t go into the details here. As we discussed the design with the Go community over the winter, we learned that the design wasn’t simple enough. It was too hard for individual error types to implement, and it did not help existing programs enough. On balance, it did not simplify Go development.

As a result of this community discussion, we abandoned this printing design.

Error Syntax

That was error values. Let’s look briefly at error syntax, another abandoned experiment.

Here is some code from compress/lzw/writer.go in the standard library:

// Write the savedCode if valid.
if e.savedCode != invalidCode {
    if err := e.write(e, e.savedCode); err != nil {
        return err
    }
    if err := e.incHi(); err != nil && err != errOutOfCodes {
        return err
    }
}

// Write the eof code.
eof := uint32(1)<<e.litWidth + 1
if err := e.write(e, eof); err != nil {
    return err
}

At a glance, this code is about half error checks. My eyes glaze over when I read it. And we know that code that is tedious to write and tedious to read is easy to misread, making it a good home for hard-to-find bugs. For example, one of these three error checks is not like the others, a fact that is easy to miss on a quick skim. If you were debugging this code, how long would it take to notice that?

At Gophercon last year we presented a draft design for a new control flow construct marked by the keyword check. Check consumes the error result from a function call or expression. If the error is non-nil, the check returns that error. Otherwise the check evaluates to the other results from the call. We can use check to simplify the lzw code:

// Write the savedCode if valid.
if e.savedCode != invalidCode {
    check e.write(e, e.savedCode)
    if err := e.incHi(); err != errOutOfCodes {
        check err
    }
}

// Write the eof code.
eof := uint32(1)<<e.litWidth + 1
check e.write(e, eof)

This version of the same code uses check, which removes four lines of code and more importantly highlights that the call to e.incHi is allowed to return errOutOfCodes.

Maybe most importantly, the design also allowed defining error handler blocks to be run when later checks failed. That would let you write shared context-adding code just once, like in this snippet:

handle err {
    err = fmt.Errorf("closing writer: %w", err)
}

// Write the savedCode if valid.
if e.savedCode != invalidCode {
    check e.write(e, e.savedCode)
    if err := e.incHi(); err != errOutOfCodes {
        check err
    }
}

// Write the eof code.
eof := uint32(1)<<e.litWidth + 1
check e.write(e, eof)

In essence, check was a short way to write the if statement, and handle was like defer but only for error return paths. In contrast to exceptions in other languages, this design retained Go’s important property that every potential failing call was marked explicitly in the code, now using the check keyword instead of if err != nil.

The big problem with this design was that handle overlapped too much, and in confusing ways, with defer.

In May we posted a new design with three simplifications: to avoid the confusion with defer, the design dropped handle in favor of just using defer; to match a similar idea in Rust and Swift, the design renamed check to try; and to allow experimentation in a way that existing parsers like gofmt would recognize, it changed check (now try) from a keyword to a built-in function.

Now the same code would look like this:

defer errd.Wrapf(&err, "closing writer")

// Write the savedCode if valid.
if e.savedCode != invalidCode {
    try(e.write(e, e.savedCode))
    if err := e.incHi(); err != errOutOfCodes {
        try(err)
    }
}

// Write the eof code.
eof := uint32(1)<<e.litWidth + 1
try(e.write(e, eof))

We spent most of June discussing this proposal publicly on GitHub.

The fundamental idea of check or try was to shorten the amount of syntax repeated at each error check, and in particular to remove the return statement from view, keeping the error check explicit and better highlighting interesting variations. One interesting point raised during the public feedback discussion, however, was that without an explicit if statement and return, there’s nowhere to put a debugging print, there’s nowhere to put a breakpoint, and there’s no code to show as unexecuted in code coverage results. The benefits we were after came at the cost of making these situations more complex. On balance, from this as well as other considerations, it was not at all clear that the overall result would be simpler Go development, so we abandoned this experiment.

That’s everything about error handling, which was one of the main focuses for this year.

Generics
Now for something a little less controversial: generics.

The second big topic we identified for Go 2 was some kind of way to write code with type parameters. This would enable writing generic data structures and also writing generic functions that work with any kind of slice, or any kind of channel, or any kind of map. For example, here is a generic channel filter:

// Filter copies values from c to the returned channel,
// passing along only those values satisfying f.
func Filter(type value)(f func(value) bool, c <-chan value) <-chan value {
    out := make(chan value)
    go func() {
        for v := range c {
            if f(v) {
                out <- v
            }
        }
        close(out)
    }()
    return out
}

We’ve been thinking about generics since work on Go began, and we wrote and rejected our first concrete design in 2010. We wrote and rejected three more designs by the end of 2013. Four abandoned experiments, but not failed experiments: we learned from them, as we learned from check and try. Each time, we learned that the path to Go 2 is not in that exact direction, and we noticed other directions that might be interesting to explore. But by 2013 we had decided that we needed to focus on other concerns, so we put the entire topic aside for a few years.

Last year we started exploring and experimenting again, and we presented a new design, based on the idea of a contract, at Gophercon last summer. We’ve continued to experiment and simplify, and we’ve been working with programming language theory experts to understand the design better.

Overall, I am hopeful that we’re headed in a good direction, toward a design that will simplify Go development. Even so, we might find that this design doesn’t work either. We might have to abandon this experiment and adjust our path based on what we learned. We’ll find out.

At Gophercon 2019, Ian Lance Taylor talked about why we might want to add generics to Go and briefly previewed the latest design draft. For details, see his blog post “Why Generics?”.


The third big topic we identified for Go 2 was dependency management.

In 2010 we published a tool called goinstall, which we called “an experiment in package installation.” It downloaded dependencies and stored them in your Go distribution tree, in GOROOT.

As we experimented with goinstall, we learned that the Go distribution and the installed packages should be kept separate, so that it was possible to change to a new Go distribution without losing all your Go packages. So in 2011 we introduced GOPATH, an environment variable that specified where to look for packages not found in the main Go distribution.

Adding GOPATH created more places for Go packages but simplified Go development overall, by separating your Go distribution from your Go libraries.


The goinstall experiment intentionally left out an explicit concept of package versioning. Instead, goinstall always downloaded the latest copy. We did this so we could focus on the other design problems for package installation.

Goinstall became go get as part of Go 1. When people asked about versions, we encouraged them to experiment by creating additional tools, and they did. And we encouraged package authors to provide their users with the same backwards compatibility we did for the Go 1 libraries. Quoting the Go FAQ:

“Packages intended for public use should try to maintain backwards compatibility as they evolve.

If different functionality is required, add a new name instead of changing an old one.

If a complete break is required, create a new package with a new import path.”

This convention simplifies the overall experience of using a package by restricting what authors can do: avoid breaking changes to APIs; give new functionality a new name; and give a whole new package design a new import path.

Of course, people kept experimenting. One of the most interesting experiments was started by Gustavo Niemeyer. He created a Git redirector called gopkg.in, which provided different import paths for different API versions, to help package authors follow the convention of giving a new package design a new import path.

For example, the Go source code in the GitHub repository go-yaml/yaml has different APIs in the v1 and v2 semantic version tags. The gopkg.in server provides these with the different import paths gopkg.in/yaml.v1 and gopkg.in/yaml.v2.

The convention of providing backwards compatibility, so that a newer version of a package can be used in place of an older version, is what makes go get’s very simple rule—“always download the latest copy”—work well even today.

Versioning and Vendoring

But in production contexts you need to be more precise about dependency versions, to make builds reproducible.

Many people experimented with what that should look like, building tools that served their needs, including Keith Rarick’s goven (2012) and godep (2013), Matt Butcher’s glide (2014), and Dave Cheney’s gb (2015). All of these tools use the model that you copy dependency packages into your own source control repository. The exact mechanisms used to make those packages available for import varied, but they were all more complex than it seemed they should be.

After a community-wide discussion, we adopted a proposal by Keith Rarick to add explicit support for referring to copied dependencies without GOPATH tricks. This was simplifying by reshaping: as with addToList and append, these tools were already implementing the concept, but it was more awkward than it needed to be. Adding explicit support for vendor directories made these uses simpler overall.

Shipping vendor directories in the go command led to more experimentation with vendoring itself, and we realized that we had introduced a few problems. The most serious was that we lost package uniqueness. Before, during any given build, an import path might appear in lots of different packages, and all the imports referred to the same target. Now with vendoring, the same import path in different packages might refer to different vendored copies of the package, all of which would appear in the final resulting binary.

At the time, we didn’t have a name for this property: package uniqueness. It was just how the GOPATH model worked. We didn’t completely appreciate it until it went away.

There is a parallel here with the check and try error syntax proposals. In that case, we were relying on how the visible return statement worked in ways we didn’t appreciate until we considered removing it.

When we added vendor directory support, there were many different tools for managing dependencies. We thought that a clear agreement about the format of vendor directories and vendoring metadata would allow the various tools to interoperate, the same way that agreement about how Go programs are stored in text files enables interoperation between the Go compiler, text editors, and tools like goimports and gorename.

This turned out to be naively optimistic. The vendoring tools all differed in subtle semantic ways. Interoperation would require changing them all to agree about the semantics, likely breaking their respective users. Convergence did not happen.


At Gophercon in 2016, we started an effort to define a single tool to manage dependencies. As part of that effort, we conducted surveys with many different kinds of users to understand what they needed as far as dependency management, and a team started work on a new tool, which became dep.

Dep aimed to be able to replace all the existing dependency management tools. The goal was to simplify by reshaping the existing different tools into a single one. It partly accomplished that. Dep also restored package uniqueness for its users, by having only one vendor directory at the top of the project tree.

But dep also introduced a serious problem that took us a while to fully appreciate. The problem was that dep embraced a design choice from glide, to support and encourage incompatible changes to a given package without changing the import path.

Here is an example. Suppose you are building your own program, and you need to have a configuration file, so you use version 2 of a popular Go YAML package:
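Assuming the gopkg.in paths described earlier, that import would look like:

```go
import "gopkg.in/yaml.v2" // the version 2 API
```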

Now suppose your program imports the Kubernetes client. It turns out that Kubernetes uses YAML extensively, and it uses version 1 of the same popular package:
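Again assuming the gopkg.in paths, the Kubernetes code would import:

```go
import "gopkg.in/yaml.v1" // the version 1 API
```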

Version 1 and version 2 have incompatible APIs, but they also have different import paths, so there is no ambiguity about which is meant by a given import. Kubernetes gets version 1, your config parser gets version 2, and everything works.

Dep abandoned this model. Version 1 and version 2 of the yaml package would now have the same import path, producing a conflict. Using the same import path for two incompatible versions, combined with package uniqueness, makes it impossible to build this program that you could build before:
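Here is a sketch of such a program, with illustrative package paths, pulling in both versions (one directly, one through the Kubernetes client):

```go
package main

import (
    "gopkg.in/yaml.v2"              // your config parser uses the v2 API
    _ "k8s.io/client-go/kubernetes" // transitively uses the v1 API
)
```

Under GOPATH the two import paths name two distinct packages, so both copies coexist in one build; once both versions share a single import path, package uniqueness allows only one of them.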

It took us a while to understand this problem, because we had been applying the “new API means new import path” convention for so long that we took it for granted. The dep experiment helped us appreciate that convention better, and we gave it a name: the import compatibility rule:

“If an old package and a new package have the same import path, the new package must be backwards compatible with the old package.”

Go Modules

We took what worked well in the dep experiment and what we learned about what didn’t work well, and we experimented with a new design, called vgo. In vgo, packages followed the import compatibility rule, so that we could provide package uniqueness and still not break builds like the one we just looked at. This let us simplify other parts of the design as well.

Besides restoring the import compatibility rule, another important part of the vgo design was to give the concept of a group of packages a name and to allow that grouping to be separated from source code repository boundaries. The name of a group of Go packages is a module, so we refer to the system now as Go modules.
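As a sketch, a module is named and its dependency versions recorded in a go.mod file at the root of the module’s file tree (the module path and version here are hypothetical):

```
module example.com/hello

go 1.13

require gopkg.in/yaml.v2 v2.2.2
```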

Go modules are now integrated with the go command, which avoids needing to copy around vendor directories at all.

Replacing GOPATH

With Go modules comes the end of GOPATH as a global name space. Nearly all the hard work of converting existing Go usage and tools to modules is caused by this one change: moving away from GOPATH.

The fundamental idea of GOPATH is that the GOPATH directory tree is the global source of truth for what versions are being used, and the versions being used don’t change as you move around between directories. But the global GOPATH mode is in direct conflict with the production requirement of per-project reproducible builds, which itself simplifies the Go development and deployment experience in many important ways.

Per-project reproducible builds means that when you are working in a checkout of project A, you get the same set of dependency versions that the other developers of project A get at that commit, as defined by the go.mod file. When you switch to working in a checkout of project B, now you get that project’s chosen dependency versions, the same set that the other developers of project B get. But those are likely different from project A. The set of dependency versions changing when you move from project A to project B is necessary to keep your development in sync with that of the other developers on A and on B. There can’t be a single global GOPATH anymore.

Most of the complexity of adopting modules arises directly from the loss of the one global GOPATH. Where is the source code for a package? Before, the answer depended only on your GOPATH environment variable, which most people rarely changed. Now, the answer depends on what project you are working on, which may change often. Everything needs updating for this new convention.

Most development tools use the go/build package to find and load Go source code. We’ve kept that package working, but the API did not anticipate modules, and the workarounds we added to avoid API changes are slower than we’d like. We’ve published a replacement, golang.org/x/tools/go/packages, and developer tools should now use that instead. It supports both GOPATH and Go modules, and it is faster and easier to use. In a release or two we may move it into the standard library, but for now golang.org/x/tools/go/packages is stable and ready for use.

Go Module Proxies

One of the ways modules simplify Go development is by separating the concept of a group of packages from the underlying source control repository where they are stored.

When we talked to Go users about dependencies, almost everyone using Go at their companies asked how to route go get package fetches through their own servers, to better control what code can be used. And even open-source developers were concerned about dependencies disappearing or changing unexpectedly, breaking their builds. Before modules, users had attempted complex solutions to these problems, including intercepting the version control commands that the go command runs.

The Go modules design makes it easy to introduce the idea of a module proxy that can be asked for a specific module version.
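A module proxy is just an HTTP server answering a small set of GET requests; here is a sketch of the protocol (run go help goproxy for the authoritative description):

```
GET $GOPROXY/<module>/@v/list            # known versions, one per line
GET $GOPROXY/<module>/@v/<version>.info  # JSON metadata for a version
GET $GOPROXY/<module>/@v/<version>.mod   # the go.mod file for a version
GET $GOPROXY/<module>/@v/<version>.zip   # the source archive for a version
```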

Companies can now easily run their own module proxy, with custom rules about what is allowed and where cached copies are stored. The open-source Athens project has built just such a proxy, and Aaron Schlesinger gave a talk about it at Gophercon 2019. (We’ll add a link here when the video becomes available.)

And for individual developers and open source teams, the Go team at Google has launched a proxy that serves as a public mirror of all open-source Go packages, and Go 1.13 will use that proxy by default when in module mode. Katie Hockman gave a talk about this system at Gophercon 2019. (We’ll add a link here when the video becomes available.)
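Proxy use is configured through environment variables; a sketch of common Go 1.13 settings (the private-module pattern here is hypothetical):

```
# The Go 1.13 default: try the public mirror, fall back to direct fetches.
GOPROXY=https://proxy.golang.org,direct

# Opt out and always fetch directly from version control:
GOPROXY=direct

# Skip the proxy and checksum database for private modules:
GOPRIVATE=*.corp.example.com
```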

Go Modules Status

Go 1.11 introduced modules as an experimental, opt-in preview. We keep experimenting and simplifying. Go 1.12 shipped improvements, and Go 1.13 will ship more improvements.

Modules are now at the point where we believe that they will serve most users, but we aren’t ready to shut down GOPATH just yet. We will keep experimenting, simplifying, and revising.

We fully recognize that the Go user community has built up almost a decade of experience, tooling, and workflows around GOPATH, and it will take a while to convert all of that to Go modules.

But again, we think that modules will now work very well for most users, and I encourage you to take a look when Go 1.13 is released.

As one data point, the Kubernetes project has a lot of dependencies, and they have migrated to using Go modules to manage them. You probably can too. And if you can’t, please let us know what’s not working for you or what’s too complex, by filing a bug report, and we will experiment and simplify.


Error handling, generics, and dependency management are going to take a few more years at least, and we’re going to focus on them for now. Error handling is close to done, modules will be next after that, and maybe generics after that.

But suppose we look a couple years out, to when we are done experimenting and simplifying and have shipped error handling, modules, and generics. Then what? It’s very difficult to predict the future, but I think that once these three have shipped, that may mark the start of a new quiet period for major changes. Our focus at that point will likely shift to simplifying Go development with improved tools.

Some of the tool work is already underway, so this post finishes by looking at that.

While we helped update all the Go community’s existing tools to understand Go modules, we noticed that having a ton of development helper tools that each do one small job is not serving users well. The individual tools are too hard to combine, too slow to invoke, and too different to use.

We began an effort to unify the most commonly-required development helpers into a single tool, now called gopls (pronounced “go, please”). Gopls speaks the Language Server Protocol, LSP, and works with any integrated development environment or text editor with LSP support, which is essentially everything at this point.

Gopls marks an expansion in focus for the Go project, from delivering standalone compiler-like, command-line tools like go vet or gorename to also delivering a complete IDE service. Rebecca Stambler gave a talk with more details about gopls and IDEs at Gophercon 2019. (We’ll add a link here when the video becomes available.)

After gopls, we also have ideas for reviving go fix in an extensible way and for making go vet even more helpful.


So there’s the path to Go 2. We will experiment and simplify. And experiment and simplify. And ship. And experiment and simplify. And do it all again. It may look or even feel like the path goes around in circles. But each time we experiment and simplify we learn a little more about what Go 2 should look like and move another step closer to it. Even abandoned experiments like try or our first four generics designs or dep are not wasted time. They help us learn what needs to be simplified before we can ship, and in some cases they help us better understand something we took for granted.

At some point we will realize we have experimented enough, and simplified enough, and shipped enough, and we will have Go 2.

Thanks to all of you in the Go community for helping us experiment and simplify and ship and find our way on this path.

See the index for more articles.