The Go Blog

Working with Errors in Go 1.13

Damien Neil and Jonathan Amsterdam
17 October 2019

Introduction

Go’s treatment of errors as values has served us well over the last decade. Although the standard library’s support for errors has been minimal—just the errors.New and fmt.Errorf functions, which produce errors that contain only a message—the built-in error interface allows Go programmers to add whatever information they desire. All it requires is a type that implements an Error method:

type QueryError struct {
    Query string
    Err   error
}

func (e *QueryError) Error() string { return e.Query + ": " + e.Err.Error() }

Error types like this one are ubiquitous, and the information they store varies widely, from timestamps to filenames to server addresses. Often, that information includes another, lower-level error to provide additional context.

The pattern of one error containing another is so pervasive in Go code that, after extensive discussion, Go 1.13 added explicit support for it. This post describes the additions to the standard library that provide that support: three new functions in the errors package, and a new formatting verb for fmt.Errorf.

Before describing the changes in detail, let's review how errors are examined and constructed in previous versions of the language.

Errors before Go 1.13

Examining errors

Go errors are values. Programs make decisions based on those values in a few ways. The most common is to compare an error to nil to see if an operation failed.

if err != nil {
    // something went wrong
}

Sometimes we compare an error to a known sentinel value, to see if a specific error has occurred.

var ErrNotFound = errors.New("not found")

if err == ErrNotFound {
    // something wasn't found
}

An error value may be of any type which satisfies the language-defined error interface. A program can use a type assertion or type switch to view an error value as a more specific type.

type NotFoundError struct {
    Name string
}

func (e *NotFoundError) Error() string { return e.Name + ": not found" }

if e, ok := err.(*NotFoundError); ok {
    // e.Name wasn't found
}

Adding information

Frequently a function passes an error up the call stack while adding information to it, like a brief description of what was happening when the error occurred. A simple way to do this is to construct a new error that includes the text of the previous one:

if err != nil {
    return fmt.Errorf("decompress %v: %v", name, err)
}

Creating a new error with fmt.Errorf discards everything from the original error except the text. As we saw above with QueryError, we may sometimes want to define a new error type that contains the underlying error, preserving it for inspection by code. Here is QueryError again:

type QueryError struct {
    Query string
    Err   error
}

Programs can look inside a *QueryError value to make decisions based on the underlying error. You'll sometimes see this referred to as "unwrapping" the error.

if e, ok := err.(*QueryError); ok && e.Err == ErrPermission {
    // query failed because of a permission problem
}

The os.PathError type in the standard library is another example of one error which contains another.

Errors in Go 1.13

The Unwrap method

Go 1.13 introduces new features to the errors and fmt standard library packages to simplify working with errors that contain other errors. The most significant of these is a convention rather than a change: an error which contains another may implement an Unwrap method returning the underlying error. If e1.Unwrap() returns e2, then we say that e1 wraps e2, and that you can unwrap e1 to get e2.

Following this convention, we can give the QueryError type above an Unwrap method that returns its contained error:

func (e *QueryError) Unwrap() error { return e.Err }

The result of unwrapping an error may itself have an Unwrap method; we call the sequence of errors produced by repeated unwrapping the error chain.

Examining errors with Is and As

The Go 1.13 errors package includes two new functions for examining errors: Is and As.

The errors.Is function compares an error to a value.

// Similar to:
//   if err == ErrNotFound { … }
if errors.Is(err, ErrNotFound) {
    // something wasn't found
}

The As function tests whether an error is a specific type.

// Similar to:
//   if e, ok := err.(*QueryError); ok { … }
var e *QueryError
if errors.As(err, &e) {
    // err is a *QueryError, and e is set to the error's value
}

In the simplest case, the errors.Is function behaves like a comparison to a sentinel error, and the errors.As function behaves like a type assertion. When operating on wrapped errors, however, these functions consider all the errors in a chain. Let's look again at the example from above of unwrapping a QueryError to examine the underlying error:

if e, ok := err.(*QueryError); ok && e.Err == ErrPermission {
    // query failed because of a permission problem
}

Using the errors.Is function, we can write this as:

if errors.Is(err, ErrPermission) {
    // err, or some error that it wraps, is a permission problem
}

The errors package also includes a new Unwrap function which returns the result of calling an error's Unwrap method, or nil when the error has no Unwrap method. It is usually better to use errors.Is or errors.As, however, since these functions will examine the entire chain in a single call.
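
As a minimal sketch, a program that wants to inspect every error in a chain can walk it with errors.Unwrap directly, though errors.Is and errors.As cover most needs:

for e := err; e != nil; e = errors.Unwrap(e) {
    // Examine or log each error in the chain, outermost first.
    fmt.Println(e)
}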

Wrapping errors with %w

As mentioned earlier, it is common to use the fmt.Errorf function to add additional information to an error.

if err != nil {
    return fmt.Errorf("decompress %v: %v", name, err)
}

In Go 1.13, the fmt.Errorf function supports a new %w verb. When this verb is present, the error returned by fmt.Errorf will have an Unwrap method returning the argument of %w, which must be an error. In all other ways, %w is identical to %v.

if err != nil {
    // Return an error which unwraps to err.
    return fmt.Errorf("decompress %v: %w", name, err)
}

Wrapping an error with %w makes it available to errors.Is and errors.As:

err := fmt.Errorf("access denied: %w", ErrPermission)
...
if errors.Is(err, ErrPermission) ...

Whether to Wrap

When adding additional context to an error, either with fmt.Errorf or by implementing a custom type, you need to decide whether the new error should wrap the original. There is no single answer to this question; it depends on the context in which the new error is created. Wrap an error to expose it to callers. Do not wrap an error when doing so would expose implementation details.

As one example, imagine a Parse function which reads a complex data structure from an io.Reader. If an error occurs, we wish to report the line and column number at which it occurred. If the error occurs while reading from the io.Reader, we will want to wrap that error to allow inspection of the underlying problem. Since the caller provided the io.Reader to the function, it makes sense to expose the error produced by it.

In contrast, a function which makes several calls to a database probably should not return an error which unwraps to the result of one of those calls. If the database used by the function is an implementation detail, then exposing these errors is a violation of abstraction. For example, if the LookupUser function of your package pkg uses Go's database/sql package, then it may encounter a sql.ErrNoRows error. If you return that error with fmt.Errorf("accessing DB: %v", err) then a caller cannot look inside to find the sql.ErrNoRows. But if the function instead returns fmt.Errorf("accessing DB: %w", err), then a caller could reasonably write

err := pkg.LookupUser(...)
if errors.Is(err, sql.ErrNoRows) …

At that point, the function must always return sql.ErrNoRows if you don't want to break your clients, even if you switch to a different database package. In other words, wrapping an error makes that error part of your API. If you don't want to commit to supporting that error as part of your API in the future, you shouldn't wrap the error.

It’s important to remember that whether you wrap or not, the error text will be the same. A person trying to understand the error will have the same information either way; the choice to wrap is about whether to give programs additional information so they can make more informed decisions, or to withhold that information to preserve an abstraction layer.

Customizing error tests with Is and As methods

The errors.Is function examines each error in a chain for a match with a target value. By default, an error matches the target if the two are equal. In addition, an error in the chain may declare that it matches a target by implementing an Is method.

As an example, consider this error inspired by the Upspin error package which compares an error against a template, considering only fields which are non-zero in the template:

type Error struct {
    Path string
    User string
}

func (e *Error) Is(target error) bool {
    t, ok := target.(*Error)
    if !ok {
        return false
    }
    return (e.Path == t.Path || t.Path == "") &&
           (e.User == t.User || t.User == "")
}

if errors.Is(err, &Error{User: "someuser"}) {
    // err's User field is "someuser".
}

The errors.As function similarly consults an As method when present.

Errors and package APIs

A package which returns errors (and most do) should describe what properties of those errors programmers may rely on. A well-designed package will also avoid returning errors with properties that should not be relied upon.

The simplest specification is to say that operations either succeed or fail, returning a nil or non-nil error value respectively. In many cases, no further information is needed.

If we wish a function to return an identifiable error condition, such as "item not found," we might return an error wrapping a sentinel.

var ErrNotFound = errors.New("not found")

// FetchItem returns the named item.
//
// If no item with the name exists, FetchItem returns an error
// wrapping ErrNotFound.
func FetchItem(name string) (*Item, error) {
    if itemNotFound(name) {
        return nil, fmt.Errorf("%q: %w", name, ErrNotFound)
    }
    // ...
}
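
A caller can then rely on the documented property alone, testing it with errors.Is; a brief sketch:

item, err := FetchItem(name)
if errors.Is(err, ErrNotFound) {
    // no item with this name exists
}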

There are other existing patterns for providing errors which can be semantically examined by the caller, such as directly returning a sentinel value, a specific type, or a value which can be examined with a predicate function.
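
One familiar instance of the predicate-function style is os.IsNotExist from the standard library; for example:

if _, err := os.Stat(path); os.IsNotExist(err) {
    // nothing exists at path
}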

In all cases, care should be taken not to expose internal details to the user. As we touched on in "Whether to Wrap" above, when you return an error from another package you should convert the error to a form that does not expose the underlying error, unless you are willing to commit to returning that specific error in the future.

f, err := os.Open(filename)
if err != nil {
    // The *os.PathError returned by os.Open is an internal detail.
    // To avoid exposing it to the caller, repackage it as a new
    // error with the same text. We use the %v formatting verb, since
    // %w would permit the caller to unwrap the original *os.PathError.
    return fmt.Errorf("%v", err)
}

If a function is defined as returning an error wrapping some sentinel or type, do not return the underlying error directly.

var ErrPermission = errors.New("permission denied")

// DoSomething returns an error wrapping ErrPermission if the user
// does not have permission to do something.
func DoSomething() error {
    if !userHasPermission() {
        // If we return ErrPermission directly, callers might come
        // to depend on the exact error value, writing code like this:
        //
        //     if err := pkg.DoSomething(); err == pkg.ErrPermission { … }
        //
        // This will cause problems if we want to add additional
        // context to the error in the future. To avoid this, we
        // return an error wrapping the sentinel so that users must
        // always unwrap it:
        //
        //     if err := pkg.DoSomething(); errors.Is(err, pkg.ErrPermission) { ... }
        return fmt.Errorf("%w", ErrPermission)
    }
    // ...
}

Conclusion

Although the changes we’ve discussed amount to just three functions and a formatting verb, we hope they will go a long way toward improving how errors are handled in Go programs. We expect that wrapping to provide additional context will become commonplace, helping programs to make better decisions and helping programmers to find bugs more quickly.

As Russ Cox said in his GopherCon 2019 keynote, on the path to Go 2 we experiment, simplify and ship. Now that we’ve shipped these changes, we look forward to the experiments that will follow.

Publishing Go Modules

Tyler Bui-Palsulich
26 September 2019

Introduction

This post is part 3 in a series.

This post discusses how to write and publish modules so other modules can depend on them.

Please note: this post covers development up to and including v1. A future article will cover developing a module at v2 and beyond, which requires changing the module's path.

This post uses Git in examples. Mercurial, Bazaar, and others are supported as well.

Project setup

For this post, you'll need an existing project to use as an example. So, start with the files from the end of the Using Go Modules article:

$ cat go.mod
module example.com/hello

go 1.12

require rsc.io/quote/v3 v3.1.0

$ cat go.sum
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c h1:qgOY6WgZOaTkIIMiVjBQcw93ERBE4m30iBm00nkL0i8=
golang.org/x/text v0.0.0-20170915032832-14c0d48ead0c/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
rsc.io/quote/v3 v3.1.0 h1:9JKUTTIUgS6kzR9mK1YuGKv6Nl+DijDNIc0ghT58FaY=
rsc.io/quote/v3 v3.1.0/go.mod h1:yEA65RcK8LyAZtP9Kv3t0HmxON59tX3rD+tICJqUlj0=
rsc.io/sampler v1.3.0 h1:7uVkIFmeBqHfdjD+gZwtXXI+RODJ2Wc4O7MPEh/QiW4=
rsc.io/sampler v1.3.0/go.mod h1:T1hPZKmBbMNahiBKFy5HrXp6adAjACjK9JXDnKaTXpA=

$ cat hello.go
package hello

import "rsc.io/quote/v3"

func Hello() string {
    return quote.HelloV3()
}

func Proverb() string {
    return quote.Concurrency()
}

$ cat hello_test.go
package hello

import (
    "testing"
)

func TestHello(t *testing.T) {
    want := "Hello, world."
    if got := Hello(); got != want {
        t.Errorf("Hello() = %q, want %q", got, want)
    }
}

func TestProverb(t *testing.T) {
    want := "Concurrency is not parallelism."
    if got := Proverb(); got != want {
        t.Errorf("Proverb() = %q, want %q", got, want)
    }
}

$

Next, create a new git repository and add an initial commit. If you're publishing your own project, be sure to include a LICENSE file. Change to the directory containing the go.mod then create the repo:

$ git init
$ git add LICENSE go.mod go.sum hello.go hello_test.go
$ git commit -m "hello: initial commit"
$

Semantic versions and modules

Every required module in a go.mod has a semantic version, the minimum version of that dependency to use to build the module.

A semantic version has the form vMAJOR.MINOR.PATCH.

  • Increment the MAJOR version when you make a backwards incompatible change to the public API of your module. This should only be done when absolutely necessary.
  • Increment the MINOR version when you make a backwards compatible change to the API, like changing dependencies or adding a new function, method, struct field, or type.
  • Increment the PATCH version after making minor changes that don't affect your module's public API or dependencies, like fixing a bug.

You can specify pre-release versions by appending a hyphen and dot separated identifiers (for example, v1.0.1-alpha or v2.2.2-beta.2). Normal releases are preferred by the go command over pre-release versions, so users must ask for pre-release versions explicitly (for example, go get example.com/hello@v1.0.1-alpha) if your module has any normal releases.

v0 major versions and pre-release versions do not guarantee backwards compatibility. They let you refine your API before making stability commitments to your users. However, v1 major versions and beyond require backwards compatibility within that major version.

The version referenced in a go.mod may be an explicit release tagged in the repository (for example, v1.5.2), or it may be a pseudo-version based on a specific commit (for example, v0.0.0-20170915032832-14c0d48ead0c). Pseudo-versions are a special type of pre-release version. Pseudo-versions are useful when a user needs to depend on a project that has not published any semantic version tags, or develop against a commit that hasn't been tagged yet, but users should not assume that pseudo-versions provide a stable or well-tested API. Tagging your modules with explicit versions signals to your users that specific versions are fully tested and ready to use.

Once you start tagging your repo with versions, it's important to keep tagging new releases as you develop your module. When users request a new version of your module (with go get -u or go get example.com/hello), the go command will choose the greatest semantic release version available, even if that version is several years old and many changes behind the primary branch. Continuing to tag new releases will make your ongoing improvements available to your users.

Do not delete version tags from your repo. If you find a bug or a security issue with a version, release a new version. If people depend on a version that you have deleted, their builds may fail. Similarly, once you release a version, do not change or overwrite it. The module mirror and checksum database store modules, their versions, and signed cryptographic hashes to ensure that the build of a given version remains reproducible over time.

v0: the initial, unstable version

Let's tag the module with a v0 semantic version. A v0 version does not make any stability guarantees, so nearly all projects should start with v0 as they refine their public API.

Tagging a new version has a few steps:

1. Run go mod tidy, which removes any dependencies the module might have accumulated that are no longer necessary.

2. Run go test ./... a final time to make sure everything is working.

3. Tag the project with a new version using git tag.

4. Push the new tag to the origin repository.

$ go mod tidy
$ go test ./...
ok      example.com/hello       0.015s
$ git add go.mod go.sum hello.go hello_test.go
$ git commit -m "hello: changes for v0.1.0"
$ git tag v0.1.0
$ git push origin v0.1.0
$

Now other projects can depend on v0.1.0 of example.com/hello. For your own module, you can run go list -m example.com/hello@v0.1.0 to confirm the latest version is available (this example module does not exist, so no versions are available). If you don't see the latest version immediately and you're using the Go module proxy (the default since Go 1.13), try again in a few minutes to give the proxy time to load the new version.

If you add to the public API, make a breaking change to a v0 module, or upgrade the minor version of one of your dependencies, increment the MINOR version for your next release. For example, the next release after v0.1.0 would be v0.2.0.

If you fix a bug in an existing version, increment the PATCH version. For example, the next release after v0.1.0 would be v0.1.1.

v1: the first stable version

Once you are absolutely sure your module's API is stable, you can release v1.0.0. A v1 major version communicates to users that no incompatible changes will be made to the module's API. They can upgrade to new v1 minor and patch releases, and their code should not break. Function and method signatures will not change, exported types will not be removed, and so on. If there are changes to the API, they will be backwards compatible (for example, adding a new field to a struct) and will be included in a new minor release. If there are bug fixes (for example, a security fix), they will be included in a patch release (or as part of a minor release).

Sometimes, maintaining backwards compatibility can lead to awkward APIs. That's OK. An imperfect API is better than breaking users' existing code.

The standard library's strings package is a prime example of maintaining backwards compatibility at the cost of API consistency.

  • Split slices a string into all substrings separated by a separator and returns a slice of the substrings between those separators.
  • SplitN can be used to control the number of substrings to return.

However, Replace took a count of how many instances of the string to replace from the beginning (unlike Split).

Given Split and SplitN, you would expect functions like Replace and ReplaceN. But, we couldn't change the existing Replace without breaking callers, which we promised not to do. So, in Go 1.12, we added a new function, ReplaceAll. The resulting API is a little odd, since Split and Replace behave differently, but that inconsistency is better than a breaking change.
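
To make the inconsistency concrete, here is a small sketch using those functions:

// Split and SplitN: the form that operates on all instances has the
// shorter name.
parts := strings.Split("a,b,c", ",")     // ["a" "b" "c"]
first := strings.SplitN("a,b,c", ",", 2) // ["a" "b,c"]

// Replace and ReplaceAll: the form that operates on all instances has
// the longer name, because Replace already existed and takes a count.
one := strings.Replace("oink oink oink", "oink", "moo", 1) // "moo oink oink"
all := strings.ReplaceAll("oink oink oink", "oink", "moo") // "moo moo moo"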

Let's say you're happy with the API of example.com/hello and you want to release v1 as the first stable version.

Tagging v1 uses the same process as tagging a v0 version: run go mod tidy and go test ./..., tag the version, and push the tag to the origin repository:

$ go mod tidy
$ go test ./...
ok      example.com/hello       0.015s
$ git add go.mod go.sum hello.go hello_test.go
$ git commit -m "hello: changes for v1.0.0"
$ git tag v1.0.0
$ git push origin v1.0.0
$

At this point, the v1 API of example.com/hello is solidified. This communicates to everyone that our API is stable and they should feel comfortable using it.

Conclusion

This post walked through the process of tagging a module with semantic versions and when to release v1. A future post will cover how to maintain and publish modules at v2 and beyond.

To provide feedback and help shape the future of dependency management in Go, please send us bug reports or experience reports.

Thanks for all your feedback and help improving Go modules.

Go 1.13 is released

Andrew Bonventre
3 September 2019

Today the Go team is very happy to announce the release of Go 1.13. You can get it from the download page.

Some of the highlights include:

  • Support for wrapping errors in the errors and fmt packages, described in "Working with Errors in Go 1.13" above.
  • Use of the module mirror and checksum database by default when downloading modules, described in "Module Mirror and Checksum Database Launched" below.

For the complete list of changes and more information about the improvements above, see the Go 1.13 release notes.

We want to thank everyone who contributed to this release by writing code, filing bugs, providing feedback, and/or testing the beta and release candidates. Your contributions and diligence helped to ensure that Go 1.13 is as stable as possible. That said, if you notice any problems, please file an issue.

We hope you enjoy the new release!

Module Mirror and Checksum Database Launched

Katie Hockman
29 August 2019

We are excited to share that our module mirror, index, and checksum database are now production ready! The go command will use the module mirror and checksum database by default for Go 1.13 module users. See proxy.golang.org/privacy for privacy information about these services and the go command documentation for configuration details, including how to disable the use of these servers or use different ones. If you depend on non-public modules, see the documentation for configuring your environment.

This post describes these services and the benefits of using them, and summarizes some of the points from the Go Module Proxy: Life of a Query talk at GopherCon 2019. See the recording if you are interested in the full talk.

Module Mirror

Modules are sets of Go packages that are versioned together, and the contents of each version are immutable. That immutability provides new opportunities for caching and authentication. When go get runs in module mode, it must fetch the module containing the requested packages, as well as any new dependencies introduced by that module, updating your go.mod and go.sum files as needed. Fetching modules from version control can be expensive in terms of latency and storage in your system: the go command may be forced to pull down the full commit history of a repository containing a transitive dependency, even one that isn’t being built, just to resolve its version.

The solution is to use a module proxy, which speaks an API that is better suited to the go command’s needs (see go help goproxy). When go get runs in module mode with a proxy, it will work faster by only asking for the specific module metadata or source code it needs, and not worrying about the rest. Below is an example of how the go command may use a proxy with go get by requesting the list of versions, then the info, mod, and zip file for the latest tagged version.
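
As a sketch, fetching rsc.io/quote/v3 at v3.1.0 through a proxy involves requests along these lines (the paths follow the GOPROXY protocol; the exact sequence may vary):

GET $GOPROXY/rsc.io/quote/v3/@v/list        (all known versions)
GET $GOPROXY/rsc.io/quote/v3/@v/v3.1.0.info (metadata for the chosen version)
GET $GOPROXY/rsc.io/quote/v3/@v/v3.1.0.mod  (the go.mod file for that version)
GET $GOPROXY/rsc.io/quote/v3/@v/v3.1.0.zip  (the source code for that version)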

A module mirror is a special kind of module proxy that caches metadata and source code in its own storage system, allowing the mirror to continue to serve source code that is no longer available from the original locations. This can speed up downloads and protect you from disappearing dependencies. See Go Modules in 2019 for more information.

The Go team maintains a module mirror, served at proxy.golang.org, which the go command will use by default for module users as of Go 1.13. If you are running an earlier version of the go command, then you can use this service by setting GOPROXY=https://proxy.golang.org in your local environment.

Checksum Database

Modules introduced the go.sum file, which is a list of SHA-256 hashes of the source code and go.mod files of each dependency when it was first downloaded. The go command can use the hashes to detect misbehavior by an origin server or proxy that gives you different code for the same version.

The limitation of this go.sum file is that it works entirely by trust on your first use. When you add a version of a dependency that you’ve never seen before to your module (possibly by upgrading an existing dependency), the go command fetches the code and adds lines to the go.sum file on the fly. The problem is that those go.sum lines aren’t being checked against anyone else’s: they might be different from the go.sum lines that the go command just generated for someone else, perhaps because a proxy intentionally served malicious code targeted to you.

Go's solution is a global source of go.sum lines, called a checksum database, which ensures that the go command always adds the same lines to everyone's go.sum file. Whenever the go command receives new source code, it can verify the hash of that code against this global database to make sure the hashes match, ensuring that everyone is using the same code for a given version.

The checksum database is served by sum.golang.org, and is built on a Transparent Log (or “Merkle tree”) of hashes backed by Trillian. The main advantage of a Merkle tree is that it is tamper-proof and has properties that don’t allow for misbehavior to go undetected, which makes it more trustworthy than a simple database. The go command uses this tree to check “inclusion” proofs (that a specific record exists in the log) and “consistency” proofs (that the tree hasn’t been tampered with) before adding new go.sum lines to your module’s go.sum file.

The checksum database supports a set of endpoints used by the go command to request and verify go.sum lines. The /lookup endpoint provides a “signed tree head” (STH) and the requested go.sum lines. The /tile endpoint provides chunks of the tree called tiles which the go command can use for proofs. Below is an example of how the go command may interact with the checksum database by doing a /lookup of a module version, then requesting the tiles required for the proofs.
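
As a sketch, for rsc.io/quote/v3 at v3.1.0 the lookup is a request of the form

GET https://sum.golang.org/lookup/rsc.io/quote/v3@v3.1.0

whose response carries the go.sum lines for that version along with a signed tree head; the go command then fetches whatever /tile data it needs to verify the inclusion and consistency proofs.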

This checksum database allows the go command to safely use an otherwise untrusted proxy. Because there is an auditable security layer sitting on top of it, a proxy or origin server can’t intentionally, arbitrarily, or accidentally start giving you the wrong code without getting caught. Even the author of a module can’t move their tags around or otherwise change the bits associated with a specific version from one day to the next without the change being detected.

If you are using Go 1.12 or earlier, you can manually check a go.sum file against the checksum database with gosumcheck:

$ go get golang.org/x/mod/gosumcheck
$ gosumcheck /path/to/go.sum

In addition to verification done by the go command, third-party auditors can hold the checksum database accountable by iterating over the log looking for bad entries. They can work together and gossip about the state of the tree as it grows to ensure that it remains uncompromised, and we hope that the Go community will run them.

Module Index

The module index is served by index.golang.org, and is a public feed of new module versions that become available through proxy.golang.org. This is particularly useful for tool developers that want to keep their own cache of what’s available in proxy.golang.org, or keep up-to-date on some of the newest modules that people are using.

Feedback or bugs

We hope these services improve your experience with modules, and encourage you to file issues if you run into problems or have feedback!

Migrating to Go Modules

Jean de Klerk
21 August 2019

Introduction

This post is part 2 in a series.

Go projects use a wide variety of dependency management strategies. Vendoring tools such as dep and glide are popular, but they have wide differences in behavior and don't always work well together. Some projects store their entire GOPATH directory in a single Git repository. Others simply rely on go get and expect fairly recent versions of dependencies to be installed in GOPATH.

Go's module system, introduced in Go 1.11, provides an official dependency management solution built into the go command. This article describes tools and techniques for converting a project to modules.

Please note: if your project is already tagged at v2.0.0 or higher, you will need to update your module path when you add a go.mod file. We'll explain how to do that without breaking your users in a future article focused on v2 and beyond.

Migrating to Go modules in your project

A project might be in one of three states when beginning the transition to Go modules:

  • A brand new Go project.
  • An established Go project with a non-modules dependency manager.
  • An established Go project without any dependency manager.

The first case is covered in Using Go Modules; we'll address the latter two in this post.

With a dependency manager

To convert a project that already uses a dependency management tool, run the following commands:

$ git clone https://github.com/my/project
[...]
$ cd project
$ cat Godeps/Godeps.json
{
    "ImportPath": "github.com/my/project",
    "GoVersion": "go1.12",
    "GodepVersion": "v80",
    "Deps": [
        {
            "ImportPath": "rsc.io/binaryregexp",
            "Comment": "v0.2.0-1-g545cabd",
            "Rev": "545cabda89ca36b48b8e681a30d9d769a30b3074"
        },
        {
            "ImportPath": "rsc.io/binaryregexp/syntax",
            "Comment": "v0.2.0-1-g545cabd",
            "Rev": "545cabda89ca36b48b8e681a30d9d769a30b3074"
        }
    ]
}
$ go mod init github.com/my/project
go: creating new go.mod: module github.com/my/project
go: copying requirements from Godeps/Godeps.json
$ cat go.mod
module github.com/my/project

go 1.12

require rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
$

go mod init creates a new go.mod file and automatically imports dependencies from Godeps.json, Gopkg.lock, or a number of other supported formats. The argument to go mod init is the module path, the location where the module may be found.

This is a good time to pause and run go build ./... and go test ./... before continuing. Later steps may modify your go.mod file, so if you prefer to take an iterative approach, this is the closest your go.mod file will be to your pre-modules dependency specification.

$ go mod tidy
go: downloading rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
go: extracting rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
$ cat go.sum
rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca h1:FKXXXJ6G2bFoVe7hX3kEX6Izxw5ZKRH57DFBJmHCbkU=
rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca/go.mod h1:qTv7/COck+e2FymRvadv62gMdZztPaShugOCi3I+8D8=
$

go mod tidy finds all the packages transitively imported by packages in your module. It adds new module requirements for packages not provided by any known module, and it removes requirements on modules that don't provide any imported packages. If a module provides packages that are only imported by projects that haven't migrated to modules yet, the module requirement will be marked with an // indirect comment. It is always good practice to run go mod tidy before committing a go.mod file to version control.
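
As an illustration (the module paths here are made up), an indirect requirement appears in go.mod like this:

require (
    github.com/example/direct v1.2.0
    github.com/example/transitive v1.0.3 // indirect
)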

Let's finish by making sure the code builds and tests pass:

$ go build ./...
$ go test ./...
[...]
$

Note that other dependency managers may specify dependencies at the level of individual packages or entire repositories (not modules), and generally do not recognize the requirements specified in the go.mod files of dependencies. Consequently, you may not get exactly the same version of every package as before, and there's some risk of upgrading past breaking changes. Therefore, it's important to follow the above commands with an audit of the resulting dependencies. To do so, run

$ go list -m all
go: finding rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
github.com/my/project
rsc.io/binaryregexp v0.2.1-0.20190524193500-545cabda89ca
$

and compare the resulting versions with your old dependency management file to ensure that the selected versions are appropriate. If you find a version that wasn't what you wanted, you can find out why using go mod why -m and/or go mod graph, and upgrade or downgrade to the correct version using go get. (If the version you request is older than the version that was previously selected, go get will downgrade other dependencies as needed to maintain compatibility.) For example,

$ go mod why -m rsc.io/binaryregexp
[...]
$ go mod graph | grep rsc.io/binaryregexp
[...]
$ go get rsc.io/binaryregexp@v0.2.0
$

Without a dependency manager

For a Go project without a dependency management system, start by creating a go.mod file:

$ git clone https://go.googlesource.com/blog
[...]
$ cd blog
$ go mod init golang.org/x/blog
go: creating new go.mod: module golang.org/x/blog
$ cat go.mod
module golang.org/x/blog

go 1.12
$

Without a configuration file from a previous dependency manager, go mod init will create a go.mod file with only the module and go directives. In this example, we set the module path to golang.org/x/blog because that is its custom import path. Users may import packages with this path, and we must be careful not to change it.

The module directive declares the module path, and the go directive declares the expected version of the Go language used to compile the code within the module.

Next, run go mod tidy to add the module's dependencies:

$ go mod tidy
go: finding golang.org/x/website latest
go: finding gopkg.in/tomb.v2 latest
go: finding golang.org/x/net latest
go: finding golang.org/x/tools latest
go: downloading github.com/gorilla/context v1.1.1
go: downloading golang.org/x/tools v0.0.0-20190813214729-9dba7caff850
go: downloading golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7
go: extracting github.com/gorilla/context v1.1.1
go: extracting golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7
go: downloading gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637
go: extracting gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637
go: extracting golang.org/x/tools v0.0.0-20190813214729-9dba7caff850
go: downloading golang.org/x/website v0.0.0-20190809153340-86a7442ada7c
go: extracting golang.org/x/website v0.0.0-20190809153340-86a7442ada7c
$ cat go.mod
module golang.org/x/blog

go 1.12

require (
    github.com/gorilla/context v1.1.1
    golang.org/x/net v0.0.0-20190813141303-74dc4d7220e7
    golang.org/x/text v0.3.2
    golang.org/x/tools v0.0.0-20190813214729-9dba7caff850
    golang.org/x/website v0.0.0-20190809153340-86a7442ada7c
    gopkg.in/tomb.v2 v2.0.0-20161208151619-d5d1b5820637
)
$ cat go.sum
cloud.google.com/go v0.26.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
cloud.google.com/go v0.34.0/go.mod h1:aQUYkXzVsufM+DwF1aE+0xfcU+56JwCaLick0ClmMTw=
git.apache.org/thrift.git v0.0.0-20180902110319-2566ecd5d999/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
git.apache.org/thrift.git v0.0.0-20181218151757-9b75e4fe745a/go.mod h1:fPE2ZNJGynbRyZ4dJvy6G277gSllfV2HJqblrnkyeyg=
github.com/beorn7/perks v0.0.0-20180321164747-3a771d992973/go.mod h1:Dwedo/Wpr24TaqPxmxbtue+5NUziq4I4S80YR8gNf3Q=
[...]
$

go mod tidy added module requirements for all the packages transitively imported by packages in your module and built a go.sum with checksums for each library at a specific version. Let's finish by making sure the code still builds and tests still pass:

$ go build ./...
$ go test ./...
ok      golang.org/x/blog    0.335s
?       golang.org/x/blog/content/appengine    [no test files]
ok      golang.org/x/blog/content/cover    0.040s
?       golang.org/x/blog/content/h2push/server    [no test files]
?       golang.org/x/blog/content/survey2016    [no test files]
?       golang.org/x/blog/content/survey2017    [no test files]
?       golang.org/x/blog/support/racy    [no test files]
$

Note that when go mod tidy adds a requirement, it adds the latest version of the module. If your GOPATH included an older version of a dependency that subsequently published a breaking change, you may see errors in go mod tidy, go build, or go test. If this happens, try downgrading to an older version with go get (for example, go get github.com/broken/module@v1.1.0), or take the time to make your module compatible with the latest version of each dependency.

Tests in module mode

Some tests may need tweaks after migrating to Go modules.

If a test needs to write files in the package directory, it may fail when the package directory is in the module cache, which is read-only. In particular, this may cause go test all to fail. The test should copy files it needs to write to a temporary directory instead.

If a test relies on relative paths (../package-in-another-module) to locate and read files in another package, it will fail if the package is in another module, which will be located in a versioned subdirectory of the module cache or a path specified in a replace directive. If this is the case, you may need to copy the test inputs into your module, or convert the test inputs from raw files to data embedded in .go source files.

If a test expects go commands within the test to run in GOPATH mode, it may fail. If this is the case, you may need to add a go.mod file to the source tree to be tested, or set GO111MODULE=off explicitly.
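
As a sketch of the first case, a hypothetical test can write its output under a temporary directory instead of the package directory (ioutil.TempDir was the idiomatic choice at the time):

import (
    "io/ioutil"
    "os"
    "path/filepath"
    "testing"
)

func TestWritesOutput(t *testing.T) {
    // The module cache is read-only, so create a temporary directory
    // for generated files rather than writing into the package directory.
    dir, err := ioutil.TempDir("", "writetest")
    if err != nil {
        t.Fatal(err)
    }
    defer os.RemoveAll(dir)

    out := filepath.Join(dir, "output.txt")
    if err := ioutil.WriteFile(out, []byte("generated data"), 0644); err != nil {
        t.Fatal(err)
    }
}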

Publishing a release

Finally, you should tag and publish a release version for your new module. This is optional if you haven't released any versions yet, but without an official release, downstream users will depend on specific commits using pseudo-versions, which may be more difficult to support.

$ git tag v1.2.0
$ git push origin v1.2.0

Your new go.mod file defines a canonical import path for your module and adds new minimum version requirements. If your users are already using the correct import path, and your dependencies haven't made breaking changes, then adding the go.mod file is backwards-compatible — but it's a significant change, and may expose existing problems. If you have existing version tags, you should increment the minor version. See Publishing Go Modules to learn how to increment and publish versions.

Imports and canonical module paths

Each module declares its module path in its go.mod file. Each import statement that refers to a package within the module must have the module path as a prefix of the package path. However, the go command may encounter a repository containing the module through many different remote import paths. For example, both golang.org/x/lint and github.com/golang/lint resolve to repositories containing the code hosted at go.googlesource.com/lint. The go.mod file contained in that repository declares its path to be golang.org/x/lint, so only that path corresponds to a valid module.

Go 1.4 provided a mechanism for declaring canonical import paths using // import comments, but package authors did not always provide them. As a result, code written prior to modules may have used a non-canonical import path for a module without surfacing an error for the mismatch. When using modules, the import path must match the canonical module path, so you may need to update import statements: for example, you may need to change import "github.com/golang/lint" to import "golang.org/x/lint".
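
For reference, an import comment is written on the package clause itself; a hypothetical example for the lint package mentioned above:

package lint // import "golang.org/x/lint"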

Another scenario in which a module's canonical path may differ from its repository path occurs for Go modules at major version 2 or higher. A Go module with a major version above 1 must include a major-version suffix in its module path: for example, version v2.0.0 must have the suffix /v2. However, import statements may have referred to the packages within the module without that suffix. For example, non-module users of github.com/russross/blackfriday/v2 at v2.0.1 may have imported it as github.com/russross/blackfriday instead, and will need to update the import path to include the /v2 suffix.

Conclusion

Converting to Go modules should be a straightforward process for most users. Occasional issues may arise due to non-canonical import paths or breaking changes within a dependency. Future posts will explore publishing new versions, v2 and beyond, and ways to debug strange situations.

To provide feedback and help shape the future of dependency management in Go, please send us bug reports or experience reports.

Thanks for all your feedback and help improving modules.
