Go Concurrency: Goroutines and Channels Explained
Go's concurrency model is one of its strongest features. Learn how goroutines and channels work, and how to use them to write safe concurrent backend code.
Go was designed with concurrency as a first-class feature. Its approach — goroutines and channels — is simpler and safer than traditional thread-based concurrency, and it’s one of the main reasons Go is popular for backend services.
What Is a Goroutine?
A goroutine is a lightweight thread managed by the Go runtime. You start one with the go keyword:
package main

import (
	"fmt"
	"time"
)

func fetchUser(id int) {
	time.Sleep(100 * time.Millisecond) // simulate DB call
	fmt.Printf("Fetched user %d\n", id)
}

func main() {
	go fetchUser(1) // runs concurrently
	go fetchUser(2)
	go fetchUser(3)
	time.Sleep(200 * time.Millisecond) // wait for goroutines to finish
}
Goroutines are cheap — you can run thousands of them. The Go runtime multiplexes them onto OS threads automatically.
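To make "thousands" concrete, here's a minimal sketch that launches 10,000 goroutines and waits for them all using sync.WaitGroup (covered in detail below); on a typical machine this completes in a few milliseconds. The function name spawnMany is illustrative.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// spawnMany launches n goroutines that each increment a shared
// counter atomically, then waits for all of them to finish.
func spawnMany(n int) int64 {
	var count int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			atomic.AddInt64(&count, 1)
		}()
	}
	wg.Wait()
	return count
}

func main() {
	fmt.Println(spawnMany(10000)) // prints 10000
}
```

Each goroutine starts with a small stack (a few kilobytes) that grows on demand, which is why this is practical where 10,000 OS threads would not be.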
The Problem With time.Sleep
Waiting with time.Sleep is fragile. The real solution is synchronisation. The two main tools are channels and sync.WaitGroup.
sync.WaitGroup
WaitGroup waits for a collection of goroutines to finish:
import (
	"fmt"
	"sync"
)

func fetchUser(id int, wg *sync.WaitGroup) {
	defer wg.Done() // signal when this goroutine finishes
	fmt.Printf("Fetched user %d\n", id)
}

func main() {
	var wg sync.WaitGroup
	for _, id := range []int{1, 2, 3} {
		wg.Add(1)
		go fetchUser(id, &wg)
	}
	wg.Wait() // block until all goroutines call Done()
	fmt.Println("All users fetched")
}
Use WaitGroup when you just need to wait for goroutines to complete without collecting their results.
Channels
A channel is a typed pipe that goroutines use to communicate. The Go philosophy: “Do not communicate by sharing memory; share memory by communicating.”
ch := make(chan int) // unbuffered channel
ch := make(chan int, 10) // buffered channel with capacity 10
Send to a channel: ch <- value
Receive from a channel: value := <-ch
func fetchUser(id int, ch chan<- string) {
	// do work
	ch <- fmt.Sprintf("user_%d", id)
}

func main() {
	ch := make(chan string, 3) // buffered — doesn't block the sender
	go fetchUser(1, ch)
	go fetchUser(2, ch)
	go fetchUser(3, ch)
	for i := 0; i < 3; i++ {
		result := <-ch
		fmt.Println(result)
	}
}
Buffered vs Unbuffered
- Unbuffered: sender blocks until receiver is ready, receiver blocks until sender is ready — tight synchronisation
- Buffered: sender only blocks when the buffer is full — looser coupling
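The difference is easy to see in code. This sketch sends twice on a buffered channel with no receiver running — both sends return immediately; the commented-out unbuffered version would deadlock in a single goroutine.

```go
package main

import "fmt"

// bufferedSends shows that sends on a buffered channel succeed up to
// capacity with no receiver running. An unbuffered send here would
// deadlock, because nothing is ready to receive.
func bufferedSends() []int {
	ch := make(chan int, 2)
	ch <- 1 // returns immediately: buffer has room
	ch <- 2 // returns immediately: buffer is now full
	// ch <- 3 would block here until something receives

	// With an unbuffered channel this function would never return:
	// unbuf := make(chan int)
	// unbuf <- 1 // fatal error: all goroutines are asleep - deadlock!
	return []int{<-ch, <-ch}
}

func main() {
	fmt.Println(bufferedSends()) // prints [1 2]
}
```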
Fan-Out Pattern: Distribute Work Across Workers
A common backend pattern: distribute tasks to a pool of worker goroutines:
func worker(id int, jobs <-chan int, results chan<- int, wg *sync.WaitGroup) {
	defer wg.Done()
	for job := range jobs {
		results <- processJob(job)
	}
}

func main() {
	jobs := make(chan int, 100)
	results := make(chan int, 100)
	var wg sync.WaitGroup

	// Start 5 workers
	for w := 1; w <= 5; w++ {
		wg.Add(1)
		go worker(w, jobs, results, &wg)
	}

	// Send 20 jobs
	for j := 1; j <= 20; j++ {
		jobs <- j
	}
	close(jobs) // signal workers that no more jobs are coming

	// Wait for all workers, then close results
	go func() {
		wg.Wait()
		close(results)
	}()

	for result := range results {
		fmt.Println(result)
	}
}
select: Handling Multiple Channels
select lets a goroutine wait on multiple channel operations, taking whichever is ready first:
func main() {
	ch1 := make(chan string)
	ch2 := make(chan string)

	go func() { ch1 <- "from ch1" }()
	go func() { ch2 <- "from ch2" }()

	for i := 0; i < 2; i++ {
		select {
		case msg := <-ch1:
			fmt.Println(msg)
		case msg := <-ch2:
			fmt.Println(msg)
		}
	}
}
Add a default case to make the select non-blocking.
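A minimal sketch of that non-blocking form — the default case runs immediately whenever no channel operation is ready, so the goroutine never parks. The helper name tryReceive is illustrative.

```go
package main

import "fmt"

// tryReceive attempts a non-blocking receive: if no value is ready
// on ch, the default case runs instead of blocking.
func tryReceive(ch <-chan string) string {
	select {
	case msg := <-ch:
		return msg
	default:
		return "no message ready"
	}
}

func main() {
	ch := make(chan string, 1)
	fmt.Println(tryReceive(ch)) // prints "no message ready"
	ch <- "hello"
	fmt.Println(tryReceive(ch)) // prints "hello"
}
```

The same shape works for non-blocking sends: `case ch <- value:` with a default branch that drops or queues the value elsewhere.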
Context: Cancellation and Timeouts
Always propagate context.Context through your goroutines for cancellation and timeouts:
func fetchWithTimeout(ctx context.Context, url string) (string, error) {
	ctx, cancel := context.WithTimeout(ctx, 5*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		return "", err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()

	body, err := io.ReadAll(resp.Body)
	if err != nil {
		return "", err
	}
	return string(body), nil
}
Common Mistakes
Goroutine leaks — a goroutine blocked forever on a channel with nobody to receive/send. Always ensure every goroutine has a path to exit, and use context cancellation.
Race conditions — two goroutines reading and writing shared memory without synchronisation. Run tests with -race:
go test -race ./...
Closing a closed channel — panics. Only close from the sending side, once.
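For illustration, this sketch triggers the panic deliberately and catches it with recover — in real code you structure ownership so the double close can't happen, you don't catch it.

```go
package main

import "fmt"

// doubleClose closes a channel twice and reports whether the second
// close panicked. recover is used here purely for demonstration.
func doubleClose() (panicked bool) {
	defer func() {
		if recover() != nil {
			panicked = true
		}
	}()
	ch := make(chan int)
	close(ch) // fine: first and only legitimate close
	close(ch) // panics: "close of closed channel"
	return false
}

func main() {
	fmt.Println(doubleClose()) // prints true
}
```

The simplest rule of thumb: exactly one goroutine owns the channel, and that owner is the only one that ever calls close.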
The Takeaway
Go’s concurrency model is powerful because it’s composable. Start simple — one goroutine, one channel — and add complexity only when you need it. The Go concurrency patterns talk by Rob Pike is the best deep dive available and worth an hour of your time.
Related Articles
Docker for Backend Developers: A Practical Introduction
Learn how Docker works, why backend developers need it, and how to containerize your first Python or Go application in under 30 minutes.
Containerising a Backend Service: From Docker to Kubernetes
A practical walkthrough of containerising a Python backend service with Docker, deploying it to Kubernetes on EKS, and the production gaps that only show up once real traffic hits.
Environment Variables Explained: Keeping Secrets Out of Code
Learn what environment variables are and why every developer needs them. This guide covers how to use .env files, os.environ in Python, process.env in Node.js, and best practices.