Atomic Increase in Zig
Zig continues to gain attention among systems programmers for its promise of simplicity, performance, and safety without the baggage of hidden control flow. One of the best ways to appreciate these qualities is through a small, focused example that touches real concurrency primitives. Today, we’ll look at a concise program that demonstrates how Zig handles multithreaded access to shared memory using atomic operations — a classic “increment a counter from multiple threads” benchmark that reveals both the power and the clarity of Zig’s design.
The program spawns four threads, each tasked with incrementing a shared usize counter exactly 100,000 times. Without synchronization, this would be a textbook data race leading to lost updates and unpredictable results. Instead, we rely on Zig’s built-in atomic primitives: each increment is performed with @atomicRmw (atomic read-modify-write), using the strongest memory ordering, .seq_cst (sequential consistency). This guarantees that all threads see a coherent view of the counter, and after all threads complete, the final value is reliably read with @atomicLoad. When the program finishes, it prints “Final counter value: 400000” — exactly what we expect from four threads performing 100,000 successful increments each.
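Stripped down to just those two builtins, the core of the technique looks something like this (a single-threaded sketch for illustration only; the variable names are arbitrary and not part of the full program below):

const std = @import("std");

pub fn main() void {
    var counter: usize = 0;

    // Atomic read-modify-write: add 1 and discard the previous value.
    _ = @atomicRmw(usize, &counter, .Add, 1, .seq_cst);

    // Atomic load with sequential consistency.
    const value = @atomicLoad(usize, &counter, .seq_cst);
    std.debug.print("counter = {d}\n", .{value});
}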
What makes this example particularly instructive is how little ceremony is required. Thread creation uses std.Thread.spawn, passing a tuple of arguments directly to the worker function. We allocate an array of thread handles with the page allocator, spawn each one in a clean for loop, and join them just as cleanly. Resource cleanup is automatic thanks to defer. There are no locks, no mutexes, no condition variables — just direct use of hardware-supported atomics. This lock-free approach is not only faster in many real-world scenarios but also immune to deadlocks and priority inversion.
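To appreciate what is being avoided, here is roughly what a lock-based equivalent of the worker could look like (a hypothetical sketch using std.Thread.Mutex; the names SharedCounter and lockedWorker are made up for this comparison and do not appear in the program below):

const std = @import("std");

// Hypothetical lock-based counterpart: every increment has to acquire and
// release a mutex instead of issuing a single atomic instruction.
const SharedCounter = struct {
    mutex: std.Thread.Mutex = .{},
    value: usize = 0,

    fn increment(self: *SharedCounter) void {
        self.mutex.lock();
        defer self.mutex.unlock();
        self.value += 1;
    }
};

fn lockedWorker(shared: *SharedCounter, increments: usize) void {
    var i: usize = 0;
    while (i < increments) : (i += 1) {
        shared.increment();
    }
}

The atomic version below needs none of this scaffolding.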
Zig’s standard library makes the mechanics of threading feel almost trivial, yet it never hides the important details. Errors from thread creation are explicit (try), memory ordering must be chosen deliberately, and the compiler refuses to let you shoot yourself in the foot with mismatched atomic types or orderings. The result is code that is short, readable, and demonstrably correct — the kind of systems programming experience Zig was built to deliver.
If you’re curious about low-level concurrency without the complexity of C++ templates or Rust’s borrow checker, this little program is an excellent starting point. It runs in well under a second, uses almost no memory, and gives you a real taste of what safe, high-performance multithreading can look like when the language gets out of your way. Give it a try — you might find yourself reaching for Zig the next time you need fine-grained control over concurrent code.
const std = @import("std");
const Thread = std.Thread;
pub fn main() !void {
// Plain usize that we will increment atomically
var counter: usize = 0;
const thread_count = 4;
const increments = 100_000;
// Allocate thread handles
const threads = try std.heap.page_allocator.alloc(Thread, thread_count);
defer std.heap.page_allocator.free(threads);
// Spawn threads
for (threads) |*t| {
t.* = try Thread.spawn(.{}, threadMain, .{ &counter, increments });
}
// Join them all
for (threads) |t| {
t.join();
}
// Atomically load the final value
const final = @atomicLoad(usize, &counter, .seq_cst);
var buffer: [128]u8 = undefined; // small buffer for stdout
var stdout_writer = std.fs.File.stdout().writer(&buffer);
const stdout = &stdout_writer.interface;
try stdout.print("Final counter value: {d}\n", .{final});
try stdout.flush(); // ensure everything is written
}
fn threadMain(counter_ptr: *usize, increments: usize) void {
var i: usize = 0;
while (i < increments) : (i += 1) {
_ = @atomicRmw(usize, counter_ptr, .Add, 1, .seq_cst);
}
}
The program begins by importing the parts of Zig’s standard library it needs, including the threading facilities, and defines the main entry point with an error union return type (!void), so failures such as a failed thread spawn can propagate cleanly. It declares the shared counter as a plain usize initialized to zero, sets up constants for four threads and 100,000 increments each, and dynamically allocates an array of thread handles with the page allocator; a defer ensures the memory is freed when the function exits.
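As an aside, because thread_count is a compile-time constant, the heap allocation is not strictly necessary; the handles could just as well live in a fixed-size array on the stack. A sketch of that variant inside main(), assuming the rest of the listing stays unchanged:

    // Alternative: no allocator at all, since thread_count is comptime-known.
    var threads: [thread_count]Thread = undefined;

    for (&threads) |*t| {
        t.* = try Thread.spawn(.{}, threadMain, .{ &counter, increments });
    }
    for (threads) |t| {
        t.join();
    }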
In the main() function, it then iterates over the array to spawn each thread, passing a tuple containing a pointer to the shared counter and the number of increments as arguments to a worker function called threadMain(). After all threads are spawned, another loop joins them, waiting for every thread to complete before proceeding. Once joined, it performs an atomic load of the counter with sequential consistency ordering to safely read the final value in a multithreaded context, guaranteeing that all prior atomic operations from every thread are visible.
The threadMain() function, executed concurrently in each thread, simply loops the specified number of times and, on each iteration, uses the built-in @atomicRmw (atomic read-modify-write) to increment the counter by one. The operation uses sequential consistency memory ordering, the strongest and most intuitive model, ensuring that all threads agree on the order of modifications. The result of the atomic operation is discarded with an underscore since we only care about the side effect on the shared counter.
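It is worth noting that .seq_cst is a deliberately conservative choice here. Because main() only reads the counter after join() has returned for every thread, and joining a thread already establishes the necessary happens-before relationship, a weaker ordering such as .monotonic would also produce the correct total. A possible relaxed variant of the worker (an optional tweak, not the version shown above):

fn threadMainRelaxed(counter_ptr: *usize, increments: usize) void {
    var i: usize = 0;
    while (i < increments) : (i += 1) {
        // Relaxed ordering: the add is still atomic, and join() makes the
        // result visible to main() before the final load.
        _ = @atomicRmw(usize, counter_ptr, .Add, 1, .monotonic);
    }
}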
Finally, the program prints the result to standard output using a buffered writer and formatted print, correctly displaying the expected final value of 400,000 when run successfully. This concise example elegantly demonstrates Zig’s approach to concurrency: direct access to hardware-supported atomics, explicit error handling, straightforward thread management, and no need for locks or higher-level synchronization primitives, all while maintaining safety and predictability through the language’s design.
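If you prefer the atomic intent to live in the counter’s type rather than at every call site, the standard library also offers std.atomic.Value, a thin wrapper over the same builtins. A sketch of the equivalent pieces (threadMainWrapped is a made-up name; the thread setup would stay the same as above):

const std = @import("std");

// The counter carries its atomicity in the type instead of at each call site.
var counter = std.atomic.Value(usize).init(0);

fn threadMainWrapped(ctr: *std.atomic.Value(usize), increments: usize) void {
    var i: usize = 0;
    while (i < increments) : (i += 1) {
        _ = ctr.fetchAdd(1, .seq_cst); // equivalent to @atomicRmw with .Add
    }
}

// After joining all threads:
// const final = counter.load(.seq_cst);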
Save the code as atomicIncrease.zig and execute it:
$ zig version
0.15.2
$ zig run atomicIncrease.zig
Final counter value: 400000
Happy coding in Zig!