Using Shared-State Concurrency

We've explored message passing as a way for threads to communicate with each other. Now let's look at another method: shared-state concurrency, in which multiple threads have access to the same data. While message passing ("share memory by communicating") is often the better choice, Oxide's type system makes shared-state concurrency safe and practical through Mutex<T> and Arc<T>.

Mutex Provides Mutual Exclusion

Mutex is short for mutual exclusion: a mutex allows only one thread to access some data at any given time. To access the data in a mutex, a thread must first signal that it wants access by asking to acquire the mutex's lock. The lock is a data structure, part of the mutex, that keeps track of which thread currently has exclusive access to the data.

A mutex is described as guarding the data it holds via the locking system.

The API of Mutex<T>

Let's first look at how to use a mutex:

import std.sync.Mutex

fn main() {
    let m = Mutex.new(5)

    {
        var num = m.lock().unwrap()
        *num = 6
    }

    println!("m = \(m:?)")
}

Like many types, we create a Mutex<T> using the associated function new. To access the data inside the mutex, we use the lock method to acquire the lock. This call will block the current thread so it can't do any work until it's our turn to have the lock.

The call to lock returns a Result<MutexGuard>. If another thread panicked while holding the lock, the mutex is poisoned, and the lock call returns an Err. Here we use unwrap() to have this thread panic in that situation.
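If you'd rather recover than panic, you can inspect the Result instead of calling unwrap(). The following sketch assumes Oxide has Rust-style match expressions (an assumption, not something this chapter has shown):

match m.lock() {
    // We got the guard; read or write through it as usual.
    Ok(num) => println!("value: \(num)"),
    // Another thread panicked while holding the lock.
    Err(poisoned) => println!("mutex poisoned: \(poisoned:?)"),
}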

The lock method returns a smart pointer called MutexGuard. This smart pointer implements Deref to point at our inner data. The smart pointer also has a Drop implementation that releases the lock automatically when the MutexGuard drops (goes out of scope).

When we run this code, we'll see:

m = Mutex { data: 6 }

The mutex successfully protected the integer inside, preventing data races.
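To make the automatic release visible, here is a sketch using only the API shown above. It locks the mutex twice in sequence; the second lock() can only succeed because the first guard was dropped at the end of the inner scope:

import std.sync.Mutex

fn main() {
    let m = Mutex.new(5)

    {
        var num = m.lock().unwrap()
        *num = 6
        // The guard is dropped here, releasing the lock
    }

    // Reacquiring works because no guard is alive anymore
    var num = m.lock().unwrap()
    *num += 1
    println!("now \(*num)")
}

If the inner scope (and its drop) were removed, the second call to lock() would block forever waiting on a lock this same thread already holds.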

Sharing a Mutex<T> Between Multiple Threads

Now let's try to share a value between multiple threads using a Mutex<T>. We'll spin up 10 threads and have each increment a counter by 1, so the counter goes from 0 to 10:

import std.sync.Mutex
import std.thread

fn main() {
    let counter = Mutex.new(0)
    var handles = vec![]

    for _ in 0..10 {
        let handle = thread.spawn move {
            var num = counter.lock().unwrap()
            *num += 1
        }
        handles.push(handle)
    }

    for handle in handles {
        handle.join().unwrap()
    }

    println!("Result: \(counter.lock().unwrap())")
}

We create a Mutex<Int> with an initial value of 0. We then create 10 threads by looping 10 times. For each thread, we use move to move the counter into the thread closure.

However, if we try to compile this, we get an error:

error[E0382]: use of moved value: `counter`
  --> src/main.ox:11:23
   |
10 |         let handle = thread.spawn move {
   |                                   ^^^^ value moved into closure here, in previous iteration of loop
11 |             var num = counter.lock().unwrap()
   |                       ------- use occurs due to use in closure
The problem is that counter is moved into the first thread's closure because we use move. So the second iteration of the loop tries to move an already-moved value! The compiler correctly tells us we can't move counter multiple times.

Arc<T>: Atomic Reference Counting

The solution is to use Arc<T>, which stands for Atomic Reference Counting. The Arc<T> type lets us have multiple owners of a value. The atomic part means Arc<T> is safe to use in concurrent situations.
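To isolate what Arc<T> does on its own before combining it with Mutex, here is a sketch that shares a read-only value across threads, using the same thread and Arc API as the surrounding examples:

import std.sync.Arc
import std.thread

fn main() {
    let message = Arc.new("hello")
    var handles = vec![]

    for _ in 0..3 {
        // Each clone is a new handle to the same heap value;
        // the reference count goes up by one per clone
        let message = Arc.clone(&message)
        handles.push(thread.spawn move {
            println!("\(message)")
        })
    }

    for handle in handles {
        handle.join().unwrap()
    }
    // The string is freed only after the last handle drops
}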

Let's modify our code to use Arc<Mutex<Int>>:

import std.sync.Arc
import std.sync.Mutex
import std.thread

fn main() {
    let counter = Arc.new(Mutex.new(0))
    var handles = vec![]

    for _ in 0..10 {
        let counter = Arc.clone(&counter)
        let handle = thread.spawn move {
            var num = counter.lock().unwrap()
            *num += 1
        }
        handles.push(handle)
    }

    for handle in handles {
        handle.join().unwrap()
    }

    println!("Result: \(counter.lock().unwrap())")
}

The key part is that we clone the Arc for each thread. The Arc.clone(&counter) call creates a new Arc handle that points to the same value on the heap and increases the reference count by one. We then move each cloned handle into its thread's closure, so every thread owns its own handle, and the data won't be deallocated until all of those handles have been dropped.

When we run this code, we'll see:

Result: 10

Perfect! Each thread successfully incremented the counter.

How Arc<Mutex<T>> Works

Let's understand the combination:

  • Mutex<T> - Provides interior mutability, allowing us to mutate the contents even when we only have an immutable reference to the Mutex.
  • Arc<T> - Allows multiple ownership with automatic cleanup when the reference count reaches zero. Each clone increments the reference count.
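The interior-mutability point is easy to see in isolation. In this sketch (same API as the earlier examples), counter is an immutable let binding, yet the integer inside can still be mutated, because Mutex hands out mutable access through its guard:

import std.sync.Mutex

fn main() {
    let counter = Mutex.new(0)      // immutable binding...
    *counter.lock().unwrap() += 1   // ...but the contents can still change
    println!("\(counter.lock().unwrap())")
}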

Together, Arc<Mutex<T>> is a safe way to share mutable state across threads:

import std.sync.Arc
import std.sync.Mutex
import std.thread

fn main() {
    let data = Arc.new(Mutex.new(vec![]))
    var handles = vec![]

    for i in 0..5 {
        let data = Arc.clone(&data)
        let handle = thread.spawn move {
            var list = data.lock().unwrap()
            list.push(i)
            // Lock is released here when list goes out of scope
        }
        handles.push(handle)
    }

    // Join the handles rather than sleeping: a fixed delay
    // would not guarantee that every thread has finished
    for handle in handles {
        handle.join().unwrap()
    }

    let finalData = data.lock().unwrap()
    println!("Final data: \(finalData:?)")
}

Each thread clones the Arc, takes ownership of the clone, acquires the lock, modifies the data, and releases the lock when the MutexGuard drops. The Arc ensures the underlying data lives as long as any thread needs it.

Comparing Message Passing and Mutex

When should you use message passing versus a Mutex? Here are some guidelines:

Scenario                                     | Use
---------------------------------------------|---------------------------
Passing data once from one thread to another | Message passing (channels)
Sharing mutable state across threads         | Arc<Mutex<T>>
Complex communication patterns               | Message passing
Simple shared counters or flags              | Mutex<T>
Want to avoid lock contention                | Message passing

In general, prefer message passing for most concurrent code. It's easier to reason about and naturally encourages a design where threads have clear responsibilities. Use Arc<Mutex<T>> when you genuinely need shared mutable state.
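For contrast, the counter from earlier could also be written with message passing, so that no lock is needed at all. The sketch below is modeled on Rust's std::sync::mpsc; the names mpsc.channel, send, clone, drop, and iterating over the receiver are assumptions here, not confirmed Oxide API:

import std.sync.mpsc
import std.thread

fn main() {
    let (tx, rx) = mpsc.channel()

    for _ in 0..10 {
        let tx = tx.clone()
        thread.spawn move {
            // Each thread sends its contribution instead of
            // mutating shared state
            tx.send(1).unwrap()
        }
    }
    drop(tx)  // drop the original sender so the receiver stops waiting

    var total = 0
    for n in rx {
        total += n
    }
    println!("Result: \(total)")
}

Each thread has one clear responsibility (produce a value), and only the main thread ever touches the total.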

Deadlock Risk

One downside of using Mutex is the risk of deadlocks. A deadlock occurs when:

  1. Operation A needs locks on resources 1 and 2
  2. Operation B needs locks on resources 2 and 1
  3. Operation A locks resource 1, then waits for resource 2
  4. Operation B locks resource 2, then waits for resource 1

Both threads are now blocked forever. Oxide's type system prevents some deadlock scenarios, but not all. Always:

  • Acquire locks in a consistent order across all code paths
  • Keep the lock scope as small as possible
  • Avoid nested locks when possible
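The inconsistent-ordering scenario above can be sketched concretely using only the API from this chapter. With unlucky scheduling, each thread ends up holding the lock the other one is waiting for, and the program hangs forever:

import std.sync.Arc
import std.sync.Mutex
import std.thread

fn main() {
    let a = Arc.new(Mutex.new(1))
    let b = Arc.new(Mutex.new(2))

    let a2 = Arc.clone(&a)
    let b2 = Arc.clone(&b)

    let handle = thread.spawn move {
        let first = a2.lock().unwrap()   // locks a, then b
        let second = b2.lock().unwrap()
        println!("thread: \(*first) \(*second)")
    }

    let first = b.lock().unwrap()        // locks b, then a: opposite order!
    let second = a.lock().unwrap()       // may block forever
    println!("main: \(*second) \(*first)")

    handle.join().unwrap()
}

Making both paths lock a before b (a consistent order) removes the deadlock.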

Rust Comparison

The Mutex and Arc APIs are nearly identical between Rust and Oxide:

Concept      | Rust                         | Oxide
-------------|------------------------------|--------------------------------------------
Import       | use std::sync::{Mutex, Arc}; | import std.sync.Mutex, import std.sync.Arc
Create mutex | Mutex::new(5)                | Mutex.new(5)
Acquire lock | m.lock().unwrap()            | m.lock().unwrap()
Clone Arc    | Arc::clone(&arc)             | Arc.clone(&arc)

The semantics are identical: Mutex<T> provides mutual exclusion, and Arc<T> provides shared ownership with reference counting. Both are essential for safe shared-state concurrency in Oxide.

Summary

  • Mutex<T> allows only one thread at a time to access the data
  • Arc<T> enables multiple ownership of a value with automatic cleanup
  • Arc<Mutex<T>> is the combination needed for safe shared mutable state across threads
  • The lock is automatically released when the MutexGuard drops
  • Prefer message passing for most concurrent code, use Arc<Mutex<T>> when you genuinely need shared state
  • Be aware of deadlock risks when using multiple mutexes