260201_Arc_Basic001
`Arc` in Rust stands for Atomically Reference Counted. It allows multiple owners of the same heap data across threads, while safely tracking how many references exist. Let's go step by step and then look at a solid example.
🔹 Key Takeaways
- `Arc<T>` = shared ownership across threads
- `.clone()` = increments ref count (no data copy)
- Thread-safe due to atomic operations
- Use with `Mutex` or `RwLock` for mutation
- Slightly slower than `Rc` due to atomic overhead
How Rust's Arc Works
Source:
What is Arc?
- Atomic Reference Count
- Used when several handles share the same value
- `Arc` manages handles that all refer to one shared value. The shared value is usually immutable, but it can be made mutable.
- One benefit of sharing a value: instead of duplicating memory on every clone, all clones point at the same data, and if the value is modified, every clone sees that change.
How `Arc` works: calling `clone` hands you another `Arc` handle, but none of the underlying data is actually duplicated; that is the core idea of `Arc`. When an `Arc` handle is dropped, the counter is decremented, and once the counter reaches 0 the destructor runs and cleans up the memory. That is all there is to `Arc`.
`Arc` Basics (Rust code example)
use std::sync::Arc;
use std::thread;
fn main() {
let shared = Arc::new(vec![10, 20, 30]);
println!("shared address : {:p}", &shared);
let mut handles = vec![];
for i in 0..3 {
let shared_clone = Arc::clone(&shared);
println!("shared clone() address : {:p}", &shared_clone);
let handle = thread::spawn(move || {
println!("Thread {i}: {:?}", shared_clone);
println!("(handle)shared clone() address : {:p}", &shared_clone);
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
println!("Final strong count = {}", Arc::strong_count(&shared));
}
- Result
shared address : 0x7fffff02e628
shared clone() address : 0x7fffff02e6a0
shared clone() address : 0x7fffff02e6a0
shared clone() address : 0x7fffff02e6a0
Thread 0: [10, 20, 30]
(handle)shared clone() address : 0x722251260a70
Thread 2: [10, 20, 30]
(handle)shared clone() address : 0x722250e5ba70
Thread 1: [10, 20, 30]
(handle)shared clone() address : 0x72225105fa70
Final strong count = 1
🔹 What Arc<T> actually does
- Stores your data on the heap
- Keeps a reference count (atomic)
- Every time you call `.clone()`:
  - It does NOT copy the data
  - It only increments the reference count
- When an `Arc` is dropped:
  - Count is decremented
  - When it reaches 0 → data is freed
🔹 Why Arc instead of Rc?
| Type | Thread-safe? | Counter |
|---|---|---|
| `Rc<T>` | ❌ No | non-atomic |
| `Arc<T>` | ✅ Yes | atomic |

- Use `Arc` when working with threads.
🔹 Basic Example (Clone & Share Memory)
use std::sync::Arc;
fn main() {
let data = Arc::new(vec![1, 2, 3]);
let a = Arc::clone(&data);
let b = Arc::clone(&data);
println!("data: {:?}", data);
println!("a: {:?}", a);
println!("b: {:?}", b);
println!("strong count = {}", Arc::strong_count(&data));
}
- result
data: [1, 2, 3]
a: [1, 2, 3]
b: [1, 2, 3]
strong count = 3
🔍 What happens:
- `data`, `a`, `b` all point to the same vector in memory
- No copying of `[1, 2, 3]`
- Only reference count increases
🔹 Multithreaded Example (Real Use Case)
use std::sync::Arc;
use std::thread;
fn main() {
let shared = Arc::new(vec![10, 20, 30]);
let mut handles = vec![];
for i in 0..3 {
let shared_clone = Arc::clone(&shared);
let handle = thread::spawn(move || {
println!("Thread {i}: {:?}", shared_clone);
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
println!("Final strong count = {}", Arc::strong_count(&shared));
}
- result
Thread 1: [10, 20, 30]
Thread 0: [10, 20, 30]
Thread 2: [10, 20, 30]
Final strong count = 1
🔹 Important Detail: .clone() is cheap
let x = Arc::new(100);
let y = x.clone();
- This does:
Arc pointer ─────┐
Arc pointer ─────┘ → same heap value (100)
- NOT:
100 copied ❌
🔹 Interior Mutability (Arc + Mutex)
⚠️ `Arc<T>` alone = shared immutable access only
To mutate shared data, combine with `Mutex`:
use std::sync::{Arc, Mutex};
use std::thread;
fn main() {
let counter = Arc::new(Mutex::new(0));
let mut handles = vec![];
for _ in 0..5 {
let counter = Arc::clone(&counter);
let handle = thread::spawn(move || {
let mut num = counter.lock().unwrap();
*num += 1;
});
handles.push(handle);
}
for handle in handles {
handle.join().unwrap();
}
println!("Result = {}", *counter.lock().unwrap());
}
- result
Result = 5
🔹 Memory Model Visualization
Arc (ptr, count=3)
├── data → [10, 20, 30]
├── clone A
└── clone B
- Drop order:
drop A → count = 2
drop B → count = 1
drop original → count = 0 → free memory
🔹 How Arc<T> is implemented internally
- This is exactly where Rust gets systems-level interesting. Let's break down how `Arc<T>` is implemented internally.
🔹 High-level structure of Arc<T>
- At its core, `Arc<T>` is just a pointer to a heap allocation that looks like this:
struct ArcInner<T> {
strong: AtomicUsize, // strong reference count
weak: AtomicUsize, // weak reference count
data: T, // actual value
}
- And `Arc<T>` itself:
pub struct Arc<T> {
ptr: NonNull<ArcInner<T>>,
}
🔹 Memory Layout
- When you do:
let a = Arc::new(5);
- Heap memory looks like:
[ strong = 1 | weak = 1 | data = 5 ]
- 👉 Why `weak = 1`?
  - Even if no `Weak<T>` exists, `Arc` keeps an implicit weak ref
  - This ensures the allocation lives long enough for cleanup
🔹 Clone (Arc::clone)
let b = Arc::clone(&a);
- Internally:
strong.fetch_add(1, Ordering::Relaxed);
- Important details:
- No data is copied
- Only the atomic counter increases
- Uses `Relaxed` ordering → very fast
🔹 Drop (Arc destructor)
- When an `Arc` is dropped:
if strong.fetch_sub(1, Ordering::Release) == 1 {
// last strong reference
acquire_fence();
drop(data);
if weak.fetch_sub(1, Ordering::Release) == 1 {
deallocate_memory();
}
}
🔍 Step-by-step:
- Decrement `strong`
- If it was the last one:
  - destroy `T`
  - Then decrement `weak`
- If `weak == 0`:
  - free the allocation
🔹 Why both strong and weak?
Because of Weak<T>:
let a = Arc::new(10);
let w = Arc::downgrade(&a);
- Now:
strong = 1
weak = 2 (1 implicit + 1 explicit)
🔹 Weak reference behavior
if strong == 0 {
upgrade() → None
}
- `Weak<T>` does NOT keep data alive
- Only keeps allocation alive
🔹 Atomic Ordering (important detail)
- Rust uses:
| Operation | Ordering |
|---|---|
| clone | Relaxed |
| drop (decrement) | Release |
| final cleanup | Acquire |
👉 Why?
- `Relaxed` → fast increment, no sync needed
- `Release` → ensures writes happen before drop
- `Acquire` → ensures visibility before destruction
This is a classic lock-free reference counting pattern
🔹 Arc::make_mut internally
- Core idea:
if strong == 1 {
return &mut data;
} else {
clone data into new allocation
}
- More concretely:
if Arc::get_mut(this).is_some() {
// unique → safe mutable access
} else {
// clone inner data
}
🔹 Arc::get_mut
fn get_mut(this: &mut Arc<T>) -> Option<&mut T> {
if strong == 1 {
Some(&mut data)
} else {
None
}
}
- 👉 This is the key uniqueness check
🔹 Pointer Trick (important)
Arc<T> is a thin pointer:
ptr → ArcInner<T>
- But when you dereference:
*arc
- Rust does:
&(*ptr).data
- So `Arc<T>` behaves like `&T`.
🔹 Why NonNull?
NonNull<ArcInner<T>>
- Never null → optimization
- Enables niche optimization (same size as raw pointer)
- Avoids extra checks
🔹 Deallocation Flow (full picture)
Arc::clone → strong++
Arc::drop:
strong--
if strong == 0:
drop(T)
weak--
if weak == 0:
free memory
🔹 Key Guarantees
- Thread-safe (atomic ops)
- Lock-free reference counting
- No data races on refcount
- Data itself is NOT protected (you still need `Mutex`)
🔹 Minimal “Arc-like” Example (simplified)
- This is NOT production-safe, but shows the idea:
use std::ptr::NonNull;
use std::sync::atomic::{AtomicUsize, Ordering};
struct MyArcInner<T> {
count: AtomicUsize,
data: T,
}
#[derive(Debug)]
struct MyArc<T> {
ptr: NonNull<MyArcInner<T>>,
}
impl<T> MyArc<T> {
fn new(data: T) -> Self {
let boxed = Box::new(MyArcInner {
count: AtomicUsize::new(1),
data,
});
MyArc {
ptr: NonNull::new(Box::into_raw(boxed)).unwrap(),
}
}
fn clone(&self) -> Self {
let inner = unsafe { self.ptr.as_ref() };
inner.count.fetch_add(1, Ordering::Relaxed);
MyArc { ptr: self.ptr }
}
// Helper method to access the inner data safely
fn get_data(&self) -> &T {
let inner = unsafe { self.ptr.as_ref() };
&inner.data
}
// Helper method to get reference count
fn ref_count(&self) -> usize {
let inner = unsafe { self.ptr.as_ref() };
inner.count.load(Ordering::Relaxed)
}
}
impl<T> Drop for MyArc<T> {
fn drop(&mut self) {
let inner = unsafe { self.ptr.as_ref() };
if inner.count.fetch_sub(1, Ordering::Release) == 1 {
unsafe {
drop(Box::from_raw(self.ptr.as_ptr()));
}
}
}
}
// Safety: MyArc can be sent between threads if T is Send
// because the data is accessed atomically through reference counting
unsafe impl<T: Send> Send for MyArc<T> {}
// Safety: MyArc can be shared between threads if T is Sync
// because all accesses to the inner data are synchronized through atomic operations
unsafe impl<T: Sync> Sync for MyArc<T> {}
fn main() {
// Example 1: Basic usage - creating and cloning MyArc
{
println!("=== Example 1: Basic Usage ===");
let original = MyArc::new(String::from("Hello, MyArc!"));
let clone1 = MyArc::clone(&original);
let clone2 = MyArc::clone(&original);
// All three point to the same data
println!("Reference count: {}", original.ref_count());
println!("Data: {}", original.get_data());
println!();
}
println!("Example 1 completed - all clones dropped\n");
// Example 2: Using with threads (demonstrates Arc-like behavior)
{
println!("=== Example 2: Multi-threaded Usage ===");
use std::thread;
use std::time::Duration;
let shared_data = MyArc::new(vec![1, 2, 3, 4, 5]);
let mut handles = vec![];
for i in 0..3 {
let arc_clone = MyArc::clone(&shared_data);
handles.push(thread::spawn(move || {
println!(
"Thread {}: {:?}, count: {}",
i,
arc_clone.get_data(),
arc_clone.ref_count()
);
thread::sleep(Duration::from_millis(100));
}));
}
for handle in handles {
handle.join().unwrap();
}
println!("Main: Final count: {}\n", shared_data.ref_count());
}
// Example 3: Custom type with MyArc
{
println!("=== Example 3: Custom Type ===");
#[derive(Debug)]
struct Counter {
value: i32,
}
let counter = MyArc::new(Counter { value: 0 });
println!("Initial counter: {:?}", counter.get_data());
let counter_clone1 = MyArc::clone(&counter);
let counter_clone2 = MyArc::clone(&counter);
let counter_clone3 = MyArc::clone(&counter);
println!("With 4 total references, count: {}", counter.ref_count());
println!("{counter_clone1:?} {counter_clone2:?} {counter_clone3:?}");
}
}
- Result
=== Example 1: Basic Usage ===
Reference count: 3
Data: Hello, MyArc!
Example 1 completed - all clones dropped
=== Example 2: Multi-threaded Usage ===
Thread 0: [1, 2, 3, 4, 5], count: 4
Thread 1: [1, 2, 3, 4, 5], count: 4
Thread 2: [1, 2, 3, 4, 5], count: 4
Main: Final count: 1
=== Example 3: Custom Type ===
Initial counter: Counter { value: 0 }
With 4 total references, count: 4
MyArc { ptr: 0x104de6340 } MyArc { ptr: 0x104de6340 } MyArc { ptr: 0x104de6340 }
🔥 Final Insight
- `Arc<T>` is basically:
atomic refcount + heap allocation + smart drop logic
- The real magic is:
- carefully chosen memory orderings
- correct lifetime + ownership guarantees
- separation of strong vs weak references
https://younghakim7.github.io/blog/posts/260201_arc_basic001/