```
# Size flags (low 3 bits of the size field):
#   P (0x1) = PREV_INUSE     — previous chunk is allocated
#   M (0x2) = IS_MMAPPED     — chunk was mmap'd
#   A (0x4) = NON_MAIN_ARENA — chunk belongs to a non-main arena
# Actual size   = size & ~0x7
# Minimum chunk = 0x20 (32 bytes on 64-bit)
# User pointer  = chunk_addr + 0x10 (skip prev_size + size)
```
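The header rules above can be sketched as a tiny decoder. This is a minimal illustration, not glibc code; the example size value 0x421 is made up:

```python
# Flag bits in the low 3 bits of a glibc chunk size field
PREV_INUSE = 0x1  # P
IS_MMAPPED = 0x2  # M
NON_MAIN   = 0x4  # A

def decode_size(raw):
    """Split a raw size field into (actual_size, flags)."""
    flags = {
        "prev_inuse": bool(raw & PREV_INUSE),
        "is_mmapped": bool(raw & IS_MMAPPED),
        "non_main_arena": bool(raw & NON_MAIN),
    }
    return raw & ~0x7, flags

size, flags = decode_size(0x421)   # hypothetical size field: 0x420 | P
print(hex(size), flags["prev_inuse"])   # → 0x420 True
```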
```
# main_arena lives in libc (a malloc_state struct)
# Holds the fastbin/bin heads, top chunk pointer, last_remainder
# Key offsets from the arena start (glibc 2.35 x86-64):
#   main_arena + 0x60 = top chunk pointer
#   main_arena + 0x68 = last_remainder
#   main_arena + 0x70 = unsorted bin fd (bins[0])
#   main_arena + 0x78 = unsorted bin bk (bins[1])
# A chunk freed into the unsorted bin gets fd/bk = main_arena + 0x60
# (the bin head acts as a fake chunk starting 0x10 before bins[0])
# Leak main_arena from the unsorted bin:
#   free a large chunk → its fd/bk point to main_arena + 0x60
#   read fd → subtract the offset → libc base
python3 -c "
from pwn import *
libc = ELF('./libc.so.6')
# main_arena offset from libc base (varies by version)
print(hex(libc.sym['main_arena']))
"
```
| Bin | Size range (64-bit) | Count | Structure | Key notes |
|---|---|---|---|---|
| tcache | 0x20 – 0x410 | 64 bins, ≤ 7 entries each | Singly-linked (fd only) | Per-thread, fastest path, few integrity checks |
| fastbin | 0x20 – 0xb0 | 10 bins (default max 0x80) | Singly-linked LIFO | No coalescing, size check only on re-allocation |
| unsorted bin | any | 1 bin | Doubly-linked | Chunks land here first; contains main_arena ptr = libc leak |
| smallbin | 0x20 – 0x3f0 | 62 bins | Doubly-linked FIFO | One exact size per bin, safe-unlink checks |
| largebin | 0x400+ | 63 bins | Doubly-linked + skip list | Range of sizes per bin, has fd_nextsize/bk_nextsize |
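The routing in the table can be sketched as a classifier for where a freed chunk initially lands. This is a simplified model (64-bit, default tuning): real glibc also consults per-bin tcache fill counts and `global_max_fast`, which are collapsed here into one `tcache_full` flag:

```python
# Hedged sketch: which bin does a freed chunk of a given CHUNK size
# (not request size) go to first? Thresholds follow the table above.
def first_bin(chunk_size, tcache_full=False):
    if chunk_size <= 0x410 and not tcache_full:
        # tcache index: one bin per 0x10 step starting at 0x20
        return ("tcache", (chunk_size - 0x20) // 0x10)
    if chunk_size <= 0x80:                      # default global_max_fast
        return ("fastbin", (chunk_size >> 4) - 2)
    return ("unsortedbin", 0)  # later sorted into small/large bins

print(first_bin(0x20))                      # → ('tcache', 0)
print(first_bin(0x30, tcache_full=True))    # → ('fastbin', 1)
print(first_bin(0x500, tcache_full=True))   # → ('unsortedbin', 0)
```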
```
# tcache_perthread_struct (at heap base + 0x10)
#   counts[64]  — number of entries per bin (uint16 since 2.30)
#   entries[64] — head pointer of each bin
# 64 bins (TCACHE_MAX_BINS), max 7 entries per bin (default fill count)
# Tcache entry: just an fd (next) pointer, no bk
#   a free chunk's user data area holds the next pointer
# Safe-linking (glibc ≥ 2.32):
#   stored_fd = real_fd XOR (chunk_addr >> 12)
# Bypass: leak a heap address to compute the key
# Get the heap page from a leaked tcache fd:
python3 -c "
# Stored fd of the LAST entry in a bin (next == NULL):
#   stored = 0 XOR (addr >> 12) = addr >> 12
stored = 0x55555555b        # example leaked stored fd
heap_page = stored << 12
print(hex(heap_page))
"
```
```
# Classic tcache poison (glibc < 2.32, no safe-linking)
#   1. Free chunk A twice (double free) OR overflow into A's fd
#   2. Overwrite A's fd with the target address
#   3. malloc() → returns A
#   4. malloc() → returns the target address
# With safe-linking (glibc ≥ 2.32):
#   need a heap leak to compute the mangled pointer
#   (and the target must be 16-byte aligned)
python3 -c "
chunk_addr = 0x55555555b2a0   # address of the free chunk
target     = 0x404060         # where we want malloc to return
key     = chunk_addr >> 12    # safe-linking key
mangled = target ^ key
print(hex(mangled))           # write this as fd
"
# After the poison: 2 mallocs → write to target
# Common targets: __free_hook, __malloc_hook (glibc < 2.34)
# or tcache_perthread_struct itself to control counts/entries
```
```
# Overview
heap                  # list all chunks
heap -v               # verbose (show free chunks too)
heap 0x...addr        # heap in a specific arena
vis_heap_chunks       # visual color-coded layout
vis_heap_chunks 20    # first 20 chunks

# Bins
bins                  # all bins at once
tcache                # tcache contents
fastbins              # fastbin lists
smallbins             # small bin lists
largebins             # large bin lists
unsortedbin           # unsorted bin

# Arena
arena                 # main_arena info
arenas                # all arenas
top_chunk             # top chunk address/size
```
```
# Parse a specific chunk
malloc_chunk 0x55555555b2a0
malloc_chunk -v addr           # verbose

# Find fake fastbin chunks (for fake-chunk attacks)
find_fake_fast 0x404060        # find a fakeable chunk near addr
find_fake_fast &__malloc_hook

# Heap base
p (void*)mp_.sbrk_base         # sbrk base
heap_base                      # pwndbg shortcut

# Examine the tcache struct
x/200gx (long)mp_.sbrk_base    # raw heap start
p tcache                       # tcache_perthread_struct

# Track allocations
track_heap                     # log malloc/free calls
```
```
# Overflow from chunk A into chunk B's header
#   corrupt B's size field      → change which bin it lands in
#   corrupt B's fd/bk (if free) → arbitrary write
# Detect: vis_heap_chunks shows corrupt size/flags

# Off-by-one:  1 extra byte → corrupt B's size LSB (incl. P flag)
# Off-by-null: null byte    → clear P flag (and low size byte)
#   → enables the backward-consolidation attack
# Check with:
heap                 # look for unusual sizes
malloc_chunk addr    # verify flags
```
```
# UAF: pointer kept after free → fd is now readable/writable
#   read fd of a freed chunk  → heap address leak
#   write fd of a freed chunk → tcache/fastbin poison

# Double free: free the same chunk twice
#   glibc < 2.29: no tcache key → trivial double free
#   glibc ≥ 2.29: key stored in the bk field on free
#     → overwrite the key first (needs a write primitive)
#     key = tcache struct addr (2.29 – 2.33), random value (≥ 2.34)
#   fastbin path: size must match on re-allocation
# Confirm with pwndbg:
bins    # chunk appears twice in a bin
```
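The key check can be modeled in a few lines. This is a toy simulation of the ≥ 2.29 behavior, not glibc source; the key value and `Chunk` class are made up for illustration:

```python
# Toy model of the glibc >= 2.29 tcache double-free check:
# free() stamps a key into the chunk's bk slot and aborts on a second
# free while the key still matches; wiping the key slips through.
TCACHE_KEY = 0x1337  # stands in for the tcache struct addr / random key

class Chunk:
    def __init__(self):
        self.fd = 0
        self.key = 0     # bk slot while the chunk sits in tcache

tcache_bin = []

def tcache_free(c):
    if c.key == TCACHE_KEY:          # "free(): double free detected"
        raise RuntimeError("double free detected in tcache")
    c.key = TCACHE_KEY
    tcache_bin.append(c)

c = Chunk()
tcache_free(c)
try:
    tcache_free(c)                   # second free → detected
except RuntimeError as e:
    print(e)
c.key = 0                            # attacker wipes the key via UAF write
tcache_free(c)                       # double free now succeeds
print(len(tcache_bin))               # → 2 (same chunk twice)
```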
| Attack | Primitive needed | Goal | glibc version |
|---|---|---|---|
| Tcache poison | UAF/overflow on fd | malloc returns arbitrary addr | ≥ 2.26 |
| Fastbin dup | Double free + size control | malloc returns controlled addr | all |
| Unsorted bin attack | Write to bk field of free chunk | Overwrite arbitrary addr with libc ptr | ≤ 2.28 |
| House of Force | Overflow into top chunk size | malloc returns arbitrary addr | ≤ 2.28 |
| House of Spirit | Control malloc arg + fake chunk | Free attacker chunk → into fastbin | all |
| Largebin attack | Modify fd_nextsize/bk_nextsize | Overwrite arbitrary ptr with heap addr | all |
| __malloc_hook | Arbitrary write near hook | RCE on next malloc | ≤ 2.33 |
| __free_hook | Arbitrary write | RCE on next free | ≤ 2.33 |
| exit handler | Arbitrary write | RCE on exit() | ≥ 2.34 |
| Tcache struct | Write to tcache_perthread | Control counts/entries directly | ≥ 2.26 |
```
# Classic method — free a chunk too big for tcache (request > 0x408)
# so it goes straight to the unsorted bin
# its fd and bk then point into main_arena
python3 -c "
from pwn import *
# 1. Allocate a chunk with request > 0x408 (bypasses tcache)
# 2. Allocate a small chunk after it (prevents consolidation with top)
# 3. Free the large chunk → fd/bk = main_arena + 0x60
# 4. Read the freed chunk's fd (UAF or a 'show' feature)
# 5. Compute the libc base
libc = ELF('./libc.so.6')
fd_leak = 0  # TODO: leaked fd = main_arena + 0x60 (offset varies by version)
libc_base = fd_leak - libc.sym['main_arena'] - 0x60
print(hex(libc_base))
"
# In pwndbg: check the unsorted bin after the free
unsortedbin
x/4gx chunk_addr    # fd/bk are libc pointers
```
```
# Tcache chunk (glibc < 2.32): fd is a raw next pointer
#   read fd of a freed tcache chunk → heap address
# Tcache chunk (glibc ≥ 2.32): fd is mangled by safe-linking
#   if it is the ONLY chunk in its bin (next == NULL):
#     stored fd = 0 XOR (addr >> 12) = addr >> 12
#     → shift left 12 bits to recover the heap page
python3 -c "
mangled = 0x55555555b   # leaked fd when next == NULL
heap_page = mangled << 12
print(hex(heap_page))
"
# With two chunks: stored fd = next XOR (self >> 12)
#   free A then B → read B's fd
#   fd_B = addr_A XOR (addr_B >> 12)
```
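The mangle/demangle round trip is worth sketching once. A minimal model of glibc's PROTECT_PTR/REVEAL_PTR, with hypothetical chunk addresses:

```python
# Safe-linking (glibc >= 2.32): stored_fd = next ^ (self >> 12)
def mangle(self_addr, next_ptr):
    return next_ptr ^ (self_addr >> 12)

def demangle(self_addr, stored_fd):
    return stored_fd ^ (self_addr >> 12)

a, b = 0x55555555b2a0, 0x55555555b3c0   # two freed chunks (made-up addrs)
stored = mangle(b, a)                   # B freed after A: B.fd -> A
assert demangle(b, stored) == a         # knowing b's address recovers a

# Last entry in the bin: next == NULL, so stored fd IS addr >> 12
print(hex(mangle(a, 0) << 12))          # → 0x55555555b000 (heap page)
```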
① vis_heap_chunks → map the heap
② bins → see what's free and where
③ UAF? Read fd → heap leak; write fd → tcache poison
④ Double free → check for tcache key → overwrite key first
⑤ Large free → unsorted bin fd = libc ptr → leak base
⑥ Safe-linking: stored_fd = real_fd XOR (chunk >> 12)
⑦ glibc ≥ 2.34: hooks gone → target exit handlers or FILE streams (FSOP, e.g. stdout's _IO_buf_base)