Binary Security Heap Overflow Series — Heap Basics & Structure (Part 3)


The previous installment ended with a short summary of _int_malloc; this time we start from calloc.


calloc

calloc zero-initializes the memory it allocates. malloc does not initialize at all, so the returned bytes are whatever garbage data was left there before.

_int_free

  • Check whether the chunk size falls within the fastbin range

If it does, put the chunk into a fastbin; otherwise put it into the unsorted bin.

  • When placing a chunk into the unsorted bin, check whether an unlink (merge) is needed

Forward merge: merging toward lower addresses. Backward merge: merging toward higher addresses.


  • The merge on free happens only when the following conditions hold

1. Check whether the current chunk's prev_inuse bit is 0.
2. Check whether the next chunk's prev_inuse is 1.
3. Check whether the chunk after next has prev_inuse 1; a forward merge (with the previous chunk) then takes place.
4. If the chunk after next has prev_inuse 0, a forward merge is followed by a backward merge (with the next chunk).

__libc_free

  • A thin wrapper around _int_free
  • It mainly does three things

1. Check whether __free_hook is set; exploitation: point __free_hook at system.
2. If the passed pointer is NULL, do nothing.
3. mmap'd chunks are released with munmap; everything else goes to _int_free.

__malloc_hook

<code>void *weak_variable (*__malloc_hook) (size_t __size, const void *) = malloc_hook_ini;
void weak_variable (*__free_hook) (void *__ptr, const void *) = NULL;</code>
  • How to exploit __malloc_hook

1. Suppose we know the address of system or of some shellcode.
2. Normally execution could be redirected to it by overwriting a GOT entry.
3. Under Full RELRO, however, the GOT cannot be written.
4. Instead, overwrite __malloc_hook with the address of a one_gadget; the next malloc call jumps there and spawns a /bin/sh shell.
5. For __free_hook, the simplest approach is to overwrite it with system's address and free a chunk whose data is the string "/bin/sh"; free(ptr) then becomes system("/bin/sh").
6. That trick does not work for __malloc_hook, whose first parameter is a size_t rather than a pointer, so a "/bin/sh" string address cannot be passed through it.
7. Either hook overwrite sidesteps RELRO's GOT protection entirely, since the GOT is never touched.

  • __malloc_hook & fastbin attack

1. Point a fastbin chunk's fd at __malloc_hook - 0x23 (or at __free_hook).
2. A subsequent malloc then returns the fake chunk at __malloc_hook - 0x23 (or at __free_hook).
3. Write the payload address there: the address of a "/bin/sh" string, of shellcode, or of a one_gadget.

unsorted bin

  • Design principles

A freed chunk that is not fastbin-sized and is not adjacent to the top chunk goes into the unsorted bin.
When a larger chunk is split and the leftover is bigger than MINSIZE, the leftover also goes into the unsorted bin.
The unsorted bin is a circular doubly linked list, managed FIFO.
There is exactly one unsorted bin: the first index of the bins array.
After the unsorted bin has been traversed once, it is tidied up: each chunk is placed into the bin matching its size.

  • Linkage

  • Once a chunk is freed into the unsorted bin, its fd/bk hold addresses inside libc (pointing into main_arena)

0x7ffff7dd1000 0x7ffff7dd3000 rw-p 2000 1c4000 /lib/x86_64-linux-gnu/libc-2.23.so
libc_addr = leak - libc.symbol("main_arena") - 88

  • unsortedbin attack

1. Construct an overlap so you control a chunk that can still be written after it has been freed.
2. Overwrite that chunk's bk with target - 0x23.
3. When the chunk is unlinked from the unsorted bin, bck = victim->bk is stored into av->bk, and the address of the unsorted bin head (inside main_arena) is stored into bck->fd.
4. We have therefore written main_arena's address relative to target - 0x23:
   a. main_arena is a libc address, so the written value begins with 0x7f.
   b. With the offset target - 0x23, the 0x7f byte lands exactly in the size field of a fake chunk at target.
   c. target now contains a usable 0x7f size value.
   d. Combining this with a fastbin attack, target can then be malloc'd out, giving a chunk at an arbitrary address for read and write.
5. Because main_arena's bk was overwritten, the unsorted bin's list structure is corrupted.
6. Later allocations therefore must be served from the fastbins rather than the unsorted bin (so stash a few chunks in the fastbins beforehand).

<code>while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av)) /* when the bin is empty, bk points back to the bin header itself */
{
bck = victim->bk; /* the next chunk after victim in the list */
if (__builtin_expect (chunksize_nomask (victim) <= 2 * SIZE_SZ, 0)
|| __builtin_expect (chunksize_nomask (victim)
> av->system_mem, 0))
malloc_printerr ("malloc(): memory corruption");
size = chunksize (victim);


/*
If a small request, try to use last remainder if it is the
only chunk in unsorted bin. This helps promote locality for

runs of consecutive small requests. This is the only
exception to best-fit, and applies only when there is
no exact fit for a small chunk.
*/


if (in_smallbin_range (nb) &&
bck == unsorted_chunks (av) &&
victim == av->last_remainder &&
(unsigned long) (size) > (unsigned long) (nb + MINSIZE))
{
/* split and reattach remainder */
remainder_size = size - nb;
remainder = chunk_at_offset (victim, nb);
unsorted_chunks (av)->bk = unsorted_chunks (av)->fd = remainder;
av->last_remainder = remainder;
remainder->bk = remainder->fd = unsorted_chunks (av);
if (!in_smallbin_range (remainder_size))
{
remainder->fd_nextsize = NULL;
remainder->bk_nextsize = NULL;
}


set_head (victim, nb | PREV_INUSE |
(av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
set_foot (remainder, remainder_size);


check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}


/* remove from unsorted list */
unsorted_chunks (av)->bk = bck; /* unlink: take victim off the list */
bck->fd = unsorted_chunks (av);
</code>

last remainder

  • After a malloc request, if the chunk ptmalloc finds does not match the requested size, the chunk is split; the leftover is called the last remainder chunk, is tracked by last_remainder, and is kept in the unsorted bin
  • `p main_arena` in gdb shows it: last_remainder = 0x6020a0
  • After the split, the remainder's fd/bk leak the address of main_arena+88
<code>0x602260 PREV_INUSE {
prev_size = 0,
size = 305,
fd = 0x7ffff7dd1c98 <main_arena+88>,
bk = 0x7ffff7dd1c98 <main_arena+88>,
fd_nextsize = 0x0,
bk_nextsize = 0x0
}</code>

fast bin

  • Design principles

Fastbins are singly linked lists organized like a hash table: one bin each for sizes 0x20, 0x30, 0x40 up to 0x80, with same-size chunks chained together. ptmalloc designed fastbins specifically to manage small freed blocks; they correspond to the fastbinsY array in the malloc_state structure.
A chunk in a fastbin leaves its successor's prev_inuse bit at 1, so it is not merged with adjacent chunks when freed; merging is deferred until malloc_consolidate runs. Experimentally, a malloc larger than 0x400 causes adjacent free fastbin chunks to be merged. By default each bin manages chunks of at most 0x80 bytes, extensible up to 0xa0. There are at most 10 fastbins, covering 0x20 through 0xa0.

  • Management rules

1. When the requested size is within the fastbin maximum, ptmalloc first checks whether the corresponding fastbin holds a chunk of that size; if so, that chunk is returned directly; if not, allocation continues (ultimately splitting the top chunk after further processing).
2. Fastbins are LIFO: the most recently freed chunk is linked directly to the bin head in main_arena, is reached first when traversing from main_arena, and is the first to be handed back out.
3. The list runs from the bin head down to NULL; each freed chunk is inserted between the head and the previous first chunk, so every newly inserted chunk links directly to the bin head.

  • Indexing

By default ptmalloc calls set_max_fast(DEFAULT_MXFAST), setting the global variable global_max_fast to the maximum fastbin chunk size. If it is set to 0, fastbins are disabled entirely. The fastbin index is computed with fastbin_index(size).

  • Linkage

  • fastbin attack

1. Construct a fastbin overlap so a chunk can still be modified after it has been freed.
2. Overwrite that chunk's fd with the target address; when the chunk is allocated back out, the bin head then points at the target.
3. Set the fake chunk's size field (at target + 0x8) to something like 0x71 or 0x7f, passing the chunk-size check (chunks in one fastbin must share the same size class; the prev_inuse flag bits are masked off). The rest of the fake chunk can hold anything. Tip: libc addresses begin with 0x7f: 0x7ffff7a0d000 0x7ffff7bcd000 r-xp 1c0000 0 /lib/x86_64-linux-gnu/libc-2.23.so
4. The next malloc of that size returns the target chunk; with target at a GOT entry or any chosen address, this yields function hijacking or an arbitrary write.

  • The unlink path
<code>if ((unsigned long) (nb) <= (unsigned long) (get_max_fast ()))
{
idx = fastbin_index (nb); /* compute the bin index */
mfastbinptr *fb = &fastbin (av, idx); /* locate the bin head in main_arena */
mchunkptr pp;
victim = *fb; /* victim = the first chunk in the bin */

if (victim != NULL)
{
if (SINGLE_THREAD_P)
*fb = victim->fd; /* head now points at victim->fd, unlinking the chunk adjacent to main_arena */
else
REMOVE_FB (fb, pp, victim);
if (__glibc_likely (victim != NULL))
{
size_t victim_idx = fastbin_index (chunksize (victim)); /* recompute the index from the chunk's own size field */
if (__builtin_expect (victim_idx != idx, 0)) /* check the chunk's size actually matches this bin */
malloc_printerr ("malloc(): memory corruption (fast)");
check_remalloced_chunk (av, victim, nb);</code>

small bin

  • Management rules

There are 62 circular doubly linked small bins; all chunks within one bin have the same size.
Each small bin follows FIFO.
Chunks are removed from the bk side: bin->bk points to the chunk that was freed into the bin earliest.

  • The size of chunks in a small bin relates to the bin index as: chunk_size = 2 * SIZE_SZ * index

On 32-bit, SIZE_SZ is 4; on 64-bit it is 8. Small bin indices run from 2 to 63. For a 64-bit machine: index = 2 gives chunk_size = 32; index = 63 gives chunk_size = 1008.

  • On the size overlap between fastbin and small bin

After malloc_consolidate, fastbin chunks (unless merged away) end up in the small bins covering the same 0x20 to 0x80 sizes.

  • Small bin structure

large bin

  • Management rules

Large bins comprise 63 bins; the chunks within one bin are not all equal in size but fall within a common range. The 63 bins form six groups, each with a fixed size spacing:
Group 1: indices 64–95, 32 bins, starting size 1024, spacing 64, e.g. [1024, 1024+64)
Group 2: indices 96–111, 16 bins, spacing 512
Group 3: indices 112–119, 8 bins, spacing 4096
Group 4: indices 120–123, 4 bins, spacing 32768
Group 5: indices 124–125, 2 bins, spacing 262144
Group 6: index 126, 1 bin, no size limit
Like the small bins, large bins are managed FIFO, but each large bin threads two doubly linked lists: fd_nextsize/bk_nextsize order chunks by size (fd_nextsize points to the next chunk of smaller size, bk_nextsize to the next larger), while fd/bk follow free order (the chunk freed in first links directly to the bin head in main_arena).

  • Structure

  • largebin attack

The target must lie on the same bin list as the controlled chunk, i.e. within the same size range.

Fragment consolidation

Which bin a freed chunk belongs to

  • fastbin: 10 bins for chunk sizes 0x20–0x80 (the corresponding malloc request sizes are 0x10–0x70)
  • bins array: 128 entries
    • bin 1: the unsorted bin
    • bins 2–63: small bins
    • bins 64–126: large bins
  • By freed chunk size:
    • 0x20–0x80 → fastbin
    • above 0x80 → unsorted bin

How the bins are tidied and sorted

  • malloc_consolidate tidies the fastbins: mergeable fastbin chunks are merged and placed into the unsorted bin; at the end of malloc's unsorted-bin scan, the remaining unsorted chunks are sorted into their proper bins
  • fastbin → malloc_consolidate → unsortedbin → sorted into the matching bin
  • Only when no suitable chunk is found in the fastbins, unsorted bin, small/large bins, or via mmap is the top chunk split
  • All of this reduces heap fragmentation

consolidate

Chunks in the fastbin range keep their inuse bit permanently set, so they never merge with other freed chunks at free time. When a freed chunk, after merging with its adjacent free chunks, exceeds FAST_CONSOLIDATION_THRESHOLD, fragmentation is assumed to be significant, and all fastbin chunks are consolidated to limit its impact on the system. malloc_consolidate merges every fastbin chunk that can be merged with other chunks.

0. The fragmentation threshold used for fastbin defragmentation:

<code>#define FAST_CONSOLIDATION_THRESHOLD (65536ul)</code>

1. Triggering consolidation

The first consolidation is triggered by a malloc request outside the smallbin range, i.e. 0x400 bytes or more (a largebin-sized request).

<code>/*
If a small request, check regular bin. Since these "smallbins"
hold one size each, no searching within bins is necessary.
(For a large request, we need to wait until unsorted chunks are
processed to find best fit. But for small ones, fits are exact
anyway, so we can check now, which is faster.)
*/

if (in_smallbin_range (nb))
{
idx = smallbin_index (nb);
bin = bin_at (av, idx);

if ((victim = last (bin)) != bin)
{
bck = victim->bk;
if (__glibc_unlikely (bck->fd != victim))
malloc_printerr ("malloc(): smallbin double linked list corrupted");
set_inuse_bit_at_offset (victim, nb);
bin->bk = bck;
bck->fd = bin;

if (av != &main_arena)
set_non_main_arena (victim);
check_malloced_chunk (av, victim, nb);
#if USE_TCACHE
/* While we're here, if we see other chunks of the same size,
stash them in the tcache. */
size_t tc_idx = csize2tidx (nb);
if (tcache && tc_idx < mp_.tcache_bins)
{
mchunkptr tc_victim;

/* While bin not empty and tcache not full, copy chunks over. */
while (tcache->counts[tc_idx] < mp_.tcache_count
&& (tc_victim = last (bin)) != bin)
{
if (tc_victim != 0)
{
bck = tc_victim->bk;
set_inuse_bit_at_offset (tc_victim, nb);
if (av != &main_arena)
set_non_main_arena (tc_victim);
bin->bk = bck;
bck->fd = bin;

tcache_put (tc_victim, tc_idx);
}
}
}
#endif
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}
}

/*
If this is a large request, consolidate fastbins before continuing.
While it might look excessive to kill all fastbins before
even seeing if there is space available, this avoids
fragmentation problems normally associated with fastbins.
Also, in practice, programs tend to have runs of either small or

large requests, but less often mixtures, so consolidation is not
invoked all that often in most programs. And the programs that
it is called frequently in otherwise tend to fragment.
*/

else
{
idx = largebin_index (nb); /* the request falls in the largebin range */
if (atomic_load_relaxed (&av->have_fastchunks)) /* triggered only when fastbins are non-empty */
malloc_consolidate (av);
}</code>

2. Merging

Chunks taken off the fastbins are merged and moved into the unsorted bin

<code>static void malloc_consolidate(mstate av)
{
mfastbinptr* fb; /* current fastbin being consolidated */
mfastbinptr* maxfb; /* last fastbin (for loop control) */
mchunkptr p; /* current chunk being consolidated */
mchunkptr nextp; /* next chunk to consolidate */
mchunkptr unsorted_bin; /* bin header */
mchunkptr first_unsorted; /* chunk to link to */

/* These have same use as in free() */
mchunkptr nextchunk;
INTERNAL_SIZE_T size;
INTERNAL_SIZE_T nextsize;
INTERNAL_SIZE_T prevsize;
int nextinuse;
mchunkptr bck;
mchunkptr fwd;

atomic_store_relaxed (&av->have_fastchunks, false);

unsorted_bin = unsorted_chunks(av); /* get the unsorted bin header */

/*
Remove each chunk from fast bin and consolidate it, placing it
then in unsorted bin. Among other reasons for doing this,
placing in unsorted bin avoids needing to calculate actual bins
until malloc is sure that chunks aren't immediately going to be
reused anyway.
*/

maxfb = &fastbin (av, NFASTBINS - 1);
fb = &fastbin (av, 0);
do {
p = atomic_exchange_acq (fb, NULL);
if (p != 0) {
do {
{
unsigned int idx = fastbin_index (chunksize (p)); /* recompute the fastbin index for this chunk */
if ((&fastbin (av, idx)) != fb)
malloc_printerr ("malloc_consolidate(): invalid chunk size");
}

check_inuse_chunk(av, p); /* sanity-check the chunk being consolidated */
nextp = p->fd;

/* Slightly streamlined version of consolidation code in free() */
size = chunksize (p);
nextchunk = chunk_at_offset(p, size); /* the chunk physically after p */
nextsize = chunksize(nextchunk);

if (!prev_inuse(p)) { /* merge with the previous (lower-address) free chunk */
prevsize = prev_size (p);
size += prevsize;
p = chunk_at_offset(p, -((long) prevsize));
unlink(av, p, bck, fwd);
}

if (nextchunk != av->top) { /* merge with the next chunk if free, then place on the unsorted bin */
nextinuse = inuse_bit_at_offset(nextchunk, nextsize);

if (!nextinuse) {
size += nextsize;
unlink(av, nextchunk, bck, fwd);
} else
clear_inuse_bit_at_offset(nextchunk, 0);

first_unsorted = unsorted_bin->fd;
unsorted_bin->fd = p; /* p: the chunk taken off the fastbin, now merged and linked into the unsorted bin */
first_unsorted->bk = p;

if (!in_smallbin_range (size)) {
p->fd_nextsize = NULL;
p->bk_nextsize = NULL;
}

set_head(p, size | PREV_INUSE);
p->bk = unsorted_bin;

p->fd = first_unsorted;
set_foot(p, size);
}

else {
size += nextsize;
set_head(p, size | PREV_INUSE);
av->top = p;
}

} while ( (p = nextp) != 0);

}
} while (fb++ != maxfb);
}</code>

3. Searching

Scan the unsorted bin for a suitable chunk, splitting one when necessary

<code> /*
1. Look for a suitable chunk on the unsorted bin.
2. If there is none, two cases:
2.1 split the unsorted-bin chunk (last remainder), or
2.2 trigger consolidate again and then split the top chunk.
*/

for (;; )
{
int iters = 0;
while ((victim = unsorted_chunks (av)->bk) != unsorted_chunks (av)) /* when the bin is empty, bk points back to the bin header itself */
{
bck = victim->bk; /* the next chunk after victim in the list */
if (__builtin_expect (chunksize_nomask (victim) <= 2 * SIZE_SZ, 0)
|| __builtin_expect (chunksize_nomask (victim)
> av->system_mem, 0))
malloc_printerr ("malloc(): memory corruption");
size = chunksize (victim);

/*
If a small request, try to use last remainder if it is the

only chunk in unsorted bin. This helps promote locality for
runs of consecutive small requests. This is the only
exception to best-fit, and applies only when there is
no exact fit for a small chunk.
*/
//the unsorted chunk is only split for requests in the smallbin range (0x20 to 0x400)

if (in_smallbin_range (nb) &&
bck == unsorted_chunks (av) &&
victim == av->last_remainder &&
(unsigned long) (size) > (unsigned long) (nb + MINSIZE))
{ //the remainder left after splitting must be larger than 0x20
/* split and reattach remainder */
remainder_size = size - nb;
remainder = chunk_at_offset (victim, nb);
unsorted_chunks (av)->bk = unsorted_chunks (av)->fd = remainder;
av->last_remainder = remainder;
remainder->bk = remainder->fd = unsorted_chunks (av);
if (!in_smallbin_range (remainder_size))
{
remainder->fd_nextsize = NULL;
remainder->bk_nextsize = NULL;
}

set_head (victim, nb | PREV_INUSE |
(av != &main_arena ? NON_MAIN_ARENA : 0));
set_head (remainder, remainder_size | PREV_INUSE);
set_foot (remainder, remainder_size);

check_malloced_chunk (av, victim, nb);
void *p = chunk2mem (victim);
alloc_perturb (p, bytes);
return p;
}

/* remove from unsorted list */
unsorted_chunks (av)->bk = bck; /* unlink: take victim off the list */
bck->fd = unsorted_chunks (av);

/* Take now instead of binning if exact fit */

if (size == nb)
{
set_inuse_bit_at_offset (victim, size);
if (av != &main_arena)
set_non_main_arena (victim);
//falling out of this loop means no suitable chunk exists in the unsorted bin</code>

4. Sorting

Chunks taken off the unsorted bin that do not fit the request are sorted into the bins matching their sizes

<code>/* place chunk in bin */

if (in_smallbin_range (size)) /* does the chunk taken from the unsorted bin belong in a small bin? */
{
victim_index = smallbin_index (size); /* place into the small bin */
bck = bin_at (av, victim_index);
fwd = bck->fd;
}
else
{
victim_index = largebin_index (size); /* place into the large bin */
bck = bin_at (av, victim_index);
fwd = bck->fd;

/* maintain large bins in sorted order */
if (fwd != bck)
{
/* Or with inuse bit to speed comparisons */
size |= PREV_INUSE;
/* if smaller than smallest, bypass loop below */
assert (chunk_main_arena (bck->bk));
if ((unsigned long) (size)
< (unsigned long) chunksize_nomask (bck->bk))
{
fwd = bck;
bck = bck->bk;

victim->fd_nextsize = fwd->fd;
victim->bk_nextsize = fwd->fd->bk_nextsize;
fwd->fd->bk_nextsize = victim->bk_nextsize->fd_nextsize = victim;
}
else
{
assert (chunk_main_arena (fwd));
while ((unsigned long) size < chunksize_nomask (fwd))
{
fwd = fwd->fd_nextsize;
assert (chunk_main_arena (fwd));
}

if ((unsigned long) size
== (unsigned long) chunksize_nomask (fwd))
/* Always insert in the second position. */
fwd = fwd->fd;
else
{
victim->fd_nextsize = fwd;
victim->bk_nextsize = fwd->bk_nextsize;
fwd->bk_nextsize = victim;
victim->bk_nextsize->fd_nextsize = victim;
}
bck = fwd->bk;
}
}
else
victim->fd_nextsize = victim->bk_nextsize = victim;
}

mark_bin (av, victim_index); /* set the binmap bit; link victim into the bin */
victim->bk = bck;
victim->fd = fwd;
fwd->bk = victim;
bck->fd = victim;</code>

That's all for this installment; see you in the next one~


