
Linux memory management: the brk() system call

Author: 核心中文社群 (Kernel Chinese Community)

Despite its low profile, brk may well be the most frequently used system call: it is how a user process asks the kernel for memory. People rarely realize they are calling brk, because almost nobody invokes the system call directly; instead they go through C library functions such as malloc (or language features such as C++'s new), which call brk on their behalf. If malloc is retail, brk is wholesale. The library function malloc maintains a small warehouse for the user process (malloc itself is part of that process); when the process needs more memory it draws from the warehouse, and when the warehouse runs low, malloc restocks in bulk from the kernel via brk.

As described earlier, each process has a 3 GB user virtual address space. This does not mean the process may use all 3 GB freely: a virtual range becomes usable only once it is mapped to some physical storage (memory or disk space), and establishing and managing that mapping is the kernel's job. "Asking the kernel for a block of space" means asking the kernel to allocate a range of virtual addresses plus the corresponding physical pages, and to set up the mapping between them. Since each process's virtual space is huge (3 GB) while what it actually uses is small, the kernel cannot allocate physical backing and mappings for the whole space at process creation; it allocates only as much as is needed, when it is needed.

So how does the kernel manage each process's 3 GB of virtual space? Roughly speaking, the image file produced by compiling and linking a user program contains a code segment and a data segment (the latter comprising the data and bss sections), with the code segment at the bottom and the data segment above it. The data segment holds all statically allocated data: global variables and local variables declared static. These are the process's baseline requirements, so when the kernel builds a process's run-time image it sets this space up immediately, both the virtual address ranges and the physical pages, and establishes the mappings between them. The stack, by contrast, sits at the top of the virtual space and grows downward at run time, while the code and data segments sit at the bottom (not to be confused with the code and data segments established through segment registers on the x86 architecture) and do not grow upward. The large region between the top of the data segment, end_data, and the bottom of the stack is a huge hole: this is the space available for dynamic allocation at run time.

Initially the dynamic allocation region starts at the process's end_data, an address known to both the kernel and the process. Each dynamic allocation thereafter pushes the boundary upward, and both sides keep track of where it currently lies: on the process side it is managed by malloc or a similar library function, while the kernel records it in the process's mm_struct. Specifically, mm_struct has a field brk, marking the current bottom of the dynamic allocation region. When a process wants to allocate memory, it adds the requested size to the current boundary; the sum is the requested new boundary, and that is the argument brk passed to the system call. When the kernel can satisfy the request the call succeeds (at the C-library level, brk() returns 0) and the virtual addresses between the old and the new boundary all become usable. When the kernel finds it cannot satisfy the request (for example, because physical space is exhausted), or that the new boundary would press too close to the stack at the top, it refuses the allocation and the call fails with -1.

The kernel implements the brk system call as sys_brk, whose code is in mm/mmap.c. The function serves both to allocate space, pushing the bottom boundary of the dynamic region upward, and to release space, giving it back. Accordingly its code falls roughly into two parts. Let us read the first part:

sys_brk

/*
 *  sys_brk() for the most part doesn't need the global kernel
 *  lock, except when an application is doing something nasty
 *  like trying to un-brk an area that has already been mapped
 *  to a regular file.  in this case, the unmapping will need
 *  to invoke file system routines that need the global lock.
 */
asmlinkage unsigned long sys_brk(unsigned long brk)
{
	unsigned long rlim, retval;
	unsigned long newbrk, oldbrk;
	struct mm_struct *mm = current->mm;
 
	down(&mm->mmap_sem);
 
	if (brk < mm->end_code)
		goto out;
	newbrk = PAGE_ALIGN(brk);
	oldbrk = PAGE_ALIGN(mm->brk);
	if (oldbrk == newbrk)
		goto set_brk;
 
	/* Always allow shrinking brk. */
	if (brk <= mm->brk) {
		if (!do_munmap(mm, newbrk, oldbrk-newbrk))
			goto set_brk;
		goto out;
	}           

The parameter brk is the requested new boundary, which must not fall below the end of the code segment; the kernel then rounds both the old and the new boundary up to a page boundary with PAGE_ALIGN. If the new boundary lies at or below the old one, the call is releasing space rather than allocating it, so do_munmap is invoked to tear down the mapping of part of the range. This is an important function; its code follows:

sys_brk=>do_munmap

/* Munmap is split into 2 main parts -- this part which finds
 * what needs doing, and the areas themselves, which do the
 * work.  This now handles partial unmappings.
 * Jeremy Fitzhardine <[email protected]>
 */
int do_munmap(struct mm_struct *mm, unsigned long addr, size_t len)
{
	struct vm_area_struct *mpnt, *prev, **npp, *free, *extra;
 
	if ((addr & ~PAGE_MASK) || addr > TASK_SIZE || len > TASK_SIZE-addr)
		return -EINVAL;
 
	if ((len = PAGE_ALIGN(len)) == 0)
		return -EINVAL;
 
	/* Check if this memory area is ok - put it on the temporary
	 * list if so..  The checks here are pretty simple --
	 * every area affected in some way (by any overlap) is put
	 * on the list.  If nothing is put on, nothing is affected.
	 */
	mpnt = find_vma_prev(mm, addr, &prev);
	if (!mpnt)
		return 0;
	/* we have  addr < mpnt->vm_end  */
 
	if (mpnt->vm_start >= addr+len)
		return 0;
 
	/* If we'll make "hole", check the vm areas limit */
	if ((mpnt->vm_start < addr && mpnt->vm_end > addr+len)
	    && mm->map_count >= MAX_MAP_COUNT)
		return -ENOMEM;           

The function find_vma_prev works essentially like find_vma, which we read earlier in the post on important data structures and functions in Linux memory management: it scans the current process's linked list (or AVL tree) of vm_area_struct structures for the first interval whose end address lies above addr, and if one is found it returns a pointer to that interval's vm_area_struct. The difference is that it also returns, through the parameter prev, a pointer to the preceding interval; we will see shortly why that pointer is needed. If the returned pointer is NULL, or if the interval's start address lies at or above addr+len, then the span to be unmapped was never mapped in the first place, so the function simply returns 0. If the span falls in the middle of an interval, unmapping it will punch a hole and split the interval in two. But a process may own only a limited number of virtual memory intervals, so if that number has already reached the ceiling MAX_MAP_COUNT, the operation is refused.



sys_brk=>do_munmap

/*
	 * We may need one additional vma to fix up the mappings ... 
	 * and this is the last chance for an easy error exit.
	 */
	extra = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
	if (!extra)
		return -ENOMEM;
 
	npp = (prev ? &prev->vm_next : &mm->mmap);
	free = NULL;
	spin_lock(&mm->page_table_lock);
	for ( ; mpnt && mpnt->vm_start < addr+len; mpnt = *npp) {
		*npp = mpnt->vm_next;
		mpnt->vm_next = free;
		free = mpnt;
		if (mm->mmap_avl)
			avl_remove(mpnt, &mm->mmap_avl);
	}
	mm->mmap_cache = NULL;	/* Kill the cache. */
	spin_unlock(&mm->page_table_lock);           

Since unmapping part of a span may split an interval in two, a spare blank vm_area_struct, extra, is allocated up front. The span being unmapped may also straddle several intervals, so a for loop moves every affected interval onto a temporary list, free; if an AVL tree has been built, the vm_area_struct structures are removed from the tree as well. As mentioned before, the mmap_cache pointer in mm_struct refers to the target of the most recent find_vma, exploiting the locality that interval operations tend to show (see the find_vma code); now that the layout of the user space is changing, that locality is most likely broken, so the cache is cleared to NULL. With all the preparation complete, the actual unmapping can begin.

sys_brk=>do_munmap

/* Ok - we have the memory areas we should free on the 'free' list,
	 * so release them, and unmap the page range..
	 * If the one of the segments is only being partially unmapped,
	 * it will put new vm_area_struct(s) into the address space.
	 * In that case we have to be careful with VM_DENYWRITE.
	 */
	while ((mpnt = free) != NULL) {
		unsigned long st, end, size;
		struct file *file = NULL;
 
		free = free->vm_next;
 
		st = addr < mpnt->vm_start ? mpnt->vm_start : addr;
		end = addr+len;
		end = end > mpnt->vm_end ? mpnt->vm_end : end;
		size = end - st;
 
		if (mpnt->vm_flags & VM_DENYWRITE &&
		    (st != mpnt->vm_start || end != mpnt->vm_end) &&
		    (file = mpnt->vm_file) != NULL) {
			atomic_dec(&file->f_dentry->d_inode->i_writecount);
		}
		remove_shared_vm_struct(mpnt);
		mm->map_count--;
 
		flush_cache_range(mm, st, end);
		zap_page_range(mm, st, size);
		flush_tlb_range(mm, st, end);
 
		/*
		 * Fix the mapping, and free the old area if it wasn't reused.
		 */
		extra = unmap_fixup(mm, mpnt, st, size, extra);
		if (file)
			atomic_inc(&file->f_dentry->d_inode->i_writecount);
	}
 
	/* Release the extra vma struct if it wasn't used */
	if (extra)
		kmem_cache_free(vm_area_cachep, extra);
 
	free_pgtables(mm, prev, addr, addr+len);
 
	return 0;
}           

A while loop now processes the affected intervals one by one, their vm_area_struct structures all chained on the temporary list free. As the next post will show, a process can map the contents of a file into an interval of its user space with the mmap system call and then access the file as if it were memory. But if the same file is also open in another process and accessed through ordinary file operations, the two different kinds of write access must be mutually excluded. If only part of such an interval is being unmapped (lines 735-737), that amounts to a write to the interval, so the counter i_writecount in the file's inode structure is decremented to enforce the exclusion, and restored once the operation completes (lines 751-752). In addition, remove_shared_vm_struct checks whether the interval being processed is of this kind and, if so, unlinks its vm_area_struct from the i_mapping queue in the target file's inode structure.

The call to zap_page_range in this code is what chiefly concerns us here: it tears down the mappings of a run of contiguous pages and releases the mapped memory pages, or the references to physical pages on the swap device. Its code follows:

sys_brk=>do_munmap=>zap_page_range

/*
 * remove user pages in a given range.
 */
void zap_page_range(struct mm_struct *mm, unsigned long address, unsigned long size)
{
	pgd_t * dir;
	unsigned long end = address + size;
	int freed = 0;
 
	dir = pgd_offset(mm, address);
 
	/*
	 * This is a long-lived spinlock. That's fine.
	 * There's no contention, because the page table
	 * lock only protects against kswapd anyway, and
	 * even if kswapd happened to be looking at this
	 * process we _want_ it to get stuck.
	 */
	if (address >= end)
		BUG();
	spin_lock(&mm->page_table_lock);
	do {
		freed += zap_pmd_range(mm, dir, address, end - address);
		address = (address + PGDIR_SIZE) & PGDIR_MASK;
		dir++;
	} while (address && (address < end));
	spin_unlock(&mm->page_table_lock);
	/*
	 * Update rss for the mm_struct (not necessarily current->mm)
	 * Notice that rss is an unsigned long.
	 */
	if (mm->rss > freed)
		mm->rss -= freed;
	else
		mm->rss = 0;
}           

This function unmaps the pages of a virtual interval. It first locates, via pgd_offset, the entry in the top-level page directory to which the start address belongs, then works through all the affected directory entries in a do-while loop.

/* to find an entry in a page-table-directory. */
#define pgd_index(address) ((address >> PGDIR_SHIFT) & (PTRS_PER_PGD-1))
 
#define __pgd_offset(address) pgd_index(address)
 
#define pgd_offset(mm, address) ((mm)->pgd+pgd_index(address))           

For each directory entry involved, zap_pmd_range handles the middle-level directory entries of the second level.

sys_brk=>do_munmap=>zap_page_range=>zap_pmd_range

static inline int zap_pmd_range(struct mm_struct *mm, pgd_t * dir, unsigned long address, unsigned long size)
{
	pmd_t * pmd;
	unsigned long end;
	int freed;
 
	if (pgd_none(*dir))
		return 0;
	if (pgd_bad(*dir)) {
		pgd_ERROR(*dir);
		pgd_clear(dir);
		return 0;
	}
	pmd = pmd_offset(dir, address);
	address &= ~PGDIR_MASK;
	end = address + size;
	if (end > PGDIR_SIZE)
		end = PGDIR_SIZE;
	freed = 0;
	do {
		freed += zap_pte_range(mm, pmd, address, end - address);
		address = (address + PMD_SIZE) & PMD_MASK; 
		pmd++;
	} while (address < end);
	return freed;
}           

Similarly, pmd_offset finds the starting entry in the middle-level directory table. On the i386 architecture with two-level paging, the middle directory level is folded away; the definition is:

extern inline pmd_t * pmd_offset(pgd_t * dir, unsigned long address)
{
	return (pmd_t *) dir;
}           

As the definition shows, pmd_offset hands the pointer to the top-level directory entry back unchanged as the pointer to the middle-level entry; in other words, the top-level directory doubles as the middle directory. So with two-level paging, zap_pmd_range in a sense merely repeats what zap_page_range just did, except that this time the nested call is zap_pte_range, which operates on the bottom-level page table.

sys_brk=>do_munmap=>zap_page_range=>zap_pmd_range=>zap_pte_range

static inline int zap_pte_range(struct mm_struct *mm, pmd_t * pmd, unsigned long address, unsigned long size)
{
	pte_t * pte;
	int freed;
 
	if (pmd_none(*pmd))
		return 0;
	if (pmd_bad(*pmd)) {
		pmd_ERROR(*pmd);
		pmd_clear(pmd);
		return 0;
	}
	pte = pte_offset(pmd, address);
	address &= ~PMD_MASK;
	if (address + size > PMD_SIZE)
		size = PMD_SIZE - address;
	size >>= PAGE_SHIFT;
	freed = 0;
	for (;;) {
		pte_t page;
		if (!size)
			break;
		page = ptep_get_and_clear(pte);
		pte++;
		size--;
		if (pte_none(page))
			continue;
		freed += free_pte(page);
	}
	return freed;
}           

Again the starting entry in the given page table is located first; the definitions related to pte_offset are:

/* Find an entry in the third-level page table.. */
#define __pte_offset(addr)	(((addr) >> PAGE_SHIFT) & (PTRS_PER_PTE - 1))
#define pte_offset(dir, addr)	((pte_t *)pmd_page(*(dir)) + __pte_offset(addr))           

Then, in a for loop, ptep_get_and_clear clears the page-table entry of each page being unmapped:

#define ptep_get_and_clear(xp)	__pte(xchg(&(xp)->pte_low, 0))           

Finally, free_pte drops the use of the memory page and of the corresponding on-disk page; its code is in mm/memory.c:

sys_brk=>do_munmap=>zap_page_range=>zap_pmd_range=>zap_pte_range=>free_pte

/*
 * Return indicates whether a page was freed so caller can adjust rss
 */
static inline int free_pte(pte_t pte)
{
	if (pte_present(pte)) {
		struct page *page = pte_page(pte);
		if ((!VALID_PAGE(page)) || PageReserved(page))
			return 0;
		/* 
		 * free_page() used to be able to clear swap cache
		 * entries.  We may now have to do it manually.  
		 */
		if (pte_dirty(pte) && page->mapping)
			set_page_dirty(page);
		free_page_and_swap_cache(page);
		return 1;
	}
	swap_free(pte_to_swp_entry(pte));
	return 0;
}           

If the page-table entry shows that the page was already absent from memory before the unmapping, the process's use of the memory page has already ended, so only swap_free need be called to drop the use of the corresponding page on the swap device. swap_free first decrements the on-disk page's use count; the disk page is actually released only when the count reaches 0, which happens when the current process was its last (or only) user. If, on the other hand, the page was present, free_page_and_swap_cache drops the use of both the disk page and the memory page. Moreover, if the page has been written to since the last try_to_swap_out, set_page_dirty sets the PG_dirty flag in its page structure and moves it onto the dirty_pages queue of the corresponding address_space structure. The code of free_page_and_swap_cache is in mm/swap_state.c:

sys_brk=>do_munmap=>zap_page_range=>zap_pmd_range=>zap_pte_range=>free_pte=>free_page_and_swap_cache

/* 
 * Perform a free_page(), also freeing any swap cache associated with
 * this page if it is the last user of the page. Can not do a lock_page,
 * as we are holding the page_table_lock spinlock.
 */
void free_page_and_swap_cache(struct page *page)
{
	/* 
	 * If we are the only user, then try to free up the swap cache. 
	 */
	if (PageSwapCache(page) && !TryLockPage(page)) {
		if (!is_page_shared(page)) {
			delete_from_swap_cache_nolock(page);
		}
		UnlockPage(page);
	}
	page_cache_release(page);
}           

As explained before, a swappable memory page mapped from user space (more precisely, its page structure) sits on three queues at once. First, through its queue head list it is linked into one of the page-in/page-out queues, i.e. the clean_pages, dirty_pages, or locked_pages queue of the corresponding address_space structure. Second, through its queue head lru it is linked into one of the LRU queues: active_list, inactive_dirty_list, or one of the inactive_clean_lists. Third, through the pointer next_hash it is linked into a hash queue. While a page is on one of the page-in/page-out queues, the PG_swap_cache flag in its page structure is set. If the current process is the page's last (or only) user, delete_from_swap_cache_nolock is now called to detach the page from these queues.

sys_brk=>do_munmap=>zap_page_range=>zap_pmd_range=>zap_pte_range=>free_pte=>free_page_and_swap_cache=>delete_from_swap_cache_nolock

/*
 * This will never put the page into the free list, the caller has
 * a reference on the page.
 */
void delete_from_swap_cache_nolock(struct page *page)
{
	if (!PageLocked(page))
		BUG();
 
	if (block_flushpage(page, 0))
		lru_cache_del(page);
 
	spin_lock(&pagecache_lock);
	ClearPageDirty(page);
	__delete_from_swap_cache(page);
	spin_unlock(&pagecache_lock);
	page_cache_release(page);
}           

block_flushpage first flushes the page's contents out to the block device; in practice the flush happens only when the page comes from a file mapped into user space, since for a page on the swap device the contents are by now meaningless. After the flush, lru_cache_del detaches the page from whichever LRU queue it is on, and __delete_from_swap_cache then detaches it from the other two queues.

sys_brk=>do_munmap=>zap_page_range=>zap_pmd_range=>zap_pte_range=>free_pte=>free_page_and_swap_cache=>delete_from_swap_cache_nolock=>__delete_from_swap_cache

/*
 * This must be called only on pages that have
 * been verified to be in the swap cache.
 */
void __delete_from_swap_cache(struct page *page)
{
	swp_entry_t entry;
 
	entry.val = page->index;
 
#ifdef SWAP_CACHE_INFO
	swap_cache_del_total++;
#endif
	remove_from_swap_cache(page);
	swap_free(entry);
}           

Here remove_from_swap_cache detaches the page's page structure from the page-in/page-out queue and from the hash queue; swap_free then releases the on-disk page, and control returns to delete_from_swap_cache_nolock. The final step there is page_cache_release, which decrements the use count in the page structure. Since the current process was the page's last user and the page was present in memory before the unmapping (see line 264 in free_pte above), the count must have been 2; the page_cache_release here (line 119) brings it to 1. Back in free_page_and_swap_cache, another page_cache_release (line 149) brings it to 0, at which point the page is finally freed and returned to the free-page pool. By the time control returns to do_munmap, the operation on one virtual interval is complete. The interval's vm_area_struct and the process's mm_struct must now be adjusted to reflect the changes that have taken place; if the whole interval was unmapped, the old vm_area_struct must be released. These adjustments are done by unmap_fixup, whose code follows:

sys_brk=>do_munmap=>unmap_fixup

/* Normal function to fix up a mapping
 * This function is the default for when an area has no specific
 * function.  This may be used as part of a more specific routine.
 * This function works out what part of an area is affected and
 * adjusts the mapping information.  Since the actual page
 * manipulation is done in do_mmap(), none need be done here,
 * though it would probably be more appropriate.
 *
 * By the time this function is called, the area struct has been
 * removed from the process mapping list, so it needs to be
 * reinserted if necessary.
 *
 * The 4 main cases are:
 *    Unmapping the whole area
 *    Unmapping from the start of the segment to a point in it
 *    Unmapping from an intermediate point to the end
 *    Unmapping between to intermediate points, making a hole.
 *
 * Case 4 involves the creation of 2 new areas, for each side of
 * the hole.  If possible, we reuse the existing area rather than
 * allocate a new one, and the return indicates whether the old
 * area was reused.
 */
static struct vm_area_struct * unmap_fixup(struct mm_struct *mm, 
	struct vm_area_struct *area, unsigned long addr, size_t len, 
	struct vm_area_struct *extra)
{
	struct vm_area_struct *mpnt;
	unsigned long end = addr + len;
 
	area->vm_mm->total_vm -= len >> PAGE_SHIFT;
	if (area->vm_flags & VM_LOCKED)
		area->vm_mm->locked_vm -= len >> PAGE_SHIFT;
 
	/* Unmapping the whole area. */
	if (addr == area->vm_start && end == area->vm_end) {
		if (area->vm_ops && area->vm_ops->close)
			area->vm_ops->close(area);
		if (area->vm_file)
			fput(area->vm_file);
		kmem_cache_free(vm_area_cachep, area);
		return extra;
	}
 
	/* Work out to one of the ends. */
	if (end == area->vm_end) {
		area->vm_end = addr;
		lock_vma_mappings(area);
		spin_lock(&mm->page_table_lock);
	} else if (addr == area->vm_start) {
		area->vm_pgoff += (end - area->vm_start) >> PAGE_SHIFT;
		area->vm_start = end;
		lock_vma_mappings(area);
		spin_lock(&mm->page_table_lock);
	} else {
	/* Unmapping a hole: area->vm_start < addr <= end < area->vm_end */
		/* Add end mapping -- leave beginning for below */
		mpnt = extra;
		extra = NULL;
 
		mpnt->vm_mm = area->vm_mm;
		mpnt->vm_start = end;
		mpnt->vm_end = area->vm_end;
		mpnt->vm_page_prot = area->vm_page_prot;
		mpnt->vm_flags = area->vm_flags;
		mpnt->vm_raend = 0;
		mpnt->vm_ops = area->vm_ops;
		mpnt->vm_pgoff = area->vm_pgoff + ((end - area->vm_start) >> PAGE_SHIFT);
		mpnt->vm_file = area->vm_file;
		mpnt->vm_private_data = area->vm_private_data;
		if (mpnt->vm_file)
			get_file(mpnt->vm_file);
		if (mpnt->vm_ops && mpnt->vm_ops->open)
			mpnt->vm_ops->open(mpnt);
		area->vm_end = addr;	/* Truncate area */
 
		/* Because mpnt->vm_file == area->vm_file this locks
		 * things correctly.
		 */
		lock_vma_mappings(area);
		spin_lock(&mm->page_table_lock);
		__insert_vm_struct(mm, mpnt);
	}
 
	__insert_vm_struct(mm, area);
	spin_unlock(&mm->page_table_lock);
	unlock_vma_mappings(area);
	return extra;
}           

We leave this code to the reader. Finally, when the loop ends, some bottom-level page tables may have become entirely empty because mappings were removed; the pages occupied by such tables must be released too. This is done by free_pgtables, whose code we also leave to the reader:

sys_brk=>do_munmap=>free_pgtables

/*
 * Try to free as many page directory entries as we can,
 * without having to work very hard at actually scanning
 * the page tables themselves.
 *
 * Right now we try to free page tables if we have a nice
 * PGDIR-aligned area that got free'd up. We could be more
 * granular if we want to, but this is fast and simple,
 * and covers the bad cases.
 *
 * "prev", if it exists, points to a vma before the one
 * we just free'd - but there's no telling how much before.
 */
static void free_pgtables(struct mm_struct * mm, struct vm_area_struct *prev,
	unsigned long start, unsigned long end)
{
	unsigned long first = start & PGDIR_MASK;
	unsigned long last = end + PGDIR_SIZE - 1;
	unsigned long start_index, end_index;
 
	if (!prev) {
		prev = mm->mmap;
		if (!prev)
			goto no_mmaps;
		if (prev->vm_end > start) {
			if (last > prev->vm_start)
				last = prev->vm_start;
			goto no_mmaps;
		}
	}
	for (;;) {
		struct vm_area_struct *next = prev->vm_next;
 
		if (next) {
			if (next->vm_start < start) {
				prev = next;
				continue;
			}
			if (last > next->vm_start)
				last = next->vm_start;
		}
		if (prev->vm_end > first)
			first = prev->vm_end + PGDIR_SIZE - 1;
		break;
	}
no_mmaps:
	/*
	 * If the PGD bits are not consecutive in the virtual address, the
	 * old method of shifting the VA >> by PGDIR_SHIFT doesn't work.
	 */
	start_index = pgd_index(first);
	end_index = pgd_index(last);
	if (end_index > start_index) {
		clear_page_tables(mm, start_index, end_index - start_index);
		flush_tlb_pgtables(mm, first & PGDIR_MASK, last & PGDIR_MASK);
	}
}           

Back in sys_brk, this completes our walkthrough of the scenario in which sys_brk releases space.

If the new boundary lies above the old one, space is being allocated; this is the second half of sys_brk. Let us read on:

sys_brk

/* Check against rlimit.. */
	rlim = current->rlim[RLIMIT_DATA].rlim_cur;
	if (rlim < RLIM_INFINITY && brk - mm->start_data > rlim)
		goto out;
 
	/* Check against existing mmap mappings. */
	if (find_vma_intersection(mm, oldbrk, newbrk+PAGE_SIZE))
		goto out;
 
	/* Check if we have enough memory.. */
	if (!vm_enough_memory((newbrk-oldbrk) >> PAGE_SHIFT))
		goto out;
 
	/* Ok, looks good - let it rip. */
	if (do_brk(oldbrk, newbrk-oldbrk) != oldbrk)
		goto out;
set_brk:
	mm->brk = brk;
out:
	retval = mm->brk;
	up(&mm->mmap_sem);
	return retval;
}           

First the process's resource limits are checked: if the requested new boundary would push the size of the data segment past the limit imposed on the current process, the request is refused. Then find_vma_intersection checks whether the requested span collides with an existing interval; the code of this inline function is:

sys_brk=>find_vma_intersection

/* Look up the first VMA which intersects the interval start_addr..end_addr-1,
   NULL if none.  Assume start_addr < end_addr. */
static inline struct vm_area_struct * find_vma_intersection(struct mm_struct * mm, unsigned long start_addr, unsigned long end_addr)
{
	struct vm_area_struct * vma = find_vma(mm,start_addr);
 
	if (vma && end_addr <= vma->vm_start)
		vma = NULL;
	return vma;
}           

Here start_addr is the old boundary. If find_vma returns a non-NULL pointer, there is already a mapped interval above it, so a collision is possible; in that case the new boundary end_addr must fall below that interval's start, i.e. the whole span from start_addr to end_addr must land in the hole, or there is a conflict. Once it is clear there is no conflict, vm_enough_memory checks whether the system has enough free memory pages.

sys_brk=>vm_enough_memory

/* Check that a process has enough memory to allocate a
 * new virtual mapping.
 */
int vm_enough_memory(long pages)
{
	/* Stupid algorithm to decide if we have enough memory: while
	 * simple, it hopefully works in most obvious cases.. Easy to
	 * fool it, but this should catch most mistakes.
	 */
	/* 23/11/98 NJC: Somewhat less stupid version of algorithm,
	 * which tries to do "TheRightThing".  Instead of using half of
	 * (buffers+cache), use the minimum values.  Allow an extra 2%
	 * of num_physpages for safety margin.
	 */
 
	long free;
	
        /* Sometimes we want to use more memory than we have. */
	if (sysctl_overcommit_memory)
	    return 1;
 
	free = atomic_read(&buffermem_pages);
	free += atomic_read(&page_cache_size);
	free += nr_free_pages();
	free += nr_swap_pages;
	return free > pages;
}           

With these checks passed, the body of the operation is do_brk. This function's code is in mm/mmap.c:

sys_brk=>do_brk

/*
 *  this is really a simplified "do_mmap".  it only handles
 *  anonymous maps.  eventually we may be able to do some
 *  brk-specific accounting here.
 */
unsigned long do_brk(unsigned long addr, unsigned long len)
{
	struct mm_struct * mm = current->mm;
	struct vm_area_struct * vma;
	unsigned long flags, retval;
 
	len = PAGE_ALIGN(len);
	if (!len)
		return addr;
 
	/*
	 * mlock MCL_FUTURE?
	 */
	if (mm->def_flags & VM_LOCKED) {
		unsigned long locked = mm->locked_vm << PAGE_SHIFT;
		locked += len;
		if (locked > current->rlim[RLIMIT_MEMLOCK].rlim_cur)
			return -EAGAIN;
	}
 
	/*
	 * Clear old maps.  this also does some error checking for us
	 */
	retval = do_munmap(mm, addr, len);
	if (retval != 0)
		return retval;
 
	/* Check against address space limits *after* clearing old maps... */
	if ((mm->total_vm << PAGE_SHIFT) + len
	    > current->rlim[RLIMIT_AS].rlim_cur)
		return -ENOMEM;
 
	if (mm->map_count > MAX_MAP_COUNT)
		return -ENOMEM;
 
	if (!vm_enough_memory(len >> PAGE_SHIFT))
		return -ENOMEM;
 
	flags = vm_flags(PROT_READ|PROT_WRITE|PROT_EXEC,
				MAP_FIXED|MAP_PRIVATE) | mm->def_flags;
 
	flags |= VM_MAYREAD | VM_MAYWRITE | VM_MAYEXEC;
	
 
	/* Can we just expand an old anonymous mapping? */
	if (addr) {
		struct vm_area_struct * vma = find_vma(mm, addr-1);
		if (vma && vma->vm_end == addr && !vma->vm_file && 
		    vma->vm_flags == flags) {
			vma->vm_end = addr + len;
			goto out;
		}
	}	
 
 
	/*
	 * create a vma struct for an anonymous mapping
	 */
	vma = kmem_cache_alloc(vm_area_cachep, SLAB_KERNEL);
	if (!vma)
		return -ENOMEM;
 
	vma->vm_mm = mm;
	vma->vm_start = addr;
	vma->vm_end = addr + len;
	vma->vm_flags = flags;
	vma->vm_page_prot = protection_map[flags & 0x0f];
	vma->vm_ops = NULL;
	vma->vm_pgoff = 0;
	vma->vm_file = NULL;
	vma->vm_private_data = NULL;
 
	insert_vm_struct(mm, vma);
 
out:
	mm->total_vm += len >> PAGE_SHIFT;
	if (flags & VM_LOCKED) {
		mm->locked_vm += len >> PAGE_SHIFT;
		make_pages_present(addr, addr + len);
	}
	return addr;
}           

The parameter addr is the start of the new interval to be mapped, and len is its length. We saw above how find_vma_intersection checks for collisions, but the reader may have noticed that only the high end of the new interval was actually checked; a collision at the low end was not. For instance, is the old boundary exactly the end point of some already-mapped interval? If not, there is a conflict at the low end. Low-end conflicts, however, are tolerated, and the resolution is to let the new mapping win: the existing mapping is first torn down by do_munmap (see line 803), and the new mapping is then established. The reader may well ask why the high and low ends of the new interval are treated with such different degrees of tolerance. It is worth pausing to think about this before reading on.

As noted earlier, the top of the user space holds the process's user stack. Whatever the process, a mapped interval always exists there, so the find_vma inside find_vma_intersection can never really return NULL: at the very least the interval used for the stack exists. Of course, below the stack there may also be mapped intervals created via mmap or ioremap. So a conflict at the high end of the new interval may be a conflict with the stack, whereas a conflict at the low end can only be with the data segment. For the low end, the process can be left to answer for its own mistakes; for the stack, tearing down the existing mapping and building a new one in its place is clearly not an option.

When establishing the new mapping, the kernel first checks whether it can be merged with an existing interval, i.e. whether the existing interval can simply be extended to cover the added range (lines 826-831). If not, a new interval must be created (lines 838-852).

Finally, make_pages_present establishes the mappings to memory pages for the new interval — though, as the code above shows, only when the region is locked in memory (VM_LOCKED); otherwise the pages are left to be faulted in on first access. Its code follows:

sys_brk=>do_brk=>make_pages_present

/*
 * Simplistic page force-in..
 */
int make_pages_present(unsigned long addr, unsigned long end)
{
	int write;
	struct mm_struct *mm = current->mm;
	struct vm_area_struct * vma;
 
	vma = find_vma(mm, addr);
	write = (vma->vm_flags & VM_WRITE) != 0;
	if (addr >= end)
		BUG();
	do {
		if (handle_mm_fault(mm, vma, addr, write) < 0)
			return -1;
		addr += PAGE_SIZE;
	} while (addr < end);
	return 0;
}           

The method used here is interesting: for every page in the new interval, a page fault is simulated by calling handle_mm_fault. The reader may care to ponder: when we return from do_brk, and thence from sys_brk, what do the page-table entries for these pages map to? If the process reads from the newly allocated interval, what does it read? And what happens when it writes?

Original author: 核心技術中文網 (Kernel Technology Chinese Network).

Original article: "Linux memory management: the brk() system call", 核心技術中文網. (Copyright belongs to the original author; contact for removal in case of infringement.)
