Fix off-by-one error in the page allocator
author		Barret Rhoden <brho@cs.berkeley.edu>
		Wed, 4 May 2016 21:39:17 +0000 (17:39 -0400)
committer	Barret Rhoden <brho@cs.berkeley.edu>
		Wed, 4 May 2016 22:00:58 +0000 (18:00 -0400)
Trace through the code with order = 0 to convince yourself.  Basically, any
time we found a non-free page in our scan, we'd start the next pass of the
outer loop *two* pages forward instead of one.  If the page we skipped was
already busy, then we got lucky.
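
For instance, a hand trace with npages = 1 (order = 0), where page j is the
busy page the inner loop hits:

    old:  i = j - 1;  the outer loop's i-- then starts the next pass at j - 2,
          so page j - 1 is never examined
    new:  i = j;      the outer loop's i-- resumes the scan at j - 1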

If the skipped page wasn't busy, we fragmented our memory slightly.  That
could be a problem if you're doing a lot of higher-order allocations
(CONFIG_LARGE_KSTACKS).

There could also be a pathological case where there are many free pages and
you only want a single free page, but the scan can't find one since it
happens to skip over every free page.
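
To see that case concretely, here is a minimal userspace sketch of the scan,
not the kernel code: NPAGES, free_map, and scan() are invented for the demo,
and page_is_free() is reduced to an array lookup.  With the old i = j - 1
update, the scan steps over the only free page and reports failure:

#include <stdbool.h>
#include <stdio.h>

#define NPAGES 4

/* Pages 3, 1, and 0 are busy; page 2 is the only free page. */
static const bool free_map[NPAGES] = { false, false, true, false };

/* Returns the first page of a free run of npages, or -1 on failure. */
static int scan(int npages, bool buggy)
{
	for (int i = NPAGES - 1; i >= npages - 1; i--) {
		int j;
		for (j = i; j >= i - (npages - 1); j--) {
			if (!free_map[j]) {
				/* the buggy update makes the outer loop's
				 * i-- land on j - 2, skipping j - 1 */
				i = buggy ? j - 1 : j;
				break;
			}
		}
		/* inner loop ran off the end: pages j+1..i are all free */
		if (j == i - (npages - 1) - 1)
			return j + 1;
	}
	return -1;
}

int main(void)
{
	printf("buggy: %d\n", scan(1, true));	/* prints -1: misses page 2 */
	printf("fixed: %d\n", scan(1, false));	/* prints  2: finds it */
	return 0;
}

Same free map both times, but the buggy index update jumps from page 3
straight to page 1 and never tests page 2.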

Signed-off-by: Barret Rhoden <brho@cs.berkeley.edu>
kern/src/page_alloc.c

index 535d16e..ac3b1ce 100644
@@ -196,10 +196,14 @@ void *get_cont_pages(size_t order, int flags)
                int j;
                for(j=i; j>=(i-(npages-1)); j--) {
                        if( !page_is_free(j) ) {
-                               i = j - 1;
+                               /* i will be j - 1 next time around the outer loop */
+                               i = j;
                                break;
                        }
                }
+               /* careful: if we change the allocator and allow npages = 0, then this
+                * will trip when we set i = j.  then we'll be handing out in-use
+                * memory. */
                if( j == (i-(npages-1)-1)) {
                        first = j+1;
                        break;