Fix tracking of lock call sites
author	Barret Rhoden <brho@cs.berkeley.edu>
Thu, 19 Jul 2018 18:49:00 +0000 (14:49 -0400)
committer	Barret Rhoden <brho@cs.berkeley.edu>
Thu, 19 Jul 2018 18:49:00 +0000 (14:49 -0400)
If post_lock() is not inlined, the call site recorded for a lock is an
address inside spin_lock() itself, not the caller of spin_lock(), which
is what we actually want to track.

Signed-off-by: Barret Rhoden <brho@cs.berkeley.edu>
kern/src/atomic.c

index 7af4cfb..70d2805 100644 (file)
@@ -39,7 +39,7 @@ static bool can_trace(spinlock_t *lock)
 }
 
 /* spinlock and trylock call this after locking */
-static void post_lock(spinlock_t *lock, uint32_t coreid)
+static __always_inline void post_lock(spinlock_t *lock, uint32_t coreid)
 {
        struct per_cpu_info *pcpui = &per_cpu_info[coreid];
        if ((pcpui->__lock_checking_enabled == 1) && can_trace(lock))