kthreads.txt
Barret Rhoden

What Are They, And Why?
-------------------------------
Eventually a thread of execution in the kernel will want to block.  This means
that the thread is unable to make forward progress and something else ought to
run - the common case is waiting on an IO operation.  This gets trickier when a
function does not know whether a function it calls will block - sometimes it
does, sometimes it doesn't.

The critical feature is not that we want to save the registers, but that we want
to preserve the stack and be able to use it independently of whatever else we do
on that core in the interim.  If we knew we would be done with and return from
whatever_else() before we needed to continue the current thread of execution, we
could simply call the function.  Instead, we want to be able to run the old
context independently of what else is running (which may be a process).

We call this suspended context and the associated information a kthread, managed
by a struct kthread.  It's the bare minimum needed for the kernel to stop and
restart a thread of execution.  It holds the registers, stack pointer, PC,
struct proc* if applicable, stacktop, and little else.  There is no silly_state
/ floating point state, or anything else.  Its address space is determined by
which process context (possibly none) was running.
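
As a rough illustration, the struct is little more than a saved context plus
some bookkeeping.  This is a hedged sketch - the field names and types are
approximations, not the actual definition in kthread.h:

    /* Sketch only: a suspended kernel thread of execution.  The context holds
     * the saved registers, SP, and PC (it could equally be a jmpbuf); stacktop
     * is the top of the kthread's kernel stack. */
    struct kthread {
        struct kernel_ctx       context;    /* saved regs / SP / PC */
        uintptr_t               stacktop;   /* top of this kthread's stack */
        struct proc             *proc;      /* process context, if any */
        struct syscall          *sysc;      /* syscall being serviced, if any */
        TAILQ_ENTRY(kthread)    link;       /* e.g. on a semaphore's waiter queue */
    };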

We also get a few other benefits, such as the ability to pick and choose which
kthreads to run where and when.  Users of kthreads should not assume that
core_id() stays the same across blocking calls.

We can also use this infrastructure in other cases where we might want to start
on a new stack.  One example is when we deal with low memory.  We may have to do
a lot of work, but only need to do a little to allow the original thread (that
might have failed on a page_alloc) to keep running, while we want the memory
freer to keep running too (or later) from where it left off.  In essence, we
want to fork, work, and yield or run on another core.  The kthread is just a
means of suspending a call stack and a context for a little while.

Side Note:
-----------
Right now, blocking a kthread is an explicit action.  Some function realizes it
can't make progress (like waiting on a block device), so it sleeps on something
(for now a semaphore), and gets woken up when it receives its signal.  This
differs from processes, which can be stopped and suspended at any moment (a
page fault is the classic example).  In the future, we could make kthreads
preemptable (a timer interrupt goes off, and we choose to suspend a kthread),
but even then kthreads still have the ability to turn off interrupts for tricky
situations (like suspending the kthread).  The analog in the process code is
disabling notifications, which dramatically complicates its functions (compare
the save and pop functions for _ros_tf and _kernel_tf).  Furthermore, when a
process disables notifications, it still doesn't mean it is running without
interruptions (it only looks that way to the vcore).  When the kernel disables
interrupts, it really is running.

What About Events?
-------------------------------
Why not just be event driven for all IO?  Why do we need these kernel threads?
In short, IO isn't as simple as "I just want a block and when it's done, run a
function."  While that is what the block device driver will do, the subsystems
actually needing the IO are much simpler if they are threaded.  Consider the
potentially numerous blocking IO calls involved in opening a file.  Having a
continuation for each one of those points in the call graph seems like a real
pain to code.  Perhaps I'm not seeing it, but if you're looking for a simple,
light mechanism for keeping track of what work you need to do, just use a stack.
Programming is much simpler, and it costs a page plus a small data structure.

Note that this doesn't mean that all IO needs to use kthreads, just that some
will really benefit from it.  I plan to make the "last part" of some IO calls
more event driven.  Basically, it's all just a toolbox, and you should use what
you need.

Freeing Stacks and Structs
-------------------------------
When we restart a kthread, we have to be careful about freeing the old stack and
the struct kthread.  We need to delay the freeing of both of these until after
we longjmp to the new kthread.  We can't free the kthread before popping it,
and we are on the stack we need to free (until we pop to the new stack).

To deal with this, we have a "spare" kthread per core, which gets assigned as
the spare when we restart a previous kthread.  When making/suspending a kthread,
we'll use this spare.  When restarting one, we'll free the old spare if it
exists and put ours there.  One drawback is that we potentially waste a chunk of
memory (1 page + a bit per core, worst case), but it is a nice, simple solution.
Also, it will cut down on contention for free pages and the kthread_kcache,
though this won't help with serious contention issues (which we'll deal with
eventually).
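
A hedged sketch of that bookkeeping, using names mentioned in this file
(pcpui->spare, kthread_kcache) plus illustrative helpers (get_stack_top(),
set_stack_top(), put_kstack()) standing in for whatever the real restart path
does:

    /* Sketch: the spare-management part of restarting 'kthread'.  We are
     * still on the soon-to-be-abandoned stack, so neither it nor the old
     * struct can be freed here; that's what the spare slot is for. */
    static void __restart_kthread_sketch(struct kthread *kthread)
    {
        struct per_cpu_info *pcpui = &per_cpu_info[core_id()];
        uintptr_t old_stacktop;

        if (pcpui->spare) {
            /* the old spare's stack page and struct are finally freeable */
            put_kstack(pcpui->spare->stacktop);
            kmem_cache_free(kthread_kcache, pcpui->spare);
        }
        old_stacktop = get_stack_top();     /* stack we're about to abandon */
        set_stack_top(kthread->stacktop);   /* kthread's stack is now default */
        /* Reuse the struct as the spare; it now holds the abandoned stack.
         * Both get freed (or reused) by a later suspend or restart. */
        kthread->stacktop = old_stacktop;
        pcpui->spare = kthread;
        /* finally, longjmp/pop to the kthread's saved context - never returns */
    }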

What To Run Next?
-------------------------------
When a kthread suspends, what do we run next?  And how do we know what to run
next?  For now, we call smp_idle() - it is what you do when you have nothing
else to do, or don't know what to do.  We could consider having sleep_on() take
a function pointer, but when we start hopping stacks, passing that info gets
tricky.  And we need to make the decision about which function to call quickly,
in the code (I don't trust the compiler much).  We could store the function
pointer at the bottom of the future stack and extract it from there.  Or we
could put it in per_cpu_info.  Or we could send ourselves a routine kernel
message.

Regardless of where we put it, we ought to call smp_idle() (or something
similar) before calling it, since we need to make sure that whatever we call
right after jumping stacks never returns.  It's more flexible to allow a
function that returns for the func *, so we'll use smp_idle() as a level of
indirection.


Semaphore Stuff
-------------------------------
We use the semaphore (defined in kthread.h) for kthreads to sleep on and wait
for a signal.  It is possible that the signal wins the race and beats the call
to sleep_on().  The semaphore handles this by "returning false."  You'll notice
that we don't actually call __down_sem(), but instead "build it in" to
sleep_on().  I didn't want to deal with returning a bool (even if it was an
inline), because I want to minimize the amount of stuff we do with potential
stack variables (I don't trust the register variable).  As soon as we unlock,
the kthread could be restarted (in theory), and it could start to clobber the
stack in later function calls.

So it is possible that we lose the semaphore race and shouldn't sleep.  We
unwind the sleep prep work.  An alternative was to only do the prep work if we
won the race, but that would mean we have to do a lot of work in that delicate
period of "I'm on the queue but it is unlocked" - work that requires touching
the stack.  Or we could just hold the lock for a longer period of time, which
I don't care to do.  What we do now is try to down the semaphore early (the
early bailout), and if that fails then try to sleep (unlocked).  If we then
lose the race (unlikely), we can manually unwind.

Note that a lot of this is probably needless worry - we have interrupts disabled
for most of sleep_on(), though arguably we can be a little more careful with
pcpui->spare and move the disable_irq() down to right before setjmp().
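
A condensed sketch of that flow.  This is not the real sem_down()/sleep_on();
the semaphore fields (nr_signals, waiters) are approximations and the elided
steps are only summarized, but the early bailout, the unlocked sleep attempt,
and the manual unwind follow the description above:

    void sleep_on(struct semaphore *sem)
    {
        int8_t irq_state = 0;

        disable_irqsave(&irq_state);
        /* Early bailout: if the signal already arrived, just take it. */
        spin_lock(&sem->lock);
        if (sem->nr_signals > 0) {
            sem->nr_signals--;
            spin_unlock(&sem->lock);
            enable_irqsave(&irq_state);
            return;
        }
        spin_unlock(&sem->lock);
        /* Prep work (elided): grab a struct kthread (the per-core spare, if
         * any), pick a new default stacktop, and save our context.  When
         * someone restarts this kthread, execution resumes right after the
         * saved context and falls out of this function.  All of this touches
         * the current stack, so it happens before we're visible on the queue. */
        /* Try to sleep, unlocked.  If a signal snuck in, we lost the race -
         * manually unwind the prep work instead of sleeping. */
        spin_lock(&sem->lock);
        if (sem->nr_signals > 0) {
            sem->nr_signals--;
            spin_unlock(&sem->lock);
            /* unwind (elided): put the kthread/stack back as the spare */
            enable_irqsave(&irq_state);
            return;
        }
        /* TAILQ_INSERT_TAIL(&sem->waiters, kthread, link); (elided) */
        spin_unlock(&sem->lock);
        /* Our old stack may now be clobbered by whoever wakes the kthread.
         * Jump stacks and find something else to do. */
        smp_idle();     /* never returns */
    }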

What's the Deal with Stacks/Stacktops?
-------------------------------
When the kernel traps from userspace, it needs to know what to set the kernel
stack pointer to.  In x86, it looks in the TSS.  In riscv, we have a data
structure tracking that info (core_stacktops).  One thing I considered was
migrating the kernel off its boot stacks (on x86, just core 0 has one; on
riscv, all the cores have one).  Instead, we just make sure the tables/TSS are
up to date right away (before interrupts or traps can come in for x86, and
right away for riscv).  These boot stacks aren't particularly special - just
note that they are in the program data/bss sections and were never originally
added to a free list.  But they can be freed later on.  This might be an issue
in some places, but those places ought to be fixed.

There are also some implications about PGSIZE stacks (specifically in the
asserts, how we alloc only one page, etc).  The boot stacks are bigger than a
page (for now), but in general we don't want to have giant stacks (and shouldn't
need them - note Linux runs with 4KB stacks).  In the future (long range, when
we're 64 bit), I'd like to put all kernel stacks high in the address space, with
guard pages after them.  This would require a certain "quiet migration" to the
new locations for the boot stacks (though not to a new page - just a different
virtual address for the stacks, not their page-alloced KVA).  A bunch of minor
things would need to change for that, so don't hold your breath.

So what about stacktop?  It's just the top of the stack, but sometimes it is the
stack we were on (when suspending the kthread); other times kthread->stacktop
is just a scrap page's top.

What's important when suspending is that the current stack is not
used in future traps - that it doesn't get clobbered.  That's why we need to
find a new stack and set it as the current stacktop.  We also need to 'save'
the stack page of the old kthread - we don't want it to be freed, since we
need it later.  When starting a kthread, I don't particularly care about which
stack is now the default stack.  sleep_on() assumes it was the kthread's,
so unless we always have a default one that is only used very briefly and
never blocked on (which would require a stack jump), we ought to just have a
kthread run with its stack as the default stacktop.
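
As a sketch, the suspend side of that stack juggling looks roughly like the
following.  The helpers (get_kstack(), get_stack_top(), set_stack_top()) are
illustrative stand-ins for however the real code allocates a page and updates
the TSS / core_stacktops entry:

    /* Sketch: while suspending.  Grab a struct kthread and a page for the
     * next default stack, preferring the per-core spare (which carries a
     * page with it); then save our current stack and install the new one so
     * future traps can't clobber the stack we're suspending. */
    struct kthread *kthread;
    uintptr_t new_stacktop;

    if (pcpui->spare) {
        kthread = pcpui->spare;
        new_stacktop = kthread->stacktop;   /* reuse the spare's page */
        pcpui->spare = NULL;
    } else {
        kthread = kmem_cache_alloc(kthread_kcache, 0);
        new_stacktop = get_kstack();        /* illustrative page alloc */
    }
    kthread->stacktop = get_stack_top();    /* keep our stack page alive */
    set_stack_top(new_stacktop);            /* future traps use the new page */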

When restarting a kthread, we eventually will use its stack, instead of the
current one, but we can't free the current stack until after we actually
longjmp() to it.  This is the same problem as with the struct kthread dealloc.
So we can have the kthread (which we want to free later) hold on to the page we
wanted to dealloc.  Likewise, when we would need a fresh kthread, we also need a
page to use as the default stacktop.  So if we had a cached kthread, we then use
the page that kthread was pointing to.  NOTE: the spare kthread struct is not
holding the stack it was originally saved with.  Instead, it is saving the page
of the stack that was running when that kthread was reactivated.  It's spare
storage for both the struct and the page, but they aren't linked in any
meaningful way (such as the page being that kthread's stack).  That linkage is
only true when a kthread is being used (like in a semaphore queue).

Current and Process Contexts
-------------------------------
When a kthread is suspended, should the core stay in process context (if it was
before)?  Short answer: yes.

For vcore local calls (process context, trapped on the calling core), we're
giving the core back, so we can avoid TLB shootdowns.  Though we do have to
incref (which writes a cache line in the proc struct), since we are storing a
reference to the proc (and will try to load its cr3 later).  While this sucks,
keep in mind this is for a blocking IO call (where we couldn't find the page in
any cache, etc).  It might be a scalability bottleneck, but it also might not
matter in any real case.
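
In code, the suspend path's process handling amounts to something like the
sketch below (hedged - proc_incref() is the usual refcounting call, but where
exactly this sits in the suspend path is simplified here):

    /* Sketch: if we were in process context, the kthread keeps a counted
     * reference so the proc (and its page tables) stay around until the
     * kthread finishes and we can switch back to its cr3. */
    kthread->proc = current;        /* possibly NULL */
    if (kthread->proc)
        proc_incref(kthread->proc, 1);  /* writes to the proc's cache line */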

For async calls, it is less clear.  We might want to keep processing that
process's syscalls, so it'd be easier to keep its cr3 loaded.  Though it's not
as clear how we get from smp_idle() to a workable function and if it is useful
to be in process context until we start processing those functions again.  Keep
in mind that normally, smp_idle() shouldn't be in any process's context.  I'll
probably write something later that abandons any context before halting to make
sure processes die appropriately.  But there are still some unresolved issues
that depend on what exactly we want to do.

While it is tempting to say that we stay in process context if the call was
local but not if it was async, there is an added complication.  The function
calling sleep_on() doesn't care about whether it is on a process-allocated core
or not.  This is solvable by using per_cpu_info(), and will probably work its
way into a future patch, regardless of whether or not we stay in process
context for async calls.

As a final case, what will we do for processes that were interrupted by
something that wants to block, but wasn't servicing a syscall?  We probably
shouldn't have these (I don't have a good example of when we'd want it, and
there are a bunch of reasons why we wouldn't), but if we do, then it might be
okay anyway - the kthread is just holding that proc alive for a bit.  Page
faults are a bit different - they are something the process wants, at least.  I
was thinking more about unrelated async events.  Still, it shouldn't be a big
deal.

Kmsgs and Kthreads
-------------------------------
Is there a way to mix kernel messages and kthreads?  What's the difference, and
can one do the other?  A kthread is a suspended call-stack and context (thread),
stopped in the middle of its work.  Kernel messages are about starting fresh -
"hey core X, run this function."  A kmsg can very easily be a tool used to
restart a kthread (either locally or on another core).  We do this in the test
code, if you're curious how it could work.

Note we use the semaphore to deal with races.  In test_kthreads(), we're
actually using the kmsg to up the semaphore.  You could just as easily up the
semaphore on one core (possibly in response to a kmsg, though more likely due to
an interrupt), and then send the kthread to another core to restart via a kmsg.

There's no reason you can't separate the __up_sem() and the running of the
kthread - the semaphore just protects you from missing the signal.  Perhaps
you'll want to rerun the kthread on the physical core it was suspended on
(cache locality, and it might be a legit option to allow processes to say it's
okay to take their vcore).  Note this may require more bookkeeping in the struct
kthread.
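
A hedged example of that split: an interrupt handler ups the semaphore, which
hands it a kthread to wake, and the wakeup is shipped to the kthread's old
core as a routine kmsg.  send_kernel_message() and __launch_kthread() are the
names used elsewhere in this file; the __up_sem() return convention, the
last_core field (the extra bookkeeping mentioned above), and bdev->sem are
assumptions for the sake of the example:

    /* Sketch: in an interrupt handler (e.g. the block device finished). */
    struct kthread *kthread = __up_sem(&bdev->sem); /* a waiter, if any */

    if (kthread)
        send_kernel_message(kthread->last_core, __launch_kthread,
                            (long)kthread, 0, 0, KMSG_ROUTINE);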

There is another complication: the way we've been talking about kmsgs (starting
fresh), we are talking about *routine* messages.  One requirement for routine
messages that do not return is that they handle process state.  The current
kmsgs, such as __death and __preempt, are built so they can handle acting on
whichever process is currently running.  Likewise, __launch_kthread() needs to
handle the cases that arise when it runs on a core that was about to run a
process (as can often happen with proc_restartcore(), which calls
process_routine_kmsg()).  Basically, if it was an _S, it just yields the
process, similar to what happens in Linux (call schedule() on the way out, from
what I recall).  If it was an _M, things are a bit more complicated, since this
should only happen if the kthread is for that process (and probably a bunch of
other things - like they said it was okay to interrupt their vcore to finish
the syscall).  Note - this might not be accurate anymore (see discussions on
current_ctx).

To a certain extent, routine kmsgs don't seem like a nice fit, when we really
want to be calling schedule().  Though if you think of it as the enactment of a
previous scheduling decision (like other kmsgs (__death())), then it makes more
sense.  The scheduling decision (as of now) was made in the interrupt handler
when it decided to send the kernel msg.  In the future, we could split this into
having the handler make the kthread active, and have the scheduler called to
decide where and when to run the kthread.

Current_ctx, Returning Twice, and Blocking
-------------------------------
One of the reasons for decoupling kthreads from a vcore or the notion of a
kernel thread per user process/task is so that when the kernel blocks (on a
syscall or wherever), it can return to the process.  This is the essence of the
asynchronous kernel/syscall interface (though it's not limited to syscalls
(pagefaults!!)).  Here is what we want it to be able to handle:
- When a process traps (syscall, trap, or actual interrupt), the process regains
  control when the kernel is done or when it blocks.
- Any kernel path can block at any time.
- Kernel control paths need to not "return twice", but we don't want to have to
  go through acrobatics in the code to prevent this.

There are a couple of approaches I considered, and they involve the nature of
"current_ctx" and a brutal bug.  Current_ctx (formerly current_tf) is a
pointer to the trapframe of the process that was interrupted/trapped, and is
what user context ought to be running on this core if we return.  Current_ctx is
'made' when the kernel saves the context at the top of the interrupt stack (aka
'stacktop').  Then the kernel's call path proceeds down the same stack.  This
call path may get blocked in a kthread.  When we block, we want to restart the
current_ctx.  There is a coupling between the kthread's stack and the storage of
current_ctx (contents, not the pointer (which is in pcpui)).

This coupling presents a problem when we are in userspace and get interrupted,
and that interrupt wants to restart a kthread.  In this case, current_ctx points
to the interrupt stack, but then we want to switch to the kthread's stack.  This
is okay.  When that kthread wants to block again, it needs to switch back to
another stack.  Up until this commit, it was jumping to the top of the old stack
it was on, clobbering current_ctx (took about 8-10 hours to figure this out).
While we could just make sure to save space for current_ctx, it doesn't solve
the problem: namely that the current_ctx concept is not bound to a specific
kernel stack (kthread or otherwise).  We could have cases where more than one
kthread starts up on a core and we end up freeing the page that holds
current_ctx (since it is a stack we no longer need).  We don't want to bother
keeping stacks around just to hold the current_ctx.  Part of the nature of this
weird coupling is that a given kthread might or might not have the current_ctx
at the top of its stack.  What a pain in the ass...

The right answer is to decouple current_ctx from kthread stacks.  There are two
ways to do this.  In both ways, current_ctx retains its role of the context the
kernel restarts (or saves) when it goes back to a process, and is independent of
blocking kthreads.  SPOILER: solution 1 is not the one I picked.

1) All traps/interrupts come in on one stack per core.  That stack never changes
(regardless of blocking), and current_ctx is stored at the top.  Kthreads sort
of 'dispatch' / turn into threads from this event-like handling code.  This
actually sounds really cool!

2) The contents of current_ctx get stored in per-cpu-info (pcpui), thereby
clearly decoupling it from any execution context.  Any thread of execution can
block without any special treatment (though interrupt handlers shouldn't do
this).  We handle the "returning twice" problem at the point of return.

One nice thing about 1) is that it might make stack management easier (we
wouldn't need to keep a spare page, since it's the default core stack).  2) is
also tricky since we need to change some entry point code to write the TF to
pcpui (or at least copy-out for now).
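
For option 2, the entry-point change is conceptually just a copy into pcpui.
A hedged sketch (the pcpui field names are illustrative; the real kernel does
something along these lines when it sets up current_ctx):

    /* Sketch: at trap/interrupt entry from userspace, stash the user's
     * context in per-cpu storage so no kernel stack has to stay alive just
     * to hold it. */
    pcpui->actual_ctx = *ctx;               /* copy the saved user context */
    pcpui->cur_ctx = &pcpui->actual_ctx;    /* current_ctx points into pcpui */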

The main problem with 1) is that you need to know and have code to handle when
you "become" a kthread and are allowed to block.  It also prevents us from
making changes such that all executing contexts are kthreads (which sort of is
what is going on, even if they don't have a struct yet).

While considering 1), here's something I wanted to say: "every thread of
execution, including a KMSG, needs to always return (and thus not block), or
never return (and be allowed to block)."  To "become" a kthread, we'd need to
have code that jumps stacks, and once it jumps it can never return.  It would
have to go back to some place such as smp_idle().

Jumping stacks isn't a problem, and whatever we jump to would just have to
have smp_idle() at the end.  The problem is that this is a pain in the ass to
work with in reality.  But wait!  Don't we do that with batched syscalls right
now?  Yes (though we should be using kmsgs instead of the hacked-together
workqueue spread across smp_idle() and syscall.c), and it is a pain in the ass.
It is doable with syscalls because we have that clearly defined point
(submitting vs processing).  But what about other handlers, such as the page
fault handler?  It could block, and lots of other handlers could block too.  All
of those would need to have a jump point (in trap.c).  We aren't even handling
events anymore; we are immediately jumping to other stacks, using our "event
handler" to hold current_ctx and handle how we return to current_ctx.  Don't
forget about other code (like the boot code) that wants to block.  Simply put,
option 1 creates a layer that is a pain to work with, cuts down on the
flexibility of the kernel to block when it wants, and doesn't handle the issue
at its source.

The issue about having a defined point in the code that you can't return back
across (which is where 1 would jump stacks) is about "returning twice."  Imagine
a syscall that doesn't block.  It traps into the kernel, does its work, then
returns.  Now imagine a syscall that blocks.  Most of these calls are going to
block on occasion, but not always (imagine the read was filled from the page
cache).  These calls really need to handle both situations.  So in one instance,
the call blocks.  Since we're async, we return to userspace early (pop the
current_ctx).  Now, when that kthread unblocks, its code is going to want to
finish and unroll its stack, then pop back to userspace.  This is the 'returning
twice' problem.  Note that a *kthread* never returns twice.  This is what makes
the idea of magic jumping points we can't return back across (and tying that to
how we block in the kernel) painful.

The way I initially dealt with this was by always calling smp_idle(), and having
smp_idle() decide what to do.  I also used it as a place to dispatch batched
syscalls, which is what made smp_idle() more attractive.  However, after a bit,
I realized the real nature of returning twice: current_ctx.  If we forget about
the batching for a second, all we really need to do is not return twice.  The
best place to do that is at the place where we consider returning to userspace:
proc_restartcore().  Instead of calling smp_idle() all the time (which was in
essence a "you can now block" point), and checking for current_ctx to return,
just check in restartcore to see if there is a tf to restart.  If there isn't,
then we smp_idle().  And don't forget to handle the cases where we want to start
an scp_ctx (which we ought to point current_ctx to in proc_run()).
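
The shape of that check, as a hedged sketch (the real proc_restartcore() has
more to it - disabling IRQs, running routine kmsgs, etc. - and the pop helper's
name is approximate, but the "return twice" handling boils down to this test):

    /* Sketch: the only place we return to userspace.  If a blocked kthread
     * already sent current_ctx back to the process, there's nothing to
     * restart here, so we idle instead of returning a second time. */
    void proc_restartcore(void)
    {
        struct per_cpu_info *pcpui = &per_cpu_info[core_id()];

        if (!pcpui->cur_ctx)
            smp_idle();                 /* never returns */
        proc_pop_ctx(pcpui->cur_ctx);   /* never returns either */
    }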

As a side note, we ought to use kmsgs for batched syscalls - it will help with
preemption latencies.  At least for all but the first syscall (which can be
called directly).  Instead of sending a retval via current_ctx about how many
started, just put that info in the syscall struct's flags (which might help the
remote syscall case - no need for a response message, though there are still a
few differences (no failure model other than death)).

Note that page faults will still be tricky, but at least now we don't have to
worry about points of no return.  We just check if there is a current_ctx to
restart.  The tricky part is communicating that the PF was sorted when there
wasn't an explicit syscall made.


Aborting Syscalls (2013-11-22)
-------------------------------
On occasion, userspace would like to abort a syscall, specifically ones that
are listening on sockets/conversations where no one will ever answer.

We have limited support for aborting syscalls.  Kthreads that are in
rendez_sleep() (common for anything in the 9ns chunk of the kernel, which
includes any conversation listens) can be aborted.  They'll return with an
error string to userspace.

The easier part is the rules for kernel code to be abortable:
- Restore your invariants with waserror() before calling rendez_sleep().
- That's really it.
So if you're holding a qlock, put your qunlock() code and any other unwinding
(such as a kfree()) in a waserror() catch.  As it happens, it looks like Plan 9
already did that (at least for the rendez in listen).  And, as always, you
can't hold a spinlock when blocking, regardless of aborting calls or anything.
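
For example, a qlock-holding sleeper would follow the pattern below.  This is
the standard waserror()/poperror() idiom from the 9ns code; the particular
qlock, rendez, and condition function are illustrative:

    qlock(&conv->qlock);        /* some qlock we must not leak */
    if (waserror()) {
        /* Unwind: an abort longjmps through here via error() */
        qunlock(&conv->qlock);
        nexterror();
    }
    rendez_sleep(&conv->listenr, some_cond, conv);  /* may abort with error() */
    qunlock(&conv->qlock);
    poperror();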

I don't want arbitrary sleeps to be abortable.  For instance, if a kthread is
waiting on an arbitrary semaphore/qlock, we won't allow an abort.  The
reasoning is that the kthread will eventually acquire the qlock - we're not
waiting on external sources to wake up.  That's not 100% true - a kthread
could be blocked on a qlock, and the qlock holder could be abortable.  In the
future, we could build some sort of "abort inheritance", usable by root or
something (danger of aborting another process's kthread).  Alternatively, we
could make qlocks abortable too, though that would require all qlocking code
to be unwindable.

The harder part of syscall aborting is safely waking a kthread.  There are
several layers to go through from uthread or syscall down to the condition
variable a kthread is sleeping on.  Given a uthread, find its syscall.  Given
a syscall, find its kthread.  Given the kthread, find the CV.  And during all
of these, syscalls complete concurrently, kthreads get repurposed for other
syscalls, and CVs could be freed (though that doesn't happen).  Syscalls are
often on stacks, so when they complete, the memory is both gibberish and
potentially in use.

Ultimately, I decided on a system of "safe abort attempts", where it is
harmless to be wrong with an attempt.  Instead of dealing with the races
associated with memory freeing and syscalls completing, the aborts will only
work if it is safe to work (using a lookup via pointer, and only dereferencing
if the lookup succeeds).

As it stands now, all abortable kthreads/sleepers/syscalls are on a per-proc
list, and we can look them up by struct syscall*.  They are only on the list
when they are abortable (the CV can be poked), and the invariant is that when
they are on the list, they are in a state that can be safely aborted: the
kthread is working on the syscall, it hasn't unwound, it is still in
rendez_sleep(), the CV is safe, etc.  The details of this protection are
sorted out with __reg_abortable_cv() and dereg_abortable_cv() (since it's
really the condition variable that we're trying to find).  So from the kernel
side, nothing bad can happen if you ask to abort an arbitrary struct syscall*.
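
Roughly, the sleep side brackets the vulnerable window with those registration
calls.  A hedged sketch (struct cv_lookup_elm and the registration helpers are
the names used in this file; the surrounding rendez_sleep() internals and the
locking are simplified away):

    /* Sketch: inside rendez_sleep(), around the part that actually blocks. */
    struct cv_lookup_elm cle;

    __reg_abortable_cv(&cle, &rv->cv);  /* now on the proc's abortable list */
    /* ... sleep on the CV; an aborter can find us via the syscall, flag
     * SC_ABORT, and cv_broadcast; the abort makes us error() out ... */
    dereg_abortable_cv(&cle);   /* off the list; abort attempts now fail safely */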

The actual abort takes the "write/signal, then wake" approach.  The aborter
tracks down the kthread via the lookup, the success of which guarantees the
sleeper is in rendez_sleep() (or similar sleep paths), sets SC_ABORT (with the
needed barriers), and attempts to wake the kthread (cv_broadcast, since we need
to be sure we wake the right kthread).
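
In sketch form, with the lookup and locking details elided (the field and
helper names here are approximate; the real code lives around the abort
syscall path and the cv_lookup_elm list):

    /* Sketch: abort the kthread sleeping on 'sysc', if it is currently in an
     * abortable sleep.  If the lookup fails, nothing is touched and userspace
     * should simply retry later. */
    cle = __lookup_abortable(p, sysc);  /* keeps the cle/CV alive if found */
    if (!cle)
        return -1;                      /* not abortable right now */
    atomic_or(&cle->sysc->flags, SC_ABORT); /* write/signal ... */
    wmb();                                  /* ... with a barrier ... */
    cv_broadcast(cle->cv);      /* ... then wake everyone on the CV */
    __dereg_lookup(cle);        /* let the sleeper continue and unwind */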

On the user side, we set an alarm to run an event handler that will cancel our
syscall.  The alarm stuff is fairly standard (runs in vcore context).
Userspace no longer has the concern of syscalls completing while they abort,
since the kernel will only abort syscalls that are abortable.  However, it may
have issues (in theory) with aborting future syscalls.  If the alarm goes off
when the uthread is in another, later syscall (which may happen to use the
same struct syscall*), then we could accidentally abort the wrong call.
There's an aspect of time associated with the first abort alarm handler.  This
is relatively easy to handle: just turn off the alarm before reusing that
syscall struct for a syscall.  This relies on a property of the alarms: that
when deregistering completes, the alarm handler will not be running
concurrently.  Incidentally, there is *another* minor trick here: the uthread,
when adjusting the alarm, will issue a syscall, possibly reusing its old
sysc*, but that will be *after* deregistering its original alarm - the point
at which we could have accidentally cancelled an arbitrary syscall.  Also note
that the call to change the kernel alarm wouldn't actually block and become
abortable, but regardless, we're safe.
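
A hedged outline of the user side.  The parlib alarm interface is roughly as
shown (struct alarm_waiter and friends), but the handler body, the data
plumbing, and the timeout value are simplified assumptions:

    /* Sketch: give a syscall a deadline, after which we try to abort it. */
    static void abort_our_sysc(struct alarm_waiter *waiter)
    {
        struct syscall *sysc = waiter->data;

        sys_abort_sysc(sysc);   /* harmless if the sysc isn't abortable */
    }

    struct alarm_waiter waiter;

    init_awaiter(&waiter, abort_our_sysc);
    waiter.data = sysc;
    set_awaiter_rel(&waiter, timeout_usec);
    set_alarm(&waiter);
    /* ... issue / wait on the syscall ... */
    unset_alarm(&waiter);   /* after this returns, the handler isn't running */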

There are a couple downsides to the "safe abort attempts" approach.  We can
only abort syscalls when they are at a certain point - if they aren't
currently sleeping, the call will fail.  Technically, the abort could take
effect later on in the life of a syscall (the aborter flags the kthread to
abort concurrent with the kthread waking up naturally, and then the call
aborts on the next rendez_sleep that needs to block).  Related to this
limitation, userspace must keep attempting to cancel a syscall until it
succeeds.  It may also be told an abort succeeded, even if the call actually
completes (the aborter flags the kthread, the rendez wakes naturally, and the
kthread never blocks again).  Ultimately, we can't "fire and forget" our abort
attempt.  It's not a huge problem though, and it is a smaller price to pay
than the problems with my older approaches, which avoided this particular
issue but had worse ones.

For instance, the original idea I had was for userspace to flag the syscall
(flags |= SC_ABORT).  It could do this at any time.  Whenever the kthread was
going to block in an abortable location (e.g. rendez_sleep()), it would see
the flag and abort.  It might already be asleep, so we would also provide a
syscall that would 'kick' the kthread responsible for some sysc*, to wake it
up to see the flag and abort.  The first problem was writing to the sysc
flags.  Unless we know the memory is actually the syscall we want, this could
result in randomly writing to memory (such as a uthread's stack).  I ran into
similar issues in the kernel: you can't touch a kthread struct unless you know
it is the kthread you want.

Once I started dealing with the syscall -> kthread mapping, it became clear
I'd need a per-proc lookup service in the kernel, which acts as a way to lock
a reference to the kthread.  I could solve the 'kthread memory safety' problem
by looking up by reference, similar to how pid2proc works.  Analogously, by
changing the interface for sys_abort_syscall() to be a "lookup" approach, I
solve the struct syscall * memory problem.

As a smaller note, I considered registering every kthread with the process
right away (specifically, when we link the syscall to the kthread->sysc) for
the sysc->kthread lookup service.  But this would get expensive, since every
syscall pays the lookup tax (and we'd need to worry about scaling).  We want
syscalls to be fast, but the infrequent aborts can be expensive.  The obvious
change was to only save the abortable kthreads.  The tradeoff is that we can't
flag syscalls for aborting unless they are in an abortable state.  This
requires multiple pokes by userspace.  In theory, they would have to deal with
that scenario anyway (in case they attempt to abort before we even register
in the first place).

As another side note, if userspace ever has a struct syscall allocator, for
use in async (non-uthread stack-based) syscalls, we'll need to not reuse a
syscall struct until after the cancel alarm has been disarmed.  Right now we
do this by not having the uthread issue another syscall til after the disarm,
since uthread stack-based syscalls are inherently bound to the uthread.  A
simple solution would be to have a per-uthread syscall struct, which that
uthread uses preferentially, and the sysc is only freed when the uthread is
freed.  Not only would this scale better than accessing the sysc allocator for
every syscall, but also there is no worry of reuse til the uthread disarms and
exits.

It is a userspace bug for a uthread to set the alarm and not unset it before
either making a syscall or exiting.  The root issue of that potential bug is
that someone (alarm handler) holds a pointer to a uthread, with the intent of
cancelling its syscall, and we need to somehow take back that pointer (cancel
the alarm) before reusing the syscall or freeing the uthread.  I considered
not making the alarm guarantee that when the cancel returns, the handler isn't
running concurrently.  We could handle the races in the alarm handler and in
the cancel code, but it's an added hassle that isn't clearly needed.  This
does mean we have to run the alarm handlers serially, while holding the alarm
lock.  I'm fine with this, for now.  Perhaps if users want more concurrency,
their handlers can spawn or wake up a uthread.

It is also worth noting that many rendez_sleep() calls actually return right
away.  This is common if some data is already in the queue (or whatever the
condition is that we want to conditionally sleep on).  Since registration is a
little bit heavier than just locking the CV, I use the classic "check, signal,
check again" style, where we check cond, then register, and then check cond
for real.  The initial check is the optimization, while the "signal, then
check" is the true synchronization.  I use this style all over the place
(check out the event delivery with concurrent vcore yields code).
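
The pattern, as a short hedged sketch (cond() and its argument are
illustrative; the registration calls are the ones discussed above, minus
their locking details):

    if (cond(arg))
        return;     /* common case: no need to register at all */
    __reg_abortable_cv(&cle, &rv->cv);  /* "signal": become findable/wakeable */
    if (!cond(arg)) {
        /* the check after registering is the real synchronization */
        /* ... sleep on the CV until woken or aborted ... */
    }
    dereg_abortable_cv(&cle);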

Because of this optimization, we have a slightly odd interface: __reg is
called with the CV lock held, and dereg_ is not.  There are some lock ordering
issues.  Without the optimization, we could simply make the order {list lock,
CV lock}, so that the aborter can use the list lock to keep a kthread/cv alive
(one of the struct cv_lookup_elm in the code, to be precise) while it
cv_broadcasts.  However, the "check first" optimization would need to lock and
unlock the CV a couple times, which seems excessive.  So we switch the lock
order to {CV, list lock}, and the aborter doesn't hold the list lock while
signalling the CV.  Instead, it keeps the cle alive with a flag that dereg_
spins on.  This spinwait is why dereg can't hold the CV lock: it would create
a circular dependency.