async_events.txt
Barret Rhoden

1. Overview
2. Async Syscalls and I/O
3. Event Delivery / Notification
4. Single-core Process (SCP) Events
5. Misc Things That Aren't Sorted Completely:

1. Overview
====================
1.1 Event Handling / Notifications / Async IO Issues:
------------------------------------------------------------------
Basically, syscalls use the ROS event delivery mechanisms, redefined and
described below.  Syscalls use event delivery just like any other subsystem
that wants to deliver messages to a process would.  The only other example we
have right now is "kernel notifications": the one-sided, kernel-initiated
messages that the kernel sends to a process.

Overall, there are several analogies from how vcores work to how the OS
handles interrupts.  This is a result of trying to make vcores run like
virtual multiprocessors, in control of their resources and aware of the lower
levels of the system.  This analogy has guided much of how the vcore layer
works.  Whenever we have issues with the 2LS, keep in mind that the amount of
control it wants means using solutions that the OS must use too.

Note that there is some pointer chasing going on, though we try to keep it to
a minimum.  Any time the kernel chases a pointer, it needs to make sure it is
in the R/W section of userspace, though it doesn't need to check if the page
is present.  There's more info in the Page Fault sections of the
documentation.  (Briefly, if the kernel PFs on a user address, it will either
block and handle the PF, or if the address was unmapped, it will kill the
process).

1.2 Some Definitions:
---------------------------------------
ev_q, event_queue, event_q: all terms used interchangeably with each other.
They are the endpoint for communicating messages to a process, encapsulating
the method of delivery (such as IPI or not) with where to save the message.

Vcore context: the execution context of the virtual core on the "trampoline"
stack.  All executions start from the top of this stack, and no stack state is
saved between vcore_entry() calls.  All executions on here are non-blocking,
notifications (IPIs) are disabled, and there is a specific TLS loaded.  Vcore
context is used for running the second level scheduler (2LS), swapping between
threads, and handling notifications.  It is analogous to "interrupt context"
in the OS.  Any functions called from here should be brief.  Any memory
touched must be pinned.  In Lithe terms, vcore context might be called the
Hart / hard thread.  People often wonder if they can run out of vcore context
directly.  Technically, you can, but you lose the ability to take any fault
(page fault) or to get IPIs for notification.  In essence, you lose control,
analogous to running an application in the kernel with preemption/interrupts
disabled.  See the process documentation for more info.

2LS: the second level scheduler/framework.  This code executes in vcore
context, and is Lithe / plugs in to Lithe (eventually).  Often used
interchangeably with "vcore context", usually when I want to emphasize the
scheduling nature of the code.

VCPD: "virtual core preemption data".  In procdata, there is an array of
struct preempt_data, one per vcore.  This is the default location to look for
all things related to the management of vcores, such as its event_mbox (queue
of incoming messages/notifications/events).  Both the kernel and the vcore
code know to look here for a variety of things.

Vcore-business: This is a term I use for a class of messages where the receiver
is the actual vcore, and not just using the vcore as a place to receive the
message.  Examples of vcore-business are INDIR events, preempt_pending events,
scheduling events (self-ipis by the 2LS from one vcore to another), and things
like that.  There are two types: public and private.  Private will only be
handled by that vcore.  Public might be handled by another vcore.

Notif_table: This is a list of event_q*s that correspond to certain
unexpected/"one-sided" events the kernel sends to the process.  It is similar
to an IRQ table in the kernel.  Each event_q tells the kernel how the process
wants to be told about the specific event type.

Notifications: used to be a generic event, but now used in terms of the verb
'notify' (do_notify()).  In older docs, passive notification is just writing a
message somewhere.  Active notification is an IPI delivered to a vcore.  I use
that term interchangeably with an IPI, and usually you can tell by context
that I'm talking about an IPI going to a process (and not just the kernel).
The details of it make it more complicated than just an IPI, but it's
analogous.  I've started referring to the notification as the IPI, and "passive
notification" as just events, though older documentation has both meanings.

BCQ: "bounded concurrent queue".  It is a fixed size array of messages
(structs of notification events, or whatever).  It is non-blocking, supporting
multiple producers and consumers, where the producers do not trust the
consumers.  It is the primary mechanism for the kernel delivering message
payloads into a process's address space.  Note that producers don't trust each
other either (in the event of weirdness, the producers give up and say the
buffer is full).  This means that a process can produce for one of its ev_qs
(which is what it needs to do to send a message to itself).

UCQ: "unbounded concurrent queue".  This is a data structure allowing the kernel
to produce an unbounded number of messages for the process to consume.  The main
limitation to the number of messages is RAM.  Check out its documentation.

2. Async Syscalls and I/O
====================
2.1 Basics
----------------------------------------------
The syscall struct is the contract for work with the kernel, including async
I/O.  Lots of current OS async packages use epoll or other polling systems.
Note the distinction between Polling and Async I/O.  Polling is about finding
out if a call will block.  It is primarily used for sockets and pipes.  It
does relatively nothing for disk I/O, which requires a separate async I/O
system.  By having all syscalls be async, we can make polling a bit easier and
more unified with the generic event code that we use for all syscalls.

For instance, we can have a sys_poll syscall, which is async just like any
other syscall.  The call can be a "one shot / non-blocking", like current
systems' polling code, or it can also notify on change (not requiring future
polls) via the event_q mechanisms.  If you don't want to be IPId, you can
"poll" the syscall struct - not requiring another kernel crossing/syscall.

Note that we do not tie syscalls and polling to FDs.  We do events on
syscalls, which can be used to check FDs.  I think a bunch of polling cases
will not be needed once we have async syscalls, but for those that remain,
we'll have sys_poll() (or whatever).

To receive an event on a syscall completion or status change, just fill in the
event_q pointer.  If it is 0, the kernel will assume you poll the actual
syscall struct.

struct syscall {
        current stuff           /* arguments, retvals */
        struct ev_queue *       /* struct used for messaging, including IPIs*/
        void *                  /* used by 2LS, usually a struct u_thread * */
}
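
Here's a rough sketch of those two options in C.  The field names (ev_q,
flags), the SC_DONE flag, and helpers like atomic_read()/cpu_relax() follow
the usual Akaros conventions, but treat them as assumptions and check
syscall.h / parlib for the real names:

struct syscall sysc = {0};
/* ... fill in the syscall number and arguments, then submit it ... */

/* Option 1: ask for an event on completion / status change. */
sysc.ev_q = my_ev_q;            /* an event_queue you set up earlier */

/* Option 2: leave ev_q as 0 and poll the struct yourself - no extra
 * kernel crossing needed. */
while (!(atomic_read(&sysc.flags) & SC_DONE))
        cpu_relax();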

One issue with async syscalls is that there can be too many outstanding IOs
(normally sync calls provide feedback / don't allow you to over-request).
Eventually, processes can exhaust kernel memory (the kthreads, specifically).
We need a way to limit the kthreads per proc, etc.  Shouldn't be a big deal.

Normally, we talk about changing the flag in a syscall to SC_DONE.  Async
syscalls can be SC_PROGRESS (new stuff happened on it), which can trigger a
notification event.  Some calls, like AIO or bulk accept, exist for a while
and slowly get filled in / completed.  In the future, we'll also want a way to
abort the in-progress syscalls (possibly any syscall!).

2.2 Uthreads Blocking on Syscalls
----------------------------------------------
Many threading libraries will want some notion of a synchronous, blocking
thread.  These threads use regular I/O calls, which are async under the hood,
but don't want to bother with callbacks or other details of async I/O.  In
this section, I'll talk a bit about how this works, esp regarding
uthreads/pthreads.

'Blocking' refers to user threads, and has nothing to do with an actual
process blocking/waiting on some kernel event.  The kernel does not know
anything about what goes on here.  While a bit confusing, this allows
applications to do whatever they want on top of an async interface, and is a
consequence of decoupling cores from user-threads from kthreads.

2.2.1 Basics of Uthread Blocking
---------------
When a thread calls a glibc function that makes a system call, if the syscall
is not yet complete when the kernel returns to userspace, glibc will check for
the existence of a second level scheduler and attempt to use it to yield its
uthread.  If there is no 2LS, the code just spins for now.  Eventually, it
will try to suspend/yield the process for a while (til the call is done), aka,
block in the kernel.

If there is a 2LS, the current thread will yield, and call out to the 2LS's
blockon_sysc() method, which needs a way to stop the thread and be able to
restart it when the syscall completes.  Specifically, the pthread 2LS registers
the syscall to respond to an event (described in detail elsewhere in this doc).
When the event comes in, meaning the syscall is complete, the thread is put on
the runnable list.

Details:
- A pointer to the struct pthread is stored in the syscall's void*.  When the
  syscall is done, we normally get a message from the kernel, and the payload
  tells us the syscall is done, which tells us which thread to unblock.
- The pthread code also always asks for an IPI and event message for every
  syscall that completes.  This is far from ideal.  Still, the basics are the
  same for any threading library.  Once you know a thread is done, you need to
  do something about it.
- The pthread code does syscall blocking and event notification on a per-core
  basis.  Using the default (VCPD) ev_mbox for this is a bad idea (which we did
  at some point).
- There's a race between the 2LS trying to sign up for events and the kernel
  finishing the event.  We handle this in uthread code, so use the helper
  register_evq(), which does the right thing (atomics, careful ordering with
  writes, etc).  There's a sketch of this flow right below.
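
Here's a rough sketch of that flow: a 2LS blockon handler that stashes the
uthread and signs up for the completion event.  The op signature, the helper
names (register_evq(), pth_thread_runnable()), and the per-vcore ev_q lookup
are illustrative assumptions modeled on the pthread code described above:

static void pth_blockon_sysc(struct uthread *uth, void *arg)
{
        struct syscall *sysc = (struct syscall*)arg;

        /* Stash the thread so the event handler knows whom to wake. */
        sysc->u_data = uth;
        /* Race-aware registration (per the doc above): if the syscall already
         * finished, registration fails and we just make the thread runnable
         * again instead of waiting for an event that will never come. */
        if (!register_evq(sysc, sysc_ev_q_for(vcore_id())))  /* hypothetical lookup */
                pth_thread_runnable(uth);
}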

2.2.2 Recovering from Event Overflow
---------------
Event overflow recovery is unnecessary, since syscall ev_qs use UCQs now.  This
section is kept around for some useful tidbits, such as details about
deregistering ev_qs for a syscall:

---------------------------
The pthread code expects to receive an event somehow to unblock a thread
once its syscall is done.  One limitation to our messaging systems is that you
can't send an infinite amount of event messages.  (By messages, I mean a chunk
of memory with a payload, in this case consisting of a struct syscall *).
Event delivery degrades to setting a bit in the case of the message queue being
full (more details on that later).

The pthread code (and any similar 2LS) needs to handle waking up threads
blocked on syscalls when the event message was lost and all we know is that
some syscall completed - one that was supposed to send a message to a
particular event queue (per-core in the case of the pthread code, actually the
VCPD for now).  The basic idea is to poll all outstanding system calls and
unblock whoever is done.

The key problem is due to a race: for a given syscall, we don't know if we're
going to get a message for it or not.  There could be a completion
message in the queue for the syscall while we are going through the list of
blocked threads.  If we assume we already got the message (or it was lost in
the overflow), but didn't really, then if we finish an SC and free its memory
(free or return up the stack), we could later get a message for it, and all
sorts of things would go wrong (like trying to unblock a pointer that is
gibberish).

Here's what we do:
1) Set a "handling overflow" flag so we don't recurse.
2) Turn off event delivery for all syscalls on our list.
3) Handle any event messages.  This is how we make a distinction between
finished syscalls that had a message sent and those that didn't.  We're doing
the message-sent ones here.
4) For any left on the list, check to see if they are done.  We actually do
this by attempting to turn on event delivery for them.  Turning on event
delivery can fail if the call is already done.  So if it fails, they are done
and we unblock them (similar to how we block the threads in the first place).
If it doesn't fail, they are now ready to receive messages.  This can be
tweaked a bit.
5) Unset the overflow-handling flag.

One thing to be careful of: when you turn off event delivery, you need to
be sure the kernel isn't in the process of sending an event.  This is why we
have the SC_K_LOCK syscall flag.  Uthread code will not consider deregistration
complete while that flag is set, since the kernel is still mucking with the
syscall (and sending an event).  Once the flag is clear, the event has been
delivered (the ev_msg is in the ev_mbox), and our assumptions remain true.

There are a couple implications of this style.  If you have a shared event
queue (with other event sources), those events can get mixed in with the
recovery.  Don't leave the vcore context due to other events.  This'll
probably need work.  The other thing is that completed syscalls can get
handled in a different order than they were signaled.  Shouldn't be a big
deal.

Note on the overflow handling flag and unsetting it.  There should not be any
races with this.  The flag prevented us from handling overflows on the event
queue.  Other than when we checked for events that had been successfully sent,
we didn't try to handle events.  We can unset the flag, and at that point we
can start handling missed events.  If there was an overflow after we last
checked the list, but before we cleared the overflow-handling flag, we'll
still catch it since we haven't tried handling events in between checking the
list and clearing the flag.  That flag doesn't even matter until we want to
handle_events, so we aren't missing anything.  The next handle_events() will
deal with everything from scratch.

For blocking threads that block concurrently with the overflow handling: in
the pthread case, this can't happen since everything is per-vcore.  If you do
have process-wide thread blocking/syscall management, we can add new ones, but
they must have event delivery turned off when they are added to the list.  And
you'll need to lock the list, etc.  This should work in part due to new
syscalls being added to the end of the list, and the overflow-handler
proceeding linearly through the list.

Also note that we shouldn't handle the event for unblocking a syscall on a
different core than the one it was submitted to.  This could result in
concurrent modifications to the original core's TAILQ (bad).  This restriction
is dependent on how a 2LS does its thread handling/blocking.

Eventually, we'll want a way to detect and handle excessive overflow, since
it's probably quite expensive.  Perhaps turn it off and periodically poll the
syscalls for completion (but don't bother turning on the ev_q).
---------------------------

3. Event Delivery / Notification
====================
3.1 Basics
----------------------------------------------
The mbox (mailbox) is where the actual messages go.

struct ev_mbox {
        bcq of notif_events     /* bounded buffer, multi-consumer/producer */
        msg_bitmap
}
struct ev_queue {               /* aka, event_q, ev_q, etc. */
        struct ev_mbox *
        void handler(struct event_q *)
        vcore_to_be_told
        flags                   /* IPI_WANTED, RR, 2L-handle-it, etc */
}
struct ev_queue_big {
        struct ev_mbox *        /* pointing to the internal storage */
        vcore_to_be_told
        flags                   /* IPI_WANTED, RR, 2L-handle-it, etc */
        struct ev_mbox { }      /* never access this directly */
}

The purpose of the big one is to simply embed some storage.  Still, only
access the mbox via the pointer.  The big one can be cast to (and stored as)
the regular one, so long as you know to dealloc a big one (free() knows, custom
styles or slabs would need some help).

The ev_mbox says where to put the actual message, and the flags handle things
such as whether or not an IPI is wanted.

Using pointers for the ev_q like this allows multiple event queues to use the
same mbox.  For example, we could use the vcpd queue for both kernel-generated
events as well as async syscall responses.  The notification table is actually
a bunch of ev_qs, many of which could be pointing to the same vcore/vcpd-mbox,
albeit with different flags.
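
A quick sketch of that sharing, using the pseudo-structs above.  The concrete
field names (ev_mbox, ev_vcore, ev_flags) and the vcpd mbox accessor are
assumptions in the spirit of the real event.h:

/* Two queues, different delivery policies, one mailbox. */
struct ev_mbox *shared_mbox = &vcpd_of(0)->ev_mbox_public;      /* for example */

struct ev_queue kernel_notifs = {
        .ev_mbox  = shared_mbox,
        .ev_vcore = 0,
        .ev_flags = EVENT_IPI,          /* poke vcore 0 when a message lands */
};
struct ev_queue syscall_evts = {
        .ev_mbox  = shared_mbox,
        .ev_vcore = 0,
        .ev_flags = 0,                  /* just drop the message in the mbox */
};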

3.2 Kernel Notification Using Event Queues
----------------------------------------------
The notif_tbl/notif_methods (kernel-generated 'one-sided' events) is just an
array of struct ev_queue*s.  Handling a notification is like any other time
when we want to send an event.  Follow a pointer, send a message, etc.  As
with all ev_qs, ev_mbox* points to where you want the message for the event,
which usually is the vcpd's mbox.  If the ev_q pointer is 0, then we know the
process doesn't want the event (equivalent to the older 'NOTIF_WANTED' flag).
Theoretically, we can send kernel notifs to user threads.  While it isn't
clear that anyone will ever want this, it is possible (barring other issues),
since they are just events.
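
For reference, here's a rough sketch of registering for one of these one-sided
events from userspace.  The allocator, register_kevent_q(), and the
EV_FREE_APPLE_PIE event type are assumptions in the spirit of parlib's event
code; check event.h for the real names and signatures:

struct event_queue *ev_q = get_event_q();       /* hypothetical allocator */

ev_q->ev_mbox  = &vcpd_of(vcore_id())->ev_mbox_public;
ev_q->ev_vcore = vcore_id();            /* who gets told */
ev_q->ev_flags = EVENT_IPI;             /* and poked */
/* Fill in the notif table slot for this event type: */
register_kevent_q(ev_q, EV_FREE_APPLE_PIE);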

Also note the flag EVENT_VCORE_APPRO.  Processes should set this for certain
types of events where they want the kernel to send the event/IPI to the
'appropriate' vcore.  For example, when sending a message about a preemption
coming in, it makes sense for the kernel to send it to the vcore that is going
to get preempted, but the application could choose to ignore the notification.
When this flag is set, the kernel will also use the vcore's ev_mbox, ignoring
the process's choice.  We can change this later, but it doesn't really make
sense for a process to pick an mbox and also say VCORE_APPRO.

There are also interfaces in the kernel to put a message in an ev_mbox
regardless of the process's wishes (post_vcore_event()), and to send an IPI
at any time (proc_notify()).

3.3 IPIs, Indirection Events, and Fallback (Spamming Indirs)
----------------------------------------------
An ev_q can ask for an IPI, for an indirection event, and for an indirection
event to be spammed in case a vcore is offline (sometimes called the 'fallback'
option), or any combination of these.  Note that these have little to do with
the actual message being sent.  The actual message is dropped in the ev_mbox
pointed to by the ev_q.

The main use for all of this is for syscalls.  If you want to receive an event
when a syscall completes or has a change in status, simply allocate an event_q,
and point the syscall at it.  syscall: ev_q* -> "vcore for IPI, syscall message
in the ev_q mbox", etc.  You can also point it to an existing ev_q.  Pthread
code has examples of two ways to do this.  Both have per-vcore ev_qs, requesting
IPIs, INDIRS, and SPAM_INDIR.  One way is to have an ev_mbox per vcore, and
another is to have a global ev_mbox that all ev_qs point to.  As a side note, if
you do the latter, you don't need to worry about a vcore's ev_q if it gets
preempted: just check the global ev_mbox (which is done by checking your own
vcore's syscall ev_q).
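
Here's a sketch of the per-vcore variant.  The flag names come from this doc;
the big-ev_q allocator (one with its own embedded mbox) is an assumption:

/* One ev_q per vcore, each with its own mbox (don't use the VCPD mbox for
 * syscall completions - see section 3.3.6). */
struct event_queue *sysc_ev_q = get_big_event_q();      /* hypothetical alloc */

sysc_ev_q->ev_vcore = vcoreid;
sysc_ev_q->ev_flags = EVENT_IPI | EVENT_INDIR | EVENT_SPAM_INDIR;

/* Later, when a uthread blocks on a syscall: */
sysc->ev_q = sysc_ev_q;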

3.3.1: IPIs and INDIRs
---------------
An EVENT_IPI simply means we'll send an IPI to the given vcore.  Nothing else.
This will usually be paired with an Indirection event (EVENT_INDIR).  An INDIR
is a message of type EV_EVENT with an ev_q* payload.  It means "check this
ev_q".  Most ev_qs that ask for an IPI will also want an INDIR so that the vcore
knows why it was IPIed.  You don't have to do this: for instance, your 2LS might
poll its own ev_q, so you won't need the indirection event.

Additionally, note that IPIs and INDIRs can be spurious.  It's not a big deal to
receive an IPI and have nothing to do, or to be told to check an empty ev_q.
All of the event handling code can deal with this.

INDIR events are sent to the VCPD public mbox, which means they will get handled
if the vcore gets preempted.  Any other messages sent here will also get handled
during a preemption.  However, the only type of messages you should use this for
are ones that can handle spurious messages.  The completion of a syscall is an
example of a message that cannot be spurious.  Since INDIRs can be spurious, we
can use the public mbox.  (Side note: the kernel may spam INDIRs in attempting
to make sure you get the message on a vcore that didn't yield.)

Never use a VCPD mbox (public or private) for messages you might want to receive
if that vcore is offline.  If you want to be sure to get a message, create your
own ev_q and set flags for INDIR, SPAM_INDIR, and IPI.  There's no guarantee a
*specific* message will get looked at.  In cases where it won't, the kernel will
send that message to another vcore.  For example, if the kernel posts an INDIR
to a VCPD mbox (the public one btw) and it loses a race with the vcore yielding,
the vcore might never see that message.  However, the kernel knows it lost the
race, and will find another vcore to send it to.

3.3.2: Spamming Indirs / Fallback
---------------
Both IPI and INDIR need an actual vcore.  If that vcore is unavailable and if
EVENT_SPAM_INDIR is set, the kernel will pick another vcore and send the
messages there.  This allows an ev_q to be set up to handle work when the vcore
is online, while allowing the program to handle events when that core yields,
without having to reset all of its ev_qs to point to "known" available vcores
(and avoiding those races).  Note 'online' is synonymous with 'mapped', when
talking about vcores.  A vcore technically isn't always online, only destined
to be online, when it is mapped to a pcore (kmsg on the way, etc).  It's
easiest to think of it being online for the sake of this discussion.

One question is whether or not 2LSs need a SPAM_INDIR flag for their ev_qs.
The main use for SPAM_INDIR is so that vcores can yield.  (Note that fallback
won't save you from *missing* INDIR messages in the event of a preemption; you
can always lose that race due to it taking too long to process the messages).
An alternative would be for vcores to pick another vcore and change all of its
ev_qs to that vcore.  There are a couple problems with this.  One is that it'll
be a pain to get those ev_qs back when the vcore comes back online (if ever).
Another issue is that other vcores will build up a list of ev_qs that they
aren't aware of, which will be hard to deal with when *they* yield.  SPAM_INDIR
avoids all of those problems.

An important aspect of spamming indirs is that it works with yielded vcores,
not preempted vcores.  It could be that there are no cores that are online, but
there should always be at least one core that *will* be online in the future, a
core that the process didn't want to lose and will deal with in the future.  If
not for this distinction, SPAM_INDIR could fail.  An older idea would be to have
fallback send the msg to the desired vcore if there were no others.  This would
not work if the vcore yielded and then the entire process was preempted or
otherwise not running.  Another way to put this is that we need a field to
determine whether a vcore is offline temporarily or permanently.

This is why we have the VCPD flag 'VC_CAN_RCV_MSG'.  It tells the kernel's event
delivery code that the vcore will check the messages: it is an acceptable
destination for a spammed indir.  There are two reasons to put this in VCPD:
1) Userspace can remotely turn off a vcore's msg reception.  This is necessary
for handling preemption of a vcore that was in uthread context, so that we can
remotely 'yield' the core without having to sys_change_vcore() (which I discuss
below, and is meant to 'unstick' a vcore).
2) Yield is simplified.  The kernel no longer races with itself nor has to worry
about turning off that flag - userspace can do it when it wants to yield (turn
off the flag, check messages, then yield - see the sketch below).  This is less
of a big deal now that the kernel races with vcore membership in the online_vcs
list.
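
A minimal sketch of that userspace yield path.  The helpers (vcpd_of(),
atomic_and(), handle_events(), sys_yield()) and the exact argument semantics
are assumptions in the spirit of parlib's vcore_yield():

struct preempt_data *vcpd = vcpd_of(vcore_id());

/* Tell the kernel to stop counting on us as a spam target... */
atomic_and(&vcpd->flags, ~VC_CAN_RCV_MSG);
/* ...drain anything that arrived before the flag flip... */
handle_events(vcore_id());
/* ...then ask to yield.  The kernel rechecks notif_pending and will restart
 * us instead of yielding if a message snuck in. */
sys_yield(FALSE);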

Two aspects of the code make this work nicely.  The VC_CAN_RCV_MSG flag greatly
simplifies the kernel's job.  There are a lot of weird races we'd have to deal
with, such as process state (RUNNING_M), whether a mass preempt is going on, or
just one core, or a bunch of cores, mass yields, etc.  A flag that does one
thing well helps a lot - esp since preemption is not the same as yielding.  The
other useful thing is being able to handle spurious events.  Vcore code can
handle extra IPIs and INDIRs to non-VCPD ev_qs.  Any vcore can handle an ev_q
that is "non-VCPD business".

Worth mentioning is the difference between 'notif_pending' and VC_CAN_RCV_MSG.
VC_CAN_RCV_MSG is the process saying it will check for messages.
'notif_pending' is when the kernel says it *has* sent a message.
'notif_pending' is also used by the kernel in proc_yield() and the 2LS in
pop_user_ctx() to make sure the sent message is not missed.

Also, in case this comes up, there's a slight race on changing the mbox* and the
vcore number within the event_q.  The message could have gone to the wrong (old)
vcore, but not the IPI.  Not a big deal - IPIs can be spurious, and the other
vcore will eventually get it.  The real way around this is to create a new ev_q
and change the pointer (thus atomically changing the entire ev_q's contents),
though this can be a bit tricky if you have multiple places pointing to the same
ev_q (can't change them all at once).

3.3.3: Fallback and Preemption
---------------
SPAM_INDIR doesn't protect you from preemptions.  A vcore can be preempted and
have INDIRs in its VCPD.

It is tempting to just use sys_change_vcore(), which will change the calling
vcore to the new one.  This should only be used to "unstick" a vcore.  A vcore
is stuck when it was preempted while it had notifications disabled.  This is
usually when it is in vcore context, but also in any lock-holding code for locks
shared with vcore context (the userspace equivalent of irqsave locks).  With
this syscall, you could change to the offline vcore and process its INDIRs.

The problem with that plan is that the calling core (the one trying to save the
other) may have extra messages, and that sys_change_vcore does not return.  We
need a way to deal with our other messages.  We're back to the same problem we
had before, just with different vcores.  The only thing we really accomplished
is that we unstuck the other vcore.  We could tell the restarted vcore (via an
event) to switch back to us, but by the time it does that, it may have other
events that got lost.  So we're back to polling the ev_qs that it might have
received INDIRs about.  Note that we still want to send an event with
sys_change_vcore().  We want the new vcore to know the old vcore was put
offline: a preemption (albeit one that it chose to do, and one that isn't stuck
in vcore context).

One older plan was to force the 2LS to deal with this.  The 2LS
would check the ev_mboxes/ev_qs of all ev_qs that could send INDIRS to the
offline vcore.  There could be INDIRS in the VCPD that are just lying there.
The 2LS knows which ev_qs these are (such as for completed syscalls), and for
many things, this will be a common ev_q (such as for 'vcore-x-was-preempted').
However, this is a huge pain in the ass, since a preempted vcore could have the
spammed INDIR for an ev_q associated with another vcore.  To deal with this,
the 2LS would need to check *every* ev_q that requests INDIRs.  We don't do
this.

Instead, we simply have the remote core check the VCPD public mbox of the
preempted vcore.  INDIRs (and other vcore business that other vcores can handle)
will get sorted here.

3.3.5: Lists to Find Vcores
---------------
A process has three lists: online, bulk_preempt, and inactive.  These not only
are good for process management, but also for helping alert_vcore() find
potentially alertable vcores.  alert_vcore() and its associated helpers are
fairly complicated and heavily commented.  I've set things up so both the
online_vcs and the bulk_preempted_vcs lists can be handled the same way: post to
the first element, then see if it still has VC_CAN_RCV_MSG set.  If not, and if
it is still the first on the list, then it hasn't proc_yield()ed yet, and it
will eventually restart when it tries to yield.  And this all works without
locking the proc_lock.  There are a bunch more details and races avoided.  Check
the code out.

3.3.6: Vcore Business and the VCPD mboxes
---------------
There are two types of VCPD mboxes: public and private.  Public ones will get
handled during preemption recovery.  Messages sent here need to be handle-able
by any vcore.  Private messages are for that specific vcore.  In the common
case, the public mbox will usually only get looked at by its vcore.  Only during
recovery and some corner cases will we deal with it remotely.

Here are some guidelines: if your message is spammy and the handler can deal
with spurious events and it doesn't need to be on a specific vcore, then go with
public.  Examples of public mbox events are ones that need to be spammed:
preemption recovery, INDIRs, etc.  Note that you won't need to worry about
these: uthread code and the kernel handle them.  But if you have something
similar, then that's where it would go.  You can also send non-spammy things,
but there's no guarantee they'll be looked at.

Some messages should only be sent to the private mbox.  These include ones that
make no sense for other vcores to handle.  Examples: 2LS IPIs/preemptions (like
"change your scheduling policy, vcore 3"), preemption-pending notifs from the
kernel, timer interrupts, etc.

An example of something that shouldn't be sent to either is syscall completions.
They can't be spammed, so you can't send them around like INDIRs.  And they need
to be dealt with.  Other than carefully-spammed public messages, there's no
guarantee of getting a message for certain scenarios (yields).  Instead, use an
ev_q with INDIR set.

Also note that a 2LS could set up a big ev_q with EVENT_IPI and not EVENT_INDIR,
and then poll for that in their vcore_entry().  This is equivalent to setting up
a small ev_q with EVENT_IPI and pointing it at the private mbox.
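
A sketch of those two equivalent setups (the allocator names and the private
VCPD mbox field are assumptions):

/* Style 1: big ev_q with its own embedded mbox, polled from vcore_entry(). */
struct event_queue *q1 = get_big_event_q();     /* hypothetical allocator */
q1->ev_flags = EVENT_IPI;                       /* note: no EVENT_INDIR */
q1->ev_vcore = vcoreid;

/* Style 2: small ev_q aimed at the vcore's private VCPD mbox. */
struct event_queue *q2 = get_event_q();         /* hypothetical allocator */
q2->ev_mbox  = &vcpd_of(vcoreid)->ev_mbox_private;
q2->ev_flags = EVENT_IPI;
q2->ev_vcore = vcoreid;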

3.4 Application-specific Event Handling
---------------------------------------
So what happens when the vcore/2LS isn't handling an event queue, but has been
"told" about it?  This "telling" is in the form of an IPI.  The vcore was
prodded, but is not supposed to handle the event.  This is actually what
happens now in Linux when you send signals for AIO.  It's all about who (which
thread, in their world) is being interrupted to process the work in an
application specific way.  The app sets the handler, with the option to have a
thread spawned (instead of a sighandler), etc.

This is not exactly the same as the case above where the ev_mbox* pointed to
the vcore's default mbox.  That issue was just about avoiding extra messages
(and messages in weird orders).  A vcore won't handle an ev_q if the
message/contents of the queue aren't meant for the vcore/2LS.  For example, a
thread can want to run its own handler, perhaps because it performs its own
asynchronous I/O (compared to relying on the 2LS to schedule synchronous
blocking u_threads).

There are a couple ways to handle this.  Ultimately, the application is supposed
to handle the event.  If it asked for an IPI, it is because something ought to
be done, which really means running a handler.  We used to support the
application setting EVENT_THREAD in the ev_q's flags, and the 2LS would spawn a
thread to run the ev_q's handler.  Now we just have the application block a
uthread on the evq.  If an ev_handler is set, the vcore will execute the
handler itself.  Careful with this, since the only memory it touches must be
pinned, the function must not block (this is only true for the handlers called
directly out of vcore context), and it should return quickly.

Note that in either case, vcore-written code (library code) does not look at
the contents of the notification event.  Also note the handler takes the whole
event_queue, and not a specific message.  It is more flexible, can handle
multiple specific events, and doesn't require the vcore code to dequeue the
event and either pass by value or allocate more memory.
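
For the "ev_handler is set" case, here's a minimal sketch of what such a
handler might look like, per the struct ev_queue pseudo-definition in 3.1.  The
handler field name and the mbox-draining helper are assumptions:

/* Runs in vcore context: keep it brief, touch only pinned memory, and never
 * block. */
static void my_evq_handler(struct event_queue *ev_q)
{
        struct event_msg msg;

        /* Drain whatever is in this queue's mbox (helper name illustrative). */
        while (extract_one_mbox_msg(ev_q->ev_mbox, &msg))
                app_process_msg(&msg);          /* app-specific work */
}

/* Attach it when setting up the queue: */
ev_q->ev_handler = my_evq_handler;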

These ev_q handlers are different than ev_handlers.  The former handles an
event_queue.  The latter is the 2LS's way to handle specific types of messages.
If an app wants to process specific messages, have them sent to an ev_q under
its control; don't mess with ev_handlers unless you're the 2LS (or example
code).

Continuing the analogy between vcores getting IPIs and the OS getting HW
interrupts, what goes on in vcore context is like what goes on in interrupt
context, and the threaded handler is like running a threaded interrupt handler
(in Linux).  In the ROS world, it is like having the interrupt handler kick
off a kernel message to defer the work out of interrupt context.

If neither of the application-specific handling flags is set, the vcore will
respond to the IPI by attempting to handle the event on its own (lookup table
based on the type of event (like "syscall complete")).  If you didn't want the
vcore to handle it, then you shouldn't have asked for an IPI.  Those flags are
the means by which the vcore can distinguish between its event_qs and the
application's.  It does not make sense otherwise to send the vcore an IPI and
an event_q, but not give the code the info it needs to handle it.

In the future, we might have the ability to block a u_thread on an event_q, so
we'll have other EV_ flags to express this, and probably a void*.  This may
end up being redundant, since u_threads will be able to block on syscalls (and
not necessarily IPIs sent to vcores).

As a side note, a vcore can turn off the IPI wanted flag at any time.  For
instance, when it spawns a thread to handle an ev_q, the vcore can turn off
IPI wanted on that event_q, and the thread handler can turn it back on when it
is done processing and wants to be re-IPId.  The reason for this is to avoid
taking future IPIs (once we leave vcore context, IPIs are enabled) to let us
know about an event for which a handler is already running.

3.5 Overflowed/Missed Messages in the VCPD
---------------------------------------
This too is no longer necessary.  It's useful in that it shows what we don't
have to put up with.  Missing messages would require potentially painful
infrastructure to handle:

-----------------------------
All event_q's requesting IPIs ought to register with the 2LS.  This is for
recovering in case the vcpd's mbox overflowed, and the vcore knows it missed a
NE_EVENT type message.  At that point, it would have to check all of its
IPI-based queues.  To do so, it could check to see if the mbox has any
messages, though in all likelihood, we'll just act as if there was a message
on each of the queues (all such handlers should be able to handle spurious
IPIs anyways).  This is analogous to how the OS's block drivers don't solely
rely on receiving an interrupt (they deal with it via timeouts).  Any user
code requiring an IPI must do this.  Any code that runs better due to getting
the IPI ought to do this.

We could imagine having a thread spawned to handle an ev_q, and the vcore
never has to touch the ev_q (which might make it easier for memory
allocation).  This isn't a great idea, but I'll still explain it.  In the
notif_ev message sent to the vcore, it has the event_q*.  We could also send a
flag with the same info as in the event_q's flags, and also send the handler.
The problem with this is that it isn't resilient to failure.  If there was a
message overflow, it would have to check the event_q (which was registered
before) anyway, and could potentially page fault there.  Also the kernel would
have faulted on it (and read it in) back when it tried to read those values.
It's somewhat moot, since we're going to have an allocator that pins event_qs.
-----------------------------

3.6 Round-Robin or Other IPI-delivery styles
---------------------------------------
In the same way that the IOAPIC can deliver interrupts to a group of cores,
round-robinning between them, so can we imagine processes wanting to
distribute the IPI/active notification of events across their vcores.  This is
only meaningful if the NOTIF_IPI_WANTED flag is set.

Eventually we'll support this, via a flag in the event_q.  When
NE_ROUND_ROBIN, or whatever, is set, a couple things will happen.  First, the
vcore field will be used in a "delivery-specific" manner.  In the case of RR,
it will probably be the most recent destination.  Perhaps it will be a bitmask
of vcores available to receive.  More important is the event_mbox*.  If it is
set, then the event message will be sent there.  Whichever vcore gets selected
will receive an IPI, and its vcpd mbox will get a NE_EVENT message.  If the
event_mbox* is 0, then the actual message will get delivered to the vcore's
vcpd mbox (the default location).

3.7 Event_q-less Notifications
---------------------------------------
Some events need to be delivered directly to the vcore, regardless of any
event_qs.  This happens currently when we bypass the notification table (e.g.,
sys_self_notify(), preemptions, etc).  These notifs will just use the vcore's
default mbox.  In essence, the ev_q is being generated/sent with the call.
The implied/fake ev_q points to the vcpd's mbox, with the given vcore set, and
with IPI_WANTED set.  It is tempting to make those functions take a
dynamically generated ev_q, though more likely we'll just use the lower level
functions in the kernel, much like the Round Robin set will need to do.  No
need to force things to fit just for the sake of using a 'solution'.  We want
tools to make solutions, not packaged solutions.

3.8 UTHREAD_DONT_MIGRATE
---------------------------------------
DONT_MIGRATE exists to allow uthreads to disable notifications/IPIs and enter
vcore context.  It is needed since you need to read vcoreid to disable notifs,
but once you read it, you need to not move to another vcore.  Here are a few
rules/guidelines.

We turn off the flag so that we can disable notifs, but turn the flag back on
before enabling.  The thread won't get migrated in that instant since notifs are
off.  But if it was the other way, we could miss a message (because we skipped
an opportunity to be dropped into vcore context to read a message).

Don't check messages/handle events when you have a DONT_MIGRATE uthread.  There
are issues with preemption recovery if you do.  In short, if two uthreads are
both DONT_MIGRATE with notifs enabled on two different vcores, and one vcore
gets preempted while the other gets an IPI telling it to recover the other one,
both could keep bouncing back and forth if they handle their preemption
*messages* without dealing with their own DONT_MIGRATEs first.  Note that the
preemption recovery code can handle having a DONT_MIGRATE thread on the vcore.
This is a special case, and it is very careful about how cur_uthread works.

All uses of DONT_MIGRATE must reenable notifs (and check messages) at some
point.  One such case is uthread_yield().  Another is mcs_unlock_notifsafe().
Note that mcs_notif_safe locks have uthreads that can't migrate for a
potentially long time.  Notifs are also disabled, so it's not a big deal.  It's
basically just the same as if you were in vcore context (though technically you
aren't) when it comes to preemption recovery: we'll just need to restart the
vcore via a syscall.  Also note that it would be a real pain in the ass to
migrate a notif_safe locking uthread.  The whole point of it is in case it grabs
a lock that would be held by vcore context, and there's no way to know it isn't
a lock on the restart-path.

3.9 Why Preemption Handling Doesn't Lock Up (probably)
---------------------------------------
One of the concerns with preemption handling is that we could get into some form
of livelock, where we ping-pong back and forth between vcores (or a set of
vcores), all of which are trying to handle each other's preemptions.  Part of
the concern is that when a vcore sys_changes to another, it can result in
another preemption message being sent.  We want to be sure that we're making
progress, and not just livelocked doing sys_change_vcore()s.

A few notes first:
1) If a vcore is holding locks or otherwise isn't handling events and is
preempted, it will let go of its locks before it gets to the point of
attempting to handle any other vcore preemption events.  Event handling is only
done when it is okay to never return (meaning no locks are held).  If this is
the situation, eventually it'll work itself out or get to a potential ping-pong
scenario.

2) When you change_to while handling preemption, once you start back up, you
will leave change_to and eventually fetch a new event.  This means any
potential ping-pong needs to happen on a fresh event.

3) If there are enough pcores for the vcores to all run, we won't issue any
change_tos, since the vcores are no longer preempted.  This means we are only
worried about situations where there aren't enough pcores for all the vcores.
We'll mostly talk about 1 pcore and 2 vcores.

4) Preemption handlers will not call change_to on their target vcore if they
are also the one STEALING from that vcore.  The handler will stop STEALING
first.

So the only way to get stuck permanently is if both cores are stuck doing a
sys_change_to(FALSE).  This means we want to become the other vcore, *and* we
need to restart our vcore where it left off.  This is due to some invariant
that keeps us from abandoning vcore context.  If we were to abandon vcore
context (with a sys_change_to(TRUE)), we basically don't need to be
preempt-recovered.  We already packaged up our cur_uthread, and we know we
aren't holding any locks or otherwise breaking any invariants.  The system will
work fine if we never run again.  (Someone just needs to check our messages).

Now, there are only two cases where we will do a sys_change_to(FALSE) *while*
handling preemptions.  Again, we aren't concerned about things like MCS-PDR
locks; those all work because the change_tos are done where we'd normally just
busy loop.  We are only concerned about change_tos during handle_vc_preempt.
These two cases are when the changing/handling vcore has a DONT_MIGRATE uthread
or when someone else is STEALING its uthread.  Note that both of these cases
are about the calling vcore, not its target.

If a vcore (referred to as "us") has a DONT_MIGRATE uthread and it is handling
events, it is because someone else is STEALING from our vcore, and we are in
the short one-shot event handling loop at the beginning of
uthread_vcore_entry().  Whichever vcore is STEALING will quickly realize it
can't steal (it sees the DONT_MIGRATE), and bail out.  If that vcore isn't
running now, we will change_to it (which is the purpose of our handling their
preemption).  Once that vcore realizes it can't steal, it will stop STEALING
and change to us.  At this point, no one is STEALING from us, and we move along
in the code.  Specifically, we do *not* handle events (we now have an event
about the other vcore being preempted when it changed_to us), and instead we
start up the DONT_MIGRATE uthread and let it run until it is migratable, at
which point we handle events and will deal with the other vcore.

So DONT_MIGRATE will be sorted out.  Likewise, STEALING gets sorted out too,
quite easily.  If someone is STEALING from us, they will quickly stop STEALING
and change to us.  There are only two ways this could even happen: they are
running concurrently with us, and somehow saw us out of vcore context before
deciding to STEAL, or they were in the process of STEALING and got preempted by
the kernel.  They would not have willingly stopped running while STEALING our
cur_uthread.  So if we are running and someone is stealing, after a round of
change_tos, eventually they run, and stop STEALING.

Note that once someone stops STEALING from us, they will not start again,
unless we leave vcore context.  If that happened, we basically broke out of the
ping-pong, and now we're onto another set of preemptions.  We wouldn't leave
vcore context if we still had preemption events to deal with.

Finally, note that we only needed to check for one message at a time at the
beginning of uthread_vcore_entry().  If we just handled the entire mbox without
checking STEALING, then we might not break out of that loop if there is a
constant supply of messages (perhaps from a vcore in a similar loop).

Anyway, that's the basic plan behind the preemption handler and how we avoid
the ping-ponging.  change_to_vcore() is built so that we handle our own
preemption before changing (pack up our current uthread), so that we make
progress.  The two cases where we can't do that get sorted out after everyone
gets to run once, and since you can't steal or have other uthreads turn on
DONT_MIGRATE while we're in vcore context, eventually we clear everything up.
There might be other bugs or weird corner cases, possibly involving multiple
vcores, but I think we're okay for now.

3.10: Handling Messages for Other Vcores
---------------------------------------
First, remember that when a vcore handles an event, there's no guarantee that
the vcore will return from the handler.  It may start fresh in vcore_entry().

The issue is that when you handle another vcore's INDIRs, you may handle
preemption messages.  If you have to do a change_to, the kernel will make sure
a message goes out about your demise.  Thus whoever recovers you will
check your public mbox.  However, the recoverer won't know that you were
working on another vcore's mbox, so those messages might never be checked.

The way around it is to send yourself a "check the other guy's messages" event.
When we might change_to and never return, if we were dealing with another
vcore's mbox, we'll send ourselves a message to finish up that mbox (if there
are any messages left).  Whoever reads our messages will eventually get that
message, and deal with it.
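
Here's a rough sketch of queueing that reminder before a change_to that might
not return.  The event type name (EV_CHECK_MSGS), the message fields, and the
sys_self_notify() signature are assumptions modeled on the descriptions in this
doc; the real thing lives in parlib's event/uthread code:

struct event_msg ev_msg = {0};

ev_msg.ev_type = EV_CHECK_MSGS;
ev_msg.ev_arg2 = rem_vcoreid;           /* whose mbox we were working on */
/* Post to our own *public* mbox, so that whoever ends up reading our messages
 * (us, or a vcore recovering us) will finish the other mbox. */
sys_self_notify(vcore_id(), EV_CHECK_MSGS, &ev_msg, FALSE);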

One thing that is a little ugly is that the way you deal with messages two
layers deep is to send yourself the message.  So if VC1 is handling VC2's
messages, and then wants to change_to VC3, VC1 sends a message to VC1 to check
VC2.  Later, when VC3 is checking VC1's messages, it'll handle the "check VC2's
messages" message.  VC3 can't directly handle VC2's messages, since it could run
a handler that doesn't return.  Nor can we just forget about VC2.  So VC3 sends
itself a message to check VC2 later.  Alternatively, VC3 could send itself a
message to continue checking VC1, and then move on to VC2.  Both seem
equivalent.  In either case, we ought to check to make sure the mbox has
something before bothering to send the message.

So for either a "change_to that might not return" or for a "check INDIRs on yet
another vcore", we send messages to ourselves so that we or someone else will
deal with it.

Note that we use TLS to track whether or not we are handling another vcore's
messages, and if we do plan a change_to that might not return, we clear the
bool so that when our vcore starts over at vcore_entry(), it starts over and
isn't still checking someone else's messages.

As a reminder of why this is important: these messages we are hunting down
include INDIRs, specifically ones to ev_qs such as the "syscall completed
ev_q".  If we never get that message, a uthread will block forever.  If we
accidentally yield a vcore instead of checking that message, we would end up
yielding the process forever since that uthread will eventually be the last
one, but our main thread is probably blocked on a join call.  Our process is
blocked on a message that already came, but we just missed it.

4. Single-core Process (SCP) Events:
====================
4.1 Basics:
---------------------------------------
Event delivery is important for SCPs' blocking syscalls.  It can also be used
(in the future) to deliver POSIX signals, which would just be another kernel
event.

SCPs can receive events just like MCPs.  For the most part, the code paths are
the same on both sides of the K/U interface.  The kernel sends events (its
event code detects an SCP and sends them to vcore 0), the kernel makes sure you
can't yield and miss an event, etc.  Userspace preps vcore context in advance,
and can do all the things vcore context does: handle events, select a thread to
run.  For an SCP, there is only one thread to run.

4.2 Degenerate Event Delivery:
---------------------------------------
That being said, there are a few tricky things.  First, there is a time before
the SCP is ready to fully receive events.  Specifically, before
vcore_event_init(), which is called out of glibc's _start.  More importantly,
the runtime linker never calls that function, yet it wants to block.

The important thing to note is that there are a few parts to event delivery:
registration (user), sending the event (kernel), making sure the proc wakes up
(kernel), and actually handling the event (user).  For syscalls, the only thing
the process (even rtld) needs is the first three.  Registration is easy - it can
be done with nothing more than kernel headers (no need for parlib) for NO_MSG
ev_qs (no need to init the UCQ).  Event handling is trickier, and requires
parlib (which rtld can't link against).  To support processes that could
register for events, but not handle them (or even enter vcore context), the
kernel needed a few changes (checking the VC_SCP_NOVCCTX flag) so that it would
wake the process, but never put it in vcore context.
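
Here's a minimal sketch of that kernel-headers-only registration, as rtld-like
code might do it.  The EVENT_NOMSG flag name and the syscall struct fields are
assumptions; the point is just that a static ev_q with no mbox/UCQ setup is
enough to get woken up:

/* Statically allocated: no parlib, no UCQ init, no handler. */
static struct event_queue __scp_ev_q;

__scp_ev_q.ev_flags = EVENT_NOMSG | EVENT_IPI;  /* wake me, no payload needed */
__scp_ev_q.ev_vcore = 0;                        /* SCPs only have vcore 0 */
sysc->ev_q = &__scp_ev_q;
/* When the syscall completes, the kernel wakes the process (without putting it
 * in vcore context, per VC_SCP_NOVCCTX); we then recheck sysc->flags. */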

This degenerate event handling just wakes the process up, at which point it can
check on its syscall.  Very early in the process's life, it'll init vcore0's
UCQ and be able to handle full events, enter vcore context, etc.

Once the SCP is up and running, it can receive events like normal.  One thing to
note is that the SCPs are not using a handle_syscall() event handler, like the
MCPs do.  They are only using the event to get the process restarted, at which
point their vcore 0 restarts thread0.  One consequence of this is that if a
process receives an unrelated event while blocking on a syscall, it'll handle
that event, then restart thread0.  Thread0 will see its syscall isn't complete,
and then re-block.  (It also re-registers its ev_q, which is harmless).  When
that syscall is finally done, the kernel will send an event and wake it up
again.

4.3 Extra Tidbits:
---------------------------------------
If we receive an event right as we transition from SCP to MCP, vcore0 could get
spammed with a message that is never received.  Right now, it's not a problem,
since vcore0 is the first vcore that will get woken up as an MCP.  This could be
an issue if we ever allow transitions from MCP back to SCP.

On a related note, it's now wrong for SCPs to sys_yield(FALSE) (not being nice,
meaning they are waiting for an event) in a loop that does not check events or
otherwise allow them to break out of that loop.  This should be fairly obvious.
A little more subtle is that these loops also need to sort out notif_pending.
If you are trying to yield and still have an old notif_pending set, the kernel
won't let you yield (it thinks you are missing the notif).  For the degenerate
mode (VC_SCP_NOVCCTX is set on vcore0), the kernel will handle dealing with
this flag.

Finally, note that while the SCP is in vcore context, it has none of the
guarantees of an MCP.  It's somewhat meaningless to talk about being gang
scheduled or knowing about the state of other vcores.  If you're running, you're
on a physical core.  You may get unexpected interrupts, get descheduled, etc.
Aside from the guarantees and being the only vcore, the main differences are
really up to the kernel scheduler.  In that sense, we have somewhat of a new
state for processes - SCPs that can enter vcore context.  From the user's
perspective, they look a lot like an MCP, and the degenerate/early mode SCPs are
like the old, dumb SCPs.  The big difference for userspace is that there isn't a
2LS yet (will need to reinit things slightly).  The kernel treats SCPs and MCPs
very differently too, but that may not always be the case.

5. Misc Things That Aren't Sorted Completely:
====================
5.1 What about short handlers?
---------------------------------------
Once we sort the other issues, we can ask for them via a flag in the event_q,
and run the handler in the event_q struct.

5.2 What about blocking on a syscall?
---------------------------------------
The current plan is to set a flag, and let the kernel go from there.  The
kernel knows which process it is, since that info is saved in the kthread that
blocked.  One issue is that the process could muck with that flag and then go
to sleep forever.  To deal with that, maybe we'd have a long running timer to
reap those.  Arguably, it's like having a process while(1).  You can screw
yourself, etc.  Killing the process would still work.