processes.txt
Barret Rhoden

All things processes!  This explains processes from a high level, especially
focusing on the user-kernel boundary and transitions to the many-core state,
which is the way in which parallel processes run.  This doesn't discuss deep
details of the ROS kernel's process code.

This is motivated by two things: kernel scalability and direct support for
parallel applications.

Part 1: Overview
Part 2: How They Work
Part 3: Resource Requests
Part 4: Preemption and Notification
Part 5: Old Arguments (mostly for archival purposes)
Part 6: Parlab app use cases

Revision History:
2009-10-30 - Initial version
2010-03-04 - Preemption/Notification, changed to many-core processes

Part 1: World View of Processes
==================================
A process is the lowest level of control, protection, and organization in the
kernel.

1.1: What's a process?
-------------------------------
Features:
- They are an executing instance of a program.  A program can load multiple
  other chunks of code and run them (libraries), but they are written to work
  with each other, within the same address space, and are in essence one
  entity.
- They have one address space / protection domain.
- They run in Ring 3 / Usermode.
- They can interact with each other, subject to permissions enforced by the
  kernel.
- They can make requests from the kernel, for things like resource guarantees.
  They have a list of resources that are given/leased to them.

None of these are new.  Here's what's new:
- They can run in a many-core mode, where their cores run at the same time,
  and they are aware of changes to these conditions (page faults,
  preemptions).  They can still request more resources (cores, memory,
  whatever).
- Every core in a many-core process (MCP) is *not* backed by a kernel
  thread/kernel stack, unlike with Linux tasks.
        - There are *no* per-core run-queues in the kernel that decide for
          themselves which kernel thread to run.
- They are not fork()/exec()ed.  They are created(), and then later made
  runnable.  This allows the controlling process (parent) to do whatever it
  wants: pass file descriptors, give resources, whatever.

These changes are directly motivated by what is wrong with current SMP
operating systems as we move towards many-core: direct (first class) support
for truly parallel processes, kernel scalability, and an ability of a process
to see through classic abstractions (the virtual processor) to understand (and
make requests about) the underlying state of the machine.

1.2: What's a partition?
-------------------------------
So a process can make resource requests, but some part of the system needs to
decide what to grant, when to grant it, etc.  This goes by several names:
scheduler / resource allocator / resource manager.  The scheduler simply says
when you get some resources, then calls functions from lower parts of the
kernel to make it happen.

This is where the partitioning of resources comes in.  In the simple case (one
process per partitioned block of resources), the scheduler just finds a slot
and runs the process, giving it its resources.

A big distinction is that the *partitioning* of resources only makes sense
from the scheduler on up in the stack (towards userspace).  The lower levels
of the kernel know about resources that are granted to a process.  The
partitioning is about the accounting of resources and an interface for
adjusting their allocation.  It is a method for telling the 'scheduler' how
you want resources to be granted to processes.

A possible interface for this is procfs, which has a nice hierarchy.
Processes can be grouped together, and resources can be granted to them.  Who
does this?  A process can create its own directory entry (a partition), and
move anyone it controls (parent of, though that's not necessary) into its
partition or a sub-partition.  Likewise, a sysadmin/user can simply move PIDs
around in the tree, creating partitions consisting of processes completely
unaware of each other.

Now you can say things like "give 25% of the system's resources to apache and
mysql".  They don't need to know about each other.  If you want finer-grained
control, you can create subdirectories (subpartitions), and give resources on
a per-process basis.  This is back to the simple case of one process for one
(sub)partition.
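
For concreteness, a partition tree for that example might look something like
the sketch below.  This is purely hypothetical; the actual procfs layout, file
names, and knobs have not been designed:

    /proc/partitions/webstack/        25% of the system's resources
        resources                     knobs for cores, RAM, etc.
        apache/                       subpartition for the apache processes
            pids
            resources
        mysql/                        subpartition for the mysql processes
            pids
            resources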

This is all influenced by Linux's cgroups (process control groups).
http://www.mjmwired.net/kernel/Documentation/cgroups.txt. They group processes
together, and allow subsystems to attach meaning to those groups.

Ultimately, I view partitioning as something that tells the kernel how to
grant resources.  It's an abstraction presented to userspace and higher levels
of the kernel.  The specifics still need to be worked out, but by separating
them from the process abstraction, we can work it out and try a variety of
approaches.

The actual granting of resources and enforcement is done by the lower levels
of the kernel (or by hardware, depending on future architectural changes).

Part 2: How They Work
===============================
2.1: States
-------------------------------
PROC_CREATED
PROC_RUNNABLE_S
PROC_RUNNING_S
PROC_WAITING
PROC_DYING
PROC_DYING_ABORT
PROC_RUNNABLE_M
PROC_RUNNING_M

Difference between the _M and the _S states:
- _S : legacy process mode.  There is no need for a second-level scheduler, and
  the code running is analogous to a user-level thread.
- RUNNING_M implies *guaranteed* core(s).  You can be a single core in the
  RUNNING_M state.  The guarantee is subject to time slicing, but when you
  run, you get all of your cores.
- The time slicing is at a coarser granularity for _M states.  This means that
  when you run an _S on a core, it should be interrupted/time sliced more
  often, which also means the core should be classified differently for a
  while.  Possibly even using its local APIC timer.
- A process in an _M state will be informed about changes to its state, e.g.,
  will have a handler run in the event of a page fault.

For more details, check out kern/inc/process.h.  For valid transitions between
these, check out kern/src/process.c's proc_set_state().
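
As a rough illustration (not the actual kernel code), a transition check in
the style of proc_set_state() might look like the sketch below.  The flag
values and the particular rule shown are illustrative; the real definitions
live in the files above:

    /* Sketch only: real state definitions are in kern/inc/process.h and the
     * real transition rules are in kern/src/process.c's proc_set_state(). */
    #include <errno.h>
    #include <stdint.h>

    #define PROC_CREATED     0x01
    #define PROC_RUNNABLE_S  0x02
    #define PROC_RUNNING_S   0x04
    #define PROC_WAITING     0x08
    #define PROC_DYING       0x10
    #define PROC_RUNNABLE_M  0x20
    #define PROC_RUNNING_M   0x40

    struct proc {
        uint32_t state;
    };

    /* Example rule: a process may only become RUNNABLE_M from RUNNING_S (a
     * core request), RUNNABLE_S, or RUNNING_M (a preemption). */
    static int proc_set_state_sketch(struct proc *p, uint32_t state)
    {
        uint32_t curstate = p->state;

        if (state == PROC_RUNNABLE_M &&
            !(curstate & (PROC_RUNNING_S | PROC_RUNNABLE_S | PROC_RUNNING_M)))
            return -EINVAL;
        p->state = state;
        return 0;
    }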

2.2: Creation and Running
-------------------------------
Unlike the fork-exec model, processes are created, and then explicitly made
runnable.  In the time between creation and running, the parent (or another
controlling process) can do whatever it wants with the child, such as passing
specific file descriptors or mapping shared memory regions (which can be used
to pass arguments).

New processes are not a copy-on-write version of the parent's address space.
Due to our changes in the threading model, we no longer need (or want) this
behavior left over from the fork-exec model.

By splitting the creation from the running and by explicitly sharing state
between processes (like inherited file descriptors), we avoid a lot of
concurrency and security issues.
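
As a sketch, the parent-side flow might look like the following.  The syscall
names and signatures here are purely illustrative; the actual interface is
still being worked out:

    /* Hypothetical parent-side flow: create, configure, then run. */
    int child = sys_proc_create(program_path);   /* child is PROC_CREATED */
    sys_pass_fd(child, log_fd);             /* hand it specific descriptors */
    sys_map_shared(child, arg_page, len);   /* shared region for arguments */
    sys_proc_run(child);                    /* only now is it made runnable */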

2.3: Vcoreid vs Pcoreid
-------------------------------
The vcoreid is a virtual cpu number.  Its purpose is to provide an easy way
for the kernel and userspace to talk about the same core.  pcoreid (physical)
would also work.  The vcoreid makes things a little easier, such as when a
process wants to refer to one of its other cores (not the calling core).  It
also makes the event notification mechanisms easier to specify and maintain.

Processes that care about locality should check what their pcoreid is.  This
is currently done via sys_getcpuid().  The name will probably change.

2.4: Transitioning to and from states
-------------------------------
2.4.1: To go from _S to _M, a process requests cores.
--------------
A resource request from 0 to 1 or more causes a transition from _S to _M.  The
calling context is saved in the uthread slot (uthread_ctx) in vcore0's
preemption data (in procdata).  The second level scheduler needs to be able to
restart the context when vcore0 starts up.  To do this, it will need to save the
TLS/TCB descriptor and the floating point/silly state (if applicable) in the
user-thread control block, and do whatever is needed to signal vcore0 to run the
_S context when it starts up.  One way would be to mark vcore0's "active thread"
variable to point to the _S thread.  When vcore0 starts up at
_start/vcore_entry() (like all vcores), it will see a thread was running there
and restart it.  The kernel will migrate the _S thread's silly state (FP) to the
new pcore, so that it looks like the process was simply running the _S thread
and got notified.  Odds are, it will want to just restart that thread, but the
kernel won't assume that (hence the notification).

In general, all cores (and all subsequently allocated cores) start at the elf
entry point, with the vcoreid passed in eax or in a suitable arch-specific
manner.  There is also a syscall to get the vcoreid, but this will save an
extra trap at vcore start time.

Future proc_runs(), like from RUNNABLE_M to RUNNING_M, start all cores at the
entry point, including vcore0.  The saving of a _S context to vcore0's
uthread_ctx only happens on the transition from _S to _M (which the process
needs to be aware of for a variety of reasons).  This also means that userspace
needs to handle vcore0 coming up at the entry point again (and not starting the
program over).  This is currently done in sysdeps-ros/start.c, via the static
variable init.  Note there are some tricky things involving dynamically linked
programs, but it all works currently.

When coming in to the entry point, whether as the result of a startcore or a
notification, the kernel will set the stack pointer to whatever is requested
by userspace in procdata.  A process should allocate stacks of whatever size
it wants for its vcores when it is in _S mode, and write these locations to
procdata.  These stacks are the transition stacks (in Lithe terms) that are
used as jumping-off points for future function calls.  These stacks need to be
used in a continuation-passing style, and each time they are used, they start
from the top.
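
As a sketch, the user-level entry point logic implied above might look like
this.  The helper names are invented stand-ins for the second-level scheduler
and the per-vcore data in procdata:

    /* Illustrative vcore entry: every vcore comes up here on its transition
     * stack, with notifications masked. */
    void vcore_entry_sketch(uint32_t vcoreid)
    {
        if (vcoreid == 0 && scp_ctx_was_saved()) {
            /* The old _S context was stashed in the uthread slot during the
             * _S -> _M transition; hand it to the 2LS to restart. */
            sched_restart_scp_ctx();
        }
        /* Otherwise (or afterwards), run the second-level scheduler. */
        sched_entry();
    }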

2.4.2: To go from _M to _S, a process requests 0 cores
--------------
The caller becomes the new _S context.  Everyone else gets trashed
(abandon_core()).  Their stacks are still allocated and it is up to userspace
to deal with this.  In general, they will regrab their transition stacks when
they come back up.  Their other stacks and whatnot (like TBB threads) need to
be dealt with.

When the caller next switches to _M, that context (including its stack)
maintains its old vcore identity.  If vcore3 causes the switch to _S mode, it
ought to remain vcore3 (lots of things get broken otherwise).
As of March 2010, the code does not reflect this.  Don't rely on anything in
this section for the time being.

2.4.3: Requesting more cores while in _M
--------------
Any core can request more cores and adjust the resource allocation in any way.
These new cores come up just like the original new cores in the transition
from _S to _M: at the entry point.

2.4.4: Yielding
--------------
sys_yield()/proc_yield() will give up the calling core, and may or may not
adjust the desired number of cores, subject to its parameters.  Yield performs
two tasks, both of which result in giving up the core.  One is for not wanting
the core anymore.  The other is in response to a preemption.  Yield may not be
called remotely (ARSC).

In _S mode, it will transition from RUNNING_S to RUNNABLE_S.  The context is
saved in scp_ctx.

In _M mode, this yields the calling core.  A yield will *not* transition from _M
to _S.  The kernel will rip it out of your vcore list.  A process can yield its
cores in any order.  The kernel will "fill in the holes of the vcoremap" for any
future new cores requested (e.g., proc A has 4 vcores, yields vcore2, and then
asks for another vcore.  The new one will be vcore2).  When any core starts in
_M mode, even after a yield, it will come back at the vcore_entry()/_start point.

Yield will normally adjust your desired amount of vcores to the amount after the
calling core is taken.  This is the way a process gives its cores back.

Yield can also be used to say the process is just giving up the core in response
to a pending preemption, but actually wants the core and does not want resource
requests to be readjusted.  For example, in the event of a preemption
notification, a process may yield (ought to!) so that the kernel does not need
to waste effort with full preemption.  This is done by passing in a bool
(being_nice), which signals the kernel that it is in response to a preemption.
The kernel will not readjust the amt_wanted, and if there is no preemption
pending, the kernel will ignore the yield.
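
A sketch of how a user-level runtime might use this (sys_yield's exact
signature is assumed, and the helper names are invented):

    /* Sketch: yield in response to a preempt-pending warning.  Passing
     * being_nice = true tells the kernel not to lower amt_wanted, and to
     * ignore the yield if no preemption is actually pending. */
    void handle_preempt_pending_sketch(uint32_t vcoreid)
    {
        save_current_uthread(vcoreid);      /* 2LS bookkeeping */
        sys_yield(/* being_nice = */ true);
        /* If the preemption already came and went, the yield is ignored and
         * we simply keep running. */
    }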

There may be an m_yield(), which will yield all or some of the cores of an MCP,
remotely.  This is discussed a bit farther down.  It's not clear exactly what
its purpose would be.

We also haven't addressed other reasons to yield, or more specifically to wait,
such as for an interrupt or an event of some sort.

2.4.5: Others
--------------
There are other transitions, mostly self-explanatory.  We don't currently use
any WAITING states, since we have nothing to block on yet.  DYING is a state
when the kernel is trying to kill your process, which can take a little while
to clean up.

Part 3: Resource Requests
===============================
A process can ask for resources from the kernel.  The kernel either grants
these requests or not, subject to QoS guarantees or other scheduler-related
criteria.

A process requests resources, currently via sys_resource_req.  The form of a
request is to tell the kernel how much of a resource it wants.  Currently,
this is the amt_wanted.  We'll also have a minimum amount wanted, which tells
the scheduler not to run the process until the minimum amount of resources are
available.
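
For example, a request for cores might look like the sketch below.  The exact
argument list of sys_resource_req is not settled; amt_min stands for the
"minimum amount wanted" described above:

    /* Sketch: ask for up to 8 cores, but don't run us with fewer than 2. */
    sys_resource_req(RES_CORES, /*amt_wanted*/ 8, /*amt_min*/ 2, /*flags*/ 0);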

How the kernel actually grants resources is resource-specific.  In general,
there are functions like proc_give_cores() (which gives certain cores to a
process) that actually do the allocation, as well as adjusting the
amt_granted for that resource.

For expressing QoS guarantees, we'll probably use something like procfs (as
mentioned above) to explicitly tell the scheduler/resource manager what the
user/sysadmin wants.  An interface like this ought to be usable both by
programs as well as simple filesystem tools (cat, etc).

Guarantees exist regardless of whether or not the allocation has happened.  An
example of this is when a process may be guaranteed to use 8 cores, but
currently only needs 2.  Whenever it asks for up to 8 cores, it will get them.
The exact nature of the guarantee is TBD, but there will be some sort of
latency involved in the guarantee for systems that want to take advantage of
idle resources (compared to simply reserving and not allowing anyone else to
use them).  A latency of 0 would mean a process wants it instantly, which
probably means they ought to be already allocated (and billed) to that
process.

Part 4: Preemption and Event Notification
===============================
Preemption and Notification are tied together.  Preemption is when the kernel
takes a resource (specifically, cores).  There are two types: core_preempt()
(one core) and gang_preempt() (all cores).  Notification (discussed below) is
when the kernel informs a process of an event, usually referring to the act of
running a function on a core (active notification).

The rough plan for preemption is to notify beforehand, then take action if
userspace doesn't yield.  This is a notification a process can ignore, though
it is highly recommended to at least be aware of impending core_preempt()
events.

4.1: Notification Basics
-------------------------------
One of the philosophical goals of ROS is to expose information up to userspace
(and allow requests based on that information).  There will be a variety of
events in the system that processes will want to know about.  To handle this,
we'll eventually build something like the following.

All events will have a number, like an interrupt vector.  Each process will
have an event queue (per core, described below).  On most architectures, it
will be a simple producer-consumer ring buffer sitting in the "shared memory"
procdata region (shared between the kernel and userspace).  The kernel writes
a message into the buffer with the event number and some other helpful
information.

Additionally, the process may request to be actively notified of specific
events.  This is done by having the process write into an event vector table
(like an IDT) in procdata.  For each event, the process writes the vcoreid it
wants to be notified on.
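
Roughly, the structures implied here might look like the sketch below.  Field
names and sizes are invented; the real layouts live in procdata and are not
final:

    #include <stdint.h>

    #define NR_EVENT_SLOTS  128     /* illustrative ring size */
    #define MAX_NR_EVENT    64      /* illustrative number of event types */

    struct event_msg_sketch {
        uint16_t ev_type;           /* the event number, like a vector */
        uint32_t ev_arg;            /* other helpful information */
    };

    /* One per core, sitting in the shared procdata region. */
    struct event_queue_sketch {
        struct event_msg_sketch ring[NR_EVENT_SLOTS];
        uint32_t prod_idx;          /* written by the kernel */
        uint32_t cons_idx;          /* written by userspace */
        uint32_t overflows;         /* bumped when the ring is full */
    };

    /* IDT-style table: for each event number, which vcore (if any) wants an
     * active notification (IPI). */
    uint32_t ev_table_vcoreid[MAX_NR_EVENT];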

4.2: Notification Specifics
-------------------------------
In procdata there is an array of per-vcore data, holding some
preempt/notification information and space for two trapframes: one for
notification and one for preemption.
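
Again as a sketch (field names invented; this is not the actual procdata
layout), each per-vcore slot might hold something like:

    struct trapframe_sketch {
        unsigned long regs[32];     /* arch-specific register state */
    };

    struct preempt_data_sketch {    /* one per vcore, in procdata */
        struct trapframe_sketch notif_tf;    /* ctx saved on notification */
        struct trapframe_sketch preempt_tf;  /* ctx saved on preemption */
        int notif_disabled;         /* "masked"; set when a vcore starts */
        int notif_pending;          /* kernel posted an event/bit */
        int preempt_pending;        /* a preemption warning */
        void *transition_stack;     /* stack pointer the kernel should use */
    };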

4.2.1: Overall
-----------------------------
When a notification arrives to a process under normal circumstances, the
kernel places the previous running context in the notification trapframe, and
returns to userspace at the program entry point (the elf entry point) on the
transition stack.  If a process is already handling a notification on that
core, the kernel will not interrupt it.  It is the process's responsibility
to check for more notifications before returning to its normal work.  The
process must also unmask notifications (in procdata) before it returns to do
normal work.  Masking notifications is the signal to the kernel to not
bother sending IPIs, and if an IPI is sent before notifications are masked,
then the kernel will double-check this flag to make sure interrupts should
have arrived.

Notification unmasking is done by clearing the notif_disabled flag (similar to
turning interrupts on in hardware).  When a core starts up, this flag is on,
meaning that notifications are disabled by default.  It is the process's
responsibility to turn on notifications for a given vcore.

4.2.2: Notif Event Details
-----------------------------
When the process runs the handler, it is actually starting up at the same
location in code as it always does.  To determine if it was a notification or
not, simply check the queue and bitmask.  This has the added benefit of allowing
a process to notice notifications that it missed previously, or notifs it wanted
without active notification (IPI).  If we want to bypass this check by having a
magic register signal, we can add that later.  Additionally, the kernel will
mask notifications (much like an x86 interrupt gate).  It will also mask
notifications when starting a core with a fresh trapframe, since the process
will be executing on its transition stack.  The process must check its per-core
event queue to see why it was called, and deal with all of the events on the
queue.  In the case where the event queue overflows, the kernel will up a
counter so the process can at least be aware things are missed.  At the very
least, the process will see the notification marked in a bitmask.
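
Putting 4.2.1 and 4.2.2 together, the userspace side of this check might look
roughly like the sketch below (helper names invented, reusing the structures
sketched in 4.1):

    /* Sketch: called from the entry point, on the transition stack, with
     * notifications masked. */
    void process_events_sketch(uint32_t vcoreid)
    {
        struct event_queue_sketch *evq = my_event_queue(vcoreid);
        struct event_msg_sketch msg;

        /* Drain the per-core ring of event messages. */
        while (dequeue_event(evq, &msg))
            handle_event(msg.ev_type, msg.ev_arg);
        /* Events that overflowed, or that carry no message, only show up in
         * the bitmask (and possibly the overflow counter). */
        for (int i = 0; i < MAX_NR_EVENT; i++)
            if (test_and_clear_event_bit(vcoreid, i))
                handle_event(i, 0);
    }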

These notification events include things such as: an IO is complete, a
preemption is pending to this core, the process just returned from a
preemption, there was a trap (divide by 0, page fault), and many other things.
We plan to allow this list to grow at runtime (a process can request new event
notification types).  These messages will often need some form of a timestamp,
especially ones that will expire in meaning (such as a preempt_pending).

Note that only one notification can be active at a time, including a fault.
This means that if a process page faults or something while notifications are
masked, the process will simply be killed.  It is up to the process to make
sure the appropriate pages are pinned, which it should do before entering _M
mode.

4.2.3: Event Overflow and Non-Messages
-----------------------------
For missed/overflowed events, and for events that do not need messages (they
have no parameters and multiple notifications are irrelevant), the kernel will
toggle that event's bit in a bitmask.  For the events that don't want messages,
we may have a flag that userspace sets, meaning they just want to know it
happened.  This might be too much of a pain, so we'll see.  For notification
events that overflowed the queue, the parameters will be lost, but hopefully the
application can sort it out.  Again, we'll see.  A specific notif_event should
not appear in both the event buffers and in the bitmask.

It does not make sense for all events to have messages.  For others, it does
not make sense to specify a different core on which to run the handler (e.g.
page faults).  The notification methods that the process expresses via procdata
are suggestions to the kernel.  When they don't make sense, they will be
ignored.  Some notifications might be unserviceable without messages.  A
process needs to have a fallback mechanism.  For example, it can read the
vcoremap to see who was lost, or it can restart a thread to cause it to page
fault again.

Event overflow sucks - it leads to a bunch of complications.  Ultimately, what
we really want is a limitless amount of notification messages (per core), as
well as a limitless amount of notification types.  And we want these to be
relayed to userspace without trapping into the kernel.

We could do this if we had a way to dynamically manage memory in procdata, with
a distrusted process on one side of the relationship.  We could imagine growing
procdata dynamically (we plan to, mostly to grow the preempt_data struct as we
request more vcores), and then run some sort of heap manager / malloc.  Things
get very tricky since the kernel should never follow pointers that userspace can
touch.  Additionally, whatever memory management we use becomes a part of the
kernel interface.

Even if we had that, dynamic notification *types* is tricky - they are
identified by a number, not by a specific (list) element.

For now, this all seems like an unnecessary pain in the ass.  We might adjust it
in the future if we come up with clean, clever ways to deal with the problem,
which we aren't even sure is a problem yet.

4.2.4: How to Use and Leave a Transition Stack
-----------------------------
We considered having the kernel be aware of a process's transition stacks and
sizes so that it can detect if a vcore is in a notification handler based on
the stack pointer in the trapframe when a trap or interrupt fires.  While
cool, the flag for notif_disabled is much easier and just as capable.
Userspace needs to be aware of various races, and only enable notifications
when it is ready to have its transition stack clobbered.  This means that when
switching from one user-thread to another, the process should temporarily
disable notifications and reenable them before starting the new thread fully.
This is analogous to having a kernel that disables interrupts while in process
context.

A process can fake not being on its transition stack, and even unmap its
stack.  At worst, a vcore could recursively page fault (the kernel does not
know it is in a handler, if it keeps enabling notifs before faulting), and
that would continue until the core is forcibly preempted.  This is not an
issue for the kernel.

When a process wants to use its transition stack, it ought to check
preempt_pending, mask notifications, jump to its transition stack, do its work
(e.g. process notifications, check for new notifications, schedule a new
thread), periodically checking for a pending preemption and making sure the
notification queue/list is empty before moving back to real code.  Then it
should jump back to a real stack, unmask notifications, and jump to the newly
scheduled thread.

This can be really tricky.  When userspace is changing threads, it will need to
unmask notifs as well as jump to the new thread.  There is a slight race here,
but it is okay.  The race is that an IPI can arrive after notifs are unmasked,
but before returning to the real user thread.  Then the code will think the
uthread_ctx represents the new user thread, even though it hasn't started (and
the PC is wrong).  The trick is to make sure that all state required to start
the new thread, as well as future instructions, is saved within the "stuff"
that gets saved in the uthread_ctx.  When these threading packages change
contexts, they ought to push the PC on the stack of the new thread, (then enable
notifs) and then execute a return.  If an IPI arrives before the "function
return", then when that context gets restarted, it will run the "return" with
the appropriate value on the stack still.

There is a further complication.  The kernel can send an IPI that the process
wanted, but the vcore did not get truly interrupted since its notifs were
disabled.  There is a race between checking the queue/bitmask and then enabling
notifications.  The way we deal with it is that the kernel posts the
message/bit, then sets notif_pending.  Then it sends the IPI, which may or may
not be received (based on notif_disabled).  (Actually, the kernel only ought to
send the IPI if notif_pending was 0 (atomically) and notif_disabled is 0).  When
leaving the transition stack, userspace should clear the notif_pending, then
check the queue, do whatever, and then try to pop the tf.  When popping the tf,
after enabling notifications, check notif_pending.  If it is still clear, return
without fear of missing a notif.  If it is not clear, it needs to manually
notify itself (sys_self_notify) so that it can process the notification that it
missed and for which it wanted to receive an IPI.  Before it does this, it needs
to clear notif_pending, so the kernel will send it an IPI.  These last parts are
handled in pop_user_ctx().
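
In rough pseudocode, that sequence looks something like the sketch below.
vcpd[] is the invented per-vcore array from the 4.2 sketch, the helpers are
made up, and the final restore glosses over the atomicity handled by the real
pop_user_ctx():

    /* Sketch: leave the transition stack without losing a notification. */
    void pop_uthread_ctx_sketch(uint32_t vcoreid)
    {
        vcpd[vcoreid].notif_pending = 0;    /* clear first */
        process_events_sketch(vcoreid);     /* drain queue and bitmask */
        vcpd[vcoreid].notif_disabled = 0;   /* unmask notifications */
        if (vcpd[vcoreid].notif_pending) {
            /* The kernel posted something after we drained; clear the flag
             * (so the kernel will send the IPI) and notify ourselves. */
            vcpd[vcoreid].notif_pending = 0;
            sys_self_notify(vcoreid);
        }
        restart_saved_uthread_ctx(vcoreid); /* pop the saved context */
    }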

4.3: Preemption Specifics
-------------------------------
There's an issue with a preempted vcore getting restarted while a remote core
tries to restart that context.  They resolve this fight with a variety of VC
flags (VC_UTHREAD_STEALING).  Check out handle_preempt() in uthread.c.

4.4: Other trickiness
-------------------------------
Take all of these with a grain of salt - it's quite old.

4.4.1: Preemption -> deadlock
-------------------------------
One issue is that a context can be holding a lock that is necessary for the
userspace scheduler to manage preempted threads, and this context can be
preempted.  This would deadlock the scheduler.  To help a process avoid
locking itself up, the kernel will toggle a preempt_pending flag in
procdata for that vcore before sending the actual preemption.  Whenever the
scheduler is grabbing one of these critical spinlocks, it needs to check that
flag first, and yield if a preemption is coming in.
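
As a sketch, a preemption-aware lock acquire in the userspace scheduler might
look like the following (spinlock_t, spin_trylock(), and vcpd[] are invented
stand-ins):

    /* Sketch: don't grab a critical 2LS lock if we're about to be preempted;
     * give the core back nicely instead. */
    void sched_lock_sketch(spinlock_t *lock, uint32_t vcoreid)
    {
        while (1) {
            if (vcpd[vcoreid].preempt_pending) {
                sys_yield(/* being_nice = */ true);
                continue;       /* if the yield returns, try again */
            }
            if (spin_trylock(lock))
                return;
        }
    }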

Another option we may implement is for the process to be able to signal to the
kernel that it is in one of these ultra-critical sections by writing a magic
value to a specific register in the trapframe.  If the kernel sees this, it
will allow the process to run for a little longer.  The issue with this is
that the kernel would need to assume processes will always do this (malicious
ones will) and add this extra wait time to the worst case preemption time.

Finally, a scheduler could try to use non-blocking synchronization (no
spinlocks), or one of our other long-term research synchronization methods to
avoid deadlock, though we realize this is a pain for userspace for now.  FWIW,
there are some OSs out there with only non-blocking synchronization (I think).

4.4.2: Cascading and overflow
-------------------------------
There used to be issues with cascading interrupts (when contexts are still
running handlers).  Imagine a pagefault, followed by preempting the handler.
It doesn't make sense to run the preempt context after the page fault.
Earlier designs had issues where it was hard for a vcore to determine the
order of events and to unmix preemption, notification, and faults.  We deal
with this by having separate slots for preemption and notification, and by
treating faults as another form of notification.  Faulting while handling a
notification just leads to death.  Perhaps there is a better way to do that.

Another thing we considered would be to have two stacks - transition for
notification and an exception stack for faults.  We'd also need a fault slot
for the faulting trapframe.  This begins to take up even more memory, and it
is not clear how to handle mixed faults and notifications.  If you fault while
on the notification slot, then fine.  But you could fault for other reasons,
and then receive a notification.  And then if you fault in that handler, we're
back to where we started - might as well just kill them.

Another issue was overload.  Consider if vcore0 is set up to receive all
events.  If events come in faster than it can process them, it will both nest
too deep and process out of order.  To handle this, we only notify once, and
will not send future active notifications / interrupts until the process
issues an "end of interrupt" (EOI) for that vcore.  This is modelled after
hardware interrupts (on x86, at least).

4.4.3: Restarting a Preempted Notification
-------------------------------
Nowadays, to restart a preempted notification, you just restart the vcore.
The kernel does this either when it gives the process more cores or when
userspace asks it to with a sys_change_vcore().

4.4.4: Userspace Yield Races
-------------------------------
Imagine a vcore realizes it is getting preempted soon, so it starts to yield.
However, it is too slow and doesn't make it into the kernel before a preempt
message takes over.  When that vcore is run again, it will continue where it
left off and yield its core.  The desired outcome is for yield to fail, since
the process doesn't really want to yield that core.  To sort this out, yield
will take a parameter saying that the yield is in response to a pending
preemption.  If the phase is over (preempted and returned), the call will not
yield and simply return to userspace.

4.4.5: Userspace m_yield
-------------------------------
There are a variety of ways to implement an m_yield (yield the entire MCP).
We could have a "no niceness" yield - just immediately preempt, but there is a
danger of the locking business.  We could do the usual delay game, though if
userspace is requesting its yield, arguably we don't need to give warning.

Another approach would be to not have an explicit m_yield call.  Instead, we
can provide a notify_all call, where the notification sent to every vcore is
to yield.  I imagine we'll have a notify_all (or rather, flags to the notify
call) anyway, so we can do this for now.

The fastest way will probably be the no niceness way.  One way to make this
work would be for vcore0 to hold all of the low-level locks (from 4.4.1) and
manually unlock them when it wakes up.  Yikes!

4.5: Random Other Stuff
-------------------------------
Pre-Notification issues: how much time does userspace need to clean up and
yield?  How quickly does the kernel need the core back (for scheduling
reasons)?

Part 5: Old Arguments about Processes vs Partitions
===============================
This is based on my interpretation of the cell (formerly what I thought was
called a partition).

5.1: Program vs OS
-------------------------------
A big difference is what runs inside the object.  I think trying to support
OS-like functionality is a quick path to unnecessary layers and complexity,
esp for the common case.  This leads to discussions of physical memory
management, spawning new programs, virtualizing HW, shadow page tables,
exporting protection rings, etc.

This unnecessarily brings in the baggage and complexity of supporting VMs,
which are a special case.  Yes, we want processes to be able to use their
resources, but I'd rather approach this from the perspective of "what do they
need?" than "how can we make it look like a real machine."  Virtual machines
are cool, and paravirtualization influenced a lot of my ideas, but they have
their place and I don't think this is it.

For example, exporting direct control of physical pages is a bad idea.  I
wasn't clear if anyone was advocating this or not.  By exposing actual machine
physical frames, we lose our ability to do all sorts of things (like swapping,
for all practical uses, and other VM tricks).  If the cell/process thinks it
is manipulating physical pages, but really isn't, we're in the VM situation of
managing nested or shadow page tables, which we don't want.

For memory, we'd be better off giving an allocation of a quantity of frames,
not specific frames.  A process can pin up to X pages, for instance.  It can
also pick pages to be evicted when there's memory pressure.  There are already
similar ideas out there, both in POSIX and in ACPM.

Instead of mucking with faking multiple programs / entities within a cell,
just make more processes.  Otherwise, you'd have to export weird controls that
the kernel is doing anyway (and can do better!), and have complicated middle
layers.

5.2: Multiple "Things" in a "partition"
-------------------------------
In the process-world, the kernel can make a distinction between different
entities that are using a block of resources.  Yes, "you" can still do
whatever you want with your resources.  But the kernel directly supports
useful controls that you want.
- Multiple protection domains are no problem.  They are just multiple
  processes.  Resource allocation is a separate topic.
- Processes can control one another, based on a rational set of rules.  Even
  if you have just cells, we still need them to be able to control one another
  (it's a sysadmin thing).

"What happens in a cell, stays in a cell."  What does this really mean?  If
it's about resource allocation and passing of resources around, we can do that
with process groups.  If it's about the kernel not caring about what code runs
inside a protection domain, a process provides that.  If it's about a "parent"
program trying to control/kill/whatever a "child" (even if it's within a cell,
in the cell model), you *want* the kernel to be involved.  The kernel is the
one that can do protection between entities.

5.3: Other Things
-------------------------------
Let the kernel do what it's made to do, and is in the best position to do:
manage protection and low-level resources.

Both processes and partitions "have" resources.  They are at different levels
in the system.  A process actually gets to use the resources.  A partition is
a collection of resources allocated to one or more processes.

In response to this:

On 2009-09-15 at 22:33 John Kubiatowicz wrote:
> John Shalf wrote:
> >
> > Anyhow, Barret is asking that resource requirements attributes be
> > assigned on a process basis rather than partition basis.  We need
> > to justify why gang scheduling of a partition and resource
> > management should be linked.

I want a process to be aware of its specific resources, as well as the other
members of its partition.  An individual process (which is gang-scheduled in
many-core mode) has a specific list of resources.  It's just that the overall
'partition of system resources' is separate from the list of specific
resources of a process, simply because there can be many processes under the
same partition (collection of resources allocated).

> >
> Simplicity!
>
> Yes, we can allow lots of options, but at the end of the day, the
> simplest model that does what we need is likely the best. I don't
> want us to hack together a frankenscheduler.

My view is also simple in the case of one address space/process per
'partition.'  Extending it to multiple address spaces is simply asking that
resources be shared between processes, but without design details that I
imagine will be brutally complicated in the Cell model.


Part 6: Use Cases
===============================
6.1: Matrix Multiply / Trusting Many-core app
-------------------------------
The process is created by something (bash, for instance).  Its parent makes
it runnable.  The process requests a bunch of cores and RAM.  The scheduler
decides to give it a certain amount of resources, which creates its partition
(aka, the chunk of resources granted to its process group, of which it is the
only member).  The sysadmin can tweak this allocation via procfs.

The process runs on its cores in its many-core mode.  It is gang scheduled,
and knows how many cores there are.  When the kernel starts the process on
its extra cores, it passes control to a known spot in code (the ELF entry
point), with the virtual core id passed as a parameter.

The code runs from a single binary image, eventually with shared
object/library support.  Its view of memory is a virtual address space, but
it also can see its own page tables to see which pages are really resident
(similar to POSIX's mincore()).

When it comes time to lose a core, or be completely preempted, the process is
notified by the OS running a handler of the process's choosing (in userspace).
The process can choose what to do (pick a core to yield, prepare to be
preempted, etc).

To deal with memory, the process is notified when it page faults, and keeps
its core.  The process can pin pages in memory.  If there is memory pressure,
the process can tell the kernel which pages to unmap.

This is the simple case.

6.2: Browser
-------------------------------
In this case, a process wants to create multiple protection domains that share
the same pool of resources.  Or rather, each with its own allocated resources.

The browser process is created, as above.  It creates, but does not run, its
untrusted children.  The kernel will have a variety of ways a process can
"mess with" a process it controls.  So for this untrusted child, the parent
can pass (for example) a file descriptor of what to render, and "sandbox" that
process (only allow a whitelist of syscalls, e.g. it can only read and write
descriptors it has).  You can't do this easily in the cell model.

The parent can also set up a shared memory mapping / channel with the child.

For resources, the parent can put the child in a subdirectory/subpartition
and give a portion of its resources to that subpartition.  The scheduler will
ensure that both the parent and the child are run at the same time, and will
give the child process the resources specified (cores, RAM, etc).

After this setup, the parent will then make the child "runnable".  This is why
we want to separate the creation from the runnability of a process, which we
can't do with the fork/exec model.

The parent can later kill the child if it wants, reallocate the resources in
the partition (perhaps to another process rendering a more important page),
preempt that process, whatever.

6.3: SMP Virtual Machines
-------------------------------
The main issue (regardless of paravirt or full virt) is that what's running
on the cores may or may not trust one another.  One solution is to run each
VM-core in its own process (like with Linux's KVM, which uses N tasks (part of
one process) for an N-way SMP VM).  The processes set up the appropriate
shared memory mapping between themselves early on.  Another approach would be
to allow a many-cored process to install specific address spaces on each
core, and interpose on syscalls, privileged instructions, and page faults.
This sounds very much like the Cell approach, which may be fine for a VM, but
not for the general case of a process.

Or with a paravirtualized SMP guest, you could (similar to the L4Linux way)
make any Guest OS processes actual processes in our OS.  The resource
allocation to the Guest OS partition would be managed by the parent process of
the group (which would be running the Guest OS kernel).  We still need to play
tricks with syscall redirection.

For full virtualization, we'd need to make use of hardware virtualization
instructions.  Dealing with the VMEXITs, emulation, and other things is a real
pain, but already done.  The long range plan was to wait until the
http://v3vee.org/ project supported Intel's instructions and eventually
incorporate that.

All of these ways involve subtle and not-so-subtle difficulties.  The
Cell-as-OS mode will have to deal with them for the common case, which seems
brutal.  And rather unnecessary.
