- Sep 03, 2008
-
kolya authored
-
- Aug 20, 2008
-
rsc authored
-
- Sep 27, 2007
-
rsc authored
kernel SMP interruptibility fixes.

Last year, right before I sent xv6 to the printer, I changed the SETGATE calls so that interrupts would be disabled on entry to interrupt handlers, and I added the nlock++ / nlock-- in trap() so that interrupts would stay disabled while the hw handlers (but not the syscall handler) did their work. I did this because the kernel was otherwise causing Bochs to triple-fault in SMP mode, and time was short.

Robert observed yesterday that something was keeping the SMP preemption user test from working. It turned out that when I simplified the lapic code I swapped the order of two register writes that I didn't realize were order dependent. I fixed that and then, since I had everything paged in, kept going and tried to figure out why you can't leave interrupts on during interrupt handlers. There are a few issues.

First, there must be some way to keep interrupts from "stacking up" and overflowing the stack. Keeping interrupts off the whole time solves this problem -- even if the clock tick handler runs long enough that the next clock tick is waiting when it finishes, keeping interrupts off means that the handler runs all the way through the "iret" before the next handler begins. This is not really a problem unless you are putting too many prints in trap -- if the OS is doing its job right, the handlers should run quickly and not stack up.

Second, if xv6 had page faults, then it would be important to keep interrupts disabled between the start of the interrupt and the time that cr2 was read, to avoid a scenario like:

    p1 page faults [cr2 set to faulting address]
    p1 starts executing trapasm.S
    clock interrupt, p1 preempted, p2 starts executing
    p2 page faults [cr2 set to another faulting address]
    p2 starts, finishes fault handler
    p1 rescheduled, reads cr2, sees wrong fault address

Alternately p1 could be rescheduled on the other cpu, in which case it would still see the wrong cr2. That said, I think cr2 is the only interrupt state that isn't pushed onto the interrupt stack atomically at fault time, and xv6 doesn't care. (This isn't entirely hypothetical -- I debugged this problem on Plan 9.)

Third, and this is the big one, it is not safe to call cpu() unless interrupts are disabled. If interrupts are enabled then there is no guarantee that, between the time cpu() looks up the cpu id and the time that the result gets used, the process has not been rescheduled to the other cpu. For example, the very commonly-used expression curproc[cpu()] (aka the macro cp) can end up referring to the wrong proc: the code stores the result of cpu() in %eax, gets rescheduled to the other cpu at just the wrong instant, and then reads curproc[%eax].

We use curproc[cpu()] to get the current process a LOT. In that particular case, if we arranged for the current curproc entry to be addressed by %fs:0 and just used a different %fs on each CPU, then we could safely get at curproc even with interrupts enabled, since the read of %fs would be atomic with the read of %fs:0. Alternately, we could have a curproc() function that disables interrupts while computing curproc[cpu()]. I've done the latter.

Even in the current kernel, with interrupts off on entry to trap, interrupts are enabled inside release if there are no locks held. Also, the scheduler's idle loop must be interruptible at times so that the clock and disk interrupts (which might make processes runnable) can be handled.
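A minimal sketch of that curproc() approach, assuming the counted splhi()/spllo() described just below; the per-CPU table is called curprocs[] here only to avoid clashing with the function name (the message itself calls it curproc), and the declarations are stand-ins rather than the actual xv6 headers:

    struct proc;
    extern struct proc *curprocs[];  // hypothetical name for the per-CPU proc table
    int  cpu(void);                  // id of the CPU we are running on
    void splhi(void);                // counted cli, defined later in this message
    void spllo(void);                // counted sti

    // Return the process running on this CPU, or 0 if none.
    // Interrupts stay off between the cpu() lookup and the array read,
    // so the pair cannot be split by a reschedule onto another CPU.
    struct proc*
    curproc(void)
    {
      struct proc *p;

      splhi();
      p = curprocs[cpu()];
      spllo();
      return p;
    }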
In addition to the rampant use of curproc[cpu()], this little snippet from acquire is wrong on smp:

    if(cpus[cpu()].nlock == 0)
      cli();
    cpus[cpu()].nlock++;

because if interrupts are still on then we might call cpu(), get rescheduled to a different cpu, look at cpus[oldcpu].nlock, and wrongly decide not to disable interrupts on the new cpu. The fix is to always call cli(). But this is wrong too:

    if(holding(lock))
      panic("acquire");
    cli();
    cpus[cpu()].nlock++;

because holding looks at cpu(). The fix is:

    cli();
    if(holding(lock))
      panic("acquire");
    cpus[cpu()].nlock++;

I've done that, and I changed cpu() to complain the first time it gets called with interrupts enabled. (It gets called too much to complain every time.)

I added new functions splhi and spllo that are like acquire and release but without the locking:

    void
    splhi(void)
    {
      cli();
      cpus[cpu()].nsplhi++;
    }

    void
    spllo(void)
    {
      if(--cpus[cpu()].nsplhi == 0)
        sti();
    }

and I've used those to protect other sections of code that refer to cpu() when interrupts would otherwise be enabled (basically just curproc and setupsegs). I also use them in acquire/release and got rid of nlock. I'm not thrilled with the names, but I think the concept -- a counted cli/sti -- is sound. Having them also replaces the nlock++/nlock-- in trap.c and main.c, which is nice.

Final note: it's still not safe to enable interrupts in the middle of trap() between lapic_eoi and returning to user space. I don't understand why, but we get a fault on pop %es because 0x10 is a bad segment descriptor (!) and then the fault faults trying to go into a new interrupt because 0x8 is a bad segment descriptor too! Triple fault. I haven't debugged this yet.
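For what it's worth, a sketch of how acquire/release end up looking with splhi/spllo standing in for the old nlock bookkeeping; the prototypes and the atomic xchg() helper are assumptions for illustration, not a copy of spinlock.c:

    struct spinlock { unsigned int locked; };
    int  holding(struct spinlock*);                           // does this CPU hold the lock?
    void panic(char*);
    void splhi(void);
    void spllo(void);
    unsigned int xchg(volatile unsigned int*, unsigned int);  // assumed atomic exchange

    void
    acquire(struct spinlock *lk)
    {
      splhi();                            // interrupts off before holding() consults cpu()
      if(holding(lk))
        panic("acquire");
      while(xchg(&lk->locked, 1) != 0)    // spin until the lock word was 0
        ;
    }

    void
    release(struct spinlock *lk)
    {
      if(!holding(lk))
        panic("release");
      lk->locked = 0;
      spllo();                            // may re-enable interrupts if the count hits 0
    }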
-
rsc authored
drop , address=0xf0000 from romimage line. newer bochs has a 128k bios that it loads elsewhere. so let bochs decide where the romimage goes. change cpu quantum to 1 (default is 5, max is 16) in an attempt to provoke more races. only provokes them slightly more frequently, may not be worth the slowdown.
-
- Sep 26, 2007
-
rsc authored
-
rsc authored
the macro expansion of "char *cp;" turned into char *(curproc[cpu()]); which declares a dynamically sized array of char* called curproc. so then &cp == &(curproc[cpu()]) was actually a stack variable as "expected". it was one past the end of the array, but the implicit alloca allocated more than was necessary. do not tell me that making cp a #define was a bad idea. there are worse problems to fix. more on that later.
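A tiny illustration of the expansion (the macro is written with parentheses, as the quoted expansion suggests; the declarations are stand-ins, not the original headers):

    struct proc;
    int cpu(void);                       // current CPU id
    extern struct proc *curproc[];       // the real per-CPU proc table

    #define cp (curproc[cpu()])          // "cp" means "the current proc"

    void
    f(void)
    {
      char *cp;    // expands to: char *(curproc[cpu()]);
                   // i.e. a local, dynamically sized array of cpu()
                   // char* elements named curproc, shadowing the global
                   // table -- so &cp is the address of element cpu() of
                   // that stack array, one past its end.
    }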
-
- Jul 26, 2006
-
rtm authored
-
- Jul 15, 2006
-
rsc authored
New scheduler. Removed cli and sti stack in favor of tracking number of locks held on each CPU and explicit conditionals in spinlock.c.
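A sketch of the idea as it reads here (prototypes and the atomic xchg() helper are illustrative; note that the Sep 27, 2007 entry above explains why consulting cpu() before cli() in exactly this pattern turned out to be racy on SMP):

    struct cpu { int nlock; };                                // locks currently held on this CPU
    extern struct cpu cpus[];
    int  cpu(void);
    void cli(void);
    void sti(void);
    struct spinlock { unsigned int locked; };
    unsigned int xchg(volatile unsigned int*, unsigned int);  // assumed atomic exchange

    void
    acquire(struct spinlock *lk)
    {
      if(cpus[cpu()].nlock++ == 0)   // taking the first lock on this CPU:
        cli();                       //   interrupts off until it is released
      while(xchg(&lk->locked, 1) != 0)
        ;
    }

    void
    release(struct spinlock *lk)
    {
      lk->locked = 0;
      if(--cpus[cpu()].nlock == 0)   // released the last lock:
        sti();                       //   interrupts back on
    }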
-
- Jul 10, 2006
-
rsc authored
The native compilers on a Linux 2.4 box (gcc 3.4.6) don't seem to follow the same conventions as the i386-jos-elf-gcc compilers. You can run make 'TOOLPREFIX=' or edit the Makefile.

curproc[cpu()] can now be NULL, indicating that no proc is running. This seemed safer to me than potentially having curproc[0] and curproc[1] both pointing at proc[0].

The old implementation of swtch depended on the stack frame layout used inside swtch being okay to return from on the other stack (exactly the V6 "you are not expected to understand this"). It also could be called in two contexts: at boot time, to schedule the very first process, and later, on behalf of a process, to sleep or schedule some other process.

I split this into two functions: scheduler and swtch. The scheduler is now a separate never-returning function, invoked by each cpu once it is set up. The scheduler looks like:

    scheduler() {
      setjmp(cpu.context);
      pick proc to schedule
      blah blah blah
      longjmp(proc.context)
    }

The new swtch is intended to be called only when curproc[cpu()] is not NULL, that is, only on behalf of a user proc. It does:

    swtch() {
      if(setjmp(proc.context) == 0)
        longjmp(cpu.context)
    }

to save the current proc context and then jump over to the scheduler, running on the cpu stack.

Similarly, the system call stubs are now in assembly in usys.S to avoid needing to know the details of the stack frame layout used by the compiler. Also various changes in the debugging prints.
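Filling in the pseudocode a bit, a sketch of how the two halves fit together; the struct layouts, the pick_runnable() helper, and the exact spot where curproc[cpu()] is cleared are assumptions for illustration, and locking is omitted:

    // Kernel-style context save/restore (the kernel's own, not libc's setjmp/longjmp):
    struct context { unsigned int ebx, esi, edi, ebp, esp, eip; };  // illustrative layout
    int  setjmp(struct context*);     // returns 0 when saving, nonzero when longjmp'd to
    void longjmp(struct context*);    // restore a saved context; does not return

    struct proc { struct context context; /* state, pid, ... */ };
    struct cpu  { struct context context; };

    extern struct proc *curproc[];    // per-CPU current proc; NULL means "no proc running"
    extern struct cpu  cpus[];
    int cpu(void);
    struct proc* pick_runnable(void); // hypothetical helper: next runnable proc or NULL

    // Per-CPU scheduler: entered once per cpu at boot, never returns,
    // and always runs on the cpu's own stack.
    void
    scheduler(void)
    {
      struct proc *p;

      setjmp(&cpus[cpu()].context);     // swtch longjmps back to this point
      for(;;){
        if((p = pick_runnable()) != 0){
          curproc[cpu()] = p;           // p is now the running proc on this cpu
          longjmp(&p->context);         // resume p on p's kernel stack
        }
        // otherwise idle until something becomes runnable
      }
    }

    // Called only on behalf of a running proc (curproc[cpu()] != NULL):
    // save its context, then hop over to the scheduler on the cpu stack.
    void
    swtch(void)
    {
      struct proc *p = curproc[cpu()];

      curproc[cpu()] = 0;               // no proc is running while we schedule
      if(setjmp(&p->context) == 0)
        longjmp(&cpus[cpu()].context);
      // when the scheduler picks p again, its longjmp lands here
    }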
-
- Jun 13, 2006
-
rtm authored
-
- Jun 12, 2006
-
rtm authored
-