See the article: http://blog.chinaunix.net/uid-22695386-id-272098.html
Kernels before Linux 2.4 imposed a limit on the maximum number of processes. The reason: each process had its own TSS and LDT, and the TSS (task state segment) and LDT (local descriptor table) descriptors had to live in the GDT. The GDT can hold at most 8192 descriptors; after subtracting the 12 descriptors the system uses, the maximum number of processes is (8192 - 12) / 2 = 4090 (a quick sanity check of this arithmetic follows the declaration below). Since Linux 2.4, all processes share the same TSS; more precisely, there is one TSS per CPU, and all processes running on a given CPU use that CPU's TSS. The TSS is defined in asm-i386/processor.h as follows:
extern struct tss_struct init_tss[NR_CPUS];
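Before looking at how this per-CPU TSS is set up, the arithmetic behind the old 4090-process limit can be checked directly. A minimal sketch; the constants below are the ones quoted above, not taken from any kernel header:

#include <stdio.h>

/* Pre-2.4 limit: every task needs two GDT slots (one TSS descriptor
 * plus one LDT descriptor), so the GDT size itself caps the task count. */
#define GDT_ENTRIES     8192    /* hard x86 limit: 64 KB GDT / 8-byte descriptors */
#define SYSTEM_DESCS    12      /* slots the kernel reserves for itself */

int main(void)
{
        printf("max processes = %d\n", (GDT_ENTRIES - SYSTEM_DESCS) / 2); /* 4090 */
        return 0;
}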
The TSS is initialized and loaded along the path start_kernel()->trap_init()->cpu_init():
void __init cpu_init (void)
{
        int nr = smp_processor_id();            /* id of the current CPU */
        struct tss_struct *t = &init_tss[nr];   /* the TSS this CPU uses */

        t->esp0 = current->thread.esp0;         /* update esp0 in the TSS to the current process's esp0 */
        set_tss_desc(nr, t);                    /* install this TSS's descriptor in the GDT */
        gdt_table[__TSS(nr)].b &= 0xfffffdff;   /* clear the TSS descriptor's busy bit */
        load_TR(nr);                            /* load the task register, i.e. load the TSS */
        load_LDT(&init_mm.context);             /* load the LDT */
}
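The line that masks gdt_table[__TSS(nr)].b clears the TSS descriptor's busy bit, since ltr faults if asked to load a TSS that is already marked busy. A small standalone sketch of that bit manipulation; the sample descriptor value is hypothetical:

#include <stdio.h>
#include <stdint.h>

/* In the high dword of a TSS descriptor, bits 8..11 hold the type:
 * 1001b = available 32-bit TSS, 1011b = busy. 0xfffffdff is ~(1 << 9),
 * i.e. it clears the busy bit so the subsequent ltr will not fault. */
int main(void)
{
        uint32_t b = 0x00008b00;        /* hypothetical: present, busy 32-bit TSS */
        b &= 0xfffffdff;                /* same mask as in cpu_init() above */
        printf("type field = %x\n", (b >> 8) & 0xf);    /* prints 9: available */
        return 0;
}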
As we know, a hardware task switch relies on the TSS to save all the registers (kernels before 2.4 switched tasks with a jmp through the TSS descriptor), and when an interrupt occurs the CPU also reads the ring-0 stack pointer esp0 from the TSS. So if all processes share one TSS, how can tasks be switched at all? The answer is that since 2.4 the kernel no longer performs hardware switching but software switching: registers are no longer saved in the TSS but in task->thread, and only the TSS's esp0 and I/O permission bitmap are still used. During a process switch, therefore, only the esp0 and io bitmap fields of the TSS need to be updated. The code lives in sched.c, along the path schedule()->switch_to()->__switch_to():
void fastcall __switch_to(struct task_struct *prev_p, struct task_struct *next_p)
{
        struct thread_struct *prev = &prev_p->thread,
                             *next = &next_p->thread;
        struct tss_struct *tss = init_tss + smp_processor_id(); /* this CPU's TSS */

        /*
         * Reload esp0, LDT and the page table pointer:
         */
        tss->esp0 = next->esp0; /* update tss->esp0 with the next process's esp0 */

        /* copy the next process's io_bitmap into tss->io_bitmap */
        if (prev->ioperm || next->ioperm) {
                if (next->ioperm) {
                        /*
                         * 4 cachelines copy ... not good, but not that
                         * bad either. Anyone got something better?
                         * This only affects processes which use ioperm().
                         * [Putting the TSSs into 4k-tlb mapped regions
                         * and playing VM tricks to switch the IO bitmap
                         * is not really acceptable.]
                         */
                        memcpy(tss->io_bitmap, next->io_bitmap,
                               IO_BITMAP_BYTES);
                        tss->bitmap = IO_BITMAP_OFFSET;
                } else
                        /*
                         * a bitmap offset pointing outside of the TSS limit
                         * causes a nicely controllable SIGSEGV if a process
                         * tries to use a port IO instruction. The first
                         * sys_ioperm() call sets up the bitmap properly.
                         */
                        tss->bitmap = INVALID_IO_BITMAP_OFFSET;
        }
}
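Where does next->esp0 come from? In the 2.4 layout, the task_struct and the kernel stack share one 8 KB block, so each task's esp0 is simply the top of its own kernel stack; copying it into the shared tss->esp0 at switch time is exactly what lets one TSS per CPU serve every process. A standalone sketch of that layout; the struct and sizes here are stand-ins, not kernel definitions:

#include <stdio.h>

#define TASK_UNION_SIZE (2 * 4096)      /* assumption: 8 KB task_struct + stack union */

struct task_stub { unsigned long dummy; };      /* stand-in for task_struct */

/* Ring-0 stack top for a task: the value stored in thread.esp0 at fork
 * time, and copied into the shared tss->esp0 by __switch_to() above. */
static unsigned long task_esp0(struct task_stub *p)
{
        return (unsigned long)p + TASK_UNION_SIZE;
}

int main(void)
{
        static char task_area[TASK_UNION_SIZE];
        struct task_stub *p = (struct task_stub *)task_area;
        printf("esp0 = %p\n", (void *)task_esp0(p));
        return 0;
}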
And here is the same function in a later kernel (arch/x86/kernel/process_32.c), where the leading comment explains why hardware task switching was abandoned:
/*
* switch_to(x,y) should switch tasks from x to y.
*
* We fsave/fwait so that an exception goes off at the right time
* (as a call from the fsave or fwait in effect) rather than to
* the wrong process. Lazy FP saving no longer makes any sense
* with modern CPU's, and this simplifies a lot of things (SMP
* and UP become the same).
*
* NOTE! We used to use the x86 hardware context switching. The
* reason for not using it any more becomes apparent when you
* try to recover gracefully from saved state that is no longer
* valid (stale segment register values in particular). With the
* hardware task-switch, there is no way to fix up bad state in
* a reasonable manner.
*
* The fact that Intel documents the hardware task-switching to
* be slow is a fairly red herring - this code is not noticeably
* faster. However, there _is_ some room for improvement here,
* so the performance issues may eventually be a valid point.
* More important, however, is the fact that this allows us much
* more flexibility.
*
* The return value (in %ax) will be the "prev" task after
* the task-switch, and shows up in ret_from_fork in entry.S,
* for example.
*/
struct task_struct *
__switch_to(struct task_struct *prev_p, struct task_struct *next_p)
{
        struct thread_struct *prev = &prev_p->thread,
                             *next = &next_p->thread;
        int cpu = smp_processor_id();
        struct tss_struct *tss = &per_cpu(init_tss, cpu);
        fpu_switch_t fpu;

        /* never put a printk in __switch_to... printk() calls wake_up*() indirectly */

        fpu = switch_fpu_prepare(prev_p, next_p);

        /*
         * Reload esp0.
         */
        load_sp0(tss, next);

        /*
         * Save away %gs. No need to save %fs, as it was saved on the
         * stack on entry. No need to save %es and %ds, as those are
         * always kernel segments while inside the kernel. Doing this
         * before setting the new TLS descriptors avoids the situation
         * where we temporarily have non-reloadable segments in %fs
         * and %gs. This could be an issue if the NMI handler ever
         * used %fs or %gs (it does not today), or if the kernel is
         * running inside of a hypervisor layer.
         */
        lazy_save_gs(prev->gs);

        /*
         * Load the per-thread Thread-Local Storage descriptor.
         */
        load_TLS(next, cpu);

        /*
         * Restore IOPL if needed. In normal use, the flags restore
         * in the switch assembly will handle this. But if the kernel
         * is running virtualized at a non-zero CPL, the popf will
         * not restore flags, so it must be done in a separate step.
         */
        if (get_kernel_rpl() && unlikely(prev->iopl != next->iopl))
                set_iopl_mask(next->iopl);

        /*
         * Now maybe handle debug registers and/or IO bitmaps
         */
        if (unlikely(task_thread_info(prev_p)->flags & _TIF_WORK_CTXSW_PREV ||
                     task_thread_info(next_p)->flags & _TIF_WORK_CTXSW_NEXT))
                __switch_to_xtra(prev_p, next_p, tss);

        /*
         * Leave lazy mode, flushing any hypercalls made here.
         * This must be done before restoring TLS segments so
         * the GDT and LDT are properly updated, and must be
         * done before math_state_restore, so the TS bit is up
         * to date.
         */
        arch_end_context_switch(next_p);

        /*
         * Restore %gs if needed (which is common)
         */
        if (prev->gs | next->gs)
                lazy_load_gs(next->gs);

        switch_fpu_finish(next_p, fpu);

        percpu_write(current_task, next_p);

        return prev_p;
}
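The io-bitmap path discussed above can be observed from user space: the first ioperm() call makes the kernel set up the per-task I/O bitmap, which the switch code then keeps in sync with the per-CPU TSS on every context switch. A minimal demo (x86 Linux, must run as root; port 0x80 is the traditional POST debug port):

#include <stdio.h>
#include <sys/io.h>

int main(void)
{
        if (ioperm(0x80, 1, 1)) {       /* grant this task access to port 0x80 */
                perror("ioperm");
                return 1;
        }
        outb(0, 0x80);                  /* a ring-3 port write, now permitted */
        ioperm(0x80, 1, 0);             /* drop the permission again */
        return 0;
}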