MINIX Kernel Documentation
Defines a simple, non-recursive spinlock using GCC atomic builtins.
#include <minix/sys_config.h>
Classes | |
| struct | simple_spinlock_t |
| Structure representing a simple spinlock. | |
Macros | |
| #define | MAX_SPIN_THRESHOLD 100000 |
| Maximum number of spin iterations before attempting to yield. | |
| #define | KERNEL_YIELD_DEFINED |
Functions | |
| static void | arch_pause (void) |
| Placeholder for arch_pause on non-x86 architectures. | |
| static void | kernel_yield (void) |
| Yields the CPU, typically to the scheduler. (Stub Implementation) | |
| static void | simple_spin_init (simple_spinlock_t *lock) |
| Initializes a spinlock to the unlocked state and resets statistics. | |
| static void | simple_spin_lock (simple_spinlock_t *lock) |
| Acquires a spinlock, busy-waiting if necessary. | |
| static void | simple_spin_unlock (simple_spinlock_t *lock) |
| Releases a previously acquired spinlock. | |
Defines a simple, non-recursive spinlock using GCC atomic builtins.
This header provides a basic spinlock implementation suitable for short critical sections, particularly in contexts where sleeping is not permissible (e.g., some interrupt handlers or core kernel code before schedulers are fully active). It is designed with SMP considerations, relying on GCC's atomic builtins which typically ensure full memory barriers for sequential consistency. Includes adaptive spinning using arch_pause() for supported architectures.
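As a rough sketch of the type this header describes (the field names here are assumptions for illustration; the actual MINIX layout may differ):

```c
#include <stdint.h>

/* Spins in the inner wait loop before kernel_yield() is attempted. */
#define MAX_SPIN_THRESHOLD 100000

/* Illustrative layout: one lock word plus the statistics counters the
 * documentation mentions. Field names are assumptions. */
typedef struct {
    volatile int locked;       /* 0 = unlocked, 1 = held */
    uint64_t     acquisitions; /* successful lock operations */
    uint64_t     contentions;  /* acquisitions that had to spin */
} simple_spinlock_t;
```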
| #define KERNEL_YIELD_DEFINED |
| #define MAX_SPIN_THRESHOLD 100000 |
Maximum number of spin iterations before attempting to yield.
This threshold is used in simple_spin_lock to prevent a CPU from monopolizing resources by spinning indefinitely on a highly contended lock. After this many spins in the inner loop, kernel_yield() is called. The value should be tuned based on system characteristics and expected contention levels.
static void arch_pause (void)
Placeholder for arch_pause on non-x86 architectures.
For architectures other than i386/x86_64, this function currently acts as a no-op. It can be defined with architecture-specific pause/yield instructions if available to improve spin-wait loop performance.
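On x86, such a spin-wait hint is typically the PAUSE instruction; a sketch using GCC-style inline assembly (not the actual MINIX source):

```c
/* Sketch: PAUSE hint on i386/x86_64, no-op elsewhere. */
static inline void arch_pause(void)
{
#if defined(__i386__) || defined(__x86_64__)
    /* Tells the CPU this is a spin-wait loop, reducing power use and
     * avoiding memory-order mis-speculation on loop exit. */
    __asm__ __volatile__("pause" ::: "memory");
#else
    /* No-op; an architecture-specific hint (e.g. ARM "yield")
     * could be substituted here. */
#endif
}
```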
static void kernel_yield (void)
Yields the CPU, typically to the scheduler. (Stub Implementation)
This function is called when a spinlock has been spinning for too long (exceeding MAX_SPIN_THRESHOLD), as a mechanism to prevent CPU monopolization and allow other threads/processes to run.
The current stub simply calls arch_pause() to reduce contention. A real implementation might call something like sched_yield() or yield().
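A minimal stub consistent with this description might look like the following (a sketch, not the actual MINIX code):

```c
/* Stub sketch: with no scheduler hook available, fall back to a CPU
 * pause hint so the waiting core at least backs off briefly. */
static inline void kernel_yield(void)
{
#if defined(__i386__) || defined(__x86_64__)
    __asm__ __volatile__("pause" ::: "memory");
#endif
    /* A real kernel would invoke the scheduler here, via a
     * sched_yield()-like service. */
}
```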
static void simple_spin_init (simple_spinlock_t *lock)
Initializes a spinlock to the unlocked state and resets statistics.
| lock | Pointer to the simple_spinlock_t to initialize. |
This function must be called before the spinlock is used for the first time. It sets the lock state to 0 (unlocked) and initializes statistics counters to zero.
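In outline (assuming the illustrative field names locked, acquisitions, and contentions), initialization is plain stores, since the lock must not yet be visible to other CPUs:

```c
#include <stdint.h>

/* Illustrative struct; field names are assumptions. */
typedef struct {
    volatile int locked;
    uint64_t     acquisitions;
    uint64_t     contentions;
} simple_spinlock_t;

/* Must run before first use; plain stores suffice because no other
 * CPU may touch the lock until initialization completes. */
static inline void simple_spin_init(simple_spinlock_t *lock)
{
    lock->locked       = 0;  /* unlocked */
    lock->acquisitions = 0;
    lock->contentions  = 0;
}
```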
static void simple_spin_lock (simple_spinlock_t *lock)
Acquires a spinlock, busy-waiting if necessary.
| lock | Pointer to the simple_spinlock_t to acquire. |
This function attempts to acquire the lock. If the lock is already held, it will spin (busy-wait) until the lock becomes available. This function is non-recursive; a thread attempting to acquire a lock it already holds will deadlock. Includes a spin counter and calls kernel_yield() if spinning excessively. Also updates lock acquisition and contention statistics.
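One common shape for such an acquire loop, sketched with the __sync family of GCC builtins (the actual MINIX code may differ; the helper stubs and field names below are assumptions):

```c
#include <stdint.h>

#define MAX_SPIN_THRESHOLD 100000

typedef struct {
    volatile int locked;
    uint64_t     acquisitions;
    uint64_t     contentions;
} simple_spinlock_t;

static inline void arch_pause(void)
{
#if defined(__i386__) || defined(__x86_64__)
    __asm__ __volatile__("pause" ::: "memory");
#endif
}

static inline void kernel_yield(void) { arch_pause(); }  /* stub */

static inline void simple_spin_lock(simple_spinlock_t *lock)
{
    unsigned long spins = 0;
    int contended = 0;

    /* __sync_lock_test_and_set returns the previous value:
     * 0 means we just acquired the lock. */
    while (__sync_lock_test_and_set(&lock->locked, 1) != 0) {
        contended = 1;
        /* Spin on plain reads until the lock looks free, so the cache
         * line is not bounced by repeated atomic writes. */
        while (lock->locked != 0) {
            arch_pause();
            if (++spins >= MAX_SPIN_THRESHOLD) {
                kernel_yield();
                spins = 0;
            }
        }
    }
    lock->acquisitions++;          /* safe: we now hold the lock */
    if (contended)
        lock->contentions++;
}
```

Note the two-level loop: the outer atomic test-and-set claims the lock, while the inner read-only loop waits for it to appear free, which keeps contention on the cache line low.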
static void simple_spin_unlock (simple_spinlock_t *lock)
Releases a previously acquired spinlock.
| lock | Pointer to the simple_spinlock_t to release. |
This function releases the lock, allowing another thread to acquire it. It must only be called by the thread that currently holds the lock.
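Release is a single store with release semantics; a sketch using the matching __sync builtin (the field name is an assumption):

```c
#include <stdint.h>

typedef struct {
    volatile int locked;
    uint64_t     acquisitions;
    uint64_t     contentions;
} simple_spinlock_t;

/* Clears the lock word with release semantics; only the current
 * holder may call this. */
static inline void simple_spin_unlock(simple_spinlock_t *lock)
{
    __sync_lock_release(&lock->locked);  /* stores 0 */
}
```

Typical usage brackets a short critical section: simple_spin_lock(&l); ... critical section ...; simple_spin_unlock(&l);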