
In the textbook I'm reading, it states that

If the interrupt is of a lower/equal priority to the current process then the current process continues. If it is of a higher priority, the CPU finishes its current Fetch-Decode-Execute cycle. The contents of the CPU’s registers are copied to a st...

In short, the CPU notices the interrupt immediately and either ignores/defers it or finishes the current cycle before processing it.

I don't know if this is just a simplification, but this doesn't seem right...

I thought interrupts were pushed into a buffer while the CPU was running, and that the CPU checked the buffer after every cycle. That seems more plausible to me, but it contradicts the previous explanation.

Does the CPU physically get interrupted by the interrupt, or does it briefly check the buffer after every cycle?

I don't know much of the terminology, besides "an interrupt service routine."


Edit: Is it a mixture of both? The quoted explanation refers to a process, which I realise could span many CPU cycles, each of which ends with a check of the buffer. If the interrupt's priority is higher, the process is swapped out for the ISR.

  • The textbook is outdated, since it's talking about the current cycle. In modern CPUs there are multiple cycles in flight at once; some CPUs even run two hardware threads per core (e.g. Intel Hyper-Threading). And indeed, interrupt dispatching on these systems is much more like your buffer model.
    – MSalters
    Commented Jun 16, 2017 at 12:05
  • It's a pre-university-level CS course, with lots of needless abstraction like this.
    – Tobi
    Commented Jun 16, 2017 at 13:40
  • The model CPU we're taught is a very basic ~1960s CPU, unrepresentative of today's.
    – Tobi
    Commented Jun 16, 2017 at 13:41
  • "I don't know if this is just a simplification" - Sounds like you do know.
    – MSalters
    Commented Jun 16, 2017 at 13:46

1 Answer

  1. It would only make sense to service interrupts after a full cycle completes, not immediately, because intermediate values are only saved to the register file at the end of each cycle. Being able to service an interrupt at an arbitrary point within a cycle would add a lot of complexity to a processor for very little benefit. Interrupts are time-critical, but they are typically not critical at the microsecond or nanosecond level.

    How this happens specifically is a matter of micro-architecture, so it can vary from processor to processor, and the processor manufacturers do not share that level of implementation detail.

  2. The interrupt hardware does not queue interrupts. Different hardware interrupts are asserted by electrifying pins on the processor. When an interrupt pin is sensed, the processor transfers control to a special piece of software in the operating system called the interrupt service routine. Handling an interrupt therefore takes many cycles at a minimum, and it is possible for an interrupt to be dropped if two interrupts occur near-simultaneously.
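
    To make the cycle-boundary idea concrete, here is a toy software model of such a processor. It is only an illustration of the concept, not any real machine's logic, and every name in it (cpu_t, irq_pending, service_interrupt, and so on) is made up:

        /* Toy model: a pretend CPU that samples its interrupt "pin" only at
         * the boundary between fetch-decode-execute cycles. All names here
         * are illustrative, not a real architecture or API. */
        #include <stdbool.h>
        #include <stdint.h>

        typedef struct {
            uint32_t pc;                /* program counter           */
            uint32_t regs[16];          /* general-purpose registers */
            volatile bool irq_pending;  /* models the interrupt pin  */
        } cpu_t;

        static void fetch_decode_execute(cpu_t *cpu) {
            /* ...fetch the instruction at cpu->pc, decode it, execute it,
             * and write the results back to cpu->regs... */
            cpu->pc += 4;
        }

        static void service_interrupt(cpu_t *cpu) {
            /* Save the register state (real hardware would push it to a
             * stack or shadow registers), then jump to the ISR. */
            (void)cpu;
        }

        void run(cpu_t *cpu) {
            for (;;) {
                fetch_decode_execute(cpu);  /* one full cycle completes  */
                if (cpu->irq_pending)       /* the pin is sampled only   */
                    service_interrupt(cpu); /* here, at a cycle boundary */
            }
        }

    Everything in the loop body finishes before the pin is looked at, which is the point: there is no half-finished instruction to account for when control moves to the ISR.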

  3. However, modern OSes use a variety of techniques to handle interrupts efficiently, some of which might involve queuing. These techniques improve the behavior of the computer and essentially eliminate the possibility of dropping interrupts under normal conditions.

    A classic approach used in Linux is the top-half/bottom-half split. When an interrupt occurs, the physical hardware is dealt with by the top half, while any significant processing is deferred to the bottom half, which can run at a later time. For example, suppose we're writing a driver for a network card. The top half of the driver might be responsible for copying an incoming packet from the card into the computer's memory and clearing the interrupt, while the bottom half (implemented in Linux as, for example, a tasklet) is responsible for processing the packet and routing it. This allows the physical interrupt to be cleared as quickly as possible, and lets software retrieve and queue multiple packets even though the processor's interrupt hardware has no notion of queuing.
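
    As a rough sketch of that split (assuming a made-up network card; request_irq, tasklet_init, tasklet_schedule, and IRQ_HANDLED are the real, if now legacy, kernel interfaces, while the IRQ number and the device-specific calls in comments are placeholders):

        #include <linux/interrupt.h>

        static struct tasklet_struct rx_tasklet;

        /* Bottom half: runs later, outside hard-interrupt context, and does
         * the heavy lifting (protocol processing, routing). */
        static void rx_bottom_half(unsigned long data)
        {
            /* process_queued_packets();  -- hypothetical driver code */
        }

        /* Top half: runs as soon as the interrupt fires. Keep it short: pull
         * the packet off the card, clear the interrupt, defer the rest. */
        static irqreturn_t rx_top_half(int irq, void *dev_id)
        {
            /* copy_packet_from_card_to_ram();  -- hypothetical */
            /* ack_card_interrupt();            -- hypothetical */
            tasklet_schedule(&rx_tasklet);  /* run the bottom half soon */
            return IRQ_HANDLED;
        }

        /* Called from the driver's module_init hook. */
        static int rx_init(void)
        {
            tasklet_init(&rx_tasklet, rx_bottom_half, 0);
            /* the IRQ number 42 and the "toy-nic" name are placeholders */
            return request_irq(42, rx_top_half, 0, "toy-nic", NULL);
        }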

    However, modern Linux has superseded all of these concepts, and it is now common to use threaded interrupt handlers, which take a deferred-processing approach similar to the top-half/bottom-half mechanism but simplify the instantiation of the bottom half in the kernel. Here there might not be an actual queue of interrupts, but each interrupt can be handled concurrently by a different thread in the kernel.
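
    A sketch of the threaded-handler version of the same hypothetical driver (request_threaded_irq, IRQ_WAKE_THREAD, and IRQ_HANDLED are real kernel interfaces; the IRQ number and the device-specific calls are again placeholders):

        #include <linux/interrupt.h>

        /* Primary (hard) handler: runs in interrupt context, quiets the
         * device, then asks the kernel to wake the handler thread. */
        static irqreturn_t nic_hardirq(int irq, void *dev_id)
        {
            /* ack_card_interrupt();  -- hypothetical */
            return IRQ_WAKE_THREAD;
        }

        /* Threaded handler: runs in its own kernel thread, so it may sleep,
         * take mutexes, and do the real packet processing. */
        static irqreturn_t nic_thread_fn(int irq, void *dev_id)
        {
            /* process_packets(dev_id);  -- hypothetical */
            return IRQ_HANDLED;
        }

        static int nic_init(void)
        {
            /* the IRQ number 42 and the "toy-nic" name are placeholders */
            return request_threaded_irq(42, nic_hardirq, nic_thread_fn,
                                        0, "toy-nic", NULL);
        }

    The kernel creates and schedules the handler thread itself, which is what simplifying the instantiation of the bottom half amounts to in practice.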

  • So there's no interrupt buffer, just pins that are checked by the CPU after every cycle? These pins could be overwritten by a following interrupt (only by one of the same type? Are there different pins for different types?), but this is very unlikely because of the clock speed? When you say hands over control, do you mean that at the end of the cycle, if an interrupt needs to be handled, the CPU swaps out the current process for the interrupt handler (called the interrupt service routine, right?)?
    – Tobi
    Commented Jun 16, 2017 at 4:51
  • The modern OSes that use a variety of techniques to handle interrupts efficiently start with the Atlas supervisor in 1962, which invented the three-level architecture (called interrupt routine, supervisor extracode routine, and object program) that prevails to this day. Atlas also invented virtual memory, memory-mapped IO, automatic buffering of slow devices (now redundant), ... . Seymour Cray's CDC 6600 (another wonderful design) killed Atlas.
    – Thumbnail
    Commented Jun 16, 2017 at 8:09
  • "It would only make sense to service interrupts after a full cycle completes" - well, except for the fact that on pipelined architectures the cycles overlap. And that's not even mentioning out-of-order processing, where the overlap varies with the instruction, or even the data (!). "hardware interrupts are asserted by electrifying pins on the processor" - not on normal PCs with a PCIe bus; they use Message Signaled Interrupts.
    – MSalters
    Commented Jun 16, 2017 at 14:04
  • @MSalters Modern processor architectures have a lot more going on, but they don't change the conceptual picture that the processor is not going to handle intra-cycle interrupts. It's going to handle them only at cycle boundaries. With respect to MSI, the interrupt is still ultimately asserted via a pin to the processor (through a PIC or APIC).
    – David
    Commented Jun 16, 2017 at 15:04
  • @Tobi Conceptually yes - interrupt handling has evolved a bit over the years. Early Intel processors had one, two, or three pins for handling interrupts (called INTR, NMI, and TRAP). Then we evolved programmable interrupt controllers (PIC) and then advanced programmable interrupt controllers (APIC), which multiplex multiple interrupt lines into one or two pins. Then, as MSalters points out, we have yet more complex interrupt handling hardware.
    – David
    Commented Jun 16, 2017 at 15:22
