Dave wrote: Sun Sep 07, 2025 9:00 pm
Can an interrupt be too frequent? Even in the fastest system?
Sure - when the overhead of just responding to an interrupt gets to be on the order of the normal processing, you've got a problem.
Of course there is one more way this can happen, and that's when interrupt processing (i.e. the actual code that handles interrupts plus the CPU's interrupt overhead) gets to be on the order of non-interrupt processing - at that point you may need to re-think your system design...
One thing that helps when deciding on this is to assess the CPU-incurred overhead of interrupt processing: the number of cycles needed for the CPU to recognize the interrupt, all the way to the first instruction of the handling code, plus the number of cycles needed to exit from the interrupt back to the previously executing code, i.e. from the last instruction of the interrupt handler (usually RTE) back to non-interrupt code. This is the time penalty paid for every interrupt. Keep in mind there are worst and best case scenarios - for instance, it will take much longer for an interrupt to be recognized if the CPU is executing a division than a simple move, since it needs to finish the current instruction, and different instructions take (sometimes radically) different times to execute.
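To put numbers on it, here is a back-of-the-envelope sketch in C. The cycle counts are placeholders in the ballpark of a 68000-class CPU (interrupt entry ~44 cycles, RTE ~20, a division up to ~158) - look the real figures up in your CPU manual before relying on them.

Code:
#include <stdio.h>

/* Rough interrupt-overhead estimate; all cycle counts are assumptions. */
int main(void)
{
    double cpu_hz         = 7500000.0; /* assumed 7.5 MHz clock                 */
    double entry_cycles   = 44.0;      /* recognize interrupt, stack PC/SR,
                                          fetch vector, reach first handler op  */
    double exit_cycles    = 20.0;      /* RTE back to interrupted code          */
    double handler_cycles = 200.0;     /* the actual work done per interrupt    */
    double irq_per_sec    = 8000.0;    /* interrupt rate to evaluate            */

    double load = (entry_cycles + handler_cycles + exit_cycles)
                  * irq_per_sec / cpu_hz;
    printf("CPU time spent on interrupts: %.1f%%\n", load * 100.0);

    /* Worst-case latency: the longest instruction must finish first
       (a division takes far longer than a simple move). */
    double longest_instr = 158.0;
    printf("Worst-case time to first handler instruction: %.1f us\n",
           (longest_instr + entry_cycles) / cpu_hz * 1e6);
    return 0;
}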
Then there is the case of multiple interrupt levels: just as one interrupt has entered processing, another occurs. Some of that processing is not interruptible even by a higher level interrupt, so you get extra delays. This can make the overhead less predictable and repeatable - another thing to keep in mind when calculating worst case response.
Finally, if the CPU has a cache, worst and best case times can differ radically.
In any case, this is a calculation that should be made before one even considers how long the actual interrupt work (reading status registers, data registers, loading/storing data, etc.) takes. More on this WRT interrupt sharing.
Is there any case where an interrupt over, say, 8096 Hz is useful?
Possibly. This is the other part of the basic interrupt system design.
What kind of an interrupt rate can we expect? This depends on a lot of aspects of the hardware design. For instance, if data is to be moved under interrupt, is there some sort of buffering, is there some sort of handshake involved, and can the data flow be completely stopped if needed?
The last question is important, as the system can get bogged down trying to do several things at once; some can be deferred while others can't - and while deferring data transfers will slow down the transfer, no loss of data will occur. Hence, the ones that can't be deferred are more important.
Here is an example - suppose we are reading from an ED floppy drive, which reads at 1 Mbit per second. Since the disk spins, there is no waiting, or there will be a data over-run and hence data lost. Since we are reading bytes, the byte rate is 125 kbytes/s, and the reading is done one byte at a time, i.e. there is only one byte of 'buffer' between the floppy controller and the CPU. In this case, reading the data under interrupt is a kind of mission impossible, as in order to move one byte, there are approaching 100 bytes moved in interrupt pre- and post-processing plus the actual byte-moving code. Normally this would be a loop along the lines of: 'read the status register to see if a new byte is ready; if so, move the byte from the data register into RAM, increment the address where the next byte will go, check whether this was the last move, and if not, loop back to the beginning'.
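In C-like terms, that polling loop might look like the sketch below; the register addresses and the status bit are hypothetical, not those of any real floppy controller.

Code:
#include <stdint.h>

/* Hypothetical memory-mapped floppy controller registers. */
#define FDC_STATUS (*(volatile uint8_t *)0xC0000)
#define FDC_DATA   (*(volatile uint8_t *)0xC0001)
#define FDC_DRQ    0x01   /* 'data request' bit: a new byte is ready */

/* Programmed-I/O read: spin on the status register and move each byte
   from the data register into RAM as it becomes available. */
void read_block_pio(uint8_t *dest, int count)
{
    while (count > 0) {
        if (FDC_STATUS & FDC_DRQ) {
            *dest++ = FDC_DATA;   /* move byte, advance destination */
            count--;              /* was that the last one?         */
        }
    }
}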
If this was done under interrupt, then along with the CPU's interrupt handling we now need to re-load the addresses of the status and data registers and the address where the data is supposed to go, do what happens in one iteration of the above loop, update the address for the next transfer, and then return from interrupt - for EVERY byte.
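Sketched the same way, the per-byte interrupt version has to re-establish its context on every single byte (again with hypothetical register addresses; how the handler gets hooked into the OS is not shown):

Code:
#include <stdint.h>

#define FDC_DATA (*(volatile uint8_t *)0xC0001)   /* hypothetical address */

/* State the handler must pick up again on every interrupt. */
static uint8_t *volatile fdc_dest;    /* where the next byte goes */
static volatile int      fdc_count;   /* bytes still expected     */

/* Entered once per byte: on top of the CPU's own interrupt entry and
   the final return-from-interrupt, each call reloads the pointers,
   moves a single byte, and updates the state for the next interrupt. */
void fdc_byte_handler(void)
{
    if (fdc_count > 0) {
        *fdc_dest++ = FDC_DATA;   /* one byte from data register to RAM */
        fdc_count--;
    }
    /* clearing/acknowledging the controller's interrupt would go here */
}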
On the other hand, what if there is a FIFO data buffer in the hardware of the floppy disk controller? Well, as soon as the buffer is not empty (or, even better, as soon as the FIFO is full to a defined threshold), an interrupt is generated.
The CPU has as much time to react to the interrupt and start moving the data from the FIFO to RAM as there is space left in the FIFO. Once the moving starts, there may actually be some time savings, because it is sometimes possible to know that a minimum number of bytes can be moved without needing to check whether a byte is available. So, once the moving starts, it may actually be faster than doing direct programmed IO as described above.
So, how often would the interrupt occur if we manage to use the whole FIFO capacity? Well, about 125 kbytes/s divided by the size of the FIFO - for the 16-byte FIFO of our example, that would be about 7813 times a second.
At this point it is worth mentioning that for this particular case, it would mean that just as you finish emptying the FIFO and return from the interrupt, the FIFO starts filling again and you get another interrupt - in other words, this is the break-even point where the interrupts do nothing but the data transfer, which is obviously not an improvement.
However, if the CPU is fast, the actual data transfer can take a very short time, so the proper strategy is not to generate an interrupt until a certain amount of data is already present in the FIFO - this being the FIFO threshold (the number of bytes present) at which the interrupt is generated. This takes into account that the CPU can react fairly quickly to the interrupt, and that when the interrupt does occur the FIFO will actually be close to full, so the CPU will quickly unload it to RAM and go back to its usual business. For our example of a 16-byte FIFO, if we set the threshold to 12, what will happen is that the CPU will do 'other things' for 12/16 of the time, while the data transfer will take 4/16 of the time.
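The arithmetic behind that trade-off, using the example's numbers (1 Mbit/s, 16-byte FIFO, threshold of 12); this is just a worked calculation, not driver code:

Code:
#include <stdio.h>

int main(void)
{
    double byte_rate = 125000.0;  /* 1 Mbit/s  =  125 kbytes/s            */
    int    fifo_size = 16;        /* example FIFO depth                   */
    int    threshold = 12;        /* interrupt raised at 12 bytes present */

    /* If the whole FIFO is drained on each interrupt: */
    double irq_rate = byte_rate / fifo_size;               /* ~7813 per s */

    /* With the threshold at 12, four bytes of space remain when the
       interrupt fires, so the CPU has this long to react before overrun: */
    double slack_us = (fifo_size - threshold) / byte_rate * 1e6;  /* ~32 us */

    printf("Interrupts per second (full FIFO drained): %.0f\n", irq_rate);
    printf("Reaction time before overrun: %.0f us\n", slack_us);
    return 0;
}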
But what if your hardware does have a FIFO but no threshold setting?
Well, we calculated that the FIFO would need to be emptied at least 7813 times a second. If we cannot count on the hardware having a configurable FIFO threshold, one thing we could use is a periodic interrupt which checks the FIFO state and, if needed, transfers data. Obviously one could make the periodic interrupt happen very often, but then you would not really be using the FIFO effectively, so some optimisation is necessary. Would 8096 times a second be often enough? (Hint: risky, but we are on the right track.)
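One way to picture the periodic approach is a timer handler that simply drains whatever the FIFO holds on each tick. The register addresses (including a FIFO fill-count register) are assumptions for the sake of the sketch:

Code:
#include <stdint.h>

#define FDC_FIFO_COUNT (*(volatile uint8_t *)0xC0002)  /* bytes in FIFO (hypothetical) */
#define FDC_DATA       (*(volatile uint8_t *)0xC0001)  /* data register (hypothetical) */

static uint8_t *volatile fdc_dest;   /* destination pointer in RAM */

/* Called from a fixed-rate timer interrupt. The tick rate must be at
   least byte_rate / fifo_size (about 7813 Hz for a 16-byte FIFO at
   125 kbytes/s), otherwise the FIFO overruns between ticks; at 8096 Hz
   the margin is only a few percent, hence 'risky'. */
void timer_tick(void)
{
    uint8_t n = FDC_FIFO_COUNT;    /* how much accumulated since last tick */
    while (n--)
        *fdc_dest++ = FDC_DATA;    /* drain it in one burst                */
}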
This creates a conflict because it is stated that Minerva starts at the top of the autovector table and doesn't leave room for the manual vector table that would reside right above it. QDOS certainly doesn't. This makes it likely I am limited to using alternate interrupts but restricting myself to autovector.
Wellllll..... note that there is no protection on interrupt vector numbers, so you could use other unused vector numbers below vector $31, which actually includes levels not implemented on the 68008, unused and uninitialized vectors, etc.
This is disappointing because it limits the number of different cards that can uniquely interrupt to 4.
3, as level 0 is 'no interrupt'
Past that, similar cards would need to share an interrupt. I am sure that if two identical cards in different slots share an interrupt, the person who develops them can write the handler to query the cards and have only the relevant one respond. This does mean only one card in the group can issue an interrupt at a time; the other card would have to be held up until the first card is acknowledged and negates its own interrupt request. You can see why this isn't ideal.
Actually no.
Cards would generate interrupts using a wired-OR mechanism. The way interrupt handlers are linked into the OS makes this very simple, even when disparate cards use the same interrupt, although it would be prudent to share interrupts between cards with similar characteristics regarding how they are processed (particularly regarding the time interrupt processing takes).
In any case, all hardware capable of causing interrupts must have a mechanism to clear the interrupt (stop generating the interrupt signal) once its servicing has started.
Keep in mind the QL itself has multiple sources of interrupt, all sharing one level. Each linked interrupt handler checks some sort of status register to figure out if 'its' hardware caused the interrupt; then, as part of handling the interrupt, the bit that says 'I caused it' gets cleared, which also stops the interrupt signal. However, if other pieces of hardware are also generating an interrupt, the next linked handler will check for that, and so on until all are checked, handled and cleared. This is an extremely common mechanism. Yes, it does carry the risk of 'something' generating an interrupt that none of the handlers finds, so the interrupt keeps hanging on - but so does any system that is not configured properly, for instance with the wrong interrupt vector being generated.
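A sketch of that linked-handler pattern; the list structure and service routines here are illustrative assumptions, not the actual QDOS/Minerva linkage:

Code:
#include <stddef.h>

/* Each device on the shared level supplies a service routine. It checks
   its own status register, and if its hardware raised the interrupt it
   handles it and clears the 'I caused it' bit, which drops the request. */
struct irq_node {
    void (*service)(void);
    struct irq_node *next;
};

static struct irq_node *irq_chain;   /* list maintained by the OS */

/* Single entry point for the shared level: walk the whole chain, so every
   device currently asserting the line gets serviced in one pass. */
void shared_level_interrupt(void)
{
    for (struct irq_node *n = irq_chain; n != NULL; n = n->next)
        n->service();
}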
It should be noted that interrupt sharing in this manner actually has less overhead than the classic one-per-level/one-per-vector approach, because once an interrupt happens, the actual interrupt processing by the CPU happens only once for any number of shared interrupts that have to be processed - the overlap does not require all the CPU context stacking and restoring, etc. When this is combined with an appropriate set of periodic interrupt(s), it can actually be very efficient. I have some nice documentation where Tony Tebby argues this as an important point regarding the predictability of real time operating systems.