Simple answer: Use a FIFO.
Background:
Let's look at your problem from a more abstract point of view. You have multiple critical sections competing against each other. This is an example of over-provisioning leading to over-production or over-consumption: multiple resources are contending for the CPU. Interrupts are notorious for causing this; they are so short-sighted and simple that it happens easily. A lot of digital-logic people design this way out of habit, and it is a bad one: the logic ends up too tight and brittle.
What people in this millennium have come to recognize is that we should use deferred processing in a synchronous manner via non-blocking logic, sometimes referred to as asynchronous. Event-driven sounds different from deferred processing, but it is very similar. Most of these models work well with both cooperative and preemptive scheduling; cooperative technically has the least overhead, though that should not be your first concern.
The issue with microcontrollers is that, like processors generally, they do not manage dynamic workloads particularly well without large resources. Like digital logic, they also struggle somewhat with large single-threaded static workloads, which means it is easy to overload them in time-critical sections. A processor running at 1 GHz that only needs 100 MHz of throughput may still fail to meet timing. This is again over-production, and it can be mitigated by decoupling into deferred processing, potentially through parallelism, though that depends on the case. Generally, in situations like this, it is best to figure out how scalable the problem really is.
Solution:
So let's start nice and easy. Create one process for the data-capture tasks; we will call this one CISC operation. Store the results of this operation into a short queue. Have the interrupt test the state of the queue: skip if it is empty, pop if it is not. This gives the interrupt what it is looking for. The effective result is pipelining, which is the architecture of choice for I/O processes. This does create latency, which we may or may not care about. The CISC operation builds a data packet, ensures it is ready to go, and holds it back until the interrupt needs it.
We can do deferred processing as a consumption process (which requires buffered control flow or massive single-threaded processing power) or as a production process. You are a production process bottlenecked on consumption. The good thing about a production process like this is that you can implement the control flow yourself. Queues are great tools for decoupling the timing of processes and should make scheduling more manageable, though they do increase resource requirements.
I will skip over DMA, networking, and parallelism for now. Interrupts are scheduling notions, usually used to call the scheduler and/or modify execution state. Control flow and synchronization should take effect, potentially indefinitely; watchdogs and timeout logic can be used to restore functionality as part of the error-handling process.
Statistics: Posted by dthacher — Thu Dec 05, 2024 3:34 pm