I tried searching on Google but didn't find an answer. I need to apply "effects", as in mathematical equations (especially sine waves), to around 8192 values. Ideally this should take only a few milliseconds, and it must not take longer than 10 ms.
Similarly, I need to read input for 8192 values. Most likely this will be a memcpy from one array to another, but it might take longer since there can be multiple input sources (network/serial/USB).
In this case, I have to guarantee that these two steps together take no longer than about 25 ms before outputting.
Is there a way to strictly limit these functions to a timeframe, so they are killed/returned if they take too long?
I cannot spawn a new thread for each input/effect step, as creating threads takes time. Both methods will be called in a loop, directly after one another, so they will run approx. 30-45 times per second.
What's the way to limit these? I can guarantee the read time where I copy a buffer, but running math operations seems like something that could potentially take too long.
Real-time operating systems can somehow guarantee such deadlines, so what's the approach here?
(BTW, this is all going to run on a minimal Debian Linux system, with the GUI decoupled from the actual mechanism.)
One of my ideas would be to pre-calculate those values in another thread, say the next 100 iterations for a given value, and then just replay them so I have some buffer.
Other ideas?
Thanks! It's not a university project, more of a home project trying to beat some other software :)
For more info, I just posted under @deegeese@sopuli.xyz's comment on this post!
It will be open source later on, but I have to tidy everything up before pushing to GitHub.