In a nutshell, working with RioDB is similar to working with a traditional database. But instead of querying data from tables and views, it queries data from streams and windows, which are constantly changing. For example, a "window" might hold every record received in the last two hours, the last one million stock market transactions, or all requests posted to a web server during the last five minutes.
Developers can enter many queries that interact with many windows. These queries are deployed to run continuously and indefinitely. Every time a query finds a match, it triggers an action (like alerting another system) and continues running, looking for the next match (unless the query is set to auto-expire or to take a break).
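To make this concrete, here is an illustrative sketch in SQL-like pseudocode. The stream, window, and output names are hypothetical, and the exact statement forms are defined by RioDB's own query language, so treat this as a shape, not verbatim syntax:

```sql
-- Hypothetical stream of stock trades (names and syntax are illustrative).
CREATE STREAM trades (symbol STRING, price NUMBER);

-- A window holding the last 5 minutes of trades.
CREATE WINDOW trades_5m FROM trades RANGE 5m;

-- A continuously running query: every match triggers an action,
-- then the query keeps running, waiting for the next match.
SELECT symbol, AVG(price)
FROM trades_5m
WHEN price > 1.2 * AVG(price)
OUTPUT alert_downstream;
```

The key difference from a traditional database is that the SELECT is not a one-shot request: it stays deployed and fires each time a newly arrived record satisfies the condition.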
"Batch processing" is often chosen for high-volume data streams, but it sacrifices precision for performance. RioDB offers continuous processing (record-by-record query execution) in a high-performance package that is easy to implement and runs on simple infrastructure.
The trade-off is that RioDB sacrifices historical analytics; it is meant for real-time use only. Some users may need to channel their stream into two destinations: RioDB for real-time data monitoring, and a data lake for historical analysis.
A single RioDB instance by itself is not fault-tolerant.
Resilience can be achieved by deploying multiple RioDB instances behind a load balancer, or behind a common, fault-tolerant broker such as Kafka, a Redis queue, or ElastiCache. That way, if one RioDB server suffers an outage, processing continues on the remaining instances.
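As a minimal sketch of the load-balancing idea, the dispatcher below rotates records across a pool of RioDB instances round-robin. The instance addresses are hypothetical, and a real deployment would use an actual load balancer or broker (with health checks and failover) rather than this toy:

```python
import itertools

# Hypothetical addresses of three RioDB instances; adjust to your deployment.
RIODB_INSTANCES = [
    ("10.0.0.1", 8888),
    ("10.0.0.2", 8888),
    ("10.0.0.3", 8888),
]

def round_robin(instances):
    """Yield instance addresses in rotation, emulating a simple load balancer."""
    return itertools.cycle(instances)

# Each incoming record would be forwarded to next(targets);
# if one instance goes down, the others still receive their share.
targets = round_robin(RIODB_INSTANCES)
```

A real load balancer or a broker like Kafka additionally tracks which instances are healthy, which is what makes the overall deployment fault-tolerant.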
If you don't need to guarantee processing of every single message (for example, a disposable UDP stream), then you can simply point the stream directly at RioDB. This significantly reduces complexity. However, if the RioDB server suffers an outage, messages will be missed until the service is restored.
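The fire-and-forget nature of this setup can be sketched with a plain UDP sender. The host, port, and record format are hypothetical placeholders for wherever a RioDB UDP stream listener would run; the point is that UDP offers no delivery guarantee, so a datagram sent during an outage is simply lost:

```python
import socket

# Hypothetical host/port where a RioDB UDP stream listener would run.
RIODB_HOST, RIODB_PORT = "127.0.0.1", 8888

def send_record(message: str) -> int:
    """Send one record over UDP, fire-and-forget.

    No acknowledgment is received: if the RioDB server is down,
    the datagram is silently dropped -- the trade-off described above.
    Returns the number of bytes handed to the network stack.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        return sock.sendto(message.encode("utf-8"), (RIODB_HOST, RIODB_PORT))

sent = send_record("ACME,101.52")  # one comma-delimited record (format is illustrative)
```

Contrast this with the broker approach: a Kafka producer would retain the message until a consumer acknowledges it, at the cost of running and operating the broker.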