Sounds interesting. I'll put this on my list of episodes to cover.
At a first glance, I would approach it a few different ways:
1. use sidekiq-cron and create a scheduled task that polls for all of the "notifications" that need to be sent out. The downfall is that as the application grows, even a well-indexed table could start to get slow to query.
2. use Sidekiq to tie into Active Job's delayed enqueueing (wait_until). With Sidekiq it would look something like this:
Job.set(wait_until: send_at_this_time).perform_later(something)
The benefit of this approach is that no polling query is needed, since the job is already queued to execute at a later time. The downfall is that an improperly configured Redis instance (an eviction policy that drops keys under memory pressure, or persistence disabled) could result in missing jobs.
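To make option 1 concrete, here is a rough sketch of the query logic that a sidekiq-cron job might run every minute. The Notification record, its columns, and the poller class are hypothetical names, and the in-memory structs stand in for what would really be an ActiveRecord query against an indexed table:

```ruby
# Hypothetical notification record; in a real app this would be an
# ActiveRecord model with send_at/sent_at columns.
Notification = Struct.new(:id, :send_at, :sent_at, keyword_init: true)

class NotificationPoller
  # In a real app this would be something like:
  #   Notification.where(sent_at: nil).where("send_at <= ?", Time.now)
  # This is the query that can get slow as the table grows.
  def self.due(notifications, now: Time.now)
    notifications.select { |n| n.sent_at.nil? && n.send_at <= now }
  end
end

# Usage: the cron job would enqueue a delivery job for each due record.
pending = [
  Notification.new(id: 1, send_at: Time.now - 60,   sent_at: nil),
  Notification.new(id: 2, send_at: Time.now + 3600, sent_at: nil)
]
puts NotificationPoller.due(pending).map(&:id).inspect # prints [1]
```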
If data integrity is a must, I typically prefer a database-backed queue like delayed_job over an in-memory one. Calling this "data integrity" is debatable, since Redis can write its cache to disk, but it makes me feel better inside knowing that critical data is not sitting only in memory.
Regardless, I think option 2 would be the better way to go; there would just have to be some sort of hook to remove the scheduled job if the event were cancelled.
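That cancellation hook could be sketched like this. The EventCancellation class and the idea of storing the job id on the event are my own hypothetical names; with real Sidekiq you would capture the jid when enqueueing (e.g. ActiveJob's provider_job_id) and pass Sidekiq::ScheduledSet.new here, whose find_job(jid) returns an entry that responds to delete:

```ruby
# Removes a scheduled job when its event is cancelled. The scheduled
# set is injected so the logic can be exercised without Redis; in
# production it would be Sidekiq::ScheduledSet.new.
class EventCancellation
  def initialize(scheduled_set)
    @scheduled_set = scheduled_set
  end

  # jid: the Sidekiq job id captured when the job was enqueued.
  # Returns true if a scheduled job was found and deleted.
  def call(jid)
    entry = @scheduled_set.find_job(jid)
    entry&.delete
    !entry.nil?
  end
end
```

This would typically be wired into whatever callback or service handles cancelling the event, so the queued notification never fires.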