As a mental exercise, I set out to do the math.
Specifically, using a web service resource that has been designed to be polled, something akin to a queue. Our hypothetical web service has a single operation, GetMessage. Each time we call it, if a message is waiting to be processed, that message is returned. If no message is available for processing, effectively a NullMessage is returned.
Let's forget about how this scales for the moment and just get an idea of the number of operations required to process 100 messages in both a polling scenario and an eventing scenario.
For the example, let's assume I'm calling the GetMessage operation once per minute as my 'poll':
- 1 GetMessage request/minute
- 60 GetMessage requests/hour
- 1440 GetMessage requests/day
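The arithmetic behind those numbers is just multiplication, sketched here for a single polling client:

```python
# Single client polling once per minute: every request is made
# whether or not a message is actually waiting.
polls_per_minute = 1
requests_per_hour = polls_per_minute * 60   # 60
requests_per_day = requests_per_hour * 24   # 1440
print(requests_per_hour, requests_per_day)  # -> 60 1440
```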
Each invocation of GetMessage, regardless of whether or not data is available, is processed down through the application stack: web service, business object and data tier. This puts unnecessary load on the resources involved: CPU, Memory, Disk and Network - across multiple systems. That can be a pretty expensive operation, given that it might not even produce any actual data.
Suppose GetMessage supports multiple clients, say with an operation GetMessage(clientId) that retrieves any messages for a specific client. Now that same I/O overhead is multiplied by every client that's calling GetMessage; the load grows linearly with the number of clients.
Assuming three (3) clients, all polling GetMessage(clientId) at the same frequency, our requests now look something like this:
- 3 GetMessage requests/minute
- 180 GetMessage requests/hour
- 4320 GetMessage requests/day
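Generalizing the table above, a small hypothetical helper shows how the daily request count scales with the number of polling clients:

```python
def requests_per_day(clients, polls_per_minute=1):
    """Total daily GetMessage(clientId) calls across all clients.

    Load grows linearly with the number of polling clients,
    regardless of how many messages actually exist.
    """
    return clients * polls_per_minute * 60 * 24

print(requests_per_day(3))   # -> 4320, matching the numbers above
print(requests_per_day(50))  # -> 72000, most of them likely empty polls
```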
Start imagining what happens if there are more clients, or they poll more frequently. We may find ourselves having to provision hardware mainly to support operations that don't produce any business value.
With an eventing scenario, processing the same 100 messages takes exactly 100 operations. This should give better overall resource utilization.
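A toy push model illustrates the difference. This EventBus is a sketch, not any particular broker's API: the producer invokes the subscriber directly, so the operation count equals the message count and there are no empty polls.

```python
class EventBus:
    """Minimal push-based delivery: publish() invokes each subscriber
    once per message, so deliveries == messages published."""

    def __init__(self):
        self._subscribers = []
        self.deliveries = 0

    def subscribe(self, handler):
        self._subscribers.append(handler)

    def publish(self, message):
        for handler in self._subscribers:
            self.deliveries += 1
            handler(message)

bus = EventBus()
received = []
bus.subscribe(received.append)

for i in range(100):
    bus.publish(f"msg-{i}")

print(bus.deliveries)  # -> 100 operations for 100 messages
```

Compare that with the polling numbers above, where a single once-a-minute client burns 1440 operations a day even if only 100 of them return data.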
As with anything, there are tradeoffs that have to be understood and made.