For time-constrained applications, repair-server-based active local recovery can provide low-latency reliable multicast service. However, an active multicast repair service consumes resources at the repair servers in the multicast tree. Per Osland et al.'s previous work therefore presented a scheme to dynamically activate/deactivate repair servers, with the goal of using as few system resources (repair servers) as possible while improving application-level performance.
In this paper, we develop stochastic models to study the distribution of repair delay both with and without a repair server in a simple multicast tree. From these models, we observe that the application deadline, the downstream link loss rates, the number of receivers, and the upstream round trip time of a repair server all influence the overall benefit of activating a repair server. Based on these observations, we propose a modified dynamic repair server activation algorithm that considers the packet loss rate, the number of downstream receivers, and the round trip time to the nearest upstream active repair server when deciding whether to activate or deactivate a repair server.
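To illustrate how these quantities could combine into an activation decision, the following is a minimal sketch, not the paper's actual algorithm: the function name, the deadline-fraction threshold, and the delay model (repairs cost one upstream round trip, and losses on downstream links are independent) are all illustrative assumptions.

```python
def should_activate(loss_rate, n_receivers, upstream_rtt, deadline,
                    benefit_threshold=0.05):
    """Illustrative activation rule: activate a repair server when repairs
    routed to the upstream server are likely to threaten the deadline.
    All parameter names and the threshold value are assumptions."""
    # Probability that at least one downstream receiver loses the packet,
    # assuming independent losses, so that some repair round is needed.
    p_repair = 1.0 - (1.0 - loss_rate) ** n_receivers
    # Without a local repair server, each repair costs roughly one round
    # trip to the nearest upstream active repair server.
    expected_extra_delay = p_repair * upstream_rtt
    # Activate when the expected repair delay consumes more than a fixed
    # fraction of the application deadline budget.
    return expected_extra_delay > benefit_threshold * deadline
```

With many downstream receivers and a lossy link, almost every packet needs a repair somewhere in the subtree, so a long upstream round trip quickly justifies activating a local server; with few receivers and low loss, the expected repair delay stays negligible and the server can remain inactive.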
In simulation, our modified dynamic repair server activation algorithm provides a significant reduction in the latency of successful packet delivery over the original algorithm while using the same amount of system resources. We also find that much of the performance gain achievable with active repair servers can be obtained with only a relatively small fraction of repair servers active.