I have a setup where I'm using OpenPGM on the server side and MS-PGM on the
client side to multicast a large file. I'm running into an odd performance
problem when I try to increase the send rate.
For example (on the sender):
PGM_MTU: 7500
PGM_TXW_MAX_RTE: 7,000,000
This operates as I would expect, at roughly 7000 KB/sec.
However, when I change PGM_TXW_MAX_RTE to 8,000,000, the rate drops through
the floor. It isn't the repair cycle or anything like that; it's the rate of
packets coming from the sender. If I increase PGM_MTU to 9000, performance
picks back up.
Looking at the code, I think the problem is in the calculations done during
setup in the rate engine. Specifically, looking at pgm_rate_create in
rate_control.c, I see:
if ((rate_per_sec / 1000) >= max_tpdu) {
        bucket->rate_per_msec = bucket->rate_per_sec / 1000;
        bucket->rate_limit    = bucket->rate_per_msec;
} else {
        bucket->rate_limit    = bucket->rate_per_sec;
}
My first impression is that bucket->rate_limit is being set wrong in the
first branch: shouldn't it be bucket->rate_limit = bucket->rate_per_sec
there as well?
The basic workflow for the sending loop (non-blocking) is to send, check the
status code, and in the case of PGM_IO_STATUS_RATE_LIMITED, use pgm_getsockopt
to fetch PGM_RATE_REMAIN and wait before retrying. Once PGM_TXW_MAX_RTE/1000
is >= PGM_MTU, the values returned skyrocket, causing the sender to slow down
greatly.
So I'm trying to figure out whether this is (a) a mistake on our part in the
program flow, or (b) a bug in OpenPGM.
Additional details:
- server is running on FreeBSD 7.0
- we're using libpgm-5.2.122
- client is running on Windows 7 using MS-PGM
Thanks,
Jon