Bug #390
closed
time for reading port with sequence scales weirdly
Description
I'm trying to read data from a port containing a sequence of bytes through genomix (I have the same issue with both python-genomix and matlab-genomix), and it takes a rather long time (about 150 ms for 500 k bytes).
I tried scaling down my images to read them in a more reasonable time, and strangely there is a step at 1757 bytes: reading a sequence of 1756 bytes takes less than 1 ms, while reading a sequence of 1757 takes ~45 ms.
Is the genomix server limiting the bandwidth? And is it normal that such steps occur at arbitrary values?
(I'm reading from multiple out ports, if that matters)
Updated by Anthony Mallet over 1 year ago
On Monday 17 Jul 2023, at 20:02, Martin Jacquet wrote:
I'm trying to read data from a port which contains a sequence of
bytes through genomix (I have the same issue with both
python-genomix and matlab-genomix), and it takes rather long time
(about 150ms for 500k bytes).
The genomix protocol is HTTP/JSON (so basically ASCII), which means
quite some overhead. E.g. a byte array in JSON is encoded as a
string like "[66,128,47,78 ... ]".
Not the most efficient for what you are doing.
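To illustrate the overhead described above, here is a small self-contained sketch (not genomix code) comparing the size of a raw byte payload with its JSON encoding as an array of numbers:

```python
import json

# A 1 kB binary payload, as it might come from an image port.
payload = bytes(range(256)) * 4          # 1024 raw bytes

# genomix speaks HTTP/JSON, so a byte array travels as ASCII text
# like "[66,128,47,78,...]" rather than as raw binary.
encoded = json.dumps(list(payload))

print(len(payload))   # 1024 raw bytes
print(len(encoded))   # several times larger once JSON-encoded
```

With values up to 255 (three digits plus a separator per byte), the encoded text ends up roughly 3-5x the raw size, before counting HTTP framing.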
I tried to scale down my images to read them in a more reasonable
time, and strangely there is a step at 1757 bytes: reading a
sequence of 1756 takes less than a ms, while reading a sequence of
1757 takes ~45ms.
This may correspond to some threshold in the encoding, for instance a
threshold exceeding the MTU of the interface or requiring extra RAM
pages or similar. It's hard to tell without knowing the exact use
case and setup. Is it all on localhost, is there wifi involved, wired
ethernet ...
Is the genomix server limiting the bandwidth? And is it normal that
such steps occur at arbitrary values?
There is no bandwidth management.
A sample minimal test case may be helpful to analyse more in depth,
but in any case it will never be very efficient. Details on what
you are trying to do might help in finding a more efficient solution.
Updated by Martin Jacquet over 1 year ago
I'm using Gazebo to generate depth images (https://redmine.laas.fr/projects/gazebo_models/repository/gazebo_models/revisions/master/entry/mrsim-depth-camera/model.sdf#L62), which I convert to genom messages with the camgazebo-genom3 component I made while at LAAS (https://redmine.laas.fr/projects/genom-vision/repository/camgazebo-genom3), since I'm running my simulations in genom.
I have some Python code running in parallel that performs some processing on the depth images, so I'm using genomix to retrieve the images from camgazebo-genom3 (directly reading the port) with something like:
camgz = client.load('camgazebo')
frame_data = camgz.frame('compressed')['frame']
I understand that it will never be super efficient, but I was surprised by these "steps" in port reading time, which I observed when trying to decrease the size of the images and compress them.
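As a rough, self-contained illustration of the per-read cost (this is not the genomix code path, just the JSON decode step it implies for a 500 k byte sequence):

```python
import json
import time

# Hypothetical micro-benchmark: decode a JSON-encoded 500 kB byte
# sequence, roughly what python-genomix must do on each port read.
n = 500_000
encoded = "[" + ",".join("128" for _ in range(n)) + "]"

t0 = time.perf_counter()
data = json.loads(encoded)
dt = time.perf_counter() - t0

print(len(data))                         # 500000
print(f"decode took {dt * 1000:.1f} ms")
```

Even before any network transfer, parsing half a million ASCII numbers back into bytes has a measurable cost, which is part of the baseline overhead mentioned in the replies above.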
Updated by Anthony Mallet about 1 year ago
After encountering the very same behaviour, I was able to determine
that this is caused by the Nagle algorithm.
The Nagle algorithm will delay a small packet for up to 200ms in
the hope that further transmissions will fill it and make the
transfer more efficient.
The "size threshold" you experienced was actually the particular size
in your case that was just overflowing a multiple of a network packet
size (typically around 1500 bytes), leaving only a small packet to
transmit at end and triggering the Naggle algorithm.
Using TCP_NODELAY in genomix helps a lot (although the transmission
of really big images will still be inefficient).
Updated by Anthony Mallet about 1 year ago
- Status changed from New to Closed
Applied in changeset genomix|f539ecd4982ee4c9924b3434b28a4632d1f05ef5.