This topic has 11 replies, 2 voices, and was last updated 10 years, 2 months ago by Anonymous.

September 19, 2014 at 21:53 · #3300 · Anonymous (Inactive)
I’m using code like the following
connection.SendObject<byte[]>(message.Type, message.Bytes);
The problem is that message.Bytes is a byte array that I’m building prior to sending.
My build flow is similar to the following: overestimate how many bytes I’ll need (there are strings involved) and allocate a byte array of that size.
Convert the ints, longs, strings etc. into this array.
I won’t have used the full space I originally allocated, and I don’t really want to send over-sized data over the wire.
So I now create a new byte array of the correct size and copy the bytes actually used from the original array into it. I can now send the new array, and only the exact number of bytes required will be sent.
But I’ve had to create two arrays in total per message sent.
If a large number of packets are sent, then I will run into memory fragmentation and GC issues down the road. Is there any way to say, “take this 4k array and only send the first 2543 bytes please”?
Even better would be the ability to say, “take this 250 megabyte array and, starting from offset 2404352, send 2543 bytes”; then I would be able to set up a large pool and park it in the LOH from the get-go.

September 20, 2014 at 13:21 · #3301 · Anonymous (Inactive)

Yes absolutely, this is a common problem. You can manage your own send buffers if you wrap them in a
StreamSendWrapper. We have demonstrated this using a FileStream, although in your case it would probably be a MemoryStream – https://networkcomms.net/streamsendwrapper/
If you have any other questions please post back.
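A minimal sketch of this pattern, assuming a NetworkComms.Net `Connection` named `connection` is already established (the buffer sizes, byte counts, and the `"MessageType"` string are illustrative placeholders, not part of the library):

```csharp
using System.IO;

// `buffer` stands in for a pooled 4 KB array of which only `bytesUsed`
// bytes contain real data, as in the question above.
byte[] buffer = new byte[4096];
int bytesUsed = 2543;

// Wrap the buffer in a MemoryStream restricted to the used region, then a
// ThreadSafeStream, and tell StreamSendWrapper to send only those bytes.
using (var memoryStream = new MemoryStream(buffer, 0, bytesUsed, writable: false))
{
    var threadSafeStream = new StreamTools.ThreadSafeStream(memoryStream);
    var wrapper = new StreamTools.StreamSendWrapper(threadSafeStream, 0, bytesUsed);
    connection.SendObject("MessageType", wrapper);
}
// No second array is allocated, and only `bytesUsed` bytes go over the wire.
```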
Regards,
Marc

September 20, 2014 at 15:52 · #3303 · Anonymous (Inactive)

Thanks MarcF,
I’ve got my head round that and am implementing something suitable.
One question though. I’ve got rough code of the form:

Message.Serialise();
MemoryStream memoryStream = new MemoryStream(Message.Bytes);
StreamTools.ThreadSafeStream threadSafeStream = new StreamTools.ThreadSafeStream(memoryStream);
StreamTools.StreamSendWrapper streamSendWrapper = new StreamTools.StreamSendWrapper(threadSafeStream, Message.BaseOffset, Message.Length);
Connection.SendObject(Message.Type, streamSendWrapper);
Message.FreeInternalBuffer();
The thing that bothers me is the last line, the freeing up of the internal buffer in the message.
This just returns the buffer back to the pool for immediate re-use by something else.
But I’m not sure what happens with the SendObject call just prior.
Does this make a copy of the data specified by the preceding stream calls, or might the buffer not yet have been sent? Obviously, if the SendObject call relies on the original data remaining in place for a period of time, it would be bad for something else to start writing all over it. 🙂
cheers
September 20, 2014 at 16:34 · #3306 · Anonymous (Inactive)

Just realised: if the code within SendObject copies the data passed in into its own internal buffering scheme, then I don’t need a buffer pool. A single buffer would suffice (or one buffer per thread, if I’m using many threads).
Does SendObject copy internally?
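The one-buffer-per-thread idea could be sketched with plain .NET primitives like this (the `SendBuffers` class and the 4 KB size are illustrative assumptions, not part of the library):

```csharp
using System.Threading;

// Sketch only: one reusable serialisation buffer per sender thread, which is
// sufficient if SendObject does not hold on to the caller's buffer after it
// returns. ThreadLocal<T> avoids the [ThreadStatic] lazy-initialisation pitfall.
static class SendBuffers
{
    // Each thread lazily gets its own 4 KB buffer, reused for every send.
    private static readonly ThreadLocal<byte[]> Buffer =
        new ThreadLocal<byte[]>(() => new byte[4096]);

    public static byte[] Current => Buffer.Value;
}
```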
September 21, 2014 at 11:57 · #3309 · Anonymous (Inactive)

Connection.SendObject(Message.Type, streamSendWrapper);
only returns once the corresponding data has been successfully written to the underlying network stream. As such, your usage should be fine.
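Given that guarantee, a send-then-release pattern is safe. A sketch, where `pool`, `Rent`/`Return`, and `SerialiseInto` are hypothetical names for the poster's own pooling and serialisation code, not library API:

```csharp
using System.IO;

// Because SendObject blocks until the data has been written to the network
// stream, the buffer can go back to the pool as soon as the call returns.
byte[] buffer = pool.Rent();                          // hypothetical pool
try
{
    int bytesUsed = message.SerialiseInto(buffer);    // hypothetical helper
    var stream = new StreamTools.ThreadSafeStream(
        new MemoryStream(buffer, 0, bytesUsed, writable: false));
    connection.SendObject(message.Type,
        new StreamTools.StreamSendWrapper(stream, 0, bytesUsed));
}
finally
{
    pool.Return(buffer);  // safe: SendObject has already consumed the bytes
}
```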
September 21, 2014 at 11:58 · #3310 · Anonymous (Inactive)

I’ve knocked up some test code along the following lines…
Set up multiple listen servers on ports X to X + 49
Set up 50 clients and connect each of them to a single server on ports X to X + 49.
The clients send a string to the server they are connected to, with a delay of 200 ms between sends.
All servers echo this string back to their respective client. So client 0 sends “abc”, and all clients receive “abc” back.
This is basically a chat client with all clients logged into the same channel.

The server(s) work along the following lines…
Anything received goes into a single Work Queue to be acted on.
A single thread processes each item from the Work Queue in order.
Any sends required are placed into a second Send Queue which is serviced by a number of threads.
The threads that service the Send Queue just dequeue an item, serialise it to a single buffer (per thread), and perform a send.

For testing, immediately after the send is performed, I overwrite the byte array that the object was serialised into with character 88, corrupting it.
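The two-queue layout described above could be sketched with standard .NET concurrency primitives (the queue item type, thread counts, and buffer size are illustrative; the actual send is elided):

```csharp
using System.Collections.Concurrent;
using System.Text;
using System.Threading;

// One processing thread drains the Work Queue in order; several sender
// threads drain the Send Queue, each with its own reusable buffer.
var workQueue = new BlockingCollection<string>();
var sendQueue = new BlockingCollection<string>();

// Single worker: process received items in order, enqueue any sends.
new Thread(() =>
{
    foreach (var item in workQueue.GetConsumingEnumerable())
        sendQueue.Add(item);   // echo back, as in the chat example
}) { IsBackground = true }.Start();

// Four senders (the reported sweet spot): serialise into a per-thread
// buffer, then hand it to the send call.
for (int i = 0; i < 4; i++)
{
    new Thread(() =>
    {
        var buffer = new byte[4096];   // one buffer per sender thread
        foreach (var item in sendQueue.GetConsumingEnumerable())
        {
            int n = Encoding.UTF8.GetBytes(item, 0, item.Length, buffer, 0);
            // ...wrap buffer[0..n] in a StreamSendWrapper and send...
        }
    }) { IsBackground = true }.Start();
}
```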
In total, the server transmits 250000 strings in response to receiving 5000 strings.
I’ve tested with between 1 and 10 threads servicing the send queue; the sweet spot seems to be around 4 threads.
All data appears to be echoed correctly back to the clients with no issues.
From this I’m pretty sure that the NetworkComms library must either be performing a copy of the send buffer internally and using that copy, or it may be waiting until the network stack is finished with the buffer before returning.

Does this sound about right? Or have I missed something?
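The overwrite check described above amounts to the following (variable names are illustrative; 88 is ASCII 'X'):

```csharp
// After SendObject returns, deliberately stomp the per-thread buffer.
// If clients still receive the original strings, SendObject cannot be
// relying on the buffer's contents after it has returned.
connection.SendObject(messageType, streamSendWrapper);
for (int i = 0; i < sendBuffer.Length; i++)
    sendBuffer[i] = 88;   // ASCII 'X': corrupt every byte just sent
```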
cheers
September 21, 2014 at 12:00 · #3312 · Anonymous (Inactive)

Hehe, we posted at the same time 🙂
Great, thanks for the heads up again MarcF.
cheers
September 21, 2014 at 12:02 · #3313 · Anonymous (Inactive)

Looks like you may still have the default compression enabled. If you use:
NetworkComms.DefaultSendReceiveOptions = new SendReceiveOptions<ProtobufSerializer>();
you should notice a significant difference.
September 21, 2014 at 12:03 · #3314 · Anonymous (Inactive)

If you restrict yourself to only sending byte[] you can also go one step further:
NetworkComms.DefaultSendReceiveOptions = new SendReceiveOptions<NullSerializer>();
September 21, 2014 at 12:19 · #3315 · Anonymous (Inactive)

I’ve been using
new SendReceiveOptions<NullSerializer>();
for a while now.

Thanks again.