Since adding a timeout to the sockets, I've observed that the message header can sometimes become corrupted when the receive times out partway through this call (OpenIGTLinkIO/Logic/igtlioConnector.cxx, line 430 at ab439c2):

```cpp
int r = client.Socket->Receive(headerMsg->GetPackPointer(), headerMsg->GetPackSize());
```
This results in the header being filled with garbage, and when the message body buffer is then allocated, it is sized from a random (often very large!) value read out of that garbage.
This means that OpenIGTLink needs a way to determine from the header contents whether a message header is valid, and, if it isn't, a strategy for recovering from the failure (read and discard all remaining data before resuming communication?).
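As a starting point for discussion, here is a rough sketch of the kind of sanity check I have in mind, operating on the raw 58-byte header before trusting it. It follows the documented OpenIGTLink header layout (version, type name, device name, timestamp, body size, CRC); the function name and the `kMaxBodySize` ceiling are my own placeholders, not existing library API:

```cpp
#include <cassert>
#include <cctype>
#include <cstdint>
#include <cstring>

// Assumed ceiling on a plausible body size; tune to the largest message
// your application actually expects.
const uint64_t kMaxBodySize = 64ULL * 1024 * 1024;  // 64 MiB

// Sanity-check a raw 58-byte OpenIGTLink header buffer.
// Layout per the OpenIGTLink header spec:
//   bytes 0-1   version     (big-endian uint16)
//   bytes 2-13  type name   (ASCII, NUL-padded)
//   bytes 14-33 device name (ASCII, NUL-padded)
//   bytes 34-41 timestamp
//   bytes 42-49 body size   (big-endian uint64)
//   bytes 50-57 CRC of the body
bool looksLikeValidHeader(const unsigned char* h)
{
  // 1. Version must be one we understand (1 or 2 at the time of writing).
  uint16_t version = static_cast<uint16_t>((h[0] << 8) | h[1]);
  if (version != 1 && version != 2)
    return false;

  // 2. Type name must be non-empty printable ASCII, NUL-padded.
  if (h[2] == '\0')
    return false;
  for (int i = 2; i < 14; ++i)
  {
    if (h[i] == '\0')
      break;                 // remainder should be padding
    if (!std::isprint(h[i]))
      return false;          // garbage byte in the type field
  }

  // 3. Body size must be below a sane ceiling; a timed-out, desynchronized
  //    read typically produces a huge random value here.
  uint64_t bodySize = 0;
  for (int i = 42; i < 50; ++i)
    bodySize = (bodySize << 8) | h[i];
  return bodySize <= kMaxBodySize;
}
```

This won't catch every corrupted header (a CRC check on the body is still needed for that), but it should reject most garbage cheaply, before a bogus body-size allocation happens.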
What are everyone's thoughts?