kallekula
Joined: Feb 5, 2018
Messages: 7
Offline
Hi all,
I'm trying to register around 500 devices for route registration. The first 30 or so pass, and then all subsequent GetDeviceId() calls fail. I have no clue why this happens, but I found the following in the trace log from around the same time; it feels like it must be related.
I would really appreciate some help on this one.
Thank you,
/Allan


2018-05-11 23.29.15,322 :T-40: com.avaya.mvcs.proxy.TPacketReaderNode handleRead
INFO:
java.io.IOException: Connection reset by peer
at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:218)
at sun.nio.ch.IOUtil.read(IOUtil.java:191)
at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:359)
at com.avaya.common.nio.managed.defaultImpl.ManagedByteChannel.read(ManagedByteChannel.java:90)
at com.avaya.common.nio.managed.defaultImpl.ManagedByteChannel.read(ManagedByteChannel.java:90)
at com.avaya.common.packet.TPacketizer.readChannel(TPacketizer.java:152)
at com.avaya.common.packet.TPacketizer.handleCallback(TPacketizer.java:282)
at com.avaya.common.packet.TPacketizer.handle(TPacketizer.java:329)
at com.avaya.mvcs.proxy.TPacketReaderNode.handleRead(TPacketReaderNode.java:214)
at com.avaya.mvcs.proxy.Pipeline.handleRead(Pipeline.java:442)
at com.avaya.common.nio.managed.defaultImpl.DelegatingWritableReadChannelHandler.handleRead(DelegatingWritableReadChannelHandler.java:89)
at com.avaya.common.nio.channels.defaultImpl.DefaultChannelServicer.serviceChannels(DefaultChannelServicer.java:343)
at com.avaya.common.nio.channels.defaultImpl.SingleThreadedSocketChannelDaemon.run(SingleThreadedSocketChannelDaemon.java:109)
at java.lang.Thread.run(Thread.java:722)
2018-05-11 23.29.15,322 :T-40: com.avaya.mvcs.proxy.ExceptionEventHandlerNode handleEvent
WARNING: Abnormal Operation: The far end unexpectedly closed the socket, event message=Connection reset by peer
2018-05-11 23.29.15,322 :T-40: com.avaya.common.packet.TPacketizer readChannel
INFO: End of stream from DefaultTCPChannel Bound to SocketAddress: /172.24.70.44:4721 Connected to SocketAddress: /10.64.172.190:53646
2018-05-11 23.29.15,322 :T-112: com.avaya.mvcs.proxy.TPacketReaderNode handlePipelineCommand
INFO: Closing channel=DefaultTCPChannel Bound to SocketAddress: 0.0.0.0/0.0.0.0:4721 Connected to SocketAddress: /10.64.172.190:53646 and marking session=session 2E4C5CF6175DDDD58EAB8C4CA81F6BFE-154796 as inactive

MartinFlynn
Joined: Nov 30, 2009
Messages: 1922
Online
Does the application control the flow of requests to AE Services, or does it make hundreds of simultaneous requests?

Is this happening in a lab or in production?

Martin
JohnBiggs
Joined: Jun 20, 2005
Messages: 1139
Location: Rural, Virginia
Offline
I read your post to say it takes 30 passes before you come close to your 500 route registrations. That is indicative of an underlying problem, most likely with the success of each individual transaction. As Martin asks, how fast are you making requests? Try pacing them down to 10 per second and see what happens. The I/O error indicates that AES closed its socket to your application because (most likely) it could not put a packet into your application's buffer after some number (I think 3) of attempts. Consider increasing the size of your TCP receive buffer as well.
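If it helps, here is a minimal pacing sketch in plain Java. registerRoute() is just a hypothetical stand-in for whatever GetDeviceId()/route-registration call your loop actually makes; the throttling is the point:

import java.util.Arrays;
import java.util.List;

public class PacedRegistration {

    // Hypothetical stand-in for the real request your loop issues.
    static void registerRoute(String device) {
        // your existing GetDeviceId()/registration call goes here
    }

    public static void main(String[] args) throws InterruptedException {
        List<String> devices = Arrays.asList("40001", "40002" /* ... your ~500 devices */);
        for (String device : devices) {
            registerRoute(device);
            Thread.sleep(100); // 100 ms between requests = roughly 10 requests/second
        }
    }
}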
kallekula
Joined: Feb 5, 2018
Messages: 7
Offline
Hi,
Thanks for the answers!
No, we are not controlling the flow of requests; it's just a for-loop bombarding the AES. My guess was also that we might be flooding it.
We are testing this in a production environment after work hours.
How do I increase the size of the TCP receive buffer? Is that an API setting/parameter?

We will run another test tomorrow evening and add a short delay between requests to see if that works better. It would be great if someone could explain how to increase the TCP buffer size so we can play around with that as well.

Sincerely,
Allan


JohnBiggs
Joined: Jun 20, 2005
Messages: 1139
Location: Rural, Virginia
Offline
I suspect flow-controlling the requests will have the most positive impact, but read on. Better still, wait for the responses to the outstanding requests so that there are only X outstanding requests at any moment in time. You need at least two threads: one making requests and one processing the responses from AES. It sounds like you are not actively receiving the responses while you spin through the for loop making requests. Not processing the received messages while you are making so many requests will definitely cause your application problems even with the flow control in place (unless you insert a TCP buffer large enough to hold all the responses until your for loop finishes).
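To make the "at most X outstanding" idea concrete, here is a rough sketch using java.util.concurrent.Semaphore. The sendRequest/onResponse names are hypothetical placeholders for your actual request code and response listener; the shape is what matters:

import java.util.concurrent.Semaphore;

public class BoundedRequests {

    private static final int MAX_OUTSTANDING = 10;
    private final Semaphore window = new Semaphore(MAX_OUTSTANDING);

    // Called on your request thread: blocks once MAX_OUTSTANDING
    // requests have been sent without a matching response.
    void sendRequest(String device) throws InterruptedException {
        window.acquire();   // wait for a free slot in the window
        // send your GetDeviceId()/registration request for 'device' here
    }

    // Called on your response/listener thread as each response
    // (or timeout/error) arrives: frees a slot in the window.
    void onResponse(Object response) {
        // handle the response here
        window.release();
    }
}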

Dithering with the TCP receive buffer size is an OS-dependent activity (you impact ALL TCP buffers by doing it at the OS level), and beyond recommending you Google for an answer there isn't much more direction I can provide, since I don't know your OS and version. An application can request non-default buffer sizes on socket initialization on some OSes; that would require more code changes than I suspect you would like to make in the near term.
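If you do eventually go the application route, the standard Java hook is Socket.setReceiveBufferSize(), called before connecting. Whether you can reach the underlying socket depends on how your client library opens it, so treat this as a sketch of the mechanism rather than a drop-in change (the host name here is made up; 4721 is the port from your trace):

import java.net.InetSocketAddress;
import java.net.Socket;

public class BigReceiveBuffer {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket();
        // Ask for a 256 KB receive buffer. Set it BEFORE connecting so it
        // can influence the TCP window; the OS treats it as a hint and may
        // clamp it to its configured maximum.
        socket.setReceiveBufferSize(256 * 1024);
        socket.connect(new InetSocketAddress("aes-host", 4721)); // hypothetical host
        System.out.println("Effective receive buffer: " + socket.getReceiveBufferSize() + " bytes");
    }
}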