VonDvoracek
Joined: Feb 7, 2012
Messages: 0
Offline
I'm attempting to inject an audio file into a conversation using the Server Media control of the dashboard. Everything appears to be working, except that I don't hear the audio. I can see from the logs and event window that the file starts playing and stops playing when it hits the end, but I never hear my recording. It's almost as if the RTP is going somewhere else.

Any ideas?

Media Control: Server Mode
Dependency: Independent (Tried Dependent too)

Networking --> Ports --> H.323 Ports \ Server Media Enabled
RTP Local UDP Port Min* - 2048
RTP Local UDP Port Max* - 13057

JohnBiggs
Joined: Jun 20, 2005
Messages: 1135
Location: Rural, Virginia
Offline
Did you set up the RTP IP addresses in the dashboard to be the PC you are using (where the dashboard is installed)?

Did you enable the media threads so that the dashboard actually captures the RTP media stream?

Did you read the Mastering the DMCC Dashboard document where it indicates that although the dashboard can receive RTP, it does not play it through to the PC's speakers, so you will not hear anything? If you want to hear the stream, you can use a tool like Ethereal (Wireshark), which can capture it and subsequently play it back.
JohnBiggs
Joined: Jun 20, 2005
Messages: 1135
Location: Rural, Virginia
Offline
And did you make sure the ports you chose were not already in use by some other application?
VonDvoracek
Joined: Feb 7, 2012
Messages: 0
Offline
I'm confused... is this required for server-side media control? Everything I've read indicates that the media is played from the server and that these parameters cannot be changed. I'm under the impression, from reading the "Mastering the DMCC Dashboard" document among other things, that the playing of a recorded file is performed at the server and not at the client. Is this not the case?

"When a DMCC device is registered in Server Media mode, Communication Manager directs the media streams at the extension to the AE Services server. Server-side media control allows you to take advantage of AE Services media control resources to record calls, play messages and collect DTMF tones."

"In Server Media mode, the RTP media parameters are predefined and cannot be amended by the client application."
JohnBiggs
Joined: Jun 20, 2005
Messages: 1135
Location: Rural, Virginia
Offline
Maybe I misread this statement: "I'm attempting to inject an audio file into a conversation using the Server Media control of the dashboard."

Based on your response, I now assume you have a two-party call A-B, and you are using the DMCC Dashboard to send a request to inject server-based media into that call.

Is A a physical station?
Is B a DMCC Device?
Is it a simple A-B call?
Or was it an A-C call that you joined B into? If so, using what technique? SSC (single step conference)? B holds/make-call/conference?

Have you validated that the codecs B announces support for are included in the ip-codec-set form for the network region that B is registered into?

Where are you listening for audio? At A?
Have you packet-sniffed near AE Services to see where it is trying to send the RTP?
VonDvoracek
Joined: Feb 7, 2012
Messages: 0
Offline
Your assumption is correct.

I have a physical station (A) making an outbound call to an external phone number (B).

A & B are involved in a phone conversation, and the agent at A wants to play a recorded message from the server over the active conversation.

In this case I'm using the DMCC Dashboard to create the DMCC device (C) registered to the extension of A in dependent mode.

Once registered I want to play the audio file to both A & B, so I hit Play under the "Play Capabilities" of the Server Media tab.

As for where I'm listening for audio, I'm listening at both A & B.

No, we haven't packet-sniffed near AE Services yet to determine where it's trying to send the RTP.

Based on my description above, is this something I should be able to do? By all appearances it is. It just seems like I'm missing something in the configuration, or perhaps there's something in the network interfering.
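
For reference, the sequence I'm driving through the Dashboard boils down to something like the sketch below. The wrapper interface, method names, extension and file name are illustrative only, not the actual DMCC SDK calls (the real requests are described in the Programmer's Guide), but the order of operations is the same: get a device ID for A's extension, register it in dependent mode with server media, then ask the server to play the file.

// Illustrative sketch only -- not the real DMCC SDK API.
// It just shows the order of operations the Dashboard performs for me.
public class PlayAnnouncementSketch {

    /** Hypothetical stand-in for a DMCC session; not the SDK interface. */
    interface DmccSession {
        String getDeviceId(String extension);                            // get a device ID
        void register(String deviceId, String dependency, String media); // register the terminal
        void playMessage(String deviceId, String wavFileOnServer);       // start playing (voice unit)
    }

    static void playToCall(DmccSession dmcc) {
        String c = dmcc.getDeviceId("41001");      // A's extension (example value)
        dmcc.register(c, "DEPENDENT", "SERVER");   // C rides on A's registration, server media
        dmcc.playMessage(c, "announcement.wav");   // file already staged on the AE Services server
    }
}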


JohnBiggs
Joined: Jun 20, 2005
Messages: 1135
Location: Rural, Virginia
Offline
OK, so we have A (the phone) and A' (the DMCC device).
A' is registered in multiple registrations per device mode against A's extension.

The thing to know is that Communication Manager only allows one talker at a time from a particular source extension. Thus either A or A' can be talking but not both at once.

In DMCC there is a way to switch talkers from A to A' and vice versa. If you are going to use this approach, you need to switch talkers. An alternative approach would be to SSC in a third extension (a DMCC main, server media mode device) and play the announcement from there. This consumes more party slots in the call than the approach you are using; however, you will not need to manage talkers, as each party in the call will be a unique extension with the alternative approach.

I am going to use the text from the 6.2 version of the DMCC XML Programmer's Guide as my reference. Similar content is in all of the DMCC Programmer's guides, however the method calls will be slightly different. Search the document for the "share-talk" button for all the relevant information about it. I have included enough here for you to understand its use/value in this context.

The "share-talk" button
Although Communication Manager 5.0 (and later releases) allows up to three Device, Media and Call Control station clients to register to one extension, for each extension only one "Talk" time slot is used. If there are three endpoints registered with that extension, only one at a time will be able to talk, but all three can listen.
If your application wants the ability to share the talk capability, you will use the "share-talk" button. The "share-talk" button must have been administered in Communication Manager. For information on how to administer the share-talk button on Communication Manager, please see the Avaya Aura Application Enablement Services Administration and Maintenance Guide.
Communication Manager will process a "share-talk" button push only if the media mode of that endpoint is not No Media and the extension is in a call.
Once in a call, an endpoint registered as Main can press this button to block any endpoint registered as Dependent or Independent from taking over the talk capability. The Main endpoint can then unblock it by pushing this button again.
If a Main endpoint has not blocked the talk capability, a Dependent or Independent endpoint can press this button to acquire the "Talk" capability. The Dependent or Independent endpoint can press this button a second time to move the talk capability back to the Main endpoint.
Interpretation of the "share-talk" button lamp state

By an endpoint registered as Main:
- Steady On: The Main endpoint currently has the Talk capability. If Main presses the button while in this state, the Talk capability will be blocked (see Flutter).
- Flutter: The Main has blocked the talk capability from being taken over by a Dependent or Independent endpoint. Main can unblock by pressing this button one more time and the lamp will transit back to "Steady On".
- Off: A Dependent or Independent endpoint has taken over the talk capability. If a Main endpoint wants to talk it can take over the talk capability at any time by pressing the button (the lamp will transit back to "Steady On" after the button push).

By an endpoint registered as Dependent or Independent:
- Steady On: The Dependent or Independent endpoint currently has the Talk capability. When this transition happens Communication Manager will turn the "share-talk" button lamp off at other endpoints associated with this extension. While in this state, a Dependent or Independent endpoint can transfer the talk capability back to Main by pushing the button.
- Flutter: The Main has blocked the talk capability from being taken over; the Dependent or Independent endpoint cannot obtain the talk capability.
- Off: A Dependent or Independent endpoint has no Talk capability; however, it can take over the Talk capability if it desires.
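
To tie that back to your scenario, the flow your application would drive looks roughly like the sketch below. The wrapper interface and method names are illustrative only, not the literal SDK calls (the real ones are the button press request under Physical Device Services and the play/stop requests under Voice Unit Services in the Programmer's Guide): press share-talk so A' takes the talk slot, play the file, then press share-talk again to hand the talk slot back to A.

// Illustrative sketch only -- not the literal DMCC SDK calls.
public class ShareTalkSketch {

    /** Hypothetical stand-in for a DMCC session; not the SDK interface. */
    interface DmccSession {
        void pressButton(String deviceId, String buttonId);  // the administered share-talk button
        void playMessage(String deviceId, String wavFile);
        void waitForPlayDone(String deviceId);               // e.g. block until the "stopped" event arrives
    }

    static void playOverCall(DmccSession dmcc, String aPrime, String shareTalkButton) {
        dmcc.pressButton(aPrime, shareTalkButton);    // A' (Dependent) acquires the talk capability
        dmcc.playMessage(aPrime, "announcement.wav"); // the announcement is injected into the call
        dmcc.waitForPlayDone(aPrime);
        dmcc.pressButton(aPrime, shareTalkButton);    // talk capability moves back to A (Main)
    }
}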
VonDvoracek
Joined: Feb 7, 2012
Messages: 0
Offline
Thanks John. From the sound of it, the share-talk feature is the missing piece of the puzzle. It doesn't look like I can test this out using the Dashboard, which must be the reason there's no mention of share-talk anywhere within it.

I have a client app that is essentially a media player (Play, Pause, Resume) whereby the agent can play a recorded message at will. I will modify my app to use the share-talk button and see how it goes.

I'll keep you posted of the results. Thanks again for your help.

JohnBiggs
Joined: Jun 20, 2005
Messages: 1135
Location: Rural, Virginia
Offline
share-talk is a feature button configured on the phone. Once it is provisioned through the Communication Manager SAT, you can push the button using the DMCC Dashboard's phone tab. I recommend putting the button in the first 10 buttons on the phone so it can be displayed in the dashboard's UI.
OzEzer
Joined: Nov 19, 2013
Messages: 1
Offline
Hi,
Can you explain the whole scenario for how I can inject an audio file into a conversation using DMCC?

Thanks!
JohnBiggs
Joined: Jun 20, 2005
Messages: 1135
Location: Rural, Virginia
Offline
Oz, there are three methodologies for inserting audio into a conversation. Depending on a number of variables, one may be better than another for your situation.

I recommend you review the following document
https://devconnect.avaya.com/public/flink.do?f=/public/download/AES/WhitePapers/AEServicesCallRecording_SVC4050.pdf

The same constraints/advantages associated with call recording would apply to inserting a device to play audio.

By far the simplest method would be to do a single step conference and have the DMCC device in client media mode play a wave file. A slight modification would be to use server media and send the audio from there; this impacts AE Services capacities, so it is not the preferred methodology.

SO (Service Observing) could be used, but it seems like a lot of work compared to SSC. The advantage would be that it uses fewer TSAPI licenses.

Multiple registration per device could be used, but as shown in this thread you would have to manipulate the share-talk button, and the party whose registration the DMCC device was sharing could not speak while the announcement was playing. The advantage would be that you would not interfere with the party count for a conference.
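
If you go the SSC route, the shape of it is roughly the sketch below. Again, the wrapper interface, method names, extension and file name are illustrative, not the actual SDK calls (see the call control and voice unit sections of the Programmer's Guide): register a spare extension as a main-mode DMCC device, single step conference it into the active call, and play the file from that device.

// Illustrative sketch only -- not the literal DMCC SDK calls.
public class SscPlaySketch {

    /** Hypothetical stand-in for a DMCC session; not the SDK interface. */
    interface DmccSession {
        String getDeviceId(String extension);
        void register(String deviceId, String dependency, String media);
        void singleStepConference(String deviceId, String activeCallId);
        void playMessage(String deviceId, String wavFile);
    }

    static void injectAnnouncement(DmccSession dmcc, String activeCallId) {
        String ann = dmcc.getDeviceId("41099");       // spare "announcer" extension (example value)
        dmcc.register(ann, "MAIN", "CLIENT");         // client media; "SERVER" also works, at an AES capacity cost
        dmcc.singleStepConference(ann, activeCallId); // conference the announcer into the existing A-B call
        dmcc.playMessage(ann, "announcement.wav");    // both A and B hear it; no share-talk needed
    }
}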