KyleHodess
Joined: Nov 17, 2015
Messages: 14
Offline
Hi everybody!

I'm debugging an application I didn't write (and I'm not an application developer...) where it appears that Nuance is returning HTTP 500s and my MPPs are getting PASR 19s during busy periods. In the Nuance nss.logs we can line this up with oversubscription of our ASR resources.

The application I'm working on is pretty complicated, but the basic flow is that it starts with a language prompt (DTMF, not speech), and before I even make a selection I can see the call open an MRCP connection and use a session in Nuance. This leads me to believe that whatever the app is doing from the get-go, it's asking for a speech reco license to do it.

Now, there's another application, a bit simpler, also with language selection, that has an option to "if you'd like to go to the main menu, press 3". That app never ties up an ASR instance - even when it takes me to the main menu of the primary app. The only difference I can surmise is that in the second, simpler application I make my language selection first.

So, I got to cracking open some OD samples and started looking at the referenced libraries in both applications. I notice that the app that always ties up a license references both a scert-7xyz and a scertcommon-7.xyz jar, whereas the app that never calls up an ASR license only references the scert-7xyz jar and not the scertcommon jar - and for whatever reason, the app that doesn't tie up the ASR is also calling an older scert-7 library...

Anyhow, I started googling com.avaya.sce stuff, wound up here, and this is the best thinking I can come up with as to why one app always ties up an ASR license even before doing any speech reco in the call, while the other app doesn't at all.

So, that's my story. If anyone can clue me in as to what mechanics in an application trigger an MRCP connection or bring Nuance in, I'd really appreciate it. We sized this box to have more MPP ports than speech reco ports on the assumption that we wouldn't be tying them up in a 1:1 ratio, so I'd like to know if that's feasible through application design. If so, I promise I won't ask how to hack it up myself.

Thanks for listening.

-Kyle.
RossYakulis
Joined: Nov 6, 2013
Messages: 2652
Offline
When you configured your application, did you enable remote DTMF processing? See the attachment. You probably do not want remote DTMF enabled.
[Attachment: Capture.PNG]
KyleHodess
Joined: Nov 17, 2015
Messages: 14
Offline
I think you got it - grConfirmPIN_DTMF-nuance-osr.grxml is in the main menu, and if I go the other way, through the simpler app, and reach a speech reco step via the main menu, it fails because the speech reco resources were never initialized at the beginning of the call.

I guess I know the answer, and if I could bug you for one more thing, it would be this - what is the best practice for acquiring speech reco resources only when they're required, rather than reserving a license whenever one might potentially be needed in the flow? What's the 2-line answer to "how do I move the request for speech reco closer to the point in the app where I actually need it?"
RossYakulis
Joined: Nov 6, 2013
Messages: 2652
Offline
I do not know if there is a 2-line answer. I have seen this question come up in the past, though I cannot remember the answer. I will post when I find out.
KyleHodess
Joined: Nov 17, 2015
Messages: 14
Offline
The more I ask around, the more it seems the answer is that you have to reserve a speech reco license as soon as an app that may need one starts.
RossYakulis
Joined: Nov 6, 2013
Messages: 2652
Offline
With a custom CCXML application and the right configuration, speech resources can be assigned per VXML dialog.
• Configure the custom CCXML application to use no speech resources
• Configure each VXML dialog application separately to use the resources required by that dialog
• The custom CCXML application must then launch the dialogs by name (<dialogprepare/>) using the form
app://<configured VXML application name>

This allows resources to be allocated per dialog, and was originally developed for a customer that wanted to be able to configure different languages on separate speech servers in order to lower costs. In the case described in this thread, the user would launch a VXML dialog that doesn't use speech resources, followed by a dialog that does, so the first dialog would not tie up any speech resources. A custom CCXML application is required to manage the launching of both dialogs.
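
As a rough sketch of what such a CCXML launcher could look like - the application names LangSelectNoASR and MainMenuASR are placeholders for whatever VXML applications you have configured on Experience Portal, and this only illustrates the CCXML mechanics, it is not a tested Avaya sample:

<?xml version="1.0" encoding="UTF-8"?>
<!-- Launcher sketch: run a DTMF-only dialog first, then a speech-enabled dialog. -->
<!-- "app://LangSelectNoASR" and "app://MainMenuASR" are hypothetical application names. -->
<ccxml version="1.0" xmlns="http://www.w3.org/2002/09/ccxml">
  <var name="state" expr="'init'"/>
  <var name="connid"/>
  <eventprocessor statevariable="state">

    <!-- Answer the incoming call -->
    <transition event="connection.alerting">
      <accept/>
    </transition>

    <!-- Prepare the DTMF-only language selection dialog (configured with no ASR) -->
    <transition event="connection.connected">
      <assign name="connid" expr="event$.connectionid"/>
      <assign name="state" expr="'prep_lang'"/>
      <dialogprepare src="'app://LangSelectNoASR'"/>
    </transition>

    <transition state="prep_lang" event="dialog.prepared">
      <assign name="state" expr="'lang'"/>
      <dialogstart prepareddialogid="event$.dialogid" connectionid="connid"/>
    </transition>

    <!-- When language selection exits, prepare the speech-enabled main menu
         (configured with the ASR resources it needs) -->
    <transition state="lang" event="dialog.exit">
      <assign name="state" expr="'prep_menu'"/>
      <dialogprepare src="'app://MainMenuASR'"/>
    </transition>

    <transition state="prep_menu" event="dialog.prepared">
      <assign name="state" expr="'menu'"/>
      <dialogstart prepareddialogid="event$.dialogid" connectionid="connid"/>
    </transition>

    <transition state="menu" event="dialog.exit">
      <exit/>
    </transition>

    <!-- Clean up if the caller hangs up at any point -->
    <transition event="connection.disconnected">
      <exit/>
    </transition>
  </eventprocessor>
</ccxml>

The point is that the first dialog is configured with no ASR, so nothing is reserved on the speech server until the second dialog actually starts.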
KyleHodess
Joined: Nov 17, 2015
Messages: 14
Offline
Thanks! I'll check in to that.

What I just can't wrap my mind around, thinking aloud - and maybe you can speak to this or not - is why TTS works in a way where the license is acquired and released on the fly, but speech reco by default ties up a license for the whole call.

Perhaps it's a question geared more toward Nuance and their product; it just seems like quite the caveat. I looked at the "Planning for AAEP" doc (https://downloads.avaya.com/css/P8/documents/100146998) and it mentions:

...speaking, the MPP establishes a connection to an Automatic Speech Recognition (ASR) server and sends the caller's recorded voice response to the ASR server for processing. The ASR server then returns the results to the application for further action.

Note: This connection requires one ASR license, which is not released until the entire call is complete.


Again, thinking aloud - wouldn't it be nice if that read "This connection requires one ASR license from cradle to grave, unless you spoke to Ross and made something really custom!"?

In any case, I'm just complaining at this point. I do appreciate you having taken the time to explain this to me. Maybe there's a good reason it works this way - maybe there isn't!
RossYakulis
Joined: Nov 6, 2013
Messages: 2652
Offline
Sorry, this is a platform issue and I cannot help with that. :(