Ivalberto
Joined: Sep 5, 2018
Messages: 45
Offline
Hi guys, I am testing the Dialogflow integration in Orchestration Designer with the example app DefaultDialogflowApp that comes with OD 7.2.3.
The app works fine, but my Dialogflow project has the option to return the audio response enabled, yet in the response the application receives I cannot see the audio bytes or any tag related to them, so I have to use Nuance TTS to play the text response.
If I test my project with the default app that comes on the MPP (DialogflowApp), I can hear the response in the call and no TTS is used for it.
Can I have the same functionality as the default app in my OD application?
Thanks in advance,
Best regards
WilsonYu
Joined: Nov 6, 2013
Messages: 3950
Offline
Oh, maybe you still have the older version of the OD app that uses TTS. Try the version I attached here.
Filename: DefaultDialogflowApp.zip
Ivalberto
Joined: Sep 5, 2018
Messages: 45
Offline
Thanks Wilson, now I see the difference in the prompt items. Can you tell me whether the data is simply not printed in the log, and the only change in the app is in the prompt?
Another question: is there a reason for the 24-second ringback prompt at the start of the app?
Best regards.
WilsonYu
Joined: Nov 6, 2013
Messages: 3950
Offline
You don't have to have the ringback, or you can customize the ringback WAV file.
There are a couple of things we needed to do to make the prompt play, since the sound content is not on the app server side. First, we need data/root.vxml and the global VXML variable "reply_uri" declared. Then, in the Form node Java code (i.e. FirstPrompt.java), you have to have these methods so that the global variable stores the reference to the content streamed from Dialogflow. That is what makes the prompt item you noticed work in VXML.
@Override
public String getFilledCustomScript() {
    String script = "application.reply_uri = FirstPrompt.reply_audio;";
    return script;
}

@Override
public String getAppRootAttr() {
    return "data/root.vxml";
}
Ivalberto
Joined: Sep 5, 2018
Messages: 45
Offline
Perfect!! It works great!!
Ivalberto
Joined: Sep 5, 2018
Messages: 45
Offline
Hi Wilson, I have a few more questions.
Is it necessary to do the current validations in the OD example? I think the example is oriented to simulate the DialogflowApp on the MPP, right?
How does license use work in the system: is a license released when a prompt finishes, or is it in use for the entire call?
Can I have multiple vendors (Google) in the same application, for example Google ASR and Dialogflow? In one app I need to recognize intents, and in another just speech-to-text.
Can I create my own custom payload to define the flow in the OD application? For example:
"toDo": {
    "action": "transfer",
    "destination": "123456",
    "audio": "x.wav"
}
Is grammar the only way to send an interaction to Dialogflow?
There will be more questions during use and development; can I ask them here, or should I create a new thread?
Thanks in advance.
Regards.
WilsonYu
Joined: Nov 6, 2013
Messages: 3950
Offline
Is it necessary to do the current validations in the OD example? I think the example is oriented to simulate the DialogflowApp on the MPP, right?
You need to run this on EP. There is no simulation for Dialogflow.
How does license use work in the system: is a license released when a prompt finishes, or is it in use for the entire call?
You would have to ask EP support about this.
Can I have multiple vendors (Google) in the same application, for example Google ASR and Dialogflow?
You can switch to another ASR by using the External Property.
Can I create my own custom payload to define the flow in the OD application? For example:
"toDo": {
    "action": "transfer",
    "destination": "123456",
    "audio": "x.wav"
}
I am not sure what that means. Do you mean defining the custom payload in Dialogflow?
Is grammar the only way to send an interaction to Dialogflow?
Yes, grammar is the only way.
There will be more questions during use and development; can I ask them here?
Sure, if I can answer them. However, my knowledge is limited in this area. The OD sample app pretty much mimics the DefaultDialogflow VXML version on the MPP, and we go by the whitepaper that comes with it.
AmbikaSivasurianarayanan
Joined: Nov 8, 2013
Messages: 29
Offline
Hi Wilson,
In our integration of Dialogflow with Experience Portal, we are using the default Dialogflow app.
It works perfectly, playing the intent responses using the Dialogflow voice engine. But if an intent in Dialogflow returns a custom payload to transfer the call to an agent, we would like it to read the response from Dialogflow before it transfers. In the default Dialogflow VXML app, however, the prompt "xferring" inside "Blind Transfer" uses TTS. How do we change this to use the Dialogflow voice?
Thanks
Ambika
WilsonYu
Joined: Nov 6, 2013
Messages: 3950
Offline
So the audio from Dialogflow is cached in the application variable called application.reply_uri. Take a look at replyprompt.prompt, where you can see it uses the Expression (application.reply_uri) element to play the audio. You can replace the TTS element with the exact same thing in the xferring.prompt.
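For reference, once the TTS element is swapped for Expression(application.reply_uri), the generated VXML for the prompt should look along these lines (a sketch; the bargein and timeout attributes are taken from the trace log excerpt later in this thread and may differ in your prompt settings):

```xml
<!-- Sketch of the prompt element OD generates for the modified
     xferring.prompt; the <audio expr> plays whatever data: URI
     is cached in the application-scoped variable. -->
<prompt bargein="true" bargeintype="speech" timeout="8000ms">
  <audio expr="application.reply_uri"/>
</prompt>
```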
AmbikaSivasurianarayanan
Joined: Nov 8, 2013
Messages: 29
Offline
Will "application.reply_uri" get populated automatically, or do I have to override the "getFilledCustomScript" method in BlindTransfer.java, where it has the "Xferring" prompt?
Thanks for your help
WilsonYu
Joined: Nov 6, 2013
Messages: 3950
Offline
This URI should be set by the execution of the previous node in the flow, either FirstPrompt or SpeechInput in this case. The getFilledCustomScript method is implemented on those nodes.
AmbikaSivasurianarayanan
Joined: Nov 8, 2013
Messages: 29
Offline
Hi Wilson,
I modified the Xferring prompt to use application.reply_uri to read the response text to the caller. The transfer works, but the response text is not being played.
Please see my attached log and the screenshot of my xferring prompt.
Filename: DefaultDialogflowApp.trace.2020-11-04.log
WilsonYu
Joined: Nov 6, 2013
Messages: 3950
Offline
So I see in the log that this should have been executed in the SpeechInput node prior to the transfer:
28:<filled>
29:<script><![CDATA[
30:application.reply_uri = SpeechInput.reply_audio;
31:]]></script>
Then in the BlindXfer node, it plays what is in the URI:
8:<prompt bargein="true" bargeintype="speech" timeout="8000ms">
9:<audio expr="application.reply_uri"/>
10:</prompt>
If you don't hear anything, it means there is nothing in reply_uri. You can take a look into the platform voice browser log to see exactly what it plays.
AmbikaSivasurianarayanan
Joined: Nov 8, 2013
Messages: 29
Offline
Hi Wilson,
Continuing from my previous post above: to make the transfer message play before the actual transfer, I ended up implementing the getFilledCustomScript and getAppRootAttr methods in the BlindXfer node Java file. This works.
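For anyone following along, the two overrides in the BlindXfer node class would look roughly like this. It is a sketch modeled on the FirstPrompt example earlier in the thread; in the real app these are @Override methods of the generated node class, and using SpeechInput as the preceding node (as seen in the trace log) is an assumption that depends on your call flow:

```java
// Sketch of the two methods added to the BlindXfer node class.
// Shown on a plain class so the snippet is self-contained; in the
// OD project they override methods of the generated form node.
public class BlindXferOverrides {

    // Re-assign the global VXML variable so the transfer prompt can
    // play the audio cached by the node that ran before the transfer
    // (SpeechInput here, per the trace log; adjust to your flow).
    public String getFilledCustomScript() {
        return "application.reply_uri = SpeechInput.reply_audio;";
    }

    // Point the form at the app root so the global variable is visible.
    public String getAppRootAttr() {
        return "data/root.vxml";
    }

    public static void main(String[] args) {
        BlindXferOverrides o = new BlindXferOverrides();
        System.out.println(o.getFilledCustomScript());
        System.out.println(o.getAppRootAttr());
    }
}
```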
My question now is: is there a way to trace the value of the "application.reply_uri" ECMAScript variable?
Appreciate your help.
Thanks.
WilsonYu
Joined: Nov 6, 2013
Messages: 3950
Offline
application.reply_uri is a global variable that has no value to begin with, until you assign one to it as in the sample code:
application.reply_uri = FirstPrompt.reply_audio;
FirstPrompt.reply_audio holds the audio bytes base64-encoded, which looks like this:
'data:audio/x-wav;base64,UklGRi67AQBXQVZFZm10IBAAAAABAAEAQB8AAIA+AAACABAAZGF0YQq7AQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA\
AAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA
So I am not sure it is meaningful to see what it carries.
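If you do want to inspect the value, one option is to copy the data: URI out of a trace log and decode it offline to confirm it is a valid WAV. A minimal sketch; decodeDataUri is a hypothetical helper, not part of OD, and the base64 string below is just the prefix of the example above:

```java
import java.util.Base64;

public class ReplyUriInspector {

    // Hypothetical helper: strip the "data:audio/x-wav;base64," header
    // and decode the remainder into raw WAV bytes.
    public static byte[] decodeDataUri(String dataUri) {
        String b64 = dataUri.substring(dataUri.indexOf(',') + 1);
        return Base64.getDecoder().decode(b64);
    }

    public static void main(String[] args) {
        // Prefix of the example data: URI from the post above.
        byte[] wav = decodeDataUri("data:audio/x-wav;base64,UklGRi67AQBXQVZF");
        // A WAV container starts with "RIFF" and has "WAVE" at byte offset 8.
        System.out.println(new String(wav, 0, 4)); // prints RIFF
        System.out.println(new String(wav, 8, 4)); // prints WAVE
    }
}
```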