Hello all,
I need to convert some text to speech dynamically when a call is answered, in my CallIntercepted interface implementation. I do have access to the Avaya Media Server, but I am not sure whether the Nuance Speech Server is installed alongside it (I have the Aurix speech server enabled; I am not sure whether that is supposed to help). In any case, I tried passing a plain string as the source while creating the PlayItem, but that did not work: the text was not converted to speech and nothing was played when the MediaService played the PlayItem.
To avoid the dependency on the Nuance Speech Server, I tried converting the text to speech with FreeTTS in the implementation itself. As described in the PlayItem.setSource API doc, I placed the wav file generated by FreeTTS at /opt/avaya/app/localmedia/ and called playItem.setSource("file://nameoffile.wav"). That did not work either. The cstore example did work, but uploading the media to the media server every time it is generated is not feasible, since the text changes according to user input. Because the entire application is packaged into an svar and then decompressed on the server, I am not sure where my wav file would end up if I generated it on the local file system from within the code. I thought I might be able to generate it at the path mentioned above, and that that would work.
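To make it concrete, here is a minimal, self-contained sketch of what I am attempting. In my real code FreeTTS synthesizes the audio and PlayItem.setSource comes from the Avaya SDK; in this sketch a short silent wav stands in for the TTS output, and the output directory is a hypothetical stand-in for /opt/avaya/app/localmedia/, so only the file-generation and source-string parts are shown:

```java
import javax.sound.sampled.AudioFileFormat;
import javax.sound.sampled.AudioFormat;
import javax.sound.sampled.AudioInputStream;
import javax.sound.sampled.AudioSystem;
import java.io.ByteArrayInputStream;
import java.io.File;

public class LocalMediaSketch {
    public static void main(String[] args) throws Exception {
        // Hypothetical stand-in for the server's localmedia root
        // (/opt/avaya/app/localmedia/ per the PlayItem.setSource doc).
        File dir = new File(System.getProperty("java.io.tmpdir"), "localmedia");
        dir.mkdirs();

        // Stand-in for the FreeTTS output: one second of silent 8 kHz
        // 16-bit mono PCM, written out as a wav file.
        AudioFormat fmt = new AudioFormat(8000f, 16, 1, true, false);
        byte[] silence = new byte[8000 * 2];
        AudioInputStream ais = new AudioInputStream(
                new ByteArrayInputStream(silence), fmt,
                silence.length / fmt.getFrameSize());
        File wav = new File(dir, "output.wav");
        AudioSystem.write(ais, AudioFileFormat.Type.WAVE, wav);

        // The source string I then hand to playItem.setSource(...):
        // just the file name, relative to the localmedia root,
        // as I understood the API doc.
        String source = "file://" + wav.getName();
        System.out.println(wav.exists() + " " + source);
    }
}
```

In the real implementation, the silent-wav section is replaced by FreeTTS writing the synthesized speech to the same file, and the resulting source string is set on the PlayItem before the MediaService plays it.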
Also, how do I check the location /opt/avaya/app/localmedia/output on the Collaboration Environment server? SSHing into it and looking for that path says no such file or directory.
If you could suggest something, I would be grateful.
Thanks,
Heena