Avaya Aura Application Enablement Services

Latest Release: 10.2 (Dec 2023)

Frequently Asked Questions

Registering and Unregistering

Device Media and Call Control (DMCC) Multiple Registrations (MR) allows more than one H.323 terminal (or equivalent) to register against a single extension in Communication Manager.

There are limitations to using the MR functionality, some inherent in the service and some that come from the overall environment in which an application may utilize the service. The most notable limitation of the service is that, while it allows the application to receive the RTP stream that Communication Manager is sending to the endpoint, it does not allow multiple terminals to send RTP to Communication Manager simultaneously for that party in the call (only one ‘talker’ per extension is allowed at a time). Additionally, the ‘talking’ terminal does not receive a copy of its own RTP stream back in the receive direction, whereas the non-talker device registrations do receive a copy of the talker’s RTP stream. Receiving the RTP from all talking participants is useful for call recording applications. The talker role can be passed between terminals on a common extension.

The MR service is also limited in the total number of registrations supported per station. Up to three H.323 terminals can be simultaneously registered against an extension: typically a desk phone or soft phone plus two additional DMCC device registrations. Each DMCC registration must use a unique “Device Instance Number” (0-2) in the registration request to differentiate them; unless the application has some other way of knowing which instance numbers are free at any moment in time, it must guess at an available one.

If the application using MR is used in an environment where the Selective Listen Disconnect feature is used on the same extension, the application must expect that the media it receives will not contain all of the media present in the call. If the application requires all of the media in the call, it must use a unique extension added to the call via conferencing actions, Single Step Conference, or Service Observing.

The most significant environmental restriction comes from the workload created by device registration and unregistration activity on AE Services and Communication Manager. This workload consumes significant CPU because of the large number of messages exchanged while handling each of those activities. The total message count, and the fact that two separate servers are doing the work, mean that a noticeable amount of real time elapses between when the application makes a request to register or unregister and when it completes. The total amount of real time depends on the other workload on AE Services (other CTI traffic) and the other workload on Communication Manager. In fact, it can take several seconds or more for a terminal to become fully registered or unregistered.

Underlying the workload on Communication Manager is built-in protection against activity that Communication Manager classifies as hyperactivity. If, for some reason such as recovery from a power failure or network outage, many registrations occur in a small window of time, or overall call traffic is high, Communication Manager may ignore some or all of the registration attempts in order to protect the overall operation of the system from degrading past certain thresholds. Stations and CTI applications are expected to retry the registration later (after some reasonable back-off time).

There are also race conditions between activity on Communication Manager (e.g. the start of a call, state changes on calls, and administration activity on the device) and when an application may initiate a registration request. The net result is that an MR application that frequently registers and unregisters may incur delays before a registration completes, which can impact the service the application is trying to provide. In a call recording context, this can result in missed recordings or a missing portion at the beginning of a recording. The race conditions between call activity and registration activity may also lead to an application missing some events.

Unless the application is tolerant of these limitations, and the possibility of a significantly delayed (or even rejected) registration, the preferred approach is to register the application’s MR terminal once when the application initializes (or perhaps when the agent logs in for the work day) and unregister it when the application is shut down or the agent logs out (or similar).

There is a "Max. Simultaneous Devices" feature within a SIP User’s Session Manager Profile that allows multiple SIP registrations against a single SIP station extension. The DMCC MR functionality may be used in conjunction with the "Max. Simultaneous Device" feature. If “Allow H.323 and SIP Endpoint Dual Registration” is enabled within the SIP User’s Communication Endpoint Profile (which would allow both an H.323 and SIP primary registration), “Max. Simultaneous Devices” must be set to 1 (one).

Some applications may choose to use a dynamic approach to terminal registrations (e.g. a call recorder that only records a subset of the calls occurring at a station/extension). When a more dynamic approach to multiple registrations is utilized, make sure to use the following best practices:

  • The Application Enablement Services server should be configured to do DMCC License Reservations (see the Administering and Maintaining Avaya Aura Application Enablement Services guide, section titled "Reserving DMCC licenses"). Enabling license reservations reduces a round-trip delay between the license server process/server and the DMCC service on AE Services each time an un/registration occurs.
  • Auto answer for the agent/station should be disabled.
  • The agent should be configured for an automatic after call work interval of at least a few seconds.
  • In both inbound and outbound contact center environments, even with the agent configured for an automatic after-call-work interval, there may be very short (millisecond) gaps between active calls at an agent/call handler (e.g. when a direct call to the station occurs). If the application is recording all calls at the agent/station, it should implement a delay before unregistering its DMCC device so that a subsequent call can still be handled/recorded. This delay should be factored into the provisioned after-call-work interval.
  • Disable media shuffling for the station the agent uses to log in.
  • The application must wait for an unregister notification (not just the confirmation to the unregister request) before trying to register the same extension/Device Instance Number pair again. Ignoring this constraint will cause race conditions between the unregister and the register which will result in unpredictable behaviors.
  • A round robin policy should be implemented for the DMCC device instance number. A strategy to avoid immediately reusing the DMCC device instance number will help prevent collisions with activity that has not finished on the prior instance number when there is a subsequent activity (i.e. a new reservation) on the same extension. In some environments when all three instance numbers are being utilized this will not be possible.
  • The application must tolerate rejected registrations and retry them after a delay. Multiple failed registrations back-to-back (even across multiple extensions) should cause the application to use progressively larger delays, as this can indicate Communication Manager is in an overloaded state (see the sketch following this list).
  • Per general guidance related to request traffic from an application to AE Services, any application (using MR or any other set of services) should not have more than 10 outstanding requests to AE Services at any one moment in time. When a 'final' response to an outstanding request is received, a new request can be placed. This strategy allows other applications to send requests to AE Services and factors in the event traffic that may be occurring simultaneously.
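The retry and instance-number guidance above can be combined into a small helper. The following is a minimal sketch only, assuming a hypothetical registerTerminal(extension, instanceNumber) wrapper around your DMCC client (it is not the DMCC SDK API); it rotates instance numbers 0-2 and backs off exponentially after failures.

import java.util.concurrent.TimeUnit;

/**
 * Illustrative sketch only: rotates DMCC device instance numbers (0-2) and
 * backs off exponentially on failed registrations. registerTerminal() is a
 * hypothetical wrapper around your DMCC client, not an SDK API.
 */
public class MrRegistrationHelper {

    private int nextInstance = 0;            // round-robin across 0, 1, 2
    private long backoffMs = 1_000;          // initial retry delay
    private static final long MAX_BACKOFF_MS = 60_000;

    /** Hypothetical wrapper: returns true when the registration completes. */
    protected boolean registerTerminal(String extension, int instanceNumber) {
        // In a real application this would issue the registration request via
        // the DMCC SDK and wait for the registered/failed event.
        return false;
    }

    public void registerWithRetry(String extension) throws InterruptedException {
        while (true) {
            int instance = nextInstance;
            nextInstance = (nextInstance + 1) % 3;   // avoid immediate reuse
            if (registerTerminal(extension, instance)) {
                backoffMs = 1_000;                   // success resets the delay
                return;
            }
            // Repeated rejections may indicate CM overload: wait longer each time.
            TimeUnit.MILLISECONDS.sleep(backoffMs);
            backoffMs = Math.min(backoffMs * 2, MAX_BACKOFF_MS);
        }
    }
}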

When registering a DMCC Multiple Registration (MR) recorder to monitor a SIP phone, an associated DMCC terminal must be registered in Independent mode. For other phone types, it is possible to use either Dependent or Independent mode.

When using Independent mode, it is possible to register a DMCC MR Terminal even if the monitored phone is unregistered. The DMCC MR Terminal will remain registered when the monitored phone becomes unregistered. When this happens, incoming calls to the monitored phone’s extension will be delivered to the DMCC MR terminal. This may lead to an unsatisfactory call handling experience.

Therefore, it is recommended that applications only register a DMCC MR terminal when the monitored phone is already registered. It should unregister the DMCC MR terminal if the phone becomes unregistered. The application can use Endpoint Registration Requests and Events to keep track of the registration status of the monitored phone. There is information on using Endpoint Registration features in the DMCC Programmers Guide.

The following table summarizes the various configuration approaches and the call recording strategies that Avaya Aura supports for recording calls on various endpoint types as of Avaya Aura release 8.0.

Recording strategies are the last three columns (DMCC Multi-Registration, Service Observe, Single Step Conference).

Target Endpoint Type    SIP Configuration             DMCC Multi-Registration               Service Observe    Single Step Conference
SIP                     SIP Multi-Registration        Yes (Independent Mode)                Yes (Main Mode)    Yes (Main Mode)
SIP                     SIP Dual-Registration         Yes (Independent Mode)                Yes (Main Mode)    Yes (Main Mode)
SIP                     SIP Multiple Device Access    Not Supported                         Yes (Main Mode)    Yes (Main Mode)
SIPCC or *CC            SIPCC                         Yes (Independent Mode)                Yes (Main Mode)    Yes (Main Mode)
H.323                   N/A                           Yes (Dependent or Independent Mode)   Yes (Main Mode)    Yes (Main Mode)
Digital                 N/A                           Yes (Dependent or Independent Mode)   Yes (Main Mode)    Yes (Main Mode)
Analog                  N/A                           Not Supported                         Yes (Main Mode)    Yes (Main Mode)

DMCC Multi-Registration refers to having AE Services register a second, third, or more H.323 endpoint on Communication Manager for the same station extension. It is necessary to enable "IP-Softphone" on Communication Manager in order to access this capability. This functionality can be used with Digital, H.323, and SIP station types. Note: CM/AES 8.0.1 allows more than three (up to 12) multi-registrations per station in support of split stream recording, and there are plans to increase this further in future releases. Prior to release 8.0.1, a maximum of three H.323 registrations per extension was supported.

SIP Dual-Registration refers to allowing both a SIP and an H.323 endpoint to register to the same station extension. There are two ways to configure Dual-Registration. The first form of Dual-Registration is configured via System Manager (SMGR) in the CM Endpoint Profile section of the User form. A station extension configured for SIP Dual-Registration allows a SIP station to register to the extension through Session Manager while a DMCC Independent Mode registration is simultaneously in place receiving the audio of the active call, allowing DMCC to record the voice calls handled by that extension (via the SIP station). In this configuration SIP is the preferred protocol, so feature behavior for the SIP endpoint is optimal. In addition to the SMGR Dual-Registration flag being set, the OPS station mapping must be administered for the H.323 station in CM.

A second mechanism to allow Dual-Registration is to configure the OPS station mapping in Communication Manager and configure the station in SMGR in the CM Endpoint Profile section of the User form - without setting the Dual-Registration checkbox. In this configuration, H.323 is the preferred protocol and feature behavior for the H.323 endpoint is optimal.


SIP Multiple Registration (a.k.a. Max. Simultaneous Devices or Multiple Device Access (MDA)) refers to allowing more than one SIP endpoint (soft phone or desk phone) to register using the same Session Manager communication profile. If more than one endpoint is registered, all of the endpoints receive voice calls simultaneously. Max. Simultaneous Devices (Multiple Registrations or Multiple Device Access) is configured via SMGR in the Session Manager Profile section of the User form's Communication Profile tab.

 

SIPCC Registrations: If the station is configured as a SIPCC type (there are many SIP station types that have the CC suffix), it indicates that the station is used in conjunction with the SIP Contact Center (also read as Call Center) functionality. In earlier releases, the expectation was that Max. Simultaneous Devices would not be utilized with Contact Center endpoints. In more recent releases (e.g. 8.0) this restriction no longer applies.

Registration failed because Registration Reject reason: securityDenial
Login Denied - Access Code invalid diagnostic string= code= 63773

The password that is being provided for the deviceID (extension) is invalid.

An application should provide some interval between sending each request to the AE Server. In some cases (e.g. dialing digits or feature/function button pushes), if the application sends a sequence of requests to the AE Server rapidly enough, although each request receives a response from the AE Server, requests may be silently discarded as the stimuli are processed by Communication Manager due to a perceived overload condition at Communication Manager. A suggested inter-request interval is 200 ms.

The application must also wait between a feature button push (e.g. call appearance or no hold conference) and the pushing of digit buttons, so that CM has time to enter the proper feature state to accept the digits provided by the application. This delay should be at least 500 ms.

While it is possible in both cases for the application to work without the suggested delays, under load in a real customer network the loss of button requests has been observed and traced to the underlying overload controls in Communication Manager.
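To illustrate this pacing, here is a minimal sketch, assuming a hypothetical sendButtonPress() wrapper around whatever DMCC request method the application uses (not an SDK call); it enforces a 200 ms gap between requests and a 500 ms settle time after a feature button push.

import java.util.concurrent.TimeUnit;

/**
 * Illustrative pacing sketch (not the DMCC SDK API): enforces a minimum
 * inter-request gap of 200 ms and a 500 ms settle time after feature buttons.
 */
public class RequestPacer {

    private static final long INTER_REQUEST_MS = 200;
    private static final long FEATURE_SETTLE_MS = 500;
    private long earliestNextSend = 0;

    /** Hypothetical placeholder for the application's DMCC button request. */
    protected void sendButtonPress(String button) {
        // issue the button-press request on the DMCC session here
    }

    public synchronized void press(String button, boolean isFeatureButton)
            throws InterruptedException {
        long wait = earliestNextSend - System.currentTimeMillis();
        if (wait > 0) {
            TimeUnit.MILLISECONDS.sleep(wait);
        }
        sendButtonPress(button);
        long settle = isFeatureButton ? FEATURE_SETTLE_MS : INTER_REQUEST_MS;
        earliestNextSend = System.currentTimeMillis() + settle;
    }
}

For example, an application would press the call appearance button with isFeatureButton set to true, then dial each digit with it set to false.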

When DMCC is used with Custom Media Streams (a.k.a. split stream or stereo), AE Services allows for (and may consume) one DMCC device per media stream.

  • In a two-party call there will be two additional DMCC devices.
  • In a three-party call, the behavior depends on how the recorder actually works.
    • Most likely the recorder will be configured so that the contact center agent is one stream and all of the other parties in the call are summed into the second stream; alternatively, the customer (PSTN) may be singled out as one stream, with the remaining parties representing the other. In these two cases only two DMCC devices are used to record the media streams in the call.
    • However, Custom Media Streams allows for a recorder per party in the call, so the recorder could add a third DMCC device when the call becomes a three-party call. This could continue for the fourth, fifth, and up to the twelfth party in a call, resulting in one DMCC device per talking party in the call.
    • The recorder could cap the number of streams it separates out at some arbitrary number (e.g. 3), with any parties joining the call beyond that number having their media summed into one of the existing three DMCC devices.
    • To answer what happens with calls of three or more parties, open a dialog with the call recording vendor and understand how the customer would configure that solution (assuming the recorder does not simply use the first approach and avoid the complexity of these latter possibilities).

A DMCC device, as described here, equates to one of the 8000 DMCC device registrations supported on AE Services (as of release 8.0).

Reason code 2018 is provided by CM when a busy-out and release of the monitored port has occurred on Communication Manager. A busy-out event may occur due to hardware failure, communications failure (with the hardware), or a command at the SAT.

A complete list of reason codes is not available. In general a condition has occurred on CM and it has unregistered the device. The application must begin attempting to re-register with AE Services, and then re-acquire the state information for the monitored device.

Registration Failedsession[null] com.avaya.csta.binding.RegisterFailedEvent@2bfdff
Registration failed because Registration Reject reason: resourceUnavailable
Request rejected from switch with error code= 18239

This message indicates that the maximum number of registered IP endpoints has been exceeded. Check the second page of the "display system-parameters customer-options" form to see the number administered. Customers will need to purchase licenses for additional IP endpoint registrations to overcome this situation. The use of the "list registered" command will show all active registered IP devices.

Registration Failed session[null] com.avaya.csta.binding.RegisterFailedEvent@19ea173
Registration failed because Registration Reject reason: resourceUnavailable
The number of registered stations exceeded the capacity specified on the license file.
Application is terminating; performing cleanup....

The number of IP_API_A licenses is insufficient to support the number of registered applications. In the example, there are zero available. The information is accessed using the "display system-parameters customer-options" form, page 2 on the Communication Manager's System Access Terminal.

The following XML can be received by an application in response to a releaseDeviceID request:

Request:

1808 07/04 15:53:03.122 => [0014]
<ReleaseDeviceId xmlns="http://www.avaya.com/csta">
    <device typeOfNumber="other" mediaClass="" bitRate="constant">
      10.202.8.187:38605:0
    </device>
</ReleaseDeviceId>

Response:

1808 07/04 15:53:03.603 <= [0014]
<?xml version="1.0" encoding="UTF-8"?>
  <CSTAException>
    <exceptionClass>
      ch.ecma.csta.errors.InvalidDeviceIDException
    </exceptionClass>
    <message>
      Client=session[125] is using this device, not you(session[119] ) 
      so you can't release it
    </message>
    <stackTrace>
      com.avaya.mvap.extsvc.DeviceServicesImpl.releaseDeviceID(DeviceServicesImpl.java:160)
      sun.reflect.GeneratedMethodAccessor14.invoke(Unknown Source)
      sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
      java.lang.reflect.Method.invoke(Method.java:324)
      com.avaya.mvcs.proxy.CstaRouter$ServiceMethod.invoke(CstaRouter.java:223)
      com.avaya.mvcs.proxy.CstaRouter.routeRequest(CstaRouter.java:168)
      com.avaya.mvcs.proxy.CstaRouterService.routeRequest(CstaRouterService.java:69)
      com.avaya.mvcs.proxy.CstaRouterNode.processPacket(CstaRouterNode.java:210)
      com.avaya.mvcs.proxy.AbstractPipelineNode.process(AbstractPipelineNode.java:102)
      com.avaya.mvcs.proxy.Pipeline$PipelineSubscriber.inform(Pipeline.java:364)
      N more removed

The cause for this response is that a different socket (and therefore a different session) is releasing the DeviceID than the one that originally obtained the DeviceID (that is being unregistered). An application cannot share a DeviceID across different CMAPI sessions (sockets). Alternatively, a different instance of the application (or a different application altogether), presumably on a separate AE Services server, has "stolen" the device out from underneath this instance of the application by using a registrationRequest with the

    <forceLogout>true</forceLogout>

option sent with the request. When the application that initially secured the deviceID received notification that the device had been unregistered, it may have attempted to release the deviceID in question, causing the error to be thrown.

The extension the application is registering against (provided as part of the deviceID) is not administered on the Communication Manager that is accessed by the provided IP address or H.323 Gatekeeper List.

When unregistering devices (e.g. during the cleanup or shutdown phase of an application), the application should wait for the unregistered event (in Release 3.0 the event is an unregistered() event sent to the terminalListener; in 3.1 it is an UnregisterTerminalResponse to the terminalServices.unregister() request) before stopping monitors and releasing the device ID. Under some conditions, if the unregistered event is not received before the application continues shutting down, the application will not be able to re-register the device until the previous session's SessionDurationInterval expires, even though the session has been released.

Observing the response from the AE Services Server is one method, another is to use a command at the system access terminal (SAT) of Communication Manager. Utilize the list registered command to observe all active registrations for IP stations.

For applications using the Java API, RegisterDevice (3.0) returns an asynchronous event indicating the success or failure of the registration request. In Release 3.1, RegisterTerminalRequest returns a synchronous indication of the result of the registration.

DMCC only allows one application to have shared or exclusive control of a specific extension. The most common cause of this exception is that the application has restarted (via a crash or management action) without releasing the device IDs from the previous instance of the application, AND the registration request occurs within the interval (default 3 minutes) before AE Services automatically releases the registered device IDs due to the Application Session timing out. The developer should review the information in the application programmer's guide(s) regarding Session Management and proper shutdown procedures.

Functionally, there is no difference between 'RegisterTerminalRequest' and 'RegisterDevice'. 'RegisterTerminalRequest' should be used in AE Services 3.1 and later versions of AE Services, because 'RegisterDevice' is deprecated in the more recent versions.

This event occurs when an extension #### is already registered, and subsequently CM receives a 'Force Login' request for the same extension. This could occur when an application makes a 'RegisterTerminalRequest' request with a force login set to true and the extension is already registered by some other application. CM interprets the request to mean that the extension is moved to a new location. CM then sends a 'Forced Unregister' request to the extension, which leads to this message. The event can also occur when an application is monitoring (shared control) an extension and the user does a forced login from a different station.

	registration failed reason=Registration Reject reason: resourceUnavailable
Invalid product ID is specified. Check the version of switch or license file

Starting with AE Services Release 5.1 the license allocation behavior for DMCC changed. Please review the FAQ titled When is a DMCC_DMC license allocated and when is it not? for more details.

This message indicates that AE Services could not allocate a DMCC_DMC license for a variety of reasons, and attempted to fall back to IP_API_A license allocation from Communication Manager. That fallback license allocation failed, either because a switch connection is not provisioned, it is out of service, or Communication Manager holds no available IP_API_A licenses.

Use AE Services web OA&M to check for a provisioned switch connection and its status. Use the Communication Manager SAT to check for IP_API_A licenses using "display system-parameters customer-options". The information is on page 10/11 in Communication Manager Release 6.2.

This depends upon the type of Media mode (server or non-server) and Signaling Encryption used. For more information on this, refer to the AE Services 4.1 Overview document, document ID 02-300360 Release 4.1 Issue 4 dated December 2007, available on the DevConnect portal (www.avaya.com/devconnect). The tables on pages 24 and 25 in this document list the capacities of RegisterTerminalRequests with respect to the type of server and signaling encryption used.

Registering a terminal allows the application access to the signaling and possibly the media of the DCP or IP telephone, or an extension that is administered for softphone access on Communication Manager. Only devices that are speaker phone enabled can be successfully registered. A speaker phone is required so that Communication Manager can force the device off-hook when performing call control actions (makeCall, answer) or physical device hook-switch control. CallMaster IV/VI phones are not speaker phone equipped and fail to meet this DMCC requirement. If an attempt is made to register a device that is provisioned on Communication Manager as a CallMaster station, DMCC throws the specified error.

This response indicates that AE Services made an attempt to send a registration request to Communication Manager and was unsuccessful. There are many possible reasons for the registration to fail. This FAQ provides some general guidelines for how to proceed; however, it cannot cover all possible sources of the problems that result in this response.

As a first step, use the Communication Manager System Access Terminal and verify that the extension you are trying to register is administered as an H.323 station, that there is a security code provisioned, and that the IP Softphone flag on page 1 is set to y.

Next, you need to determine if Communication Manager is reachable from AE Services. Get a Linux prompt on AE Services and ping the IP address of the Communication Manager. Make sure to use the appropriate source NIC: most AE Services servers have at least two NICs, one intended to handle traffic between AE Services and Communication Manager and the other used to connect to the client applications. See the AE Services Installation and Maintenance guide for more details.

You can determine what IP address to use for Communication Manager by examining the deviceID that you are trying to register; the CLAN/procr IP address that will be used is usually embedded in the deviceID. If the IP address is not present in the deviceID, check that the H.323 Gatekeeper list associated with the Switch Connection Name (also usually embedded in the deviceID) has the correct set of IP addresses for CLAN/procr interfaces on Communication Manager. If the application provides the IP address during the getDeviceId() operation, verify it is the correct IP address for Communication Manager.

Note that Communication Manager has many IP addresses. There is typically one group dedicated to handling H.323 station registrations (a subset of these should be dedicated to handling AE Services device registrations; an exception to this guidance is on smaller systems where just the procr is utilized). There will be other IP addresses associated with Communication Manager (e.g. a management address for accessing the SAT, media processors, IPSIs, etc.). These addresses are typically not configured to accept AE Services device registrations (or other H.323 station registrations). Make sure AE Services or the application is configured to use IP addresses on Communication Manager that are dedicated and configured to handle H.323 and AE Services device registrations.

If there is ping level connectivity between AE Services and Communication Manager (note that this is not on the same ports that H.323 traffic will be sent), access the Communication Manager System Access Terminal (SAT), and run the command 'list trace ras ip-station XXXXX'. Now, try to register the DMCC device. If there is output from the command, check for any denial codes. The denial codes are cryptic but very useful when they are available. If there is not even a GRQ (Gatekeeper Request) appearing, then verify that the target CLAN/procr for the registration is accepting H.323 registrations; examine the Allow H.323 Endpoints flag on the 'display ip-interface' form. Note that you may need to do a 'list ip-interface' to find the board address for the CLAN/procr with the Communication Manager IP address you are working with.

Use a normal H.323 IP station (e.g. 9620) to attempt the registration. Statically set the Call Server IP address to the same IP address as is being used by AE Services as the Communication Manager registration IP address. Assuming that works, move the station to be in the same subnet as AE Services (physically as close to AE Services as possible), and re-run the test. In some network arrangements, we have seen a firewall or router issue blocking the registration attempt.

If the 'list trace' command produces a denial event 2041: IP RRJ-No DSP Resource message, verify that Communication Manager has medpro resources that are accessible from the network region in which the registration is occurring. Use the SAT command 'list media-gateway' or 'list ip-interface medpro' to make sure that there are media processor resources configured on Communication Manager. Use the 'list ip-interfaces all' command along with an examination of the 'display ip-network-region X' form to check that the network region that the device is registering into (from the associated CLAN/procr) is allowed to use medpro resources either in the same network region or in some remote network region. If the registering device cannot access medpro resources, the registration will be blocked. Note that the network region assigned to a CLAN/procr can be overridden for specified originating IP addresses using the 'change ip-network-map' form.

Are all registrations failing or only some? If only some, then look closely at the provisioning of those stations for differences (e.g. station type), or other differences (e.g. the IP address the registrations are being sent to, or the network-region).

Can you register through AE Services using the DMCC Dashboard? If yes, then the problem is most likely that the application is not well behaved. Look for errors in the registration request itself (dependency mode, media mode, etc).

Does the provisioned station type support a speakerphone (this is required for DMCC registrations to succeed)? If not, change the station type to one that does (e.g. 4620).

Is it possible that the CLAN/procr is out of available sockets? Use the SAT 'status socket-usage' command to gain insight.

Are there adequate DMCC_DMC or IP_API_A licenses? Check using the AE Services WebLM interface (the Administration and Maintenance Guide provides release specific access details). Also check the Communication Manager IP_API_A licenses using the 'display system-parameters customer-options' form. There must be one type or the other available.

A device registration requires that a DMCC license can be acquired. DMCC licenses can reside on the WebLM server associated with AE Services (DMCC_DMC), or they can reside on Communication Manager (IP_API_A). When AE Services allocates a DMCC_DMC license, it needs to inform Communication Manager that it has done so prior to initiating the device registration. This notification is done through the "DAPI link", which uses the switch connection between AE Services and Communication Manager for message transport. If the switch connection is not operational, this notification cannot be sent; the registration attempt releases the DMCC_DMC license and attempts to register 'the old way'. If Communication Manager does not have any available IP_API_A licenses, the registration is blocked. Verify that the deviceID has a switch connection name in it, and that the corresponding switch connection's state is 'Talking'.

Session Handling

The CMAPI/AE Services server expects a TCP socket to always be established between the application server and the CMAPI/AE Services server. Interruption of the connectivity of this link must be handled by either "reconnecting" to the server, or releasing and re-establishing the session. Some forms of network equipment (e.g. firewalls) may influence the reliability of the link based on its internal timeouts.

The best treatment of this subject is provided by the existing Java Programmer's Guide and the Java sample code. The document to reference is the Avaya Aura Application Enablement Services Device, Media, and Call Control API Java Programmer's Guide for the release you are using. Review the information in the "Session Management" and "Recovery" sections. Even if your application is XML based, the information in the Java Programmer's Guide is very helpful and can largely be applied to an XML implementation. There is also information provided in the XML Programmer's Guide. Additional guidance for Java applications can be found in the sample code in the CMAPI SDK for the CLICK2CALL application, which demonstrates recovery of the session using CMAPI 3.0 capabilities. See also "During the Session Inactive state what happens with CM events destined for the application".

If the Session Duration Interval is ignored by the application, it may encounter the error
"ch.ecma.csta.errors.ResourceBusyException: The device [43100:S8700:192.168.241.64:0] is being used by another application"
when it attempts to create a device ID for a device it was controlling prior to the network outage. This condition will last until the Session Duration Interval expires and the AE Services server clears the old session.

If the application is trying to reconnect after a previous instance of the application aborted abnormally, you may see the error above, and the application will need to wait until the interval expires on its own (there is no way to force the interval to expire).

The default session duration is 180 seconds, though the application can change that when the session starts (StartApplicationSession request). If you do decide to change it, consider how often the session keep-alive is sent (ResetApplicationSession request), i.e. if the session duration is set too short and the keep-alives are not frequent enough, the sessions will expire more often.

When the session duration timer expires, then the cleanup delay timer starts. The session is inactive until the cleanup delay expires; you can try to recover an inactive session (see the Java Programmers Guide and sample code for more details). If the cleanup delay expires, then the session is cleaned up (device unregistered, DeviceID released, monitors stopped, etc.) and recovery is not possible. The default cleanup delay is 0, but again you can change it when you start the session.
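As a minimal sketch of the keep-alive relationship, assuming a hypothetical sendResetApplicationSession() wrapper around the client (not the DMCC SDK API): the refresh period should be comfortably shorter than the negotiated session duration, for example half of it.

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/**
 * Illustrative keep-alive scheduler (not the DMCC SDK API). Sends a
 * ResetApplicationSession-style refresh well before the session duration
 * expires; the send method is a hypothetical wrapper around your client.
 */
public class SessionKeepAlive {

    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    /** Hypothetical wrapper that issues the ResetApplicationSession request. */
    protected void sendResetApplicationSession() {
        // issue the refresh request on the established DMCC session here
    }

    /** Refresh at half the negotiated duration (e.g. every 90 s for 180 s). */
    public void start(int sessionDurationSeconds) {
        long period = Math.max(1, sessionDurationSeconds / 2);
        scheduler.scheduleAtFixedRate(
                this::sendResetApplicationSession, period, period, TimeUnit.SECONDS);
    }

    public void stop() {
        scheduler.shutdownNow();
    }
}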

See the article "During the Session Inactive state what happens with CM events destined for the application" for additional information regarding necessary processing after session recovery.

Building and Invocation Errors

Exception in thread
"QueuedExecutor"
javax.xml.parsers.FactoryConfigurationError: Provider
org.apache.crimson.jaxp.SAXParserFactoryImpl could not be instantiated:
java.lang.NullPointerException

The application is apparently utilizing the release 1.5 JRE, which no longer supports the Crimson parser. This is not supported with CMAPI 2.1 and 3.0. Please make sure that you are utilizing the 1.4 Java runtime environment. JRE 1.5 is supported by release 3.1 of the Device, Media and Call Control API (CMAPI).

In Eclipse, you can check the runtime environment version being used by going to the menu bar, clicking "Window", and selecting "Preferences". Then click on "Java" in the left panel, and then on "Installed JREs". A popup window should appear. Make sure that a 1.4 version of the JRE is selected. You may need to add one. If you do not have one installed, please access www.sun.com to locate, download, and install it.

Make sure you delete any executables that have been compiled with a different version (e.g. 1.5) of the JRE, since they will be incompatible with the 1.4 JRE. The following error is an indication that you need to "clean" your build environment.

runClick2Call:
[java] java.lang.UnsupportedClassVersionError: sampleapps/click2call/Click2Call (Unsupported major.minor version 49.0)
[java] at java.lang.ClassLoader.defineClass0(Native Method)
[java] at java.lang.ClassLoader.defineClass(ClassLoader.java:539)
[java] at java.security.SecureClassLoader.defineClass(SecureClassLoader.java:123)
[java] at java.net.URLClassLoader.defineClass(URLClassLoader.java:251)
[java] at java.net.URLClassLoader.access$100(URLClassLoader.java:55)
[java] at java.net.URLClassLoader$1.run(URLClassLoader.java:194)
[java] at java.security.AccessController.doPrivileged(Native Method)
[java] at java.net.URLClassLoader.findClass(URLClassLoader.java:187)
[java] at java.lang.ClassLoader.loadClass(ClassLoader.java:289)
[java] at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:274)
[java] at java.lang.ClassLoader.loadClass(ClassLoader.java:235)
[java] at java.lang.ClassLoader.loadClassInternal(ClassLoader.java:302)
[java] Exception in thread "main"

If you receive a message containing the following when attempting to "sh runTutorial.sh" on a Linux machine:

This script is a placeholder for the /usr/bin/java and /usr/bin/javac
master links required by jpackage.org conventions. libgcj's
rmiregistry, rmic and jar tools are now slave symlinks to these masters,
and are managed by the alternatives system.

This change was necessary because the rmiregistry, rmic and jar tools
installed by previous versions of libgcj conflicted with symlinks
installed by jpackage.org JVM packages.
libgcj-java-placeholder.sh

Modify your /etc/profile as follows:
In the "Path Manipulation" section, there is an if/fi block. Add the following under the last pathmunge statement:
    pathmunge /usr/java/j2sdk1.4.2_06/bin

Save the file, exit, and at the command line, enter "source /etc/profile". Then enter "echo $PATH" and verify that your PATH includes the above. You should not have to run the "source /etc/profile" command again once you reboot.

The application is unable to find the required "avaya_logo.gif" file. There are two changes that need to be made in the source code. The first is that the file name needs to change to 'avaya_logo.gif' instead of 'logo_avaya.gif'. The second is that getClassLoader() should be added to the code sequence. To resolve this, in the file "LoginGUI.java", please replace:

URL url = LoginGUI.class.getResource("logo_avaya.gif");
    with
URL url = LoginGUI.class.getClassLoader().getResource("avaya_logo.gif");

Re-compile and execute the application.

Resource Allocation and Capacity Information

The AE Services Overview document (02-300360_1.pdf) provides capacity information for CMAPI and other APIs. The information below is copied from that source. The reference to 720 events in tables 7-9 is events per second.

The number of simultaneous active calls that an application can expect to handle depends on many factors, such as:

  • What else is running on the application machine
  • What else is running on the AE Services Server
  • The processor speed of the application machine and AE Services Server
  • The amount of Communication Manager IP traffic and amount of IP resources (such as CLANs) to handle the traffic
  • The amount of other IP network traffic
  • The combination and timing of service requests your application makes
  • Your application's demand for VoIP resources relative to the VoIP resources available on Communication Manager
  • The codec used /packet size for media
  • Media mode used

In lab tests, the following results were obtained for our call recording application and for a station-registration-only application (no call recording). These results were obtained using a remote client proxy and a single AE Services server. These results scale linearly with the number of AE Services servers.

Call Recording (server media)

                                                             G.711 A or Mu-law, 20 ms packets   G.729, 60 ms packets
Exclusive Control - Server Media Mode                        75 simultaneous sessions           120 simultaneous sessions
Exclusive Control - Client Media Mode or Telecommuter Mode   1000 simultaneous sessions         1000 simultaneous sessions
Exclusive Control - Server Media Mode                        75 simultaneous sessions           120 simultaneous sessions
Shared Control                                               1000 simultaneous sessions         1000 simultaneous sessions

If an endpoint is involved in a conference these numbers are not impacted as the conferencing is done by Communication Manager not the AE Services Server. If more than one party in a conference is being monitored/recorded, they each count as a session from a capacities perspective.

The limit on conference participants is six parties total. If the limit is reached, then service observation cannot be performed on the conference (a service observer counts as one participant). An attempt to activate service observing on a conference that has reached its conference limit will be denied by Communication Manager.

  • To what region do the CMAPI soft phones register?
    There are two methods.
    • You can map a range of IP addresses that includes the CMAPI soft phones' addresses to network regions in the ip-network-map form (as covered in question 2 below).
    • If you do not use the first method, the CMAPI soft phone inherits the network region of the C-LAN that it registers with.
  • Can the CMAPI soft phones be mapped to a certain region by the "IP ADDRESS MAPPING" form like any other IP phone?
    Yes.
  • Do all the CMAPI soft phones (on the same AE Services/CMAPI server) register to the same region (because they all have the same IP address, that of the AE Services/CMAPI server itself)?
    Yes, if you use the first method above; no, if you use the second method. With the second method, you would specify different C-LANs (assuming the different C-LANs are configured in different network regions) for different CMAPI soft phones.

If you are having problems receiving media at an RTP application server, check the following:
  • When the CMAPI application registered the extension, it specified the codec that will be used for the duration of that "registration session". It also specified the RTP address for the device for the duration of the "registration session".
  • Verify that the IP network region of the C-LAN being used for registration allows that codec in its ip-codec-set on CM. If there is not a match, the application will not receive media.
  • If the call is an inter-region call, verify that the inter-region connectivity is configured properly on CM.
  • If CM is configured for IP address (extension) mapping, verify that the ip-codec-set for the region that the endpoint is assigned to is properly configured.

Currently, Device, Media and Call Control (DMCC) does not allow an application to get the RTP stream for only one participant in a call. This functionality may be considered for future releases.

Noise instead of audio is typically caused by using the wrong codec to decode the RTP stream. The DMCC API provides the codec used for the connection in the media start event; refer to the "simple record" sample code contained in the .NET SDK for more detailed code information.

AE Services supports media encryption for Device, Media, and Call Control applications that use Exclusive Control with either Client or Server media mode. The Media Encryption setting on the Communication Manager IP Codec Set form applies to both media modes. In Server media mode, the AE Server terminates the RTP media stream and handles encryption and decryption. In Client media mode, media is terminated on a machine the application designates when registering the device, and that machine is responsible for decrypting incoming media and encrypting outgoing media. AE Services also provides a media stack in the Device, Media, and Call Control Java Client SDK that encrypts and decrypts media; this media stack can be used by applications that rely on Client media mode. More information regarding this subject is available in the document 'Avaya MultiVantage Application Enablement Services Administration and Maintenance Guide Release 4.0', 02-300357, Issue 6, February 2007, available on the Avaya support website: http://support.avaya.com/elmodocs2/AES/4.0/02_300357_6.pdf.

Yes, it is possible to register multiple devices in the same session using the DMCC API. The 'GetDeviceId' request can be used to set up a device identifier for each CM extension that the application needs to instantiate as a softphone. If the application needs to register many devices, spread out the registration: register no more than 50 stations at a time and wait until the application has received all of the responses before attempting to register any more stations (see the sketch below).
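As an illustration of that batching advice, the sketch below registers at most 50 extensions per burst and waits for every response before starting the next burst. The registerAndAwaitResponse() method is a hypothetical asynchronous wrapper around your DMCC client, not an SDK call.

import java.util.List;
import java.util.concurrent.CompletableFuture;

/**
 * Illustrative batching sketch (not the DMCC SDK API): registers at most
 * 50 extensions at a time and waits for every response before continuing.
 */
public class BatchRegistrar {

    private static final int BATCH_SIZE = 50;

    /** Hypothetical async wrapper that completes when the response arrives. */
    protected CompletableFuture<Void> registerAndAwaitResponse(String extension) {
        return CompletableFuture.completedFuture(null);
    }

    public void registerAll(List<String> extensions) {
        for (int start = 0; start < extensions.size(); start += BATCH_SIZE) {
            List<String> batch =
                    extensions.subList(start, Math.min(start + BATCH_SIZE, extensions.size()));
            CompletableFuture<?>[] pending = batch.stream()
                    .map(this::registerAndAwaitResponse)
                    .toArray(CompletableFuture[]::new);
            CompletableFuture.allOf(pending).join();   // wait for the whole batch
        }
    }
}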

The AE Services 4.0 "Operations, Administration and Maintenance" (OA&M) page provides an option to list and un-register the devices which are registered using DMCC.

  • 1) Navigate to 'OA&M Home -> CTI OA&M Administration -> Status and Control -> Services Summary'.
  • Select 'DMCC Service' and
  • Click 'Details -> Device Summary' in the upper portion of this form. On this page, devices can be unregistered by selecting the appropriate device(s) and clicking the 'Terminate Devices' button.

Media Encryption

Following is the list of codecs supported by AE Services:

  • G.711 A-law (g711A)
  • G.711 Mu-law (g711U)
  • G.729 (g729)
  • G.729 Annex A (g729A)
  • G.723 - client media mode only (g723)

The codec set needs to be specified at the time of device registration for both client and server media mode registrations. If no specific codec set is chosen, Communication Manager (CM) defaults to G.711 A-law as the first choice and G.711 Mu-law as the second choice. If CM cannot satisfy a request for a specific codec, the call will still go through but media will not be available. When a call is established, CM uses the codec set from the endpoints and the codec set associated with the parties' IP network regions to determine the codec chosen for the call. Once the call is established, media shuffling with the far-end endpoint may occur, resulting in the codec changing mid-call.

The following XML fragment in the RegisterTerminal DMCC XML request will set the codec to G.729:
<localMediaInfo>
  <codecs>g729</codecs>
</localMediaInfo>

The same can be accomplished using the DMCC Java SDK:

MediaInfo localMediaInfo = new MediaInfo();
localMediaInfo.setCodecs(new String[] {Audio.G729});

For server media mode, the user cannot specify a mixture of G.711 and G.729 codecs for a single device. This is because there is no means for AE Services to select a file containing the right audio encoding when it is requested to play media to the connection.

While encrypting media (voice) data, the stream of voice bits is XOR'd with a second stream of encryption bits before transmission. This second stream of bits is called the cipher vector, and its contents are a function of an Initialization Vector (IV) and an encryption key. In a 16-byte (128-bit) encryption scheme, a 128-bit IV is encrypted with a 128-bit encryption key (KE) to produce a 128-bit cipher vector for use with AES encryption. The 128-bit cipher vector is then XOR'd with the first 128 bits of voice data and the result is ready to be transmitted. The IV is then incremented by one and the process is repeated for the next 128 bits of voice data. If the remaining voice stream (packet) is less than 128 bits, only that part of the cipher vector is used. Once the voice stream (packet) is fully encrypted, the IV is incremented and the process is repeated on the next voice packet.

[Graphical representation of the voice stream encryption omitted.] Note that if the voice stream is smaller than 383 bits, the sequence would complete prior to all bits in the IV being used.


Programmatically, it can be accomplished as follows:

x = 0; y = 0;

cipher = calc_first_cipher(session_key, salting_key, RTP_header);

do
{
    Encrypted_voice[x++] ^= cipher[y++];     /* XOR one payload byte with one cipher byte */
    if (y == 16) {
        cipher = calc_next_cipher(cipher);   /* new 16-byte cipher block for the next 16 bytes */
        y = 0;
    }
} while (x < size_of_voice_packet);

Where:
  • x = packet index (0 to packet size - 1, packet size in bytes)
  • y = cipher block index (0-15, 16 bytes)
  • cipher = vector which is a function of the session key, initialization vector, salting key, and RTP header info, recalculated after processing/encrypting 16 bytes.

Implementations may wish to optimize these functions by performing the XOR operation on 2, 4, 8, or 16 bytes at a time, but any implementation must be able to encrypt any leftover bytes individually, one at a time (using the appropriately updated cipher if necessary), consuming only as much of the cipher vector as there are media packet bytes left to encrypt. The IV is used to create a cipher for every 16 bytes (block) of payload to be encrypted. The remaining/leftover bytes are treated like a new block, in that the IV is incremented (IV++) and a new 16-byte cipher is created with that IV.

Encryption of the remaining payload bytes with the new 16-byte cipher starts at the low-order end of the cipher block.
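To make the keystream construction concrete, the following is a minimal, self-contained Java sketch of the XOR-with-incrementing-IV pattern described above. It uses AES-128 in ECB mode purely to turn each IV value into a 16-byte cipher block; it is an illustration of the counter-style mechanism only, not the exact Avaya media-encryption derivation (the session/salting key handling and RTP header contribution are omitted, and the IV is taken as given).

import javax.crypto.Cipher;
import javax.crypto.spec.SecretKeySpec;
import java.math.BigInteger;

/**
 * Illustrative counter-style keystream XOR, as described in the text.
 * This is NOT the exact Avaya media-encryption derivation: the IV here is
 * taken as given, and salting/RTP-header handling is omitted.
 */
public class KeystreamXorExample {

    public static byte[] encrypt(byte[] payload, byte[] key128, byte[] iv128) throws Exception {
        Cipher aes = Cipher.getInstance("AES/ECB/NoPadding");
        aes.init(Cipher.ENCRYPT_MODE, new SecretKeySpec(key128, "AES"));

        byte[] out = new byte[payload.length];
        BigInteger iv = new BigInteger(1, iv128);

        for (int offset = 0; offset < payload.length; offset += 16) {
            // Encrypt the current IV value to obtain the next 16-byte cipher block.
            byte[] counterBlock = toFixedLength(iv, 16);
            byte[] cipherBlock = aes.doFinal(counterBlock);

            // XOR as many payload bytes as remain (may be fewer than 16).
            int n = Math.min(16, payload.length - offset);
            for (int i = 0; i < n; i++) {
                out[offset + i] = (byte) (payload[offset + i] ^ cipherBlock[i]);
            }
            iv = iv.add(BigInteger.ONE);   // increment the IV for the next block
        }
        return out;
    }

    /** Left-pads or truncates the value to exactly len low-order bytes. */
    private static byte[] toFixedLength(BigInteger value, int len) {
        byte[] raw = value.toByteArray();
        byte[] fixed = new byte[len];
        int copy = Math.min(raw.length, len);
        System.arraycopy(raw, raw.length - copy, fixed, len - copy, copy);
        return fixed;
    }
}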

For devices being registered in exclusive control mode, the application can optionally specify, at registration time, any one or more of the following media encryption options:

  • Advanced Encryption Scheme (AES)
  • none (i.e. no encryption of the media stream)
The encryption option has to be specified at the time of terminal registration and this can be done by including the following XML fragment in the RegisterTerminal request:
<localMediaInfo>
<codecs>g729</codecs>
<encryptionList>aes</encryptionList>
<encryptionList>none</encryptionList>
</localMediaInfo>

The same can be configured using the DMCC Java SDK as follows:
MediaInfo media = new MediaInfo();
String [] encryptionList = {"AES", "NONE"};
media.setEncryptionList(encryptionList);
If the user does not specify any encryption option, the AE Services server will default to "none" (no media encryption).

Other

An error in the DMCC Java SDK version 8.1.3 means it is not possible to start a Call Control monitor when using a DMCC protocol version older than 8.1.3 (http://www.ecma-international.org/standards/ecma-323/csta/ed3/privE). This means that any application built with the 8.1.3 SDK cannot be used with an older version of AE Services.

This problem will be fixed in version 8.1.3.2 of the DMCC Java SDK, due to be released in the second half of 2021.

In the meantime, a hotfix is available on the DevConnect website. On that page you will find the following files:

  • DMCC Java SDK version 8.1.3 (both Windows and Linux)
  • A link to the PSN document which describes how to install the hotfix on top of the SDK
  • The hotfix files as a zip archive.

The hotfix comprises two files (one jar and one xml). Developers who need fixes/features in the DMCC Java SDK version 8.1.3 should use these files, instead of the equivalent files in the SDK, when building, testing, and deploying an application that uses the 8.1.3 version of the DMCC Java SDK. Applying the hotfix is recommended in all situations.

The purpose of the following scenarios and questions is to understand the mechanism that determines on which gateway (or gateways) a conference call will be held (also in terms of MedPro/VoIP and time-slot resources) in a system that includes several regions and several gateways (or Port Networks).

Please elaborate on the exact way each scenario in the table below will set up the conference between the 3 participants in each scenario (the participants belong to different regions according to the different columns).

The IP phones are considered to be configured for direct IP-IP audio in both inter- and intra-region communications in an IP-Connect configuration.

Many countries have regulations which require that a tone be inserted into a call while it is recorded.  Communication Manager can be configured to generate such a tone. Additionally, the application can configure a device to insert a tone when that device is involved in a call in some recording configurations.

Service Observing Recorders
If the Service Observing (SO) methodology of call recording is being used, then Communication Manager can be configured to insert a recurring tone into the call through the Service Observing: Warning Tone? parameter on the 'change system-parameters features' SAT form in the Call Center section (page 11/19 in release 8.1).
When this parameter is enabled (it is enabled by default), a warning tone is played into a call at regular intervals while a service observer is present in the call. It is not possible to alter this tone.

Single-Step Conference (SSC) and Multiple Registration (MR) Recorders
As of AE Services 6.3 and CM 6.3, a DMCC device can be pre-configured to cause the Service Observer Warning Tone to be inserted into a call when the device is added into the call.

The application creates the device (the recorder) and then configures it to add recording tone using the Generate Telephony Tones feature.  After that, any time the device is added into a call (via SSC or Multiple Registrations), the tone is provided to the call.   Currently (release 8.1) it is not possible to alter this tone. The tone is the same tone that is used with Service Observing. The tone cadence is:

  • 1400 Hz at -11 dB for 200 msec
  • Silence for 15 sec
  • Repeat [1400 Hz, silence] forever

For example, to enable the Generate Telephony Tones feature for a recorder, using .Net:

serviceProvider.getCallAssociated.GenerateTelephonyTones(recordingDevice.getDeviceIdAsString, null);

See the appropriate DMCC programmers guide for more information on Generate Telephony Tones.

Alternative Methods

Conference Tone

If the Single Step Conference (SSC) methodology of call recording is being used, then Communication Manager can be configured to provide a recurring tone to all conference calls. This is enabled through the Conference Tone flag of the 'change system-parameters features' SAT form (page 6/19 in release 8.1). The system must also have a conference tone configured which will generate a repeating tone.  For example, the following causes a short tone to be generated every 2 seconds.

change tone-generation                                          Page   2 of  21

TONE GENERATION CUSTOMIZED TONES

Tone Name        Cadence        Tone
Step    (Frequency/Level)

conference             

1:     330/-8.0           Duration(msec): 200
2:     silence              Duration(msec): 2000
3:     goto                         Step: 1

Note: ALL conference calls will hear this tone, regardless of the call being recorded or not.

Application Injected Tone

When SSC or SO are in use for recording, the recording application has a media capable endpoint (the DMCC based recorder) connected into the call.  The application can use this device to send RTP representing a tone to Communication Manager that will be summed into the audio stream sent to all the call participants.

Normally, it is not required for the DMCC endpoint to inject RTP into a call so the application adds it in listen only (SO) or silent (SSC) mode.   In order to be able to inject RTP into a call, the recorder must be added in listen/talk (SO) or active (SSC) mode.

There are some drawbacks to using non-silent mode. First, it consumes an extra talk timeslot for the recorder, which can reduce the maximum number of simultaneous recordings that can be performed on a G4x0 gateway compared to silent mode recording. Second, for SSC, the phone displays may show “Conference” instead of the number/name of the other party.

If MR is used for recording, the monitored phone and the DMCC device cannot ‘talk’ at the same time.  In order to inject RTP into the call, the application must use the share-talk feature to be able to send RTP instead of the monitored phone.  The sequence is as follows:

  1. Application presses share-talk button to take control of the voice path (potentially interrupting the speaker)
  2. Application sends tone as RTP to the media server or media gateway
  3. Application presses share-talk button to return control of the voice path to the phone

There is more information on the share-talk feature in the appropriate DMCC programmers guide.

Note: Anything the agent says while the tone is being played (more specifically, between the times when the two share-talk button presses are processed by Communication Manager) will be lost. This can have undesirable effects.
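If the application generates the tone itself, producing the samples is straightforward once the device is able to talk. The sketch below produces 16-bit linear PCM for a 1400 Hz, 200 ms burst at 8 kHz followed by silence; encoding to the negotiated codec (e.g. G.711) and RTP packetization are left to the application, and the amplitude used is an approximation rather than a calibrated -11 dB level.

/**
 * Illustrative tone-burst generator: 16-bit linear PCM at 8 kHz,
 * 1400 Hz for 200 ms followed by 15 s of silence. Codec encoding and
 * RTP packetization are left to the application; the amplitude is an
 * approximation only.
 */
public class RecordingToneBurst {

    private static final int SAMPLE_RATE = 8000;     // Hz
    private static final double TONE_HZ = 1400.0;
    private static final int BURST_MS = 200;
    private static final int SILENCE_MS = 15_000;

    /** One full cadence cycle: 200 ms of tone followed by 15 s of silence. */
    public static short[] cadenceCycle() {
        int burstSamples = SAMPLE_RATE * BURST_MS / 1000;
        int totalSamples = SAMPLE_RATE * (BURST_MS + SILENCE_MS) / 1000;
        short[] pcm = new short[totalSamples];       // trailing silence is already zero
        double amplitude = 0.28 * Short.MAX_VALUE;   // roughly -11 dB relative to full scale
        for (int i = 0; i < burstSamples; i++) {
            pcm[i] = (short) (amplitude * Math.sin(2 * Math.PI * TONE_HZ * i / SAMPLE_RATE));
        }
        return pcm;
    }
}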

Be aware of the following things:

  • Add listeners prior to registering so that events are not missed after the registered event is received.
  • If the device is unregistered (either by application action or by other factors such as loss of connectivity with CM), note that the Listeners that have been added are still associated with the device. All that is needed is to re-register; adding the listeners again will cause problems. Alternatively, disconnect the listeners prior to restarting the process of re-registering.

If the AE Services Server does not receive requests or keep-alive messages from the application and the session enters the inactive state, any events received from CM for devices being tracked by the session are not queued (i.e. they are discarded). If the session is reestablished before it is terminated, the application must reacquire the state information for the device(s) it was monitoring. The application will therefore need to issue getLampState, getHookswitch and getDisplay requests to update any state information it was keeping. It is also good practice to issue a getButton request at regular intervals, including after the session has gone inactive, to verify that there have been no provisioning changes to the monitored device. Note that it is not necessary to reconnect listeners when the session is reestablished.
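
A minimal Java sketch of the recovery sequence described above, using a hypothetical DeviceStateServices wrapper; the method names echo the request names mentioned here (getLampState, getHookswitch, getDisplay, getButton) but are placeholders rather than the exact DMCC SDK signatures.

// Hypothetical wrapper; the real DMCC SDK calls and signatures differ.
interface DeviceStateServices {
    Object getLampState(String deviceId);
    Object getHookswitch(String deviceId);
    Object getDisplay(String deviceId);
    Object getButton(String deviceId);
}

public class SessionRecovery {
    private final DeviceStateServices services;

    public SessionRecovery(DeviceStateServices services) {
        this.services = services;
    }

    // Call this after the application session has been re-established following
    // an inactive period; events received while inactive were discarded, so the
    // cached device state must be refreshed. Listeners do not need to be re-added.
    public void resyncDevice(String deviceId) {
        Object lampState = services.getLampState(deviceId);
        Object hookswitchState = services.getHookswitch(deviceId);
        Object displayContents = services.getDisplay(deviceId);
        // Also confirm that the device provisioning has not changed.
        Object buttonInfo = services.getButton(deviceId);
        // ... update the application's cached view from these responses ...
    }
}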

The exception "java.net.NoRouteToHostException" is described in the Sun/Java documentation as follows: "Signals that an error occurred while attempting to connect a socket to a remote address and port. Typically, the remote host cannot be reached because of an intervening firewall, or if an intermediate router is down."

To disable the firewall on the CMAPI connector server (2.1 and prior) run the following command:

/sbin/service iptables stop

  • Review your network for an unanticipated firewall or misconfigured router.
  • Make sure the destination IP address is correct using ping or a similar technique (a simple TCP connectivity probe is also sketched below).
  • Review the /etc/hosts file for a misconfigured AE Services Server entry.
  • If using a DNS name, make sure that the domain name server is operational and accessible from the application server.
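
If the checks above do not reveal the problem, a quick way to test basic reachability from the application host is to attempt a plain TCP connection to the AE Services server and DMCC port (4721, or 4722 for the secure port). The following Java snippet is a simple, generic connectivity probe, not part of the DMCC SDK; the host name shown is a placeholder.

import java.net.InetSocketAddress;
import java.net.Socket;

public class AesReachabilityCheck {
    public static void main(String[] args) {
        String host = args.length > 0 ? args[0] : "aes.example.com"; // AE Services server address (placeholder)
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 4722; // secure DMCC port
        try (Socket socket = new Socket()) {
            // A NoRouteToHostException or a timeout here points at routing/firewall
            // issues rather than at the DMCC application logic.
            socket.connect(new InetSocketAddress(host, port), 5000);
            System.out.println("TCP connection to " + host + ":" + port + " succeeded");
        } catch (Exception e) {
            System.out.println("Connection failed: " + e);
        }
    }
}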

The following instructions are provided for developers working with "lab" or development machines. Turning "up" logging is not intended to be done on production machines due to potential service impacts associated with doing so. If logging is increased on a production machine, be sure to decrease it when you are finished.

These instructions apply to AE Services release 6.3 and later. For releases prior to this, please see the FAQ How can I monitor the XML being sent and received by the AE Services Server (debug, log, trace) – pre 6.3 release?

As of AE Services release 6.3, it is possible to enable and download DMCC traces using the AE Services OAM administration web page.

To change the trace level, login to the OAM Administration website and navigate to Status > Log Management. Next, change the value for “XML Logging” in the DMCC section to “Finest” and click Apply Changes.

(Screenshot: Log Manager screen on the Management Console)

On the next screen, click Apply. Once enabled/disabled through the web interface, logging will automatically begin/end immediately.

When you are done collecting logs at a higher log level, remember to return to this web page and reduce the logging filter to "FINE".

To retrieve DMCC traces from AE Services, login to the OAM Administration website and navigate to Status > Logs > Error Logs. Select "dmcc-trace.log.0" and click Download to download the trace as a zip file to your computer.

Note that AE Services uses a rolling log file technique (.0 becomes .1, .1 becomes .2 and so on) to avoid creating overly large log files. Depending on the interval between the problem occurrence and downloading the log file, the logfile name may have been updated. It is useful to synchronize the AE Services clock with the proper time of day, and use timestamps to locate the appropriate file that contains the information of interest.

The following instructions are provided for developers working with "lab" or development machines. Turning "up" logging is not intended to be done on production machines due to potential service impacts associated with doing so. If logging is increased on a production machine, be sure to decrease it when you are finished.

This FAQ is valid for all releases of AE Services. However, as of AE Services 6.3, there is a simpler alternative procedure available. This is described in the FAQ, How can I monitor the XML being sent and received by the AE Services Server (debug, log, trace)?

Rather than working through the following instructions, you may wish to simply use the example dmcc-logging.properties file included at the end of this answer.

Changes to Logging in AE Services 5.2

Please note that this guide is based on AE Services Release 5.2 and later. For releases of AE Services before 5.2, the configuration file, /opt/mvap/conf/dmcc-logging.properties, is called /opt/mvap/conf/logging.properties and the log files, /opt/mvap/logs/dmcc-trace.log.*, are called /opt/mvap/logs/mvap-trace.log.*

Understanding how the logging levels / handlers work

The Java logger in the AE Services Server uses a concept of handlers. The idea is that various handlers know how to handle the logs that are emitting from the application. Each handler has a log level associated with it that specifies how it should filter. It won't let any logs past it that are not at least as severe as the level defined for that handler. Handlers can be chained together, in which case each handler applies its filter in turn.

The first thing the logger does when applying filters is to check to see if the calling class has a log level specified for it at the bottom of the /opt/mvap/conf/dmcc-logging.properties file. This would typically be done to enable FINER or FINEST level logging for a particular piece of code. If a level is defined for this class, that level is applied as a filter. If not, the global .level setting is used from the top of the file.

Next, the logger will send the log to each of the handlers that has been entered on the "handlers" line. These handlers will each apply their own filters and rules, and do the appropriate thing with the logs.

DMCC handlers

DMCC has three handlers specified on its "handlers" line:

  • ThreadedHandler: This is the handler that ends up going to the dmcc-trace.log.* files. It ends up actually writing to the log file on a different thread so that it doesn't cause the main DMCC threads to back up because of heavy log traffic. It is chained with the FileHandler, which is the handler that actually writes to the file. If you want to change the log level for the dmcc-trace.log.* file, you have to change the level for both the ThreadedHandler and the FileHandler, since they each apply their filter. By default, FINE logs and above get logged here.
  • ErrorFileHandler: This is the handler that ends up going to the dmcc-error.log.* files. This uses the MemoryHandler behind the scenes. All logs down to the FINER level are held in memory and are not written to the file. If a WARNING level log is received, however, the last 500 entries at FINER level or above are pushed to the log file. This allows the individual who is debugging to see some context of what might have caused the problem.
  • ApiFileHandler: This handler logs all API calls to a file.

How to increase the level of logging

In general, if you want to increase the detail of the logging on DMCC, you'll want to change what's getting logged to the dmcc-trace.log.* files. You'll need to edit the /opt/mvap/conf/dmcc-logging.properties file on the AE Services Server, and then restart the server. There are two ways you might want to turn up logging:

  • Turn up the log levels for all classes: If you want to increase the log level for all classes, you need to change the level for the ThreadedHandler and for the FileHandler. If you want to go all the way to FINEST, you'll also have to change the global level to FINEST.
  • Turn up the log level for individual classes: If you only want to increase the log level for some classes, you need to add lines at the bottom of the file that are of the form <packagename.classname>.level = FINEST (e.g. com.avaya.mvcs.proxy.CstaMarshallerNode.level=FINEST). You'll then have to change the levels for the ThreadedHandler and FileHandler to not filter out these logs. Note that you may then have to change the global level to FINE, if you don't want all the FINER logs from the other classes. This is necessary since you relaxed the restrictions on the ThreadedHandler and FileHandler.

To see all XML messages coming in and out of the AE Services server

Add the following to the bottom of the dmcc-logging.properties file in the AE Services server in the /opt/mvap/conf directory:

# #################################################
# Enable tracing of all XML messages into the dmcc-trace.log.* files
com.avaya.mvcs.proxy.CstaMarshallerNode.level=FINEST
com.avaya.mvcs.proxy.CstaUnmarshallerNode.level=FINEST
# #################################################

Before you will see the output you need to change the com.avaya.common.logger.ThreadedHandler.level and com.avaya.common.logger.FileHandler.level to FINEST:

com.avaya.common.logger.ThreadedHandler.level=FINEST
com.avaya.common.logger.FileHandler.level=FINEST

Then you probably want to reduce the global filter level to FINE so that you do not get extraneous FINER/FINEST output from all the other classes now that the handlers no longer filter it out:

.level=FINE

The log files are kept in /opt/mvap/logs on the AE Services server. A maximum of 20 log files is kept. The most recent is dmcc-trace.log.0, and the logs wrap to a new file based on total file size.

Please remember to return logging to the default level when you are done.

Enabling the New Logging Settings

The new logging levels/information can be enabled by one of the following:

  • Restart the AE Services as the root user via the following command line interface: [root@youraes ~]# /sbin/service aesvcs restart
  • Starting with Build 37-1 of AE Services 3.1, it is possible to change the logging levels without a service disruption. To do this, restart the DmccMain JVM from the command line as follows as a root user: [root@youraes ~]# jps
    3250 Bootstrap
    3732 run.jar
    3707 WrapperSimpleApp
    5552 Jps
    4119 SnmpAgent
    3649 Main
    3466 LcmMain
    8035 DmccMain
    [root@youraes ~]# kill -12 8035

BEWARE: If you are tailing dmcc-trace.log.0, this file rolls over to a new file, hence you will have to restart your "tailing".

The following information is a sample dmcc-logging.properties file from a release 6.2 AE Services server with the above changes applied to it. Note that future releases of the AE Services server may have additional filters in the dmcc-logging.properties file, so you may wish to be careful when applying this example to other releases of an AE Services server. The changed lines are those described above.

Example dmcc-logging.properties file:

############################################################
# DMCC Server Logging Configuration File
############################################################

############################################################
# Global properties
# Default global logging level.
# This specifies which kinds of events are logged across
# all loggers. For any given facility this global level
# can be overriden by a facility specific level
# Note that the ConsoleHandler also has a separate level
# setting to limit messages printed to the console.

.level=FINE
# lower the .level setting from FINER to FINE to reduce the extraneous
# FINEST information from the logs and just get XML.

# handlers defines a whitespace separated list of class
# names for handler classes to load and register as handlers
# on the root Logger (the Logger named ""). Each class name
# must be for a Handler class which has a default
# constructor. Note that these Handlers may be created
# lazily, when they are first used.
handlers=com.avaya.common.logger.ThreadedHandler com.avaya.common.logger.ErrorFileHandler com.avaya.common.logger.ApiFileHandler

# config defines a whitespace separated list of class names.
# A new instance will be created for each named class. The
# default constructor of each class may execute arbitrary
# code to update the logging configuration, such as setting
# logger levels, adding handlers, adding filters, etc.
#config=
############################################################

############################################################
# configure com.avaya.common.logger.ThreadedHandler

# com.avaya.common.logger.ThreadedHandler logs to its target
# Handler asynchronously (on an independent thread),
# preventing server threads from blocking for disk I/O
com.avaya.common.logger.ThreadedHandler.target=java.util.logging.FileHandler

com.avaya.common.logger.ThreadedHandler.level=FINEST
############################################################

############################################################
# configure java.util.logging.FileHandler
# level specifies the default level for the Handler (defaults to Level.ALL).
# filter specifies the name of a Filter class to use (defaults to no Filter).
# formatter specifies the name of a Formatter class to use (defaults to java.util.logging.XMLFormatter)
# encoding the name of the character set encoding to use (defaults to the default platform encoding).
# limit specifies an approximate maximum amount to write (in bytes) to any one file. If this is zero, then there is no limit. (Defaults to no limit).
# count specifies how many output files to cycle through (defaults to 1).
# pattern specifies a pattern for generating the output file name. (Defaults to "%h/java%u.log").
# append specifies whether the FileHandler should append onto any existing files (defaults to false).

java.util.logging.FileHandler.level=FINEST
java.util.logging.FileHandler.pattern=../logs/dmcc-trace.log
java.util.logging.FileHandler.limit=10485760
java.util.logging.FileHandler.count=20
java.util.logging.FileHandler.formatter=com.avaya.common.logger.MillisecFormatter
############################################################

############################################################
# configure com.avaya.common.logger.ErrorFileHandler
# This handler contains code that uses a MemoryHandler that
# pushes to a ThreadedHandler whose target is a FileHandler
# with the pattern specified here. The level set here
# is propagated through the entire Handler chain.
# The result is a log containing detailed error pretext.
com.avaya.common.logger.ErrorFileHandler.level=FINER
com.avaya.common.logger.ErrorFileHandler.pattern=../logs/dmcc-trace.log
############################################################

############################################################
# configure java.util.logging.MemoryHandler
# filter specifies the name of a Filter class to use (defaults to no Filter).
# level specifies the level for the Handler (defaults to Level.ALL)
# size defines the buffer size (defaults to 1000).
# push defines the pushLevel (defaults to level.SEVERE).
# target specifies the name of the target Handler class. (no default).
java.util.logging.MemoryHandler.level=FINE
java.util.logging.MemoryHandler.size=1000
java.util.logging.MemoryHandler.push=WARNING
############################################################

############################################################
# configure com.avaya.common.logger.ApiFileHandler
# This handler is a ThreadedHandler whose target is a
# FileHandler with the pattern specified here. The level set
# here is propagated to the FileHandler. By default, this
# Handler is configured with a filter to log all API calls.
# filter specifies the name of a Filter class to use (defaults to no Filter).
com.avaya.common.logger.ApiFileHandler.level=FINE
com.avaya.common.logger.ApiFileHandler.pattern=../logs/dmcc-trace.log
com.avaya.common.logger.ApiFileHandler.filter=com.avaya.common.logger.RegExFilter
############################################################

############################################################
# configure com.avaya.common.logger.RegExFilter
# Filters LogRecords by matching their Logger name using the
# regular expression specified in the pattern property.
com.avaya.common.logger.RegExFilter.pattern=^com\.avaya\.api.*
############################################################

############################################################
# Facility specific properties (extra control per logger)
#com.xyz.foo.level = SEVERE
sun.rmi.level = WARNING
com.avaya.platform.jmx.Mx4jXSLTProcessor.level = WARNING
############################################################

############################################################
# Enable tracing of all XML messages into the dmcc-trace.log.* files
com.avaya.mvcs.proxy.CstaMarshallerNode.level=FINEST
com.avaya.mvcs.proxy.CstaUnmarshallerNode.level=FINEST
############################################################

You can monitor for Physical Device Events, specifically Lamp Mode events, and look for the following:

  • If you are enabling Service Observation via a feature access code, then the active call appearance (where the feature access code has been "dialed") will change from steady green (Lamp Mode = steady, Lamp Color = Green) to dark (Lamp Mode = off).
  • If enabling via an SO button on the CMAPI phone, then the lamp (Lamp Mode) associated with the SO button flutters. If a feature access code is used and the station has a SO button provisioned, it will flutter as well.
  • If the SO enabling is unsuccessful, then the lamp will be steady green for the period of time when the switch is playing an intercept tone to the CMAPI phone.
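
A rough Java sketch of the kind of check described above. The LampUpdate type, its fields and the button function strings are hypothetical stand-ins; in a real application the values would come from the DMCC Physical Device Services lamp mode events.

// Hypothetical representation of a lamp mode update; the real DMCC event
// classes and accessor names differ.
class LampUpdate {
    String buttonFunction;  // e.g. "call-appr" or "serv-obsrv" (placeholder values)
    String lampMode;        // e.g. "steady", "off", "flutter"
    String lampColor;       // e.g. "green"
}

public class ServiceObservingLampCheck {
    // Interpret a lamp update received while Service Observing is being activated.
    public static String interpret(LampUpdate update) {
        if ("serv-obsrv".equals(update.buttonFunction) && "flutter".equals(update.lampMode)) {
            return "SO activation succeeded (SO button lamp is fluttering)";
        }
        if ("call-appr".equals(update.buttonFunction) && "off".equals(update.lampMode)) {
            return "SO activation via feature access code succeeded (active call appearance went dark)";
        }
        if ("call-appr".equals(update.buttonFunction)
                && "steady".equals(update.lampMode) && "green".equals(update.lampColor)) {
            return "Possible failure: lamp stays steady green while intercept tone plays";
        }
        return "No Service Observing state change detected";
    }
}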

The DMCC .NET API has been released with the 4.1 version of the AE Services software. It can be found on the Release 4.1 Avaya Aura Application Enablement Services page.

Yes. The same capabilities are available in all three DMCC APIs. The Java and .NET APIs provide simplifications to some interfaces, such as monitoringServices and session handling, and they send and receive the XML to and from AE Services on the application's behalf. Programmers using any of the three APIs have access to the same capabilities.

You can use a SIP trunk (where DTMF over IP is set to inband RTP [i.e. "rtp-payload" on the signaling group form] -- note that out-of-band is not supported on SIP trunks).

The CMAPI tone detection (ttd_mode) should be set to "out-of-band" for proper delivery of DTMF from TDM sources and IP trunk sources configured as "in-band." The "Intra-System IP DTMF Transmission Mode" setting in the system-parameters ip-options form must be set to "out-of-band" when ttd_mode is "out-of-band"

The "DTMF over IP" setting in the signaling group applies to the trunk that the signaling group is associated with. IP trunks connect two separate switches that are usually a considerable distance apart and so, a compressed codec like G.729 is usually used on the IP trunk. In-band DTMF detection in G.729 streams is not always reliable, and so most people use out-of-band DTMF on IP trunks.

SIP trunks can only support in-band RTP. This is different from in-band tone detection. So essentially there are three different modes in CM:

  1. In-band RTP (through RTP header)
  2. In-band (through payload itself)
  3. Out of band (through h.323 signaling)

CM will always signal tones out-of-band to H.323 IP phones. IP (H.323) phones will not do in-band tone detection. Similarly, tones originating from an IP (H.323) phone are always out of band.

With respect to CMAPI endpoints, tones will always be passed out of band if the far end is an IP (H.323) endpoint or a SIP trunk.

In CM 2.1, the settings on the ip-options form are for inter-gateway calls as well as calls between digital endpoints and CMAPI stations.

Also in CM 2.1, the settings on the signaling group form are for IP (H.323) trunks only.

In CM 3.0, all tones are out of band for CMAPI endpoints. Note that IP phones always send tones out of band. The signaling group form and the ip-options form apply only to circuit-switched endpoints communicating with a CMAPI endpoint. With CM 3.0, load 333, this administration will be disabled.

All events from the CMAPI/AE Services server have an invokeID of 9999.

Responses are not necessarily immediate and in the same order as the requests were made. The invokeID can be used to correlate responses to requests.

Call related events may be in different order from one call to the next (ringing, lamp and display updates are one example).

Applications will receive a response to a command/request in the form of a positive response or a negative response. Most negative responses come in the form of exceptions. There are a few cases of explicit negative responses, such as StartApplicationSessionNegResponse.

It is also possible for events related to application of the request to be received before the response for that request.
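
Because responses can arrive out of order and events carry the fixed invokeID of 9999, applications commonly keep a map of outstanding requests keyed by invokeID. The following Java sketch shows that generic correlation pattern; the Message class and the way messages are received are assumptions, not DMCC SDK types.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class InvokeIdCorrelator {
    private static final int EVENT_INVOKE_ID = 9999; // all events use this fixed invokeID

    // Hypothetical message holder; a real application would use the SDK's types.
    public static class Message {
        int invokeId;
        String body;
    }

    private final Map<Integer, String> pendingRequests = new ConcurrentHashMap<>();

    public void requestSent(int invokeId, String description) {
        pendingRequests.put(invokeId, description);
    }

    public void messageReceived(Message message) {
        if (message.invokeId == EVENT_INVOKE_ID) {
            // Unsolicited event; not tied to any outstanding request.
            handleEvent(message);
            return;
        }
        String request = pendingRequests.remove(message.invokeId);
        if (request != null) {
            // Positive or negative response to a previously sent request.
            handleResponse(request, message);
        }
    }

    private void handleEvent(Message message) { /* application-specific handling */ }
    private void handleResponse(String request, Message message) { /* application-specific handling */ }
}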

Please take a look at the XML Programmers' Guide, which discusses the request/response framework.

If you're interested in implementing your own CMAPI soft phone and using it to converse as a phone in order to access the media stream from Communication Manager, then you need to first register your CMAPI application in exclusive control mode of the device (extension), with client media mode. In this mode, your application will have exclusive control of the signaling interface for an extension (which first needs to be administered in the switch) and will send and receive the RTP stream to and from a RTP address that the application specifies when registering.

Because the client application is terminating the RTP stream, it will need an RTP stack. The developer can choose from the Avaya-provided Media Stack, a 3rd party stack, or the developer can implement their own. The application should also monitor for Media Control Events (MediaStart and MediaStop) - these will provide the application with the far end RTP information. The "simple IVR" Java sample code included with the SDK (or the RPTC .NET Sample App) may be a useful starting point for programmers to work from.
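
A minimal Java sketch of the flow described above, using hypothetical types: the MediaStartInfo fields stand in for whatever the real MediaStart event exposes, and the RtpStack interface stands in for the Avaya-provided media stack, a third-party stack or the developer's own implementation.

// Hypothetical stand-ins for the media control event data and the RTP stack;
// the real DMCC classes and their accessors differ.
class MediaStartInfo {
    String farEndAddress;  // far-end RTP address reported by the MediaStart event
    int farEndPort;
    String codec;          // negotiated codec, e.g. G.711
}

interface RtpStack {
    void openStream(String remoteAddress, int remotePort, String codec);
    void closeStream();
}

public class ClientMediaHandler {
    private final RtpStack rtpStack;

    public ClientMediaHandler(RtpStack rtpStack) {
        this.rtpStack = rtpStack;
    }

    // Invoked when a MediaStart event arrives for the registered extension.
    public void onMediaStart(MediaStartInfo info) {
        rtpStack.openStream(info.farEndAddress, info.farEndPort, info.codec);
    }

    // Invoked when a MediaStop event arrives.
    public void onMediaStop() {
        rtpStack.closeStream();
    }
}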

Digits are dialed by utilizing button presses, using the button IDs for the digits. Please see the section 'Making a call' in the XML or Java Programmers Guide.

The application should use the AE Services server IP address and CMAPI port (in release 3.0 the default port is 4721; for 3.1 use the secure port 4722) to connect to the AE Services server - see the 'Establish a connection to the connector server' section in the XML Programmers Guide.

The application first establishes an application session in order to exchange application messages (requests, responses, and events) with the AE Services server - see the 'Establishing an application session' section in the XML or Java Programmers Guide. For Java applications using the API, starting a session is handled within the API during the getServiceProvider() request.

Then the application issues a getDeviceID request for each of the devices that it needs to control. A getDeviceID request includes the following pieces of information:

  • One of the following two:
    • IP address of a 'C-LAN' or PROC interface on the switch (Communication Manager)
    • A 'switch connection name' administered on the AE Services Server (this will become the preferred method in release 3.1).
  • An extension on Communication Manager, i.e. extension of the phone you want to control.

    A release 3.0 deviceID for extension 1006 on a C-LAN with IP address of 10.30.91.100 looks like:
    1006::10.30.91.100:0
    Note: For a G700/S8300, there is a single C-LAN function, so the application developer should use the IP address of the S8300 Media Server. For larger PBXs such as the S8500, S8700, and S8710 Media Servers paired with MCC1/SCC1/CMC1/G600/G650 Media Gateways, there are usually several C-LAN interfaces. In that case, the application developer would use the IP address of one of the C-LAN boards. The optional "switch connection name" lets the application developer just specify the switch the extension is on, and the AE Services server will select a C-LAN for the application - through the C-LAN list ('H.323 gatekeeper list'), which must be administered on the AE Services server. See the section on 'Getting device identifiers' in the XML or Java Programmers Guide.

    A good way to see the XML message exchanges between a CMAPI client application and the AE Services server is to run one of the sample JAVA applications and use Ethereal to monitor and capture the packets between the client and AE Services server. Then extract the XML messages from the captured stream.

    Another method to observe the exchange of information between the CMAPI/AE Services server and the application is to increase the logging level on the server. See the article "How can I monitor the XML being sent and received by the AE Services Server"?

    The CMAPI Java SDK has the sample apps along with README instructions. The sample application most likely to provide a good starting point is the Soft Phone sample application.

Monitoring of IP Softphones is not supported up through release 3.1 by CMAPI (Call, Device and Media Control API). Attempts to register for control of an IP Softphone will cause the IP Softphone to be unregistered, and the CMAPI registration attempt will fail.

CMAPI has undergone quite a few naming changes from one release to the next. In all the articles in this FAQ, CMAPI is synonymous with Device and Media Control and Call, Device and Media Control.

The simplest CMAPI application requires an extension on the switch (Communication Manager). An application may use more than one extension on the switch depending on the nature of the application. The extension may be associated with a physical port (as in the case of a DCP phone) or it may be an IP endpoint. Therefore, those extensions must first exist (i.e. be administered) in the switch translations/administered data before they can be successfully registered to by the application.

The CMAPI application sends a GetDeviceID request to the CMAPI/AE Services server; the request includes that station's extension along with an identifier for the switch that extension resides on (an IP or DNS address or the "switch connection name"). After the application receives a response to the getDeviceID request, it can configure monitoring services for the device, and then register the deviceID (CMAPI application) with the CMAPI server via a RegisterDevice (3.0) or RegisterTerminalRequest (3.1) request. The register device request contains the deviceID and password for the extension. The CMAPI server in turn registers the CMAPI application with the identified switch using the extension and switch identifier that were provided in the GetDeviceID request.

While handling the register request, the extension and password are used to validate the information provided against the referenced CM's translations. If the credentials are valid and the extension is properly configured (IP Softphone enabled, and a DCP or IP station) and is in an in-service maintenance state, the registration request succeeds.

Applications should not be run on the AE Services server. The only exception would be the use of the SDK sample applications running on the AE Services server for the purpose of troubleshooting or education. The typical network architecture involves one (or more) application servers feeding one (or more) AE Services/CMAPI servers which in turn feed one (or more) Communication Managers.

During the lamp refresh audit, an application will see button 262's lamps get updated. Button 262 corresponds to the message waiting lamp, which is implemented in CM as a button. The application should use the Get Message Waiting Indicator method (or get-message-waiting-indicator.xsd) within the Physical Device services to monitor the message waiting state and not depend on the updates to button 262.

No, it is not possible to monitor the "Mute/Unmute", "Exit", "Previous", or "Next" buttons (right arrow button) through DMCC. These are functions local to the phone and thus to the particular (soft)phone implementation. There is currently no way to control these buttons via DMCC. These are 'local' keys on the phone and no signaling goes to/from Communication Manager when the key is pressed. AE Services monitors the changes occurring in Communication Manager, and if the phone does not inform Communication Manager, AE Services is not aware of the button push. Other 'local' keys include buttons such as 'MUTE', 'Headset', 'Volume Up/Down', 'Options', 'Contrast' controls (on some phones), 'Redial', 'Page Left/Right' and the four soft keys under the display panel.

It is not possible to collect DTMF tones for a device that is registered in "telecommute" mode either using DMCC .NET or Java SDKs. To collect DTMF tones, the device must be registered in exclusive control mode. The application can then choose either "Server media" mode, where the connecting server handles the media or "Client media" mode, in which case the application needs to process the media all by itself.

The DMCC Java SDK includes a "Softphone" sample code located in the directory "examples\src\com\avaya\cmapi\softphone" which can be run in either Shared or Exclusive control mode. However, this sample code does not include the implementation of the audio path in exclusive mode using client media. You will need to use a third-party stack (e.g. Java Media Framework) capable of doing RTP or use the "clientmediastack" sample code located in "examples\src\sampleapps\clientmediastack" directory included with the DMCC Java SDK. Within the DMCC .NET SDK there is sample code for a "simple call recording" application. This application utilizes client media mode to send and receive RTP data from the application. While the .NET API handles many of the details regarding the packet handling for RTP, the developer can observe much of the logic and learn from it. If the developer is using the DMCC .NET SDK, they can utilize the library calls provided by the API to handle the lower layers of the RTP media stream.

You can refer to "Avaya Multivantage™ Application Enablement Services Device, Media and Call Control API XML Programmer Guide R3.1.1" document 02-300358 issue 2.2 dated May 2006 (http://support.avaya.com/elmodocs2/AES/3.1.1/02_300358_2_2.pdf). Within that document read "Appendix B: Migrating Communication Manager API 2.1 Applications to Application Enablement Services 3.0" and "Migrating from AE Services 3.0 to AE Services 3.1". The items which may need modifications during a conversion from CMAPI 2.X to DMCC 3.X are covered here. There is a similar Appendix in the "Avaya Multivantage™ Application Enablement Services Device, Media and Call Control API Java Programmers Guide R3.1" for java applications. Also reading the 'What's new' section of the 3.0 version of the programmer's guide, paying careful attention to the 'Application-affecting changes' section, will also be helpful.

The AE Services 4.0 DMCC XML SDK only supports the first party Call Control services, such as, making a call, answering a call, and so on. The AE Services 4.0 DMCC XML SDK does not support the third party Call Control services. These third party Call Control services will be available in the AE Services 4.1 DMCC release. Use the 'dashboard' application, which is bundled with the dotNET SDK, to get a preliminary view of the functionality that is planned to be provided in the AE Services 4.1 DMCC release.

In the "Full Participation" mode, the display on the Agent phone shows 'conference #' the moment conference call is established and notification is sent to the various parties that are in the conference. However, in the "Silent" mode, no notification is received on the Agent phone.

Using Avaya Device, Media and Call Control (DMCC) XML APIs, an application can make a 'snapshotCall' request and information about all the parties in the referenced call will be returned in a 'SnapshotCallResponse' element. Element 'snapshotCallResponseInfo' is repeated for every party in the call. Inside this, element 'localConnectionInfo' shows 'Connected' state for all the active connections. For inactive members, this field shows 'null'. The XML response for this scenario is provided below:

<SnapshotCallResponse xmlns="http://www.ecma-international.org/standards/ecma-323/csta/ed3">
  <crossRefIDorSnapshotData>
    <snapshotData>
      <snapshotCallResponseInfo>
        <deviceOnCall>
          <deviceIdentifier typeOfNumber="explicitPrivate:localNumber" mediaClass="notKnown" bitRate="constant">2022:CMSIM::0</deviceIdentifier>
        </deviceOnCall>
        <callIdentifier>
          <deviceID typeOfNumber="other" mediaClass="notKnown" bitRate="constant">2022:CMSIM::0</deviceID>
          <callID>1016</callID>
        </callIdentifier>
        <localConnectionInfo>connected</localConnectionInfo>
      </snapshotCallResponseInfo>
      <snapshotCallResponseInfo>
        <deviceOnCall>
          <deviceIdentifier typeOfNumber="explicitPrivate:localNumber" mediaClass="notKnown" bitRate="constant">2009:CMSIM::0</deviceIdentifier>
        </deviceOnCall>
        <callIdentifier>
          <deviceID typeOfNumber="other" mediaClass="notKnown" bitRate="constant">2009:CMSIM::0</deviceID>
          <callID>1016</callID>
        </callIdentifier>
        <localConnectionInfo>null</localConnectionInfo>
      </snapshotCallResponseInfo>
      <snapshotCallResponseInfo>
        <deviceOnCall>
          <deviceIdentifier typeOfNumber="explicitPrivate:localNumber" mediaClass="notKnown" bitRate="constant">2023:CMSIM::0</deviceIdentifier>
        </deviceOnCall>
        <callIdentifier>
          <deviceID typeOfNumber="other" mediaClass="notKnown" bitRate="constant">2023:CMSIM::0</deviceID>
          <callID>1016</callID>
        </callIdentifier>
        <localConnectionInfo>connected</localConnectionInfo>
      </snapshotCallResponseInfo>
    </snapshotData>
  </crossRefIDorSnapshotData>
</SnapshotCallResponse>

Refer to 'LocalConnectionState' element in the XMLdoc (bundled with Avaya DMCC XML SDK) for further details.

This behavior occurs when the 'Idle Appearance Preference' station option is set to 'Y'. This option is present on the second page of the 'display station XXX' form. The 'Idle Appearance Preference' option causes Communication Manager (CM) to pre-select an idle call appearance when a call is offered to the station. This prevents the display information from being sent when the call begins to ring. When 'Idle Appearance Preference' is set to 'N' and a call is offered to an idle station, CM pre-selects the ringing call appearance, which triggers the display information for the call to be sent at the same time that the station begins to ring. Users with 'Idle Appearance Preference' set to 'Y' prefer to manually choose whether going off-hook should answer a call (by pressing a button) or originate a new call (allowing the pre-selected idle appearance operation). When a station has 'Idle Appearance Preference' set to 'Y', display updates are sent after the ringing call appearance is selected. Since DMCC requires that the station be speaker equipped, selecting the ringing call appearance triggers the call to be answered in addition to triggering the display to be updated.

In AE Services 4.1, a hole in the security checking of the deviceID embedded in the connectionID (activeCall) element of the SSC request was closed. The application had been sending a zero in place of the deviceID in the connectionID. In AE Services 4.1, the system checks the provided deviceID to see if it is allowed to be controlled by the login that the application provided during the StartApplicationSession request. In the case where the application is providing an invalid device identifier (e.g. zero), the test fails, and the application's SSC request is rejected.

As a general rule, connectionIDs should be copied from some other DMCC message and used as-is. An application should avoid constructing any CSTA identifier such as a deviceID or connectionID. Avaya may change the format of these identifiers (particularly the deviceID) at some future point, which could break any application that constructs or parses the device identifier for information.

If an application is taking callIDs acquired from TSAPI/JTAPI and using them to construct a connectionID for DMCC, care should be taken with the deviceID element in the connectionID. The recommendation is to utilize a deviceID provided by the AE Services server from some response or event. Valid deviceIDs can be constructed using the getDeviceId() and getThirdPartyDeviceId() requests. The response to a snapshotCall() request will also contain deviceIDs that can be used when it is necessary to construct a connectionID. Another source for valid deviceIDs is the DeliveredEvent or the EstablishedEvent from Call Control services.

The XML Schema standard provides '< xsd:any >' as a wildcard element. '< xsd:any >' enables schemas to be extended in a well-defined manner. Any syntactically correct XML can be inserted in place of '< xsd:any >'. This type of element allows loose coupling, enables versioning and provides flexibility where XML APIs are evolving. The development team began to encounter tools that were throwing errors on encountering these tags. The tags were in the XSDs for backward compatibility, in the event the XSDs are extended in the future. In digging through the issue it was determined that the solution was to give up on using '##any' to allow room for backward compatibility, so it was removed everywhere it occurred.

This response may be received in response to a request when running a higher version of DMCC SDK against a lower version of AE Services server. AE Services server supports backward compatibility. Running a lower version of DMCC SDK against a higher version of AE Services server will work.

When an application uses server media mode, most of the media processing is done by the AE Services server. In client media mode, the application needs to handle media processing by itself. Client media mode gives the application freedom to select some of the codecs (like G.723) which are available only in client media mode. Additionally, in client mode, the application has better control over the recording location, can be more responsive to user input, and can support better scalability than when using server media mode. The application can use a third party utility to store the media related files.

In server media mode, media files are stored in the '/tmp' directory (by default) on the AE Service server. The user can specify any existing properly configured directory for storing these media files using the OAM administration web page. Login to OAM Administration website and navigate to 'CTI OAM Home > Administration > DMCC Configuration > Media Properties'. Next, configure the player directory and the recording directory where media files are stored. These files then can be extracted (copied off the AE Services server) using SSH or SFTP.

During recording, by default the AE Services server generates file names having the format: "[timestamp] [extension].wav". The generated file name is returned in the RecordMessageResponse message, which can then be used by the DMCC application. This file name can also be found in the StopMessage event. The DMCC application can also specify an appropriate file name in the RecordMessage request. File names specified for the recorded files must be relative to the configured directory, and the configured directories must already exist. Recordings cannot overwrite an existing file. If an application needs to play back these recorded messages, the PlayMessage request should be used with the same filename saved earlier during recording.

Please refer to the document Avaya Application Enablement Services Device, Media and Call Control API XML Programmer Guide R4.1, An Avaya MultiVantage® Communications Application, 02-300358, Issue 3.0, December 2007.

It is recommended that the 'switchName' always be used. The 'switchName' field in the device ID is presently only required for Call Information Services, Call Control Services, Snapshot Services and Logical Device Feature Services. If an application is not using any of these services and does not wish to take advantage of the round-robin H.323 Gatekeeper assignment feature, it is not required to administer an H.323 gatekeeper list or specify a switchName in the GetDeviceID request.

To add an H.323 Gatekeeper address, use a web browser and navigate the AE Services web pages to 'CTI OAM Home > Administration > Switch Connections'. Then select the appropriate connection from the list and click on the 'Edit H.323 Gatekeeper' button. Add the IP address(es) of the C-LAN(s) or procr interface on CM that are to be used for the H.323 device registrations for DMCC devices. The application can then use the switchName provisioned in AE Services when making getDeviceID requests for that CM.

For further information on this, please refer to section 'Populating the Switch Name field', chapter 3 of document Avaya Application Enablement Services Device, Media and Call Control API XML Programmer Guide R4.1 An Avaya MultiVantage® Communications Application 02-300358 Issue 3.0 December 2007.

A white paper has been written on this topic. It is recommended reading for anyone interested in call recording methodologies:

DevConnect Developer Guide: Developing Client-side Call Recording Applications using Avaya Application Enablement Services (899 KB .pdf)

If the application registers the device (extension) in exclusive mode and specifies "client media" (and provides an IP address for the RTP media to be directed to), then the application will receive the media streams for calls that device is involved in. As of AE Services release 4.0 exclusive mode was renamed "Main" mode. An alternative approach to this method is described as Method 3 below.

There are three design approaches to performing call recording using DMCC APIs and CM. The first two create an extension and cause that extension to be "conferenced" into a call. This conferencing can be done with service observing or the single step conference feature. Applications that utilize these approaches will implement one soft phone per recording device.

  • Method 1: Using service observing
    The soft phone has a pre-provisioned service observing (SO) button on the recording device/soft phone. The SO button is provisioned with the destination station whose calls are to be recorded. When the application initializes, it activates the SO feature. Then, when calls arrive at the observed extension, the application is automatically notified of the call arrival and it answers the call in conjunction with the destination station answering. In Communication Manager 4.0 the maximum number of Service Observers in a call was increased to two.
  • Method 2: Using SingleStepConference
    In this solution the application uses call control services of TSAPI, JTAPI or 4.1 DMCC to monitor the recorded station for incoming calls and/or a specific button push that indicates the user wishes recording to begin. The application then invokes the TSAPI/JTAPI Single Step Conference (SSC) feature to conference in the recording device DMCC station when the recorded station is active on a call.

    These solutions will work if the monitored extension is making use of telecommuter mode, or direct media (either TDM or IP).

  • Method 3: Multiple Registrations for an Extension
    A third approach is available with AE Services 4.1 and CM 5.0. The advantage of this approach is that it does not encounter the limitations imposed by Communication Manager's maximum party count for Service Observers or parties in a call. In this approach, a client media mode endpoint in dependent or independent dependency mode is registered against the extension that the application wishes to record. The application provides RTP address information and codec information as part of the registration sequence. When calls are handled by the registered extension, the application receives a copy of the audio of the call. This audio stream is a sum from all parties (including the extension that the application registered as). Through DMCC's call control services, information about the participants of the call can be discovered.

Note that within the .NET SDK there is sample code for a SimpleRecord application which implements method 2. If you are interested in using this method, observing the behavior of this sample code is strongly recommended.

An InvalidDeviceIDException is returned when the deviceID supplied in the request is improperly formatted (e.g. missing or incorrect fields). Some applications have been manufacturing their own deviceIDs when interfacing to both TSAPI/JTAPI and DMCC. Avaya reserves the right to change the format of deviceIDs in the future, thus the application should not format its own deviceIDs based on a perceived understanding of the current arrangement of internal fields. It is recommended that the application use GetDeviceID or GetThirdPartyDeviceID requests to get the correct device ID. The former request is used for first party control and the latter can be used for third party control. For further information on this, please refer to section 'Populating the Switch Name field', chapter 3 of document Avaya Application Enablement Services Device, Media and Call Control API XML Programmer Guide R4.1, An Avaya MultiVantage® Communications Application, 02-300358, Issue 3.0, December 2007.

Avaya does not provide API capabilities to collect or measure frequency, pitch, or the tone of the voice in the conversation on a real time basis. The application must implement or interface to a third party application that provides these services.

In order to implement such a solution using the DMCC SDK, the user application needs to register the DMCC station in "Client Media Mode", and then pass the received RTP information through the analytical function.

Avaya provides a mixed RTP voice stream of all call participants, so the analysis must take this into account when processing the data. It is not presently possible to access independent voice streams for specific call participants through DMCC or other APIs available from AE Services.

The DMCC SDK always communicates with the AE Services server and hence the server is required for both the development and production environments. The Avaya Aura Basic Development Environment (BDE) can be used while developing an application as it contains both an AE Services server and a Communication Manager server. The softphone client created using the DMCC SDK is not an H.323 softphone. The AE Services server communicates with the Communication Manager using H.323 and other protocols, but the messages between the DMCC client and the AE Services server are actually in XML format and use CSTA Phase III concepts.

Call progress indications can be obtained by a Device, Media and Call Control services application using DMCC protocol version 4.1 or later. With the DMCC API, the application can use call related events to be informed of call progress information, such as: OriginatedEvent, DeliveredEvent, EstablishedEvent, ConnectionClearedEvent, TransferredEvent, ConferencedEvent, etc.

The following are the steps to retrieve the contents of the display:

  1. For DMCC Java SDK, the DisplayUpdatedEvent.getContentsOfDisplay() method can be used to provide the display contents.
  2. For DMCC XML SDK, the contentsOfDisplay element in the DisplayUpdatedEvent event has to be parsed to retrieve the contents of the display.
  3. For DMCC .NET SDK, the DisplayUpdatedEventArgs.getContentsOfDisplay() method can be used to provide the display contents.

There is no limitation on the length of the display from the SDK side. Whatever the phone displays will be captured by the DMCC SDK in the DisplayUpdatedEvent event structure. Hence, this is independent of the phone set type. Within Communication Manager, the display information is dynamically generated based on the available information and the station type. Typical display areas are between 24 and 80 characters. However, some station types support multiple display lines, and thus the DMCC API may provide a longer field in the DisplayUpdatedEvent.
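
For example, a Java application might react to display updates along the following lines. The getContentsOfDisplay() accessor is the one named in the list above; the DisplayUpdate type and the listener wiring shown here are simplified stand-ins for the actual SDK classes.

// Stand-in for the DMCC DisplayUpdatedEvent; only the accessor named above is modelled.
class DisplayUpdate {
    private final String contentsOfDisplay;

    DisplayUpdate(String contentsOfDisplay) {
        this.contentsOfDisplay = contentsOfDisplay;
    }

    public String getContentsOfDisplay() {
        return contentsOfDisplay;
    }
}

public class DisplayMonitor {
    // Called whenever a display update arrives for the monitored station.
    public void displayUpdated(DisplayUpdate event) {
        String contents = event.getContentsOfDisplay();
        // Typical displays are 24-80 characters; multi-line sets may return more.
        System.out.println("Station display now shows: [" + contents + "]");
    }
}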

For incoming calls, the Delivered Event and Established Event provide the ANI (Automatic Number Identification) of the calling device when it is available. For incoming calls over PRI (Primary Rate Interface) facilities, the "calling number" information element from the ISDN SETUP message or the assigned trunk identifier is specified in the event. If the "calling number" does not exist (i.e. it is not provided), a dynamic device ID is supplied in the Delivered Event. In scenarios where the monitored party is not the original destination of the call, ANI may not be available (e.g. call transfer). In the case of a conference or transfer scenario where the initial call is not monitored, the trunk information is not always available in the call control events to the monitored party, although it may appear on the station display of the transferred-to party. Even if the original call was monitored, there are call scenarios where the calling party information is not provided to the transferred-to or conferenced-in parties, even though it may appear on those parties' station set displays.

Using Device Media and Call Control's Display monitoring capability, an application can access the display information on a station. By using this service on the transferred to extension, an application may be able to access the external party's number. Feature buttons on Communication Manager for 'ANI Request' (ani-requst) and 'Conf Display' (conf-dsp) may be used on the station (if they are provisioned), to access calling party information in this scenario. If an application actively manipulates the display of the station by using these buttons, the end user of that station will observe the changes to the station display.

For outgoing calls, gaining access to the called number is difficult and sometimes not possible. If the application originated the call using the Make_Call or Consultation_Call methods, the application knows the digits of the external number used to place the call. The call ID provided in the response to the CTI request can be saved along with the external number that was provided to Make_Call or Consultation_Call. When someone answers the external number, the application receives an Established_Event and the call ID can be used to determine the called number (which may or may not be the answering party's number). This works only when the call is initiated by the application. When the call is initiated manually from a physical device, the called number for the external party is not available.

The TransferProxy service is what your application will have to use in order to get new listener objects. Information on this service was inadvertently left out of the 4.1 and 4.2 programmer's guides. This will be rectified in the next AES release. In the interim, the DMCC Java Programmer's Reference (Javadoc) details how to use this service, and an example of how to use the service can be found in the SessionManagementApp in the Java SDK. For details on session recovery see the 4.1 or 4.2 DMCC Java Programmer's Guide section titled "Recovery."

Yes, it is possible to monitor a single station using two AE Services servers simultaneously using the 3 Endpoint Registrations per Extension feature with DMCC in AE Services server 4.1.

Different DMCC applications may register (i.e. monitor) for the same extension through up to three (which is the maximum limit) AE Services servers simultaneously. Each DMCC client application establishes separate signaling paths through the different AE Services servers. Each application may optionally establish a separate media path as well which can be over a unique hardware path if the solution is appropriately provisioned. A single application may generate multiple registrations for the same device through different AE Services servers. If one AE Services server fails, the DMCC client application can continue using signaling and media paths of the second registration. Note that this is not an automatic failover of AE Services. To use this feature, make sure that the application is registered with a separate AE Services server and alternate network path connections (i.e., signaling and media paths) are also established.

An application using the same credentials (login and password), can register up to three instances of a specific device through a single AE Services server.

Please note that the 3 Endpoint Registrations per Extension feature requires Communication Manager Release 5.0 and AE Services server Release 4.1 or later. Three registrations are only available for a DCP station. An IP station or IP soft phone will consume one of the three registrations.

The three registration capability is a limit in Communication Manager. One registration is typically consumed by the physical station (or a soft client, e.g. one-X Communicator). In this case Communication Manager will restrict AE Services to two registrations. Alternatively, an application may use all three registrations (e.g. in independent mode) for some purpose.

Up to four (Release 6.3 and earlier) or eight (release 7.0) AE Services servers may establish a DMCC call control monitor for a single device. This is a limitation inherited from TSAPI. There are caveats to this statement. Please review the FAQ titled “How many individual applications can monitor a specific device?” in the General AE Services category.

It depends on what service(s) the failed C-LAN was providing.

If the failed C-LAN was providing H.323 registration service for the device, the application will be notified through a TerminalUnregisteredEvent. There is no possibility that a soft phone implemented with DMCC will automatically switch over to an alternate C-LAN. AE Services does not currently (as of release 4.2) support automatically switching to an alternate C-LAN from the AE Services' H.323 Gatekeeper List. Hence, the application must explicitly re-register the device until it succeeds (in this case it waits for the C-LAN to come back into service). If multiple C-LANs are available, the application can release the deviceID, request a new one and use an alternate C-LAN. AE Services supports an H.323 Gatekeeper list which does a round robin allocation of C-LANs to incoming deviceID requests. If the application uses this capability, the AE Services server will allocate a C-LAN for the subsequent deviceID request (not necessarily a different one, so multiple re-requests may be needed). For more details see the DMCC XML Programmer's Guide available on the DevConnect web portal.
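
A simple Java sketch of the "re-register until it succeeds" behaviour described above, with a back-off between attempts. The RegistrationServices interface, its method name and the back-off values are placeholders for whatever registration call and retry policy the application actually uses.

// Hypothetical registration wrapper; the real DMCC registration calls differ.
interface RegistrationServices {
    // Returns true when registration succeeds, false (or throws) otherwise.
    boolean registerTerminal(String deviceId) throws Exception;
}

public class ReRegistrationLoop {
    private final RegistrationServices registration;

    public ReRegistrationLoop(RegistrationServices registration) {
        this.registration = registration;
    }

    // Invoked after a TerminalUnregisteredEvent, e.g. when the C-LAN has failed.
    public void reRegisterWithBackoff(String deviceId) throws InterruptedException {
        long delayMs = 5_000; // initial back-off; tune for the deployment
        while (true) {
            try {
                if (registration.registerTerminal(deviceId)) {
                    return; // registered again, e.g. once the C-LAN is back in service
                }
            } catch (Exception e) {
                // Registration refused or failed; fall through and retry later.
            }
            Thread.sleep(delayMs);
            delayMs = Math.min(delayMs * 2, 60_000); // cap the back-off at one minute
        }
    }
}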

If the failed C-LAN is supporting a switch connection over which a CTI-link that the application was using is provisioned and there are other C-LANs in the provisioned list of C-LANs for that switch connection, then the failure is transparent to the application. If there is insufficient messaging transfer capacity with the reduced number of C-LANs, there will be noticeable issues with the CTI-link.

If the C-LAN that is lost is the last available C-LAN in the switch connection's C-LAN list, then the application will be notified of a call information link failure, and a monitor stop event. If the monitor was created on just call control services, then only that monitor is stopped. In the event that the monitor was on multiple services, then that set of monitors is stopped. When the switch connectivity CTI-link is restored, an event is sent to a call-information monitor. At this point the application can re-establish the monitor(s) and re-register the device.

Note that call information services use an AE Services to Communication Manager Definity API (DAPI) link which utilizes the switch connection link. The DAPI link is hidden from view on the AE Services OA&M interfaces. The DAPI link uses the same switch connection transport link that TSAPI services use, and thus the call information service link up/down events can be used to gain insight into the status of the TSAPI link; however, they do not necessarily have a one to one correspondence (TSAPI may be down when DAPI is up in some instances).

An announcement can be played every time an agent receives a call using the DMCC Call Control Services. The following steps need to be performed to play the announcement:

  1. Register an extension (X) for playing the announcement through DMCC. The dependency mode could be either server media mode or client media mode. Client media mode is probably a better solution from a performance and scalability perspective.
  2. Monitor the Agent for receipt of an Established event where the Agent's station is the answering party.
  3. When an inbound call is established at the Agent's extension, Single Step Conference an extension registered through DMCC (X from step 1) to play the desired announcement into the call.
Note: Most often, the Agent and the calling party will both hear the announcement. To determine when to initiate the Single Step Conference request, the application should use Call Control Services. When an Established event is received for a call delivered to the Agent, the Single Step Conference should be initiated. The extension that is conferenced in can be part of a pool of DMCC soft-phones registered in the main dependency mode. The application must disconnect the DMCC announcement source from the call once it has finished playing. If an announcement extension is used, it will automatically disconnect once the announcement has completed.
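
As a rough sketch of the Single Step Conference request from step 3, using the ECMA-323 binding classes referenced elsewhere in this FAQ (callCtlSvcs, agentConnection and announcerDeviceId are assumed to have been obtained already; the exact service interface and method names should be confirmed against the DMCC Java Programmer's Guide):

// Assumed: callCtlSvcs is a Call Control services reference obtained from
// ServiceProvider.getService(); agentConnection is the ConnectionID taken
// from the Established event at the Agent's station; announcerDeviceId is
// the DeviceID of the registered announcement extension (X from step 1).
SingleStepConferenceCall sscRequest = new SingleStepConferenceCall();
sscRequest.setActiveCall(agentConnection);       // the call to join
sscRequest.setDeviceToJoin(announcerDeviceId);   // the DMCC announcement device
callCtlSvcs.singleStepConferenceCall(sscRequest);

// Once the announcement has finished playing, drop the announcement device
// from the call (for example with a clear connection request), unless an
// announcement extension that auto-disconnects is used.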

If a call is made to an off-PBX number, an Automatic Route Selection (ARS) code, Automatic Alternate Routing (AAR) code or Trunk Access Code (TAC) has to be specified in the 'calledDirectoryNumber' field.

Following is the 'calledDirectoryNumber' format:

< TAC/ARS/AAR >< Extension number >:< Communication Manager Name >:< IP Address of the communication manager >:< CSTA Extensions >
where < Communication Manager Name > and < IP Address of the communication manager > are optional fields.
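
For example (hypothetical digits and names): if the ARS access code is 9 and the off-PBX number is 3035551234, the calledDirectoryNumber would be 93035551234. With the optional fields populated it might look like 93035551234:CM1:192.168.1.10, where CM1 is the switch connection name configured in AE Services.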

Applications may freely distribute the avaya.crt and ServiceProvider.dll files with a DMCC application. The legal allowance for this is provided in a LICENSE.txt file that is bundled with the DMCC SDK. The license file has the following specific statement that covers the distribution of these files: 'Avaya further grants Licensee the right, if the Licensee so chooses, to package client files with Licensee's complementary applications that have been developed using DMCC SDK.'

DMCC supports a maximum of 6 parties in a conference. This limit is imposed by Avaya Communication Manager. It is not possible to increase the number of parties beyond 6 in a conference.

The DMCC API is not meant for developing call center related applications. DMCC does not provide a generalized API to retrieve the Agent information for the Agent logged into a specific station extension. The TSAPI and JTAPI APIs are to be used for developing call center related applications.

To utilize the basic set of DMCC services, Communication Manager must be provisioned with an appropriate number of IP_API_A licenses. One IP_API_A license per registered device is required. A device may be registered as either shared control or exclusive control. In either case each device will consume one IP_API_A license. A few DMCC services (i.e. snapshot call, snapshot device and single step conference) also utilize a TSAPI Basic license for the duration of the transaction in the AE Server.

Under certain conditions (e.g. when WinZip 8.0 is installed) Microsoft Internet Explorer will change the extension of the downloaded file; for example, a file mvapdb21022007.tar.gz would be renamed to mvapdb21022007.tar. This causes the database import to fail and the database is not updated properly.

This affects all versions of AE Services that generate files with .tar.gz extensions when OAM is accessed from certain Windows configurations, and the extension may be altered differently depending on the configuration of the Windows machine (e.g. changed to .zip). The OAM administrator must ensure the file has the proper .tar.gz extension prior to invoking the restore.

Here are some methods to tell how far through setting up media your application is getting, and where to start troubleshooting.

In DMCC you should be creating start/stop media monitors. Events sent to these listeners will tell the application that CM has created media sockets to receive media on (start media), and that CM has begun sending media to the RTP addresses the application specified when it registered.

XXXX is the DMCC softphone extension in the following descriptions.

SAT command 'list trace station XXXX'
This command will monitor the high level call events occurring at a station. As the station is added to a call (e.g. via single step conference), you should see messages showing the codec, IP address and other RTP parameters for the connection displayed. If there is a problem with establishing media, in most cases you will see a 4 digit event code and an English description of the fault. This can be very helpful when troubleshooting media establishment issues.

SAT command 'status station XXXX'
Once a call is established (and you were not doing list trace), you can look at the RTP parameters for the call using the status station command at the SAT.

If you see no errors during list trace station, you can use a LAN sniffer to observe the RTP media leaving the medPro and headed for the application. At this layer you can see issues like the application did not open the socket CM is using (ICMP responses), the media is being sent to a socket other than what the application is expecting, and differences in codec selection.

Common mistakes are:

  1. No media processing resources available in the network region the application is registering in.
  2. Codec Mismatch between application and codec list specified for the network region the application is registering in.
  3. Application did not open the socket for the RTP address it specified when registering.
  4. IP Routing issues between MedPro (or G700 gateway) and the application.

Dashboard.exe
In the .NET SDK there is a test application called the dashboard (so named due to the number of exposed controls). This application is a very useful learning tool regarding the XML exchange between AE Services and the application. It can be used to establish an RTP session between the tool and Communication Manager, and the knowledge gained can then be applied to the developer's application.

simple call record .NET application
In the .NET SDK there is sample code for a simple call record application that goes through the necessary steps of opening up an RTP path between the application and the AES/CM. The .NET API provides logic in the .dll that handles much of the plumbing work for the application, so while helpful, it is not a complete answer for those using Java or straight C/C++ code for their application.

The .NET SDK can be found by logging into the Avaya DevConnect web portal and browsing to:
https://devconnect.avaya.com/public/dyn/d_dyn.jsp?fn=125

Look for the following link on that page:
Application Enablement Services IP Communications SDK (Device and Media Control/CMAPI) (.NET)

G.729 encodes each 10 ms of voice into 10 bytes, roughly an eight-to-one compression over a comparable G.711 stream. Each 10-byte group is called a frame. There are also SID (Silence Insertion Descriptor) frames, 2 bytes long, that represent silence information. The RTP payload type does not change for G.729 SID packets, only the length; the packet length is the signal to the far-end decoder that a SID frame is being sent/received. An RTP packet can contain one or more G.729 frames. There is no clear standard on where SID frames may appear; typical implementations send a SID frame either 1) as a stand-alone packet or 2) as the last frame in a sequence of frames in a single packet. To capture G.729, the application will need to do something along the following lines (assuming it is operating above the UDP layer): 1) remove the 12-byte RTP header, 2) attach your own header indicating the length of the data portion of the packet, 3) store the data, and 4) on playback, send packets just the way they were received. To capture G.711, just remove the 12-byte RTP header and store the data; on playback, send the data in the negotiated-size "frames" based on 80 bytes per 10 ms. Remember to pace the application's output against a real-time clock so the far-end decoder's jitter buffer does not overflow and data is not lost.
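
A minimal, codec-agnostic sketch of the capture guideline above is shown below. It assumes a fixed 12-byte RTP header (no CSRC list or header extensions) and leaves the UDP receive loop and the storage format to the application.

import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class RtpCapture
{
    private static final int RTP_HEADER_LENGTH = 12;   // fixed header; no CSRCs/extensions assumed

    /**
     * Strips the RTP header from a received packet and stores the payload
     * prefixed with its own 2-byte length header, as suggested above.
     */
    public static void storePayload(byte[] rtpPacket, int packetLength,
                                    ByteArrayOutputStream store) throws IOException
    {
        int payloadLength = packetLength - RTP_HEADER_LENGTH;
        if (payloadLength <= 0)
        {
            return;                                     // nothing to store
        }
        // Own header: 2-byte big-endian payload length. A 2-byte G.729 payload
        // indicates a SID (silence) frame; G.711 payloads are multiples of 80 bytes.
        store.write((payloadLength >> 8) & 0xFF);
        store.write(payloadLength & 0xFF);
        store.write(rtpPacket, RTP_HEADER_LENGTH, payloadLength);
    }
}

On playback, read each length-prefixed record and send it at the original 10/20 ms pacing so that the far-end decoder's jitter buffer is not overrun.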

To check the Services running on an AE Services server, follow the procedure described below:

  1. Open the AE Services server's OA&M web page at http://<IP address of AE Services Server>
  2. Click on AE Server Administration and login with the username craft or a customer created AE Services administrator login and password.
  3. Click on CTI OAM Administration to go to the CTI OAM Home page where the states of all the Services that are running on the AE Services server are shown. The information that is presented includes the status of the DMCC service.

Depending on which DMCC service is being used, DMCC may require a link to be configured between Communication Manager and the AE Services server. For Call Information Services, a switch connection must be configured. Call Control Services and Feature Control/Information Services require both a switch connection and a TSAPI link to be configured. The DMCC API comprises the Phone and Media services, which provide first-party call control; the DAPI service, which provides the Call and Link Information services (and requires a switch connection); and TSAPI services (which require a TSAPI link), which provide third-party call control capabilities such as the ability to place calls, create conference calls, divert calls, reconnect calls, and monitor call control events.

Using the DMCC Call Control services, it is possible to retrieve the number of a calling party. The device answering the call (the destination) must be monitored with a Call Control monitor to look for the calling number information in the Delivered or Established events. Alternatively, the GetDisplay (Object) method can be used to get the information presently displayed on the phone. The application should parse the display information to extract the calling party's number. Note that in some call scenarios (e.g. a conference), the information on the phone's display will not contain calling party information.

Starting with Communication Manager 5.0 coupled with AE Services server 4.2 a 'DMCC DMC' license from AE Services is used. For older releases, or if no 'DMCC DMC' license is available, an attempt will be made to allocate an 'IP_API_A' license from Communication Manager instead. One 'DMCC DMC' or 'IP_API_A' license is used for each registered DMCC station. If DMCC uses Call Control services or Feature Control/Information services, then a TSAPI basic license is also required. A DMCC device registration will consume an IP Station license on Communication Manager.

Yes. However there are limitations regarding the level of media support.

Using the multiple registrations per device capability requires the following:

  1. The IP Softphone flag on page 1 of the change station form must be enabled.
  2. Communication Manager must be running a minimum release of 5.0.
  3. AE Services must be running a minimum release of 4.1.
No additional configuration is required for an EC500 enabled extension to utilize the multiple registrations per device capability.

The IP Softphone flag appears on stations that have been configured with a station Type that uses the IP (H.323) or DCP protocol to interface to Communication Manager (e.g. Station Types of 46xx, 96xx, 24xx, 54xx, 64xx, etc).

Enabling the IP Softphone setting allows an Avaya IP Softphone or DMCC device registration to be used with the station.

When the registration uses client media mode, Communication Manager replicates the media stream for the physical phone and sends it to the DMCC device whenever the physical device is active on a call. If the call is answered at the EC500 destination (e.g. a PSTN phone), the media is not replicated and sent to the DMCC application (as of release 5.2).

Yes, beginning with AE Services Release 4.1 coupled with Communication Manager Release 4.0, the DMCC service supports monitoring and control of Avaya IP Softphone.

The DMCC_DMC license is acquired from the WebLM associated with AE Services. The DMCC_DMC is a replacement for an IP_API_A license. In order for a DMCC_DMC license to be allocated a few pre-conditions must be met, otherwise an IP_API_A license from Communication Manager will be allocated in its place (assuming that it is available).

  • The AE Services release must be 4.2.2 or more recent and Communication Manager's release must be 5.1 or more recent. Note that although the DMCC_DMC license capability is advertised as being available with the AE Services' 4.2 release, there was a bug preventing proper allocation (when used with Communication Manager Release 5.2) that was fixed in AE Services release 4.2.2.
  • There must be DMCC_DMC licenses available from the WebLM license server for AE Services.
  • The registration method must be registerTerminal (RegisterTerminalRequest if using XML). If the registerDevice method (a deprecated method) is used, then an IP_API_A license will always be allocated.
  • A 'switch connection' must be provisioned between AE Services and Communication Manager for the device undergoing registration. The switch connection link allows AE Services to inform Communication Manager that a DMCC_DMC license has been allocated. If a switch connection is not present (it must be identifiable from the contents of the deviceId), then AE Services defers the license allocation to Communication Manager, which will attempt to acquire an IP_API_A license. If the application is using IP addresses (for a C-LAN or procr) when allocating deviceIds, then the H.323 Gatekeeper List must be configured in AE Services so that an association can be made between the IP address and the appropriate switch connection.
  • The DeviceID specified in the RegisterTerminalRequest must include a switch-name that matches the 'switch connection'. This can be achieved by either:
    • specifying the matching switch-name (Connection Name) explicitly in the 'GetDeviceId' request, or
    • specifying a CLAN or procr (Processor Ethernet) IP address that is included in the "Edit PE/CLAN IPs" H.323 Gatekeeper's list provisioned for the appropriate 'switch connection'.
  • The original version of this FAQ stated, "The DMCC protocol version of the DMCC API referred to in the StartApplicationSession interface must be http://www-ecma-international.org/standards/ecma-323/csta/ed3/priv3 (AE Services R4.2.2) or later." When testing with AE Services 6.1, it was found that AE Services allocated a DMCC_DMC license even when protocol version 3.0 was used. This may also have been true with AE Services 4.2 and greater, or it could be a change made to the product in some intermediate release that is not clearly documented.

Monitoring of VDNs is officially supported starting with release 6.0 of Application Enablement Services. In prior releases of AE Services, VDN monitors are officially not supported, although some functionality may work with release 5.2.

DMCC leverages H.323 stations in Communication Manager. H.323 stations leverage basic station functionality. Thus, to have a stand-alone DMCC device, you must have rights to use (RTU) for a station and an IP station. Additionally, you must have an RTU for the DMCC service, which is sold in increments of registered DMCC devices.

DMCC has a number of different modes (Main, Dependent, Independent). A Main mode device is a stand-alone device; Dependent and Independent mode devices need a Main mode device to work. A Main mode device can be either a Main mode DMCC device, or a desk phone or soft phone.

Depending on which DMCC mode your device will register in, a different collection of licenses is needed.

For historical reasons two forms of DMCC licenses were made available to account for what licensing a customer may already have in place/available before the DMCC application was added to the environment. Some customers have a large number of VALUE_STA and VALUE_IP_STA available to them prior to the addition of a DMCC application into the environment. A DMCC Basic license can often meet their needs. For other customers who are utilizing their Station and IP Station licenses, they need to add to that capacity while deploying DMCC based applications.

DMCC Full provides entitlements for a CM Station (VALUE_STA) license/rtu and a DMCC (VALUE_AES_DMCC_DMC) license/rtu. DMCC Basic only provides the DMCC license/rtu component. Currently R8 CM provides an entitlement for 18,000 IP Stations. Historically a DMCC Full license has provided a VALUE_IP_STA license, but when IP Stations became an entitlement this binding was dropped.

  • A Main mode DMCC device (e.g. what is needed for Single Step Conference and Service Observing forms of call recording) will consume a CM station (VALUE_STA), an IP Station (VALUE_IP_STA) and a DMCC (VALUE_AES_DMCC_DMC) license/rtu.
  • A Multiple Registration form of a DMCC device, e.g., a call recorder (which will use DMCC Dependent or Independent mode), will consume a VALUE_IP_STA and a VALUE_AES_DMCC_DMC license/rtu but not a VALUE_STA license/rtu.

A number of different licenses are required for the complete solution. There are three forms of recording solutions: Single Step Conference, Service Observing and Multiple Registration. Each has its own licensing requirements. Further, depending on the AE Services and Communication Manager Release there can be a difference in where the DMCC license resides (the AES: VALUE_DMCC_DMC or the CM: IP_API_A).

For a review of the different methods of performing call recording with Avaya Communication Manager and AE Services please see the developer guide Developing Client-side IP Call Recording Applications using Avaya Application Enablement Services available from the Devconnect portal. This document covers various advantages and limitations of the different recording designs. Familiarize yourself with this document and choose a design approach that meets your requirements.

Licenses

  • In all the described forms of call recording, a DMCC device is used as a recording device. DMCC devices used to record media must be registered, and the act of registering a DMCC device consumes a DMCC license. See the section DMCC Licenses below for a full description of this license type.
  • Typically, the application monitors a target device (station) for calls using DMCC, TSAPI or JTAPI. In all of these cases, a TSAPI device monitor is used which consumes a TSAPI Basic User License (VALUE_TSAPI_USERS). TSAPI Basic User Licenses are managed by the AE Services' WebLM server. Monitoring the target is not strictly required but, without a monitor, the application will miss important information about the call (e.g. ANI, DNIS or redirecting information). For these reasons most call recording applications will consume a TSAPI Basic User license.
  • If Single Step Conference (SSC) is used to join a recorder into a call, the SSC request consumes a TSAPI Basic User License (VALUE_TSAPI_USERS) for the duration of the call. This license is in addition to the license used to monitor the target station. This extra license is not required for Service Observing or Multiple Registration.
  • A recording solution that utilizes the Single Step Conference or Service Observing methods will consume a station license (VALUE_STA; "Max Stations:" from page 1 of display system-parameters customer-options form) per recorder. The Multiple Registrations method does not consume a VALUE_STA license.

DMCC Licenses
The preferred license to use is a DMCC_DMC license (VALUE_DMCC_DMC) from the AE Services' WebLM server. However, in order to use a DMCC_DMC license, all of the following criteria must be met:

  • the Communication Manager release is 5.1 or later
  • the AE Services release is 4.2.2 or later
  • there is a provisioned switch connection between AE Services and the Communication Manager on which the device will be registered
  • the DeviceID in the RegisterTerminalRequest includes a switch-name that matches the switch connection
  • the WebLM server contains available VALUE_DMCC_DMC licenses
  • the DMCC protocol used in the StartApplicationSession request is http://www-ecma-international.org/standards/ecma-323/csta/ed3/priv3 (R4.2) or later
  • the registration method is RegisterTerminal and not the deprecated RegisterDevice

If all of these criteria are met, then, when a license is available from the WebLM server, a VALUE_DMCC_DMC license is used. Otherwise, an IP_API_A license on Communication Manager is required.

The DMCC License (either VALUE_DMCC_DMC or IP_API_A), the IP Station license (VALUE_IP_STA) and the Station license (VALUE_STA) are typically bundled together as a DMCC Full license. A DMCC Basic license is used by customers who have a large pool of existing unused VALUE_IP_STA and VALUE_STA licenses from which they can draw.

Application Enablement Protocol Licensing
For Communication Manager release 4.x and earlier, in order to access TSAPI services, the AE Services server requires an Applications Enablement Protocol (AEP) license (VALUE_AEC_CONNECTIONS) for each IP connection to Communication Manager. Avaya recommends a minimum of two AEP connections between each AE Services server and a specific Communication Manager. For AE Services release 5.2 or later, AEP connections are not licensed.

If the target customer site has large recording needs you may need additional AEP connections (VALUE_AEC_CONNECTIONS). The TSAPI monitors trigger ASAI events (messages) between Communication Manager and AE Services. A single AEP can handle 200 messages per second from AE Services to Communication Manager and 240 messages per second from Communication Manager to AE Services. The minimum number of ASAI messages per answered call is 5 (five). Various features and other activity will increase the traffic. The maximum traffic from Communication Manager to AE Services is 1000 messages per second. The maximum traffic from AE Services to Communication Manager is 1000 messages per second. Starting with AE Services 5.2 and Communication Manager 5.2, a processor Ethernet (procr) interface can be used for AE Services to Communication Manager AECs. This link is also limited to 1000 messages per second.
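
As a rough sizing illustration using the figures above: at 40 answered calls per second and a minimum of 5 ASAI messages per answered call, the CTI traffic is at least 200 messages per second, which already saturates the AE Services to Communication Manager direction of a single AEP connection. Feature usage and additional monitors push the real figure higher, so a second AEP connection would be needed, and the total must still be engineered against the 1000 messages per second per-link limit.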

The purchase of an AE Services License 'gives' you two AEP licenses. Based on how much recording you want to do at a specific customer site, you need to calculate how many additional Station, IP Station, DMCC and TSAPI Basic User licenses will be needed to support that location.

It is recommended that the DMCC stations use their own C-LAN for registrations (separate from the C-LANs used for AEP connections and separate from the C-LANs used for regular IP Station registration). There are obvious reasons for this additional hardware on a large configuration, but for cost reasons some customers forgo those recommendations on smaller installs.

It is also recommended that the CLAN provisioned for a switch connection not be the same CLAN that is provisioned for the H.323 Gatekeeper (that used for H.323 device registrations by AE Services). This is again for performance reasons coupled with the AEP traffic limitations imposed by the CLAN.

If the application utilizes TSAPI Advanced User licenses (and there is no reason that a call recording application would necessarily need this functionality), AE Services additionally licenses Advanced TSAPI functionality by the type of Communication Manager with which the AE Services server is used. Different Communication Manager types are classified as Small, Medium and Large. For additional details regarding when this license is needed and which switch types are associated with which classification, see the appropriate version of Avaya MultiVantage® Application Enablement Services Overview, available from the DevConnect web portal.

Avaya Aura® Communication Manager System Capacities Table, available from the Avaya Support web portal, gives the recommended number of stations per C-LAN (max 400) and the ASAI message traffic limitations for an AEP connection (given above).

Summary

License Type SO SSC Multiple Registrations
VALUE_STA Required Required Not Required
VALUE_IP_STA Required Required Required
IP_API_A or DMCC_DMC Required Required Required
VALUE_TSAPI_USERS Optional for SO (depending on the service utilized); one license required for SSC, with the option of a second (depending on the service utilized); optional for Multiple Registrations (depending on the service utilized)
VALUE_AEC_CONNECTIONS Required for Communication Manager release 4.x and prior (applies to SO, SSC and Multiple Registrations)


Additional information regarding call recording can be found in Developing Client-side Call Recording Applications using Application Enablement Services (PDF).

Example
A recorder with Communication Manager release 5.2 that requires ANI/DNIS information will use the following licenses per monitored station.

License Type SO SSC Multiple Registrations
VALUE_STA 1 1 0
VALUE_IP_STA 1 1 1
IP_API_A or DMCC_DMC 1 1 1
VALUE_TSAPI_USERS 1 2 1

The Custom Media Stream feature was added in Avaya Aura 8.0.1.  It allows an application to record or monitor the audio from individual legs of a call separately, or any combination of call legs (parties) as a unique media stream.  You can get more information on this feature in any of the DMCC Programmers Guides.

The term recorder in this FAQ is meant to cover any application that is receiving a media stream be it for recording, analytics, or other purposes.

Licenses:

The licensing requirements for normal, Full Stream, recorders are described in the FAQ "What licenses are required for DMCC based Call Recording solution?".

While a Full Stream recorder uses one registered DMCC terminal, a Custom Media Stream recorder requires more than one DMCC terminal to be registered.  This has an impact on the number of licenses required.

Each registered DMCC multiple registrations terminal consumes one of each of the following licenses:

  • VALUE_IP_STA
  • DMCC_DMC (or IP_API_A if no DMCC_DMC is available)

Example
A Custom Media Stream recorder which receives two streams and requires ANI/DNIS information will use the following licenses per monitored station.

License Type SO SSC Multiple Registrations
VALUE_STA 1 1 0
VALUE_IP_STA 2 2 2
DMCC_DMC or IP_API_A 2 2 2
VALUE_AES_TSAPI_USERS 1 2 1

Media Resources:

As well as using extra licenses, the Custom Media Stream feature will impact the DSP requirements of the Media Server or Media Gateway.

Each Custom Media Stream will consume a logical DSP resource.   Each VoIP party in the call will also consume a DSP resource.  So, a call with two VoIP parties and two recorders will consume a total of four DSP resources.

The Custom Media Stream feature can be applied to calls containing up to Communication Manager's maximum party count (six, as of release 8.1). If the recorder requires it, up to six media streams, each containing the audio of a single talker in the call, can be established and configured. To support high availability (redundancy), a total of twelve recorders can be added through multiple registrations on a single extension. Each recorder requires the licensing and the DSP resources outlined above.

Note:  Many two-party VoIP calls use direct IP media between the phones and, therefore, do not consume any Communication Manager DSP resources.  When a recorder is added to a direct IP media call, the call becomes anchored on the Media Server/Gateway and each VoIP party must consume a DSP resource.  Therefore, adding recorders to a direct IP media call will cause the number of DSP resources consumed to increase by two more than the number of recorders added.

There are numerous contributors to poor audio quality. One important setting in this context though is the frames-per-packet setting in the Communication Manager ip-codecs form. Make sure this setting is '2' (20 ms of audio per packet), for the ip-codec in use by the ip-network-region that the DMCC devices are registering into. The use of higher settings with AE Services server media mode causes slow, choppy audio playback. It may be necessary to segment the DMCC devices into their own ip-network-region if there are conflicts with other device's needs relative to the frames-per-packet setting.
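
As a concrete illustration: with the recommended setting of 2 frames per packet, each RTP packet carries 20 ms of audio, which is 160 bytes of G.711 payload or 20 bytes of G.729 payload (using the 80 bytes and 10 bytes per 10 ms figures given earlier in this FAQ). Higher settings increase the packet interval and payload proportionally, which is what produces the slow, choppy playback observed with AE Services server media mode.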

  • The DMCC SDK in conjunction with vustats-display-format feature of Avaya Communication Manager can be used to get the basic real-time status information for a split/skill.
    Note: The information received from vustats-display is not as comprehensive as what is received from CMS real-time Data Interface. In some cases the data can be accumulated and evaluated over time to produce similar statistics.
  • Configure a vustats-display-format (at the SAT use the 'change vustats-display-format <1>' command) for skill object and data types (agents-staffed, calls-waiting, etc.) you want to display in near real-time.
    Note: the Format Description field has limited length. If you want to display a number of data-types, you may need to configure multiple vustats-display-formats each with a sub-set of intended data-types. Display formats can be 'chained' together using the "Next Format Number" field of the vustats-display-format so that each button push displays the next format in order.
  • Create a station ('add station next' command at Communication Manager's SAT) in Communication Manager, enable IP Softphone, provide a security code and assign a button vu-display feature with required format and Skill ID.
  • Make sure that the provisioned extension(s) are monitored in AE Services and the CTI User has access to the Security Database device group containing the extension (or the CTI user has unrestricted access).
  • Get a DeviceID for the extension.
  • Create Phone Monitor for the device from the application in order to receive display updates.
  • Register the corresponding extension using DMCC SDK. An example for this is included in the sample application available with SDK (Tutorial.java). A main mode, no-media registration is necessary.
  • Use a getButtonInformation request to discover the buttons provisioned on the extension and locate the vu-display (vu-stats) button.
  • Once the station is registered, use ButtonPress Class (ch.ecma.csta.binding.ButtonPress) and its pressButton() method to press the button corresponding to vu-stats feature.
  • Once the button is pressed, use GetDisplay Class (ch.ecma.csta.binding.GetDisplay) and its getDisplay() method to get a snapshot of display information on the device, or utilize the display events.
  • Use the getDisplayList() method of GetDisplayResponse class to iterate through the display and extract the current vu-stats information.
  • Put the program in a loop, with an appropriate interval between iterations, to repeat these operations and obtain near-real-time stats (a sketch tying these steps together follows this list).
  • If you want to display more data-types and need to use multiple vustats-display-formats, you can either assign different formats to different buttons of a station and then press the buttons sequentially, each followed by a display snapshot, OR chain the formats using the 'Next Format Number' field, OR assign different skills and formats to different stations, register them all with DMCC, and use multiple deviceIDs to get information for each station.
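
A minimal sketch tying the button-press and display-snapshot steps above together is shown below. It assumes physSvcs is a Physical Device Services reference, deviceID identifies the registered vu-stats station, and vuDisplayButton is the button ID of the vu-display button located via getButtonInformation; error handling and the stopping condition are left to the application.

// Assumed: physSvcs, deviceID and vuDisplayButton have already been
// obtained as described in the steps above.
while (true)
{
    // Press the vu-display button to refresh the statistics on the display.
    ButtonPress press = new ButtonPress();
    press.setDevice(deviceID);
    press.setButton(vuDisplayButton);
    physSvcs.pressButton(press);

    // Take a snapshot of the display and print the current vu-stats contents.
    GetDisplay displayRequest = new GetDisplay();
    displayRequest.setDevice(deviceID);
    GetDisplayResponse displayResponse = physSvcs.getDisplay(displayRequest);
    DisplayListItems[] items = displayResponse.getDisplayList().getDisplayListItem();
    for (int i = 0; i < items.length; i++)
    {
        System.out.println(items[i].getContentsOfDisplay());
    }

    // Poll at an interval appropriate for the application.
    try { Thread.sleep(10000); } catch (InterruptedException ie) { break; }
}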

For security reasons, the default configuration of most Communication Manager gateways prevents RTP media that originates from one port from being sent to a different port for a single connection. If a device does so, the result can be one-way audio. Thus, the best practice is that DMCC client media applications use the same RTP port to transmit RTP to, and receive RTP from, Communication Manager gateways and endpoints.
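
A minimal sketch of the recommended symmetric arrangement in plain Java is shown below: a single DatagramSocket is bound once to the local RTP port the application advertised at registration time and is reused for both sending and receiving, so the source port of transmitted RTP matches the advertised receive port (the local port and far-end address are hypothetical).

import java.net.DatagramPacket;
import java.net.DatagramSocket;
import java.net.InetAddress;
import java.net.InetSocketAddress;

public class SymmetricRtpSocket
{
    public static void main(String[] args) throws Exception
    {
        // Bind ONE socket to the local RTP port advertised at registration time
        // (4726 is a hypothetical example) and use it for both directions.
        DatagramSocket rtpSocket = new DatagramSocket(new InetSocketAddress("0.0.0.0", 4726));

        byte[] rxBuf = new byte[2048];
        DatagramPacket rx = new DatagramPacket(rxBuf, rxBuf.length);
        rtpSocket.receive(rx);   // RTP arriving from the gateway/endpoint

        // Hypothetical 20 ms G.711 packet: 12-byte RTP header + 160 bytes of payload.
        byte[] txPacket = new byte[172];
        DatagramPacket tx = new DatagramPacket(txPacket, txPacket.length,
                InetAddress.getByName("192.0.2.10"), 2048);   // hypothetical far-end RTP address/port
        rtpSocket.send(tx);      // sent from the SAME local port the socket receives on

        rtpSocket.close();
    }
}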

Many applications want to provide a call recording service but do not want to let the parties in the call know that recording is being done. With Single Step Conference through the CTI interface, the application can request 'silent mode'. This adds the recording party to the call but does not change the display information at the other parties in the call.

In general a Service observer is added silently (no display updates) with the exception of Service Observing tones.

A multiple registrations per device recorder is always added silently. It is the least detectable of all the recording strategies provided through AE Services.

Silent mode and no-talk techniques have the advantage of reducing timeslot consumption for the call, allowing more call recording capacity out of a gateway or chassis.

To record calls where an Attendant is a party on the call use the Service Observing method for call recording. This has been tested and a normal RTP stream is received.

Here is an explanation of why SSC will not work.

First a simple call flow to explain the scenario:

32105: The originating party from an on-pbx H323 phone

32111: one-X Attendant in telecommuter mode with 32106 as the telecommuter "telephone at" destination.

32106: H323 phone as "Telephone At" extension for one-X Attendant extension 32111

32107: Registered DMCC station (DMCC dashboard) in main mode with client media control

  1. Place TSAPI monitor on 32106
  2. 32105 calls 32111
  3. Phone call is alerting on 32111 and 32106
  4. Answer the calls alerting on 32111 and 32106
  5. Get the call ID on 32106 from the DELIVERED (or ESTABLISHED) event from the TSAPI device monitor on x32106
  6. Send a SSC request with call ID from step 5, and extension 32106 as the controlling extension to add device 32107 (the DMCC call recording device) to the call.
  7. The SSC is successful
  8. Check the media stream being sent to the DMCC extension 32107 and it is all hex f's, which indicates silence.

What is going on 'behind the scenes' is actually two separate calls in Communication Manager. One call is between the calling party and the attendant. The second call is hidden from the users but represents the audio path between the attendant and the telecommuter extension; in Communication Manager this class of connection is referred to as a "Service Link". Both the telecommuter extension AND the attendant extension have to be answered for the telecommuter extension to have a media path to the calling party. This service link carries the audio path from the first call to the telecommuter extension. While it is convenient to refer to this connection as a 'call', bear in mind that it is a special form of connection to Communication Manager in many ways, and the uniqueness of this type of connection is why some services work with it, some do not, and some sort of work but not completely.

In order to demonstrate that there are two different calls, run two tests. In the first test, run the above scenario and do a snapshotDevice on the originating station (x32105) and on the telecommuter extension (x32106). Note that there are two different callIDs. The callID given by the snapshot of the originating station (x32105) is the callID that would be received if monitoring of attendants were allowed in TSAPI; this is the callID to use for the recorder to get the proper RTP stream. The service link callID (from monitoring the telecommuter extension) is going to give you problematic RTP. The state of this 'call' is such that it is only ever expected to have one party in it, along with a logical relationship with the caller/attendant call, and doing SSC into this call violates that assumption made by the code.

In the second test, replace one-X Attendant with Avaya IP Softphone. A TSAPI monitor can be placed on IP Softphone to get the callID for the call as it exists at the destination party (as opposed to the originator; if the call is contained on a single Communication Manager these two callIDs will be the same). Here is the scenario replacing the attendant with IP Softphone:

32105: The originating party from an on-pbx H323 phone

32108: Avaya IP Softphone in telecommuter mode with 32106 as the telecommuter "telephone at" destination.

32106: H323 phone as "Telephone At" extension for 32108

32107: Registered DMCC station (DMCC dashboard) in main mode with client media control:

  1. Place TSAPI monitor on 32108
  2. Place TSAPI monitor on 32106
  3. 32105 calls 32108
  4. Get the call ID on 32108 from the DELIVERED event in TSAPI
  5. Phone call is alerting on 32106
  6. Answer call on 32106
  7. Get the call ID on 32106 from the DELIVERED/ESTABLISHED event in TSAPI
  8. Send a SSC request with the call ID from step 4, and extension 32106 as the controlling extension, to add device 32107 (the DMCC call recording device) to the call.
  9. The SSC is successful
  10. Normal RTP media stream is received at extension 32107

The Avaya IP Softphone acts as a normal H.323 telephone and can be monitored by TSAPI (step 1). So a monitor on the extension associated with IP Softphone gets a call ID (step 4). A monitor on the 'Telephone At' extension shows a different call ID for the service link connection (step 7). Using the call ID from the Avaya IP Softphone extension (step 4) to complete the SSC yields an RTP stream that is not constant silence. However, SSC with the service link callID creates the problem that you see.

Since TSAPI cannot be used to monitor an attendant (or IP Attendant) extension, the application will be unable to get the callID for the 'right' call. Since services like SSC, Service Observing and Multiple Registrations per Device are not supported on 'Service Links' the behavior you are seeing is due to unsupported feature interactions.

With all of that said, the issue here is that, since TSAPI does not support attendants, SSC is not going to work to record these calls because you will not have the correct call ID with which to add the recorder to the call.

Yes. There is an interaction between Selective Listen Hold (also referred to as Selective Listen Disconnect) and DMCC Multiple Registrations where a Multiple Registrations (MR) device may be impacted. Call media is a generic reference to any one of, or a collection of, voice (RTP) traffic, in-band tones, DTMF via RTP payload type indications, and out-of-band DTMF events for calls. An application may utilize one or more of these types of call media. The use of the Selective Listen Hold service by any deployed application to modify the call media to a specific extension will impact the call media received by any and all other applications using the Multiple Registrations feature that are associated with the same extension.

Application developers who are working with call media should review the DevConnect White Paper Recommended Guidance for DMCC Applications Utilizing Call Media for further information.

Yes. Selective Listen Hold (SLH), also referred to as Selective Listen Disconnect, prevents the voice and DTMF signals coming from a specified party from being received by another party in the call. In many cases blocking that pathway is valuable functionality. One use of this functionality is in a PCI-DSS application, to prevent a contact center agent from hearing personal information supplied by a party in the call. However, if a recording device that is using DMCC Multiple Registrations is receiving the agent's audio stream, then when SLH is invoked to reconfigure the audio the agent is receiving, the recording application is also blocked from receiving the voice and DTMF information sent by the far-end party. This may or may not be expected by the call recording application. Another case could be that the Multiple Registrations application is doing analytics on the audio stream; when SLH is invoked, the analytics application may be prevented from receiving audio or DTMF events unexpectedly and undesirably. Since multiple applications may be involved at a customer site (one using Multiple Registrations and the other invoking SLH), both operating on the same party in the call, the MR application may be unaware that the interaction is occurring. If the application requires full control over the receipt of all of the audio and DTMF information occurring in a call, then the guidance in the DevConnect White Paper Recommended Guidance for DMCC Applications Utilizing Call Media should be reviewed for further information about how to properly design for this interaction.

Programming

The UCID can be obtained by using the CallInformationServices interface, which allows applications to get detailed call information and to determine the status of the call information links. Note that if the application wishes to use CallInformationServices, the switch name must be available in the device ID.

For more information about populating the switch name, please refer to the FAQ "When do I need to supply the 'switchName' field value while invoking getDeviceID request?" available under the section DMCC, sub-section Other, on http://www.avaya.com/devconnect.

The AE Services' switch connection must be operational to retrieve the call information. The switch connection (i.e. transport link) is used to multiplex various communication links between Communication Manager and the AE Services server. One of those communication links is a Definity API (aka DAPI) link which actually is used to retrieve the call information.

Some provisionable settings are also required on Communication Manager to receive Universal Call ID information. Please refer to the FAQ "What settings are required on Communication Manager to receive Universal Call Identification (UCID) information in events?" available under the section TSAPI, sub-section Private Data, on http://www.avaya.com/devconnect.

The following sample code snippet is used to determine the UCID:

DeviceID device;   // Obtained earlier using either the getDeviceId method
                   // or the getThirdPartyDeviceId method.
try
{
    // Obtain a CallInformationServices object by invoking ServiceProvider.getService().
    CallInformationServices callInfoSvcs =
        (CallInformationServices) csta.getService(CallInformationServices.class.getName());

    // Get detailed information about the active call on the specified device.
    GetCallInformation req = new GetCallInformation();
    req.setDevice(device);
    CallInformationResponse callInformation = callInfoSvcs.getCallInformation(req);
    long ucid = callInformation.getUniversalCallId();
}
catch (CstaException e)
{
    e.printStackTrace();
}

Buttons are assigned to devices during station administration via the Communication Manager System Access Terminal (SAT) interface. Each button is associated with a function constant (a button type) and a lamp identifier (this variable is unique for each button on the station). Applications can get the list of all buttons associated with the device (e.g. station) using the request getButtonInformation and then check the function constant of each returned button to identify the call appearance button(s). The following code snippet shows how to select and press a call appearance button using DMCC Java SDK:

GetButtonInformation buttonRequest = new GetButtonInformation();

// Set the deviceID for the button provisioning that is to be obtained.
// The deviceID is obtained using either the getDeviceID method or the
// getThirdPartyDeviceID method.
buttonRequest.setDevice(deviceID);

// A reference to Physical Device services (physSvcs) can be obtained using
// the ServiceProvider.getService() method.
GetButtonInformationResponse buttonResponse =
    physSvcs.getButtonInformation(buttonRequest);
ButtonList list = buttonResponse.getButtonList();
ButtonItem[] buttons = list.getButtonItem();

// Loop through each button in the list looking for the first call
// appearance button and store a reference to it.
String callAppearanceButton = null;
for (int i = 0; i < buttons.length; i++)
{
    if (ButtonFunctionConstants.CALL_APPR.equals(buttons[i].getButtonFunction()))
    {
        System.out.println("Button " + buttons[i].getButton()
            + " is a call appearance button.");

        // Keep track of this button ID so we know which button
        // corresponds to the first call appearance.
        callAppearanceButton = buttons[i].getButton();
        break;
    }
}

// Press the selected button.
if (callAppearanceButton != null)
{
    ButtonPress buttonPressRequest = new ButtonPress();
    buttonPressRequest.setDevice(deviceID);
    buttonPressRequest.setButton(callAppearanceButton);
    physSvcs.pressButton(buttonPressRequest);
}

Please refer to the document Application Enablement Services Device, Media and Call Control API Java Programmer's Guide, Release 4.2, An Avaya MultiVantage Communications Application, 02-300359, Issue 4, May 2008 for additional information.

The best way to track call status changes at a device is to use the call control service available in AE Services DMCC release 4.1. This service provides events for call status changes to an application which has placed a monitor on the device (extension).

If the application will not utilize call control services, the application can monitor the display of the device for changes. The application will also need to monitor the status of the call appearance lamps on the device to track which call the display is providing information for. Since a device has multiple display modes the application will need to monitor the status of various features that utilize the display such as the directory service, call information display services, vu-stats, ANI-Request Conference Display, etc.

The physicalDeviceServices.getDisplay() method provides a snapshot of the contents of the physical device's display. The returned information for the display includes the current contents of the display.

The sample code snippet shown below demonstrates how to extract the display information from the GetDisplayResponse object:

// A reference to Physical Device services (physDevServ) can be obtained using
// the ServiceProvider.getService() method.
// deviceID is used as returned by the GetDeviceID method.

GetDisplay request = new GetDisplay();
request.setDevice(deviceID);
GetDisplayResponse response = physDevServ.getDisplay(request);
DisplayListItems[] items =
    response.getDisplayList().getDisplayListItem();

for (int i = 0; i < items.length; i++)
{
    System.out.println(items[i].getContentsOfDisplay());
}
To get the latest display update using this method, an application may need to poll for the display content.

Another approach is to monitor for display update events. In both cases, it is not possible to get specific call information, such as the call ID or the number of connected parties in the call, from the display information.

The status of the active call at the device can also be collected using Call Information Service and Snapshot Call.

Call Information Service allows applications to get detailed call information for the active connected call and to determine the status of the call information link. The call information link must be operational to get the call information.

The Snapshot Device service provides information about calls associated with a given device. The information provided identifies each call the device is participating in and the local connection state of the device in that call.

Snapshot Call provides information about the devices participating in a specified call.

The use of the Snapshot Services requires a switch connection and CTI link to be set up between the AE Services server and Communication Manager, as well as a basic TSAPI license.

For more information on Call Information Services and Snapshot Services, refer to any one of the following documents:

  • AE Services Device, Media and Call Control API XML Programmer's Guide, 02-300358, Issue 4, May, 2008
  • AE Services Device, Media and Call Control API Java Programmer's Guide, 02-300359, Issue 4, May, 2008
  • AE Services Device, Media and Call Control API .NET Programmer's Guide, 02-602658, Issue 4, May, 2008

DMCC provides the getRegistrationState request to find out if a particular device ID is already registered. If the device is controlled by more than one client session, applications should make use of this request before sending a registration request. An attempt to re-allocate a device ID by the same session is always granted to help avoid lockout situations.

The following code snippet shows how to get the registration state for a particular device using DMCC Java SDK.

// A reference to Registration services (regSvcs) can be obtained using the
// ServiceProvider.getService() method.
// deviceID is returned by the GetDeviceID method.

GetRegistrationState request = new GetRegistrationState();
request.setDevice(deviceID);
GetRegistrationStateResponse resp =
    regSvcs.getRegistrationState(request);
RegistrationState state = resp.getRegistrationState();
System.out.println("Registration state is: " + state);

Pressing the crss-alert button on a digital telephone will cancel the emergency alert, which stops the siren alarm at the telephone. The name and extension of the person who dialed the emergency number persist on the phone's display. To completely cancel an alert and clear the display, each administered user must press the normal button.

For more information, please refer to Feature Description and Implementation for Avaya Communication Manager, 555-245-205, Issue 6, January 2008, under the section titled 'Crisis Alert'.

To perform this function using DMCC, the application should first find out the button numbers for the button function constants of the crss-alert and normal buttons and then press these buttons.

The code snippet below was tested with Communication Manager version 4.0. It shows how to retrieve the button numbers for the button function constants of the crss-alert and normal buttons:


GetButtonInformation buttonRequest = new GetButtonInformation();

// Set the deviceID for the button provisioning that is to be obtained.
// The deviceID is obtained using either the getDeviceID method or the
// getThirdPartyDeviceID method.
buttonRequest.setDevice(deviceID);

// A reference to Physical Device services (physSvcs) can be obtained using
// the ServiceProvider.getService() method.
GetButtonInformationResponse buttonResponse =
    physSvcs.getButtonInformation(buttonRequest);
ButtonList list = buttonResponse.getButtonList();
ButtonItem[] buttons = list.getButtonItem();

String crisisAlertButton = null;
String normalButton = null;

for (int i = 0; i < buttons.length; i++)
{
    if (ButtonFunctionConstants.CRSS_ALERT.equals(buttons[i].getButtonFunction()))
    {
        System.out.println("Button " + buttons[i].getButton()
            + " is a Crisis Alert button.");
        crisisAlertButton = buttons[i].getButton();
    }

    if (ButtonFunctionConstants.NORMAL.equals(buttons[i].getButtonFunction()))
    {
        System.out.println("Button " + buttons[i].getButton()
            + " is a Normal/Exit button.");
        normalButton = buttons[i].getButton();
    }
}

// Press the selected buttons.
if (crisisAlertButton != null)
{
    ButtonPress buttonPressRequest = new ButtonPress();
    buttonPressRequest.setDevice(deviceID);
    buttonPressRequest.setButton(crisisAlertButton);
    physSvcs.pressButton(buttonPressRequest);
}

if (normalButton != null)
{
    ButtonPress buttonPressRequest = new ButtonPress();
    buttonPressRequest.setDevice(deviceID);
    buttonPressRequest.setButton(normalButton);
    physSvcs.pressButton(buttonPressRequest);
}