Avaya Client SDK

Getting started with Video

This article provides an introduction to the concepts and resources provided by the Avaya Client SDK to support integration of video features into your application.

iOS Video Capture and Rendering

Video components are connected via VideoSources and VideoSinks.

There are three main components for handling video:

- CSVideoCapturerIOS, which captures video frames from the device camera
- CSVideoRendererIOS, which renders video frames into a view's layer
- CSVideoChannel, which transports video between the local and remote endpoints

Prerequisites

This article assumes you have access to two objects: a CSVideoChannel identifier and an instance of <CSVideoInterface>. These objects are needed to complete the examples below; a full explanation of how to obtain them is beyond the scope of this article.
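
In the fragments that follow, these objects are assumed to be exposed through properties along the following lines. The property names videoInterface and videoSessionChannel are illustrative, not part of the SDK:

// Illustrative declarations only; how these objects are obtained is
// covered elsewhere in the SDK documentation
@property (nonatomic, strong) id<CSVideoInterface> videoInterface;
@property (nonatomic, assign) int videoSessionChannel; // the CSVideoChannel id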

Note: The code fragments below sometimes lack complete error checking for the sake of brevity.

Video Capture Setup

The first video component we need to create is the video capturer: CSVideoCapturerIOS. In most applications, a single instance is all that is needed. This object will capture video frames from the selected camera and output them to the selected <CSVideoSink>.

// Create the video capturer
CSVideoCapturerIOS *m_capturer = [[CSVideoCapturerIOS alloc] init];

// To handle run-time errors, set a CSVideoCapturerDelegate
// (and declare conformance to the protocol in your class interface)
[m_capturer setDelegate:self];

// The delegate methods to implement:
- (void) videoCapturerRuntimeError:(NSError *)error;
- (void) videoCapturerWasInterrupted;
- (void) videoCapturerInterruptionEnded;
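
A minimal implementation of these delegate methods might simply log the event; a production application would typically also update its UI or attempt to recover:

- (void) videoCapturerRuntimeError:(NSError *)error
{
    // A run-time error occurred in the capture pipeline
    NSLog(@"Video capturer runtime error: %@", error);
}

- (void) videoCapturerWasInterrupted
{
    // Capture was interrupted (for example, the app moved to the background)
    NSLog(@"Video capturer was interrupted");
}

- (void) videoCapturerInterruptionEnded
{
    // Capture resumed after an interruption
    NSLog(@"Video capturer interruption ended");
}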

Connecting VideoSource to VideoChannel

The CSVideoChannel is responsible for transmitting the video to the remote user. In order to send the video from our camera, we need to connect the CSVideoCapturerIOS to the CSVideoChannel.

// Connect the capturer and the channel via their sinks/sources

// Get the video sink associated with the VideoChannel; it is then used
// to associate a video source (the capturer) with the channel.
id videoSink = [self.videoInterface getLocalVideoSink:self.videoSessionChannel];

// Connect the CSVideoCapturerIOS to the VideoChannel
[m_capturer setVideoSink:videoSink];

Now any video frames produced by "m_capturer" will be sent to the VideoChannel and eventually to the remote user.

Rendering Setup

There are typically two video renderers we want to create: one for the remote video and one for a local preview of our own video.

// Create a renderer for the local preview
CSVideoRendererIOS *m_localLayer = [[CSVideoRendererIOS alloc] init];
// Create a renderer for the video received from the remote user
CSVideoRendererIOS *m_remoteLayer = [[CSVideoRendererIOS alloc] init];

[m_localLayer addObserver:self forKeyPath:@"videoFrameSize"
                  options:0 context:nil];
[m_remoteLayer addObserver:self forKeyPath:@"videoFrameSize"
                   options:0 context:nil];
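
Registering these KVO observers requires the observing class to implement observeValueForKeyPath:. A minimal sketch that simply logs the change (the aspect-ratio update itself is handled by the frame size listeners below):

- (void)observeValueForKeyPath:(NSString *)keyPath
                      ofObject:(id)object
                        change:(NSDictionary *)change
                       context:(void *)context
{
    if ([keyPath isEqualToString:@"videoFrameSize"])
    {
        NSLog(@"videoFrameSize changed for %@", object);
    }
    else
    {
        [super observeValueForKeyPath:keyPath ofObject:object
                               change:change context:context];
    }
}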

// Set the frame size listeners to keep each view's aspect ratio in step
// with the incoming frames. Capture self weakly to avoid a retain cycle
// between the layers and their owner.
__weak typeof(self) weakSelf = self;
m_localLayer.videoFrameSizeListener = ^(CGSize size)
{
    weakSelf.localViewAspectConstraint =
    [weakSelf updateAspectConstraint:weakSelf.localViewAspectConstraint
                                view:weakSelf.localView
                               ratio:(size.width / size.height)];
};

m_remoteLayer.videoFrameSizeListener = ^(CGSize size)
{
    weakSelf.remoteViewAspectConstraint =
    [weakSelf updateAspectConstraint:weakSelf.remoteViewAspectConstraint
                                view:weakSelf.remoteView
                               ratio:(size.width / size.height)];
};
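
The updateAspectConstraint:view:ratio: helper is not part of the SDK; it is application code that replaces the view's aspect-ratio constraint with one matching the new frame size. One possible implementation, assuming Auto Layout is in use (dispatch to the main queue first if the listener can fire on a background thread):

- (NSLayoutConstraint *) updateAspectConstraint:(NSLayoutConstraint *)oldConstraint
                                           view:(UIView *)view
                                          ratio:(CGFloat)ratio
{
    // Deactivate the previous aspect-ratio constraint, if any
    oldConstraint.active = NO;

    // Constrain view.width = ratio * view.height
    NSLayoutConstraint *constraint =
    [NSLayoutConstraint constraintWithItem:view
                                 attribute:NSLayoutAttributeWidth
                                 relatedBy:NSLayoutRelationEqual
                                    toItem:view
                                 attribute:NSLayoutAttributeHeight
                                multiplier:ratio
                                  constant:0];
    constraint.active = YES;
    return constraint;
}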

// Connect the views' layers to the renderers
self.localView.layer = m_localLayer;
self.remoteView.layer = m_remoteLayer;

// The remote renderer needs to be connected to the VideoChannel
id remoteVideoSink = (id)self.remoteView.layer;
id remoteVideoSource = [self.videoInterface
    getRemoteVideoSource:self.videoSessionChannel];
[remoteVideoSource setVideoSink:remoteVideoSink];

// For the local preview; "position" is the camera position selected in
// the next section
CSVideoRendererIOS *m_renderer = (CSVideoRendererIOS *)self.localView.layer;
[m_renderer setMirrored:(position == CSVideoCameraPositionFront)]; // optional
[m_capturer setLocalVideoSink:m_renderer];

Selecting a Camera

An iOS device can have front and back cameras. The CSVideoCapturerIOS instance provides a method to check whether a camera is available at a given position. Typically, the user is given a control to switch between cameras.

// Pick a camera
BOOL usingFrontCamera = YES; // or NO, as the application requires
CSVideoCameraPosition position = usingFrontCamera ?
    CSVideoCameraPositionFront : CSVideoCameraPositionBack;
[m_capturer useVideoCameraAtPosition:position completion:nil];

// CSVideoCameraPositionBack corresponds to AVCaptureDevicePositionBack
// CSVideoCameraPositionFront corresponds to AVCaptureDevicePositionFront

// Check whether the device has a camera at the given position
BOOL result = [m_capturer hasVideoCameraAtPosition:position];
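
It is good practice to perform this check before selecting a camera. For example, an application might fall back to the back camera when no front camera is present (a sketch, not SDK-mandated behavior):

// Fall back to the back camera if the device has no front camera
if (![m_capturer hasVideoCameraAtPosition:CSVideoCameraPositionFront])
{
    position = CSVideoCameraPositionBack;
}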

Starting Video Capture

Now that you have selected a camera to use, you are ready to start capture. Cameras support multiple capture formats. These formats can vary in resolution and frame rate. When starting a capture session, constraints can be placed on these formats in order to capture at the desired quality.

// CSVideoCapturerIOS will use the best available capture format within
// the constraints defined by these parameters (maximum width and height)
typedef NS_ENUM(NSUInteger, CSVideoCapturerParams)
{
    CSVideoCapturerParams_640x480_480x640 = 0,
    CSVideoCapturerParams_640x480_640x360,
    CSVideoCapturerParams_640x480_624x352,
    CSVideoCapturerParams_640x480_480x272,
    CSVideoCapturerParams_480x360_480x272,
    CSVideoCapturerParams_352x288_320x192,
    CSVideoCapturerParams_352x288_320x180,
    CSVideoCapturerParams_MaxIndex,
};

// Choose one of the parameter sets and apply it to the capturer
[m_capturer setParams:CSVideoCapturerParams_640x480_480x272];
// Optionally, set the frame rate
[m_capturer setFrameRate:30];


// Starting capture is an asynchronous process.
// The first parameter is the camera position.
// The second parameter is a completion block invoked once the start
// operation has finished.
[m_capturer useVideoCameraAtPosition:position completion:^(NSError *error) {
    if (error)
    {
        NSLog(@"useVideoCameraAtPosition error:%@", error);
    }
}];

You may decide after capture has begun that you want to switch cameras. You can do this simply by calling the following method again:

- (void) useVideoCameraAtPosition:(CSVideoCameraPosition)position
                       completion:(void (^)(NSError *error))completion;

There is no need to stop the current capture session before starting again with a new camera; this is handled inside the library. Also note that no changes are required to the video channel or video renderer. Those associations are made with the CSVideoCapturerIOS, not with an individual camera.
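
For example, a camera toggle wired to a UI control might look like the following sketch; usingFrontCamera is application state, and m_renderer is the local preview renderer from the rendering setup above:

- (void) switchCamera
{
    self.usingFrontCamera = !self.usingFrontCamera;
    CSVideoCameraPosition position = self.usingFrontCamera ?
        CSVideoCameraPositionFront : CSVideoCameraPositionBack;

    // Mirror the local preview only for the front camera
    [m_renderer setMirrored:(position == CSVideoCameraPositionFront)];

    [m_capturer useVideoCameraAtPosition:position completion:^(NSError *error) {
        if (error)
        {
            NSLog(@"Failed to switch camera: %@", error);
        }
    }];
}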

Stopping Video Capture

Stopping the video capture is an easy process. You simply call the following:

[m_capturer useVideoCameraAtPosition:nil completion:nil];

At this point, you may wish to disassociate the CSVideoCapturerIOS from the video channel and video renderer.

[m_capturer setVideoSink:nil];
[m_capturer setLocalVideoSink:nil];

Cleanup

Once you are completely done with an object, it is important that you dispose of it properly.

// Remove all event handlers; the KVO observers were added to both
// m_localLayer and m_remoteLayer, so both must be removed
[m_localLayer removeObserver:self forKeyPath:@"videoFrameSize"];
[m_remoteLayer removeObserver:self forKeyPath:@"videoFrameSize"];
[m_capturer setDelegate:nil];
m_localLayer.videoFrameSizeListener = nil;
m_remoteLayer.videoFrameSizeListener = nil;

// Clear the remote view
id remoteVideoSink = (id)self.remoteView.layer;
id remoteVideoSource = [self.videoInterface
    getRemoteVideoSource:self.videoSessionChannel];
[remoteVideoSource setVideoSink:nil];
[remoteVideoSink handleVideoFrame:nil];

// Dispose of the renderers themselves
m_localLayer = nil;
m_remoteLayer = nil;

// Dispose of the capturer itself
m_capturer = nil;

Putting it all Together

// interface (.h file)
@interface VideoExample : NSObject <CSVideoCapturerDelegate>

// Application state and UI referenced by this example. The types below
// are assumptions: SCPTestView stands for the application's own view
// class whose backing layer can be replaced.
@property (nonatomic, assign) BOOL usingFrontCamera;
@property (nonatomic, assign) BOOL enableVideoSend;
@property (nonatomic, strong) SCPTestView *localView;
@property (nonatomic, strong) SCPTestView *remoteView;
@property (nonatomic, strong) NSLayoutConstraint *localViewAspectConstraint;
@property (nonatomic, strong) NSLayoutConstraint *remoteViewAspectConstraint;
@property (nonatomic, strong) id<CSVideoInterface> videoInterface;
@property (nonatomic, assign) int videoSessionChannel;

@end

// implementation (.m file)
@interface VideoExample()
{
    CSVideoCapturerIOS *m_capturer;
    CSVideoRendererIOS *m_localLayer;
    CSVideoRendererIOS *m_remoteLayer;
    CSVideoRendererIOS *m_renderer; // reference to the local preview renderer
}
@end

@implementation VideoExample

- (instancetype)init
{
    self = [super init];
    if (self)
    {
        self.usingFrontCamera = YES;
        m_capturer = [[CSVideoCapturerIOS alloc] init];
        [m_capturer setDelegate:self];

        // Create a renderer for the local preview
        m_localLayer = [[CSVideoRendererIOS alloc] init];
        // Create a renderer for the video received from the remote user
        m_remoteLayer = [[CSVideoRendererIOS alloc] init];

        [m_localLayer addObserver:self forKeyPath:@"videoFrameSize" 
        options:0 context:nil];
        [m_remoteLayer addObserver:self forKeyPath:@"videoFrameSize" 
        options:0 context:nil];

        // Set the frame size listeners; capture self weakly to avoid a
        // retain cycle between the layers and their owner
        __weak typeof(self) weakSelf = self;
        m_localLayer.videoFrameSizeListener = ^(CGSize size)
        {
            weakSelf.localViewAspectConstraint =
            [weakSelf updateAspectConstraint:weakSelf.localViewAspectConstraint
                                        view:weakSelf.localView
                                       ratio:(size.width / size.height)];
        };

        m_remoteLayer.videoFrameSizeListener = ^(CGSize size)
        {
            weakSelf.remoteViewAspectConstraint =
            [weakSelf updateAspectConstraint:weakSelf.remoteViewAspectConstraint
                                        view:weakSelf.remoteView
                                       ratio:(size.width / size.height)];
        };

        // Connect the views' layers to the renderers
        self.localView.layer = m_localLayer;
        self.remoteView.layer = m_remoteLayer;

        // The remote renderer needs to be connected to the VideoChannel
        id remoteVideoSink = (id)self.remoteView.layer;
        id remoteVideoSource = [self.videoInterface
            getRemoteVideoSource:self.videoSessionChannel];
        [remoteVideoSource setVideoSink:remoteVideoSink];

        // For the local preview
        CSVideoCameraPosition position = self.usingFrontCamera ?
            CSVideoCameraPositionFront : CSVideoCameraPositionBack;
        m_renderer = (CSVideoRendererIOS *)self.localView.layer;
        [m_renderer setMirrored:(position == CSVideoCameraPositionFront)]; // optional
        [m_capturer setLocalVideoSink:m_renderer];
    }
    return self;
}

- (void) videoCapturerRuntimeError:(NSError *)error
{
    NSLog(@"Video capturer runtime error: %@", error);
}

- (void) videoCapturerWasInterrupted
{
    NSLog(@"Video capturer was interrupted");
}

- (void) videoCapturerInterruptionEnded
{
    NSLog(@"Video capturer interruption ended");
}

- (void) dealloc
{
    // Remove all event handlers; the KVO observers were added to both
    // m_localLayer and m_remoteLayer, so both must be removed
    [m_localLayer removeObserver:self forKeyPath:@"videoFrameSize"];
    [m_remoteLayer removeObserver:self forKeyPath:@"videoFrameSize"];
    [m_capturer setDelegate:nil];
    m_localLayer.videoFrameSizeListener = nil;
    m_remoteLayer.videoFrameSizeListener = nil;

    // Clear the remote view
    id remoteVideoSink = (id)self.remoteView.layer;
    id remoteVideoSource = [self.videoInterface
        getRemoteVideoSource:self.videoSessionChannel];
    [remoteVideoSource setVideoSink:nil];
    [remoteVideoSink handleVideoFrame:nil];

    // Dispose of the renderers themselves
    m_localLayer = nil;
    m_remoteLayer = nil;

    // Dispose of the capturer itself
    m_capturer = nil;
}

- (void) startVideoCapturer
{
    if (self.enableVideoSend)
    {
        CSVideoCameraPosition position = self.usingFrontCamera ?
            CSVideoCameraPositionFront : CSVideoCameraPositionBack;

        id videoSink = [self.videoInterface
            getLocalVideoSink:self.videoSessionChannel];

        m_renderer = (CSVideoRendererIOS *) self.localView.layer;
        [m_renderer setMirrored:(position == CSVideoCameraPositionFront)];

        [m_capturer setLocalVideoSink:m_renderer];
        [m_capturer setVideoSink:videoSink];
        [m_capturer setParams:CSVideoCapturerParams_640x480_480x272];
        [m_capturer useVideoCameraAtPosition:position completion:nil];
    }
}

- (void) stopVideoCapturer
{
    if (self.enableVideoSend)
    {
        [m_capturer useVideoCameraAtPosition:nil completion:nil];
        [m_capturer setVideoSink:nil];
        [m_capturer setLocalVideoSink:nil];

        [m_renderer handleVideoFrame:nil];
    }
}

- (void) clearRemoteView:(SCPTestView*)view forChannelId:(int)channelId
{
    id remoteVideoSink = (id)view.layer;
    id remoteVideoSource = [self.videoInterface 
    getRemoteVideoSource: channelId];
    [remoteVideoSource setVideoSink: nil];
    [remoteVideoSink handleVideoFrame: nil];
}

@end
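
Driving the example class from application code might then look like the following sketch; myVideoInterface and myChannelId are placeholders for the objects described in the Prerequisites section:

VideoExample *videoExample = [[VideoExample alloc] init];
videoExample.videoInterface = myVideoInterface; // obtained elsewhere
videoExample.videoSessionChannel = myChannelId; // obtained elsewhere
videoExample.enableVideoSend = YES;

// Start sending and rendering video
[videoExample startVideoCapturer];

// ... later, when the call ends ...
[videoExample stopVideoCapturer];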