Avaya Client SDK

Getting started with Video

This article provides an introduction to the concepts and resources provided by the Avaya Client SDK to support integration of video features into your application.

macOS Video Capture and Rendering

Video components are connected via VideoSources and VideoSinks: a component that produces frames (a VideoSource) is wired to a component that consumes them (a VideoSink).

There are three main components for handling video:

- CSVideoCapturerOSX, which captures frames from a camera
- CSVideoRendererOSX, which renders frames into a view's layer
- CSVideoChannel, which carries frames to and from the remote user
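The same connection pattern repeats throughout this article: take the component that produces frames, hand it the sink that should consume them, and frames flow automatically. As an illustrative sketch (videoSource and videoSink stand for any concrete source/sink pair from the sections below):

// Connect: frames produced by the source are delivered to the sink
[videoSource setVideoSink:videoSink];

// Disconnect when finished
[videoSource setVideoSink:nil];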

Prerequisites

This article assumes you have access to two objects: the CSVideoChannel id and an instance of <CSVideoInterface>. These objects are needed to complete the examples below, but their full explanation is beyond the scope of this article.

Note: The code fragments below sometimes lack complete error checking for the sake of brevity.

Video Capture Setup

The first video component we need to create is the video capturer: CSVideoCapturerOSX. In most applications, a single instance is all that is needed. This object will capture video frames from the selected camera and output them to the selected <CSVideoSink>.

// Create the video capturer
CSVideoCapturerOSX *_videoCapturer = [[CSVideoCapturerOSX alloc] init];

// To handle run-time errors, set a CSVideoCapturerDelegate
[_videoCapturer setDelegate:self];

// ... and implement the delegate's error-handling method in your class:
- (void) videoCapturerRuntimeError:(NSError *)error;
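For the delegate assignment above to compile cleanly, the class acting as the delegate should declare that it conforms to the CSVideoCapturerDelegate protocol. A minimal sketch; the class name MyVideoController is just a placeholder:

@interface MyVideoController : NSObject <CSVideoCapturerDelegate>
// ... views, channel, and other members of your class
@end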

Connecting VideoSource to VideoChannel

The CSVideoChannel is responsible for transmitting the video to the remote user. In order to send the video from our camera, we need to connect the CSVideoCapturerOSX and the CSVideoChannel.

// Get the video sink associated with the VideoChannel.
// This sink is then used to attach a video source (here, the capturer)
// to the channel.
id videoSink = [self.videoInterface getLocalVideoSink:self.videoSessionChannel];

// Connect the CSVideoCapturerOSX and the VideoChannel
[_videoCapturer setVideoSink:videoSink];

Now any video frames produced by "_videoCapturer" will be sent to the VideoChannel and eventually to the remote user.

Rendering Setup

There are typically two video renderers to create: one for the remote video and one for a local preview of our own video.

// Create a renderer for the local preview
CSVideoRendererOSX *m_localLayer = [[CSVideoRendererOSX alloc] init];
// Create a renderer for the video received from the remote user
CSVideoRendererOSX *m_remoteLayer = [[CSVideoRendererOSX alloc] init];

// Capturing 'self' strongly in these blocks is likely to lead to 
// a retain cycle.
// Define a weak reference to 'self' and capture that instead.
__weak typeof(self) _self = self;

// Set the frame size listeners to respond to changes in the video frame size
m_localLayer.videoFrameSizeListener = ^(CGSize size)
{
    [_self updateView:_self.localView
              andText:_self.localText
             withSize:size];
};

m_remoteLayer.videoFrameSizeListener = ^(CGSize size)
{
    [_self updateView:_self.remoteView
              andText:_self.remoteText
             withSize:size];
};

// Connect the views' layers to the renderers. Assign the layer first and
// then set wantsLayer so that the views become layer-hosting.
self.localView.layer = m_localLayer;
self.localView.wantsLayer = YES;
self.remoteView.layer = m_remoteLayer;
self.remoteView.wantsLayer = YES;

// The remote renderer needs to be connected to the VideoChannel
id remoteVideoSink = (id)self.remoteView.layer;
id remoteVideoSource = [self.videoInterface getRemoteVideoSource:self.videoSessionChannel];
[remoteVideoSource setVideoSink:remoteVideoSink];

// For the local preview, connect the capturer's local sink to the local renderer
id localVideoSink = (id)self.localView.layer;
[_videoCapturer setLocalVideoSink:localVideoSink];

Selecting a Camera

The CSVideoCapturerOSX instance provides a collection of the cameras currently installed on your computer. Typically, this list would be presented to the user to allow selection of the desired camera. The collection updates automatically as cameras are added to or removed from the computer.

// Pick a camera from the collection
_videoCaptureDevice = _videoCapturer.availableDevices.firstObject;

// _videoCapturer.availableDevices is an NSArray that can be observed via
// KVO to react to cameras being added to or removed from the Mac.
static NSString* const KeyPathAvailableDevices = @"availableDevices";

[_videoCapturer addObserver:self
                 forKeyPath:KeyPathAvailableDevices
                    options:NSKeyValueObservingOptionNew
                    context:nil];

- (void) observeValueForKeyPath:(NSString *)keyPath
                       ofObject:(id)object
                         change:(NSDictionary *)change
                        context:(void *)context
{
    if ([object isEqual:_videoCapturer] &&
        [keyPath isEqual:KeyPathAvailableDevices])
    {
        NSArray* availableDevices = [change valueForKey:NSKeyValueChangeNewKey];

        if (! [availableDevices containsObject:self.videoCaptureDevice])
        {
            self.videoCaptureDevice = availableDevices.firstObject;
        }
    }
}
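To present the list to the user, you can iterate availableDevices and populate a menu. A minimal sketch using AppKit's NSPopUpButton; note that using each device's description as the display title is an assumption here, so substitute the SDK's actual display-name accessor if one is provided:

// Populate a pop-up menu with the currently available cameras
- (void) populateCameraMenu:(NSPopUpButton *)menu
{
    [menu removeAllItems];
    for (CSVideoCaptureDevice *device in _videoCapturer.availableDevices)
    {
        // 'description' is a placeholder title; check CSVideoCaptureDevice
        // for a proper human-readable name property
        [menu addItemWithTitle:[device description]];
    }
}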

Starting Video Capture

Now that we have selected a camera to use, we are ready to start capture. Cameras support multiple capture formats, which can vary in resolution and frame rate. When starting a capture session, constraints can be placed on these formats in order to capture at the desired quality.

// CSVideoCapturerOSX will use the best available capture format within these
// constraints: max width, max height, min frame rate, max frame rate
_videoCaptureFormat = [[CSVideoCaptureFormat alloc] init];
_videoCaptureFormat.minRate = 15;
_videoCaptureFormat.maxRate = 30;
_videoCaptureFormat.maxWidth = 1280;
_videoCaptureFormat.maxHeight = 720;

// Starting capture is an asynchronous process.
// The third parameter is a block that is called once the start operation
// has finished.
[_videoCapturer openVideoCaptureDevice:_videoCaptureDevice
                            withFormat:_videoCaptureFormat
                        withCompletion:^(NSError *error)
                        {
                            if (error)
                            {
                                // handle the error of opening capture device.
                            }
                         }];

You may decide after capture has begun that you want to switch cameras.

- (void) openVideoCaptureDevice:(CSVideoCaptureDevice*)videoCaptureDevice
                     withFormat:(CSVideoCaptureFormat*)videoCaptureFormat
                 withCompletion:(void (^)(NSError* error))completion;

There is no need to stop the current capture session before starting again with a new camera; the library handles that internally. Also note that no changes are required to the video channel or video renderer: those associations are made with the CSVideoCapturerOSX, not with an individual camera.
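For example, switching to another camera from the collection is just another call to the same method (picking lastObject here is purely illustrative):

// Switch capture to a different camera; the previous session is
// stopped internally before the new one starts
CSVideoCaptureDevice *newDevice = _videoCapturer.availableDevices.lastObject;
[_videoCapturer openVideoCaptureDevice:newDevice
                            withFormat:_videoCaptureFormat
                        withCompletion:^(NSError *error)
                        {
                            if (error)
                            {
                                // handle the error of opening the new camera
                            }
                        }];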

Stopping Video Capture

Stopping the video capture is an easy process. You simply call the following:

[_videoCapturer closeVideoCaptureDeviceWithCompletion:nil];

At this point, you may wish to disassociate the CSVideoCapturerOSX from the video channel and video renderer.

[_videoCapturer setLocalVideoSink:nil];
[_videoCapturer setVideoSink:nil];

Cleanup

Once you are completely done with an object, it is important that you dispose of it properly.

// Remove all event handlers
[_videoCapturer setDelegate:nil];
[_videoCapturer removeObserver:self
                    forKeyPath:KeyPathAvailableDevices];
m_localLayer.videoFrameSizeListener = nil;
m_remoteLayer.videoFrameSizeListener = nil;

// Clear the remote view
id remoteVideoSink = (id)self.remoteView.layer;
id remoteVideoSource = [self.videoInterface getRemoteVideoSource:self.videoSessionChannel];
[remoteVideoSource setVideoSink:nil];
[remoteVideoSink handleVideoFrame:nil];

// Dispose of the renderers
m_localLayer = nil;
m_remoteLayer = nil;

// Dispose of the capturer and the capture format
_videoCaptureFormat = nil;
_videoCapturer = nil;

Putting It All Together

// interface (.h file)
@interface VideoExample : NSObject <CSVideoCapturerDelegate>

@property (nonatomic, readonly) CSVideoCapturerOSX* videoCapturer;
@property (nonatomic, readwrite) CSVideoCaptureDevice* videoCaptureDevice;

// localView, remoteView, videoInterface and videoSessionChannel are
// referenced below; they are assumed to be provided by the surrounding
// application (see Prerequisites).

@end

// implementation (.m file)
@interface VideoExample()
{
    CSVideoCaptureFormat* _videoCaptureFormat;
    BOOL _videoCaptureStarted;

    CSVideoRendererOSX *m_localLayer;
    CSVideoRendererOSX *m_remoteLayer;
}
@end

@implementation VideoExample

static NSString* const KeyPathAvailableDevices = @"availableDevices";

- (instancetype)init
{
    self = [super init];
    if (self)
    {
        _videoCapturer = [[CSVideoCapturerOSX alloc] init];
        _videoCaptureDevice = _videoCapturer.availableDevices.firstObject;

        _videoCaptureFormat = [[CSVideoCaptureFormat alloc] init];
        _videoCaptureFormat.minRate = 15;
        _videoCaptureFormat.maxRate = 30;
        _videoCaptureFormat.maxWidth = 1280;
        _videoCaptureFormat.maxHeight = 720;


        [_videoCapturer setDelegate:self];
        [_videoCapturer addObserver:self
                         forKeyPath:KeyPathAvailableDevices
                            options:NSKeyValueObservingOptionNew
                            context:nil];


        // Capturing 'self' strongly in these blocks is likely to lead to 
        // a retain cycle.
        // Define a weak reference to 'self' and capture that instead.
        __weak typeof(self) _self = self;

        m_localLayer = [[CSVideoRendererOSX alloc] init];
        m_localLayer.videoFrameSizeListener = ^(CGSize size)
        {
            // respond to local preview frame size change
        };

        m_remoteLayer = [[CSVideoRendererOSX alloc] init];
        m_remoteLayer.videoFrameSizeListener = ^(CGSize size)
        {
            // respond to remote frame size change
        };

        self.localView.layer = m_localLayer;
        self.localView.wantsLayer = YES;

        self.remoteView.layer = m_remoteLayer;
        self.remoteView.wantsLayer = YES;

        id remoteVideoSink = (id)self.remoteView.layer;
        id remoteVideoSource = [self.videoInterface getRemoteVideoSource:self.videoSessionChannel];
        [remoteVideoSource setVideoSink:remoteVideoSink];
    }
    return self;
}

- (void) dealloc
{
    // Remove all event handlers
    [_videoCapturer setDelegate:nil];
    [_videoCapturer removeObserver:self
                        forKeyPath:KeyPathAvailableDevices];
    m_localLayer.videoFrameSizeListener = nil;
    m_remoteLayer.videoFrameSizeListener = nil;

    // Clear the remote view
    id remoteVideoSink = (id)self.remoteView.layer;
    id remoteVideoSource = [self.videoInterface getRemoteVideoSource:self.videoSessionChannel];
    [remoteVideoSource setVideoSink:nil];
    [remoteVideoSink handleVideoFrame:nil];

    // Dispose of the renderers
    m_localLayer = nil;
    m_remoteLayer = nil;

    // Dispose of the capturer and the capture format
    _videoCaptureFormat = nil;
    _videoCapturer = nil;
}

- (void) observeValueForKeyPath:(NSString *)keyPath
                       ofObject:(id)object
                         change:(NSDictionary *)change
                        context:(void *)context
{
    if ([object isEqual:_videoCapturer] &&
        [keyPath isEqual:KeyPathAvailableDevices])
    {
        NSArray* availableDevices = [change valueForKey:NSKeyValueChangeNewKey];

        if (! [availableDevices containsObject:self.videoCaptureDevice])
        {
            self.videoCaptureDevice = availableDevices.firstObject;
        }
    }
}

- (void) videoCapturerRuntimeError:(NSError *)error
{
    [self logError:error withTag:@"CSVideoCapturerOSX videoCapturerRuntimeError"];
}

- (void) openVideoCaptureDevice
{
    [_videoCapturer openVideoCaptureDevice:_videoCaptureDevice
                                withFormat:_videoCaptureFormat
                            withCompletion:^(NSError *error)
     {
         if (error)
         {
             [self logError:error withTag:@"CSVideoCapturerOSX openVideoCaptureDevice"];
         }
     }];
}

- (void) closeVideoCaptureDevice
{
    [_videoCapturer closeVideoCaptureDeviceWithCompletion:nil];
}

// Switch cameras. videoCaptureDevice is a new camera from the collection;
// passing nil closes the current capture session.
- (void) setVideoCaptureDevice:(CSVideoCaptureDevice *)videoCaptureDevice
{
    _videoCaptureDevice = videoCaptureDevice;

    if (_videoCaptureStarted)
    {
        if (_videoCaptureDevice)
        {
            [self openVideoCaptureDevice];
        }
        else
        {
            [self closeVideoCaptureDevice];
        }
    }
}

- (void) startVideoCapturer
{
    id localVideoSink = (id)self.localView.layer;
    id videoSink = [self.videoInterface getLocalVideoSink:self.videoSessionChannel];

    [_videoCapturer setLocalVideoSink:localVideoSink];
    [_videoCapturer setVideoSink:videoSink];

    [self openVideoCaptureDevice];
    _videoCaptureStarted = YES;
}

- (void) stopVideoCapturer
{
    [self closeVideoCaptureDevice];
    _videoCaptureStarted = NO;
}

- (void) clearRemoteView:(SCPTestView*)view forChannelId:(int)channelId
{
    id remoteVideoSink = (id)view.layer;
    id remoteVideoSource = [self.videoInterface getRemoteVideoSource:channelId];
    [remoteVideoSource setVideoSink: nil];
    [remoteVideoSink handleVideoFrame: nil];
}

@end