Video Simulcast

In multi-party video calls, attendees can enable the simulcast feature to enhance the overall video quality. Simulcast is a standardized technique where video publishers create multiple renditions, or layers, of the same video source and video subscribers have the flexibility to choose the rendition that best fits their needs based on factors such as available bandwidth, compute, and screen size.
The uplink policy controls the configuration of the renditions through camera capture and encoding parameters. The simulcast-enabled uplink policy is SimulcastUplinkPolicy.

Simulcast is currently disabled by default. To enable it, MeetingSessionConfiguration.enableSimulcastForUnifiedPlanChromiumBasedBrowsers must be set. Currently, only Chrome 76 and above is supported.
When enabling simulcast, you should use VideoPriorityBasedPolicy to allow switching between layers in response to application use-cases or network adaptation. More details about the priority-based downlink policy can be found here.
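As a minimal sketch (assuming the logger and configuration objects created in the "Creating a simulcast enabled meeting" section below), the priority-based downlink policy can be attached to the meeting session configuration like this:

```js
import { VideoPriorityBasedPolicy } from 'amazon-chime-sdk-js';

// Sketch: attach the priority-based downlink policy before creating the meeting session.
// Assumes `logger` and `configuration` are created as shown later in this guide.
const priorityBasedDownlinkPolicy = new VideoPriorityBasedPolicy(logger);
configuration.videoDownlinkBandwidthPolicy = priorityBasedDownlinkPolicy;
```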
Details

Simulcast overview

Simulcast support is built into the WebRTC library in the majority of browsers. The video input resolution is used to initially configure the attributes and maximum number of allowed simulcast streams. In WebRTC, if the RTCRtpSender is configured to have three layers of encoding, then the top layer is specified by the video input resolution; WebRTC scales the middle layer down by two and the lowest layer down by four, both horizontally and vertically. The WebRTC library supports a maximum of three simulcast layers, and only when the input resolution is 960x540 or higher will all three be available.
The recommended resolution in the JS SDK is 1280x720. This resolution provides the most flexibility and allows subscribers to maintain smoother video quality transitions. In certain circumstances, such as mobile browsers, the input resolution may not be high enough to support all desired simulcast streams, so the logic adapts appropriately to send as many as possible.
The SimulcastUplinkPolicy configures RTCRtpSender to have three encoding layers, but only ever enables two of them. Which two are enabled is based on a variety of factors. Experiments show that configuring RTCRtpSender with three encoding layers up front and dynamically enabling and disabling layers provides a better experience and reduces the burden of having to dynamically manage capture resolution and encoding parameters.
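To illustrate what three encoding layers look like at the WebRTC level, here is a rough sketch against the browser API, assuming a hypothetical peerConnection and videoTrack. This is not something you call when using the SDK, which manages the sender configuration internally through SimulcastUplinkPolicy:

```js
// Illustration only: three simulcast encodings declared directly against the WebRTC API.
// The Chime SDK's SimulcastUplinkPolicy manages this for you.
const transceiver = peerConnection.addTransceiver(videoTrack, {
  direction: 'sendonly',
  sendEncodings: [
    { rid: 'low', scaleResolutionDownBy: 4.0 }, // quarter of the capture resolution
    { rid: 'mid', scaleResolutionDownBy: 2.0 }, // half of the capture resolution
    { rid: 'hi',  scaleResolutionDownBy: 1.0 }, // full capture resolution
  ],
});
```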
Simulcast resolutions and behavior

WebRTC ultimately controls how much data is sent to the network based on its bandwidth estimation algorithm. It is very hard to circumvent the estimated bandwidth or trick WebRTC into sending more than it estimates. Most browsers expose WebRTC peer connection statistics which developers can access to retrieve the estimated available bandwidth for uplink or downlink.
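For example, a rough sketch of reading the sender-side estimate from the peer connection statistics in browsers that report it (peerConnection here stands for the underlying RTCPeerConnection, which you would not normally access directly when using the SDK):

```js
// Sketch: read WebRTC's sender-side bandwidth estimate from the active candidate pair.
// Not every browser populates availableOutgoingBitrate.
const stats = await peerConnection.getStats();
stats.forEach(report => {
  if (report.type === 'candidate-pair' && report.nominated) {
    console.log('Estimated uplink bandwidth (bps):', report.availableOutgoingBitrate);
  }
});
```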
The estimated available bandwidth reveals some information about the health of the end-to-end network and can trigger various WebRTC behaviors when facing network adversity or recovering from a network glitch. SimulcastUplinkPolicy does its best to anticipate and react to the underlying WebRTC behavior and to work with it to avoid further impact on video quality of service.
The SimulcastUplinkPolicy implements the following logic:
| Publishers/Attendees | Estimated Uplink Bandwidth | Simulcast stream 1 | Simulcast stream 2 |
|----------------------|----------------------------|--------------------------|------------------------|
| Attendees <= 2 | Any | 1280x720@15fps 1200 kbps | Not used |
| Publishers <= 4 and | >= 1000 kbps | 1280x720@15fps 1200 kbps | 320x180@15fps 300 kbps |
| Publishers <= 6 and | >= 350 kbps | 640x360@15fps 600 kbps | 320x180@15fps 200 kbps |
| Publishers <= 6 and | > 300 kbps | 640x360@15fps 600 kbps | 320x180@15fps 150 kbps |
| Publishers > 6 and | >= 350 kbps | 640x360@15fps 350 kbps | 320x180@15fps 200 kbps |
| Publishers > 6 and | >= 300 kbps | 640x360@15fps 350 kbps | 320x180@15fps 150 kbps |
| Any number publishers | < 300 kbps | Not used | 320x180@15fps 300 kbps |
The table entries represent the maximum configuration. When CPU or bandwidth is oversubscribed, WebRTC will dynamically adjust bitrates, disable a layer, or scale down resolution. The SimulcastUplinkPolicy has a monitoring mechanism that tracks the sending status and automatically adjusts without the need for application intervention.
Note that simulcast is disabled when there are only 2 or fewer attendees. This is because WebRTC has additional functionality to request lower bitrates from the remote end, and we will forward these requests if there are no competing receivers (i.e. if the receiving client estimates it has 200 kbps of downlink bandwidth available, this estimate will be relayed in a message to the sending client). Therefore there is no need for simulcast-based adaptation.
Creating a simulcast enabled meeting

First, create a meeting session configuration.

```js
import {
  ConsoleLogger,
  DefaultDeviceController,
  DefaultMeetingSession,
  LogLevel,
  MeetingSessionConfiguration
} from 'amazon-chime-sdk-js';

const logger = new ConsoleLogger('MyLogger', LogLevel.INFO);
const deviceController = new DefaultDeviceController(logger);

// You need responses from server-side Chime API. See 'Getting responses from your server application' in the README.
const meetingResponse = // The response from the CreateMeeting API action.
const attendeeResponse = // The response from the CreateAttendee or BatchCreateAttendee API action.

// This meeting session configuration will be used to enable simulcast in the next step.
const configuration = new MeetingSessionConfiguration(meetingResponse, attendeeResponse);
```
Now enable the enableSimulcastForUnifiedPlanChromiumBasedBrowsers feature flag in the created MeetingSessionConfiguration.
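For example, on the configuration created above:

```js
// Enable simulcast for unified-plan Chromium-based browsers.
configuration.enableSimulcastForUnifiedPlanChromiumBasedBrowsers = true;
```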
Now create a meeting session with the simulcast enabled meeting session configuration.
```js
// In the examples below, you will use this meetingSession object.
const meetingSession = new DefaultMeetingSession(
  configuration,
  logger,
  deviceController
);
```
This meetingSession is now simulcast enabled and will have the videoUplinkBandwidthPolicy set to DefaultSimulcastUplinkPolicy. Due to these policies, the local and remote video resolutions may change. The video resolution depends on the available simulcast streams, which in turn depend on the number of attendees and the current bandwidth estimations. Check the "Simulcast resolutions and behavior" section in this guide for more information.
Receive upstream simulcast layer change notification

The active simulcast streams are represented by the SimulcastLayers enum. Currently, the active upstream simulcast layers will only be either "Low and High", "Low and Medium", or "Low". To receive upstream simulcast layer change notifications, do the following steps:

First, implement the encodingSimulcastLayersDidChange method from the AudioVideoObserver interface.

Then, add an instance of the AudioVideoObserver using the addObserver method so you can receive the changed simulcast layer notification.
Now when you are in a simulcast enabled meeting and your upstream simulcast layer changes, you will be notified in the encodingSimulcastLayersDidChange callback with the updated simulcast layer.
```js
import { SimulcastLayers } from 'amazon-chime-sdk-js';

const SimulcastLayersMapping = {
  [SimulcastLayers.Low]: 'Low',
  [SimulcastLayers.LowAndMedium]: 'Low and Medium',
  [SimulcastLayers.LowAndHigh]: 'Low and High',
  [SimulcastLayers.Medium]: 'Medium',
  [SimulcastLayers.MediumAndHigh]: 'Medium and High',
  [SimulcastLayers.High]: 'High'
};

const observer = {
  encodingSimulcastLayersDidChange: simulcastLayers => {
    console.log(`current active simulcast layers changed to: ${SimulcastLayersMapping[simulcastLayers]}`);
  }
};

meetingSession.audioVideo.addObserver(observer);
```
Custom Simulcast Policy

If the default simulcast uplink policy does not work for you, you can create your own simulcast video uplink policy by implementing SimulcastUplinkPolicy and setting the video uplink policy via MeetingSessionConfiguration.videoUplinkBandwidthPolicy.
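For example, a minimal sketch, where MyCustomSimulcastUplinkPolicy is a hypothetical class implementing the SimulcastUplinkPolicy interface:

```js
// MyCustomSimulcastUplinkPolicy is a hypothetical class shown for illustration only;
// it must implement the SimulcastUplinkPolicy interface exported by amazon-chime-sdk-js.
configuration.videoUplinkBandwidthPolicy = new MyCustomSimulcastUplinkPolicy();
```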
Enable Simulcast For Content Share

You can use enableSimulcastForContentShare to toggle simulcast on/off for content share. Note that you don't have to set enableSimulcastForUnifiedPlanChromiumBasedBrowsers yourself, as this configuration is set automatically for the content share attendee as part of enableSimulcastForContentShare.
Below are the default simulcast encoding parameters:
| Encoding Parameters | Simulcast stream 1 | Simulcast stream 2 |
|---------------------|--------------------|--------------------|
| Max bitrate | 1200 kbps | 300 kbps |
| Scale resolution down by | 1 | 2 |
| Max framerate | Same as capture framerate (default is 15fps) | 5 |
You can override the encoding parameters to tailor them to the type of content. For example, for motion content you may want to scale the resolution down more but keep a high framerate for the low-quality layer, while for static content you may want to only reduce the framerate and keep a high resolution.
```js
// Enable simulcast and override the low layer encoding parameters
await meetingSession.audioVideo.enableSimulcastForContentShare(true, {
  low: {
    maxBitrateKbps: 350,
    scaleResolutionDownBy: 4,
    maxFramerate: 10,
  },
});
await meetingSession.audioVideo.startContentShareFromScreenCapture();
```