# Migration from SDK v2 to SDK v3

## Installation

Installation involves adjusting your `package.json` to depend on version `3.0.0`. Version 3 of the Amazon Chime SDK for JavaScript makes a number of interface changes.

## Device controller

### Updates to the audio input API

We've changed `chooseAudioInputDevice` to `startAudioInput` because you can also pass non-device objects, such as `MediaStream` and `MediaTrackConstraints`.
```js
const audioInputDeviceInfo = // An array item from meetingSession.audioVideo.listAudioInputDevices;

// Before
await meetingSession.audioVideo.chooseAudioInputDevice(audioInputDeviceInfo.deviceId);

// After
await meetingSession.audioVideo.startAudioInput(audioInputDeviceInfo.deviceId);
```
In v3, you should call `stopAudioInput` to stop sending an audio stream when your Chime SDK meeting ends.
```js
const observer = {
  audioVideoDidStop: async sessionStatus => {
    // v3
    await meetingSession.audioVideo.stopAudioInput();

    // Or use the destroy API to call stopAudioInput and stopVideoInput.
    meetingSession.deviceController.destroy();
  },
};

meetingSession.audioVideo.addObserver(observer);
```
### Updates to the video input API

We've changed `chooseVideoInputDevice` to `startVideoInput` because you can also pass non-device objects, such as `MediaStream` and `MediaTrackConstraints`.
```js
const videoInputDeviceInfo = // An array item from meetingSession.audioVideo.listVideoInputDevices;

// Before
await meetingSession.audioVideo.chooseVideoInputDevice(videoInputDeviceInfo.deviceId);

// After
await meetingSession.audioVideo.startVideoInput(videoInputDeviceInfo.deviceId);
```
In v3, you should call `stopVideoInput` to stop the video input stream. `null` is no longer a valid input for `startVideoInput` (`null` has also been removed from the `Device` type). Calling `stopLocalVideoTile` stops sending the video stream to the media server and unbinds the video element, but it does not stop the video input stream.
```js
// Before
await meetingSession.audioVideo.chooseVideoInputDevice(null);
await meetingSession.audioVideo.stopLocalVideoTile();

// After
// This will automatically trigger stopLocalVideoTile during the meeting.
await meetingSession.audioVideo.stopVideoInput();
```
### Updates to the audio output API

We've changed `chooseAudioOutputDevice` to `chooseAudioOutput` to follow the naming convention in the input APIs.
```js
const audioOutputDeviceInfo = // An array item from meetingSession.audioVideo.listAudioOutputDevices;

// Before
await meetingSession.audioVideo.chooseAudioOutputDevice(audioOutputDeviceInfo.deviceId);

// After
await meetingSession.audioVideo.chooseAudioOutput(audioOutputDeviceInfo.deviceId);
```
### Updates to the video preview APIs

In v3, `startVideoPreviewForVideoInput` and `stopVideoPreviewForVideoInput` no longer affect a video input published by `startVideoInput` (`chooseVideoInputDevice` in v2).
```js
const videoInputDeviceInfo = // An array item from meetingSession.audioVideo.listVideoInputDevices;
await meetingSession.audioVideo.startVideoInput(videoInputDeviceInfo.deviceId);

const previewElement = document.getElementById('video-preview');
meetingSession.audioVideo.startVideoPreviewForVideoInput(previewElement);
meetingSession.audioVideo.stopVideoPreviewForVideoInput(previewElement);

// In v3, stopVideoPreviewForVideoInput does not implicitly stop the video published by startVideoInput.
// You should call stopVideoInput if you want to stop sending a video stream.
await meetingSession.audioVideo.stopVideoInput();
```
### Updates to the video input quality API

In v3, we've removed the `maxBandwidthKbps` parameter from `chooseVideoInputQuality` because it's not related to the video input device. Instead, you can set the ideal video maximum bandwidth using `setVideoMaxBandwidthKbps`.
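The original snippet for this change isn't included above, so here is a minimal before/after sketch; the 1280x720 at 15 fps and 1400 kbps values are only illustrative.

```js
// Before (v2): maxBandwidthKbps was passed together with the capture quality.
meetingSession.audioVideo.chooseVideoInputQuality(1280, 720, 15, 1400);

// After (v3): set the resolution/frame rate and the maximum bandwidth separately.
meetingSession.audioVideo.chooseVideoInputQuality(1280, 720, 15);
meetingSession.audioVideo.setVideoMaxBandwidthKbps(1400);
```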
### Removing synthesize video API

In v3, we've removed the `synthesizeVideoDevice` and `createEmptyVideoDevice` APIs. They are now available in our meeting demo.

## Messaging

### Remove AWS global object from MessagingSessionConfiguration.ts

`MessagingSessionConfiguration` used to require the AWS global object for SigV4 signing, which does not work with aws-sdk v3. Starting with Amazon Chime SDK for JavaScript v3, you no longer have to pass in the global AWS object.

### Update messagingSession.start to return Promise<void> instead of void

In aws-sdk v3, region and credentials can be async functions. In order to support aws-sdk v3, we updated the start API to be async.
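A minimal sketch of the new return type; `messagingSession` is assumed to be a messaging session you have already created.

```js
// Before (v2): start() returned void.
// messagingSession.start();

// After (v3): start() returns Promise<void>, so you can await it.
await messagingSession.start();
```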
## Meeting Status Code

The following meeting status codes were deprecated in v2.x and are now removed in v3.x. If your application handles them, please remove that handling.

## AudioVideo events

We have removed the following AudioVideo events in v3:

- `videoSendHealthDidChange`
- `videoSendBandwidthDidChange`
- `videoReceiveBandwidthDidChange`
- `estimatedDownlinkBandwidthLessThanRequired`
- `videoNotReceivingEnoughData`

You can derive the data for the first three events from the `metricsDidReceive` observer, as shown below. `estimatedDownlinkBandwidthLessThanRequired` and `videoNotReceivingEnoughData` cannot be replicated anymore, but you can make use of priority-based downlink to manage videos instead (see the sketch after the following example).
```js
// Before
const observer = {
  videoSendHealthDidChange: (bitrateKbps, packetsPerSecond) => {
    console.log(`Sending video bitrate in kilobits per second: ${bitrateKbps} and sending packets per second: ${packetsPerSecond}`);
  },
  videoSendBandwidthDidChange: (newBandwidthKbps, oldBandwidthKbps) => {
    console.log(`Sending bandwidth is ${newBandwidthKbps}, and old bandwidth is ${oldBandwidthKbps}`);
  },
  videoReceiveBandwidthDidChange: (newBandwidthKbps, oldBandwidthKbps) => {
    console.log(`Receiving bandwidth is ${newBandwidthKbps}, and old bandwidth is ${oldBandwidthKbps}`);
  },
};

// After
const observer = {
  oldSendBandwidthKbs: 0,
  oldRecvBandwidthKbs: 0,
  // Use method shorthand (not an arrow function) so that `this` refers to the observer object.
  metricsDidReceive(clientMetricReport) {
    const metricReport = clientMetricReport.getObservableMetrics();
    const {
      videoPacketSentPerSecond,
      videoUpstreamBitrate,
      nackCountPerSecond,
    } = metricReport;
    const availableSendBandwidthKbs = metricReport.availableOutgoingBitrate / 1000;
    const availableRecvBandwidthKbs = metricReport.availableIncomingBitrate / 1000;

    // videoSendHealthDidChange
    console.log(`Sending video bitrate in kilobits per second: ${videoUpstreamBitrate / 1000} and sending packets per second: ${videoPacketSentPerSecond}`);

    // videoSendBandwidthDidChange
    if (this.oldSendBandwidthKbs !== availableSendBandwidthKbs) {
      console.log(`Sending bandwidth is ${availableSendBandwidthKbs}, nack count per second is ${nackCountPerSecond}, and old bandwidth is ${this.oldSendBandwidthKbs}`);
      this.oldSendBandwidthKbs = availableSendBandwidthKbs;
    }

    // videoReceiveBandwidthDidChange
    if (this.oldRecvBandwidthKbs !== availableRecvBandwidthKbs) {
      console.log(`Receiving bandwidth is ${availableRecvBandwidthKbs}, and old bandwidth is ${this.oldRecvBandwidthKbs}`);
      this.oldRecvBandwidthKbs = availableRecvBandwidthKbs;
    }
  },
};

meetingSession.audioVideo.addObserver(observer);
```
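The guide doesn't include a snippet for the priority-based downlink option mentioned above, so the following is a rough sketch; it assumes you attach the policy to your `MeetingSessionConfiguration` before creating the meeting session and tune per-source priorities elsewhere.

```js
import { VideoPriorityBasedPolicy } from 'amazon-chime-sdk-js';

// Assumption: `configuration` is the MeetingSessionConfiguration you already build,
// and `logger` is your existing Logger instance.
const priorityBasedPolicy = new VideoPriorityBasedPolicy(logger);
configuration.videoDownlinkBandwidthPolicy = priorityBasedPolicy;

// After the session starts, use priorityBasedPolicy.chooseRemoteVideoSources(...)
// to tell the SDK which remote videos matter most when downlink bandwidth is constrained.
```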
## MeetingSessionPOSTLogger to POSTLogger

We have renamed `MeetingSessionPOSTLogger` to `POSTLogger` and removed the `MeetingSessionConfiguration` dependency. You don't need to pass the `MeetingSessionConfiguration` object to the `POSTLogger` constructor anymore.
```js
// You need responses from the server-side Chime API. See below for details.
const meetingResponse = // The response from the CreateMeeting API action.
const attendeeResponse = // The response from the CreateAttendee API action.

// Before
const meetingSessionConfiguration = new MeetingSessionConfiguration(meetingResponse, attendeeResponse);
const meetingSessionPOSTLogger = new MeetingSessionPOSTLogger(
  'SDK',
  meetingSessionConfiguration,
  20, // LOGGER_BATCH_SIZE
  2000, // LOGGER_INTERVAL_MS
  'URL TO POST LOGS',
  LogLevel.INFO
);

// After
const logger = new POSTLogger({
  url: 'URL TO POST LOGS',
});
```
You can create a `POSTLogger` object with `headers`, `logLevel`, `metadata`, and other options. See the `POSTLoggerOptions` documentation for more information.
```js
const logger = new POSTLogger({
  url: 'URL TO POST LOGS',
  // Add "headers" to each HTTP POST request.
  headers: { 'Chime-Bearer': 'authentication-token' },
  logLevel: LogLevel.INFO,
  // Add "metadata" to each HTTP POST request body.
  metadata: {
    appName: 'Your app name',
    meetingId: meetingResponse.Meeting.MeetingId,
    attendeeId: attendeeResponse.Attendee.AttendeeId,
  },
});

// You can also set new metadata after initializing POSTLogger.
// For example, you can set metadata after receiving API responses from your server application.
logger.metadata = {
  appName: 'Your app name',
  meetingId: meetingResponse.Meeting.MeetingId,
  attendeeId: attendeeResponse.Attendee.AttendeeId,
};
```
## Event Controller

We have de-coupled the `EventController` from `AudioVideoController`. Check below for the new changes and whether updates are needed for your implementation.

### Update implementation of custom EventController

```ts
interface EventController {
  // Adds an observer for events published to this controller.
  addObserver(observer: EventObserver): void;

  // Removes an observer for events published to this controller.
  removeObserver(observer: EventObserver): void;

  // EventReporter that the EventController uses to send events to the Amazon Chime backend.
  readonly eventReporter?: EventReporter;

  // pushMeetingState has been deprecated.
}
```
### Update creation of EventController

The `DefaultMeetingSession` constructor no longer takes in an `EventReporter`; instead, it optionally takes in an `EventController`, or creates one if none is given.
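A minimal sketch of the v3 wiring, assuming you want to supply your own `EventController`; the `DefaultEventController` arguments shown here follow the common `(configuration, logger)` pattern and should be checked against the current API reference.

```js
import {
  ConsoleLogger,
  DefaultDeviceController,
  DefaultEventController,
  DefaultMeetingSession,
  LogLevel,
  MeetingSessionConfiguration,
} from 'amazon-chime-sdk-js';

const logger = new ConsoleLogger('SDK', LogLevel.INFO);
const deviceController = new DefaultDeviceController(logger);
const configuration = new MeetingSessionConfiguration(meetingResponse, attendeeResponse);

// Optionally construct your own EventController (for example, to attach a custom EventReporter)...
const eventController = new DefaultEventController(configuration, logger);

// ...and pass it as the optional fourth argument. If you omit it, the session creates one for you.
const meetingSession = new DefaultMeetingSession(configuration, logger, deviceController, eventController);
```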
### Update eventDidReceive observer

The `eventDidReceive` function that was part of `AudioVideoObserver` has been moved to `EventObserver`, which is an observer that the `EventController` now handles. Because of this, calling `eventDidReceive` through `forEachObserver` on `AudioVideoController` is no longer possible in 3.x. However, you can still trigger `eventDidReceive` by using the `publishEvent` method on `EventController`. If you have a use case not covered by this method, you can implement your own `EventController` or make a feature request.
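A rough sketch of the v3 pattern, assuming `meetingSession.eventController` is available and that `publishEvent` takes an event name plus optional attributes; adjust to the actual `EventController` signatures in your SDK version.

```js
const eventObserver = {
  eventDidReceive(name, attributes) {
    console.log('Received event', name, attributes);
  },
};

// Register the observer with the EventController instead of the AudioVideoController.
meetingSession.eventController.addObserver(eventObserver);

// Publishing an event notifies EventObserver.eventDidReceive.
meetingSession.eventController.publishEvent('meetingStartRequested');
```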
## WebRTC Metrics
Before:
The `DefaultStatsCollector` used a hybrid approach to obtain WebRTC stats from the browser: the legacy (non-promise-based) `getStats` API in some browsers and the standardized (promise-based) `getStats` API in others.

After:
The legacy (non-promise-based) `getStats` API will be removed, and the standardized (promise-based) `getStats` API will be used for all browsers.

The SDK exposed some common WebRTC metrics publicly via the `metricsDidReceive` event. We did not make any change to `metricsDidReceive` itself. However, in v3 the legacy WebRTC metric specs will be removed or replaced by equivalent standardized metrics. For example:

- `jitterBufferMs` is now computed as `(Current.jitterBufferDelay - Previous.jitterBufferDelay) / (Current.jitterBufferEmittedCount - Previous.jitterBufferEmittedCount) * 1000`
- `decoderLoss` is now computed as `(Current.concealedSamples - Previous.concealedSamples) / (Current.totalSamplesReceived - Previous.totalSamplesReceived) * 100`
### Get raw RTCStatsReport
We add a new `rtcStatsReport` property to `DefaultClientMetricReport` to store the raw `RTCStatsReport` and expose it via the `metricsDidReceive(clientMetricReport: ClientMetricReport)` event. You can get the `rtcStatsReport` via `clientMetricReport.getRTCStatsReport()`. These metrics are updated every second.

Before:
Note: `getRTCPeerConnectionStats()` is on its way to being deprecated. Please use the new API `clientMetricReport.getRTCStatsReport()` returned by the `metricsDidReceive(clientMetricReport)` callback instead.

After:
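Likewise, a sketch of the new pattern that reads the raw report attached to each `metricsDidReceive` callback.

```js
const observer = {
  metricsDidReceive(clientMetricReport) {
    // After: the raw RTCStatsReport is already attached to the metric report.
    const rtcStatsReport = clientMetricReport.getRTCStatsReport();
    rtcStatsReport.forEach(stat => console.log(stat.type, stat));
  },
};

meetingSession.audioVideo.addObserver(observer);
```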
It's recommended to use this new API. It can also improve performance a bit, because you no longer need to explicitly call the `getStats` API again.