Version: v1.14.0

Publishing and Subscribing

Concepts

Three core concepts underlie real-time functionality: stage, strategy, and events. The design goal is minimizing the amount of client-side logic necessary to build a working product.

Stage

The Stage class is the main point of interaction between the host application and the SDK. It represents the stage itself and is used to join and leave the stage. Creating and joining a stage requires a valid, unexpired token string from the control plane (represented as token). Joining and leaving a stage are simple:

const stage = new Stage(token, strategy);

try {
  await stage.join();
} catch (error) {
  // handle join exception
}

stage.leave();

Strategy

The StageStrategy interface provides a way for the host application to communicate the desired state of the stage to the SDK. Three functions need to be implemented: shouldSubscribeToParticipant, shouldPublishParticipant, and stageStreamsToPublish. All are discussed below.

To use a defined strategy, pass it to the Stage constructor. The following is a complete example of an application using a strategy to publish a participant's webcam to the stage and subscribe to all participants. Each required strategy function's purpose is explained in detail in the subsequent sections.

const devices = await navigator.mediaDevices.getUserMedia({
  audio: true,
  video: {
    width: { max: 1280 },
    height: { max: 720 },
  },
});
const myAudioTrack = new LocalStageStream(devices.getAudioTracks()[0]);
const myVideoTrack = new LocalStageStream(devices.getVideoTracks()[0]);

// Define the stage strategy, implementing required functions
const strategy = {
  audioTrack: myAudioTrack,
  videoTrack: myVideoTrack,

  // optional
  updateTracks(newAudioTrack, newVideoTrack) {
    this.audioTrack = newAudioTrack;
    this.videoTrack = newVideoTrack;
  },

  // required
  stageStreamsToPublish() {
    return [this.audioTrack, this.videoTrack];
  },

  // required
  shouldPublishParticipant(participant) {
    return true;
  },

  // required
  shouldSubscribeToParticipant(participant) {
    return SubscribeType.AUDIO_VIDEO;
  },
};

// Initialize the stage and start publishing
const stage = new Stage(token, strategy);
await stage.join();


// To update later (e.g. in an onClick event handler)
strategy.updateTracks(myNewAudioTrack, myNewVideoTrack);
stage.refreshStrategy();

Subscribing to Participants

shouldSubscribeToParticipant(participant: StageParticipantInfo): SubscribeType

When a remote participant joins the stage, the SDK queries the host application about the desired subscription state for that participant. The options are NONE, AUDIO_ONLY, and AUDIO_VIDEO. When returning a value for this function, the host application does not need to worry about the publish state, current subscription state, or stage connection state. If AUDIO_VIDEO is returned, the SDK waits until the remote participant is publishing before it subscribes, and it updates the host application by emitting events throughout the process.

Here is a sample implementation:

const strategy = {
  shouldSubscribeToParticipant: (participant) => {
    return SubscribeType.AUDIO_VIDEO;
  },

  // ... other strategy functions
};

This is the complete implementation of this function for a host application that always wants all participants to see each other; e.g., a video chat application.

More advanced implementations also are possible. For example, assume the application provides a role attribute when creating the token with CreateParticipantToken. The application could use the attributes property on StageParticipantInfo to selectively subscribe to participants based on the server-provided attributes:

const strategy = {
  shouldSubscribeToParticipant(participant) {
    switch (participant.attributes.role) {
      case 'moderator':
        return SubscribeType.NONE;
      case 'guest':
        return SubscribeType.AUDIO_VIDEO;
      default:
        return SubscribeType.NONE;
    }
  },

  // ... other strategy properties
};

This can be used to create a stage where moderators can monitor all guests without being seen or heard themselves. The host application could use additional business logic to let moderators see each other but remain invisible to guests.

Publishing

shouldPublishParticipant(participant: StageParticipantInfo): boolean

Once connected to the stage, the SDK queries the host application to see if a particular participant should publish. This is invoked only on local participants that have permission to publish based on the provided token.

Here is a sample implementation:

const strategy = {
  shouldPublishParticipant: (participant) => {
    return true;
  },

  // ... other strategy properties
};

This is for a standard video chat application where users always want to publish. They can mute and unmute their audio and video to instantly hide themselves or be seen and heard. (They also can publish and unpublish, but that is much slower. Mute/unmute is preferable for use cases where visibility changes often.)

Choosing Streams to Publish

stageStreamsToPublish(): LocalStageStream[];

When publishing, this is used to determine what audio and video streams should be published. This is covered in more detail later in Publish a Media Stream.

Updating the Strategy

The strategy is intended to be dynamic: the values returned from any of the above functions can be changed at any time. For example, if the host application does not want to publish until the end user taps a button, you could return a variable from shouldPublishParticipant (something like hasUserTappedPublishButton). When that variable changes based on an interaction by the end user, call stage.refreshStrategy() to signal to the SDK that it should query the strategy for the latest values, applying only the values that have changed. If the SDK observes that the shouldPublishParticipant value has changed, it starts the publish process. If the SDK queries and all functions return the same values as before, the refreshStrategy call does not modify the stage.
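
For example, a minimal sketch of this pattern, assuming a hypothetical publishButton element and the hasUserTappedPublishButton flag mentioned above (and reusing the tracks from the earlier example):

// Hypothetical flag toggled by user interaction
let hasUserTappedPublishButton = false;

const strategy = {
  stageStreamsToPublish: () => [myAudioTrack, myVideoTrack],
  shouldPublishParticipant: (participant) => hasUserTappedPublishButton,
  shouldSubscribeToParticipant: (participant) => SubscribeType.AUDIO_VIDEO,
};

publishButton.addEventListener('click', () => {
  hasUserTappedPublishButton = !hasUserTappedPublishButton;
  stage.refreshStrategy(); // the SDK re-queries the strategy and applies only what changed
});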

If the return value of shouldSubscribeToParticipant changes from AUDIO_VIDEO to AUDIO_ONLY, the video stream is removed for every participant whose return value changed, if a video stream existed previously.

Generally, the stage uses the strategy to apply the difference between the previous and current strategies as efficiently as possible, without the host application needing to track all the state required to manage this properly. Because of this, calling stage.refreshStrategy() is a cheap operation: it does nothing unless the strategy has changed.

Events

A Stage instance is an event emitter. Using stage.on(), the state of the stage is communicated to the host application. Updates to the host application’s UI usually can be supported entirely by the events. The events are as follows:

stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_LEFT, (participant) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_PUBLISH_STATE_CHANGED, (participant, state) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_SUBSCRIBE_STATE_CHANGED, (participant, state) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {})
stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_REMOVED, (participant, streams) => {})
stage.on(StageEvents.STAGE_STREAM_MUTE_CHANGED, (participant, stream) => {})

For most of these events, the corresponding ParticipantInfo is provided.

It is not expected that the information provided by the events impacts the return values of the strategy. For example, the return value of shouldSubscribeToParticipant is not expected to change when STAGE_PARTICIPANT_PUBLISH_STATE_CHANGED is emitted. If the host application wants to subscribe to a particular participant, it should return the desired subscription type regardless of that participant’s publish state. The SDK is responsible for ensuring that the desired state of the strategy is acted on at the correct time, based on the state of the stage.
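
As a minimal sketch of that separation (assuming a hypothetical updateParticipantTile UI helper), events drive the UI while the strategy keeps returning the desired state:

// Events update UI state only; updateParticipantTile is a hypothetical helper in your application
stage.on(StageEvents.STAGE_PARTICIPANT_PUBLISH_STATE_CHANGED, (participant, state) => {
  updateParticipantTile(participant.id, { publishState: state });
});

// The strategy keeps returning the desired subscription regardless of publish state;
// the SDK subscribes once the participant actually publishes.
const strategy = {
  shouldSubscribeToParticipant: (participant) => SubscribeType.AUDIO_VIDEO,
  // ... other strategy functions
};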

Publish a Media Stream

Local devices like microphones and cameras are retrieved using the same steps as outlined above in Retrieve a MediaStream from a Device. In this example, we use the MediaStream to create a list of LocalStageStream objects that the SDK uses for publishing:

try {
  // Get stream using steps outlined in document above
  const stream = await getMediaStreamFromDevice();

  const streamsToPublish = stream
    .getTracks()
    .map((track) => new LocalStageStream(track));

  // Create stage with strategy, or update existing strategy
  const strategy = {
    stageStreamsToPublish: () => streamsToPublish,
  };
} catch (error) {
  // handle errors from device retrieval
}

Publish a Screenshare

Applications often need to publish a screenshare in addition to the user's web camera. Publishing a screenshare necessitates creating an additional token for the stage, specifically for publishing the screenshare's media. Use getDisplayMedia and constrain the resolution to a maximum of 720p. After that, the steps are similar to publishing a camera to the stage.

// Invoke the following lines to get the screenshare's tracks
const media = await navigator.mediaDevices.getDisplayMedia({
  video: {
    width: {
      max: 1280,
    },
    height: {
      max: 720,
    },
  },
});
const screenshare = { videoStream: new LocalStageStream(media.getVideoTracks()[0]) };
const screenshareStrategy = {
  stageStreamsToPublish: () => {
    return [screenshare.videoStream];
  },
  shouldPublishParticipant: (participant) => {
    return true;
  },
  shouldSubscribeToParticipant: (participant) => {
    return SubscribeType.AUDIO_VIDEO;
  },
};
const screenshareStage = new Stage(screenshareToken, screenshareStrategy);
await screenshareStage.join();

Display and Remove Participants

After subscribing is completed, you receive an array of StageStream objects through the STAGE_PARTICIPANT_STREAMS_ADDED event. The event also gives you participant info to help when displaying media streams:

stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {
  let streamsToDisplay = streams;

  if (participant.isLocal) {
    // Ensure to exclude local audio streams, otherwise echo will occur
    streamsToDisplay = streams.filter((stream) => stream.streamType === StreamType.VIDEO);
  }

  // Create or find video element already available in your application
  const videoEl = getParticipantVideoElement(participant.id);

  // Attach the participant's streams
  videoEl.srcObject = new MediaStream();
  streamsToDisplay.forEach((stream) => videoEl.srcObject.addTrack(stream.mediaStreamTrack));
});

When a participant stops publishing or is unsubscribed from a stream, the STAGE_PARTICIPANT_STREAMS_REMOVED event is emitted with the streams that were removed. Host applications should use this as a signal to remove the participant’s video stream from the DOM.

STAGE_PARTICIPANT_STREAMS_REMOVED is invoked for all scenarios in which a stream might be removed, including:

  • The remote participant stops publishing.
  • A local device unsubscribes or changes subscription from AUDIO_VIDEO to AUDIO_ONLY.
  • The remote participant leaves the stage.
  • The local participant leaves the stage.

Because STAGE_PARTICIPANT_STREAMS_REMOVED is invoked for all scenarios, no custom business logic is required around removing participants from the UI during remote or local leave operations.
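
A minimal sketch of handling this event, assuming the hypothetical getParticipantVideoElement helper from the example above:

stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_REMOVED, (participant, streams) => {
  const videoEl = getParticipantVideoElement(participant.id);
  if (!videoEl || !videoEl.srcObject) return;

  // Detach only the tracks that were removed
  streams.forEach((stream) => videoEl.srcObject.removeTrack(stream.mediaStreamTrack));

  // If nothing is left, remove the element from the DOM
  if (videoEl.srcObject.getTracks().length === 0) {
    videoEl.remove();
  }
});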

Mute and Unmute Media Streams

LocalStageStream objects have a setMuted function that controls whether the stream is muted. This function can be called on the stream before or after it is returned from the stageStreamsToPublish strategy function.

Important: If a new LocalStageStream object instance is returned by stageStreamsToPublish after a call to refreshStrategy, the mute state of the new stream object is applied to the stage. Be careful when creating new LocalStageStream instances to make sure the expected mute state is maintained.
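
For example, a minimal sketch of toggling the microphone, assuming a hypothetical muteButton element and reusing myAudioTrack from the earlier example:

let micMuted = false;

muteButton.addEventListener('click', () => {
  micMuted = !micMuted;
  myAudioTrack.setMuted(micMuted); // takes effect whether called before or after publishing
});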

Monitor Remote Participant Media Mute State

When participants change the mute state of their video or audio, the STAGE_STREAM_MUTE_CHANGED event is triggered with a list of streams that have changed. Use the isMuted property on StageStream to update your UI accordingly:

stage.on(StageEvents.STAGE_STREAM_MUTE_CHANGED, (participant, stream) => {
  if (stream.streamType === 'video' && stream.isMuted) {
    // handle UI changes for video track getting muted
  }
});

Also, you can look at StageParticipantInfo for state information on whether audio or video is muted:

stage.on(StageEvents.STAGE_STREAM_MUTE_CHANGED, (participant, stream) => {
  if (participant.videoStopped || participant.audioMuted) {
    // handle UI changes for either video or audio
  }
});

Get WebRTC Statistics

To get the latest WebRTC statistics for a publishing or subscribing stream, use getStats on StageStream. This is an asynchronous method, which you can call via await or by chaining a promise. The result is an RTCStatsReport, a dictionary containing all standard statistics.

try {
  const stats = await stream.getStats();
} catch (error) {
  // Unable to retrieve stats
}
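
As a minimal sketch, the report can be iterated like any other RTCStatsReport (the 'outbound-rtp' and 'inbound-rtp' entry types are standard WebRTC names, not SDK-specific):

try {
  const stats = await stream.getStats();
  stats.forEach((report) => {
    // 'outbound-rtp' entries appear for publishing streams, 'inbound-rtp' for subscribing streams
    if (report.type === 'outbound-rtp' || report.type === 'inbound-rtp') {
      console.log(report.type, report.bytesSent ?? report.bytesReceived);
    }
  });
} catch (error) {
  // Unable to retrieve stats
}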

Optimizing Media

It's recommended to limit getUserMedia and getDisplayMedia calls to the following constraints for the best performance:

const CONSTRAINTS = {
  video: {
    width: { ideal: 1280 }, // Note: flip width and height values if portrait is desired
    height: { ideal: 720 },
    frameRate: { ideal: 30 },
  },
};

You can further constrain the media through additional options passed to the LocalStageStream constructor:

// Option shape (TypeScript notation); supply concrete values in your application
const localStreamOptions = {
  minBitrate?: number;
  maxBitrate?: number;
  maxFramerate?: number;
  simulcast: {
    enabled: boolean
  }
}
const localStream = new LocalStageStream(track, localStreamOptions)

In the code above:

  • minBitrate sets a minimum bitrate that the browser is expected to use. However, a low-complexity video stream may push the encoder below this bitrate.
  • maxBitrate sets a maximum bitrate that the browser is expected not to exceed for this stream.
  • maxFramerate sets a maximum frame rate that the browser is expected not to exceed for this stream.
  • The simulcast option is usable only on Chromium-based browsers. It enables sending three rendition layers of the stream.
    • This allows the server to choose which rendition to send to other participants, based on their networking limitations.
    • When simulcast is specified along with a maxBitrate and/or maxFramerate value, the highest rendition layer is configured with these values in mind, provided the maxBitrate does not go below the SDK's internal default maxBitrate of 900 kbps for the second-highest layer.
    • If maxBitrate is set too low relative to the second-highest layer's default value, simulcast is disabled.
    • simulcast cannot be toggled on or off without republishing the media: have shouldPublishParticipant return false, call refreshStrategy(), then have shouldPublishParticipant return true and call refreshStrategy() again, as shown in the sketch below.
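
A minimal sketch of that republish sequence, assuming a mutable shouldPublish flag (a hypothetical name) and declaring the stream with let so it can be recreated:

let shouldPublish = true;
let localStream = new LocalStageStream(track, { simulcast: { enabled: true } });

const strategy = {
  stageStreamsToPublish: () => [localStream],
  shouldPublishParticipant: () => shouldPublish,
  shouldSubscribeToParticipant: () => SubscribeType.AUDIO_VIDEO,
};

// 1. Stop publishing
shouldPublish = false;
stage.refreshStrategy();

// 2. Recreate the stream with the new simulcast setting
localStream = new LocalStageStream(track, { simulcast: { enabled: false } });

// 3. Publish again
shouldPublish = true;
stage.refreshStrategy();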

Get Participant Attributes

If you specify attributes in the CreateParticipantToken endpoint request, you can see the attributes in StageParticipantInfo properties:

stage.on(StageEvents.STAGE_PARTICIPANT_JOINED, (participant) => {
  console.log(`Participant ${participant.id} info:`, participant.attributes);
});

Handling Network Issues

When the local device’s network connection is lost, the SDK internally tries to reconnect without any user action. In some cases, the SDK is not successful and user action is needed.

Broadly, the state of the stage can be handled via the STAGE_CONNECTION_STATE_CHANGED event:

stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
  switch (state) {
    case StageConnectionState.DISCONNECTED:
      // handle disconnected UI
      break;
    case StageConnectionState.CONNECTING:
      // handle establishing connection UI
      break;
    case StageConnectionState.CONNECTED:
      // SDK is connected to the Stage
      break;
    case StageConnectionState.ERRORED:
      // SDK encountered an error and lost its connection to the stage. Wait for CONNECTED.
      break;
  }
});

In general, you can ignore an ERRORED state encountered after successfully joining a stage, as the SDK tries to recover internally. If the SDK reports an ERRORED state and the stage remains in the CONNECTING state for an extended period of time (e.g., 30 seconds or longer), you are probably disconnected from the network.
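
As a minimal sketch (the 30-second threshold and the promptUserToCheckNetwork helper are assumptions for illustration), you could time how long the stage stays in a recovering state:

const RECONNECT_WARNING_MS = 30000; // assumed threshold; tune for your application
let reconnectTimer;

stage.on(StageEvents.STAGE_CONNECTION_STATE_CHANGED, (state) => {
  if (state === StageConnectionState.ERRORED) {
    // The SDK lost its connection and will attempt to recover; start a timer if one is not already running
    reconnectTimer ??= setTimeout(() => {
      promptUserToCheckNetwork(); // hypothetical UI helper
    }, RECONNECT_WARNING_MS);
  } else if (state === StageConnectionState.CONNECTED) {
    clearTimeout(reconnectTimer);
    reconnectTimer = undefined;
  }
});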

Broadcast the Stage to an IVS Channel

To broadcast a stage, create a separate IVSBroadcastClient session and then follow the usual instructions for broadcasting with the SDK, described above. The StageStream objects exposed via STAGE_PARTICIPANT_STREAMS_ADDED can be used to retrieve the participant media streams and apply them to the broadcast stream composition, as follows:

// Setup client with preferred settings
const broadcastClient = getIvsBroadcastClient();

stage.on(StageEvents.STAGE_PARTICIPANT_STREAMS_ADDED, (participant, streams) => {
  streams.forEach((stream) => {
    const inputStream = new MediaStream([stream.mediaStreamTrack]);
    switch (stream.streamType) {
      case StreamType.VIDEO:
        broadcastClient.addVideoInputDevice(inputStream, `video-${participant.id}`, {
          index: DESIRED_LAYER,
          width: MAX_WIDTH,
          height: MAX_HEIGHT,
        });
        break;
      case StreamType.AUDIO:
        broadcastClient.addAudioInputDevice(inputStream, `audio-${participant.id}`);
        break;
    }
  });
});

Optionally, you can composite a stage and broadcast it to an IVS low-latency channel, to reach a larger audience. See Enabling Multiple Hosts on an Amazon IVS Stream in the IVS Low-Latency Streaming User Guide.