ActiveSpeakerDetectorFacade provides API calls to add and remove
active speaker observers.
ActiveSpeakerObserver handles active speaker detection and score changes for attendees.
ActiveSpeakerPolicy calculates a normalized score of how active a speaker is. Implementations
of ActiveSpeakerPolicy provide custom algorithms for calculating the score.
AudioDeviceCapabilities describes whether the audio input and output devices are enabled or disabled. Disabling
either the audio input or output changes which audio permissions are required to join a meeting.
AudioMode describes the audio mode in which the audio client should operate during a meeting session.
AudioRecordingPresetOverride describes the audio recording preset in which
the audio client should operate during a meeting session.
The values below (except None) directly map to the values defined in:
https://android.googlesource.com/platform/frameworks/wilhelm/+/master/include/SLES/OpenSLES_AndroidConfiguration.h
AudioStreamType describes the audio stream type in which the audio client should operate
during a meeting session.
AudioVideoConfiguration represents the configuration to be used for audio and video during a
meeting session.
AudioVideoControllerFacade manages the signaling and peer connections.
AudioVideoObserver handles audio / video session events.
A set of options that can be supplied when creating a background blur video frame processor.
BackgroundBlurVideoFrameProcessor draws frames to RGBA, converts to CPU memory, identifies the
foreground person, and blurs the background of a video frame.
BackgroundFilterVideoFrameProcessor draws frames to RGBA, converts to CPU memory, identifies the
foreground person, and applies a filter (blur or replacement) to a video frame.
A set of options that can be supplied when creating a background replacement video frame processor.
BackgroundReplacementVideoFrameProcessor draws frames to RGBA, converts to CPU memory, identifies the foreground person,
and replaces the background of a video frame.
CameraCaptureSource is an interface for camera capture sources with additional features
not covered by VideoCaptureSource.
CaptureSourceError describes an error resulting from a capture source failure.
These can be used to trigger UI or to attempt to restart the capture source.
CaptureSourceObserver observes events resulting from different types of capture devices. Builders
may desire this input to decide when to show certain UI elements, or to notify users of failure.
ConsoleLogger writes logs to the console.
ContentShareController exposes methods for starting and stopping content share with a ContentShareSource.
The content represents a media stream to be shared in the meeting, such as a screen capture or
media files.
See the content share guide for details.
ContentShareObserver handles all callbacks related to the content share.
By implementing the callback functions and registering via ContentShareController.addContentShareObserver,
one can be notified of content share status events.
ContentShareSource contains the media sources to attach to the content share.
ContentShareStatus indicates a status received regarding the content share.
ContentShareStatusCode indicates the reason the content share event occurred.
A data message received from the server.
DataMessageObserver lets one listen for data message events.
One can subscribe this observer to multiple data message topics in order
to receive and process the messages sent to those topics.
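Topic-scoped delivery can be sketched as follows, using stand-in types and a simplified signature (not the SDK's actual API, which delivers DataMessage objects carrying topic, payload, and sender info):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative router: observers register per topic and receive only
// messages sent to the topics they subscribed to.
class DataMessageRouter {
    private final Map<String, List<Consumer<String>>> observers = new HashMap<>();

    // Subscribe an observer to a topic (hypothetical simplified signature).
    void subscribe(String topic, Consumer<String> observer) {
        observers.computeIfAbsent(topic, t -> new ArrayList<>()).add(observer);
    }

    // Deliver a message only to the observers of its topic.
    void deliver(String topic, String message) {
        for (Consumer<String> o : observers.getOrDefault(topic, List.of())) {
            o.accept(message);
        }
    }
}
```

The same observer instance may be subscribed to several topics, matching the behavior described above.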
DefaultActiveSpeakerDetector is a default implementation of the active speaker detector.
DefaultActiveSpeakerPolicy is a default implementation of the active speaker policy.
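A policy of this kind can be illustrated with a minimal sketch; the rates and logic here are hypothetical, not the SDK's actual algorithm:

```java
// Illustrative normalized active speaker score: the score rises toward 1.0
// while an attendee is speaking and decays toward 0.0 during silence.
class SimpleSpeakerScorePolicy {
    private static final double ATTACK = 0.63; // hypothetical rise per speaking update
    private static final double DECAY = 0.5;   // hypothetical fall per silent update
    private double score = 0.0;

    // Call once per volume update; returns a score in [0.0, 1.0].
    double calculateScore(boolean speaking) {
        score = speaking ? score + (1.0 - score) * ATTACK : score * DECAY;
        return Math.min(1.0, Math.max(0.0, score));
    }
}
```

A detector can then compare such scores across attendees to rank active speakers.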
DefaultCameraCaptureSource configures a reasonably standard capture stream which renders to the
Surface provided by a capture source created by a SurfaceTextureCaptureSourceFactory.
DefaultEglCore is an implementation of EglCore which uses EGL14 and OpenGLES2.
OpenGLES3 has incompatibilities with the AmazonChimeSDKMedia library.
DefaultEglCoreFactory will create a root EglCore lazily if no shared context is provided.
It will track all child EglCore objects and release the root core if all child cores are released.
DefaultModality is a backwards-compatible extension of the
attendee id (a UUID string) and session token (a base64 string) schemas.
It appends # and a modality to either string, which indicates the modality
of the participant.
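The scheme can be sketched as follows (method names are illustrative, not the SDK's exact API):

```java
// An id like "attendee-id#content" carries a base id plus a modality
// suffix, separated by '#'.
class Modality {
    static final String SEPARATOR = "#";

    // The id without its modality suffix.
    static String base(String id) {
        return id.split(SEPARATOR, 2)[0];
    }

    // The modality suffix, or null when the id has none.
    static String modality(String id) {
        String[] parts = id.split(SEPARATOR, 2);
        return parts.length > 1 ? parts[1] : null;
    }

    static boolean hasModality(String id, String type) {
        return type.equals(modality(id));
    }
}
```

This is how, for example, a content share can be recognized as belonging to the same attendee as their primary audio/video.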
DefaultScreenCaptureSource uses MediaProjection to create a VirtualDisplay to capture the
device screen. It will render the captured frames to a Surface provided by a SurfaceTextureCaptureSourceFactory.
DefaultSurfaceTextureCaptureSource provides a Surface which it listens to,
converting new images into VideoFrameTextureBuffer objects.
DefaultSurfaceTextureCaptureSourceFactory creates DefaultSurfaceTextureCaptureSource objects.
DeviceChangeObserver listens for audio device changes.
DeviceController keeps track of the devices being used for audio
(e.g. built-in speaker) and video input (e.g. camera).
The list functions return MediaDevice objects.
Changes in device availability are broadcast to any registered
DeviceChangeObserver.
EglCore is an interface containing all EGL state in one component. In the future it may contain additional helper methods.
EglCoreFactory is a factory interface for creating new EglCore objects, possibly using shared state.
EglVideoRenderView is a VideoRenderView which requires EGL initialization to render VideoFrameTextureBuffer buffers.
The VideoTileController should automatically manage (init and release) any bound tiles, but if a
view is used outside of the controller (e.g. in pre-meeting device selection), users will
need to call init and release themselves.
EventAnalyticsController keeps track of events and notifies EventAnalyticsObserver.
An event describes the success and failure conditions for the meeting session.
EventAnalyticsFacade allows builders to listen to meeting analytics events
through adding/removing EventAnalyticsObserver.
EventAnalyticsObserver handles events related to analytics.
EventAttributes describes the attributes of a meeting event.
EventBuffer defines a buffer which will consume the SDKEvent internally.
EventClientConfiguration defines core properties needed for every event client configuration.
EventClientType defines the type of event client configuration that will be
sent to the server.
EventName represents an SDK event that can help builders analyze their data.
EventReporter is a class that processes meeting events created in EventAnalyticsController.
EventReporterFactory facilitates creating EventReporter objects.
EventSender is responsible for sending IngestionRecord.
GlTextureFrameBufferHelper is a helper class for handling an OpenGL framebuffer with only a color
attachment and no depth or stencil buffer.
IngestionConfiguration defines the configuration that can customize DefaultEventReporter.
IngestionEvent defines the event format that the ingestion server will accept.
A record that contains a batch of IngestionEvent to send;
it contains metadata that is shared among events.
Contains the configuration for a local video or content share to be sent.
Logger defines how to write logs for different logging levels.
A media device with its info.
The media device's type (e.g. front video camera, rear video camera, Bluetooth audio).
MeetingEventClientConfiguration defines one type of EventClientConfiguration
that is needed for DefaultEventReporter.
MeetingHistoryEventName is a notable event (such as MeetingStartSucceeded) that occurs during a meeting.
Thus, this also includes the events in EventName.
MeetingSession contains everything needed for the attendee to authenticate,
reach the meeting service, start audio, and start video.
MeetingSessionConfiguration includes the information needed to start the meeting session, such as
attendee credentials and URLs for audio and video.
MeetingSessionCredentials includes the credentials used to authenticate
the attendee in the meeting.
MeetingSessionStatus indicates a status received regarding the session.
MeetingSessionStatusCode provides additional details for the MeetingSessionStatus
received for a session.
MeetingSessionURLs contains the URLs that will be used to reach the
meeting service.
MetricsObserver handles events related to audio/video metrics.
ModelShape defines the shape of an ML model. This can be used to define the input and
output shapes of an ML model.
NoopEventReporterFactory returns a null EventReporter.
ObservableMetric represents filtered metrics that are intended to propagate to the
top level observers. All metrics are measured over the past second.
PrimaryMeetingPromotionObserver handles events related to the promotion and demotion
of attendees initially in replica meetings.
RealtimeControllerFacade controls aspects of meetings concerning realtime UX
that, for performance, privacy, or other reasons, should be implemented using
the most direct path. Callbacks generated by this interface should be
consumed synchronously and, where possible, without business logic dependent
on UI state.
RealtimeObserver lets one listen to real-time events such as volume, signal strength, or
attendee changes.
A video source available in the current meeting. RemoteVideoSource objects need to be consistent
between remoteVideoSourcesDidBecomeAvailable and updateVideoSourceSubscriptions,
as they are used as keys in maps that may be updated.
That is, when setting up a map for updateVideoSourceSubscriptions,
do not construct RemoteVideoSource objects yourself,
or the configuration may not be updated.
ScreenCaptureResolutionCalculator calculates a scaled resolution based on the input resolution
and a target resolution constraint.
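A minimal sketch of such a calculation, assuming the constraint applies to the longest side (the real calculator may also align dimensions for encoder requirements):

```java
// Illustrative aspect-ratio-preserving downscaling under a target constraint.
class ResolutionScaler {
    // Returns {width, height} scaled so that the longest side fits targetMax.
    static int[] computeTargetSize(int width, int height, int targetMax) {
        int longest = Math.max(width, height);
        if (longest <= targetMax) {
            return new int[] { width, height }; // already within the constraint
        }
        double scale = (double) targetMax / longest;
        return new int[] {
            (int) Math.round(width * scale),
            (int) Math.round(height * scale)
        };
    }
}
```

Scaling down before encoding keeps screen-capture bandwidth bounded while preserving the source aspect ratio.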
SegmentationProcessor predicts foreground mask for an image.
SignalStrength describes the signal strength of an attendee for audio.
SurfaceRenderView is an implementation of EglVideoRenderView which uses EGL14 and OpenGLES2
to draw any incoming video buffer types to the surface provided by the inherited SurfaceView.
SurfaceTextureCaptureSource provides a Surface which can be passed to system sources like the camera.
Upon a call to start, the source will listen to the surface and emit any new images as VideoFrame objects to any
downstream VideoSink interfaces. This class is mostly intended for composition within VideoSource implementations, which will
pass the created Surface to a system source and then call addVideoSink to receive the frames before transforming them and
passing them downstream.
SurfaceTextureCaptureSourceFactory is a factory interface for creating new SurfaceTextureCaptureSource objects,
possibly using shared state. This provides flexibility over the use of SurfaceTextureCaptureSource objects, since
they may not allow reuse or may have a delay before reuse is possible.
TextureRenderView is an implementation of EglVideoRenderView which uses EGL14 and OpenGLES2
to draw any incoming video buffer types to the surface provided by the inherited TextureView.
TranscriptEventObserver lets one listen to TranscriptEvent events of the current meeting.
URLRewriter is a function to transform URLs.
Use it to rewrite URLs in order to traverse proxies.
Versioning provides an API to retrieve the SDK version.
VideoCaptureFormat describes a given capture format that can be set to a VideoCaptureSource.
Note that VideoCaptureSource implementations may ignore or adjust unsupported values.
VideoCaptureSource is an interface for various video capture sources (e.g. screen, camera, file) which can emit VideoFrame objects.
All of the APIs here can be called regardless of whether the AudioVideoFacade is started.
VideoContentHint describes the content type of a video source so that downstream encoders and other components can properly
decide which parameters will work best. These options mirror https://www.w3.org/TR/mst-content-hint/ .
VideoFrame is a class which contains a VideoFrameBuffer and metadata necessary for transmission.
It is typically produced via a VideoSource and consumed via a VideoSink.
VideoFrameBuffer is a buffer which contains a single video buffer's raw data.
Typically owned by a VideoFrame which includes additional metadata.
VideoFrameI420Buffer provides a reference-counted wrapper of
YUV data whose planes are natively (i.e. in JNI) allocated direct byte buffers.
VideoFrameRGBABuffer provides a reference-counted wrapper of
an RGBA natively (i.e. in JNI) allocated direct byte buffer.
VideoFrameTextureBuffer provides a reference-counted wrapper of
an OpenGL ES texture and related metadata.
VideoPauseState describes the pause status of a video tile.
Enum defining video priority for remote video sources. The higher the number, the higher the priority for the source when adjusting video quality
to adapt to variable network conditions; i.e. Highest will be chosen before High, Medium, etc.
VideoRenderView is the type of VideoSink used by the VideoTileController.
Customizable video resolution parameters for a remote video source.
VideoRotation describes the rotation of the video frame buffer in degrees clockwise
from intended viewing horizon.
VideoScalingType describes how video is scaled when rendered. Certain types
may affect how much of a video is cropped. visibleFraction refers to the minimum fraction
of a video frame required to be shown per scaling type (e.g. AspectFit indicates showing
the whole frame, with no cropping).
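As an illustration of visibleFraction: a fill-style scaling crops whichever dimension overflows the view, while a fit-style scaling letterboxes and leaves the whole frame visible. A sketch of the resulting visible fraction (simplified geometry, not the SDK's implementation):

```java
// Illustrative visible-fraction math for rendering a frameW x frameH video
// into a viewW x viewH view. AspectFit always yields 1.0 (no cropping);
// AspectFill crops the frame to fill the view.
class ScalingMath {
    static double visibleFractionAspectFill(int frameW, int frameH, int viewW, int viewH) {
        double frameAspect = (double) frameW / frameH;
        double viewAspect = (double) viewW / viewH;
        // Whichever aspect ratio is wider determines how much gets cropped.
        return frameAspect > viewAspect
                ? viewAspect / frameAspect
                : frameAspect / viewAspect;
    }
}
```

For example, a 16:9 frame filling a square view leaves only 9/16 of the frame visible.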
A VideoSink consumes video frames, typically from a VideoSource. It may process, fork, or render these frames.
Sinks are typically connected via VideoSource.addVideoSink and disconnected via VideoSource.removeVideoSink.
VideoSource is an interface for sources which produce video frames and can send them to a VideoSink.
Implementations can be passed to the AudioVideoFacade to be used as the video source sent to remote
participants.
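The source/sink contract can be sketched with stand-in types (the SDK's real interfaces exchange VideoFrame objects with buffers and timestamps, not strings):

```java
import java.util.ArrayList;
import java.util.List;

// Stand-in for VideoSink: consumes frames produced by a source.
interface Sink {
    void onVideoFrameReceived(String frame);
}

// Stand-in for VideoSource: fans each produced frame out to all
// currently attached sinks.
class Source {
    private final List<Sink> sinks = new ArrayList<>();

    void addVideoSink(Sink sink) { sinks.add(sink); }

    void removeVideoSink(Sink sink) { sinks.remove(sink); }

    void emit(String frame) {
        for (Sink sink : sinks) sink.onVideoFrameReceived(frame);
    }
}
```

A sink attached between a source and a renderer can process or fork frames before passing them downstream, which is the composition pattern the capture sources above rely on.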
Configuration for a specific video source.
The values are intentionally mutable so that a map of all current configurations can be kept and updated as needed.
VideoTile is a tile that binds a video render view to display frames in the view.
VideoTileController handles the creation and rendering of VideoTile objects.
VideoTileControllerFacade manages video tile binding, pausing, and resuming as well as subscribing
to video tile events by adding a VideoTileObserver.
VideoTileObserver handles events related to VideoTile.
Contains properties related to the current state of the VideoTile.
VolumeLevel describes the volume level of an attendee for audio.