
amazon-chime-sdk-js


Type aliases

AGCOptions

AGCOptions: EnabledAGCOptions | DisabledAGCOptions

AudioInputDevice

AudioInputDevice: Device | AudioTransformDevice | null

ContentShareSimulcastEncodingParameters

ContentShareSimulcastEncodingParameters: { high?: VideoEncodingParameters; low?: VideoEncodingParameters }

Type declaration

  • Optional high?: VideoEncodingParameters
  • Optional low?: VideoEncodingParameters
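
For illustration, a sketch of constructing these parameters. The numeric values are illustrative only, not recommendations, and the commented-out call assumes the content share simulcast API available in recent SDK versions:

```typescript
import { ContentShareSimulcastEncodingParameters } from 'amazon-chime-sdk-js';

// Two simulcast layers for content share: a full-quality stream and a
// downscaled, lower-bitrate stream (values are illustrative).
const contentEncodings: ContentShareSimulcastEncodingParameters = {
  high: { maxBitrateKbps: 1200, maxFramerate: 15 },
  low: { maxBitrateKbps: 300, maxFramerate: 5, scaleResolutionDownBy: 2 },
};

// e.g. meetingSession.audioVideo.enableSimulcastForContentShare(true, contentEncodings);
```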

Device

Device: string | MediaTrackConstraints | MediaStream

A specifier for how to obtain a media stream from the browser. This can be a MediaStream itself, a set of constraints, or a device ID.
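
A brief sketch of the three accepted forms (the device ID string is hypothetical):

```typescript
import { AudioInputDevice, Device } from 'amazon-chime-sdk-js';

// 1. A device ID, as reported by MediaDeviceInfo.deviceId:
const byId: Device = 'a1b2c3d4';
// 2. A set of MediaTrackConstraints:
const byConstraints: Device = { deviceId: { exact: 'a1b2c3d4' }, echoCancellation: true };
// 3. A MediaStream itself:
const byStream: Device = new MediaStream();

// AudioInputDevice additionally allows an AudioTransformDevice, or null to select no device:
const noAudio: AudioInputDevice = null;
```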

EventName

EventName: "meetingStartRequested" | "meetingStartSucceeded" | "meetingReconnected" | "meetingStartFailed" | "meetingEnded" | "meetingFailed" | "attendeePresenceReceived" | "audioInputSelected" | "audioInputUnselected" | "audioInputFailed" | "videoInputSelected" | "videoInputUnselected" | "videoInputFailed" | "signalingDropped" | "receivingAudioDropped" | "sendingAudioFailed" | "sendingAudioRecovered" | "backgroundFilterConfigSelected" | "deviceLabelTriggerFailed"

MeetingHistoryState

MeetingHistoryState: EventName

MeetingHistoryState describes user actions and events, including all event names in EventName.
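
As a sketch of how these event names surface at runtime, the snippet below assumes an already-constructed MeetingSession and uses the event controller's observer interface:

```typescript
import { EventAttributes, EventName, MeetingSession } from 'amazon-chime-sdk-js';

declare const meetingSession: MeetingSession; // assumed to exist

meetingSession.eventController.addObserver({
  eventDidReceive(name: EventName, attributes: EventAttributes): void {
    if (name === 'meetingStartFailed' || name === 'meetingFailed') {
      console.error(`Received failure event ${name}`, attributes);
    }
  },
});
```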

RealtimeSubscribeToAttendeeIdPresenceCallback

RealtimeSubscribeToAttendeeIdPresenceCallback: (attendeeId: string, present: boolean, externalUserId?: string, dropped?: boolean, posInFrame?: RealtimeAttendeePositionInFrame | null) => void

Type declaration

    • Realtime attendee presence callback that listens to changes in attendee presence.

      Parameters

      • attendeeId: string

        Internal Amazon Chime AttendeeId created by the CreateAttendee API.

      • present: boolean

        Indicates the attendee's presence in a meeting.

      • Optional externalUserId: string

        Indicates the attendee's externalUserId provided while joining a meeting.

      • Optional dropped: boolean

        Indicates whether the attendee dropped from the meeting.

        The Amazon Chime SDK for JavaScript reconnects a meeting session in the following scenarios:

        • No audio packets (WebRTC)
        • Bad audio delay (WebRTC)
        • No pong reply (WebSocket)

        The Amazon Chime backend provides this value when an attendee is dropped and cannot rejoin the same meeting due to reconnection issues. It also distinguishes a normal attendee leave from a drop caused by reconnection issues.

        In reconnection scenarios, if an attendee drops and never rejoins successfully, the SDK invokes this callback with dropped set to the boolean value received from the Amazon Chime backend and with the present parameter set to false.

      • Optional posInFrame: RealtimeAttendeePositionInFrame | null

        This object indicates which attendee, out of how many total attendees, the update is for. For example, if you join a call with 3 attendees in total, you receive presence callbacks for attendeeIndex 0, attendeeIndex 1, and attendeeIndex 2, each with attendeesInFrame set to 3. After you join the meeting, you receive a callback for each attendee currently present; afterward, you receive additional callbacks as attendees join or leave.

      Returns void
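
A minimal sketch of registering this callback, assuming an already-constructed MeetingSession:

```typescript
import { MeetingSession, RealtimeSubscribeToAttendeeIdPresenceCallback } from 'amazon-chime-sdk-js';

declare const meetingSession: MeetingSession; // assumed to exist

const presenceCallback: RealtimeSubscribeToAttendeeIdPresenceCallback = (
  attendeeId, present, externalUserId, dropped
) => {
  if (!present && dropped) {
    console.log(`${attendeeId} dropped due to reconnection issues`);
  } else {
    console.log(`${attendeeId} (${externalUserId ?? 'no external ID'}) present: ${present}`);
  }
};

meetingSession.audioVideo.realtimeSubscribeToAttendeeIdPresence(presenceCallback);
```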

TranscriptEvent

TranscriptEvent: Transcript | TranscriptionStatus
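
A hedged sketch of consuming transcript events; it assumes an already-constructed MeetingSession and that live transcription has been started for the meeting:

```typescript
import { MeetingSession, Transcript, TranscriptEvent, TranscriptionStatus } from 'amazon-chime-sdk-js';

declare const meetingSession: MeetingSession; // assumed to exist

meetingSession.audioVideo.transcriptionController?.subscribeToTranscriptEvent(
  (event: TranscriptEvent) => {
    if (event instanceof Transcript) {
      // Transcribed segments arrive in event.results.
    } else if (event instanceof TranscriptionStatus) {
      // Status changes (started, stopped, etc.) arrive here.
    }
  }
);
```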

VideoEncodingParameters

VideoEncodingParameters: { maxBitrateKbps?: number; maxFramerate?: number; scaleResolutionDownBy?: number }

Type declaration

  • Optional maxBitrateKbps?: number
  • Optional maxFramerate?: number
  • Optional scaleResolutionDownBy?: number

VideoFxBlurStrength

VideoFxBlurStrength: "low" | "medium" | "high"

A qualitative measure of the background blur strength. Note: the underlying blur implementation, and therefore the perceived strength, may change between different versions.
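
A sketch of selecting a blur strength through the video effects configuration; the config shape follows VideoFxConfig, and the chosen values are illustrative:

```typescript
import { ConsoleLogger, LogLevel, VideoFxConfig, VideoFxProcessor } from 'amazon-chime-sdk-js';

const fxConfig: VideoFxConfig = {
  backgroundBlur: { isEnabled: true, strength: 'medium' },
  backgroundReplacement: { isEnabled: false, backgroundImageURL: null, defaultColor: null },
};

async function createFxProcessor(): Promise<VideoFxProcessor> {
  return VideoFxProcessor.create(new ConsoleLogger('VideoFx', LogLevel.INFO), fxConfig);
}
```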

VideoInputDevice

VideoInputDevice: Device | VideoTransformDevice

VoiceFocusConfig

VoiceFocusConfig: SupportedVoiceFocusConfig | Unsupported

VoiceFocusModelComplexity

VoiceFocusModelComplexity: "c100" | "c50" | "c20" | "c10"

VoiceFocusModelName

VoiceFocusModelName: "default" | "ns_es"
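
A minimal sketch of selecting a model by name when creating an Amazon Voice Focus device; the inner microphone device ID is hypothetical:

```typescript
import {
  VoiceFocusDeviceTransformer,
  VoiceFocusModelName,
  VoiceFocusTransformDevice,
} from 'amazon-chime-sdk-js';

async function createVoiceFocusDevice(
  innerDeviceId: string // hypothetical microphone device ID
): Promise<VoiceFocusTransformDevice | undefined> {
  const name: VoiceFocusModelName = 'default'; // 'ns_es' selects the echo reduction model
  const transformer = await VoiceFocusDeviceTransformer.create({ name });
  return transformer.createTransformDevice(innerDeviceId);
}
```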

VolumeIndicatorCallback

VolumeIndicatorCallback: (attendeeId: string, volume: number | null, muted: boolean | null, signalStrength: number | null, externalUserId?: string) => void

Type declaration

    • (attendeeId: string, volume: number | null, muted: boolean | null, signalStrength: number | null, externalUserId?: string): void
    • A RealtimeVolumeIndicator callback that listens to changes in an attendee's volume.

      Parameters

      • attendeeId: string
      • volume: number | null
      • muted: boolean | null
      • signalStrength: number | null
      • Optional externalUserId: string

      Returns void
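
A minimal sketch of registering this callback, assuming an already-constructed MeetingSession and a known attendee ID:

```typescript
import { MeetingSession, VolumeIndicatorCallback } from 'amazon-chime-sdk-js';

declare const meetingSession: MeetingSession; // assumed to exist
declare const attendeeId: string;             // assumed known

const volumeCallback: VolumeIndicatorCallback = (id, volume, muted, signalStrength) => {
  // Each of volume/muted/signalStrength may be null when no value is available.
  console.log(`${id}: volume=${volume} muted=${muted} signal=${signalStrength}`);
};

meetingSession.audioVideo.realtimeSubscribeToVolumeIndicator(attendeeId, volumeCallback);
```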

Variables

Const BackgroundBlurStrength

BackgroundBlurStrength: { HIGH: number; LOW: number; MEDIUM: number } = ...

The numbers below indicate the amount of blur to apply. Larger numbers will produce more blur.

Type declaration

  • HIGH: number
  • LOW: number
  • MEDIUM: number
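
A sketch of passing one of these presets to the background blur processor; the first argument (the optional filter spec) is left undefined to use the defaults:

```typescript
import { BackgroundBlurStrength, BackgroundBlurVideoFrameProcessor } from 'amazon-chime-sdk-js';

async function createBlurProcessor() {
  return BackgroundBlurVideoFrameProcessor.create(undefined, {
    blurStrength: BackgroundBlurStrength.MEDIUM,
  });
}
```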

Const RedundantAudioEncoderWorkerCode

RedundantAudioEncoderWorkerCode: "class RedundantAudioEncoder {\n constructor() {\n // Each payload must be less than 1024 bytes to fit the 10 bit block length\n this.maxRedPacketSizeBytes = 1 << 10;\n // Limit payload to 1000 bytes to handle small MTU. 1000 is chosen because in Chromium-based browsers, writing audio\n // payloads larger than 1000 bytes using the WebRTC Insertable Streams API (which is used to enable dynamic audio\n // redundancy) will cause an error to be thrown and cause audio flow to permanently stop. See\n // https://crbug.com/1248479.\n this.maxAudioPayloadSizeBytes = 1000;\n // Each payload can encode a timestamp delta of 14 bits\n this.maxRedTimestampOffset = 1 << 14;\n // 4 byte RED header\n this.redHeaderSizeBytes = 4;\n // reduced size for last RED header\n this.redLastHeaderSizeBytes = 1;\n // P-Time for Opus 20 msec packets\n // We do not support other p-times or clock rates\n this.redPacketizationTime = 960;\n // distance between redundant payloads, Opus FEC handles a distance of 1\n // TODO(https://issues.amazon.com/issues/ChimeSDKAudio-55):\n // Consider making this dynamic\n this.redPacketDistance = 2;\n // maximum number of redundant payloads per RTP packet\n this.maxRedEncodings = 2;\n // Maximum number of encodings that can be recovered with a single RED packet, assuming the primary and redundant\n // payloads have FEC.\n this.redMaxRecoveryDistance = this.redPacketDistance * this.maxRedEncodings + 1;\n // maximum history of prior payloads to keep\n // generally we will expire old entries based on timestamp\n // this limit is in place just to make sure the history does not\n // grow too large in the case of erroneous timestamp inputs\n this.maxEncodingHistorySize = 10;\n // Current number of encodings we want to send\n // to the remote end. This will be dynamically\n // updated through the setNumEncodingsFromPacketloss API\n this.numRedundantEncodings = 0;\n // Used to enable or disable redundancy\n // in response to very high packet loss events\n this.redundancyEnabled = true;\n // Loss stats are reported to the main thread every 5 seconds.\n // Since timestamp differences between 2 consecutive packets\n // give us the number of samples in each channel, 1 second\n // is equivalent to 48000 samples:\n // P-time * (1000ms/1s)\n // = (960 samples/20ms) * (1000ms/1s)\n // = 48000 samples/s\n this.lossReportInterval = 48000 * 5;\n // Maximum distance of a packet from the most recent packet timestamp\n // that we will consider for recovery.\n this.maxOutOfOrderPacketDistance = 16;\n /**\n * Below are Opus helper methods and constants.\n */\n this.OPUS_BAD_ARG = -1;\n this.OPUS_INVALID_PACKET = -4;\n // Max number of Opus frames in an Opus packet is 48 (https://www.rfc-editor.org/rfc/rfc6716#section-3.2.5).\n this.OPUS_MAX_OPUS_FRAMES = 48;\n // Max number of bytes that any individual Opus frame can have.\n this.OPUS_MAX_FRAME_SIZE_BYTES = 1275;\n this.encodingHistory = new Array();\n this.opusPayloadType = 0;\n this.redPayloadType = 0;\n this.initializePacketLogs();\n }\n /**\n * Creates an instance of RedundantAudioEncoder and sets up callbacks.\n */\n static initializeWorker() {\n RedundantAudioEncoder.log('Initializing RedundantAudioEncoder');\n const encoder = new RedundantAudioEncoder();\n // RED encoding is done using WebRTC Encoded Transform\n // https://github.com/w3c/webrtc-encoded-transform/blob/main/explainer.md\n // Check the DedicatedWorkerGlobalScope for existence of\n // RTCRtpScriptTransformer interface. 
If exists, then\n // RTCRtpScriptTransform is supported by this browser.\n // @ts-ignore\n if (self.RTCRtpScriptTransformer) {\n // @ts-ignore\n self.onrtctransform = (event) => {\n if (event.transformer.options.type === 'SenderTransform') {\n encoder.setupSenderTransform(event.transformer.readable, event.transformer.writable);\n }\n else if (event.transformer.options.type === 'ReceiverTransform') {\n encoder.setupReceiverTransform(event.transformer.readable, event.transformer.writable);\n }\n else if (event.transformer.options.type === 'PassthroughTransform') {\n encoder.setupPassthroughTransform(event.transformer.readable, event.transformer.writable);\n }\n };\n }\n self.onmessage = (event) => {\n if (event.data.msgType === 'StartRedWorker') {\n encoder.setupSenderTransform(event.data.send.readable, event.data.send.writable);\n encoder.setupReceiverTransform(event.data.receive.readable, event.data.receive.writable);\n }\n else if (event.data.msgType === 'PassthroughTransform') {\n encoder.setupPassthroughTransform(event.data.send.readable, event.data.send.writable);\n encoder.setupPassthroughTransform(event.data.receive.readable, event.data.receive.writable);\n }\n else if (event.data.msgType === 'RedPayloadType') {\n encoder.setRedPayloadType(event.data.payloadType);\n }\n else if (event.data.msgType === 'OpusPayloadType') {\n encoder.setOpusPayloadType(event.data.payloadType);\n }\n else if (event.data.msgType === 'UpdateNumRedundantEncodings') {\n encoder.setNumRedundantEncodings(event.data.numRedundantEncodings);\n }\n else if (event.data.msgType === 'Enable') {\n encoder.setRedundancyEnabled(true);\n }\n else if (event.data.msgType === 'Disable') {\n encoder.setRedundancyEnabled(false);\n }\n };\n }\n /**\n * Post logs to the main thread\n */\n static log(msg) {\n if (RedundantAudioEncoder.shouldLog) {\n // @ts-ignore\n self.postMessage({\n type: 'REDWorkerLog',\n log: `[AudioRed] ${msg}`,\n });\n }\n }\n /**\n * Returns the number of encodings based on packetLoss value. This is used by `DefaultTransceiverController` to\n * determine when to alert the encoder to update the number of encodings. 
It also determines if we need to\n * turn off red in cases of very high packet loss to avoid congestion collapse.\n */\n static getNumRedundantEncodingsForPacketLoss(packetLoss) {\n let recommendedRedundantEncodings = 0;\n let shouldTurnOffRed = false;\n if (packetLoss <= 8) {\n recommendedRedundantEncodings = 0;\n }\n else if (packetLoss <= 18) {\n recommendedRedundantEncodings = 1;\n }\n else if (packetLoss <= 75) {\n recommendedRedundantEncodings = 2;\n }\n else {\n recommendedRedundantEncodings = 0;\n shouldTurnOffRed = true;\n }\n return [recommendedRedundantEncodings, shouldTurnOffRed];\n }\n /**\n * Sets up a passthrough (no-op) transform for the given streams.\n */\n setupPassthroughTransform(readable, writable) {\n RedundantAudioEncoder.log('Setting up passthrough transform');\n readable.pipeTo(writable);\n }\n /**\n * Sets up the transform stream and pipes the outgoing encoded audio frames through the transform function.\n */\n setupSenderTransform(readable, writable) {\n RedundantAudioEncoder.log('Setting up sender RED transform');\n const transformStream = new TransformStream({\n transform: this.senderTransform.bind(this),\n });\n readable.pipeThrough(transformStream).pipeTo(writable);\n return;\n }\n /**\n * Sets up the transform stream and pipes the received encoded audio frames through the transform function.\n */\n setupReceiverTransform(readable, writable) {\n RedundantAudioEncoder.log('Setting up receiver RED transform');\n const transformStream = new TransformStream({\n transform: this.receivePacketLogTransform.bind(this),\n });\n readable.pipeThrough(transformStream).pipeTo(writable);\n return;\n }\n /**\n * Set the RED payload type ideally obtained from local offer.\n */\n setRedPayloadType(payloadType) {\n this.redPayloadType = payloadType;\n RedundantAudioEncoder.log(`red payload type set to ${this.redPayloadType}`);\n }\n /**\n * Set the opus payload type ideally obtained from local offer.\n */\n setOpusPayloadType(payloadType) {\n this.opusPayloadType = payloadType;\n RedundantAudioEncoder.log(`opus payload type set to ${this.opusPayloadType}`);\n }\n /**\n * Set the number of redundant encodings\n */\n setNumRedundantEncodings(numRedundantEncodings) {\n this.numRedundantEncodings = numRedundantEncodings;\n if (this.numRedundantEncodings > this.maxRedEncodings) {\n this.numRedundantEncodings = this.maxRedEncodings;\n }\n RedundantAudioEncoder.log(`Updated numRedundantEncodings to ${this.numRedundantEncodings}`);\n }\n /**\n * Enable or disable redundancy in response to\n * high packet loss event.\n */\n setRedundancyEnabled(enabled) {\n this.redundancyEnabled = enabled;\n RedundantAudioEncoder.log(`redundancy ${this.redundancyEnabled ? 'enabled' : 'disabled'}`);\n }\n /**\n * Helper function to only enqueue audio frames if they do not exceed the audio payload byte limit imposed by\n * Chromium-based browsers. Chromium will throw an error (https://crbug.com/1248479) if an audio payload larger than\n * 1000 bytes is enqueued. 
Any controller that attempts to enqueue an audio payload larger than 1000 bytes will\n * encounter this error and will permanently stop sending or receiving audio.\n */\n enqueueAudioFrameIfPayloadSizeIsValid(\n // @ts-ignore\n frame, controller) {\n if (frame.data.byteLength > this.maxAudioPayloadSizeBytes)\n return;\n controller.enqueue(frame);\n }\n /**\n * Receives encoded frames and modifies as needed before sending to transport.\n */\n senderTransform(\n // @ts-ignore\n frame, controller) {\n const frameMetadata = frame.getMetadata();\n // @ts-ignore\n if (frameMetadata.payloadType !== this.redPayloadType) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n const primaryPayloadBuffer = this.getPrimaryPayload(frame.timestamp, frame.data);\n if (!primaryPayloadBuffer) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n const encodedBuffer = this.encode(frame.timestamp, primaryPayloadBuffer);\n /* istanbul ignore next */\n if (!encodedBuffer) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n frame.data = encodedBuffer;\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n /**\n * Get the primary payload from encoding\n */\n getPrimaryPayload(primaryTimestamp, frame) {\n const encodings = this.splitEncodings(primaryTimestamp, frame);\n if (!encodings || encodings.length < 1)\n return null;\n return encodings[encodings.length - 1].payload;\n }\n /**\n * Split up the encoding received into primary and redundant encodings\n * These will be ordered oldest to newest which is the same ordering\n * in the RTP red payload.\n */\n splitEncodings(primaryTimestamp, frame, getFecInfo = false, primarySequenceNumber = undefined) {\n // process RED headers (according to RFC 2198)\n // 0 1 2 3\n // 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1\n // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n // |F| block PT | timestamp offset | block length |\n // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n //\n // last header\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // |0| Block PT |\n // +-+-+-+-+-+-+-+-+\n const payload = new DataView(frame);\n let payloadSizeBytes = payload.byteLength;\n let totalPayloadSizeBytes = 0;\n let totalHeaderSizeBytes = 0;\n let primaryPayloadSizeBytes = 0;\n let payloadOffset = 0;\n let gotLastBlock = false;\n const encodings = new Array();\n const redundantEncodingBlockLengths = new Array();\n const redundantEncodingTimestamps = new Array();\n while (payloadSizeBytes > 0) {\n gotLastBlock = (payload.getUint8(payloadOffset) & 0x80) === 0;\n if (gotLastBlock) {\n // Bits 1 through 7 are payload type\n const payloadType = payload.getUint8(payloadOffset) & 0x7f;\n // Unexpected payload type. This is a bad packet.\n if (payloadType !== this.opusPayloadType) {\n return null;\n }\n totalPayloadSizeBytes += this.redLastHeaderSizeBytes;\n totalHeaderSizeBytes += this.redLastHeaderSizeBytes;\n // Accumulated block lengths are equal to or larger than the buffer, which means there is no primary block. 
This\n // is a bad packet.\n if (totalPayloadSizeBytes >= payload.byteLength) {\n return null;\n }\n primaryPayloadSizeBytes = payload.byteLength - totalPayloadSizeBytes;\n break;\n }\n else {\n if (payloadSizeBytes < this.redHeaderSizeBytes) {\n return null;\n }\n // Bits 22 through 31 are payload length\n const blockLength = ((payload.getUint8(payloadOffset + 2) & 0x03) << 8) + payload.getUint8(payloadOffset + 3);\n redundantEncodingBlockLengths.push(blockLength);\n const timestampOffset = payload.getUint16(payloadOffset + 1) >> 2;\n const timestamp = primaryTimestamp - timestampOffset;\n redundantEncodingTimestamps.push(timestamp);\n totalPayloadSizeBytes += blockLength + this.redHeaderSizeBytes;\n totalHeaderSizeBytes += this.redHeaderSizeBytes;\n payloadOffset += this.redHeaderSizeBytes;\n payloadSizeBytes -= this.redHeaderSizeBytes;\n }\n }\n // The last block was never found. The packet we received\n // does not have a good RED payload.\n if (!gotLastBlock) {\n // Note that sequence numbers only exist for\n // incoming audio frames.\n if (primarySequenceNumber !== undefined) {\n // This could be a possible padding packet used\n // for BWE with a good sequence number.\n // Create a dummy encoding to make sure loss values\n // are calculated correctly by consuming sequence number.\n // Note that for the receive side, we process packets only\n // for loss/recovery calculations and forward the original\n // packet without changing it even in the error case.\n encodings.push({\n payload: frame,\n isRedundant: false,\n seq: primarySequenceNumber,\n });\n return encodings;\n }\n // This is a bad packet.\n return null;\n }\n let redundantPayloadOffset = totalHeaderSizeBytes;\n for (let i = 0; i < redundantEncodingTimestamps.length; i++) {\n const redundantPayloadBuffer = new ArrayBuffer(redundantEncodingBlockLengths[i]);\n const redundantPayloadArray = new Uint8Array(redundantPayloadBuffer);\n redundantPayloadArray.set(new Uint8Array(payload.buffer, redundantPayloadOffset, redundantEncodingBlockLengths[i]), 0);\n const encoding = {\n timestamp: redundantEncodingTimestamps[i],\n payload: redundantPayloadBuffer,\n isRedundant: true,\n };\n if (getFecInfo) {\n encoding.hasFec = this.opusPacketHasFec(new DataView(redundantPayloadBuffer), redundantPayloadBuffer.byteLength);\n }\n encodings.push(encoding);\n redundantPayloadOffset += redundantEncodingBlockLengths[i];\n }\n const primaryPayloadOffset = payload.byteLength - primaryPayloadSizeBytes;\n const primaryPayloadBuffer = new ArrayBuffer(primaryPayloadSizeBytes);\n const primaryArray = new Uint8Array(primaryPayloadBuffer);\n primaryArray.set(new Uint8Array(payload.buffer, primaryPayloadOffset, primaryPayloadSizeBytes), 0);\n const encoding = {\n timestamp: primaryTimestamp,\n payload: primaryPayloadBuffer,\n isRedundant: false,\n seq: primarySequenceNumber,\n };\n if (getFecInfo) {\n encoding.hasFec = this.opusPacketHasFec(new DataView(primaryPayloadBuffer), primaryPayloadBuffer.byteLength);\n }\n encodings.push(encoding);\n return encodings;\n }\n /**\n * Create a new encoding with current primary payload and the older payloads of choice.\n */\n encode(primaryTimestamp, primaryPayload) {\n const primaryPayloadSize = primaryPayload.byteLength;\n // Payload size needs to be valid.\n if (primaryPayloadSize === 0 ||\n primaryPayloadSize >= this.maxRedPacketSizeBytes ||\n primaryPayloadSize >= this.maxAudioPayloadSizeBytes) {\n return null;\n }\n const numRedundantEncodings = this.numRedundantEncodings;\n let headerSizeBytes = 
this.redLastHeaderSizeBytes;\n let payloadSizeBytes = primaryPayloadSize;\n let bytesAvailable = this.maxAudioPayloadSizeBytes - primaryPayloadSize - headerSizeBytes;\n const redundantEncodingTimestamps = new Array();\n const redundantEncodingPayloads = new Array();\n // If redundancy is disabled then only send the primary payload\n if (this.redundancyEnabled) {\n // Determine how much redundancy we can fit into our packet\n let redundantTimestamp = this.uint32WrapAround(primaryTimestamp - this.redPacketizationTime * this.redPacketDistance);\n for (let i = 0; i < numRedundantEncodings; i++) {\n // Do not add redundant encodings that are beyond the maximum timestamp offset.\n if (this.uint32WrapAround(primaryTimestamp - redundantTimestamp) >= this.maxRedTimestampOffset) {\n break;\n }\n let findTimestamp = redundantTimestamp;\n let encoding = this.encodingHistory.find(e => e.timestamp === findTimestamp);\n if (!encoding) {\n // If not found or not important then look for the previous packet.\n // The current packet may have included FEC for the previous, so just\n // use the previous packet instead provided that it has voice activity.\n findTimestamp = this.uint32WrapAround(redundantTimestamp - this.redPacketizationTime);\n encoding = this.encodingHistory.find(e => e.timestamp === findTimestamp);\n }\n if (encoding) {\n const redundantEncodingSizeBytes = encoding.payload.byteLength;\n // Only add redundancy if there are enough bytes available.\n if (bytesAvailable < this.redHeaderSizeBytes + redundantEncodingSizeBytes)\n break;\n bytesAvailable -= this.redHeaderSizeBytes + redundantEncodingSizeBytes;\n headerSizeBytes += this.redHeaderSizeBytes;\n payloadSizeBytes += redundantEncodingSizeBytes;\n redundantEncodingTimestamps.unshift(encoding.timestamp);\n redundantEncodingPayloads.unshift(encoding.payload);\n }\n redundantTimestamp -= this.redPacketizationTime * this.redPacketDistance;\n redundantTimestamp = this.uint32WrapAround(redundantTimestamp);\n }\n }\n const redPayloadBuffer = new ArrayBuffer(headerSizeBytes + payloadSizeBytes);\n const redPayloadView = new DataView(redPayloadBuffer);\n // Add redundant encoding header(s) to new buffer\n let redPayloadOffset = 0;\n for (let i = 0; i < redundantEncodingTimestamps.length; i++) {\n const timestampDelta = primaryTimestamp - redundantEncodingTimestamps[i];\n redPayloadView.setUint8(redPayloadOffset, this.opusPayloadType | 0x80);\n redPayloadView.setUint16(redPayloadOffset + 1, (timestampDelta << 2) | (redundantEncodingPayloads[i].byteLength >> 8));\n redPayloadView.setUint8(redPayloadOffset + 3, redundantEncodingPayloads[i].byteLength & 0xff);\n redPayloadOffset += this.redHeaderSizeBytes;\n }\n // Add primary encoding header to new buffer\n redPayloadView.setUint8(redPayloadOffset, this.opusPayloadType);\n redPayloadOffset += this.redLastHeaderSizeBytes;\n // Add redundant payload(s) to new buffer\n const redPayloadArray = new Uint8Array(redPayloadBuffer);\n for (let i = 0; i < redundantEncodingPayloads.length; i++) {\n redPayloadArray.set(new Uint8Array(redundantEncodingPayloads[i]), redPayloadOffset);\n redPayloadOffset += redundantEncodingPayloads[i].byteLength;\n }\n // Add primary payload to new buffer\n redPayloadArray.set(new Uint8Array(primaryPayload), redPayloadOffset);\n redPayloadOffset += primaryPayload.byteLength;\n /* istanbul ignore next */\n // Sanity check that we got the expected total payload size.\n if (redPayloadOffset !== headerSizeBytes + payloadSizeBytes)\n return null;\n 
this.updateEncodingHistory(primaryTimestamp, primaryPayload);\n return redPayloadBuffer;\n }\n /**\n * Update the encoding history with the latest primary encoding\n */\n updateEncodingHistory(primaryTimestamp, primaryPayload) {\n // Remove encodings from the history if they are too old.\n for (const encoding of this.encodingHistory) {\n const maxTimestampDelta = this.redPacketizationTime * this.redMaxRecoveryDistance;\n if (primaryTimestamp - encoding.timestamp >= maxTimestampDelta) {\n this.encodingHistory.shift();\n }\n else {\n break;\n }\n }\n // Only add an encoding to the history if the encoding is deemed to be important. An encoding is important if it is\n // a CELT-only packet or contains voice activity.\n const packet = new DataView(primaryPayload);\n if (this.opusPacketIsCeltOnly(packet) ||\n this.opusPacketHasVoiceActivity(packet, packet.byteLength) > 0) {\n // Check if adding an encoding will cause the length of the encoding history to exceed the maximum history size.\n // This is not expected to happen but could occur if we get incorrect timestamps. We want to make sure our memory\n // usage is bounded. In this case, just clear the history and start over from empty.\n if (this.encodingHistory.length + 1 > this.maxEncodingHistorySize)\n this.encodingHistory.length = 0;\n this.encodingHistory.push({ timestamp: primaryTimestamp, payload: primaryPayload });\n }\n }\n /**\n * Initialize packet logs and metric values.\n */\n initializePacketLogs() {\n // The extra space from the max RED recovery distance is to ensure that we do not incorrectly count recovery for\n // packets that have already been received but are outside of the max out-of-order distance.\n const packetLogSize = this.maxOutOfOrderPacketDistance + this.redMaxRecoveryDistance;\n this.primaryPacketLog = {\n window: new Array(packetLogSize),\n index: 0,\n windowSize: packetLogSize,\n };\n this.redRecoveryLog = {\n window: new Array(packetLogSize),\n index: 0,\n windowSize: packetLogSize,\n };\n this.fecRecoveryLog = {\n window: new Array(packetLogSize),\n index: 0,\n windowSize: packetLogSize,\n };\n this.totalAudioPacketsExpected = 0;\n this.totalAudioPacketsLost = 0;\n this.totalAudioPacketsRecoveredRed = 0;\n this.totalAudioPacketsRecoveredFec = 0;\n }\n /**\n * Receives encoded frames from the server\n * and adds the timestamps to a packet log\n * to calculate an approximate recovery metric.\n */\n receivePacketLogTransform(\n // @ts-ignore\n frame, controller) {\n const frameMetadata = frame.getMetadata();\n // @ts-ignore\n if (frameMetadata.payloadType !== this.redPayloadType) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n // @ts-ignore\n const encodings = this.splitEncodings(frame.timestamp, frame.data, \n /*getFecInfo*/ true, frameMetadata.sequenceNumber);\n if (!encodings) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n for (let i = encodings.length - 1; i >= 0; i--) {\n if (this.updateLossStats(encodings[i])) {\n this.updateRedStats(encodings[i]);\n this.updateFecStats(encodings[i]);\n }\n }\n this.maybeReportLossStats(frameMetadata.synchronizationSource, encodings[encodings.length - 1].timestamp);\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n }\n /**\n * Adds a timestamp to the primary packet log.\n * This also updates totalAudioPacketsLost and totalAudioPacketsExpected by looking\n * at the difference between timestamps.\n *\n * @param encoding : The encoding to be analyzed\n * @returns false if sequence number was 
greater than max out of order distance\n * true otherwise\n */\n updateLossStats(encoding) {\n if (encoding.isRedundant)\n return true;\n const timestamp = encoding.timestamp;\n const seq = encoding.seq;\n if (this.totalAudioPacketsExpected === 0) {\n this.totalAudioPacketsExpected = 1;\n this.newestSequenceNumber = seq;\n this.addTimestamp(this.primaryPacketLog, timestamp);\n return true;\n }\n const diff = this.int16(seq - this.newestSequenceNumber);\n if (diff < -this.maxOutOfOrderPacketDistance)\n return false;\n if (diff < 0) {\n if (!this.hasTimestamp(this.primaryPacketLog, timestamp)) {\n if (this.totalAudioPacketsLost > 0)\n this.totalAudioPacketsLost--;\n this.addTimestamp(this.primaryPacketLog, timestamp);\n this.removeFromRecoveryWindows(timestamp);\n }\n }\n else if (diff > 1) {\n this.totalAudioPacketsLost += diff - 1;\n }\n if (diff > 0) {\n this.totalAudioPacketsExpected += diff;\n this.newestSequenceNumber = encoding.seq;\n this.addTimestamp(this.primaryPacketLog, timestamp);\n }\n return true;\n }\n /**\n * Adds a timestamp to the red recovery log if it is not present in\n * the primary packet log and if it's not too old.\n *\n * @param encoding : The encoding to be analyzed\n */\n updateRedStats(encoding) {\n if (!encoding.isRedundant || this.totalAudioPacketsLost === 0)\n return;\n const timestamp = encoding.timestamp;\n if (!this.hasTimestamp(this.primaryPacketLog, timestamp)) {\n if (!this.hasTimestamp(this.redRecoveryLog, timestamp)) {\n this.totalAudioPacketsRecoveredRed++;\n this.addTimestamp(this.redRecoveryLog, timestamp);\n }\n if (this.removeTimestamp(this.fecRecoveryLog, timestamp)) {\n /* istanbul ignore else */\n if (this.totalAudioPacketsRecoveredFec > 0)\n this.totalAudioPacketsRecoveredFec--;\n }\n }\n }\n /**\n * Adds a timestamp to the fec recovery log if it is not present in\n * the primary packet log and red recovery log and if it is not too old.\n *\n * @param encoding : The encoding to be analyzed\n */\n updateFecStats(encoding) {\n if (!encoding.hasFec || this.totalAudioPacketsLost === 0)\n return;\n const fecTimestamp = encoding.timestamp - this.redPacketizationTime;\n if (this.hasTimestamp(this.primaryPacketLog, fecTimestamp) ||\n this.hasTimestamp(this.redRecoveryLog, fecTimestamp) ||\n this.hasTimestamp(this.fecRecoveryLog, fecTimestamp)) {\n return;\n }\n this.totalAudioPacketsRecoveredFec++;\n this.addTimestamp(this.fecRecoveryLog, fecTimestamp);\n }\n /**\n * Reports loss metrics to DefaultTransceiverController\n *\n * @param timestamp : Timestamp of most recent primary packet\n */\n maybeReportLossStats(ssrc, timestamp) {\n if (timestamp === undefined ||\n timestamp - this.lastLossReportTimestamp < this.lossReportInterval)\n return;\n /* istanbul ignore next */\n if (RedundantAudioEncoder.shouldReportStats) {\n // @ts-ignore\n self.postMessage({\n type: 'RedundantAudioEncoderStats',\n ssrc,\n totalAudioPacketsLost: this.totalAudioPacketsLost,\n totalAudioPacketsExpected: this.totalAudioPacketsExpected,\n totalAudioPacketsRecoveredRed: this.totalAudioPacketsRecoveredRed,\n totalAudioPacketsRecoveredFec: this.totalAudioPacketsRecoveredFec,\n });\n }\n this.lastLossReportTimestamp = timestamp;\n }\n /**\n * Adds a timestamp to a packet log\n *\n * @param packetLog : The packetlog to add the timestamp to\n * @param timestamp : The timestamp that should be added\n */\n addTimestamp(packetLog, timestamp) {\n if (timestamp === undefined) {\n return;\n }\n packetLog.window[packetLog.index] = timestamp;\n packetLog.index = (packetLog.index + 1) % 
packetLog.windowSize;\n }\n /**\n * Checks if a timestamp is in a packetlog\n *\n * @param packetLog : The packetlog to search\n * @param timestamp : The timestamp to search for\n * @returns true if timestamp is present, false otherwise\n */\n hasTimestamp(packetLog, timestamp) {\n const element = packetLog.window.find(t => t === timestamp);\n return !!element;\n }\n /**\n * Removes a timestamp from a packet log\n *\n * @param packetLog : The packetlog from which the timestamp should be removed\n * @param timestamp : The timestamp to be removed\n * @returns true if timestamp was present in the log and removed, false otherwise\n */\n removeTimestamp(packetLog, timestamp) {\n const index = packetLog.window.indexOf(timestamp);\n if (index >= 0) {\n packetLog.window[index] = undefined;\n return true;\n }\n return false;\n }\n /**\n * Removes a timestamp from red and fec recovery windows.\n *\n * @param timestamp : The timestamp to be removed\n */\n removeFromRecoveryWindows(timestamp) {\n let removed = this.removeTimestamp(this.redRecoveryLog, timestamp);\n if (removed) {\n if (this.totalAudioPacketsRecoveredRed > 0)\n this.totalAudioPacketsRecoveredRed--;\n }\n removed = this.removeTimestamp(this.fecRecoveryLog, timestamp);\n if (removed) {\n if (this.totalAudioPacketsRecoveredFec > 0)\n this.totalAudioPacketsRecoveredFec--;\n }\n }\n /**\n * Converts the supplied argument to 32-bit unsigned integer\n */\n uint32WrapAround(num) {\n const mod = 4294967296; // 2^32\n let res = num;\n if (num >= mod) {\n res = num - mod;\n }\n else if (num < 0) {\n res = mod + num;\n }\n return res;\n }\n /**\n * Converts the supplied argument to 16-bit signed integer\n */\n int16(num) {\n return (num << 16) >> 16;\n }\n /**\n * Determines if an Opus packet is in CELT-only mode.\n *\n * @param packet Opus packet.\n * @returns `true` if the packet is in CELT-only mode.\n */\n opusPacketIsCeltOnly(packet) {\n // TOC byte format (https://www.rfc-editor.org/rfc/rfc6716#section-3.1):\n // 0\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // | config |s| c |\n // +-+-+-+-+-+-+-+-+\n // Since CELT-only packets are represented using configurations 16 to 31, the highest 'config' bit will always be 1\n // for CELT-only packets.\n return !!(packet.getUint8(0) & 0x80);\n }\n /**\n * Gets the number of samples per frame from an Opus packet.\n *\n * @param packet Opus packet. This must contain at least one byte of data.\n * @param sampleRateHz 32-bit integer sampling rate in Hz. This must be a multiple of 400 or inaccurate results will\n * be returned.\n * @returns Number of samples per frame.\n */\n opusPacketGetSamplesPerFrame(packet, sampleRateHz) {\n // Sample rate must be a 32-bit integer.\n sampleRateHz = Math.round(sampleRateHz);\n sampleRateHz = Math.min(Math.max(sampleRateHz, -(Math.pow(2, 32))), Math.pow(2, 32) - 1);\n // TOC byte format (https://www.rfc-editor.org/rfc/rfc6716#section-3.1):\n // 0\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // | config |s| c |\n // +-+-+-+-+-+-+-+-+\n let numSamples;\n let frameSizeOption;\n // Case for CELT-only packet.\n if (this.opusPacketIsCeltOnly(packet)) {\n // The lower 3 'config' bits indicate the frame size option.\n frameSizeOption = (packet.getUint8(0) >> 3) & 0x3;\n // The frame size options 0, 1, 2, 3 correspond to frame sizes of 2.5, 5, 10, 20 ms. Notice that the frame sizes\n // can be represented as (2.5 * 2^0), (2.5 * 2^1), (2.5 * 2^2), (2.5 * 2^3) ms. 
So, the number of samples can be\n // calculated as follows:\n // (sample/s) * (1s/1000ms) * (2.5ms) * 2^(frameSizeOption)\n // = (sample/s) * (1s/400) * 2^(frameSizeOption)\n // = (sample/s) * 2^(frameSizeOption) * (1s/400)\n numSamples = (sampleRateHz << frameSizeOption) / 400;\n }\n // Case for Hybrid packet. Since Hybrid packets are represented using configurations 12 to 15, bits 1 and 2 in the\n // above TOC byte diagram will both be 1.\n else if ((packet.getUint8(0) & 0x60) === 0x60) {\n // In the case of configuration 13 or 15, bit 4 in the above TOC byte diagram will be 1. Configurations 13 and 15\n // correspond to a 20ms frame size, so the number of samples is calculated as follows:\n // (sample/s) * (1s/1000ms) * (20ms)\n // = (sample/s) * (1s/50)\n //\n // In the case of configuration 12 or 14, bit 4 in the above TOC byte diagram will be 0. Configurations 12 and 14\n // correspond to a 10ms frame size, so the number of samples is calculated as follows:\n // (sample/s) * (1s/1000ms) * (10ms)\n // = (sample/s) * (1s/100)\n numSamples = packet.getUint8(0) & 0x08 ? sampleRateHz / 50 : sampleRateHz / 100;\n }\n // Case for SILK-only packet.\n else {\n // The lower 3 'config' bits indicate the frame size option for SILK-only packets.\n frameSizeOption = (packet.getUint8(0) >> 3) & 0x3;\n if (frameSizeOption === 3) {\n // Frame size option 3 corresponds to a frame size of 60ms, so the number of samples is calculated as follows:\n // (sample/s) * (1s/1000ms) * (60ms)\n // = (sample/s) * (60ms) * (1s/1000ms)\n numSamples = (sampleRateHz * 60) / 1000;\n }\n else {\n // The frame size options 0, 1, 2 correspond to frame sizes of 10, 20, 40 ms. Notice that the frame sizes can be\n // represented as (10 * 2^0), (10 * 2^1), (10 * 2^2) ms. So, the number of samples can be calculated as follows:\n // (sample/s) * (1s/1000ms) * (10ms) * 2^(frameSizeOption)\n // = (sample/s) * (1s/100) * 2^(frameSizeOption)\n // = (sample/s) * 2^(frameSizeOption) * (1s/100)\n numSamples = (sampleRateHz << frameSizeOption) / 100;\n }\n }\n return numSamples;\n }\n /**\n * Gets the number of SILK frames per Opus frame.\n *\n * @param packet Opus packet.\n * @returns Number of SILK frames per Opus frame.\n */\n opusNumSilkFrames(packet) {\n // For computing the frame length in ms, the sample rate is not important since it cancels out. 
We use 48 kHz, but\n // any valid sample rate would work.\n //\n // To calculate the length of a frame (with a 48kHz sample rate) in ms:\n // (samples/frame) * (1s/48000 samples) * (1000ms/s)\n // = (samples/frame) * (1000ms/48000 samples)\n // = (samples/frame) * (1ms/48 samples)\n let frameLengthMs = this.opusPacketGetSamplesPerFrame(packet, 48000) / 48;\n if (frameLengthMs < 10)\n frameLengthMs = 10;\n // The number of SILK frames per Opus frame is described in https://www.rfc-editor.org/rfc/rfc6716#section-4.2.2.\n switch (frameLengthMs) {\n case 10:\n case 20:\n return 1;\n case 40:\n return 2;\n case 60:\n return 3;\n // It is not possible to reach the default case since an Opus packet can only encode sizes of 2.5, 5, 10, 20, 40,\n // or 60 ms, so we ignore the default case for test coverage.\n /* istanbul ignore next */\n default:\n return 0;\n }\n }\n /**\n * Gets the number of channels from an Opus packet.\n *\n * @param packet Opus packet.\n * @returns Number of channels.\n */\n opusPacketGetNumChannels(packet) {\n // TOC byte format (https://www.rfc-editor.org/rfc/rfc6716#section-3.1):\n // 0\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // | config |s| c |\n // +-+-+-+-+-+-+-+-+\n // The 's' bit indicates mono or stereo audio, with 0 indicating mono and 1 indicating stereo.\n return packet.getUint8(0) & 0x4 ? 2 : 1;\n }\n /**\n * Determine the size (in bytes) of an Opus frame.\n *\n * @param packet Opus packet.\n * @param byteOffset Offset (from the start of the packet) to the byte containing the size information.\n * @param remainingBytes Remaining number of bytes to parse from the Opus packet.\n * @param sizeBytes Variable to store the parsed frame size (in bytes).\n * @returns Number of bytes that were parsed to determine the frame size.\n */\n opusParseSize(packet, byteOffset, remainingBytes, sizeBytes) {\n // See https://www.rfc-editor.org/rfc/rfc6716#section-3.2.1 for an explanation of how frame size is represented.\n // If there are no remaining bytes to parse the size from, then the size cannot be determined.\n if (remainingBytes < 1) {\n sizeBytes[0] = -1;\n return -1;\n }\n // If the first byte is in the range 0...251, then this value is the size of the frame.\n else if (packet.getUint8(byteOffset) < 252) {\n sizeBytes[0] = packet.getUint8(byteOffset);\n return 1;\n }\n // If the first byte is in the range 252...255, a second byte is needed. If there is no second byte, then the size\n // cannot be determined.\n else if (remainingBytes < 2) {\n sizeBytes[0] = -1;\n return -1;\n }\n // The total size of the frame given two size bytes is:\n // (4 * secondSizeByte) + firstSizeByte\n else {\n sizeBytes[0] = 4 * packet.getUint8(byteOffset + 1) + packet.getUint8(byteOffset);\n return 2;\n }\n }\n /**\n * Parse binary data containing an Opus packet into one or more Opus frames.\n *\n * @param data Binary data containing an Opus packet to be parsed. The data should begin with the first byte (i.e the\n * TOC byte) of an Opus packet. 
Note that the size of the data does not have to equal the size of the\n * contained Opus packet.\n * @param lenBytes Size of the data (in bytes).\n * @param selfDelimited Indicates if the Opus packet is self-delimiting\n * (https://www.rfc-editor.org/rfc/rfc6716#appendix-B).\n * @param tocByte Optional variable to store the TOC (table of contents) byte.\n * @param frameOffsets Optional variable to store the offsets (from the start of the data) to the first bytes of each\n * Opus frame.\n * @param frameSizes Required variable to store the sizes (in bytes) of each Opus frame.\n * @param payloadOffset Optional variable to store the offset (from the start of the data) to the first byte of the\n * payload.\n * @param packetLenBytes Optional variable to store the length of the Opus packet (in bytes).\n * @returns Number of Opus frames.\n */\n opusPacketParseImpl(data, lenBytes, selfDelimited, tocByte, frameOffsets, frameSizes, payloadOffset, packetLenBytes) {\n if (!frameSizes || lenBytes < 0)\n return this.OPUS_BAD_ARG;\n if (lenBytes === 0)\n return this.OPUS_INVALID_PACKET;\n // The number of Opus frames in the packet.\n let numFrames;\n // Intermediate storage for the number of bytes parsed to determine the size of a frame.\n let numBytesParsed;\n // The number of the padding bytes (excluding the padding count bytes) in the packet.\n let paddingBytes = 0;\n // Indicates whether CBR (constant bitrate) framing is used.\n let cbr = false;\n // The TOC (table of contents) byte (https://www.rfc-editor.org/rfc/rfc6716#section-3.1).\n const toc = data.getUint8(0);\n // Store the TOC byte.\n if (tocByte)\n tocByte[0] = toc;\n // The remaining number of bytes to parse from the packet. Note that the TOC byte has already been parsed, hence the\n // minus 1.\n let remainingBytes = lenBytes - 1;\n // This keeps track of where we are in the packet. This starts at 1 since the TOC byte has already been read.\n let byteOffset = 1;\n // The size of the last Opus frame in bytes.\n let lastSizeBytes = remainingBytes;\n // Read the `c` bits (i.e. code bits) from the TOC byte.\n switch (toc & 0x3) {\n // A code 0 packet (https://www.rfc-editor.org/rfc/rfc6716#section-3.2.2) has one frame.\n case 0:\n numFrames = 1;\n break;\n // A code 1 packet (https://www.rfc-editor.org/rfc/rfc6716#section-3.2.3) has two CBR (constant bitrate) frames.\n case 1:\n numFrames = 2;\n cbr = true;\n if (!selfDelimited) {\n // Undelimited code 1 packets must be an even number of data bytes, otherwise the packet is invalid.\n if (remainingBytes & 0x1)\n return this.OPUS_INVALID_PACKET;\n // The sizes of both frames are equal (i.e. 
half of the number of data bytes).\n lastSizeBytes = remainingBytes / 2;\n // If `lastSizeBytes` is too large, we will catch it later.\n frameSizes[0][0] = lastSizeBytes;\n }\n break;\n // A code 2 packet (https://www.rfc-editor.org/rfc/rfc6716#section-3.2.4) has two VBR (variable bitrate) frames.\n case 2:\n numFrames = 2;\n numBytesParsed = this.opusParseSize(data, byteOffset, remainingBytes, frameSizes[0]);\n remainingBytes -= numBytesParsed;\n // The parsed size of the first frame cannot be larger than the number of remaining bytes in the packet.\n if (frameSizes[0][0] < 0 || frameSizes[0][0] > remainingBytes) {\n return this.OPUS_INVALID_PACKET;\n }\n byteOffset += numBytesParsed;\n // The size of the second frame is the remaining number of bytes after the first frame.\n lastSizeBytes = remainingBytes - frameSizes[0][0];\n break;\n // A code 3 packet (https://www.rfc-editor.org/rfc/rfc6716#section-3.2.5) has multiple CBR/VBR frames (from 0 to\n // 120 ms).\n default:\n // Code 3 packets must have at least 2 bytes (i.e. at least 1 byte after the TOC byte).\n if (remainingBytes < 1)\n return this.OPUS_INVALID_PACKET;\n // Frame count byte format:\n // 0\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // |v|p| M |\n // +-+-+-+-+-+-+-+-+\n //\n // Read the frame count byte, which immediately follows the TOC byte.\n const frameCountByte = data.getUint8(byteOffset++);\n --remainingBytes;\n // Read the 'M' bits of the frame count byte, which encode the number of frames.\n numFrames = frameCountByte & 0x3f;\n // The number of frames in a code 3 packet must not be 0.\n if (numFrames <= 0)\n return this.OPUS_INVALID_PACKET;\n const samplesPerFrame = this.opusPacketGetSamplesPerFrame(data, 48000);\n // A single frame can have at most 2880 samples, which happens in the case where 60ms of 48kHz audio is encoded\n // per frame. A code 3 packet cannot contain more than 120ms of audio, so the total number of samples cannot\n // exceed 2880 * 2 = 5760.\n if (samplesPerFrame * numFrames > 5760)\n return this.OPUS_INVALID_PACKET;\n // Parse padding bytes if the 'p' bit is 1.\n if (frameCountByte & 0x40) {\n let paddingCountByte;\n let numPaddingBytes;\n // Remove padding bytes (including padding count bytes) from the remaining byte count.\n do {\n // Sanity check that there are enough bytes to parse and remove the padding.\n if (remainingBytes <= 0)\n return this.OPUS_INVALID_PACKET;\n // Get the next padding count byte.\n paddingCountByte = data.getUint8(byteOffset++);\n --remainingBytes;\n // If the padding count byte has a value in the range 0...254, then the total size of the padding is the\n // value in the padding count byte.\n //\n // If the padding count byte has value 255, then the total size of the padding is 254 plus the value in the\n // next padding count byte. Therefore, keep reading padding count bytes while the value is 255.\n numPaddingBytes = paddingCountByte === 255 ? 254 : paddingCountByte;\n remainingBytes -= numPaddingBytes;\n paddingBytes += numPaddingBytes;\n } while (paddingCountByte === 255);\n }\n // Sanity check that the remaining number of bytes is not negative after removing the padding.\n if (remainingBytes < 0)\n return this.OPUS_INVALID_PACKET;\n // Read the 'v' bit (i.e. VBR bit).\n cbr = !(frameCountByte & 0x80);\n // VBR case\n if (!cbr) {\n lastSizeBytes = remainingBytes;\n // Let M be the number of frames. There will be M - 1 frame length indicators (which can be 1 or 2 bytes)\n // corresponding to the lengths of frames 0 to M - 2. The size of the last frame (i.e. 
frame M - 1) is the\n // number of data bytes after the end of frame M - 2 and before the start of the padding bytes.\n for (let i = 0; i < numFrames - 1; ++i) {\n numBytesParsed = this.opusParseSize(data, byteOffset, remainingBytes, frameSizes[i]);\n remainingBytes -= numBytesParsed;\n // The remaining number of data bytes must be enough to contain each frame.\n if (frameSizes[i][0] < 0 || frameSizes[i][0] > remainingBytes) {\n return this.OPUS_INVALID_PACKET;\n }\n byteOffset += numBytesParsed;\n lastSizeBytes -= numBytesParsed + frameSizes[i][0];\n }\n // Sanity check that the size of the last frame is not negative.\n if (lastSizeBytes < 0)\n return this.OPUS_INVALID_PACKET;\n }\n // CBR case\n else if (!selfDelimited) {\n // The size of each frame is the number of data bytes divided by the number of frames.\n lastSizeBytes = Math.trunc(remainingBytes / numFrames);\n // The number of data bytes must be a non-negative integer multiple of the number of frames.\n if (lastSizeBytes * numFrames !== remainingBytes)\n return this.OPUS_INVALID_PACKET;\n // All frames have equal size in the undelimited CBR case.\n for (let i = 0; i < numFrames - 1; ++i) {\n frameSizes[i][0] = lastSizeBytes;\n }\n }\n }\n // Self-delimited framing uses an extra 1 or 2 bytes, immediately preceding the data bytes, to indicate either the\n // size of the last frame (for code 0, code 2, and VBR code 3 packets) or the size of all the frames (for code 1 and\n // CBR code 3 packets). See https://www.rfc-editor.org/rfc/rfc6716#appendix-B.\n if (selfDelimited) {\n // The extra frame size byte(s) will always indicate the size of the last frame.\n numBytesParsed = this.opusParseSize(data, byteOffset, remainingBytes, frameSizes[numFrames - 1]);\n remainingBytes -= numBytesParsed;\n // There must be enough data bytes for the last frame.\n if (frameSizes[numFrames - 1][0] < 0 || frameSizes[numFrames - 1][0] > remainingBytes) {\n return this.OPUS_INVALID_PACKET;\n }\n byteOffset += numBytesParsed;\n // For CBR packets, the sizes of all the frames are equal.\n if (cbr) {\n // There must be enough data bytes for all the frames.\n if (frameSizes[numFrames - 1][0] * numFrames > remainingBytes) {\n return this.OPUS_INVALID_PACKET;\n }\n for (let i = 0; i < numFrames - 1; ++i) {\n frameSizes[i][0] = frameSizes[numFrames - 1][0];\n }\n }\n // At this point, `lastSizeBytes` contains the size of the last frame plus the size of the extra frame size\n // byte(s), so sanity check that `lastSizeBytes` is the upper bound for the size of the last frame.\n else if (!(numBytesParsed + frameSizes[numFrames - 1][0] <= lastSizeBytes)) {\n return this.OPUS_INVALID_PACKET;\n }\n }\n // Undelimited case\n else {\n // Because the size of the last packet is not encoded explicitly, it is possible that the size of the last packet\n // (or of all the packets, for the CBR case) is larger than maximum frame size.\n if (lastSizeBytes > this.OPUS_MAX_FRAME_SIZE_BYTES)\n return this.OPUS_INVALID_PACKET;\n frameSizes[numFrames - 1][0] = lastSizeBytes;\n }\n // Store the offset to the start of the payload.\n if (payloadOffset)\n payloadOffset[0] = byteOffset;\n // Store the offsets to the start of each frame.\n for (let i = 0; i < numFrames; ++i) {\n if (frameOffsets)\n frameOffsets[i][0] = byteOffset;\n byteOffset += frameSizes[i][0];\n }\n // Store the length of the Opus packet.\n if (packetLenBytes)\n packetLenBytes[0] = byteOffset + paddingBytes;\n return numFrames;\n }\n /**\n * Parse a single undelimited Opus packet into one or more Opus frames.\n *\n * 
@param packet Opus packet to be parsed.\n * @param lenBytes Size of the packet (in bytes).\n * @param tocByte Optional variable to store the TOC (table of contents) byte.\n * @param frameOffsets Optional variable to store the offsets (from the start of the packet) to the first bytes of\n * each Opus frame.\n * @param frameSizes Required variable to store the sizes (in bytes) of each Opus frame.\n * @param payloadOffset Optional variable to store the offset (from the start of the packet) to the first byte of the\n * payload.\n * @returns Number of Opus frames.\n */\n opusPacketParse(packet, lenBytes, tocByte, frameOffsets, frameSizes, payloadOffset) {\n return this.opusPacketParseImpl(packet, lenBytes, \n /* selfDelimited */ false, tocByte, frameOffsets, frameSizes, payloadOffset, null);\n }\n /**\n * This function returns the SILK VAD (voice activity detection) information encoded in the Opus packet. For CELT-only\n * packets that do not have VAD information, it returns -1.\n *\n * @param packet Opus packet.\n * @param lenBytes Size of the packet (in bytes).\n * @returns 0: no frame had the VAD flag set.\n * 1: at least one frame had the VAD flag set.\n * -1: VAD status could not be determined.\n */\n opusPacketHasVoiceActivity(packet, lenBytes) {\n if (!packet || lenBytes <= 0)\n return 0;\n // In CELT-only mode, we can not determine whether there is VAD.\n if (this.opusPacketIsCeltOnly(packet))\n return -1;\n const numSilkFrames = this.opusNumSilkFrames(packet);\n // It is not possible for `opusNumSilkFrames()` to return 0, so we ignore the next sanity check for test coverage.\n /* istanbul ignore next */\n if (numSilkFrames === 0)\n return -1;\n const opusFrameOffsets = new Array(this.OPUS_MAX_OPUS_FRAMES);\n const opusFrameSizes = new Array(this.OPUS_MAX_OPUS_FRAMES);\n for (let i = 0; i < this.OPUS_MAX_OPUS_FRAMES; ++i) {\n opusFrameOffsets[i] = [undefined];\n opusFrameSizes[i] = [undefined];\n }\n // Parse packet to get the Opus frames.\n const numOpusFrames = this.opusPacketParse(packet, lenBytes, null, opusFrameOffsets, opusFrameSizes, null);\n // VAD status cannot be determined for invalid packets.\n if (numOpusFrames < 0)\n return -1;\n // Iterate over all Opus frames, which may contain multiple SILK frames, to determine the VAD status.\n for (let i = 0; i < numOpusFrames; ++i) {\n if (opusFrameSizes[i][0] < 1)\n continue;\n // LP layer header bits format (https://www.rfc-editor.org/rfc/rfc6716#section-4.2.3):\n //\n // Mono case:\n // +-----------------+----------+\n // | 1 to 3 VAD bits | LBRR bit |\n // +-----------------+----------+\n //\n // Stereo case:\n // +---------------------+--------------+----------------------+---------------+\n // | 1 to 3 mid VAD bits | mid LBRR bit | 1 to 3 side VAD bits | side LBRR bit |\n // +---------------------+--------------+----------------------+---------------+\n // The upper 1 to 3 bits (dependent on the number of SILK frames) of the LP layer contain VAD bits. 
If any of\n // these VAD bits are 1, then voice activity is present.\n if (packet.getUint8(opusFrameOffsets[i][0]) >> (8 - numSilkFrames))\n return 1;\n // In the stereo case, there is a second set of 1 to 3 VAD bits, so also check these VAD bits.\n const channels = this.opusPacketGetNumChannels(packet);\n if (channels === 2 &&\n (packet.getUint8(opusFrameOffsets[i][0]) << (numSilkFrames + 1)) >> (8 - numSilkFrames)) {\n return 1;\n }\n }\n // No voice activity was detected.\n return 0;\n }\n /**\n * This method is based on Definition of the Opus Audio Codec\n * (https://tools.ietf.org/html/rfc6716). Basically, this method is based on\n * parsing the LP layer of an Opus packet, particularly the LBRR flag.\n *\n * @param packet Opus packet.\n * @param lenBytes Size of the packet (in bytes).\n * @returns true: packet has fec encoding about previous packet.\n * false: no fec encoding present.\n */\n opusPacketHasFec(packet, lenBytes) {\n if (!packet || lenBytes <= 0)\n return false;\n // In CELT-only mode, packets should not have FEC.\n if (this.opusPacketIsCeltOnly(packet))\n return false;\n const opusFrameOffsets = new Array(this.OPUS_MAX_OPUS_FRAMES);\n const opusFrameSizes = new Array(this.OPUS_MAX_OPUS_FRAMES);\n for (let i = 0; i < this.OPUS_MAX_OPUS_FRAMES; ++i) {\n opusFrameOffsets[i] = [undefined];\n opusFrameSizes[i] = [undefined];\n }\n // Parse packet to get the Opus frames.\n const numOpusFrames = this.opusPacketParse(packet, lenBytes, null, opusFrameOffsets, opusFrameSizes, null);\n if (numOpusFrames < 0)\n return false;\n /* istanbul ignore next */\n if (opusFrameSizes[0][0] <= 1)\n return false;\n const numSilkFrames = this.opusNumSilkFrames(packet);\n /* istanbul ignore next */\n if (numSilkFrames === 0)\n return false;\n const channels = this.opusPacketGetNumChannels(packet);\n /* istanbul ignore next */\n if (channels !== 1 && channels !== 2)\n return false;\n // A frame starts with the LP layer. The LP layer begins with two to eight\n // header bits.These consist of one VAD bit per SILK frame (up to 3),\n // followed by a single flag indicating the presence of LBRR frames.\n // For a stereo packet, these first flags correspond to the mid channel, and\n // a second set of flags is included for the side channel. Because these are\n // the first symbols decoded by the range coder and because they are coded\n // as binary values with uniform probability, they can be extracted directly\n // from the most significant bits of the first byte of compressed data.\n for (let i = 0; i < channels; i++) {\n if (packet.getUint8(opusFrameOffsets[0][0]) & (0x80 >> ((i + 1) * (numSilkFrames + 1) - 1)))\n return true;\n }\n return false;\n }\n}\nRedundantAudioEncoder.shouldLog = true;\nRedundantAudioEncoder.shouldReportStats = true;\nRedundantAudioEncoder.initializeWorker();\n" = ...
Any controller that attempts to enqueue an audio payload larger than 1000 bytes will\n * encounter this error and will permanently stop sending or receiving audio.\n */\n enqueueAudioFrameIfPayloadSizeIsValid(\n // @ts-ignore\n frame, controller) {\n if (frame.data.byteLength > this.maxAudioPayloadSizeBytes)\n return;\n controller.enqueue(frame);\n }\n /**\n * Receives encoded frames and modifies as needed before sending to transport.\n */\n senderTransform(\n // @ts-ignore\n frame, controller) {\n const frameMetadata = frame.getMetadata();\n // @ts-ignore\n if (frameMetadata.payloadType !== this.redPayloadType) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n const primaryPayloadBuffer = this.getPrimaryPayload(frame.timestamp, frame.data);\n if (!primaryPayloadBuffer) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n const encodedBuffer = this.encode(frame.timestamp, primaryPayloadBuffer);\n /* istanbul ignore next */\n if (!encodedBuffer) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n frame.data = encodedBuffer;\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n /**\n * Get the primary payload from encoding\n */\n getPrimaryPayload(primaryTimestamp, frame) {\n const encodings = this.splitEncodings(primaryTimestamp, frame);\n if (!encodings || encodings.length < 1)\n return null;\n return encodings[encodings.length - 1].payload;\n }\n /**\n * Split up the encoding received into primary and redundant encodings\n * These will be ordered oldest to newest which is the same ordering\n * in the RTP red payload.\n */\n splitEncodings(primaryTimestamp, frame, getFecInfo = false, primarySequenceNumber = undefined) {\n // process RED headers (according to RFC 2198)\n // 0 1 2 3\n // 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1\n // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n // |F| block PT | timestamp offset | block length |\n // +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+\n //\n // last header\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // |0| Block PT |\n // +-+-+-+-+-+-+-+-+\n const payload = new DataView(frame);\n let payloadSizeBytes = payload.byteLength;\n let totalPayloadSizeBytes = 0;\n let totalHeaderSizeBytes = 0;\n let primaryPayloadSizeBytes = 0;\n let payloadOffset = 0;\n let gotLastBlock = false;\n const encodings = new Array();\n const redundantEncodingBlockLengths = new Array();\n const redundantEncodingTimestamps = new Array();\n while (payloadSizeBytes > 0) {\n gotLastBlock = (payload.getUint8(payloadOffset) & 0x80) === 0;\n if (gotLastBlock) {\n // Bits 1 through 7 are payload type\n const payloadType = payload.getUint8(payloadOffset) & 0x7f;\n // Unexpected payload type. This is a bad packet.\n if (payloadType !== this.opusPayloadType) {\n return null;\n }\n totalPayloadSizeBytes += this.redLastHeaderSizeBytes;\n totalHeaderSizeBytes += this.redLastHeaderSizeBytes;\n // Accumulated block lengths are equal to or larger than the buffer, which means there is no primary block. 
This\n // is a bad packet.\n if (totalPayloadSizeBytes >= payload.byteLength) {\n return null;\n }\n primaryPayloadSizeBytes = payload.byteLength - totalPayloadSizeBytes;\n break;\n }\n else {\n if (payloadSizeBytes < this.redHeaderSizeBytes) {\n return null;\n }\n // Bits 22 through 31 are payload length\n const blockLength = ((payload.getUint8(payloadOffset + 2) & 0x03) << 8) + payload.getUint8(payloadOffset + 3);\n redundantEncodingBlockLengths.push(blockLength);\n const timestampOffset = payload.getUint16(payloadOffset + 1) >> 2;\n const timestamp = primaryTimestamp - timestampOffset;\n redundantEncodingTimestamps.push(timestamp);\n totalPayloadSizeBytes += blockLength + this.redHeaderSizeBytes;\n totalHeaderSizeBytes += this.redHeaderSizeBytes;\n payloadOffset += this.redHeaderSizeBytes;\n payloadSizeBytes -= this.redHeaderSizeBytes;\n }\n }\n // The last block was never found. The packet we received\n // does not have a good RED payload.\n if (!gotLastBlock) {\n // Note that sequence numbers only exist for\n // incoming audio frames.\n if (primarySequenceNumber !== undefined) {\n // This could be a possible padding packet used\n // for BWE with a good sequence number.\n // Create a dummy encoding to make sure loss values\n // are calculated correctly by consuming sequence number.\n // Note that for the receive side, we process packets only\n // for loss/recovery calculations and forward the original\n // packet without changing it even in the error case.\n encodings.push({\n payload: frame,\n isRedundant: false,\n seq: primarySequenceNumber,\n });\n return encodings;\n }\n // This is a bad packet.\n return null;\n }\n let redundantPayloadOffset = totalHeaderSizeBytes;\n for (let i = 0; i < redundantEncodingTimestamps.length; i++) {\n const redundantPayloadBuffer = new ArrayBuffer(redundantEncodingBlockLengths[i]);\n const redundantPayloadArray = new Uint8Array(redundantPayloadBuffer);\n redundantPayloadArray.set(new Uint8Array(payload.buffer, redundantPayloadOffset, redundantEncodingBlockLengths[i]), 0);\n const encoding = {\n timestamp: redundantEncodingTimestamps[i],\n payload: redundantPayloadBuffer,\n isRedundant: true,\n };\n if (getFecInfo) {\n encoding.hasFec = this.opusPacketHasFec(new DataView(redundantPayloadBuffer), redundantPayloadBuffer.byteLength);\n }\n encodings.push(encoding);\n redundantPayloadOffset += redundantEncodingBlockLengths[i];\n }\n const primaryPayloadOffset = payload.byteLength - primaryPayloadSizeBytes;\n const primaryPayloadBuffer = new ArrayBuffer(primaryPayloadSizeBytes);\n const primaryArray = new Uint8Array(primaryPayloadBuffer);\n primaryArray.set(new Uint8Array(payload.buffer, primaryPayloadOffset, primaryPayloadSizeBytes), 0);\n const encoding = {\n timestamp: primaryTimestamp,\n payload: primaryPayloadBuffer,\n isRedundant: false,\n seq: primarySequenceNumber,\n };\n if (getFecInfo) {\n encoding.hasFec = this.opusPacketHasFec(new DataView(primaryPayloadBuffer), primaryPayloadBuffer.byteLength);\n }\n encodings.push(encoding);\n return encodings;\n }\n /**\n * Create a new encoding with current primary payload and the older payloads of choice.\n */\n encode(primaryTimestamp, primaryPayload) {\n const primaryPayloadSize = primaryPayload.byteLength;\n // Payload size needs to be valid.\n if (primaryPayloadSize === 0 ||\n primaryPayloadSize >= this.maxRedPacketSizeBytes ||\n primaryPayloadSize >= this.maxAudioPayloadSizeBytes) {\n return null;\n }\n const numRedundantEncodings = this.numRedundantEncodings;\n let headerSizeBytes = 
this.redLastHeaderSizeBytes;\n let payloadSizeBytes = primaryPayloadSize;\n let bytesAvailable = this.maxAudioPayloadSizeBytes - primaryPayloadSize - headerSizeBytes;\n const redundantEncodingTimestamps = new Array();\n const redundantEncodingPayloads = new Array();\n // If redundancy is disabled then only send the primary payload\n if (this.redundancyEnabled) {\n // Determine how much redundancy we can fit into our packet\n let redundantTimestamp = this.uint32WrapAround(primaryTimestamp - this.redPacketizationTime * this.redPacketDistance);\n for (let i = 0; i < numRedundantEncodings; i++) {\n // Do not add redundant encodings that are beyond the maximum timestamp offset.\n if (this.uint32WrapAround(primaryTimestamp - redundantTimestamp) >= this.maxRedTimestampOffset) {\n break;\n }\n let findTimestamp = redundantTimestamp;\n let encoding = this.encodingHistory.find(e => e.timestamp === findTimestamp);\n if (!encoding) {\n // If not found or not important then look for the previous packet.\n // The current packet may have included FEC for the previous, so just\n // use the previous packet instead provided that it has voice activity.\n findTimestamp = this.uint32WrapAround(redundantTimestamp - this.redPacketizationTime);\n encoding = this.encodingHistory.find(e => e.timestamp === findTimestamp);\n }\n if (encoding) {\n const redundantEncodingSizeBytes = encoding.payload.byteLength;\n // Only add redundancy if there are enough bytes available.\n if (bytesAvailable < this.redHeaderSizeBytes + redundantEncodingSizeBytes)\n break;\n bytesAvailable -= this.redHeaderSizeBytes + redundantEncodingSizeBytes;\n headerSizeBytes += this.redHeaderSizeBytes;\n payloadSizeBytes += redundantEncodingSizeBytes;\n redundantEncodingTimestamps.unshift(encoding.timestamp);\n redundantEncodingPayloads.unshift(encoding.payload);\n }\n redundantTimestamp -= this.redPacketizationTime * this.redPacketDistance;\n redundantTimestamp = this.uint32WrapAround(redundantTimestamp);\n }\n }\n const redPayloadBuffer = new ArrayBuffer(headerSizeBytes + payloadSizeBytes);\n const redPayloadView = new DataView(redPayloadBuffer);\n // Add redundant encoding header(s) to new buffer\n let redPayloadOffset = 0;\n for (let i = 0; i < redundantEncodingTimestamps.length; i++) {\n const timestampDelta = primaryTimestamp - redundantEncodingTimestamps[i];\n redPayloadView.setUint8(redPayloadOffset, this.opusPayloadType | 0x80);\n redPayloadView.setUint16(redPayloadOffset + 1, (timestampDelta << 2) | (redundantEncodingPayloads[i].byteLength >> 8));\n redPayloadView.setUint8(redPayloadOffset + 3, redundantEncodingPayloads[i].byteLength & 0xff);\n redPayloadOffset += this.redHeaderSizeBytes;\n }\n // Add primary encoding header to new buffer\n redPayloadView.setUint8(redPayloadOffset, this.opusPayloadType);\n redPayloadOffset += this.redLastHeaderSizeBytes;\n // Add redundant payload(s) to new buffer\n const redPayloadArray = new Uint8Array(redPayloadBuffer);\n for (let i = 0; i < redundantEncodingPayloads.length; i++) {\n redPayloadArray.set(new Uint8Array(redundantEncodingPayloads[i]), redPayloadOffset);\n redPayloadOffset += redundantEncodingPayloads[i].byteLength;\n }\n // Add primary payload to new buffer\n redPayloadArray.set(new Uint8Array(primaryPayload), redPayloadOffset);\n redPayloadOffset += primaryPayload.byteLength;\n /* istanbul ignore next */\n // Sanity check that we got the expected total payload size.\n if (redPayloadOffset !== headerSizeBytes + payloadSizeBytes)\n return null;\n 
this.updateEncodingHistory(primaryTimestamp, primaryPayload);\n return redPayloadBuffer;\n }\n /**\n * Update the encoding history with the latest primary encoding\n */\n updateEncodingHistory(primaryTimestamp, primaryPayload) {\n // Remove encodings from the history if they are too old.\n for (const encoding of this.encodingHistory) {\n const maxTimestampDelta = this.redPacketizationTime * this.redMaxRecoveryDistance;\n if (primaryTimestamp - encoding.timestamp >= maxTimestampDelta) {\n this.encodingHistory.shift();\n }\n else {\n break;\n }\n }\n // Only add an encoding to the history if the encoding is deemed to be important. An encoding is important if it is\n // a CELT-only packet or contains voice activity.\n const packet = new DataView(primaryPayload);\n if (this.opusPacketIsCeltOnly(packet) ||\n this.opusPacketHasVoiceActivity(packet, packet.byteLength) > 0) {\n // Check if adding an encoding will cause the length of the encoding history to exceed the maximum history size.\n // This is not expected to happen but could occur if we get incorrect timestamps. We want to make sure our memory\n // usage is bounded. In this case, just clear the history and start over from empty.\n if (this.encodingHistory.length + 1 > this.maxEncodingHistorySize)\n this.encodingHistory.length = 0;\n this.encodingHistory.push({ timestamp: primaryTimestamp, payload: primaryPayload });\n }\n }\n /**\n * Initialize packet logs and metric values.\n */\n initializePacketLogs() {\n // The extra space from the max RED recovery distance is to ensure that we do not incorrectly count recovery for\n // packets that have already been received but are outside of the max out-of-order distance.\n const packetLogSize = this.maxOutOfOrderPacketDistance + this.redMaxRecoveryDistance;\n this.primaryPacketLog = {\n window: new Array(packetLogSize),\n index: 0,\n windowSize: packetLogSize,\n };\n this.redRecoveryLog = {\n window: new Array(packetLogSize),\n index: 0,\n windowSize: packetLogSize,\n };\n this.fecRecoveryLog = {\n window: new Array(packetLogSize),\n index: 0,\n windowSize: packetLogSize,\n };\n this.totalAudioPacketsExpected = 0;\n this.totalAudioPacketsLost = 0;\n this.totalAudioPacketsRecoveredRed = 0;\n this.totalAudioPacketsRecoveredFec = 0;\n }\n /**\n * Receives encoded frames from the server\n * and adds the timestamps to a packet log\n * to calculate an approximate recovery metric.\n */\n receivePacketLogTransform(\n // @ts-ignore\n frame, controller) {\n const frameMetadata = frame.getMetadata();\n // @ts-ignore\n if (frameMetadata.payloadType !== this.redPayloadType) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n // @ts-ignore\n const encodings = this.splitEncodings(frame.timestamp, frame.data, \n /*getFecInfo*/ true, frameMetadata.sequenceNumber);\n if (!encodings) {\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n return;\n }\n for (let i = encodings.length - 1; i >= 0; i--) {\n if (this.updateLossStats(encodings[i])) {\n this.updateRedStats(encodings[i]);\n this.updateFecStats(encodings[i]);\n }\n }\n this.maybeReportLossStats(frameMetadata.synchronizationSource, encodings[encodings.length - 1].timestamp);\n this.enqueueAudioFrameIfPayloadSizeIsValid(frame, controller);\n }\n /**\n * Adds a timestamp to the primary packet log.\n * This also updates totalAudioPacketsLost and totalAudioPacketsExpected by looking\n * at the difference between timestamps.\n *\n * @param encoding : The encoding to be analyzed\n * @returns false if sequence number was 
greater than max out of order distance\n * true otherwise\n */\n updateLossStats(encoding) {\n if (encoding.isRedundant)\n return true;\n const timestamp = encoding.timestamp;\n const seq = encoding.seq;\n if (this.totalAudioPacketsExpected === 0) {\n this.totalAudioPacketsExpected = 1;\n this.newestSequenceNumber = seq;\n this.addTimestamp(this.primaryPacketLog, timestamp);\n return true;\n }\n const diff = this.int16(seq - this.newestSequenceNumber);\n if (diff < -this.maxOutOfOrderPacketDistance)\n return false;\n if (diff < 0) {\n if (!this.hasTimestamp(this.primaryPacketLog, timestamp)) {\n if (this.totalAudioPacketsLost > 0)\n this.totalAudioPacketsLost--;\n this.addTimestamp(this.primaryPacketLog, timestamp);\n this.removeFromRecoveryWindows(timestamp);\n }\n }\n else if (diff > 1) {\n this.totalAudioPacketsLost += diff - 1;\n }\n if (diff > 0) {\n this.totalAudioPacketsExpected += diff;\n this.newestSequenceNumber = encoding.seq;\n this.addTimestamp(this.primaryPacketLog, timestamp);\n }\n return true;\n }\n /**\n * Adds a timestamp to the red recovery log if it is not present in\n * the primary packet log and if it's not too old.\n *\n * @param encoding : The encoding to be analyzed\n */\n updateRedStats(encoding) {\n if (!encoding.isRedundant || this.totalAudioPacketsLost === 0)\n return;\n const timestamp = encoding.timestamp;\n if (!this.hasTimestamp(this.primaryPacketLog, timestamp)) {\n if (!this.hasTimestamp(this.redRecoveryLog, timestamp)) {\n this.totalAudioPacketsRecoveredRed++;\n this.addTimestamp(this.redRecoveryLog, timestamp);\n }\n if (this.removeTimestamp(this.fecRecoveryLog, timestamp)) {\n /* istanbul ignore else */\n if (this.totalAudioPacketsRecoveredFec > 0)\n this.totalAudioPacketsRecoveredFec--;\n }\n }\n }\n /**\n * Adds a timestamp to the fec recovery log if it is not present in\n * the primary packet log and red recovery log and if it is not too old.\n *\n * @param encoding : The encoding to be analyzed\n */\n updateFecStats(encoding) {\n if (!encoding.hasFec || this.totalAudioPacketsLost === 0)\n return;\n const fecTimestamp = encoding.timestamp - this.redPacketizationTime;\n if (this.hasTimestamp(this.primaryPacketLog, fecTimestamp) ||\n this.hasTimestamp(this.redRecoveryLog, fecTimestamp) ||\n this.hasTimestamp(this.fecRecoveryLog, fecTimestamp)) {\n return;\n }\n this.totalAudioPacketsRecoveredFec++;\n this.addTimestamp(this.fecRecoveryLog, fecTimestamp);\n }\n /**\n * Reports loss metrics to DefaultTransceiverController\n *\n * @param timestamp : Timestamp of most recent primary packet\n */\n maybeReportLossStats(ssrc, timestamp) {\n if (timestamp === undefined ||\n timestamp - this.lastLossReportTimestamp < this.lossReportInterval)\n return;\n /* istanbul ignore next */\n if (RedundantAudioEncoder.shouldReportStats) {\n // @ts-ignore\n self.postMessage({\n type: 'RedundantAudioEncoderStats',\n ssrc,\n totalAudioPacketsLost: this.totalAudioPacketsLost,\n totalAudioPacketsExpected: this.totalAudioPacketsExpected,\n totalAudioPacketsRecoveredRed: this.totalAudioPacketsRecoveredRed,\n totalAudioPacketsRecoveredFec: this.totalAudioPacketsRecoveredFec,\n });\n }\n this.lastLossReportTimestamp = timestamp;\n }\n /**\n * Adds a timestamp to a packet log\n *\n * @param packetLog : The packetlog to add the timestamp to\n * @param timestamp : The timestamp that should be added\n */\n addTimestamp(packetLog, timestamp) {\n if (timestamp === undefined) {\n return;\n }\n packetLog.window[packetLog.index] = timestamp;\n packetLog.index = (packetLog.index + 1) % 
packetLog.windowSize;\n }\n /**\n * Checks if a timestamp is in a packetlog\n *\n * @param packetLog : The packetlog to search\n * @param timestamp : The timestamp to search for\n * @returns true if timestamp is present, false otherwise\n */\n hasTimestamp(packetLog, timestamp) {\n const element = packetLog.window.find(t => t === timestamp);\n return !!element;\n }\n /**\n * Removes a timestamp from a packet log\n *\n * @param packetLog : The packetlog from which the timestamp should be removed\n * @param timestamp : The timestamp to be removed\n * @returns true if timestamp was present in the log and removed, false otherwise\n */\n removeTimestamp(packetLog, timestamp) {\n const index = packetLog.window.indexOf(timestamp);\n if (index >= 0) {\n packetLog.window[index] = undefined;\n return true;\n }\n return false;\n }\n /**\n * Removes a timestamp from red and fec recovery windows.\n *\n * @param timestamp : The timestamp to be removed\n */\n removeFromRecoveryWindows(timestamp) {\n let removed = this.removeTimestamp(this.redRecoveryLog, timestamp);\n if (removed) {\n if (this.totalAudioPacketsRecoveredRed > 0)\n this.totalAudioPacketsRecoveredRed--;\n }\n removed = this.removeTimestamp(this.fecRecoveryLog, timestamp);\n if (removed) {\n if (this.totalAudioPacketsRecoveredFec > 0)\n this.totalAudioPacketsRecoveredFec--;\n }\n }\n /**\n * Converts the supplied argument to 32-bit unsigned integer\n */\n uint32WrapAround(num) {\n const mod = 4294967296; // 2^32\n let res = num;\n if (num >= mod) {\n res = num - mod;\n }\n else if (num < 0) {\n res = mod + num;\n }\n return res;\n }\n /**\n * Converts the supplied argument to 16-bit signed integer\n */\n int16(num) {\n return (num << 16) >> 16;\n }\n /**\n * Determines if an Opus packet is in CELT-only mode.\n *\n * @param packet Opus packet.\n * @returns `true` if the packet is in CELT-only mode.\n */\n opusPacketIsCeltOnly(packet) {\n // TOC byte format (https://www.rfc-editor.org/rfc/rfc6716#section-3.1):\n // 0\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // | config |s| c |\n // +-+-+-+-+-+-+-+-+\n // Since CELT-only packets are represented using configurations 16 to 31, the highest 'config' bit will always be 1\n // for CELT-only packets.\n return !!(packet.getUint8(0) & 0x80);\n }\n /**\n * Gets the number of samples per frame from an Opus packet.\n *\n * @param packet Opus packet. This must contain at least one byte of data.\n * @param sampleRateHz 32-bit integer sampling rate in Hz. This must be a multiple of 400 or inaccurate results will\n * be returned.\n * @returns Number of samples per frame.\n */\n opusPacketGetSamplesPerFrame(packet, sampleRateHz) {\n // Sample rate must be a 32-bit integer.\n sampleRateHz = Math.round(sampleRateHz);\n sampleRateHz = Math.min(Math.max(sampleRateHz, -(Math.pow(2, 32))), Math.pow(2, 32) - 1);\n // TOC byte format (https://www.rfc-editor.org/rfc/rfc6716#section-3.1):\n // 0\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // | config |s| c |\n // +-+-+-+-+-+-+-+-+\n let numSamples;\n let frameSizeOption;\n // Case for CELT-only packet.\n if (this.opusPacketIsCeltOnly(packet)) {\n // The lower 3 'config' bits indicate the frame size option.\n frameSizeOption = (packet.getUint8(0) >> 3) & 0x3;\n // The frame size options 0, 1, 2, 3 correspond to frame sizes of 2.5, 5, 10, 20 ms. Notice that the frame sizes\n // can be represented as (2.5 * 2^0), (2.5 * 2^1), (2.5 * 2^2), (2.5 * 2^3) ms. 
So, the number of samples can be\n // calculated as follows:\n // (sample/s) * (1s/1000ms) * (2.5ms) * 2^(frameSizeOption)\n // = (sample/s) * (1s/400) * 2^(frameSizeOption)\n // = (sample/s) * 2^(frameSizeOption) * (1s/400)\n numSamples = (sampleRateHz << frameSizeOption) / 400;\n }\n // Case for Hybrid packet. Since Hybrid packets are represented using configurations 12 to 15, bits 1 and 2 in the\n // above TOC byte diagram will both be 1.\n else if ((packet.getUint8(0) & 0x60) === 0x60) {\n // In the case of configuration 13 or 15, bit 4 in the above TOC byte diagram will be 1. Configurations 13 and 15\n // correspond to a 20ms frame size, so the number of samples is calculated as follows:\n // (sample/s) * (1s/1000ms) * (20ms)\n // = (sample/s) * (1s/50)\n //\n // In the case of configuration 12 or 14, bit 4 in the above TOC byte diagram will be 0. Configurations 12 and 14\n // correspond to a 10ms frame size, so the number of samples is calculated as follows:\n // (sample/s) * (1s/1000ms) * (10ms)\n // = (sample/s) * (1s/100)\n numSamples = packet.getUint8(0) & 0x08 ? sampleRateHz / 50 : sampleRateHz / 100;\n }\n // Case for SILK-only packet.\n else {\n // The lower 3 'config' bits indicate the frame size option for SILK-only packets.\n frameSizeOption = (packet.getUint8(0) >> 3) & 0x3;\n if (frameSizeOption === 3) {\n // Frame size option 3 corresponds to a frame size of 60ms, so the number of samples is calculated as follows:\n // (sample/s) * (1s/1000ms) * (60ms)\n // = (sample/s) * (60ms) * (1s/1000ms)\n numSamples = (sampleRateHz * 60) / 1000;\n }\n else {\n // The frame size options 0, 1, 2 correspond to frame sizes of 10, 20, 40 ms. Notice that the frame sizes can be\n // represented as (10 * 2^0), (10 * 2^1), (10 * 2^2) ms. So, the number of samples can be calculated as follows:\n // (sample/s) * (1s/1000ms) * (10ms) * 2^(frameSizeOption)\n // = (sample/s) * (1s/100) * 2^(frameSizeOption)\n // = (sample/s) * 2^(frameSizeOption) * (1s/100)\n numSamples = (sampleRateHz << frameSizeOption) / 100;\n }\n }\n return numSamples;\n }\n /**\n * Gets the number of SILK frames per Opus frame.\n *\n * @param packet Opus packet.\n * @returns Number of SILK frames per Opus frame.\n */\n opusNumSilkFrames(packet) {\n // For computing the frame length in ms, the sample rate is not important since it cancels out. 
We use 48 kHz, but\n // any valid sample rate would work.\n //\n // To calculate the length of a frame (with a 48kHz sample rate) in ms:\n // (samples/frame) * (1s/48000 samples) * (1000ms/s)\n // = (samples/frame) * (1000ms/48000 samples)\n // = (samples/frame) * (1ms/48 samples)\n let frameLengthMs = this.opusPacketGetSamplesPerFrame(packet, 48000) / 48;\n if (frameLengthMs < 10)\n frameLengthMs = 10;\n // The number of SILK frames per Opus frame is described in https://www.rfc-editor.org/rfc/rfc6716#section-4.2.2.\n switch (frameLengthMs) {\n case 10:\n case 20:\n return 1;\n case 40:\n return 2;\n case 60:\n return 3;\n // It is not possible to reach the default case since an Opus packet can only encode sizes of 2.5, 5, 10, 20, 40,\n // or 60 ms, so we ignore the default case for test coverage.\n /* istanbul ignore next */\n default:\n return 0;\n }\n }\n /**\n * Gets the number of channels from an Opus packet.\n *\n * @param packet Opus packet.\n * @returns Number of channels.\n */\n opusPacketGetNumChannels(packet) {\n // TOC byte format (https://www.rfc-editor.org/rfc/rfc6716#section-3.1):\n // 0\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // | config |s| c |\n // +-+-+-+-+-+-+-+-+\n // The 's' bit indicates mono or stereo audio, with 0 indicating mono and 1 indicating stereo.\n return packet.getUint8(0) & 0x4 ? 2 : 1;\n }\n /**\n * Determine the size (in bytes) of an Opus frame.\n *\n * @param packet Opus packet.\n * @param byteOffset Offset (from the start of the packet) to the byte containing the size information.\n * @param remainingBytes Remaining number of bytes to parse from the Opus packet.\n * @param sizeBytes Variable to store the parsed frame size (in bytes).\n * @returns Number of bytes that were parsed to determine the frame size.\n */\n opusParseSize(packet, byteOffset, remainingBytes, sizeBytes) {\n // See https://www.rfc-editor.org/rfc/rfc6716#section-3.2.1 for an explanation of how frame size is represented.\n // If there are no remaining bytes to parse the size from, then the size cannot be determined.\n if (remainingBytes < 1) {\n sizeBytes[0] = -1;\n return -1;\n }\n // If the first byte is in the range 0...251, then this value is the size of the frame.\n else if (packet.getUint8(byteOffset) < 252) {\n sizeBytes[0] = packet.getUint8(byteOffset);\n return 1;\n }\n // If the first byte is in the range 252...255, a second byte is needed. If there is no second byte, then the size\n // cannot be determined.\n else if (remainingBytes < 2) {\n sizeBytes[0] = -1;\n return -1;\n }\n // The total size of the frame given two size bytes is:\n // (4 * secondSizeByte) + firstSizeByte\n else {\n sizeBytes[0] = 4 * packet.getUint8(byteOffset + 1) + packet.getUint8(byteOffset);\n return 2;\n }\n }\n /**\n * Parse binary data containing an Opus packet into one or more Opus frames.\n *\n * @param data Binary data containing an Opus packet to be parsed. The data should begin with the first byte (i.e the\n * TOC byte) of an Opus packet. 
Note that the size of the data does not have to equal the size of the\n * contained Opus packet.\n * @param lenBytes Size of the data (in bytes).\n * @param selfDelimited Indicates if the Opus packet is self-delimiting\n * (https://www.rfc-editor.org/rfc/rfc6716#appendix-B).\n * @param tocByte Optional variable to store the TOC (table of contents) byte.\n * @param frameOffsets Optional variable to store the offsets (from the start of the data) to the first bytes of each\n * Opus frame.\n * @param frameSizes Required variable to store the sizes (in bytes) of each Opus frame.\n * @param payloadOffset Optional variable to store the offset (from the start of the data) to the first byte of the\n * payload.\n * @param packetLenBytes Optional variable to store the length of the Opus packet (in bytes).\n * @returns Number of Opus frames.\n */\n opusPacketParseImpl(data, lenBytes, selfDelimited, tocByte, frameOffsets, frameSizes, payloadOffset, packetLenBytes) {\n if (!frameSizes || lenBytes < 0)\n return this.OPUS_BAD_ARG;\n if (lenBytes === 0)\n return this.OPUS_INVALID_PACKET;\n // The number of Opus frames in the packet.\n let numFrames;\n // Intermediate storage for the number of bytes parsed to determine the size of a frame.\n let numBytesParsed;\n // The number of the padding bytes (excluding the padding count bytes) in the packet.\n let paddingBytes = 0;\n // Indicates whether CBR (constant bitrate) framing is used.\n let cbr = false;\n // The TOC (table of contents) byte (https://www.rfc-editor.org/rfc/rfc6716#section-3.1).\n const toc = data.getUint8(0);\n // Store the TOC byte.\n if (tocByte)\n tocByte[0] = toc;\n // The remaining number of bytes to parse from the packet. Note that the TOC byte has already been parsed, hence the\n // minus 1.\n let remainingBytes = lenBytes - 1;\n // This keeps track of where we are in the packet. This starts at 1 since the TOC byte has already been read.\n let byteOffset = 1;\n // The size of the last Opus frame in bytes.\n let lastSizeBytes = remainingBytes;\n // Read the `c` bits (i.e. code bits) from the TOC byte.\n switch (toc & 0x3) {\n // A code 0 packet (https://www.rfc-editor.org/rfc/rfc6716#section-3.2.2) has one frame.\n case 0:\n numFrames = 1;\n break;\n // A code 1 packet (https://www.rfc-editor.org/rfc/rfc6716#section-3.2.3) has two CBR (constant bitrate) frames.\n case 1:\n numFrames = 2;\n cbr = true;\n if (!selfDelimited) {\n // Undelimited code 1 packets must be an even number of data bytes, otherwise the packet is invalid.\n if (remainingBytes & 0x1)\n return this.OPUS_INVALID_PACKET;\n // The sizes of both frames are equal (i.e. 
half of the number of data bytes).\n lastSizeBytes = remainingBytes / 2;\n // If `lastSizeBytes` is too large, we will catch it later.\n frameSizes[0][0] = lastSizeBytes;\n }\n break;\n // A code 2 packet (https://www.rfc-editor.org/rfc/rfc6716#section-3.2.4) has two VBR (variable bitrate) frames.\n case 2:\n numFrames = 2;\n numBytesParsed = this.opusParseSize(data, byteOffset, remainingBytes, frameSizes[0]);\n remainingBytes -= numBytesParsed;\n // The parsed size of the first frame cannot be larger than the number of remaining bytes in the packet.\n if (frameSizes[0][0] < 0 || frameSizes[0][0] > remainingBytes) {\n return this.OPUS_INVALID_PACKET;\n }\n byteOffset += numBytesParsed;\n // The size of the second frame is the remaining number of bytes after the first frame.\n lastSizeBytes = remainingBytes - frameSizes[0][0];\n break;\n // A code 3 packet (https://www.rfc-editor.org/rfc/rfc6716#section-3.2.5) has multiple CBR/VBR frames (from 0 to\n // 120 ms).\n default:\n // Code 3 packets must have at least 2 bytes (i.e. at least 1 byte after the TOC byte).\n if (remainingBytes < 1)\n return this.OPUS_INVALID_PACKET;\n // Frame count byte format:\n // 0\n // 0 1 2 3 4 5 6 7\n // +-+-+-+-+-+-+-+-+\n // |v|p| M |\n // +-+-+-+-+-+-+-+-+\n //\n // Read the frame count byte, which immediately follows the TOC byte.\n const frameCountByte = data.getUint8(byteOffset++);\n --remainingBytes;\n // Read the 'M' bits of the frame count byte, which encode the number of frames.\n numFrames = frameCountByte & 0x3f;\n // The number of frames in a code 3 packet must not be 0.\n if (numFrames <= 0)\n return this.OPUS_INVALID_PACKET;\n const samplesPerFrame = this.opusPacketGetSamplesPerFrame(data, 48000);\n // A single frame can have at most 2880 samples, which happens in the case where 60ms of 48kHz audio is encoded\n // per frame. A code 3 packet cannot contain more than 120ms of audio, so the total number of samples cannot\n // exceed 2880 * 2 = 5760.\n if (samplesPerFrame * numFrames > 5760)\n return this.OPUS_INVALID_PACKET;\n // Parse padding bytes if the 'p' bit is 1.\n if (frameCountByte & 0x40) {\n let paddingCountByte;\n let numPaddingBytes;\n // Remove padding bytes (including padding count bytes) from the remaining byte count.\n do {\n // Sanity check that there are enough bytes to parse and remove the padding.\n if (remainingBytes <= 0)\n return this.OPUS_INVALID_PACKET;\n // Get the next padding count byte.\n paddingCountByte = data.getUint8(byteOffset++);\n --remainingBytes;\n // If the padding count byte has a value in the range 0...254, then the total size of the padding is the\n // value in the padding count byte.\n //\n // If the padding count byte has value 255, then the total size of the padding is 254 plus the value in the\n // next padding count byte. Therefore, keep reading padding count bytes while the value is 255.\n numPaddingBytes = paddingCountByte === 255 ? 254 : paddingCountByte;\n remainingBytes -= numPaddingBytes;\n paddingBytes += numPaddingBytes;\n } while (paddingCountByte === 255);\n }\n // Sanity check that the remaining number of bytes is not negative after removing the padding.\n if (remainingBytes < 0)\n return this.OPUS_INVALID_PACKET;\n // Read the 'v' bit (i.e. VBR bit).\n cbr = !(frameCountByte & 0x80);\n // VBR case\n if (!cbr) {\n lastSizeBytes = remainingBytes;\n // Let M be the number of frames. There will be M - 1 frame length indicators (which can be 1 or 2 bytes)\n // corresponding to the lengths of frames 0 to M - 2. The size of the last frame (i.e. 
frame M - 1) is the\n // number of data bytes after the end of frame M - 2 and before the start of the padding bytes.\n for (let i = 0; i < numFrames - 1; ++i) {\n numBytesParsed = this.opusParseSize(data, byteOffset, remainingBytes, frameSizes[i]);\n remainingBytes -= numBytesParsed;\n // The remaining number of data bytes must be enough to contain each frame.\n if (frameSizes[i][0] < 0 || frameSizes[i][0] > remainingBytes) {\n return this.OPUS_INVALID_PACKET;\n }\n byteOffset += numBytesParsed;\n lastSizeBytes -= numBytesParsed + frameSizes[i][0];\n }\n // Sanity check that the size of the last frame is not negative.\n if (lastSizeBytes < 0)\n return this.OPUS_INVALID_PACKET;\n }\n // CBR case\n else if (!selfDelimited) {\n // The size of each frame is the number of data bytes divided by the number of frames.\n lastSizeBytes = Math.trunc(remainingBytes / numFrames);\n // The number of data bytes must be a non-negative integer multiple of the number of frames.\n if (lastSizeBytes * numFrames !== remainingBytes)\n return this.OPUS_INVALID_PACKET;\n // All frames have equal size in the undelimited CBR case.\n for (let i = 0; i < numFrames - 1; ++i) {\n frameSizes[i][0] = lastSizeBytes;\n }\n }\n }\n // Self-delimited framing uses an extra 1 or 2 bytes, immediately preceding the data bytes, to indicate either the\n // size of the last frame (for code 0, code 2, and VBR code 3 packets) or the size of all the frames (for code 1 and\n // CBR code 3 packets). See https://www.rfc-editor.org/rfc/rfc6716#appendix-B.\n if (selfDelimited) {\n // The extra frame size byte(s) will always indicate the size of the last frame.\n numBytesParsed = this.opusParseSize(data, byteOffset, remainingBytes, frameSizes[numFrames - 1]);\n remainingBytes -= numBytesParsed;\n // There must be enough data bytes for the last frame.\n if (frameSizes[numFrames - 1][0] < 0 || frameSizes[numFrames - 1][0] > remainingBytes) {\n return this.OPUS_INVALID_PACKET;\n }\n byteOffset += numBytesParsed;\n // For CBR packets, the sizes of all the frames are equal.\n if (cbr) {\n // There must be enough data bytes for all the frames.\n if (frameSizes[numFrames - 1][0] * numFrames > remainingBytes) {\n return this.OPUS_INVALID_PACKET;\n }\n for (let i = 0; i < numFrames - 1; ++i) {\n frameSizes[i][0] = frameSizes[numFrames - 1][0];\n }\n }\n // At this point, `lastSizeBytes` contains the size of the last frame plus the size of the extra frame size\n // byte(s), so sanity check that `lastSizeBytes` is the upper bound for the size of the last frame.\n else if (!(numBytesParsed + frameSizes[numFrames - 1][0] <= lastSizeBytes)) {\n return this.OPUS_INVALID_PACKET;\n }\n }\n // Undelimited case\n else {\n // Because the size of the last packet is not encoded explicitly, it is possible that the size of the last packet\n // (or of all the packets, for the CBR case) is larger than maximum frame size.\n if (lastSizeBytes > this.OPUS_MAX_FRAME_SIZE_BYTES)\n return this.OPUS_INVALID_PACKET;\n frameSizes[numFrames - 1][0] = lastSizeBytes;\n }\n // Store the offset to the start of the payload.\n if (payloadOffset)\n payloadOffset[0] = byteOffset;\n // Store the offsets to the start of each frame.\n for (let i = 0; i < numFrames; ++i) {\n if (frameOffsets)\n frameOffsets[i][0] = byteOffset;\n byteOffset += frameSizes[i][0];\n }\n // Store the length of the Opus packet.\n if (packetLenBytes)\n packetLenBytes[0] = byteOffset + paddingBytes;\n return numFrames;\n }\n /**\n * Parse a single undelimited Opus packet into one or more Opus frames.\n *\n * 
@param packet Opus packet to be parsed.\n * @param lenBytes Size of the packet (in bytes).\n * @param tocByte Optional variable to store the TOC (table of contents) byte.\n * @param frameOffsets Optional variable to store the offsets (from the start of the packet) to the first bytes of\n * each Opus frame.\n * @param frameSizes Required variable to store the sizes (in bytes) of each Opus frame.\n * @param payloadOffset Optional variable to store the offset (from the start of the packet) to the first byte of the\n * payload.\n * @returns Number of Opus frames.\n */\n opusPacketParse(packet, lenBytes, tocByte, frameOffsets, frameSizes, payloadOffset) {\n return this.opusPacketParseImpl(packet, lenBytes, \n /* selfDelimited */ false, tocByte, frameOffsets, frameSizes, payloadOffset, null);\n }\n /**\n * This function returns the SILK VAD (voice activity detection) information encoded in the Opus packet. For CELT-only\n * packets that do not have VAD information, it returns -1.\n *\n * @param packet Opus packet.\n * @param lenBytes Size of the packet (in bytes).\n * @returns 0: no frame had the VAD flag set.\n * 1: at least one frame had the VAD flag set.\n * -1: VAD status could not be determined.\n */\n opusPacketHasVoiceActivity(packet, lenBytes) {\n if (!packet || lenBytes <= 0)\n return 0;\n // In CELT-only mode, we can not determine whether there is VAD.\n if (this.opusPacketIsCeltOnly(packet))\n return -1;\n const numSilkFrames = this.opusNumSilkFrames(packet);\n // It is not possible for `opusNumSilkFrames()` to return 0, so we ignore the next sanity check for test coverage.\n /* istanbul ignore next */\n if (numSilkFrames === 0)\n return -1;\n const opusFrameOffsets = new Array(this.OPUS_MAX_OPUS_FRAMES);\n const opusFrameSizes = new Array(this.OPUS_MAX_OPUS_FRAMES);\n for (let i = 0; i < this.OPUS_MAX_OPUS_FRAMES; ++i) {\n opusFrameOffsets[i] = [undefined];\n opusFrameSizes[i] = [undefined];\n }\n // Parse packet to get the Opus frames.\n const numOpusFrames = this.opusPacketParse(packet, lenBytes, null, opusFrameOffsets, opusFrameSizes, null);\n // VAD status cannot be determined for invalid packets.\n if (numOpusFrames < 0)\n return -1;\n // Iterate over all Opus frames, which may contain multiple SILK frames, to determine the VAD status.\n for (let i = 0; i < numOpusFrames; ++i) {\n if (opusFrameSizes[i][0] < 1)\n continue;\n // LP layer header bits format (https://www.rfc-editor.org/rfc/rfc6716#section-4.2.3):\n //\n // Mono case:\n // +-----------------+----------+\n // | 1 to 3 VAD bits | LBRR bit |\n // +-----------------+----------+\n //\n // Stereo case:\n // +---------------------+--------------+----------------------+---------------+\n // | 1 to 3 mid VAD bits | mid LBRR bit | 1 to 3 side VAD bits | side LBRR bit |\n // +---------------------+--------------+----------------------+---------------+\n // The upper 1 to 3 bits (dependent on the number of SILK frames) of the LP layer contain VAD bits. 
If any of\n // these VAD bits are 1, then voice activity is present.\n if (packet.getUint8(opusFrameOffsets[i][0]) >> (8 - numSilkFrames))\n return 1;\n // In the stereo case, there is a second set of 1 to 3 VAD bits, so also check these VAD bits.\n const channels = this.opusPacketGetNumChannels(packet);\n if (channels === 2 &&\n (packet.getUint8(opusFrameOffsets[i][0]) << (numSilkFrames + 1)) >> (8 - numSilkFrames)) {\n return 1;\n }\n }\n // No voice activity was detected.\n return 0;\n }\n /**\n * This method is based on Definition of the Opus Audio Codec\n * (https://tools.ietf.org/html/rfc6716). Basically, this method is based on\n * parsing the LP layer of an Opus packet, particularly the LBRR flag.\n *\n * @param packet Opus packet.\n * @param lenBytes Size of the packet (in bytes).\n * @returns true: packet has fec encoding about previous packet.\n * false: no fec encoding present.\n */\n opusPacketHasFec(packet, lenBytes) {\n if (!packet || lenBytes <= 0)\n return false;\n // In CELT-only mode, packets should not have FEC.\n if (this.opusPacketIsCeltOnly(packet))\n return false;\n const opusFrameOffsets = new Array(this.OPUS_MAX_OPUS_FRAMES);\n const opusFrameSizes = new Array(this.OPUS_MAX_OPUS_FRAMES);\n for (let i = 0; i < this.OPUS_MAX_OPUS_FRAMES; ++i) {\n opusFrameOffsets[i] = [undefined];\n opusFrameSizes[i] = [undefined];\n }\n // Parse packet to get the Opus frames.\n const numOpusFrames = this.opusPacketParse(packet, lenBytes, null, opusFrameOffsets, opusFrameSizes, null);\n if (numOpusFrames < 0)\n return false;\n /* istanbul ignore next */\n if (opusFrameSizes[0][0] <= 1)\n return false;\n const numSilkFrames = this.opusNumSilkFrames(packet);\n /* istanbul ignore next */\n if (numSilkFrames === 0)\n return false;\n const channels = this.opusPacketGetNumChannels(packet);\n /* istanbul ignore next */\n if (channels !== 1 && channels !== 2)\n return false;\n // A frame starts with the LP layer. The LP layer begins with two to eight\n // header bits.These consist of one VAD bit per SILK frame (up to 3),\n // followed by a single flag indicating the presence of LBRR frames.\n // For a stereo packet, these first flags correspond to the mid channel, and\n // a second set of flags is included for the side channel. Because these are\n // the first symbols decoded by the range coder and because they are coded\n // as binary values with uniform probability, they can be extracted directly\n // from the most significant bits of the first byte of compressed data.\n for (let i = 0; i < channels; i++) {\n if (packet.getUint8(opusFrameOffsets[0][0]) & (0x80 >> ((i + 1) * (numSilkFrames + 1) - 1)))\n return true;\n }\n return false;\n }\n}\nRedundantAudioEncoder.shouldLog = true;\nRedundantAudioEncoder.shouldReportStats = true;\nRedundantAudioEncoder.initializeWorker();\n"

Source code of the redundant audio worker, exported as a string so it can be loaded into a worker context. The RedundantAudioEncoder class it defines adds RED (RFC 2198) redundancy to outgoing Opus audio through WebRTC Encoded Transform or, where that interface is unavailable, through transform streams handed over via worker messages. On the send side it interleaves up to two redundant copies of recent voice-active payloads into each packet while keeping the payload under the 1000-byte limit that Chromium-based browsers impose on insertable-streams audio; on the receive side it logs packet timestamps to estimate how many lost packets were recovered through RED and through Opus in-band FEC, and reports those statistics to the main thread every five seconds. The worker's getNumRedundantEncodingsForPacketLoss maps measured packet loss to a recommendation: no redundancy up to 8 percent loss, one redundant encoding up to 18 percent, two up to 75 percent, and above that redundancy is disabled entirely to avoid aggravating congestion.
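The packet layout the worker emits follows RFC 2198: each redundant block is announced by a four-byte header carrying an F bit, a 7-bit block payload type, a 14-bit timestamp offset, and a 10-bit block length, and the primary block is announced by a single header byte holding only the payload type. The standalone TypeScript sketch below (illustrative only, not part of the SDK surface) writes and reads one such header using the same bit arithmetic as splitEncodings and encode above; the payload type 111 in the usage lines is a placeholder.

// Writes a four-byte RED block header (RFC 2198) into `view` at `offset`:
// F bit set (more blocks follow), 7-bit payload type, 14-bit timestamp
// offset, 10-bit block length.
function writeRedHeader(view: DataView, offset: number, payloadType: number,
                        timestampOffset: number, blockLengthBytes: number): void {
  view.setUint8(offset, 0x80 | (payloadType & 0x7f));
  // Upper 14 bits: timestamp offset. Lower 2 bits: top bits of the length.
  view.setUint16(offset + 1, ((timestampOffset & 0x3fff) << 2) | ((blockLengthBytes >> 8) & 0x03));
  view.setUint8(offset + 3, blockLengthBytes & 0xff);
}

// Reads the same header back; a cleared F bit marks the last (primary) block.
function readRedHeader(view: DataView, offset: number) {
  const first = view.getUint8(offset);
  const isLastBlock = (first & 0x80) === 0;
  const payloadType = first & 0x7f;
  if (isLastBlock) {
    return { isLastBlock, payloadType };
  }
  const timestampOffset = view.getUint16(offset + 1) >> 2;
  const blockLengthBytes = ((view.getUint8(offset + 2) & 0x03) << 8) | view.getUint8(offset + 3);
  return { isLastBlock, payloadType, timestampOffset, blockLengthBytes };
}

// A 120-byte redundant block two packets (2 x 960 samples) older than the primary:
const header = new DataView(new ArrayBuffer(4));
writeRedHeader(header, 0, 111, 1920, 120);
console.log(readRedHeader(header, 0));

With two redundant blocks and one primary block per packet, the header overhead is 2 x 4 + 1 = 9 bytes, matching redHeaderSizeBytes and redLastHeaderSizeBytes in the worker.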

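The worker is configured entirely through messages; its onmessage handler above accepts StartRedWorker, PassthroughTransform, RedPayloadType, OpusPayloadType, UpdateNumRedundantEncodings, Enable, and Disable, and it posts REDWorkerLog and RedundantAudioEncoderStats messages back. The sketch below shows how such a code string might be loaded and driven from the main thread; the Blob-URL wiring and the payload-type values 63 and 111 are assumptions for illustration (the SDK performs this setup internally), not a public API.

// The code string documented above; in application code it would be imported
// from the SDK rather than declared.
declare const RedundantAudioEncoderWorkerCode: string;

const url = URL.createObjectURL(
  new Blob([RedundantAudioEncoderWorkerCode], { type: 'application/javascript' })
);
const worker = new Worker(url);

// Payload types must match the negotiated SDP; 63 and 111 are placeholders.
worker.postMessage({ msgType: 'RedPayloadType', payloadType: 63 });
worker.postMessage({ msgType: 'OpusPayloadType', payloadType: 111 });

// Request one redundant encoding per packet; the worker clamps this to 2.
worker.postMessage({ msgType: 'UpdateNumRedundantEncodings', numRedundantEncodings: 1 });

// With Chromium insertable streams, the sender and receiver encoded streams
// would be transferred in via 'StartRedWorker' (shape per the worker's
// onmessage handler), e.g.:
//   worker.postMessage({ msgType: 'StartRedWorker', send, receive },
//     [send.readable, send.writable, receive.readable, receive.writable]);

// Logs and periodic loss statistics arrive on the main thread.
worker.onmessage = (event: MessageEvent) => {
  if (event.data.type === 'RedundantAudioEncoderStats') {
    const { totalAudioPacketsExpected, totalAudioPacketsLost } = event.data;
    console.log(`audio loss: ${totalAudioPacketsLost}/${totalAudioPacketsExpected}`);
  } else if (event.data.type === 'REDWorkerLog') {
    console.log(event.data.log);
  }
};
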
Functions

isAudioTransformDevice

isDestroyable
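A hedged usage sketch: assuming isDestroyable is a type guard for objects that implement the Destroyable contract (an asynchronous destroy method), it can gate cleanup of long-lived resources without knowing their concrete type.

import { isDestroyable } from 'amazon-chime-sdk-js';

// Tear a resource down only if it opts into Destroyable; the guard narrows
// `resource` so that destroy() type-checks.
async function cleanUp(resource: unknown): Promise<void> {
  if (isDestroyable(resource)) {
    await resource.destroy();
  }
}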

isVideoTransformDevice
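For the transform-device guards, a sketch under the assumption that isAudioTransformDevice narrows an AudioInputDevice value to AudioTransformDevice (isVideoTransformDevice plays the analogous role for video devices): transform devices wrap an inner device and carry resources that should be stopped, while intrinsic device specifiers need no such call.

import { AudioInputDevice, isAudioTransformDevice } from 'amazon-chime-sdk-js';

// Release a previously selected audio input. Only transform devices expose
// stop(); plain device specifiers (and null) are left alone.
async function releaseAudioInput(device: AudioInputDevice): Promise<void> {
  if (isAudioTransformDevice(device)) {
    await device.stop();
  }
}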

Generated using TypeDoc