Add Features to your CAF Receiver

This page contains code snippets and descriptions of the features available for customizing a CAF receiver app.

Creating a customized receiver app

The main structure of a customized CAF receiver app includes required elements along with optional features to customize the app for your particular use case:

  1. A cast-media-player element that represents the media player.
  2. Custom CSS for the cast-media-player element.
  3. A script element to load the Cast receiver framework.
  4. JavaScript code to customize the receiver app by intercepting messages and handling events.
  5. Queue for autoplay.
  6. Options to configure playback.
  7. Options to set the receiver context.
  8. A JavaScript call to start the receiver application.

Here is sample code for a CAF receiver application that illustrates this full structure.

Tip: Also see Loading media using contentId, contentUrl and entity.

<html>
<head>
</head>
<body>
  <cast-media-player id="player"></cast-media-player>
  <style>
    #player {
        --theme-hue: 210;
        --splash-image: url("my.png");
    }
  </style>
  <script type="text/javascript" src="//www.gstatic.com/cast/sdk/libs/caf_receiver/v3/cast_receiver_framework.js">
  </script>
  <script>
    const context = cast.framework.CastReceiverContext.getInstance();
    const playerManager = context.getPlayerManager();

    // intercept the LOAD request to be able to read in a contentId and get data
    playerManager.setMessageInterceptor(
        cast.framework.messages.MessageType.LOAD, loadRequestData => {
            if (loadRequestData.media && loadRequestData.media.contentId) {
                return thirdparty.getMediaById(loadRequestData.media.contentId)
                .then(media => {
                  if (media) {
                    loadRequestData.media.contentUrl = media.url;
                    loadRequestData.media.contentType = media.contentType;
                    loadRequestData.media.metadata = media.metadata;
                  }
                  return loadRequestData;
                });
            }
            return loadRequestData;
        });

    // listen to all Core Events
    playerManager.addEventListener(cast.framework.events.category.CORE,
        event => {
            console.log(event);
        });

    const MyCastQueue = class extends cast.framework.QueueBase {
        initialize(loadRequestData) {
            const media = loadRequestData.media;
            const items = [];
            items.push(myCreateItem(media)); // your custom function logic

            const queueData = new cast.framework.messages.QueueData();
            queueData.items = items;

            return queueData;
        }

        nextItems(itemId) {
           return [myCreateNextItem()]; // your custom function logic
        }
    };

    const playbackConfig = new cast.framework.PlaybackConfig();

    // Sets the player to start playback as soon as there are five seconds of
    // media contents buffered. Default is 10.
    playbackConfig.autoResumeDuration = 5;

    const myCastQueue = new MyCastQueue(); // create instance of queue Object

    context.start({queue: myCastQueue, playbackConfig: playbackConfig});
  </script>
</body>
</html>

Application configuration and options

The CastReceiverContext is the outermost class exposed to the developer, and it manages loading of underlying libraries and handles the initialization of the receiver SDK.

If the CAF Receiver API detects that a sender has disconnected, it raises the SENDER_DISCONNECTED event. It also raises the SENDER_DISCONNECTED event if the receiver has not been able to communicate with the sender for maxInactivity seconds. During development it is a good idea to set maxInactivity to a high value so that the receiver app does not close while you are debugging it with the Chrome Remote Debugger:

const context = cast.framework.CastReceiverContext.getInstance();
const options = new cast.framework.CastReceiverOptions();
options.maxInactivity = 3600;
context.start(options);

However, for a published receiver application it is better to not set maxInactivity and instead rely on the default value. Note that the Cast receiver options are set only once in the application.

The other configuration is the cast.framework.PlaybackConfig. This can be set as follows:

const playbackConfig = new cast.framework.PlaybackConfig();
playbackConfig.manifestRequestHandler = requestInfo => {
  requestInfo.withCredentials = true;
};
context.start({playbackConfig: playbackConfig});

This configuration affects each content playback and essentially provides override behavior. For a list of behaviors that developers can override, see the definition of cast.framework.PlaybackConfig. To change the configuration between content items, use the PlayerManager to get the current playbackConfig, modify or add an override, and set the playbackConfig again like this:

const playerManager =
    cast.framework.CastReceiverContext.getInstance().getPlayerManager();
const playbackConfig = (Object.assign(
            new cast.framework.PlaybackConfig(), playerManager.getPlaybackConfig()));
playbackConfig.autoResumeNumberOfSegments = 1;
playerManager.setPlaybackConfig(playbackConfig);

Note that if PlaybackConfig has never been overridden, getPlaybackConfig() returns null. Any property of PlaybackConfig whose value is undefined falls back to the default CAF behavior and implementation.
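
The merge-then-override step above can be exercised without the SDK. The following is a minimal sketch in which plain objects stand in for PlaybackConfig and the helper name withOverride is hypothetical:

```javascript
// Sketch of the override pattern: copy defaults, layer any existing
// overrides on top, then change a single field. Plain objects stand in
// for cast.framework.PlaybackConfig; withOverride is a hypothetical name.
function withOverride(defaults, current, key, value) {
  // current may be null if no config was ever overridden.
  const merged = Object.assign({}, defaults, current || {});
  merged[key] = value;
  return merged;
}
```

Properties that remain undefined on the merged object keep the SDK's default behavior, which is why copying onto a fresh PlaybackConfig instance is safe.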

Audio tracks

Audio track selection in the CAF Receiver SDK has a new AudioTracksManager class that simplifies and streamlines track selection, giving you more control and better access to properties, such as name, URL and language. This class is best used in the event handler for the cast.framework.events.EventType.PLAYER_LOAD_COMPLETE event.

The API provides various ways to query and select the active audio tracks. Here is an example of how to select a track to be active by specifying its ID:

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    const audioTracksManager = playerManager.getAudioTracksManager();

    // Get all audio tracks
    const tracks = audioTracksManager.getTracks();

    // Choose the first audio track to be active by specifying its ID
    audioTracksManager.setActiveById(tracks[0].trackId);
  });
context.start();

The AudioTracksManager class also provides a method getActiveTrack().

Here is an example of how to select the first audio track for a specified language, in this case English:

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    const audioTracksManager = playerManager.getAudioTracksManager();

    // Set the first matching language audio track to be active
    audioTracksManager.setActiveByLanguage('en');
  });
context.start();

The AudioTracksManager class also provides a method getTracksByLanguage(language) that returns all tracks for the specified language.

The audio language code is retrieved from the media manifest and should follow RFC 5646. Language codes can be presented in 2-character nomenclature (such as "es", "en" or "de"), or 4-character nomenclature (such as "en-us", "es-es" or "fr-ca").

If the media manifest follows a different language code standard, the receiver app needs to convert it into an RFC 5646 conforming language code. CAF Receiver SDK provides an interceptor EDIT_AUDIO_TRACKS to perform modifications:

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();
// Intercept the EDIT_AUDIO_TRACKS request
playerManager.setMessageInterceptor(cast.framework.messages.MessageType.EDIT_AUDIO_TRACKS, request => {
  // write logic to convert language codes here
});
context.start();
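
The conversion itself is plain JavaScript. Here is a minimal sketch, assuming the manifest uses ISO 639-2 three-letter codes; the mapping table and the toRfc5646 helper are illustrative, not part of the SDK:

```javascript
// Hypothetical helper (not an SDK API): map common ISO 639-2
// three-letter codes onto RFC 5646 two-letter primary subtags.
const ISO_639_2_TO_RFC_5646 = {
  eng: 'en', spa: 'es', deu: 'de', ger: 'de', fra: 'fr'
};

function toRfc5646(code) {
  const lower = String(code).toLowerCase();
  // Already RFC 5646-like ("en" or "en-us"): pass through normalized.
  if (/^[a-z]{2}(-[a-z]{2})?$/.test(lower)) {
    return lower;
  }
  // Three-letter manifest code: translate when the mapping is known,
  // otherwise leave the code unchanged.
  return ISO_639_2_TO_RFC_5646[lower] || lower;
}
```

Inside the EDIT_AUDIO_TRACKS interceptor you would apply such a function to the language field of each track carried by the request before returning it.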

When playing through ad breaks, any audio track selection, such as language, made before a break will persist after the break for the same content, even if the ads are in a different language.

Closed captions (subtitles)

Closed caption track selection in the CAF Receiver SDK has a new TextTracksManager class that simplifies and streamlines track selection, giving you more control and better access to properties, such as name, URL and language.

The TextTracksManager class is best used in the event handler for the cast.framework.events.EventType.PLAYER_LOAD_COMPLETE event.

Closed captions selection in the CAF Receiver SDK is simplified and streamlined with other parts of the SDK.

The API supports controlling WebVTT, TTML and CEA-608.

The TextTracksManager class provides various ways to query and select a closed caption track to be active. Here is an example of how to select the first track to be active by specifying its ID:

Note: Although setActiveByIds(ids) takes an array of ID numbers, for now the CAF receiver SDK supports only one active text track at a time.

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    const textTracksManager = playerManager.getTextTracksManager();

    // Get all text tracks
    const tracks = textTracksManager.getTracks();

    // Choose the first text track to be active by its ID
    textTracksManager.setActiveByIds([tracks[0].trackId]);
  });
context.start();

The TextTracksManager class also provides a method getActiveTracks().

Here is an example of how to select the first text track for a specific language:

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {
    const textTracksManager = playerManager.getTextTracksManager();

    // Set the first matching language text track to be active
    textTracksManager.setActiveByLanguage('en');
  });
context.start();

The TextTracksManager class also provides a method getTracksByLanguage(language) that returns all tracks for the specified language.

The text language code is retrieved from the media manifest and should follow RFC 5646. Language codes can be presented in 2-character nomenclature (such as "es", "en" or "de"), or 4-character nomenclature (such as "en-us", "es-es" or "fr-ca").

If the media manifest follows a different language code standard, the receiver app needs to convert it into an RFC 5646 conforming language code. CAF Receiver SDK provides an interceptor EDIT_TRACKS_INFO to perform modifications:

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();
// intercept the EDIT_TRACKS_INFO request
playerManager.setMessageInterceptor(cast.framework.messages.MessageType.EDIT_TRACKS_INFO, request => {
  // write logic to convert language codes here
});
context.start();

The API allows a developer to dynamically add new closed caption tracks, in this case for different languages and out-of-band tracks, and then select a track to be the new active track:

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

playerManager.addEventListener(
  cast.framework.events.EventType.PLAYER_LOAD_COMPLETE, () => {

    // Create text tracks object
    const textTracksManager = playerManager.getTextTracksManager();

    // Create track 1 for English text
    const track1 = textTracksManager.createTrack();
    track1.trackContentType = 'text/vtt';
    track1.trackContentId = 'http://example.com/en.vtt';
    track1.language = 'en';

    // Create track 2 for Spanish text
    const track2 = textTracksManager.createTrack();
    const track2Id = track2.trackId;
    track2.trackContentType = 'text/vtt';
    track2.trackContentId = 'http://example.com/spa.vtt';
    track2.language = 'spa';

    // Add tracks
    textTracksManager.addTracks([track1, track2]);

    // Set the first matching language text track to be active
    textTracksManager.setActiveByLanguage('en');
  });
context.start();

When playing through ad breaks, any text track selection, such as language, made before a break will persist after the break for the same content, even if the ads are in a different language.

Styling the player

The CAF Receiver SDK provides a built-in player UI. To use it, add the cast-media-player element to your HTML. CSS-like styling lets you set various properties, including the background image, splash image, and font family.

<cast-media-player id="player"></cast-media-player>
<style>
  #player {
    --theme-hue: 100;
    --progress-color: rgb(0, 255, 0);
    --splash-image: url('http://some/image.png');
  }
</style>
<script>
  // Update style using javascript
  let playerElement = document.getElementById('player');
  playerElement.style.setProperty('--splash-image', 'url("http://some/other/image.png")');
</script>

The following table lists all currently available customization parameters.

Name                 Default value        Description
--background-image                        Background image.
--ad-title           Ad                   Title of the ad.
--skip-ad-title      Skip ad              Text of the Skip Ad text box.
--logo-image                              Logo image (shown during launch).
--splash-image       App name             The image for the splash screen.
--watermark-image                         The image for the watermark.
--font-family        Open Sans            Font family for metadata and progress bar.
--spinner-image      Default image        The image to display while launching.
--buffering-image    Default image        The image to display while buffering.
--pause-image        Default image        The image to display while paused.
--play-image                              The image to show in metadata while playing.
--theme-hue          42                   The hue to use for the player.
--progress-color     hsl(hue, 95%, 60%)   Color for progress bar.
--break-color        hsl(hue, 100%, 50%)  Color for the ad break mark.

A blank value in the "Default value" column indicates no default exists.
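
The hsl(hue, 95%, 60%) and hsl(hue, 100%, 50%) defaults are derived from --theme-hue; the following hypothetical helpers just make that relationship explicit:

```javascript
// Hypothetical helpers mirroring the table's defaults: the progress and
// break colors fall back to hsl() values built from the theme hue.
function defaultProgressColor(hue) {
  return 'hsl(' + hue + ', 95%, 60%)';
}

function defaultBreakColor(hue) {
  return 'hsl(' + hue + ', 100%, 50%)';
}
```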

Overscan

Layouts for TV have some unique requirements due to the evolution of TV standards and the desire to always present a full screen picture to viewers. TV devices can clip the outside edge of an app layout in order to ensure that the entire display is filled. This behavior is generally referred to as overscan. Avoid screen elements getting clipped due to overscan by incorporating a 10% margin on all sides of your layout.
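
For example, a container that keeps all interactive and metadata elements inside the safe area could look like this (the class name is illustrative):

```css
/* Hypothetical safe-area container: a 10% margin on every side keeps
   content clear of TV overscan clipping. */
.overscan-safe-area {
  position: absolute;
  top: 10%;
  right: 10%;
  bottom: 10%;
  left: 10%;
}
```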

Custom UI data binding

If you want to use your own custom UI element instead of cast-media-player, you can: rather than adding the cast-media-player element to your HTML, use the PlayerDataBinder class to bind your UI to the player state. The binder also supports sending events for data changes if the app does not support data binding.

const context = cast.framework.CastReceiverContext.getInstance();
const player = context.getPlayerManager();

const playerData = {};
const playerDataBinder = new cast.framework.ui.PlayerDataBinder(playerData);

// Update ui according to player state
playerDataBinder.addEventListener(
    cast.framework.ui.PlayerDataEventType.STATE_CHANGED,
    e => {
      switch (e.value) {
        case cast.framework.ui.State.LAUNCHING:
        case cast.framework.ui.State.IDLE:
          // Write your own event handling code
          break;
        case cast.framework.ui.State.LOADING:
          // Write your own event handling code
          break;
        case cast.framework.ui.State.BUFFERING:
          // Write your own event handling code
          break;
        case cast.framework.ui.State.PAUSED:
          // Write your own event handling code
          break;
        case cast.framework.ui.State.PLAYING:
          // Write your own event handling code
          break;
      }
    });
context.start();

You should add at least one MediaElement to the HTML so that the receiver can use it. If multiple MediaElement objects are available, you should tag the MediaElement that you want the receiver to use. You do this by adding castMediaElement to the video element's class list, as shown below; otherwise, the receiver will choose the first MediaElement.

<video class="castMediaElement"></video>

Note: the CAF Receiver SDK currently discourages passing a custom MediaElement, as doing so forfeits many of the benefits of the SDK.

Event handling

The CAF Receiver SDK allows your receiver app to handle player events. The event handler takes a cast.framework.events.EventType parameter (or an array of these parameters) that specifies the event(s) that should trigger the listener. Preconfigured arrays of cast.framework.events.EventType that are useful for debugging can be found in cast.framework.events.category. The event parameter provides additional information about the event.

For example, if you want to know when a mediaStatus change is broadcast, you can use the following logic to handle the event:

const playerManager =
    cast.framework.CastReceiverContext.getInstance().getPlayerManager();
playerManager.addEventListener(
    cast.framework.events.EventType.MEDIA_STATUS, (event) => {
      // Write your own event handling code, for example
      // using the event.mediaStatus value
});

Note: The receiver framework automatically tracks when a sender connects or disconnects from it and doesn't require an explicit SENDER_DISCONNECTED event handler in your own receiver logic (as in Receiver v2).

Message interception

CAF Receiver SDK allows your receiver app to intercept messages and execute custom code on those messages. The message interceptor takes a cast.framework.messages.MessageType parameter that specifies what type of message should be intercepted.

Note: The interceptor should return the modified request or a Promise that resolves with the modified request value. Returning null will prevent calling the default message handler.

For example, if you want to change the load request data, you can use the following logic to intercept and modify it.

Tip: Also see Loading media using contentId, contentUrl and entity.

const context = cast.framework.CastReceiverContext.getInstance();
const player = context.getPlayerManager();

player.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD,
    request => {
      return new Promise((resolve, reject) => {
        // Write your own custom code if you would like to see the stream info
        const onSuccess = function(streamInfo) {
          const mediaUrl = streamInfo.url;
          const captionUrl = streamInfo.captionUrl;

          request.media.contentId = mediaUrl;
          request.media.contentType = 'application/dash+xml';
          request.media.tracks = [{
            trackId: 1,
            trackContentId: captionUrl,
            trackContentType: 'text/vtt',
            type: cast.framework.messages.TrackType.TEXT
          }];
          resolve(request);
        };
        const onFailure = function() {
          reject(new cast.framework.messages.ErrorData(
              cast.framework.messages.ErrorType.INVALID_REQUEST));
        };
        fetchData(request.media.contentId, onSuccess, onFailure);
      });
    });

player.setMessageInterceptor(
    cast.framework.messages.MessageType.MEDIA_STATUS,
    status => {
      status.customData = {};
      return status;
    });

context.start();

Custom messages

Message exchange is the key interaction method for receiver applications.

A sender issues messages to a receiver using the sender APIs for the platform the sender is running on (Android, iOS, Chrome). The event object (which is the manifestation of a message) that is passed to the event listeners has a data element (event.data) where the data takes on the properties of the specific event type.

A receiver application may choose to listen for messages on a specified namespace. By virtue of doing so, the receiver application is said to support that namespace protocol. It is then up to any connected senders wishing to communicate on that namespace to use the appropriate protocol.

All namespaces are defined by a string and must begin with "urn:x-cast:" followed by any string. For example, "urn:x-cast:com.example.cast.mynamespace".
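
Since the prefix rule is easy to get wrong, a small guard (a hypothetical helper, not an SDK API) can validate a namespace before it is registered:

```javascript
// Hypothetical validation helper: custom namespaces must start with
// "urn:x-cast:" followed by a non-empty suffix.
const CAST_NS_PREFIX = 'urn:x-cast:';

function isValidCastNamespace(ns) {
  return typeof ns === 'string' &&
      ns.startsWith(CAST_NS_PREFIX) &&
      ns.length > CAST_NS_PREFIX.length;
}
```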

Note: The media message namespace "urn:x-cast:com.google.cast.media" is reserved. Media messages sent using the Cast APIs on both the sender and receiver use the media namespace protocol by convention. See Media and players for more about media messages.

Here is a code snippet for the receiver to listen to custom messages from connected senders:

const context = cast.framework.CastReceiverContext.getInstance();

const CUSTOM_CHANNEL = 'urn:x-cast:com.example.cast.mynamespace';
context.addCustomMessageListener(CUSTOM_CHANNEL, function(customEvent) {
  // handle customEvent.
});

context.start();

Similarly, receiver applications can keep senders informed about the state of the receiver by sending messages to connected senders. A receiver application can send messages using sendCustomMessage(namespace, senderId, message) on CastReceiverContext. A receiver can send messages to an individual sender, either in response to a received message or due to an application state change. Beyond point-to-point messaging (with a limit of 64 KB per message), a receiver may also broadcast messages to all connected senders.
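
The 64 KB limit applies to the serialized payload, so a pre-send check along these lines can help; fitsMessageLimit and the byte-counting approach are illustrative, not SDK APIs:

```javascript
// Hypothetical pre-send check: measure the serialized payload before
// calling sendCustomMessage.
const MAX_MESSAGE_BYTES = 64 * 1024;

function fitsMessageLimit(message) {
  const serialized = JSON.stringify(message);
  // Rough UTF-8 byte count: each %XX escape produced by
  // encodeURIComponent represents one byte.
  const bytes = encodeURIComponent(serialized)
      .replace(/%[0-9A-F]{2}/g, 'x').length;
  return bytes <= MAX_MESSAGE_BYTES;
}
```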

Ad breaks

The CAF Receiver SDK supports embedding ads within a given media stream.

A break is an interval for playback of one or more ads; each ad is called a break clip. This diagram shows two ad breaks, each with two ads:

CAF Receiver SDK provides two ways to incorporate ad breaks into the receiver: client-side and server-side stitching.

Client-side ad stitching

In client-side stitching, you specify the necessary media information for your ad clips by adding breaks to the media when it is being loaded. The following snippet is an example of the previous diagram, where thirdparty is part of your library.

/**
 * @param {!cast.framework.messages.MediaInformation} media
 */
function addBreakToMedia(media) {
  media.breakClips = [
  {
    id: 'bc1',
    title: thirdparty.getBreakClipTitle(),
    contentId: thirdparty.getBreakClipUrl(),
    contentType: thirdparty.getBreakClipContentType(),
    posterUrl: thirdparty.getBreakClipPosterUrl(),
    whenSkippable: 10
  },
  {
    id: 'bc2'
    ...
  },
  {
    id: 'bc3'
    ...
  }];
  media.breaks = [
  {
    id: 'b1',
    breakClipIds: ['bc1', 'bc2'],
    position: thirdparty.getBreakClipPosition()
  },
  {
    id: 'b2',
    breakClipIds: ['bc1', 'bc3'],
    position: thirdparty.getBreakClipPosition()
  }];
}

const context = cast.framework.CastReceiverContext.getInstance();
const castPlayer = context.getPlayerManager();
castPlayer.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD,
    request => {
      addBreakToMedia(request.media);
      return request;
    });
context.start();

Server-side ad stitching

In server-side stitching, the server is expected to provide a single stream that contains both the primary media and ads. In this case, you are expected to provide the duration of the break clip instead of the URL and content type of the break clip, and set isEmbedded to true.

Note that the isEmbedded boolean identifies whether client-side or server-side stitching should be used. By default, CAF uses client-side stitching if the field is not provided.

/**
 * @param {!cast.framework.messages.MediaInformation} media
 */
function addBreakToMedia(media) {
  media.breakClips = [
  {
    id: 'bc1',
    title: thirdparty.getBreakClipTitle(1),
    contentId: thirdparty.getBreakClipUrl(1),
    contentType: thirdparty.getBreakClipContentType(1),
    posterUrl: thirdparty.getBreakClipPosterUrl(1),
    duration: thirdparty.getBreakClipDuration(1)
  },
  {
    id: 'bc2'
    ...
  },
  {
    id: 'bc3'
    ...
  }];
  media.breaks = [
  {
    id: 'b1',
    breakClipIds: ['bc1', 'bc2'],
    position: thirdparty.getBreakPosition(1),
    isEmbedded: true
  },
  {
    id: 'b2',
    breakClipIds: ['bc1', 'bc3'],
    position: thirdparty.getBreakPosition(2),
    isEmbedded: true
  }];
}

const context = cast.framework.CastReceiverContext.getInstance();
const castPlayer = context.getPlayerManager();
castPlayer.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD,
    request => {
      addBreakToMedia(request.media);
      return request;
    });
context.start();

While an ad is playing, the receiver also sends a MediaStatus that contains information about the ad break to the sender, which the sender can use to render its UI and to block seek operations.

Device capabilities

The getDeviceCapabilities method provides device information on the connected Cast device and the video or audio device attached to it. It can provide information about support for Google Assistant, Bluetooth, and the connected display and audio devices.

This method returns an object which you can query by passing in one of the specified enums to get the device capability for that enum. The enums are defined in cast.framework.system.DeviceCapabilities.

This example checks if the receiver device is capable of playing HDR and DolbyVision (DV) with the IS_HDR_SUPPORTED and IS_DV_SUPPORTED keys, respectively.

const context = cast.framework.CastReceiverContext.getInstance();
context.addEventListener(cast.framework.system.EventType.READY, () => {
  const deviceCapabilities = context.getDeviceCapabilities();
  if (deviceCapabilities &&
      deviceCapabilities[cast.framework.system.DeviceCapabilities.IS_HDR_SUPPORTED]) {
    // Write your own event handling code, for example
    // using the deviceCapabilities[cast.framework.system.DeviceCapabilities.IS_HDR_SUPPORTED] value
  }
  if (deviceCapabilities &&
      deviceCapabilities[cast.framework.system.DeviceCapabilities.IS_DV_SUPPORTED]) {
    // Write your own event handling code, for example
    // using the deviceCapabilities[cast.framework.system.DeviceCapabilities.IS_DV_SUPPORTED] value
  }
});
context.start();

Check display type

The canDisplayType method checks for video and audio capabilities of the receiver device and display by validating the media parameters passed in, returning a boolean. All parameters but the first are optional — the more parameters you include, the more precise the check will be.

Its signature is canDisplayType(mimeType, codecs, width, height, framerate).

Examples:

Checks whether the receiver device and display support the video/mp4 mimetype with this particular codec, dimensions, and framerate:

canDisplayType("video/mp4", "avc1.42e015,mp4a.40.5", 1920, 1080, 30)

Checks whether the receiver device and display support 4K video format for this codec by specifying the width of 3840 and height of 2160:

canDisplayType("video/mp4", "hev1.1.2.L150", 3840, 2160)

Checks whether the receiver device and display support HDR10 for this codec, dimensions, and framerate:

canDisplayType("video/mp4", "hev1.2.6.L150", 3840, 2160, 30)

Checks whether the receiver device and display support Dolby Vision (DV) for this codec, dimensions, and framerate:

canDisplayType("video/mp4", "dvhe.04.06", 1920, 1080, 30)

Queueing

Note: Queueing is a major feature introduced as part of Cast Application Framework. The Receiver v2 implementation carries a basic sender-initiated queue while the new queueing implementation introduces receiver-implemented queueing.

Queueing allows partner applications to better integrate with Cast by providing the following features:

  • Support for Google's and partners' cloud queue implementations, so externally created and stored queues can be loaded directly onto Cast devices.
  • Mechanisms that allow pagination of items in the queue rather than loading everything at once, avoiding the Receiver v2 message size limit.
  • Support for new messages, such as going to the next or previous item, fetching a window of items, and getting media information for a set of queue items.
  • Better integration with the Cast ecosystem, such as Google Home and the Google Assistant, through new queueing data.
  • An easy-to-use QueueManager API that allows insertion, removal, and update of queue items.

Application developers can create a Receiver side queue by implementing cast.framework.QueueBase.

Here is a basic example of a simple queue where the initialize call is overridden and then a list of queue items along with queue descriptions are provided to the Cast device.

Tip: Also see Loading media using contentId, contentUrl and entity.

// Creates a simple queue with a combination of contents.
const DemoQueue = class extends cast.framework.QueueBase {
 constructor() {
   super();

   /**
    * List of media urls.
    * @private @const {!Array<string>}
    */
   this.myMediaUrls_ = [...];
 }
 /**
  * Provide a list of items.
  * @param {!cast.framework.messages.LoadRequestData} loadRequestData
  * @return {!cast.framework.messages.QueueData}
  */
 initialize(loadRequestData) {
   const items = [];
   for (const mediaUrl of this.myMediaUrls_) {
     const item = new cast.framework.QueueItem();
     item.media = new cast.framework.messages.MediaInformation();
     item.media.contentId = mediaUrl;
     items.push(item);
   }
   const queueData =
       loadRequestData.queueData || new cast.framework.messages.QueueData();
   queueData.name = 'Your Queue Name';
   queueData.description = 'Your Queue Description';
   queueData.items = items;
   // Start with the first item in the playlist.
   queueData.startIndex = 0;
   // Start from 10 seconds into the first item.
   queueData.currentTime = 10;
   return queueData;
 }
};

In this example, the list of items in the initialize call is provided in the provider's QueueBase constructor call. However, for a cloud queue implementation, the custom receiver logic can fetch the items externally and then return them as part of the initialize call.

To demonstrate a more comprehensive use of the queueing API, here is a Demo queue that implements most of the QueueBase class.

Tip: Also see Loading media using contentId, contentUrl and entity.

const DemoQueue = class extends cast.framework.QueueBase {
 constructor() {
   super();

   /** @private {} */
   YourServer.onSomeEvent = this.updateEntireQueue_;
 }

 /**
  * Initializes the queue.
  * @param {!cast.framework.messages.LoadRequestData} loadRequestData
  * @return {!cast.framework.messages.QueueData}
  */
 initialize(loadRequestData) {
   // Put the first set of items into the queue
   const items = this.nextItems();
   const queueData =
       loadRequestData.queueData || new cast.framework.messages.QueueData();
   queueData.name = 'Your Playlist';
   queueData.description = 'Your Playlist Description';
   queueData.items = items;
   return queueData;
 }

 /**
  * Picks a set of items from the remote server after the reference item ID
  * and returns them as the next items to be inserted into the queue. When
  * referenceItemId is omitted, items are simply appended to the end of the
  * queue.
  * @param {number} referenceItemId
  * @return {!Array<cast.framework.QueueItem>}
  */
 nextItems(referenceItemId) {
   // Assume your media has a itemId and the media url
   return this.constructQueueList_(YourServer.getNextMedias(referenceItemId));
 }

 /**
  * Picks a set of items from the remote server before the reference item ID
  * and returns them as the items to be inserted into the queue. When
  * referenceItemId is omitted, items are inserted at the beginning of the
  * queue.
  * @param {number} referenceItemId
  * @return {!Array<cast.framework.QueueItem>}
  */
 prevItems(referenceItemId) {
   return this.constructQueueList_(YourServer.getPrevMedias(referenceItemId));
 }

 /**
  * Constructs a list of QueueItems based on the media information containing
  * the item id and the media url.
  * @param {!Array<{itemId: number, url: string}>} medias
  * @return {!Array<cast.framework.QueueItem>}
  */
 constructQueueList_(medias) {
   const items = [];
   for (const media of medias) {
     const item = new cast.framework.QueueItem(media.itemId);
     item.media = new cast.framework.messages.MediaInformation();
     item.media.contentId = media.url;
     items.push(item);
   }
   return items;
 }

 /**
  * Logs the currently playing item.
  * @param {number} itemId The unique id for the item.
  * @export
  */
 onCurrentItemIdChanged(itemId) {
   console.log('We are now playing video ' + itemId);
   YourServer.trackUsage(itemId);
 }
};

In the example above, YourServer is your cloud queue server and has logic about how to fetch certain media items.

To use QueueBase-implemented queueing, one would set the queue option in the CastReceiverContext:

const context = cast.framework.CastReceiverContext.getInstance();
context.start({queue: new DemoQueue()});

As part of CAF, we now expose a new QueueManager class that gives developers flexibility in developing their queueing solutions by providing methods to access the currently stored list of queue items as well as the current playing item. Methods also provide operations such as insertion, removal, and update of queueing items. To access an instance of QueueManager:

const context = cast.framework.CastReceiverContext.getInstance();
const queueManager = context.getPlayerManager().getQueueManager();

Here is an example of a queueing implementation that uses the insertion and removal methods based on some event. The example also demonstrates a usage of updateItems where the developers can modify the queue items in the existing queue, such as removing ad breaks.

Tip: Also see Loading media using contentId, contentUrl and entity.

const DemoQueue = class extends cast.framework.messages.QueueBase {
  constructor() {
    super();

    /** @private @const {!cast.framework.QueueManager} */
    this.queueManager_ = cast.framework.CastReceiverContext.getInstance()
        .getPlayerManager().getQueueManager();
  }

  /**
   * Provide a list of items.
   * @param {!cast.framework.messages.LoadRequestData} loadRequestData
   * @return {!cast.framework.messages.QueueData}
   */
  initialize(loadRequestData) {
    // Your normal initialization; see the examples above.
    const queueData =
        loadRequestData.queueData || new cast.framework.messages.QueueData();
    return queueData;
  }

  /** Inserts items to the queue. */
  onSomeEventTriggeringInsertionToQueue() {
    const twoMoreUrls = ['http://url1', 'http://url2'];
    const items = [];
    for (const mediaUrl of twoMoreUrls) {
      const item = new cast.framework.QueueItem();
      item.media = new cast.framework.messages.MediaInformation();
      item.media.contentId = mediaUrl;
      items.push(item);
    }
    // Insert two more items after the current playing item.
    const allItems = this.queueManager_.getItems();
    const currentItem = this.queueManager_.getCurrentItem();
    const nextItemIndex = allItems.indexOf(currentItem) + 1;
    const insertBefore = (nextItemIndex < allItems.length) ?
        allItems[nextItemIndex].itemId :
        undefined;
    this.queueManager_.insertItems(items, insertBefore);
  }

  /** Removes a particular item from the queue. */
  onSomeEventTriggeringRemovalFromQueue() {
    this.queueManager_.removeItems([2]);
  }

  /** Removes all the ads from all the items across the entire queue. */
  onUserBoughtAdFreeVersion() {
    const items = this.queueManager_.getItems();
    this.queueManager_.updateItems(items.map(item => {
      item.media.breaks = undefined;
      return item;
    }));
  }
};

Incoming and outgoing messages

A set of v2 queueing APIs exists today and continues to function. To fully support receiver-side queue fetching as the source of truth, the following additional queueing messages were introduced and are handled by the CAF Receiver SDK:

| Incoming Message | Parameters | Outgoing Response Message | Return |
| --- | --- | --- | --- |
| NEXT | No parameter needed. | MEDIA_STATUS | Receiver will fetch (through nextItems() if necessary) and start playing the next item. |
| PREVIOUS | No parameter needed. | MEDIA_STATUS | Receiver will fetch (through prevItems() if necessary) and start playing the previous item. |
| FETCH_ITEMS | FetchItemsRequestData | QUEUE_CHANGE | A cast.receiver.media.QueueChange or cast.framework.messages.QueueChange. For example, in an insert case, the items field in the JSON contains the list of newly fetched items. |
| GET_ITEMS_INFO | GetItemsInfoRequestData containing itemIds: !Array | ITEMS_INFO | cast.receiver.media.ItemsInfo with queue item information. |
| GET_QUEUE_IDS | No parameter needed. | QUEUE_IDS | cast.receiver.media.QueueId. |

For NEXT/PREVIOUS, if the existing queue representation on the Receiver has no more items, QueueBase.nextItems() or QueueBase.prevItems() is automatically invoked to fetch more items.

For FETCH_ITEMS, the corresponding fetchItems function in the QueueBase implementation is called for cloud queues; it retrieves the relevant data and returns it to the receiver to store.

Whenever more items are fetched, a new message type QUEUE_CHANGE will be triggered and sent back to the sender. See the various types of queue changes.

For GET_ITEMS_INFO, the QueueBase implementation is not triggered; the Receiver returns the media information it already knows for the given list of item ids.

Note: In general, the incoming and outgoing queueing messages are handled by the respective CAF sender SDK (Web, iOS, Android), which ensures the proper UI is shown on the sender side. Application developers are free to intercept or listen to these messages as desired.

Content preload

The Cast Application Framework supports preloading of the media items that follow the current playback item in the queue. The preload operation pre-downloads several segments of the upcoming items. Preloading is controlled by the preloadTime value on the QueueItem object (defaulting to 20 seconds if not provided). The time is expressed in seconds, relative to the end of the currently playing item, and only positive values are valid. For example, if the value is 10 seconds, the item is preloaded 10 seconds before the previous item finishes. If the preload time is greater than the time left on the current item, preloading starts as soon as possible. So if a very large preload value is specified on the queue item, the next item is effectively always being preloaded while the current item plays. However, the setting and choice of this value is left to the developer, since it can affect the bandwidth and streaming performance of the currently playing item.
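As a minimal sketch (the media URL is hypothetical), a queue item can opt into earlier preloading by setting preloadTime when it is constructed:

```javascript
// Assumes the Cast receiver SDK has been loaded; the URL is a placeholder.
const item = new cast.framework.QueueItem();
item.media = new cast.framework.messages.MediaInformation();
item.media.contentId = 'https://example.com/next-video.mp4';
// Start preloading 30 seconds before the current item ends, instead of
// the default 20 seconds.
item.preloadTime = 30;
```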

Note that preloading works for HLS and Smooth Streaming content by default. For DASH content, preloading works only if useLegacyDashSupport is specified in CastReceiverOptions, since the Media Player Library (MPL) supports preload while Shaka does not yet. Regular MP4 video and audio files such as MP3 are not preloaded, because Cast devices support only one media element, which cannot be used to preload while an existing content item is still playing.
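To keep DASH preloading available, the useLegacyDashSupport option can be passed when starting the context (a minimal sketch):

```javascript
const context = cast.framework.CastReceiverContext.getInstance();
const options = new cast.framework.CastReceiverOptions();
// Use MPL instead of Shaka for DASH so that preloading remains available.
options.useLegacyDashSupport = true;
context.start(options);
```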

DRM

Note: One of the key benefits of using CAF Receiver SDK is that your app no longer needs to load MPL and handle media playback separately, as CAF Receiver SDK handles that for you.

Some media content requires Digital Rights Management (DRM). For media content that has its DRM license (and key URL) stored in its manifest (DASH or HLS), CAF handles this case for you. A subset of that content requires a licenseUrl, which is needed to obtain the decryption key. In CAF, you can use PlaybackConfig to set the licenseUrl as needed.

The following code snippet shows how you can set request information for license requests such as withCredentials:

const context = cast.framework.CastReceiverContext.getInstance();
const playbackConfig = new cast.framework.PlaybackConfig();
// Customize the license url for playback
playbackConfig.licenseUrl = 'http://widevine/yourLicenseServer';
playbackConfig.licenseRequestHandler = requestInfo => {
  requestInfo.withCredentials = true;
};
context.start({playbackConfig: playbackConfig});

// Update playback config licenseUrl according to provided value in load request.
context.getPlayerManager().setMediaPlaybackInfoHandler((loadRequest, playbackConfig) => {
  if (loadRequest.media.customData && loadRequest.media.customData.licenseUrl) {
    playbackConfig.licenseUrl = loadRequest.media.customData.licenseUrl;
  }
});

If you have a Google Assistant integration, some of the DRM information, such as the credentials necessary for the content, might be linked directly to your Google account through mechanisms such as OAuth/SSO. In those cases, if the media content is loaded through voice or comes from the cloud, a setCredentials message is invoked from the cloud to the Cast device, providing those credentials. Receiver apps written with CAF can then use the setCredentials information to operate DRM as necessary. Here is an example of using the credentials to construct the media.

Tip: Also see Loading media using contentId, contentUrl and entity.

playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD, loadRequestData => {
      if (loadRequestData.media && loadRequestData.media.entity) {
        return thirdparty
            .getMediaById(
                loadRequestData.media.entity, loadRequestData.credentials)
            .then(media => {
              if (media) {
                loadRequestData.media.contentId = media.url;
                loadRequestData.media.contentType = media.contentType;
                loadRequestData.media.metadata = media.metadata;
              }
              return loadRequestData;
            });
      }
      return loadRequestData;
    });

Tip: Loading media using contentId, contentUrl and entity

The contentId property is typically the URL of the media, but it can also be used as a real ID or as a key parameter for a custom lookup. If contentId is used as a real ID (and is not the URL), the optional contentUrl property holds the URL of the media. In other words, contentId must either be a key/real ID with the optional contentUrl included, or contentId must be the URL itself.

The following snippet shows how to fetch media by contentId (when it is not a URL) and use contentUrl for the media URL.

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

// Support Load by contentId. It fetches media data by contentId and
// uses the contentUrl for the media URL.
playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD, loadRequestData => {
      if (loadRequestData.media && loadRequestData.media.contentId) {
        return thirdparty.getMediaById(loadRequestData.media.contentId)
            .then(media => {
              if (media) {
                loadRequestData.media.contentUrl = media.url;
                loadRequestData.media.contentId = media.id;
                loadRequestData.media.contentType = media.contentType;
                loadRequestData.media.metadata = media.metadata;
              }
              return loadRequestData;
            });
      }
      return loadRequestData;
    });
context.start();

The entity property is a deep link URL that can be used by Google Assistant. The entity value should be converted to contentUrl or contentId by the load interceptor, as follows:

const context = cast.framework.CastReceiverContext.getInstance();
const playerManager = context.getPlayerManager();

// Support load by entity by intercepting load request, and get media
// information by entity and credentials.
playerManager.setMessageInterceptor(
    cast.framework.messages.MessageType.LOAD, loadRequestData => {
      if (loadRequestData.media && loadRequestData.media.entity) {
        return thirdparty
            .getMediaById(
                loadRequestData.media.entity, loadRequestData.credentials)
            .then(media => {
              if (media) {
                loadRequestData.media.contentUrl = media.url;
                loadRequestData.media.contentType = media.contentType;
                loadRequestData.media.metadata = media.metadata;
              }
              return loadRequestData;
            });
      }
      return loadRequestData;
    });
context.start();