DashCast is deprecated as of GPAC 0.9.0; more details are available in the GPAC documentation.


DashCast is an application which allows users to:

  1. transcode a live or non-live stream into multiple qualities (e.g., bitrates and resolutions).
  2. segment it into small chunks and packetize them for delivery via the Dynamic Adaptive Streaming over HTTP (DASH) standard.


DashCast is shipped with GPAC: it is built and installed along with the GPAC source code. It currently runs on Linux distributions and Windows. It depends on the following libraries:

  • libavcodec for transcoding
  • libavdevice for acquisition from devices
  • libavformat for parsing and demultiplexing
  • libswscale for scaling video
  • libavresample for resampling audio

Since DashCast encodes video with the H.264 encoder, make sure that libavcodec-extra-53 is installed before compiling. Also make sure the libavresample-dev package is installed.

To get dependencies:

Before executing this command, note that you may need to compile FFmpeg with x264 support yourself, because most Linux distributions ship an FFmpeg fork called Libav.

Get x264:

> sudo apt-get install libx264-dev

or build it yourself:

> sudo apt-get install yasm
> git clone git://git.videolan.org/x264.git x264 
> cd x264 
> ./configure --enable-shared --disable-cli --enable-strip --disable-avs --disable-swscale --disable-lavf --disable-ffms --disable-gpac --disable-opencl 
> make -j
> sudo make install

Compile ffmpeg:

> sudo apt-get install yasm
> git clone git://source.ffmpeg.org/ffmpeg.git ffmpeg
> cd ffmpeg
> ./configure --enable-shared --disable-debug --enable-libx264 --enable-avresample 
> make -j
> sudo make install

Get the other dependencies from your package manager:

> sudo apt-get install make pkg-config g++ zlib1g-dev firefox-dev libfreetype6-dev libjpeg62-dev libpng12-dev libopenjpeg-dev libmad0-dev libfaad-dev libogg-dev libvorbis-dev libtheora-dev liba52-0.7.4-dev libavcodec-dev libavformat-dev libavutil-dev libswscale-dev libavresample-dev libxv-dev x11proto-video-dev libgl1-mesa-dev x11proto-gl-dev linux-sound-base libxvidcore-dev libssl-dev libjack-dev libasound2-dev libpulse-dev libsdl1.2-dev dvb-apps libavcodec-extra-53 libavdevice-dev libmozjs185-dev

To get the code:

> git clone https://github.com/gpac/gpac.git

To compile:

> cd gpac
> ./configure
> make
> sudo make install
> sudo make install-lib


-a inasrc:str : input audio source named inasrc.
If input is from microphone, inasrc will be “plughw:[x],[y]” where x is the card number and y is the device number.
-v invsrc:str : input video source named invsrc.
If input is from a webcam, invsrc will be “/dev/video[x]” where x is the video device number.
If input is the screen video, invsrc will be “:0.0+[x],[y]” which captures from upper-left at x,y.
If input is from stdin, invsrc will be “pipe:”
-av inavsrc:str : a multiplexed audio and video source named inavsrc.
If this option is present, neither ‘-a’ nor ‘-v’ may be used.
  Note: for UDP input streams, you may want to increase the libav/FFmpeg FIFO size e.g. ‘-av udp://@’
-vf invfmt:str : invfmt is the input video format.
To capture from a webcam, invfmt will be video4linux2.
To capture the screen, invfmt will be x11grab.
-v4l2f inv4l2f:str : inv4l2f is the input format for webcam acquisition.
It can be mjpeg, yuyv422, etc.
-vfr invfr:int : invfr is the input video framerate.
-vres invres:intxint : invres is the input video resolution.
-af inafmt:str : inafmt is the input audio format.
To capture from a microphone, inafmt will be alsa.
-conf confname:str : confname is the configuration file.
The default value is dashcast.conf.

-seg-dur dur:int : dur is the segment duration in milliseconds.
The default value is 1000.
-frag-dur dur:int : dur is the fragment duration in milliseconds.
The default value is 1000.
-live : system is live and input is a camera.
-live-media : system is live and input is a media file.
Absence of -live or -live-media indicates that the system is on-demand.
-no-loop : system does not loop on the input media file.
-seg-marker marker:str : add a marker box named marker at the end of DASH segment.
-gdr : use Gradual Decoder Refresh feature for video encoding.

-out outdir:str : outdir is the output data directory.
The default value is output.
-mpd mpdname:str : mpdname is the MPD file name.
The default value is dashcast.mpd.
-ast-offset dur:int : dur is the MPD availabilityStartTime shift in milliseconds.
The default value is 1000.
-time-shift dur:int : dur is the MPD TimeShiftBufferDepth in seconds.
The default value is 10. Specify -1 to keep all files.
-min-buffer dur:float : dur is the MPD minBufferTime in seconds.
The default value is 1.0.
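
As an illustration of the output options above, a live session could write its segments to a custom directory with a custom MPD name and a 30-second time-shift buffer (the input file name and the chosen values are only illustrative):

 DashCast -av test.mp4 -live-media -out dash_out -mpd live.mpd -time-shift 30 -min-buffer 2.0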

-switch-source confname:str : confname is the name of the configuration file used for source switching.

Configuration files

A configuration file (-conf) must be provided so that DashCast transcodes the stream accordingly. The configuration file contains several entries; each entry is of either video or audio type. The possible options in the configuration file are:

[ID]        # The ID of the representation
type=T      # T can be "audio" or "video"
width=W     # if type is "video", 
            # W is the width of the representation
height=H    # if type is "video", 
            # H is the height of the representation
bitrate=B   # B is the bitrate (bps) of "video" or "audio"
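
For example, a dashcast.conf describing two video representations and one audio representation could look like the following (the IDs and values are only illustrative):

[v720p]
type=video
width=1280
height=720
bitrate=1000000

[v360p]
type=video
width=640
height=360
bitrate=500000

[audio]
type=audio
bitrate=128000

DashCast will then produce one DASH representation per entry, letting the player switch between the two video qualities.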

Another configuration file (-switch-source) is used for source switching.

[ID]         # The ID of the source
type=T       # T can be "video"
start=ST     # ST is the start time YYYY-MM-DD hh:mm:ss
end=ET       # ET is the end time YYYY-MM-DD hh:mm:ss
source=S     # S is the address of the source
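
For example, a switching configuration that inserts one alternate video source during a fixed time window could look like this (the ID, times, and path are illustrative):

[source1]
type=video
start=2014-01-01 12:00:00
end=2014-01-01 12:05:00
source=/home/user/alternate.mp4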


Here are some examples of how to use DashCast:

  • To transcode live video from a webcam and live audio from a microphone, and to segment them into 1000-millisecond segments:
 DashCast -vf video4linux2 -v4l2f mjpeg -vfr 30 -vres 1280x720 -v /dev/video0 -af alsa -a plughw:2,0 -conf dashcast.conf -live


  1. DashCast should be compatible with the dash.js player.
  2. ls /dev/video* lists all video capture devices. Use /dev/video# as your input video stream source.
  3. arecord -l lists all audio capture devices. Look for the card and device numbers you want to use, and use plughw:card#,device# as your input audio stream source.
  • To transcode a multiplexed video and audio and segment them in 500-millisecond segments for a live session:
 DashCast -av test.mp4 -seg-dur 500 -live-media
  • To transcode a multiplexed video and audio and segment them in 1000-millisecond segments and 200-millisecond fragments for an on-demand service:
 DashCast -av test.mp4 -seg-dur 1000 -frag-dur 200
  • To capture the screen video and transcode and segment in 1000-millisecond segments for a live service:
 DashCast -vf x11grab -vres 1280x720 -vfr 25 -v :0.0 -live
  • To capture a UDP stream and transcode and segment it in 1000-millisecond segments for a live service:
 DashCast -av "udp://@" -live


The following configuration file is used: