% LICENSE: see LICENCE
\section{Interfacing the Tools}
\subsection{Files}
\label{sec-files}
The first versions of these tools used files and pipes to exchange data. For
offline generation of a multiplex or of modulated I/Q data, it is possible to
generate all files separately, one after the other.

Here is an example to generate a two-minute ETI file for a multiplex
containing two programmes:
\begin{itemize}
\item one DAB programme at 128kbps
\item one \dabplus{} programme at 88kbps
\end{itemize}

We assume that the audio data for the two programmes is available as
uncompressed 48kHz WAV in the files \filename{prog1.wav} and
\filename{prog2.wav}.

The first step is to encode the audio. The DAB programme is encoded to
\filename{prog1.mp2} using:
\begin{lstlisting}
odr-audioenc --dab -b 128 -i prog1.wav -o prog1.mp2
\end{lstlisting}

The \dabplus{} programme is encoded to \filename{prog2.dabp}. The extension
\filename{.dabp} is arbitrary, but since the framing is not the same as for
other AAC encoded audio, it makes sense to use a distinct extension. The
command is:
\begin{lstlisting}
odr-audioenc -i prog2.wav -b 88 -o prog2.dabp
\end{lstlisting}

The resulting files can then be used with ODR-DabMux to create an ETI file.
ODR-DabMux supports many options, which makes it much more practical to set
the configuration in a file than on a very long command line. Here is a short
configuration for this example, saved as \filename{2programmes.mux}:
\begin{lstlisting}
general {
    dabmode 1
    nbframes 5000
}

remotecontrol {
    telnetport 0
}

ensemble {
    id 0x4fff
    ecc 0xec ; Extended Country Code

    local-time-offset auto
    international-table 1
    label "mmbtools"
    shortlabel "mmbtools"
}

services {
    srv-p1 {
        label "Prog1"
    }
    srv-p2 {
        label "Prog2"
    }
}

subchannels {
    sub-p1 {
        ; MPEG
        type audio
        inputfile "prog1.mp2"
        bitrate 128
        id 10
        protection 5
    }
    sub-p2 {
        type dabplus
        inputfile "prog2.dabp"
        bitrate 88
        id 1
        protection 1
    }
}

components {
    comp-p1 {
        label Prog1
        service srv-p1
        subchannel sub-p1
    }
    comp-p2 {
        label Prog2
        service srv-p2
        subchannel sub-p2
    }
}

outputs {
    output1 "file://myfirst.eti?type=raw"
}
\end{lstlisting}

This file defines two components, each linking one service to one subchannel.
The IDs and the protection settings are also defined. The bitrate given in
each subchannel must correspond to the bitrate set at the encoder.

The duration of the ETI file is limited by the \lstinline{nbframes 5000}
setting. Each frame corresponds to $24$\ms, and therefore
$120 / 0.024 = 5000$ frames are needed for $120$ seconds. The output is
written to the file \filename{myfirst.eti} in the ETI(NI) format. Please see
Appendix~\ref{etiformat} for more options.

To run the multiplexer with this configuration, run:
\begin{lstlisting}
odr-dabmux 2programmes.mux
\end{lstlisting}

This will generate the file \filename{myfirst.eti}. Since each ETI(NI) frame
occupies $6144$ bytes, the file will be $5000 \cdot 6144$ bytes, or
approximately $30$\si{MB}, in size.

Congratulations! You have just created your first DAB multiplex! With the
configuration file, adding more programmes is easy. More information is
available in the \filename{doc/example.mux} file.
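For convenience, the three commands above can be combined into a small shell
script that regenerates the whole example from scratch. This is just a
convenience sketch that reuses the commands already shown, with the same file
names:
\begin{lstlisting}
#!/bin/sh
# Generate a two-minute ETI file for the two-programme example.
# Assumes prog1.wav, prog2.wav and 2programmes.mux are present
# in the current directory.
set -e

# Encode the DAB programme to MPEG Layer II at 128kbps
odr-audioenc --dab -b 128 -i prog1.wav -o prog1.mp2

# Encode the DAB+ programme at 88kbps
odr-audioenc -i prog2.wav -b 88 -o prog2.dabp

# Multiplex both programmes into myfirst.eti (5000 frames = 120s)
odr-dabmux 2programmes.mux
\end{lstlisting}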
\subsection{Over the Network}
In a real-time scenario, where the audio sources produce data continuously
and the tools have to run at the native rate, it is no longer possible to use
files to interconnect the tools. For this usage, a network interconnection is
available between the tools.

The standard protocol to carry both contribution (from audio encoder to
multiplexer) and distribution (from multiplexer to modulators) is EDI,
specified by ETSI~\cite{etsits102693}.\footnote{For a summary about the
ZeroMQ interface used before EDI, see Section~\ref{sec:zeromq} below.}
EDI can be carried over UDP or other unreliable links, and offers a
protection layer to correct bit errors. Over network connections where
occasional congestion can occur, EDI can also be carried over TCP, which
ensures that lost packets get retransmitted. Unless you are able to guarantee
reserved bandwidth for the EDI traffic, using TCP is the safer option.

While the main reason to use EDI is to put the different tools on different
computers, it is not necessary to do so. It is possible, and even encouraged,
to use this interconnection locally on the same machine, for increased
flexibility.

\subsubsection{Between Encoder and Multiplexer}
\label{sec:between_encoder_and_multiplexer}
Between ODR-AudioEnc and ODR-DabMux, the EDI protocol carries \dabplus{}
superframes or DAB frames, with additional metadata that contains the audio
level indication, a version field and a free-form identifier string for
monitoring purposes.\footnote{This metadata is carried in the custom EDI TAGs
\texttt{ODRv} and \texttt{ODRa}.} The multiplexer cannot easily derive the
audio level from the encoded bitstream without decoding it, so it makes more
sense to calculate this in the encoder and carry it along with the encoded
data.

On the multiplexer, the subchannel must be configured for EDI as follows:
\begin{lstlisting}
sub-fb {
    type dabplus
    bitrate 80
    id 24
    protection 3
    inputproto edi
    inputuri "tcp://*:7002"
    buffer-management prebuffering
}
\end{lstlisting}

The EDI input supports several options in addition to the ones of a
subchannel that uses a file input. The options are:
\begin{itemize}
\item \texttt{inputuri}: This defines the interface and port on which to
listen for incoming data. It must be of the form \texttt{proto://*:port},
where \texttt{proto} may be either \texttt{tcp} or \texttt{udp}.
\item \texttt{buffer-management}: Two buffer management approaches are
possible with EDI: \texttt{prebuffering} ignores the timestamps and
pre-buffers some data before it starts streaming, which makes it possible to
compensate for network jitter. The other option, \texttt{timestamped}, takes
into account the timestamps carried in EDI, inserting the audio into the ETI
frame associated with that same timestamp.
\item \texttt{buffer}: (Both buffer management settings) The input contains
an internal buffer for incoming data. The maximum buffer size is given by
this option, in units of frames ($24$\ms). Therefore, with a value of $40$,
you will have a buffer of $40 \cdot 24 = 960$\ms. The multiplexer will never
buffer more than this value, and will discard data when the buffer is full.
\item \texttt{prebuffering}: (Only in buffer management
\texttt{prebuffering}) When the buffer is empty, the multiplexer waits until
this number of frames is available in the buffer before it starts to consume
data.
\end{itemize}

The goal of having a buffer in the input of the multiplexer is to be able to
absorb network latency jitter: because IP does not guarantee anything about
latency, some packets will reach the multiplexer faster than others. The
buffer can then be used to avoid disruptions in these cases, and its size
should be adapted to the network connection. With both buffer management
techniques, it is a trade-off between absolute delay and robustness.

When using pre-buffering, you directly control the size of the buffer, and
you set it to a value depending on your network delays. When using
timestamped buffer management, the size of the input buffer is a consequence
of the effective delay you set in the timestamps.

If the encoder is running remotely on a machine, encoding from a sound card,
it will encode at the rate defined by the sound card clock. This clock will,
if no special precautions are taken, be slightly off frequency. The
multiplexer however runs on a machine whose system time is synchronised over
NTP, and will not show any drift or offset. Two situations can occur:

Either the sound card clock is a bit slow, in which case the input buffer in
the multiplexer will fill up to the amount given by \texttt{prebuffering},
and then start streaming data. Because the multiplexer will be a bit faster
than the encoder, the amount of buffered data will slowly decrease, until the
buffer is empty. Then the multiplexer will enter prebuffering, and wait again
until the buffer is full enough. This will create an audible interruption,
whose length corresponds to the prebuffering duration.

Or the sound card clock is a bit fast, and the buffer will be filled up
faster than data is consumed by the multiplexer. At some point, the buffer
will hit the maximum size, and one frame will be discarded. This also creates
an audible glitch.

Consumer-grade sound cards have clocks of varying quality. While these
glitches would only occur sporadically with some cards, bad sound cards can
provoke such behaviour at intervals that are not acceptable, e.g. more than
once per hour.

Both situations are suboptimal, because they lead to audio glitches, and also
degrade the ability to compensate for network latency changes. It is
preferable to use the drift compensation feature available in ODR-AudioEnc,
which ensures that the encoder outputs the encoded bitstream at the nominal
rate, aligned to the NTP-synchronised system time, and not to the sound card
clock. The sound card clock error is compensated for inside the encoder.
Complete examples of such a setup are given in the scenarios.
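In the meantime, here is a minimal sketch of such an encoder invocation,
feeding the \texttt{sub-fb} subchannel shown above. The host name
\texttt{mux.example.com} and the ALSA device name are placeholders; the
sketch assumes the \texttt{-d} (ALSA input), \texttt{-D} (drift compensation)
and \texttt{-e} (EDI output) options of ODR-AudioEnc, so please check
\texttt{odr-audioenc --help} for the options available in your version:
\begin{lstlisting}
# Encode from an ALSA sound card at 80kbps (matching the
# bitrate configured in the sub-fb subchannel), with drift
# compensation enabled, and send the result over EDI/TCP to
# the multiplexer. Host name is a placeholder.
odr-audioenc -d default -D -b 80 \
    -e tcp://mux.example.com:7002
\end{lstlisting}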
\subsubsection{Authentication Support}
In order to be able to use the Internet as a contribution network, some form
of protection has to be put in place to make sure the audio data cannot be
altered by third parties. Usually, some form of VPN is set up for this
purpose.
% Previous versions described the ZeroMQ encryption functionality here.

\subsubsection{Between Multiplexer and Modulator}
The EDI protocol can also carry the data of a complete ensemble from
ODR-DabMux to one or more instances of ODR-DabMod. In case you wish to
interface ODR-DabMux with a modulator that does not support EDI over TCP, but
your network is not stable enough to use UDP, you can use ODR-EDI2EDI. See
\url{http://github.com/Opendigitalradio/ODR-EDI2EDI} for information about
that tool.
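As an illustrative sketch, a distribution link over EDI/TCP could be set up
as follows. The port number $13000$ is an arbitrary choice, and the option
names follow the example configurations shipped with the tools
(\filename{doc/example.mux} of ODR-DabMux and the example \filename{.ini}
file of ODR-DabMod), against which they should be verified:
\begin{lstlisting}
; In the ODR-DabMux configuration: an EDI output that listens
; for TCP connections from one or more modulators.
outputs {
    edi {
        destinations {
            example_tcp {
                protocol tcp
                listenport 13000
            }
        }
    }
}
\end{lstlisting}
and, on the modulator side:
\begin{lstlisting}
; In the ODR-DabMod configuration file: receive the ensemble
; over EDI from the multiplexer running on the same machine.
[input]
transport=edi
source=tcp://localhost:13000
\end{lstlisting}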
\subsection{ZeroMQ}
\label{sec:zeromq}
Previous versions of this guide described an IP protocol based on ZeroMQ, a
library that permits the creation of a socket connection with automatic
connection management (connection, disconnection, error handling). This has
now been deprecated in favour of the standards-compliant EDI, which, when
carried over TCP, can also be used over congested networks like the Internet.

ZeroMQ was used both for encoded audio contribution and for distribution of
ensemble data, and was a custom, non-specified protocol. It was created
before EDI was implemented in the ODR-mmbTools, at a time when carrying ETI
inside IP was not sufficient for transporting complete timestamps, and when
there was no other obvious option for audio contribution besides inventing a
custom protocol. Now that EDI-over-TCP has proven to work satisfactorily over
congested networks, there is no longer any reason to keep a protocol that
does not conform to ETSI specifications.

\subsection{Pipes}
Pipes are an older real-time method to connect several encoders to one
multiplexer on the same machine. It uses the same configuration as the file
input, but instead of regular files, FIFOs, also called ``named pipes'', are
created first using \texttt{mkfifo}, as in the sketch below. This setup is
deprecated in favour of EDI.
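As a minimal sketch of this deprecated setup, reusing the file names of the
example in Section~\ref{sec-files}:
\begin{lstlisting}
# Create the named pipes first
mkfifo prog1.fifo
mkfifo prog2.fifo

# In the .mux configuration, the subchannels then point to the
# FIFOs instead of regular files, e.g.
#   inputfile "prog1.fifo"
#
# The encoders write to the FIFOs while the multiplexer reads
# from them, so all processes must run at the same time.
\end{lstlisting}
% vim: spl=en spell tw=80 et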