INF5072 - MONROE and AStream FAQ

 

Page for resources and frequently asked questions for MONROE and AStream.

 

Q: Where can I get the user manual for MONROE?

A: The latest version of the user manual can be found here: https://github.com/MONROE-PROJECT/UserManual

 

Q: Where is the AStream container located?

A: The AStream container for MONROE is at: https://hub.docker.com/r/andralutu/astream/

 

Q: What nodes shall I use?

A: The nodes allocated for the course are: 186 (CAT3), 187 (CAT3), 440 (CAT6), 441 (CAT3), 442 (CAT6), 443 (CAT3), 446 (CAT6), 448 (CAT6), 450 (CAT6), 451 (CAT3)

 

Q: Are all nodes the same?

A: The even-numbered nodes have two interfaces, while the odd-numbered nodes have one. Check the Resources tab to see which operators run on which nodes: https://www.monroe-system.eu/Resources.html

 

Q: What are the AStream parameters I can change?

A: The source code of AStream running on MONROE is at https://github.com/MONROE-PROJECT/Experiments/tree/master/experiments/astream. Below is a summary of the parameters you can change in AStream:
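
A sketch of the client-side options, based on the upstream AStream client (dash_client.py) that the MONROE container wraps; treat the exact flags as an assumption and verify them against the source linked above:

   # -m: URL of the MPD manifest
   # -p: adaptation scheme (basic, netflix or sara)
   # -n: number of segments to download before stopping
   # -d: keep the downloaded segments on disk
   python dash_client.py -m http://example.com/BigBuckBunny.mpd -p basic -n 60 -d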

 

Q: How should I set the parameters?

A: Apart from the experiment-specific settings, there are three important parameters.

 

Q: What browser shall I use?

A: We recommend using Chrome, as we have experienced some issues with Firefox due to CORS.

 

Q: I am getting the error message: "Delayed; container does not exist, or i/o timeout"

A: One possible reason is that the SIM card has run out of data quota, so the data transfer is throttled and the container cannot be deployed in time. In that case, write to the Slack channel so we can assign you a new SIM card.

 

OpenVQ

GPAC's DASH packaging works as follows:


(1) for each quality layer, create one H.264 file containing only the required header information, but no content.
(2) for each video segment, create one H.264 file per quality layer that lacks this header information.
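
For instance, a layout like this can be produced with GPAC's MP4Box; this is only a sketch with placeholder filenames, and the exact flags vary across GPAC versions (see MP4Box -h dash):

   # segment one quality layer into 4-second DASH segments plus an init file;
   # run once per quality layer to obtain the layout described above
   MP4Box -dash 4000 -rap -segment-name 'BigBuckBunny_4s' -out BigBuckBunny.mpd BigBuckBunny_1080p.mp4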


When a player wants to show a video, it must first fetch the H.264 file containing the correct header and send it to the decoder, followed by the segment it wants to play. After that, it can send further segments of the same quality layer to the decoder without repeating the header.

That is also something you must do for the quality test: prepend the header. The names are easy to find: they are the only URLs in the MPD file (or the only filenames on the server) that are in the same directory but follow a different naming pattern. For example:

   cat BigBuckBunny_4s_init.mp4 BigBuckBunny_4s1.mp4 BigBuckBunny_4s2.mp4 > segment.mp4
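
To decode a whole quality layer, a small loop keeps the segments in numeric order (a shell glob would sort segment 10 before segment 2); the filenames here are hypothetical:

   # start from the header, then append each segment of one quality layer in order
   cp BigBuckBunny_4s_init.mp4 layer.mp4
   for i in $(seq 1 150); do
     cat "BigBuckBunny_4s${i}.mp4" >> layer.mp4
   done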

 

The PEVQ algorithm has been tuned to give a good QoE estimate for videos of 8-12 seconds; it has not been tuned for 2-second or 10-minute videos or anything else. Keep that in mind when interpreting results. We have usually used OpenVQ to build a timeline: concatenate and measure the segments for seconds 0-12, then seconds 2-14, then 4-16, and so on.

To save time, you can instead use non-overlapping windows: 0-12 s, 12-24 s, and so on.

In this way, you can also stop after, for example, the first minute, or measure only the third minute.
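
The sliding-window concatenation can be scripted as in the following sketch; the filenames are hypothetical, and the OpenVQ invocation is left as a comment because it depends on your build:

   INIT=BigBuckBunny_2s_init.mp4
   WIN=6        # 6 x 2-second segments = one 12-second window
   LAST=30      # index of the last segment to cover
   start=1
   while [ $((start + WIN - 1)) -le $LAST ]; do
     cp "$INIT" "window_${start}.mp4"
     for i in $(seq "$start" $((start + WIN - 1))); do
       cat "BigBuckBunny_2s${i}.mp4" >> "window_${start}.mp4"
     done
     # ...run OpenVQ on window_${start}.mp4 against the matching reference window...
     start=$((start + 1))    # slide by one segment; use $((start + WIN)) for 0-12, 12-24, ...
   done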

The reference encoding should always be the top quality, because that is the best the user can ever get. For Big Buck Bunny it is also possible to download the 4K source file as a reference, but I advise against it, because users do not experience 4K on a phone as particularly better than 1K. I propose using the best-quality segments on the server.


Since this top quality is the reference, you should adapt the lower-quality segments to it; that way you do not need to convert the reference version, which makes more sense. If you choose to retrieve all of the segments in all of the qualities to your local computer, you can keep them. You then need either the AStream client's log file or the server's HTTP log to see which quality each segment had: the segment index and the quality layer number are both part of the URL, so you do not need to download the actual data again and again.
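
As an illustration only: if the log lines contain URLs with a pattern like .../bunny_<bitrate>bps/bunny_2s<index>.m4s (a hypothetical pattern; adjust the regular expression to your dataset), the quality timeline can be extracted with:

   # print "segment <index> quality <bitrate>" for each request, ordered by index
   grep 'GET ' access.log \
     | sed -nE 's#.*/bunny_([0-9]+)bps/bunny_2s([0-9]+)\.m4s.*#segment \2 quality \1#p' \
     | sort -n -k2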

Finally: OpenVQ does not make an adaptation decision for you. Problem 1: if the videos do not match in resolution, you must preprocess the lower-quality video by upscaling it. Problem 2: if the videos do not have the same number of frames, you must make sure to fix that. Problem 3: if there is some other conflict after the concatenation, an ffmpeg stream-copy operation that copies the video data but rewrites the headers may be a good idea. Unfortunately, ALL problems related to any of these three situations are indicated by OpenVQ with the same error message, saying that the resolution is different. Look at the output of ffmpeg or ffprobe to find out whether you have problem 1 or problem 2; if it is neither, it is very probably problem 3.
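
Hedged sketches of the corresponding fixes, using standard ffmpeg/ffprobe options (filenames, resolution and frame count are placeholders):

   # inspect resolution and frame count first
   ffprobe -v error -select_streams v:0 -show_entries stream=width,height,nb_frames input.mp4

   # problem 1: upscale the low-quality clip to the reference resolution
   ffmpeg -i low.mp4 -vf scale=1920:1080 -an low_upscaled.mp4

   # problem 2: cut both clips to the same number of frames (here 300)
   ffmpeg -i clip.mp4 -frames:v 300 -an clip_trimmed.mp4

   # problem 3: rewrite the container headers without re-encoding
   ffmpeg -i concatenated.mp4 -c copy fixed.mp4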