Analysis
As explained in the Method section, the measurements were taken on the server. We defined two scenarios in order to compare the two different transcoding modes on the considered parameters.
The goal is to capture the evolution of three parameters (CPU load, disk usage and bandwidth) over time while five clients watch the same video. The difference between the scenarios lies in the transcoding mode used by the server:
- In Sc1, the video had not been transcoded when the clients requested it; it was transcoded on-the-fly while they were watching it. Afterwards, the transcoded video was removed from storage in order to conduct the same experiment again (the removal function is explained in the [Clear streams](Clear streams) section).
- In Sc2, the video had already been transcoded and was available in the server's storage, in its representations, when the clients requested it. The content was streamed entirely, then the clients disconnected before the next experiment was started.
Here are the curves we expected to obtain.

Figure 1 - Expected measures
To avoid biased results, we repeated each scenario 4 times, i.e., we ran the video streaming 4 times and kept the measurements from all 4 experiments, named M1, M2, M3 and M4.
aa: It is important to specify the configuration of your virtual machine, because these measurements may differ on a more or less powerful machine.
In the first scenario (Sc1), we expected the used storage space to increase progressively while the video was being transcoded. Accordingly, the CPU load should be high during this period, as transcoding requires heavy computation. Concerning the bandwidth, we expected to be able to correlate the curve with the clients' buffer length; however, as we did not measure it, we can only expect to see intense traffic activity at the beginning and occasionally during the transcoding, as more chunks become available.
Considering our expectations, we virtually separated each experiment into 4 steps:
- Clients request the video
- Server starts the transcoding
- Server finishes the transcoding
- Clients finish the video
Using the time.sh script (cf. Method), we timestamped the 1st, 3rd and 4th steps in order to correlate these events with the results.
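Conceptually, all time.sh does is record a wall-clock timestamp next to an event label; the log can then be matched against the measurement time axis. A minimal Python equivalent (the event names below are made up for illustration):

```python
import time

def timestamp_event(label, log, clock=time.time):
    """Append (wall-clock time, label) to the event log.

    This mirrors what time.sh does on the server: each timestamped
    event can later be placed on the measurement curves.
    """
    log.append((clock(), label))
    return log

# Hypothetical labels for the three timestamped steps.
events = []
timestamp_event("clients_request_video", events)
timestamp_event("server_finishes_transcoding", events)
timestamp_event("clients_finish_video", events)
```

Since the timestamps are monotonically non-decreasing, the events can be ordered and aligned with the sampled CPU, disk and network values.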
Observe that when the server starts transcoding, it transcodes all the chunks of a video, not just the chunks adjacent to the ones requested by the client.
aa: Is the paragraph above correct?
The following curves display the superpositions of the same parameter measured during the four experiments. They were adjusted horizontally to have the event “Clients request the video” at the same time.
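The horizontal adjustment is simply a shift of each experiment's time axis so that its "Clients request the video" timestamp becomes t = 0. A sketch (the sample series and timestamps are made up):

```python
def align_on_event(series, event_time):
    """Shift a (time, value) series so the given event occurs at t = 0."""
    return [(t - event_time, v) for t, v in series]

# Two hypothetical CPU-load series, each paired with the absolute
# time (seconds since server start) of the request event.
experiments = {
    "M1": ([(10, 5), (12, 80), (30, 10)], 10),
    "M2": ([(55, 4), (57, 82), (76, 9)], 55),
}

aligned = {name: align_on_event(s, t0) for name, (s, t0) in experiments.items()}
# Every aligned series now has the request event at t = 0,
# so the four curves can be superimposed directly.
```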

Figure 2 - CPU load in % from the 4 experiments
The CPU load plateaus at around 80% after the clients request the video. This corresponds to the transcoding happening on the server. A few seconds pass before the load skyrockets; this may correspond to the time the server takes to process the requests and evaluate the transcoding parameters. After some time, the load comes back to lower values, probably because the video is by then fully transcoded.

Figure 3 - Disk usage in % from the 4 experiments
As expected, the used space increases progressively after the clients request the video. The stair-shaped curve can be explained by two factors: either the video is transcoded into multiple chunks, or the variation is so small that the tool we used could not be more precise than a hundredth of a percent. The second explanation seems more likely, as the video contains more chunks than the curve shows stairs.
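The second explanation can be checked with a rough order-of-magnitude computation. Assuming, purely for illustration, a 20 GiB disk and roughly 1 MiB transcoded chunks (both figures are made up, not measured), a tool limited to hundredths of a percent only registers a step once usage grows by 0.01% of the disk, so several chunks can hide inside one visible stair:

```python
# Hypothetical figures -- adjust to the actual VM disk and chunk sizes.
disk_bytes = 20 * 1024**3           # 20 GiB disk
chunk_bytes = 1 * 1024**2           # ~1 MiB per transcoded chunk
resolution = 0.0001                 # tool precision: 0.01% = 1e-4

step_bytes = disk_bytes * resolution          # smallest visible increase
chunks_per_stair = step_bytes / chunk_bytes   # chunks hidden in one stair

# chunks_per_stair > 1 means fewer visible stairs than chunks,
# which matches the observation in Figure 3.
```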

Figure 4 - Network out in kbps from the 4 experiments
Packets are sent from the server throughout the viewing. The shapes of the four curves are very similar; however, some peaks appear at different times.
All the experiments start with a large outgoing data transfer: as the first chunks are sent and the clients' buffers fill up, the demand can be high.

Figure 5 - Network out in kbps and CPU load in % from the 1st experiment
Both curves are strongly correlated: the CPU transcoding the video clearly coincides with the server sending large amounts of data. It is important to note that the CPU load increases before the first peak of network output; indeed, the chunks need to be created before they can be sent.
When the CPU load decreases, we do not observe any change on the network curve. This is not surprising: the server serves the clients' requests as it can, and the end of the plateau only means the transcoding has ended, while some chunks remain to be sent to the clients.
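The visual impression of correlation, including the lead of the CPU over the network peaks, could be quantified with a lagged Pearson correlation. A sketch on synthetic data (we did not run this on the real measurements):

```python
def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def best_lag(cpu, net, max_lag=10):
    """Lag (in samples) at which the CPU series best predicts the
    network series; a positive lag means the CPU leads the network."""
    scores = {}
    for lag in range(0, max_lag + 1):
        x, y = cpu[: len(cpu) - lag], net[lag:]
        scores[lag] = pearson(x, y)
    return max(scores, key=scores.get)

# Synthetic example: the network curve is the CPU curve delayed by 3 samples.
cpu = [0, 0, 10, 80, 80, 75, 20, 5, 0, 0, 0, 0, 0, 0]
net = [0, 0, 0] + cpu[:-3]
```

On this synthetic pair, `best_lag` recovers the 3-sample delay; applied to the real Figure 5 data, it would give the delay between transcoding a chunk and sending it.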

Figure 6 - Disk usage in % and CPU load in % from the 1st experiment
Disk usage and CPU load are strongly correlated. While the video is being transcoded and served to the clients at the same time, disk usage rises until it plateaus at its maximum. Then, each time the server needs to send chunks, small spikes appear.

Figure 7 - Disk usage in % and Network out in kbps from the 1st experiment
(WIP)
In contrast to the first scenario, we expected the storage usage to stay at the same level during and after the test phase: when transcoding is done offline, all chunks are already transcoded before any client starts playing a video. Concerning the bandwidth, we expected to be able to correlate the curve with the clients' buffer length; however, as we did not measure it, we can only expect to see intense traffic activity at the beginning and much less at the end, once the clients have downloaded all the chunks. In that respect, we did not expect any difference from scenario 1 (transcoding online).
The following curves display the superpositions of the same parameter measured during the four experiments. They were adjusted horizontally to have the event “Clients request the video” at the same time.

Figure 8 - CPU load in % from the 4 experiments
Since the video is already transcoded, we do not see any huge spike of CPU load. However, there are some small spikes, which we attribute to video chunks being sent to the clients.

Figure 9 - Disk usage in % from the 4 experiments
As predicted, the curve of this diagram is perfectly flat. This follows directly from how offline transcoding works: nothing is written to storage during the streaming.

Figure 10 - Network out in kbps from the 4 experiments
Packets are sent from the server throughout the viewing. The shapes of the four curves are very similar; however, some peaks appear at different times.
All the experiments start with a large outgoing data transfer: as the first chunks are sent and the clients' buffers fill up, the demand can be high.
It would take further investigation to understand why the bandwidth consumption differs from scenario 1 (transcoding on-the-fly). One hypothesis is that in scenario 1 there may be cases where a client requests a chunk but the transcoded chunk is not yet available and is sent with a certain delay, which smooths out the bandwidth peaks. On the contrary, in this scenario all chunks are already transcoded and available, and they can be sent back to the clients immediately upon request. Therefore, all the chunks the clients request at the beginning to fill their empty buffers are sent immediately and constitute the first peak. The buffers are then sufficiently full, and the number of chunks sent over the network decreases.
aa: Do you agree with the paragraph above?
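This hypothesis can be illustrated with a toy model (all numbers are made up, not measured): in Sc2 every requested chunk ships immediately, while in Sc1 shipping is capped by the rate at which transcoded chunks become available, which spreads the initial burst over time:

```python
def sent_per_tick(requests, transcode_rate=None):
    """Chunks actually sent at each tick.

    requests: chunks the clients ask for at each tick.
    transcode_rate: max chunks that become available per tick
    (None = everything is pre-transcoded, as in Sc2).
    """
    sent, ready, backlog = [], 0, 0
    for r in requests:
        backlog += r
        if transcode_rate is None:
            shipped = backlog              # Sc2: send everything at once
        else:
            ready += transcode_rate        # Sc1: wait for transcoded chunks
            shipped = min(backlog, ready)
            ready -= shipped
        backlog -= shipped
        sent.append(shipped)
    return sent

# Five clients filling empty buffers: a burst, then steady playback.
requests = [25, 5, 1, 1, 1, 1, 1, 1]
sc2 = sent_per_tick(requests)                    # offline: immediate
sc1 = sent_per_tick(requests, transcode_rate=6)  # on-the-fly: capped

# max(sc2) > max(sc1): the pre-transcoded scenario shows a sharper
# initial peak, while on-the-fly transcoding smooths it out.
```

Both runs send the same total number of chunks; only the shape of the curve changes, which is consistent with the sharper first peak observed in Figure 10.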

Figure 11 - Network out in kbps and CPU load in % from the 3rd experiment
(WIP)