As discussed in our two previous articles, our mission with our precoding technology (available at bitsave.tech) is to develop deep perceptual preprocessors that leverage the power of data, machine-learning loss function design, and advanced deep neural network architectures in order to increase the perceptual quality of existing and upcoming video coding standards without requiring any change to the encoding or decoding process.
To put our technology under another stress test, we reran our previously reported MTurk validation of our deep perceptual precoder framework, this time using the latest VVC Test Model (VTM). In a nutshell, we used the Amazon Mechanical Turk service to ask independent MTurk workers from around the world to evaluate full HD video encoded with VVC, with and without our deep perceptual optimizer. We also compared an older, faster but less efficient codec, the MPEG/ITU-T HEVC/H.265, in conjunction with our precoder against stand-alone VVC. The aim of that last test was to see whether current standards can be boosted to the quality-bitrate levels expected of VVC, without the encoding and decoding complexity of the upcoming standard, or indeed the hardware upgrades it requires on the server and client sides.
We present here seven full HD (FHD) test video sequences from Xiph.org. They were chosen to represent a wide variety of content, with rapid or irregular motion, rich texture detail, and camera motion and zoom. Each sequence was compressed with the VVC encoder at 1.8 Mbps and 3.0 Mbps, and with the HEVC encoder at 3.0 Mbps.
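To make the three test conditions concrete, the sketch below assembles reference-encoder command lines matching the bitrates quoted above. The binary names, config file, input paths, and frame rate are illustrative assumptions (VTM's EncoderApp with its standard random-access configuration, and an x265-based HEVC encode via ffmpeg), not the exact commands used in our tests.

```python
# Hypothetical sketch of the three encoding conditions; paths, config file,
# and frame rate are placeholder assumptions, not our actual test setup.

def vtm_cmd(seq, bitrate_bps, out):
    """VTM EncoderApp invocation with rate control at the given target bitrate."""
    return [
        "EncoderApp",
        "-c", "encoder_randomaccess_vtm.cfg",   # standard VTM random-access config
        "-i", seq, "-b", out,
        "-wdt", "1920", "-hgt", "1080",          # full HD resolution
        "-fr", "30",                             # assumed frame rate
        "--RateControl=1",
        f"--TargetBitrate={bitrate_bps}",        # VTM expects bits per second
    ]

def hevc_cmd(seq, bitrate_bps, out):
    """HEVC/H.265 encode at the same target bitrate, here via ffmpeg + libx265."""
    return [
        "ffmpeg", "-i", seq,
        "-c:v", "libx265", "-b:v", f"{bitrate_bps // 1000}k",
        out,
    ]

# The three conditions described in the text.
conditions = [
    ("VVC @ 1.8 Mbps",  vtm_cmd("seq_1080p.yuv", 1_800_000, "seq_vvc_1800k.bin")),
    ("VVC @ 3.0 Mbps",  vtm_cmd("seq_1080p.yuv", 3_000_000, "seq_vvc_3000k.bin")),
    ("HEVC @ 3.0 Mbps", hevc_cmd("seq_1080p.y4m", 3_000_000, "seq_hevc_3000k.mp4")),
]

for label, cmd in conditions:
    print(label, "->", " ".join(cmd))
```

With the precoder in the loop, the preprocessed sequence would simply replace the raw input file in these commands; the encoder itself is untouched, which is the point of the approach.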
The full details of our encoding and viewing tests are discussed in the associated article on our news page. Each sequence below presents the video in two halves: since we show two videos concurrently at 1080p resolution, only half of each video can be displayed at any given time. The full set of videos can be downloaded via this link for independent inspection of the bitstream of each case.