As cloud technology continues to advance, so does the demand for high-throughput computing systems that can handle large volumes of data. At ByteNite, we are committed to providing our clients with cutting-edge distributed computing solutions that are both fast and scalable. We recently ran tests to measure the scalability of ByteNite's grid computing system, and the results were both exciting and unexpected: the more data we ingested, the faster the total throughput, or speed, of our distributed encoder. This could mark a real turning point in high-throughput computing. It means that ByteNite's computing power can handle massive data volumes simultaneously better than traditional cloud services, making it ideal for organizations of all sizes. This breakthrough in scalability has the potential to significantly improve intensive applications like video encoding, paving the way for faster and more efficient computing solutions.
Our tests studied the relationship between video volume and global processing speed* for a set of 20 encoding jobs. Each job took as input a video roughly twice as long and heavy as the previous one, and for each we measured the global processing speed (video duration divided by processing time). See the Test Details section below for more information.
We found that the more video ByteNite ingests, the faster the global speed. Specifically, the global speed went from 1x for a 10-second encoding job to 19.4x for a 5130-second job (see Figure 1). These results suggest that processing time does not increase proportionally with video volume on ByteNite: each time we doubled the volume assigned to the grid, the processing speed rose as well, so the processing-time curve grew flatter and flatter (see Figure 2).
This result can be attributed to our chunk-based encoding system, which runs on a virtually unlimited distributed infrastructure. On ByteNite, the video is broken down into smaller chunks that are processed simultaneously by multiple devices in the computing grid. When the video length doubles, the number of chunks that need to be processed also doubles, allowing more devices to work simultaneously on different chunks. This increases the efficiency of the computing grid and can lead to a doubling of the total throughput.
*Note: While the speed measures the number of video seconds processed every second by our system, a job's throughput on ByteNite is the number of video chunks processed per second. Since each video chunk in this experiment was 10 seconds long, the throughput can be obtained by dividing the speed by 10. For example, a global speed of 10x corresponds to an average throughput of one chunk (task) per second.
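To make the chunk-based mechanism concrete, here is a minimal Python sketch of the idea. It is our own illustration, not ByteNite's actual implementation: the `split_into_chunks` helper and the stubbed `encode_chunk` worker are invented for the example, and a thread pool stands in for the devices of the grid.

```python
from concurrent.futures import ThreadPoolExecutor

CHUNK_SECONDS = 10  # chunk length used in the tests described above

def split_into_chunks(duration: int) -> list[tuple[int, int]]:
    """Break a video of `duration` seconds into (start, end) chunk intervals."""
    return [(start, min(start + CHUNK_SECONDS, duration))
            for start in range(0, duration, CHUNK_SECONDS)]

def encode_chunk(interval: tuple[int, int]) -> str:
    """Stand-in for the real per-chunk work (e.g. an H.264/H.265 encode);
    on ByteNite each chunk is handled by a different device in the grid."""
    start, end = interval
    return f"encoded [{start}s-{end}s)"

def encode_video(duration: int, devices: int) -> list[str]:
    chunks = split_into_chunks(duration)
    # The thread pool stands in for the grid: doubling the video doubles
    # the chunk count, letting twice as many workers run at once.
    with ThreadPoolExecutor(max_workers=devices) as grid:
        return list(grid.map(encode_chunk, chunks))

print(encode_video(duration=60, devices=6))  # 6 chunks encoded in parallel
```

The point of the sketch is the scheduling shape: the number of parallel units grows with the input, which is what allows throughput to rise with volume.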
When it comes to encoding, the relationship between video length and encoding speed is not always straightforward. The encoding process can be complex, involving algorithms that may take longer on certain frames depending on factors like the video's content and the encoding parameters used. However, the results of the tests above on our solution support some very interesting conclusions.
Firstly, we found that the throughput of ByteNite's system always correlates positively with video length. This means that as video length increases, so does the speed at which ByteNite works through the video chunks, thanks to more efficient parallelization, which in turn increases the grid's throughput.
Moreover, we found that for standard encoding jobs using popular encoders and encoding parameters, ByteNite's system achieves double-digit speeds (10x and above) from 10 minutes of video upwards. This is a significant improvement over traditional encoding systems, which may struggle to maintain high throughput as video volume increases because their capacity is capped.
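As a quick cross-check against the regression fit reported under Test Details below: for d = 600 seconds (10 minutes), s = 1.769 * log2(600) - 6.617 ≈ 1.769 * 9.23 - 6.617 ≈ 9.7x, right at the edge of double-digit territory.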
But what's even more interesting is that all the previous results hold when the video volume is aggregated across jobs (e.g., sixty 10-second video jobs instead of one 10-minute job). By running the same set of tests with multiple smaller videos, we obtained the same processing throughput as before. This is a crucial finding because it shows that ByteNite scales equally well horizontally and vertically. Our system performs well under stress and is designed to ingest big workloads with the same ease as small ones. As a consequence, ByteNite's encoding system can handle large amounts of data from multiple customers simultaneously, predicting processing times accurately and managing the workloads efficiently.
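The chunk arithmetic makes this intuitive: since chunks are 10 seconds long, sixty 10-second jobs produce sixty chunks, exactly the same workload the grid sees for a single 600-second video, so the parallelization opportunity is identical in both cases.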
This outstanding scalability is critical in today's data-driven world, where organizations need to process and analyze vast amounts of data quickly and efficiently (and not only video). By enabling ByteNite to process large amounts of data from multiple customers and consistently meet their throughput needs without added expense, our system is a game-changer in the world of cloud computing.
These exciting findings prove that our distributed computing model is not only fast but also incredibly scalable. ByteNite can process massive amounts of data simultaneously, making it an ideal solution for organizations of all sizes. By leveraging the processing power of idle devices, ByteNite can encode videos at a much faster rate than most cloud computing services. We offer a flexible pricing model that allows organizations to pay only for the computing resources they need, so companies can scale their computing power up or down without incurring significant costs. Finally, by utilizing idle devices, our model reduces organizations' carbon footprint while delivering high performance and security.
The input videos used for the tests were obtained from the LIVE Video Quality Challenge (VQC) Database by concatenating 10-second clips progressively. The resulting videos ranged in duration from 10 seconds up to 5130 seconds.
The same video volumes were replicated in the second experiment by launching the original 10-second clip jobs in batches. All the input videos were originally encoded with AVC at 1920×1080 pixels and 29.97 frames per second, without audio. The encoding parameters used for the libx264 and libx265 jobs can be found here.
t = processing time
d = video duration
s = global processing speed = d / t
r = average throughput = s / 10
Linear regression results:
s = 1.769 * log2(d) - 6.617
t = 0.565 * d / (log2(d) - 3.741)
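For readers who want to play with these fits, here is a minimal Python sketch (our own illustration, not part of the test tooling) that evaluates the fitted model over doubling durations:

```python
import math

def predicted_speed(d: float) -> float:
    """Fitted global processing speed s for a video of duration d (seconds)."""
    return 1.769 * math.log2(d) - 6.617

def predicted_time(d: float) -> float:
    """Fitted processing time t; algebraically equivalent (up to rounding)
    to d / predicted_speed(d)."""
    return 0.565 * d / (math.log2(d) - 3.741)

# Doubling durations spanning the tested range (10 s to 5130 s in the tests).
# The fit's denominator requires log2(d) > 3.741 (about 13 s), so start at 20 s.
for d in (10 * 2**k for k in range(1, 10)):
    s = predicted_speed(d)
    t = predicted_time(d)
    r = s / 10  # average throughput in chunks per second (10 s chunks)
    print(f"d = {d:5d} s  ->  s = {s:5.2f}x,  t = {t:7.1f} s,  r = {r:4.2f} chunks/s")
```

The throughput column uses the 10-second chunk length from the footnote above (r = s / 10).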