In video surveillance, the race for higher resolutions is well and truly on! As an ever-increasing amount of video is archived on digital devices or transmitted over network infrastructure, the need to consider data file sizes becomes more pressing. Here Benchmark considers the performance of compression algorithms.
Video compression is a necessary tool. An uncompressed standard definition image is around 400-500KB, and real-time video delivers 25 of those frames every second. That equates to up to 12.5MB every second, or 750MB per minute. It sounds like a lot, especially when you consider that a single camera would therefore create 45GB of data per hour! That’s over 1TB per day, per camera.
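For those who like to check the maths, the figures above can be reproduced in a few lines of Python. The 500KB-per-frame and 25fps values are simply the assumptions quoted in the paragraph above, so treat this as a back-of-the-envelope sketch rather than a measurement.

```python
# Back-of-the-envelope figures for one uncompressed standard
# definition camera, using the assumptions quoted above:
# roughly 500KB per frame at 25 frames per second.

FRAME_SIZE_KB = 500       # assumed size of one uncompressed SD frame
FRAMES_PER_SECOND = 25    # real-time video

mb_per_second = FRAME_SIZE_KB * FRAMES_PER_SECOND / 1000
mb_per_minute = mb_per_second * 60
gb_per_hour = mb_per_minute * 60 / 1000
tb_per_day = gb_per_hour * 24 / 1000

print(f"{mb_per_second:.1f} MB/s")    # 12.5 MB/s
print(f"{mb_per_minute:.0f} MB/min")  # 750 MB/min
print(f"{gb_per_hour:.0f} GB/hour")   # 45 GB/hour
print(f"{tb_per_day:.2f} TB/day")     # 1.08 TB/day
```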
As with anything, a compromise has to be made, and the very first thing to consider is video compression. There is a choice of algorithms out there, and all essentially remove redundant data to reduce the size of the video files. This makes network transmission and efficient storage practical.
There are some technologies that transmit uncompressed video, but these typically require one dedicated cable per video stream. It might sound as if the overall quality would be higher, but the minute you want to do something useful with that stream – record it, replay it, manage it or carry out any automated analytical process – it will need to be compressed!
Whilst the computational complexity of the algorithms might be high, for the user compression appears – on the face of things – to be quite simple. The compression engine is built into the device, and it runs in the background. Users might be able to tweak performance with regard to the degree of compression, but generally the algorithms just get on with things.
Data considerations
The introduction of digital technology has been a major positive for the video surveillance sector. It has brought with it so many benefits that it is difficult to remember how video-based security functioned effectively before the changes occurred. The advent of digital storage has been almost entirely good for the security industry, but it does have one downside: the size of the image files! Given all of the benefits, reducing file sizes was always going to be a serious concern.
Image compression has existed for many years in other sectors, and efficient low-cost compression algorithms were constantly being developed to serve the consumer marketplace. Of course, once an algorithm is created, that is not the end of the task. If anything, it’s the beginning: most algorithms need to be set up to perform at an optimum level.
Boosting performance
There are two issues to consider when looking at image compression. The first is how well the algorithm reduces data without creating visible artefacts. The second is how much processing power the compression engine requires to work effectively. The second point is often forgotten.
For example, JPEG and M-JPEG compression work in a very formulaic way, treating each frame of video as a separate piece of data. This results in high quality images, but file sizes are larger than those produced by most predictive coding algorithms. However, the processing required to run the algorithm is low.
Conversely, with H.264 the video surveillance sector tends to use I-frames and P-frames (the baseline profile) when compressing footage. Some manufacturers also use other profiles which include B-frames, predicted from both previous and subsequent frames. The result is better quality for a given bit rate, but the computational demands are much higher. If a device struggles to deliver the required level of processing, there can be visible aberrations and artefacts, plus increased latency.
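To illustrate why predictive coding shrinks the stream, and why B-frames push the saving further, the rough sketch below compares an all-intra stream with I/P and I/P/B structures. The per-frame sizes are invented purely for illustration; they are not measurements from any particular device.

```python
# Illustrative only: compare average frame sizes for an all-intra
# stream (M-JPEG style), an I/P stream (H.264 baseline style) and
# an I/P/B stream. The per-frame sizes below are assumptions chosen
# to show the shape of the trade-off, not measured values.

ASSUMED_FRAME_KB = {"I": 60.0, "P": 15.0, "B": 8.0}

def average_frame_kb(gop_pattern: str) -> float:
    """Average compressed frame size for a repeating frame pattern,
    e.g. 'I' (all-intra) or 'IPPPPPPPPPPP' (one I-frame in twelve)."""
    total = sum(ASSUMED_FRAME_KB[frame_type] for frame_type in gop_pattern)
    return total / len(gop_pattern)

for name, pattern in [
    ("All-intra (M-JPEG style)", "I"),
    ("I/P only (baseline style)", "I" + "P" * 11),
    ("I/P/B (higher profile)",   "I" + "BBP" * 3 + "BB"),
]:
    avg = average_frame_kb(pattern)
    mbit_per_second = avg * 25 * 8 / 1000   # 25fps, KB -> Mbit
    print(f"{name:26s} {avg:5.1f} KB/frame  {mbit_per_second:4.1f} Mbit/s")
```

The absolute numbers depend entirely on the scene, the encoder and the settings; the point is the ratio between the three patterns, and the fact that the more efficient structures demand more processing to achieve it.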
The video surveillance sector uses several different image compression algorithms, and all of them work in slightly different ways. They also have some effect on the final image, and the extent of this is determined by two main factors: the quality of the implementation, and the ratio at which compression is applied.
The main algorithms employed by the industry are H.264, MPEG-4 and M-JPEG. There are others: JPEG2000 crops up on occasion, there are still a few wavelet-based machines around, and some manufacturers use proprietary algorithms. The issue with the latter is that you don’t really have any guide as to what to expect; final performance depends upon what the manufacturer set out to achieve!
It is a common mistake to believe that all devices using a certain algorithm will compress video in the same way. By way of example, much has been made of the H.264 algorithm, and the hype suggests that it always delivers excellent image quality.
In reality there are several profiles and levels of H.264, and if the algorithm has not been well implemented, the image will always suffer.
It can be the case that a device with well-implemented compression running at a high ratio actually delivers better images than a DVR running a poorly implemented algorithm at a low ratio. A compression engine that is well implemented can work harder, producing smaller files with cleaner images.
A hard choice?
Given that algorithms can be implemented in different ways, is there any sensible way to choose between them? All too often, the decision about which device to specify is made on the basis of other features and functions, and you simply end up with whatever compression the unit utilises.
When Benchmark tests video devices, we always look at all of the compression options. You might think that selecting an algorithm is simple when devices offer a selection of techniques. However, there are often restrictions on how some of the algorithms perform, due to the cost of the processing required.
On the basis of pure image sharpness, Motion-JPEG is hard to beat, and for analogue solutions it remains the compression algorithm of choice. However, when applied to network-based devices it is increasingly difficult to recommend because of missed frames, high latency and artefacting. Given that most systems will use a constant bit rate, there can also be vast fluctuations in scene quality.
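The sketch below shows why a constant bit rate leads to fluctuating quality: every frame gets the same budget, so busy scenes have to be quantised far harder than quiet ones. The budget and complexity values are assumptions chosen only to make the point.

```python
# Illustrative only: why a constant bit rate gives fluctuating quality.
# Every frame gets the same budget, so busy scenes are quantised much
# harder than quiet ones. All numbers here are invented for the example.

CBR_BUDGET_KB = 25.0             # fixed per-frame budget (assumed)
KB_PER_COMPLEXITY_UNIT = 20.0    # KB needed per unit of scene detail
                                 # for "full" quality (assumed)

# Relative scene complexity over time: 1.0 = quiet, 4.0 = very busy
scene_complexity = [1.0, 1.2, 1.1, 3.8, 4.0, 3.5, 1.3, 1.0]

for index, complexity in enumerate(scene_complexity):
    kb_needed = complexity * KB_PER_COMPLEXITY_UNIT
    # Fraction of "full" quality achievable within the fixed budget
    quality = min(1.0, CBR_BUDGET_KB / kb_needed)
    print(f"frame {index}: needs {kb_needed:5.1f} KB, "
          f"gets {CBR_BUDGET_KB:.1f} KB -> relative quality {quality:.2f}")
```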
The use of M-JPEG is in decline. Many manufacturers include it because it’s easy to configure and requires low levels of processing, but in the field it is approaching the end of its usefulness as resolutions rise.
The market has also started to fall out of love with MPEG-4. Because of the degree of compression required to make video surveillance practical, MPEG-4 footage has always had a degree of softness. Its strength becomes evident when video is transmitted over networks: indeed, Benchmark tests have shown that at very low bitrates, MPEG-4 can still compete favourably with H.264 in some applications.
When we refer to MPEG-4 in video surveillance, we typically mean MPEG-4 Part 2. However, H.264 is also MPEG-4, in this case Part 10. H.264 was developed to deliver quality video over lower bandwidths, and it can handle HD and megapixel streams.
When well implemented, it is very impressive, delivering clean and sharp video with smooth motion and high colour accuracy. Given that the algorithm has been adopted by the consumer market, the cost of chipsets is falling, and unless manufacturers buy from the very cheapest suppliers, the compression engines are likely to perform well.
The algorithm can display background shimmer and a slight softness when over-compressed, but at like-for-like quality Benchmark has easily achieved reductions of 40 per cent in file sizes. This equates to more efficient storage, reduced transmission latency and very smooth video delivery.
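As a rough indication of what a 40 per cent saving means in practice, the sketch below works through the storage for a hypothetical 16-camera system retaining 31 days of footage. The 4Mbit/s baseline stream, camera count and retention period are assumptions for the sake of the example, not measured figures.

```python
# Rough illustration of what a 40 per cent reduction in file size means
# for storage. The 4Mbit/s baseline per camera, camera count and
# retention period are assumptions for the example, not measured figures.

BASELINE_MBIT_S = 4.0    # assumed per-camera stream before the saving
REDUCTION = 0.40         # 40 per cent smaller files at like-for-like quality
CAMERAS = 16
RETENTION_DAYS = 31

def total_storage_tb(mbit_per_second: float) -> float:
    """Total storage in TB for all cameras over the retention period."""
    gb_per_camera_day = mbit_per_second / 8 / 1000 * 86400  # Mbit/s -> GB/day
    return gb_per_camera_day * CAMERAS * RETENTION_DAYS / 1000

before = total_storage_tb(BASELINE_MBIT_S)
after = total_storage_tb(BASELINE_MBIT_S * (1 - REDUCTION))
print(f"Before: {before:.1f} TB  After: {after:.1f} TB  "
      f"Saving: {before - after:.1f} TB")
```

On those assumed figures, the saving runs to several terabytes over a single month; scale the camera count or retention period up and the case for efficient compression only gets stronger.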
In summary
Given the market drivers, H.264 will be the algorithm of choice – until something better comes along. It is certainly the first choice for the Benchmark test team, who have handled hundreds of devices using various algorithms.
It’s not a guarantee of quality, but if your devices don’t deliver clean and smooth images, then look around. Good implementations are becoming much easier to find!