When considering video archiving, the traditional approach is to opt for a centralised solution, with all video streams from across the site being brought back to a central point where the relevant data is stored. New developments in video management technology mean that the concept of distributed archiving has become a reality. While centralised recording remains the most common choice, Benchmark considers the alternative options.
When designing a video surveillance solution, the considerations with regard to video archiving used to be relatively simple. It was a case of considering the number of video inputs, calculating the required frame rate and resolution, and then using the required retention time of video footage to specify an appropriate recording device – typically DVRs – which met the specification.
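That sizing exercise can be sketched as a back-of-the-envelope calculation. The figures below – camera count, average bitrate and retention period – are illustrative assumptions only; real bitrates depend on resolution, frame rate, compression and scene activity.

```python
# Illustrative storage sizing for a video archive.
# All figures (bitrate, camera count, retention) are example assumptions.

def required_storage_tb(cameras: int, bitrate_mbps: float, retention_days: int) -> float:
    """Return the archive capacity needed, in terabytes.

    bitrate_mbps is the average recorded bitrate per camera, which in
    practice depends on resolution, frame rate and compression.
    """
    seconds = retention_days * 24 * 60 * 60
    total_bits = cameras * bitrate_mbps * 1_000_000 * seconds
    return total_bits / 8 / 1e12  # bits -> bytes -> terabytes

# Example: 16 cameras at an assumed average of 2 Mbps each, 31 days' retention.
print(f"{required_storage_tb(16, 2.0, 31):.1f} TB")  # -> 10.7 TB
```

In practice the same arithmetic is run per camera group rather than per system, which is exactly what makes zone-by-zone archiving decisions possible.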
That this process was relatively simple shouldn’t be seen as an indication that the surveillance sector had hit upon the right formula for video archiving. Indeed, the specification was relatively simple predominantly because the archiving technology was inflexible. Today we can recognise how the limited technology was actually a significant restriction for those designing and implementing video surveillance systems.
Although not best practice, technological issues often meant that some systems were designed with a core element of compromise. Often the decision was made based upon how much video a site was prepared to lose, or to put it another way, how much video it could afford to keep.
For many years, centralised recording was – and to a certain degree still is – the predominant model when designing video surveillance solutions. The reason for this is historical. In the past, technology simply didn’t allow any other approach to be achieved for a realistic cost. Every camera needed a dedicated cable linking it to a recording device, and recording devices effectively recorded a single stream of video.
When there was a need to record multiple cameras, this required that all video streams were fed through a management unit – typically a multiplexer or a switcher – which then delivered the images from the multiple feeds as one stream, albeit one with a large amount of information missing.
The major driving force behind centralised recording in the past was the limited technology. The necessary topology forced the core design of surveillance systems to feature a centralised recording system, whether that was a single unit, or several cascaded recorders. Even when VCRs were replaced with DVRs, the centralised approach was retained because of the same limitations. Also, the hardware design was intentionally based around mimicking the operations of a multiplexer and VCR – for many, that was the major selling point of early digital recorders! Essentially, because the units included an integral multiplexer, the video feeds still needed to come back to the recorder to be managed. This meant that as the video surveillance sector moved into digital archiving, the recording model was still a centralised one.
As the use of DVRs became the mainstream choice for those recording video data, the concept of centralised recording was challenged for the first time. Previously multiple DVRs had been cascaded at the central control point, but their network connectivity allowed the recorders to be distributed around a site, near to camera clusters. This allowed installers and integrators to realise savings in terms of the time and cost of cabling.
However, at the time most DVRs offered very limited control and management when networked, and typically the centralised model remained as the obvious option for many. Often, the cabling was already in place for a centralised archiving model, and so the units were simply slotted in where obsolete technology was being replaced.
Despite the flexibility which was inherent in digital recording, the centralised archiving model remained well established, and for good reason. Legacy cabling meant that the links were already in place to pull back video to a centralised location for recording. The idea of putting recording devices out in the field was hampered by three factors: connectivity, management and cost.
Because early networked video offered only low bandwidth for remote connectivity, one problem with distributed archiving was that anyone wanting to review high quality footage typically had to visit the recorder, burn a disc, and bring the footage back. It wasn’t an ideal solution.
As digital archiving became the norm, there were still several factors that made centralised archiving the method of choice. Memory was expensive, and the choices were very much limited to integral hard drives in video recorders, WORM (write once, read many) media or digital tape from the IT sector. If people thought hard drives were expensive at that time, the cost of digital tape devices sent them running for cover!
While it may seem odd to start a debate about centralised or distributed recording with a recap of the history of video archiving, the reality is that it serves to remind us that centralised archiving did not become the mainstream choice of those designing surveillance systems because it was the best option. The approach grew out of the limitations of certain technologies, and as such it is true to say that it was the only option!
The right fit?
Of course, putting the cost and connectivity issues to one side, probably the biggest reason that distributed archiving didn’t set the surveillance sector alight back then was that at the time there wasn’t really a lot to gain from adopting it as an archiving model. Indeed, even today some of the available technologies could be argued to be best suited to a centralised model, not because it is the best option, but because it fits the topology being employed.
Firstly, whilst DVRs record digitally, they still – even today – use composite video as the transport platform. Every device uses a dedicated coaxial cable. This is also true of other systems which use alternative video technologies such as HD-SDI and HD-CVI, along with the various megapixel ‘analogue’ systems now being offered.
Because the topology of these systems results in a very structured – and as a result restricted – design, they almost lend themselves to a centralised archiving model. The core technology simply isn’t flexible enough to allow systems to be designed in different ways, even if the alternative approaches better suit a site’s needs.
Probably the biggest negative with a centralised approach is the fact that it introduces a single point of failure. To eradicate this, a full redundant secondary archiving system needs to be in place, effectively doubling the cost of the recording element of the system! Whilst some very high risk sites might be willing to swallow this, few mainstream users will.
The introduction of network-based video surveillance, and the ability to utilise ever more flexible topologies with the use of advanced infrastructure, pushed the boundaries with regard to alternative recording models. Indeed, the impact of this has been that increasingly those manufacturing systems with more restrictive cabling topologies are introducing products that allow distributed archiving to be realised.
Many manufacturers and experts – especially those with a vested interest in supplying systems with a centralised recording model – like to intimate that distributed recording, and especially edge recording, is solely pushed by those trying to eliminate bandwidth issues with networked solutions. Such an attitude is blinkered, and misses the benefits of such an approach.
It is true that as image resolutions increase, there can be issues with regard to bandwidth management, and these issues are reduced if the full resolution video is not pulled back across the network for archiving. However, that is not – and never should be – the sole reason for specifying distributed or edge storage. That said, it can be the most economical way to record video, in terms of both costs and resources, and it would be foolhardy to reject such an approach.
Distributed archiving allows entire systems to be split into discrete zones, with each having the ideal archiving configuration for its specific needs. This not only allows a greater degree of flexibility in system design, but it also eliminates any single point of failure. Where certain cameras or other devices are designated as critical, these can enjoy recording redundancy without the expense of implementing such an approach across all devices, as is necessary with the centralised archiving model.
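The economic case for selective redundancy can be illustrated with a simple comparison. The per-channel costs and camera counts below are invented purely for illustration, not drawn from any real product pricing:

```python
# Illustrative cost comparison: full centralised redundancy vs.
# selective redundancy on critical channels only.
# All prices and counts are example assumptions.

COST_PER_CHANNEL = 100.0   # assumed archiving cost per camera channel
TOTAL_CAMERAS = 64
CRITICAL_CAMERAS = 8       # only these need a redundant recording path

primary = TOTAL_CAMERAS * COST_PER_CHANNEL

# Centralised model: duplicating the central recorder doubles the
# archiving cost for every channel, critical or not.
centralised_redundant = primary * 2

# Distributed model: only the critical cluster gets a second recording path.
distributed_redundant = primary + CRITICAL_CAMERAS * COST_PER_CHANNEL

print(centralised_redundant)   # -> 12800.0
print(distributed_redundant)   # -> 7200.0
```

With these assumed figures, protecting only the critical channels costs barely more than half of duplicating the entire central archive, and the gap widens as the proportion of non-critical cameras grows.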
It is important to realise that distributed or edge recording is not limited to the use of memory cards within a device. That said, modern memory cards do have high capacities and reliable performance if correctly specified. Often a camera with an integral SD card slot is capable of archiving more footage than a mainstream DVR or NVR!
The distributed and edge recording model can make use of a very wide range of media and devices. These can include NAS and SAN devices, recording servers, VMS recording appliances, DVRs, NVRs, dedicated discrete drives and on-device archiving. The latter can include solid state storage along with the already mentioned memory cards. For many, it is this flexibility which allows the creation of a reliable, robust and cost-effective recording solution.
The cost of memory has never been more competitive, and the options for archiving media are more varied than ever. Additionally, the advanced functionality of today’s systems gives the user greater control over what data is captured, and how that captured data is stored. This is not only a case of employing event-based recording, but also how and where certain event footage is archived, with choices of resolution, frame rate, the inclusion of metadata, the ability to remotely back-up recordings, even to other locations. The end result is an ability to better manage footage and its associated data.
With a centralised model, if the recorder fails, then all video footage is lost, and new footage cannot be captured until the device is repaired or replaced. By utilising distributed recording, any device failure will only affect a small number of video streams. As already mentioned, if these are critical, then redundancy can be introduced in a cost-effective manner. This might be to use memory cards in specific devices, as well as a local back-up to a storage device or basic NVR.
The distributed model eliminates issues with any potential weak points in the infrastructure. Dual-streaming cameras and codecs can deliver high resolution real-time video to local archiving devices, whilst transmitting a second lower resolution bandwidth-friendly stream for viewing if required.
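The bandwidth benefit of that dual-streaming arrangement can be sketched with some example figures. The bitrates and camera count below are assumptions for illustration only:

```python
# Illustrative network load with and without dual streaming.
# Camera count and bitrates are example assumptions.

CAMERAS = 32
HIGH_RES_MBPS = 4.0   # full-quality stream, archived locally at the edge
LOW_RES_MBPS = 0.5    # viewing stream sent across the shared network

# Centralised archiving: every high-resolution stream crosses the network
# continuously, whether anyone is watching or not.
centralised_load = CAMERAS * HIGH_RES_MBPS

# Distributed archiving: only the low-resolution viewing streams cross
# the network, and in practice only while an operator is watching.
distributed_load = CAMERAS * LOW_RES_MBPS

print(centralised_load)   # -> 128.0 (Mbps)
print(distributed_load)   # -> 16.0 (Mbps)
```

Under these assumptions the shared infrastructure carries an eighth of the traffic, and the full-quality footage remains available at the local archiving point whenever it is needed.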
How distributed archiving is defined will be decided on an application-by-application basis, such is the flexibility that it offers. In some cases, a group of cameras can be brought together and classified as one archiving point. In others, it might be preferable to archive on a camera-by-camera basis. Thankfully, the wide availability of cost-effective archiving media makes this achievable.
Distributed archiving points can utilise analogue, digital or hybrid technologies. This allows selection of the best tools – in terms of cost-efficiency and performance – for every system, and indeed for every aspect of any given system. It also allows existing infrastructure to be utilised, where practical.
The various archiving points can be linked together using a wide range of cabling topologies to ensure the most economical and effective solution.
Recordings can be accessed on demand, or regular updates can be scheduled for times that best suit the needs of the user. Additionally, if the link between the various recording elements is either down or busy, recording still continues and footage is securely archived at the local node.
As digital media increases in capacity and decreases in cost – a situation that is increasingly driven by the consumer market-place – the argument for distributed archiving does start to become more attractive. Both the IT sector and consumer multi-media systems have created a growing market for NAS (network attached storage) devices. These offer anything from simple single disk solutions up to multi-disk RAID enabled solutions, and everything in between.
When selecting a solution, ensure that the hardware will be compatible with the surveillance options you are using. Benchmark has identified some NAS manufacturers trying to dump obsolete products into the surveillance sector. Needless to say, such units haven’t fared well in our reports!
Beware of any NAS product that doesn’t use the latest technologies. If anything relies on serial connections or uses terminal emulators for set-up, avoid them.
Also, remember that many HDDs aren’t designed to have data written to them continually. Always select drives that are rated for surveillance needs. In the past this wasn’t easy, but increasingly drive manufacturers are recognising the potential of such applications. Solid state drives are also available, and whilst these carry a cost premium they do remove the weak point of moving parts.
Memory cards are also extremely cost-effective. Whilst SD cards are the most common format in surveillance, other options such as Compact Flash are also used. If a reputable brand is selected, these cards are extremely reliable, and capacities are increasing all of the time. With the move to SD-XC, capacities of 128GB and write speeds of 100Mbps are now common.
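As a rough guide to what an on-camera card can hold, the retention achievable from a 128GB card can be estimated as follows. The recorded bitrate used here is an assumption, and will vary with resolution, frame rate and scene activity:

```python
# Illustrative retention time for on-camera SD card archiving.
# The average recorded bitrate is an example assumption.

def retention_days(card_gb: float, bitrate_mbps: float) -> float:
    """Days of continuous recording a card can hold at a given average bitrate."""
    card_bits = card_gb * 1e9 * 8           # card capacity in bits
    seconds = card_bits / (bitrate_mbps * 1_000_000)
    return seconds / 86400                  # seconds -> days

# Example: a 128GB card recording an assumed event-driven average of 1 Mbps.
print(f"{retention_days(128, 1.0):.1f} days")  # -> 11.9 days
```

Event-based recording stretches this considerably: if cameras only archive on activity, the effective average bitrate – and therefore the retention – can improve by an order of magnitude.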
Centralised recording was born out of the limitations of now redundant technology. Distributed recording can deliver higher security, enhanced performance and a more cost-effective solution.
When you look at the choice, which best suits a site’s needs depends very much on the infrastructure being used and the system’s profile. However, where distributed archiving fits the site, it should be the preferred option!