The benefits on offer from Artificial Intelligence, Machine Learning and Deep Learning are numerous, but performance is often dependent upon the use of suitable hardware. Using GPUs (graphics processing units) is increasingly common, with many hardware providers offering devices with enhanced processing, and the chipsets can also be added to legacy systems. What are GPUs, and why do they matter?
For many integrators and end users, the developments being made in Artificial Intelligence, Machine Learning and Deep Learning enable the creation of ever more innovative and bespoke solutions. The potential is limitless, and as processing power increases further (as it inevitably will), the opportunities which will be opened up by these technologies can only increase.
Faster computational speed and greater resources equate to the ability to run more processes and algorithms simultaneously, which in turn enables the creation of more ‘intelligent’ systems. However, smart systems only make sense – and appeal to customers – if they have a convincing use-case. Without demand from businesses and organisations, even the best technologies will not make an impact on the security solutions market.
The technologies in question will predominantly be added into products and systems at the manufacturing level. Integrators and installers are unlikely to ever have to work with the coding or processors on offer. Their challenge is to understand the benefits on offer, and to sell those to the end user.
Sadly, simply stating that a system uses Artificial Intelligence or Deep Learning isn’t going to persuade an end user to increase their budget, as these generic terms mean little to them. As with most emerging technologies, the real-world benefits which can be realised for businesses and organisations are what will excite the mainstream market-place.
End users won’t buy AI or Deep Learning because it’s something ‘new’. The benefits are what the customer will pay for. However, this does not mean integrators and installers should not bother about gaining an understanding of how the technologies work in harmony with appropriate hardware. Indeed, a level of knowledge is important to ensure that benefits and features are not under-utilised or oversold.
Which technology?
To understand why GPUs are important, it is worth considering the various technologies. Often the terms Artificial Intelligence, Machine Learning and Deep Learning will be used interchangeably. Whilst the technologies are linked, they are different.
Artificial Intelligence (AI) is the overarching technology. Both Machine Learning and Deep Learning are part of the AI landscape. AI seems to have risen to prominence in recent years, but the pursuit of AI has been on-going since the 1950s.
AI is simply the delivery of systems and solutions which can apply intelligence to a problem. That sounds vague, because it is!
The clearest definition of AI is where a machine uses all available and relevant data to maximise its chances of success in a given task. By using reasoning and probability, AI allows a machine, system or solution to participate in the decision making.
AI relies on the system ‘learning’ about its environment and the various actions which take place, either normally or as a part of ‘exceptional’ activity. There are two main types of learning associated with AI: Machine Learning and Deep Learning.
Machine Learning is very common in the IT world, and many systems are based upon this approach. Machine Learning is used by social media, search engines, on-line services and data management systems. It works by running data through a variety of algorithms and uses the results to ‘predict’ things in a given environment.
Deep Learning takes this further, and uses numerous layers of algorithms (which is where the ‘deep’ reference comes from). It can build an understanding of an environment and make decisions based upon what it has learned.
A good way of understanding the difference between the two is that Machine Learning systems will search through millions of options to quickly find a solution in a given environment, based upon what it has been programmed to do. Deep Learning systems will use gained knowledge and experience to understand the environment, and will filter past events to decide how to act accordingly.
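To make the distinction more concrete, the short Python sketch below is a simplified illustration only, assuming the scikit-learn and NumPy libraries and using purely synthetic data. It contrasts a single-algorithm Machine Learning model with a small multi-layer ‘deep’ network: in the first case one algorithm is fitted to the data, while in the second several layers of units are stacked, each learning from the output of the layer before.

```python
# A minimal sketch of the contrast, assuming scikit-learn and NumPy are installed.
# The data is synthetic and stands in for labelled site activity: 1,000 events,
# 20 features each, labelled 0 (normal) or 1 (exceptional).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Machine Learning: a single algorithm fitted to the data.
ml_model = LogisticRegression().fit(X, y)

# Deep Learning: several hidden layers stacked 'deep', each feeding the next.
dl_model = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=500).fit(X, y)

# Both can now 'predict' whether a new event looks normal or exceptional.
new_event = rng.normal(size=(1, 20))
print(ml_model.predict(new_event), dl_model.predict(new_event))
```

On a toy problem like this the two approaches behave similarly; the practical difference emerges with large, complex data sets, where the layered model can learn features a single algorithm cannot.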
It is important to realise that both processes require a high level of computational power. Machine Learning involves a high degree of searching and filtering, and Deep Learning runs multiple processes simultaneously to ensure it ‘understands’ the status data from a given site or system.
Not only does the hardware require the capacity to manage these computational tasks, but it also needs the processing power to carry out everyday tasks: video processing, data recording and searches, access control transactions, alarm and event handling, etc.
The cost-effective way of delivering such performance is via hardware acceleration. This is a process whereby the server offloads some of its tasks to additional hardware elements, thus freeing up its core resources for other work. GPUs are well suited to this role and ensure the CPU (central processing unit) manages a reduced workload.
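As an illustration of the principle – not of any particular VMS – the hedged sketch below uses PyTorch (if installed) to hand a repetitive workload to a GPU when one is present, and to fall back to the CPU when it is not. The function names and workload are invented for the example.

```python
# A minimal sketch of the offloading idea, assuming PyTorch is installed.
# The CPU-side code decides whether to hand heavy, repetitive work to a GPU;
# if no GPU is present, everything simply stays on the CPU.
import torch

def pick_device() -> torch.device:
    """Use a GPU for acceleration when one is available, else fall back to the CPU."""
    return torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

def heavy_analysis(frames: torch.Tensor) -> torch.Tensor:
    device = pick_device()
    frames = frames.to(device)           # offload the data to the accelerator
    result = frames.float().mean(dim=0)  # stand-in for a real analytics workload
    return result.to("cpu")              # hand the answer back to the CPU

# e.g. a batch of 64 greyscale frames at 1080p (synthetic data for the sketch)
batch = torch.randint(0, 256, (64, 1080, 1920), dtype=torch.uint8)
print(heavy_analysis(batch).shape)
```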
CPUs and GPUs
The CPU has, for a long time, been the driving force in servers and PCs. Over the years, CPU performances have increased, and while that does mean today’s servers have higher performance levels, the workload we expect them to manage has also increased significantly.
If you consider video, the servers of the past managed PAL video streams with much lower resolutions than is standard today. However, in order to ensure the video footage wasn’t degraded by limitations in the hardware, the streams were often restricted to reduced frame rates, or quality was limited by a bitrate cap.
Because the cameras were only used for security surveillance, the numbers of devices were often kept to a bare minimum, and additional processing was typically limited to motion detection.
Despite these elements resulting in a low load on the server hardware, restrictions were still required to ensure consistent performance.
Since then, processing capabilities have increased and modern servers provide significantly more power. However, HD1080p has become the de facto standard for video, and the increased use of 4K UHD, multi-megapixel and 360-degree video has further increased loads. Increasingly, cameras are being deployed with higher levels of IVA (intelligent video analytics), adding to the work a server must do.
Video is used not only for security, but also for safety, site management, traffic control, process management, etc. The result is increased camera counts, which in turn create more video data.
Additionally, these higher numbers of cameras are using advanced video analytics in order to automate management tasks, which again increases the load on the server’s processing capacity.
Mobile viewing is another task which has grown significantly in popularity. However, it can also create a significant processing load, as video inevitably needs to be transcoded to make it suitable for remote viewing.
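A hedged example of how such transcoding might be offloaded is sketched below, driving FFmpeg from Python. The h264_nvenc encoder uses an NVIDIA GPU; whether it is available depends on the local FFmpeg build and drivers, so the CPU-based libx264 encoder is kept as a fallback. File names and bitrates are illustrative only.

```python
# A hedged sketch of offloading mobile transcoding, using FFmpeg via subprocess.
# h264_nvenc offloads encoding to an NVIDIA GPU; availability depends on the
# local FFmpeg build and drivers, so libx264 (CPU) is the fallback encoder.
import shutil
import subprocess

def transcode_for_mobile(src: str, dst: str, use_gpu: bool = True) -> None:
    encoder = "h264_nvenc" if use_gpu else "libx264"
    cmd = [
        "ffmpeg", "-y", "-i", src,
        "-vf", "scale=640:-2",     # shrink to a mobile-friendly resolution
        "-c:v", encoder,
        "-b:v", "800k",            # modest bitrate for remote viewing
        "-c:a", "aac", "-b:a", "96k",
        dst,
    ]
    subprocess.run(cmd, check=True)

if shutil.which("ffmpeg"):
    transcode_for_mobile("camera01_full.mp4", "camera01_mobile.mp4")
```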
In some systems, mobile viewing can have such an impact that essential core services become unstable or fail, compromising the credibility of the entire security solution. By offloading much of this processing work, modern systems remain stable and efficient.
The emphasis placed on GPUs needs to be considered in a balanced way. While it is true they offer a remedy to systems which might otherwise grind to a halt, the CPU remains very important to a server’s suitability for security use.
The CPU contains billions of transistors which perform a wide variety of calculations, but a standard CPU has only a handful of processing cores. The benefit of CPUs is that they can carry out a huge range of tasks, very quickly.
The GPU is more specialised, and is designed to display graphics and carry out specific computational tasks. GPUs have a lower clock speed than CPUs, but have significantly more processing cores, which allows them to carry out the same mathematical operations over and over again. Because the processing cores run simultaneously, GPUs are ideal for handling repetitive tasks.
GPUs might lack the diverse abilities of a CPU, but they make up for it in terms of speed. A CPU can perform up to four computations per clock cycle, while a GPU can perform thousands.
GPUs were designed for 3D game rendering, but their performance can be harnessed to accelerate computational workloads. A GPU can manage huge batches of data, performing basic operations very quickly. NVIDIA, the leading manufacturer of GPUs, states that the ability to process thousands of threads can accelerate software by 100x over a CPU alone.
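The scale of that difference can be felt with a rough benchmark. The sketch below (assuming PyTorch is installed; the actual figures depend entirely on the hardware in question) times a large matrix multiplication – a batch of simple, repetitive arithmetic – on the CPU and, where one is available, on the GPU.

```python
# A rough, hedged benchmark of the parallel-throughput point, assuming PyTorch.
# A large matrix multiplication is exactly the kind of repetitive arithmetic
# a GPU's many cores work through in parallel. Results vary widely by hardware.
import time
import torch

def time_matmul(device: str, n: int = 4096) -> float:
    a = torch.rand(n, n, device=device)
    b = torch.rand(n, n, device=device)
    if device == "cuda":
        torch.cuda.synchronize()   # make sure the GPU is ready before timing
    start = time.perf_counter()
    torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()   # wait for the GPU to finish its work
    return time.perf_counter() - start

print(f"CPU: {time_matmul('cpu'):.3f}s")
if torch.cuda.is_available():
    print(f"GPU: {time_matmul('cuda'):.3f}s")
```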
GPUs excel at multimedia tasks: transcoding video, image recognition, pattern matching, content analysis, etc. These are therefore tasks which are better passed to the GPU than managed by the CPU.
While much of this might sound like the GPU has arrived just in time to save the struggling CPU, the reality is that GPUs have nowhere near the flexibility of CPUs. Indeed, they were never designed as a replacement.
The best value in terms of system performance, price and power comes from a combination of CPUs with GPUs. Indeed, many of the tasks which a GPU carries out are handed over because the CPU makes the decision to do so. Some explanations of the difference between CPUs and GPUs neglect to point out that without CPUs, most hardware would be very limited. The two types of processor co-exist in order to ensure optimal performance in an advanced hardware set-up, and matching the two components is best left to the experts.
A legacy option?
Is there an upgrade option if an end user has invested in hardware for their security project but, because there was no need at the time, did not allow for the introduction of GPUs to accelerate performance? There is, but caution is required.
It’s already been mentioned that GPUs were designed for the gaming industry, where the ability to render accurate, high-quality graphics is essential. Serious gamers have, for many years, invested heavily in high-performance PCs, and as the demands of games increased, so the upgrade market blossomed.
The most common way to add GPUs to PCs was via graphics cards. These PCI Express cards can simply be added to the hardware, boosting graphics performance. The same is true of security servers. By adding an upgraded graphics card, the GPU’s performance can be used for hardware acceleration. Implementing this is usually very easy: many VMS or other software packages include a simple ‘use hardware acceleration’ tick box.
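One hedged sanity check before relying on such a tick box is to confirm the server can actually see a suitable GPU. The sketch below uses NVIDIA’s nvidia-smi tool, so it applies only to NVIDIA cards with drivers installed; other vendors have their own utilities.

```python
# A hedged sketch: before relying on a VMS 'use hardware acceleration' option,
# confirm the server can see a suitable GPU. This relies on nvidia-smi, so it
# only covers NVIDIA cards with drivers installed.
import shutil
import subprocess

def list_nvidia_gpus() -> list[str]:
    if shutil.which("nvidia-smi") is None:
        return []   # no NVIDIA driver/tooling found on this server
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        capture_output=True, text=True, check=True,
    )
    return [line.strip() for line in out.stdout.splitlines() if line.strip()]

gpus = list_nvidia_gpus()
print(gpus if gpus else "No NVIDIA GPU detected - hardware acceleration unavailable")
```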
If adding a GPU via a graphics card is very simple, and switching on hardware acceleration is often a case of simply checking a box in a menu, why do integrators and installers need to exercise caution if going down the route of upgrading a legacy server?
The answer is one of expectations. If a system is lagging when under load, it stands to reason that deploying a GPU upgrade will boost performance – and it probably will. However, the question is how much of a boost it will deliver.
Any enhancement can be affected by other hardware components. For example, if the CPU is extremely overloaded – not because the system is throwing too much work at it, but because it is woefully inadequate for the job – then adding a GPU might not make a significant difference, because the CPU will still be struggling. Similarly, if memory limitations are causing issues, adding a GPU is unlikely to resolve them.
CPUs and GPUs work together and rely on other hardware components too. If the various elements are mismatched, the benefits of GPUs might not be obvious.
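A hedged pre-upgrade check along those lines is sketched below, using the Python psutil library (assumed installed) to see whether the existing CPU or memory is already saturated. The thresholds are illustrative, not recommendations.

```python
# A hedged sketch using psutil (assumed installed) to sanity-check whether the
# existing CPU or memory is already the bottleneck before fitting a GPU upgrade.
# The 90%/85% thresholds are illustrative only.
import psutil

cpu_load = psutil.cpu_percent(interval=5)    # average CPU use over 5 seconds
mem_used = psutil.virtual_memory().percent   # current RAM utilisation

if cpu_load > 90:
    print(f"CPU at {cpu_load}% - a GPU alone may not help; the CPU is saturated")
if mem_used > 85:
    print(f"Memory at {mem_used}% - consider more RAM before (or alongside) a GPU")
if cpu_load <= 90 and mem_used <= 85:
    print("CPU and memory have headroom - a GPU upgrade is more likely to pay off")
```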
In summary
AI, Machine Learning and Deep Learning are in their infancy in security, but the technologies promise much. The important point for integrators and installers is to ensure they have the right hardware to cope with the high workload.