
Understanding GPUs

by Benchmark

The benefits on offer from Artificial Intelligence, Machine Learning and Deep Learning are numerous, but performance is often dependent upon the use of suitable hardware. Using GPUs (graphics processing units) is increasingly common, but what is a GPU, and why does it matter?

For many, the developments in Artificial Intelligence, Machine Learning and Deep Learning enable the creation of ever more innovative and bespoke solutions. The potential is huge, and as processing power increases further (as it inevitably will), the opportunities which will be opened up by these technologies can only increase.

Faster computational speed and more powerful resources equate to the ability to run more processes and algorithms simultaneously, which in turn enables the creation of more intelligent systems. However, smart systems only make sense if they have a convincing use-case. Without demand from businesses and organisations, even the best technologies will have no impact on the smart solutions market.

Which technology?

To understand why GPUs are important, it is worth considering the various smart technologies. Often the terms Artificial Intelligence, Machine Learning and Deep Learning are used interchangeably. Whilst the technologies are linked, they are different.

Artificial Intelligence (AI) is the overarching technology. Both Machine Learning and Deep Learning are part of the AI landscape. The clearest definition of AI is a machine using available and relevant data to maximise its chances of success in a given task. By using reasoning and probability, AI allows a machine, system or solution to participate in decision making.

Machine Learning is very common in the IT world, and many systems are based upon this approach. Machine Learning is used by social media, search engines, online services and data management systems. It works by running data through a variety of algorithms and using the results to ‘predict’ things in a given environment.

Deep Learning is a more advanced form of Machine Learning, and uses numerous layers of algorithms (which is where the ‘deep’ reference comes from). It can understand an environment and make decisions based upon what it has been taught.

A good way of understanding the difference between the two is that Machine Learning systems will search through millions of options to quickly find a solution in a given environment, based upon what it has been programmed to do. Deep Learning systems will use gained knowledge and experience to understand the environment, and will filter events to decide how to act accordingly.
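
As a rough illustration of that difference, the sketch below contrasts a single-stage learner with a layered one. It is illustrative only: it assumes the scikit-learn library is available, and the feature data is random stand-in data, not anything from a real system.

# Illustrative only: a single-stage (shallow) learner versus a layered (deep) one.
# The data is random and stands in for whatever events a real system records.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))                   # 1,000 events, 20 features each
y = (X[:, 0] + X[:, 1] ** 2 > 1).astype(int)      # a pattern for the models to learn

# Machine Learning: one set of learned weights mapping features to a prediction.
shallow = LogisticRegression(max_iter=1000).fit(X, y)

# Deep Learning: several stacked layers, each feeding its output into the next.
deep = MLPClassifier(hidden_layer_sizes=(64, 64, 64), max_iter=1000).fit(X, y)

print("shallow accuracy:", shallow.score(X, y))
print("deep accuracy:", deep.score(X, y))

The extra layers are what give Deep Learning its ability to build up an ‘understanding’ of an environment, and they are also what drives up the computational cost.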

Both processes require a high level of computational power. Machine Learning involves a high degree of searching and filtering, and Deep Learning runs multiple processes, simultaneously, to ensure it ‘understands’ the status data from a given site or system.

Not only does the hardware require the capacity to manage these computational tasks, but it also needs the processing power to carry out everyday tasks: video processing, data recording and searches, transactions, event handling, etc.

CPUs and GPUs

The CPU has, for a long time, been the driving force in servers and PCs. CPU performance has increased, and today’s servers are more capable than ever, but their workload has also grown significantly.

Video is used not only for security, but also for safety, site management, traffic control, process management, etc. The result is higher camera counts, which in turn create more data.

Additionally, these higher numbers of cameras are using advanced video analytics in order to automate management tasks, which again increases the load on the server’s processing capacity.

Mobile viewing is another task which has grown significantly. However, it creates a lot of processing load, as video inevitably needs to be transcoded to make it suitable for remote viewing. In some systems, mobile viewing can have such an impact that essential core services become unstable or fail, compromising the credibility of the entire solution. By offloading this processing, modern systems remain stable and efficient.

The emphasis placed on GPUs needs to be considered in a balanced way. While it is true they offer a remedy to systems which might otherwise grind to a halt, the CPU remains very important to a server’s suitability.

The CPU contains millions of transistors which perform a variety of calculations. Standard CPUs have a relatively small number of processing cores. The benefit of CPUs is that they can carry out a huge range of tasks, very quickly.

The GPU is more specialised, and is designed to display graphics and carry out specific computational tasks. GPUs have a lower clock speed than CPUs, but have significantly more processing cores. This allows them to carry out vast numbers of mathematical operations in parallel; because the processing cores run simultaneously, GPUs are ideal for handling repetitive tasks.

GPUs might lack the diverse abilities of a CPU, but they make up for it in raw throughput. A CPU core can perform only a handful of computations per clock cycle, while a GPU, across its many cores, can perform thousands.

GPUs were designed for rendering 3D visuals, but that performance can be harnessed to accelerate other computational workloads. A GPU can manage huge batches of data, performing basic operations very quickly. NVIDIA, the leading manufacturer of GPUs, states that the ability to process thousands of threads simultaneously can accelerate software by 100x over a CPU alone.
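
A minimal sketch of this kind of offload is shown below, assuming an NVIDIA GPU and the CuPy library (which mirrors NumPy’s interface); the array size is arbitrary. The same bulk arithmetic is run once on the CPU and once on the GPU.

# Illustrative sketch: the same bulk arithmetic on the CPU (NumPy) and on the
# GPU (CuPy). Requires an NVIDIA GPU and the cupy package.
import numpy as np
import cupy as cp

data_cpu = np.random.rand(10_000_000).astype(np.float32)

# CPU path: the operation is worked through by a handful of cores.
result_cpu = np.sqrt(data_cpu) * 2.0 + 1.0

# GPU path: the same operation is spread across thousands of GPU cores.
data_gpu = cp.asarray(data_cpu)              # copy the batch into GPU memory
result_gpu = cp.sqrt(data_gpu) * 2.0 + 1.0
cp.cuda.Stream.null.synchronize()            # wait for the GPU to finish

# Copy the result back and confirm both paths agree.
assert np.allclose(result_cpu, cp.asnumpy(result_gpu), atol=1e-5)

The GPU path only pays off when the batch is large enough to outweigh the cost of copying data to and from GPU memory, which is exactly why repetitive, high-volume workloads suit it best.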

GPUs excel at multimedia tasks: transcoding video, image recognition, pattern matching, content analysis, etc. These are tasks which are better passed to the GPU than managed by the CPU.
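
As one concrete example, transcoding a clip for mobile viewing can be handed to the GPU rather than the CPU. The sketch below assumes an ffmpeg build with NVIDIA NVENC support; the file names are hypothetical.

# Illustrative sketch: offload video transcoding to the GPU via ffmpeg's NVENC
# encoder. Assumes ffmpeg was built with NVENC support; file names are made up.
import subprocess

subprocess.run(
    [
        "ffmpeg",
        "-hwaccel", "cuda",          # decode on the GPU where possible
        "-i", "camera_feed.mp4",     # hypothetical recorded clip
        "-c:v", "h264_nvenc",        # encode on the GPU (NVENC) rather than the CPU
        "-vf", "scale=640:-2",       # downscale for mobile viewing
        "mobile_feed.mp4",
    ],
    check=True,
)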

While much of this might sound like the GPU has arrived just in time to save the struggling CPU, the reality is that GPUs have nowhere near the flexibility of CPUs. Indeed, they were never designed as a replacement.

The best value in terms of system performance, price, and power comes from a combination of the two. Indeed, many of the tasks which a GPU carries out are done after the CPU makes the decision to hand them over. The two types of processors must co-exist in order to ensure optimal performance in an advanced hardware set-up.

A legacy option?

If an end user has invested in hardware, but didn’t include GPUs, there is an option to add them, but caution is required. The most common way to add GPUs is via graphics cards. These PCI Express cards can be added to existing hardware, introducing GPU performance which can be leveraged for hardware acceleration. Implementing this can be very easy: many VMS and other software packages include a simple ‘use hardware acceleration’ tick box.

If adding a GPU via a graphics card is so simple, and switching on hardware acceleration is often just a case of checking a box in a menu, why is caution needed when upgrading a legacy server?

The answer is one of expectations. If a system is lagging under load, it is tempting to assume that a GPU upgrade will boost performance. The questions, however, are how much of a boost it will give, and which elements of performance it will improve.

Any enhancements can be limited by other hardware components. For example, if the CPU is extremely overloaded, not because the system is throwing too much work at it but because it is woefully inadequate for the job, then adding a GPU might not make a difference: the CPU will still be struggling. Equally, if memory limitations are causing issues, adding a GPU might not deliver a significant improvement.
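
A quick sanity check before buying a graphics card is to confirm where the bottleneck actually lies. A minimal sketch, assuming the psutil library and a server that can run Python, might sample CPU and memory load while the system is busy; the 90% thresholds are illustrative.

# Illustrative sketch: sample CPU and memory load to see whether a GPU would
# actually help, or whether the CPU/RAM are already the bottleneck.
import psutil

cpu_percent = psutil.cpu_percent(interval=5)       # average CPU load over 5 seconds
mem = psutil.virtual_memory()

print(f"CPU load: {cpu_percent:.0f}%")
print(f"Memory used: {mem.percent:.0f}% of {mem.total / 2**30:.1f} GiB")

if cpu_percent > 90:
    print("CPU is saturated: a GPU may not help until the CPU is upgraded.")
if mem.percent > 90:
    print("Memory is nearly full: adding a GPU will not fix this.")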

CPUs and GPUs work together and rely on other hardware components too. If the various elements are mismatched, the benefits of GPUs might not be realised.

In summary

AI, Machine Learning and Deep Learning are in their infancy in the smart systems sector, but the technologies promise much. The important point for integrators and end users is to ensure they have the right hardware to cope with the increased workload.

The balance of CPU and GPU power is best left to the experts; a mismatch of components may well end up wasting time and money. However, as big data becomes more widely used, the extra heavy processing power of GPUs will become increasingly important when designing smart systems to deliver real benefits.

 
