CCTV Test: Loitering Detection (Part 2)

by Benchmark

The growth of video analytics has seen a diverse range of options made available to installers and integrators. Analytics can be embedded on cameras and encoders, added to open platform-based devices as an uploadable app, run on a centralised server or accessed via the cloud. The potential for deploying video analytics has increased as the rules on offer have also become more diverse. One area that remains challenging, due to the nature of the action being detected, is the identification of loitering.

READ PART ONE OF THE TEST, WITH EDGE-BASED LOITERING DETECTION OPTIONS FROM BOSCH SECURITY, HANWHA TECHWIN, FLIR, PANASONIC AND TYCO SECURITY

The rise of video analytics can only be seen as a positive for many security applications. The technology – which delivers benefits to security and business intelligence systems – creates a proactive element which opens up an ever-increasing range of deployments for video surveillance.

Typically the effectiveness of any analytics rule will be dependent upon the complexity of the scenario it is trying to detect. Video analytics work best when the ‘violation’ is simple to define.

For example, applications such as line-crossing simply require the analytics engine to detect if an object passes over a defined line. Obviously events can be filtered with discriminations: object size and direction of travel can be added to reduce nuisance activations. However, the definition of the rule remains fairly straightforward.
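
To illustrate just how simply such a rule can be expressed, the sketch below (a generic Python illustration, not any vendor's implementation; the values and names are assumptions) checks whether an object's position has passed over a defined line and then applies size and direction filters before raising an event.

```python
# A minimal sketch of a line-crossing rule with size and direction
# discriminations. Generic illustration only; values are assumptions.

def side_of_line(point, line_start, line_end):
    """Return a positive value if the point lies on one side of the line,
    negative on the other, zero if it sits on the line."""
    (x, y), (x1, y1), (x2, y2) = point, line_start, line_end
    return (x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)

def line_crossed(prev_pos, curr_pos, line_start, line_end,
                 object_area, min_area=500, required_direction=None):
    """Flag a violation when a tracked object crosses the virtual line.

    object_area filters out small objects (birds, litter); required_direction
    can be None (either way), 'a_to_b' or 'b_to_a'.
    """
    if object_area < min_area:
        return False
    before = side_of_line(prev_pos, line_start, line_end)
    after = side_of_line(curr_pos, line_start, line_end)
    if before == 0 or after == 0 or (before > 0) == (after > 0):
        return False  # the object did not cross the line
    direction = 'a_to_b' if before > 0 else 'b_to_a'
    return required_direction in (None, direction)

# Example: a person-sized object crossing a vertical line from one side
print(line_crossed((100, 200), (140, 200), (120, 0), (120, 480),
                   object_area=2400, required_direction='a_to_b'))  # True
```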

The same can be said for object disappear rules. The analytics engine is made aware of an object within the video scene to be protected. If that object disappears, then a violation has occurred.

These two examples underline how some analytics rules can be very clearly defined. The clearer the definition of a violation, the more effective an analytics rule will be. This is because the ‘state’ that needs to be detected can be very easily defined. In effect, the analytics engine ‘understands’ what it is looking for.

There are many analytics rules which can be defined in very simple terms and it is therefore of no surprise that these are the more established options in the market. After all, if you had to code an analytics engine you would start with the simplest tasks too! Unfortunately life isn’t always that straightforward and in order to offer credible protection, analytics must also address some less well-defined situations.

When considering the performance of video analytics it is important to remember the purpose of the technology in any application. Analytics cannot and will not apply reasoning or understanding to a chain of events. Anyone who believes that video analytics will consistently make the correct decision will eventually be disappointed with the technology.

The role of analytics is to identify certain behaviours, thus making the job of the system operator – who can apply reason and understanding to a chain of events – simpler.

Detecting loitering is of great value for many applications. Often a precursor to a more significant event, making operators aware of someone loitering or flagging such activity in a recording can assist in the prevention and/or investigation of an incident.

The challenge for video analytics is how to effectively define loitering. Is someone who is waiting for a friend or family member loitering? How do you define people in a doorway sheltering from rain? What about someone just taking a few minutes from their busy day to watch the sun go down?

All of these activities will have similarities with a criminal watching an ATM and waiting for a vulnerable victim. Whilst an operator viewing live or recorded footage will be able to assess the situation, video analytics will not. Therefore, it is not reasonable to expect video analytics to differentiate between loitering with intent and innocuous loitering.

Deciding when and where to implement loitering-based video analytics is just as important to their success as is the selection of the right analytics engine. Deploying the technology in areas where people may be waiting, sheltering or queueing could lead to a high level of nuisance activations caused by everyday activity.

Equally it should be remembered that loitering per se is not an offence: loitering with intent, as defined under the Vagrancy Act 1824, was abolished when the Criminal Attempts Act was introduced in 1981. This changed the offence from loitering to attempting to commit a criminal act. If a site has public areas then care needs to be taken to ensure any system reflects the user’s customer service guidelines.

Because loitering is in itself an activity without clearly definable attributes, it is important to consider the discriminations that can be applied by a video analytics engine in relation to the needs of any given site. It is also important to carry out a thorough risk assessment and to ensure the end-user has clearly defined operational requirements. This will help to identify which set of analytics is best suited to their needs. It is vital to the success of any system that the customer’s expectations are realistic, even more so than when a simpler-to-define analytics rule is deployed.

IPS Intelligent Video Analytics: IPS Loitering Detection

IPS Loitering Detection from IPS Intelligent Video Analytics is an application which can be run on any ACAP-compatible device from Axis Communications. The application detects incidents of loitering within the monitored area, and can differentiate between normal activity, slow movement and loitering. The detection zone is fully configurable, and the application allows discriminations based upon target size, dwell time and perspective.
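
As a rough indication of what that differentiation involves, the sketch below (a generic Python illustration with made-up thresholds, not IPS’s actual algorithm) classifies a tracked target as normal activity, slow movement or loitering based on how far it travels over a given period.

```python
import math

# Illustrative thresholds only; a real engine would derive these from the
# object size and perspective settings described in the article.
SLOW_SPEED = 40.0      # pixels per second below which movement counts as slow
LOITER_RADIUS = 80.0   # maximum wander (pixels) that still counts as dwelling
LOITER_TIME = 30.0     # seconds of dwell before loitering is reported

def classify_track(track):
    """Classify a tracked target from a list of (timestamp, x, y) samples."""
    (t0, x0, y0), (t1, x1, y1) = track[0], track[-1]
    duration = t1 - t0
    if duration <= 0:
        return 'unknown'
    displacement = math.hypot(x1 - x0, y1 - y0)
    if displacement <= LOITER_RADIUS and duration >= LOITER_TIME:
        return 'loitering'
    if displacement / duration < SLOW_SPEED:
        return 'slow movement'
    return 'normal activity'

# A target that barely moves for 45 seconds is reported as loitering
print(classify_track([(0.0, 300, 240), (45.0, 320, 250)]))
```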

IPS Loitering Detection is configured via a web browser, and events can be managed via the ACAP interface using the Axis camera’s or encoder’s action rules functionality. The application can also display metadata in the browser, or pass this to a VMS.

The manufacturer claims that IPS Loitering Detection is reliable and offers stable performance in adverse weather conditions. The application can be used in both internal and external applications. It is suitable for use with both video and thermal imaging cameras; for the purpose of our test we used a video camera running HD1080p streams.

IPS provides a wide range of analytics applications for the ACAP platform, and the company’s website includes download links for demo versions of all applications. They will also supply a time-limited demo licence valid for up to 60 days. Registration with the website is required, and you will receive an email including licence codes and a download link within minutes.

IPS does have a lot of documentation and videos available via its website, and whilst much of the configuration process is intuitive, any installer or integrator who is new to analytics will find these a benefit. The instruction manuals are well written and deliver the required information.

The application is supplied as a single .eap file, which is simply uploaded to the device. On completion of the upload we did receive a message stating that the application was not supported by the hardware. Despite this, the application loaded correctly and was accessible.

Once loaded, an IPS Loitering link appears in the applications menu, and opening this gives three options: Settings, Licence and About. The first step is to enter the licence details. To do this you need to visit the Axis Communications website and enter the serial number of the device you intend to run the application on, along with the licence code supplied by IPS. This will then generate a key file which is uploaded to the device.

With the application running, you can enter the configuration page. This is a standalone browser page supplied by the camera’s web server. It does require a login to be able to continue. This confused us for a few seconds as we had no IPS login, but it then dawned on us that you use the camera login to access the functionality.

The configuration page has three buttons: Global Parameters, IPS WebConfigurator and IPS WebViewer.

The global parameters settings include camera ID, VMS details and a number of general housekeeping options. Once these are set you move on to the WebConfigurator page. This includes a configuration workflow so you can assess where you are in the process. The first stage is to set object sizes. This is done with simple mimics (colour-coded blue for the foreground and green for the background). These are simply sized over persons in the foreground and background of the detection zone.

The next stage is to define the loitering zone. The default zone is rectangular, but points can be added to give a fully flexible polygonal shape. You are then shown a graphical representation of the configuration to allow for final adjustments. This includes a green box in which any motion will generate an activity event. This is useful as it provides information about target movements prior to entering the loitering detection zone. If this is not required it can be minimised to match the loitering zone.
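
Under the hood, a polygonal detection zone generally comes down to a point-in-polygon test on the target’s position. The following ray-casting sketch is a generic illustration of that test, not code from the IPS application.

```python
def point_in_zone(point, zone):
    """Ray-casting point-in-polygon test.

    point: (x, y) target position; zone: list of (x, y) vertices in order.
    Returns True when the point falls inside the polygon.
    """
    x, y = point
    inside = False
    j = len(zone) - 1
    for i in range(len(zone)):
        xi, yi = zone[i]
        xj, yj = zone[j]
        if (yi > y) != (yj > y):
            x_cross = (xj - xi) * (y - yi) / (yj - yi) + xi
            if x < x_cross:
                inside = not inside
        j = i
    return inside

# A four-point loitering zone with a target standing inside it
zone = [(100, 100), (400, 120), (380, 360), (90, 340)]
print(point_in_zone((250, 230), zone))  # True
```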

The next step is to set alarm criteria with relation to dwell time. There are three settings on this page: alarm time for dwell in any zone, alarm time for dwell in the same place and the delay time before an alarm ends. In order for an alarm to be generated, either of the first two time periods must be exceeded.
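
A hedged sketch of how those three timers might interact is shown below; the zone-membership check, the parameter names and the example values are our own assumptions for illustration rather than the application’s actual logic.

```python
class LoiterTimer:
    """Illustrative dwell logic: alarm when time in the zone, or time spent in
    roughly the same place, is exceeded, then hold the alarm for a delay."""

    def __init__(self, zone_dwell=30.0, place_dwell=20.0, alarm_delay=5.0,
                 place_radius=50.0):
        self.zone_dwell = zone_dwell      # seconds allowed anywhere in the zone
        self.place_dwell = place_dwell    # seconds allowed in one spot
        self.alarm_delay = alarm_delay    # seconds before an alarm ends
        self.place_radius = place_radius  # pixels defining 'the same place'
        self.zone_entry = None
        self.place_anchor = None
        self.place_entry = None
        self.alarm_until = 0.0

    def update(self, t, pos, in_zone):
        """Feed one tracker sample; returns True while the alarm is active."""
        if not in_zone:
            self.zone_entry = self.place_anchor = self.place_entry = None
            return t < self.alarm_until
        if self.zone_entry is None:
            self.zone_entry = t
        moved = (self.place_anchor is not None and
                 abs(pos[0] - self.place_anchor[0]) +
                 abs(pos[1] - self.place_anchor[1]) > self.place_radius)
        if self.place_anchor is None or moved:
            self.place_anchor, self.place_entry = pos, t
        if (t - self.zone_entry >= self.zone_dwell or
                t - self.place_entry >= self.place_dwell):
            self.alarm_until = t + self.alarm_delay
        return t < self.alarm_until

# A target standing still inside the zone trips the 'same place' timer first
timer = LoiterTimer()
print([timer.update(t, (200, 200), True) for t in (0, 10, 20, 25)])
# [False, False, True, True]
```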

The installation process is very straightforward, and once configured the application works as expected. With regard to actions following an incident, these are managed either via the ACAP device or using a connected VMS.

The application has the ability to create up to eight analytics profiles. Each of these contains separate object sizes, loitering zones and activity zones, as well as individual time parameters. However, in the IPS viewer profiles are managed individually. Via ACAP or a VMS, they are handled as a group.

Detection performance was consistent throughout the test, and stability was high. One thing we did notice was that during periods of low light, detection seemed to take longer and certainly exceeded the prescribed time windows. However, despite this, accuracy remained high.

When global changes occur, the analytics engine ‘relearns’ the scene. This is a very swift process and does not greatly impact on performance.

In terms of a graphical representation, potential targets are boxed and a trail of motion is shown. This will turn red when an alarm event occurs. If the target then leaves the detection area, the trail remains red within the activity zone, which helps with tracking after an event.

The only slight anomaly we found with the application was that after creating a profile, you can open the viewer to test the implementation. Typically upon opening the viewer you will receive another login window, and after entering your authentication details the viewer appears. However, on a number of occasions there was no authentication option and the viewer did not correctly load. This is a minor issue but one that IPS should address.

intuVision: VA

VA is intuVision’s video analytics solution which is available in four custom packages. These allow specific analytic rules to be selected based upon the user’s requirements. The four packages are Security, Retail, Traffic and Face. Despite this approach, the various packages can be used in combination if required.

The VA Security package includes loitering detection; it also supports activity detection, directional detection, enter/exit, intrusion, object left/removed and area occupancy. Camera tampering protection is also included. The Security module also includes some vehicle-based rules such as idle vehicle (detection of static vehicles), speeding vehicles and ‘no exit’ vehicles. The latter rule detects if a vehicle has parked but nobody has exited it.

Whilst the focus of the Benchmark test is on loitering persons, both the idle vehicle and the no exit rules might be useful where those involved in loitering are using a car as cover. While these may be beneficial for some users, they are beyond the scope of this test.

VA Security is a software-based system which is compatible with Windows desktop and server operating systems. It is supplied as a single software installer with very brief instructions. The software itself has an integral manual, but obviously this needs to be installed before it can be viewed.

The installation process is relatively straightforward. The software makes use of a PostgreSQL database, which will be installed if it is not present on the machine. During installation there is an option to either install a standalone instance of the solution, or a distributed system can be specified. Patience is a key part of a successful installation, because you will see several messages stating that the installer is not responding. Ignore these, and it will eventually complete!

Once installed, the next stage is to licence the product. Initial login to the software resulted in a message that the package was not licensed, and it automatically closed. We then noticed during the next login attempt that the name of the software had changed: the quick start guide makes reference to a dialogue box which loads the various software elements, and we believe it was this process that showed up as an error. The subsequent login allowed us to enter the licence details. These comprise the user’s email address and a PIN supplied by the manufacturer. In order for the licence to be validated the server needs to be on-line.

Once up and running, the first task is to add video sources. Initially no cameras showed as being available, and attempts to add a video source via its URL were rejected. However, a forced refresh did bring up the attached cameras, and these could then be added once the authentication details had been entered. With this done the video was accessible.

The installation process is straightforward, and whilst there are a few ‘clumsy’ moments, it’s nothing that will trouble installers or integrators. The only anomaly we noticed was that after relocating cameras, the software kept reporting an error. We checked the services and discovered that some of these were not running correctly. The System Monitor was unresponsive and so the system required a reboot. After this the services became responsive again.

With the software running, the interface initially appears to be intuitive, but there will most likely be a need to consult the Help function. With cameras added, the next task is to assign Analytic Rules to the video streams. To do this, there is a need for the system to ‘learn’ object classifications. Classifications for Persons and Vehicles are included, but references need to be used to effectively train the system to recognise these. The classification configuration can also be used to add additional objects if required.

Classifications need to be established for each individual camera. This is achieved using a contextual menu. Initially we had a bit of a ‘head scratching’ moment, as part of the classifications menu was not visible. To view this we needed to increase screen resolution, making everything significantly smaller, as the scrollbars in the menu were not functional. This would be a simple fix for the manufacturer and is something they need to take care of.

The classification process is fairly straightforward. It’s a simple case of starting data collection, capturing images of the relevant objects, selecting five good examples and using these to train the system. For example, to train for the recognition of persons, with data collection instigated the engineer (or a colleague) simply walks around in the protected area. The various data elements are stored in the software as Unassigned objects. These can then be viewed and the best examples are dragged and dropped into the Persons category.

We then tried to run the training mode only to receive a notification that it had failed. The reason for this is that the Classifications set-up menu requires at least five datasets for a minimum of two object classifications. This is despite the classification configuration for the loitering rule being set to only look for people. This approach might create a problem where rules are designed to only identify one classification; as such, you are forced to create a second object classification just to ignore it.
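
The check that caught us out can be expressed very simply. The sketch below is our own illustration of the kind of validation the training step appears to apply, with hypothetical function and parameter names.

```python
def can_train(classifications, min_examples=5, min_classes=2):
    """classifications maps a class name to its list of captured examples.

    Training is refused unless at least `min_classes` classes each hold at
    least `min_examples` examples, even if the detection rule will only ever
    use one of them.
    """
    usable = [name for name, examples in classifications.items()
              if len(examples) >= min_examples]
    return len(usable) >= min_classes

# A loitering rule may only need 'Person', but a second class must still exist
samples = {'Person': ['p1', 'p2', 'p3', 'p4', 'p5'],
           'Vehicle': ['v1', 'v2', 'v3', 'v4', 'v5']}
print(can_train(samples))                        # True
print(can_train({'Person': samples['Person']}))  # False: only one class
```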

Interestingly, the software will not reject classifications based upon poor quality data. As an experiment we fed nonsensical data into a category and the software allowed training to be carried out and flagged it as completed. As such, this is something which could be improved upon to make the configuration process more logical.

With the Classifications finalised, the Analytics Rule can be created. Rules are specific to individual cameras, and the selection of the detection type is done via a drop-down menu. Discriminations are applied via a series of tabs: for loitering detection these comprise Loitering, Classification and Advanced.

The Loitering tab is used to set an ‘idle’ duration in either milliseconds, seconds, minutes or hours. Maximum movement of the target is also specified in pixels. To assist with this, the display screen includes a scale to indicate 100 pixels as a reference point. The Classification tab is used to select which objects can create alarms. This can be set to all objects, or specific objects from the predefined classifications can be selected. For the purpose of our test we specified that only people should generate alarms. Finally, the Advanced tab is used to set minimum and maximum target sizes and to configure the action taken when an event occurs.
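
Taken together, the three tabs amount to a filter chain applied to each tracked object. The simplified sketch below uses field names of our own invention to show how those discriminations combine; it is not intuVision’s code.

```python
def loitering_alarm(obj, idle_seconds, max_movement_px,
                    allowed_classes=None, min_size=None, max_size=None):
    """Apply the discriminations from the Loitering, Classification and
    Advanced tabs to one tracked object (field names are hypothetical)."""
    if allowed_classes is not None and obj['class'] not in allowed_classes:
        return False
    if min_size is not None and obj['size_px'] < min_size:
        return False
    if max_size is not None and obj['size_px'] > max_size:
        return False
    return (obj['idle_time'] >= idle_seconds and
            obj['movement_px'] <= max_movement_px)

# Only people idle for 60 seconds who have moved under 100 pixels raise alarms
person = {'class': 'Person', 'idle_time': 75, 'movement_px': 40, 'size_px': 3000}
print(loitering_alarm(person, idle_seconds=60, max_movement_px=100,
                      allowed_classes={'Person'}))  # True
```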

With the Loitering Rule configured, VA Security works well. The number of possible discriminations allows most nuisance activations to be filtered out. It also boasts enough flexibility to allow a variety of requirements to be catered for. Event management and reporting is achieved via the Review app. This displays event notifications, which include details of the incident, video from the relevant camera and two snapshots of the actual event. The Review app also enables additional reports to be generated.

VA Security is both consistent and accurate, and provides more than enough performance for the vast majority of loitering-based detection applications. As with any video analytics tool with a good degree of discriminations, performance will ultimately be reliant upon careful and sensible set-up. Whilst the need for a minimum of two object classifications might not be necessary in some loitering-based applications, it does make sense in terms of ‘training’ the system.

Agent Vi: innoVi

Taking a different approach to the other analytics providers in the test, innoVi is a cloud-based analytics solution. Basically operating as ‘software-as-a-service’, the product is designed to allow central access to a number of remote devices. The innoVi package is available as an edge-based application which can be used with compatible Axis devices. For cameras and encoders from other manufacturers, an edge device is also available.

The innoVi package offers analytic rules designed to detect violations from people and vehicles. The people-based rules include detection of loitering; both sets also include movement into a detection zone and line crossing. Other features include bulk configuration to simplify managing numerous devices, rule scheduling and video verification. As video is not streamed to the innoVi servers, the system does not require high upload speeds.

For the purpose of our test, the application was loaded onto a compatible camera. The majority of current ACAP-compatible Axis devices are supported. There are some provisos with regard to firmware versions: the camera must be running firmware 5.60 or later, and firmware versions 5.65.1 through to 5.70.1.1 cannot be used.

Agent Vi states that innoVi does not require calibration or ‘complex configuration’, and promotes the setup as being three stages: enable the camera with innoVi, add a detection rule, and receive real-time alerts and a video clip.

You will require an innoVi account, and once this is set up you can commence the installation of the app.

The ‘Vi-Agent for innoVi’ application is a single .eap file which enables the camera to communicate with the innoVi server. Installation of this element is typical of the ACAP platform. It is simply a case of navigating to the camera’s Application menu, selecting the relevant file and clicking the Upload button. The process is straightforward and takes no more than a few seconds.

Once installed you then navigate to the Settings screen which provides a link for the Vi-Agent portal. Opening this displays the various settings; there are only two which require changing. The first is the camera name. Note that this is the name which will be used in the innoVi cloud interface. The second is the unique Account ID. This is found on the Settings page of the innoVi cloud interface. This is simply copied and then pasted into the portal page which has been accessed from the camera.

With the changes made, the settings can be saved and there is a prompt to stop and restart the application. This is a simple task and is carried out via the camera’s Application page. The result of this is that the camera will then appear in the innoVi cloud interface as an unassigned camera. It is worth noting that unassigned devices cannot be configured until they have been moved to a relevant site. This is a very simple process, and the camera is dragged and dropped into the site folder.

Configuring a rule is very simple. The first step is to name the rule, and to select the type. There are two steps for this: the first is to select whether you are detecting a person or a vehicle, and the second is to select between line crossing or moving in a detection zone. If you wish to perform loitering detection, you will need to select moving in a detection zone and adjust the dwell time to a suitable period. A simple detection zone will appear in the image, and this can be easily adjusted by manipulating the nodes. Additional nodes can be added, allowing the creation of polygonal shapes.
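
In practice the rule described above boils down to a handful of parameters. The structure below is purely our own shorthand for those settings; innoVi itself is configured through its cloud interface, not through any such file.

```python
# Our own representation of the rule settings described above; the field
# names and values are assumptions used only to show the choices involved.
loitering_rule = {
    "name": "Entrance loitering",
    "target": "person",                  # detect a person or a vehicle
    "type": "moving_in_detection_zone",  # loitering uses this, not line crossing
    "dwell_time_seconds": 60,            # raised to a period that counts as loitering
    "zone": [(120, 80), (520, 90), (500, 420), (100, 400)],  # adjustable nodes
    "schedule": "always",                # or a created schedule
}
```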

With this done, the rule can be scheduled (this can be set as always active or a schedule can be created). There are a few final options which relate to the display of overlays and detection elements. With these set, the rule is applied and becomes active.

In the settings menu of the innoVi cloud interface, there is the option to set a minimum time between events. The lowest this can be set to is 15 seconds. This does mean that multiple rules cannot be linked together to create a double-knock scenario. This is a slight limitation and will affect some applications.
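
The effect of that setting is essentially a debounce on outgoing notifications, as the generic sketch below illustrates (the 15-second figure being the lowest value the interface allows).

```python
class EventDebounce:
    """Suppress notifications arriving within `min_gap` seconds of the last
    one sent; a simple model of the minimum-time-between-events setting."""

    def __init__(self, min_gap=15.0):
        self.min_gap = min_gap
        self.last_sent = None

    def should_notify(self, t):
        if self.last_sent is None or t - self.last_sent >= self.min_gap:
            self.last_sent = t
            return True
        return False

debounce = EventDebounce()
print([debounce.should_notify(t) for t in (0, 5, 14, 16, 40)])
# [True, False, False, True, True]
```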

Event notifications can make use of either the Immix or Sentinel platforms. This is set up for each specific customer. It does somewhat limit the appeal of innoVi for applications where remote monitoring is not being used.

Despite the use of minimal discriminations, we found innoVi to be both accurate and consistent in terms of performance. There were a few false activations, but all genuine loitering violations were detected as configured. The false activations occurred on a very windy day, when a tree branch in the detection zone was flagged as a loitering event on a few occasions.

We did also run additional rules detecting both people and vehicles, and again these performed without issues.

Agent Vi manages much of the additional analytics classifications such as object size, shape and speed, as well as issues of perspective, via the cloud server. For the installer and integrator, this does simplify the configuration process. However, the trade-off is that you lose a degree of flexibility which is inherent in some of the alternative solutions.

Xtralis: LoiterTrace

LoiterTrace is a downloadable software application which delivers detection of loitering when run on an Adpro iFT or FastTrace NVR system. The Xtralis application can be used in both internal and external applications. It is dedicated to the detection of loitering events; the manufacturer offers other applications which add additional video analytics rules to the system.

The application, which requires a licence, makes use of regions of interest to simplify the programming process. Loitering dwell times are fully adjustable (up to 180 seconds), making it ideal for a wide range of user needs. Sensitivity can also be adjusted to suit varied situational requirements. Areas can be configured as either detection zones or masked zones, and no calibration is required, according to the manufacturer.

LoiterTrace can be used in conjunction with other Xtralis video content analysis applications to create a double-knock solution.

It must be noted that LoiterTrace is wholly dependent upon the system making use of an appropriate Xtralis device. This means that for many applications where the addition of analytics is desirable but this hardware is not in use, the solution will probably not be a viable option. Whilst many analytics applications which run on edge devices are limited in terms of the cameras or encoders supported, these can usually be added regardless of the system’s core technology.

Installation of the LoiterTrace application is carried out via the Xtralis Xchange service. Our test unit came with the application preloaded and licensed, and as such we cannot comment on how the application and licence management system works.

Whilst the configuration of the Adpro iFT unit is outside the scope of this test, the process is very straightforward. The necessary Client software is downloaded to the server via a direct connection with the Xtralis hardware, and this delivers full control over the system’s functionality.

The LoiterTrace configuration menu screen can be found in the Analytics section of the Client menu tree. The menus can be viewed in either Simple Mode or Advanced Mode: in truth the difference between the two modes is minimal, so it’s best to stick with the advanced option. The first task is to select which camera feed you are working on. This will populate the video panel with a low resolution stream from the camera, and will also highlight the licence options.

There are three drawing tools: two to identify detection and mask areas, and one for camera calibration. The detection zone and mask areas are flexible, allowing individual points to be configured. This allows the creation of polygonal shapes. It is possible to create masks which sit over detection zones, enabling the elimination of nuisance alarm sources due to certain environmental conditions. The process is very simple and will be intuitive for any installer or integrator used to working with analytics.

It is possible to have LoiterTrace up and running using nothing more than the creation of detection zones, masked zones if required, and a maximum dwell time setting. This latter element is measured in seconds, with a maximum duration of 180 seconds and a minimum duration of 20 seconds.

One area where LoiterTrace differs from a number of other loitering-based analytics options is that the detection zones are used to cover the ground space which is being protected rather than the image areas. Effectively, this means that the drawn area does not have to cover the full image of a target; instead the space in which they will be loitering is identified. This may sound strange but it does help to eliminate a number of potential nuisance alarm sources.
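
One way to picture the difference is that the zone test is applied to the point where the target meets the ground, typically the bottom-centre of its bounding box, rather than to the whole of the target’s image. The sketch below is our own illustration of that idea and reuses a point-in-polygon test such as the one given earlier; it is not Xtralis code.

```python
def foot_point(bbox):
    """Bottom-centre of a bounding box (x, y, width, height): the point where
    a standing target meets the ground in the image."""
    x, y, w, h = bbox
    return (x + w / 2.0, y + h)

def in_ground_zone(bbox, zone, point_in_zone):
    """Test the ground contact point against the drawn ground-space zone, so
    the zone only needs to cover the floor area being protected."""
    return point_in_zone(foot_point(bbox), zone)
```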

During the setup it is possible to effectively ‘walk test’ the performance of LoiterTrace. When a target is detected a green bounding box with a timer is shown on the screen. This will turn red and display a loitering icon when an incident occurs. This can help to ensure that the detection zone, and any included masking, is set correctly.

LoiterTrace is optimised for detection in areas of up to 20 metres. If detection is required over longer ranges, then the camera calibration feature can be used. This helps the software to understand perspective, with typical target heights specified for the foreground and background. To assist in this, recorded video or snapshots can be used, simplifying the task.
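
A common way of using such foreground and background reference heights is to interpolate the expected target height for any vertical position in the image and filter objects that deviate too far from it. The sketch below is a generic illustration of that principle under assumed values, not the LoiterTrace calibration itself.

```python
def expected_height(y, fg_y, fg_height, bg_y, bg_height):
    """Linearly interpolate the expected pixel height of a person standing at
    image row y, given reference heights marked in the foreground and the
    background of the scene."""
    if fg_y == bg_y:
        return fg_height
    ratio = (y - bg_y) / float(fg_y - bg_y)
    return bg_height + ratio * (fg_height - bg_height)

def plausible_person(obj_y, obj_height, fg_y=700, fg_height=260,
                     bg_y=300, bg_height=60, tolerance=0.4):
    """Accept an object whose height is within `tolerance` of the expected
    person height at its position; all the default values are assumptions."""
    expected = expected_height(obj_y, fg_y, fg_height, bg_y, bg_height)
    return abs(obj_height - expected) <= tolerance * expected

# Halfway down the scene a roughly person-sized object is accepted
print(plausible_person(obj_y=500, obj_height=150))  # True
```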

Finally, sensitivity for the detection can be set. Configurations can be made for both object sensitivity and contrast sensitivity. The process is very straightforward, making LoiterTrace a quick and simple loitering-detection application to set up.

In operation, LoiterTrace is both accurate and consistent, providing a good degree of detection along with a high degree of stability. Interestingly, if you approach the configuration of LoiterTrace in the same way you would most other loitering-based video analytics systems, you are more likely to miss the benefits and might even introduce unwanted nuisance activations. Xtralis does produce a guide to ‘best practice’ when configuring LoiterTrace; it is worth reading this to fully understand how the software discerns loitering attempts.

Because of the way in which the software is configured with regards to the use of detection zones, it’s a simple task to minimise the impact of environmental conditions that could create problems. During testing all genuine loitering events were detected, both in internal and external environments. Whilst initially the application does have a basic feel, the performance reassures you that the detection is both intelligent and resilient.

BENCHMARK RATINGS

IPS Intelligent Video Analytics: IPS Loitering Detection

IPS Loitering Detection from IPS Intelligent Video Analytics offers a good degree of functionality and flexibility with regard to loitering detection. The configuration process is very simple, and performance was stable and effective.

It could be improved if multiple zones could be created in a single profile, but this can be achieved using a flexible VMS. Moreover, the application itself is generally stable and does not negatively impact on the host camera’s performance. As such, it is recommended.

intuVision: VA

VA is a decent analytics package and does offer a high degree of flexibility. Loitering detection is good, and it allows the implementation of discriminations to filter most nuisance activations. The ‘training’ process does work well, but in some applications you’ll find yourself adding object classifications just to meet its criteria.

The installation and configuration processes are relatively straightforward, but can be a little ‘scruffy’. The software doesn’t freeze or crash, but you will find yourself forcing refreshes and doing a few things twice.

That said, it does deliver in terms of performance and so is recommended, albeit with the proviso that the installer or integrator will require a bit of patience when dealing with some aspects of the interface.

Agent Vi: innoVi

Being cloud-based, innoVi has some positives and negatives. Obviously it attracts a subscription model in terms of fees, and the system needs to be on-line to function. However, set-up and configuration is simple, and much of the complexity associated with video analytics has been eliminated.

In terms of performance, it does work well. Compared with some of the alternatives, its rule options are limited, and the requirement for a minimum of 15 seconds between events is a constraint. Despite this, detection is reliable and consistent.

If cloud-based solutions are not right for an application, then innoVi won’t be worth considering. However, as a solution for multiple sites where a monitoring platform is being used, it has to be recommended.

Xtralis: LoiterTrace

LoiterTrace is both flexible and intuitive, simple and quick to configure, and delivers credible loitering detection for a wide range of applications. As such, Xtralis has produced an analytical tool that works very well. Unlike all other options in the test, it is solely dedicated to detecting loitering events. This does mean the software is optimised for a single task, which does lead to enhanced performance.

Because the application requires an Xtralis Adpro hardware platform, it does require installers or integrators to base the system on an Adpro core. This differs when considered against edge devices, as the latter can generally be added to any platform.

LoiterTrace takes a different approach to detection, which in turn allows it to be more robust thanks to its dedication to a single task. As such, LoiterTrace has to be recommended.

READ PART ONE OF THE TEST, WITH EDGE-BASED LOITERING DETECTION OPTIONS FROM BOSCH SECURITY, HANWHA TECHWIN, FLIR, PANASONIC AND TYCO SECURITY
