12 to Try: Video Intelligence
Benchmark looks at 12 products and systems of interest for security installers and system integrators actively involved in the design and installation of smart video surveillance systems.
Avigilon: Appearance Search
Avigilon’s Appearance Search technology has been designed to use ‘deep learning’ techniques associated with artificial intelligence to assist in the identification and swift retrieval of video footage relevant to an investigation or the management of an ongoing event. The technology can be accessed via the manufacturer’s ACC (Avigilon Control Center) VMS software.
One of the biggest drawbacks with video analytics in recent years has been the problems faced by users managing the sheer volume of data generated by these advanced solutions.
Appearance Search technology enhances the level of security at a site by enabling end users to quickly locate targets of interest, such as specific persons or vehicles, across an entire site.
Appearance Search enhances forensic investigations by enabling operators to quickly locate video of an event, plus any other associated footage, thus creating a full overview of events.
Appearance Search allows operators to collate footage with ease. The technology scans many hours’ worth of recorded footage and, using advanced algorithms and artificial intelligence, identifies relevant video data to help track a person or vehicle. This helps to identify routes taken through the protected area, and can also identify previous visits and last-known locations.
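As a rough illustration of the general approach (not Avigilon’s actual implementation), an appearance search back-end might encode each clip as a feature vector and rank stored clips by similarity to a query appearance; all names and the threshold below are hypothetical:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two appearance feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def rank_clips(query, clips, threshold=0.8):
    """Return clip IDs whose stored feature vector matches the query
    appearance, best match first. `clips` maps clip_id -> vector."""
    scored = [(cid, cosine_similarity(query, vec)) for cid, vec in clips.items()]
    return [cid for cid, s in sorted(scored, key=lambda t: -t[1]) if s >= threshold]
```

In a real deployment the vectors would come from a neural network and the search would run over an index rather than a plain loop, but the ranking principle is the same.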
Herta: BioSurveillance NEXT
Herta’s facial recognition video surveillance solution, BioSurveillance NEXT, is capable of detecting multiple faces in busy scenes and can operate in real-time to deliver accurate results. The software incorporates ‘on-the-fly’ video enrolment which makes it suitable for a wide range of both security and business intelligence tasks. It can also quickly identify previously registered or known individuals, and facilitates run-time management of alarms and people-based events.
BioSurveillance NEXT makes use of standard video surveillance cameras to identify and recognise the faces of black- or white-listed people.
The system is capable of delivering a highly accurate level of performance, even with partial obscurations of the face. It can also accurately identify individuals regardless of changes of facial expression, shadows, high contrasts and extreme or poor lighting conditions. Moderate rotations of the face do not affect accuracy.
Individuals can be enrolled on to the BioSurveillance NEXT system through photographs as well as through live or recorded video.
Its ability to identify multiple faces simultaneously in busy scenes makes BioSurveillance NEXT well suited to deployment in crowded environments.
BioSurveillance NEXT makes use of deep learning algorithms. These apply multiple layers of convolutional and non-linear filters to the footage, increasing accuracy as more data is processed.
Each layer processes the image and extracts information. After many layers of filtered data (typically tens to hundreds) are processed, the facial biometric information is encoded directly into small templates which are very fast to compare and yield more accurate results.
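The idea of layered filtering followed by compact, fast-to-compare templates can be sketched in miniature; the pooling ‘layers’ and sizes below are purely illustrative stand-ins, not Herta’s real network:

```python
def encode_template(pixels, layers=4):
    """Toy stand-in for a deep network: each 'layer' applies a
    non-linear filter and halves the representation, ending in a
    small template. Real networks use learned convolutional filters."""
    features = list(pixels)
    for _ in range(layers):
        # pairwise non-linear pooling: keep the stronger response
        features = [max(features[i], features[i + 1])
                    for i in range(0, len(features) - 1, 2)]
    return features

def template_distance(t1, t2):
    """Small templates compare in O(len(template)) time."""
    return sum((a - b) ** 2 for a, b in zip(t1, t2)) ** 0.5
```

The point of the final small template is exactly this cheap comparison step: matching a face against a large watch list becomes a loop over tiny vectors rather than over images.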
On GPU-based systems, BioSurveillance NEXT has been optimised to allow the use of artificial intelligence and deep learning techniques at video speeds of up to 150 frames per second.
Zepcam: T2
The Zepcam T2 bodycam system has been developed for use in security applications where video is required from operatives or personnel in the field. The system includes body-worn self-contained cameras plus a back-end system for offloading captured video and management of footage.
The video footage is offloaded when the body-worn cameras are inserted into the charging dock at the end of a shift or patrol. This also ensures that the devices are kept charged and ready for use. Video can be transmitted either to the Zepcam cloud service or the user’s own servers.
Zepcam can provide a proprietary video management software package, but can also integrate with third party software such as the XProtect range of solutions from Milestone Systems.
The cameras can record HD720p and HD1080p video and include 32GB of internal storage. Videos are saved in MP4 format. The docking station can manage up to eight devices and includes solid state storage for buffering; footage is then transferred over Gigabit Ethernet links.
Milestone: XProtect
For installers and integrators, the most significant difference between the standard and the ‘+’ versions of XProtect is that the latter include Milestone’s Rules Engine.
The launch of the new ‘+’ versions puts the flexibility and power of Rules into all applications, even those with very tight budgetary constraints. Indeed, XProtect Essential+ is a free-of-charge version of the VMS which, while limited to eight devices, includes the flexible event-handling functionality.
Milestone’s Rules engine allows installers and integrators to quickly and easily add value and a raft of operational benefits to any security system. Rules are easy to establish, as they make use of simple drop-down menus and clickable links to allow the features and functions of connected devices, status data or even the VMS itself to be used to create ‘cause and effect’ scenarios.
Rules can range from the very simple, such as motion detection triggering higher frame rate recording, through to bespoke and complex solutions. For example, if VMD is triggered on Camera 1 and an input is received from a detection device within a five minute window while the site is closed for business, a specified action can be triggered.
By using AND/OR logical programming (via a simple GUI), criteria can be combined to allow bespoke solutions to be created in minutes without any need to write code or create Macros.
Rules make use of events and actions, but these do not need to be restricted to the same device. For example, an event on camera 1 or a detection event from camera 2 can trigger an action from camera 3, allowing a high degree of flexibility for set-up.
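A minimal sketch of how such cause-and-effect rules might be modelled follows; all names are hypothetical, and for brevity it implements only AND logic with a time window, unlike the full AND/OR engine described above:

```python
import time

class RulesEngine:
    """Minimal sketch of a VMS rules engine: a rule fires its action
    when ALL of its named events have occurred within a time window."""
    def __init__(self):
        self.rules = []
        self.recent = {}  # event name -> last-seen timestamp

    def add_rule(self, events, action, window=300):
        """window is in seconds; 300 matches a five-minute window."""
        self.rules.append((set(events), action, window))

    def trigger(self, event, now=None):
        """Record an event and return the actions of any rules it completes."""
        now = time.time() if now is None else now
        self.recent[event] = now
        fired = []
        for events, action, window in self.rules:
            if all(now - self.recent.get(e, float("-inf")) <= window
                   for e in events):
                fired.append(action)
        return fired
```

For example, a rule combining VMD on one camera with a detector input would only fire once both events had been seen within the window, mirroring the ‘cause and effect’ scenarios built through the drop-down menus.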
Bosch: Intelligent Video Analysis
Bosch offers its Intelligent Video Analysis video analytics tool at the edge as a standard feature of many of its cameras and codecs. The free-of-charge addition ensures that relevant and critical video can be easily identified and used to create alarm scenarios.
Intelligent Video Analysis automatically creates metadata, which can be recorded along with the video stream or on its own. This helps to create a usable and effective structure which is applied to live and captured video footage at the point of capture, allowing a variety of actions to be implemented.
Intelligent Video Analysis offers a variety of in-depth configuration options, along with a high degree of flexibility and accuracy in real-world scenarios. While Intelligent Video Analysis is licence-free, it still offers a professional grade solution.
Intelligent Video Analysis offers a range of rules, including object detection for entering and exiting a zone, line crossing including multiple lines in a logical row, objects following a route, loitering, idle objects, objects left or removed, object change, object counting, overhead people counting, crowd levels, motion based on direction and speed, wrong way motion and face detection.
Because Intelligent Video Analysis can be deployed at the edge, this reduces network load. The devices can stream video footage with metadata, or the metadata alone. This adds to the flexibility on offer as it means users can also find events that were originally not configured as analytics alerts using the forensic search.
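A hedged sketch of how recorded metadata enables this kind of after-the-fact forensic search is shown below; the record fields and filter parameters are assumptions for illustration, not Bosch’s actual schema:

```python
def forensic_search(metadata, object_type=None, zone=None, start=None, end=None):
    """Filter edge-generated metadata records after the fact, so events
    never configured as live alerts can still be found.
    Each record is a dict: {"time": ..., "object": ..., "zone": ...}."""
    results = []
    for rec in metadata:
        if object_type and rec["object"] != object_type:
            continue
        if zone and rec["zone"] != zone:
            continue
        if start is not None and rec["time"] < start:
            continue
        if end is not None and rec["time"] > end:
            continue
        results.append(rec)
    return results
```

Because the metadata is tiny compared with the video itself, queries like this can run over long periods of recording without touching the footage until a match is found.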
The Hawkeye Effect & SICK Inc: LaserGuardian
LaserGuardian combines laser-based detection with GPS location mapping to drive PTZ cameras, tracking events as they happen.
The system utilises absolute positioning PTZ cameras and laser detectors to track events and provide real-time visual verification of intrusions.
Because the exact location of intruders is known, and their movements can be tracked and visually verified, end users are able to maximise their resources, ensuring that differing levels of response are implemented for genuine intrusions or innocuous activity.
LaserGuardian offers a high degree of reliability due to the use of advanced laser scanning technology which allows X and Y coordinates to be generated for each alarm activation. When this data is combined with video surveillance and geospatial mapping, it creates a smart solution for high risk sites.
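Assuming site coordinates in metres and a camera of known position and mounting height, converting an alarm’s X/Y position into absolute pan/tilt angles is straightforward trigonometry. The conventions below (0° pan towards +Y, negative tilt pointing downwards) are assumptions for the sketch, not LaserGuardian’s actual interface:

```python
import math

def aim_ptz(target_x, target_y, cam_x, cam_y, cam_height):
    """Convert a laser-detector alarm position (X/Y site coordinates,
    metres) into pan/tilt angles for an absolute-positioning PTZ
    camera mounted at (cam_x, cam_y), cam_height metres up."""
    dx, dy = target_x - cam_x, target_y - cam_y
    ground_dist = math.hypot(dx, dy)
    pan = math.degrees(math.atan2(dx, dy))                      # 0 deg = +Y axis
    tilt = -math.degrees(math.atan2(cam_height, ground_dist))   # down is negative
    return round(pan, 1), round(tilt, 1)
```

In practice the VMS or integration layer would also translate these angles into the camera’s own absolute-position command format, but the coordinate maths is the core of driving a PTZ to a detected intrusion.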
Automated actions can be triggered when the system is linked with a VMS software package which supports rules conditions.
Live Earth: Live Earth
Live Earth offers an interactive management interface that helps make sense of data from multiple sources. It is a software-based mapping platform able to visually present numerous real-time data streams from a variety of sources. For example, it can manage camera streams from VMS systems along with alarm and event information, tracking data from vehicles or objects, weather data, site status information, occupancy levels, etc.
The sources are identified on an interactive map, and these can then be used to either interrogate the data for more information or to control the relevant devices or systems.
Because Live Earth maps the data sources in real-time, users can quickly assess information, leading to more accurate and efficient decision-making.
Live Earth can support numerous data sources, allowing its deployment across a range of applications. To ensure the user is not overwhelmed, the data sources can be filtered based upon requirements.
If a local authority makes use of the system, it could – via on-screen icons – show the location of public surveillance cameras and traffic cameras, generate reports on car park occupancy, identify the location of public transport assets, etc. While the security department might be interested in public space cameras, these would be of no interest to parking revenue collection officers. Each department can filter the data according to its needs.
If an incident or event occurs, it might be necessary to take an overview of the wider situation, and this is achieved by adding ‘layers’ to the map.
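Layer-based filtering of data sources of this kind can be sketched very simply; the field names below are hypothetical, not Live Earth’s data model:

```python
def visible_sources(sources, active_layers):
    """Return only the map sources on layers the operator has enabled.
    Each source is a dict: {"id": ..., "layer": ..., "position": (x, y)}."""
    return [s for s in sources if s["layer"] in active_layers]
```

A security operator might enable the camera and alarm layers, while a parking officer enables only the parking layer; adding a layer during an incident simply widens `active_layers`.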
Axis: AXIS Perimeter Defender
AXIS Perimeter Defender is a video-based application that has been designed to enhance perimeter protection and access control. It combines Axis network cameras, audio devices and video management software, delivering a comprehensive video-based solution for effective perimeter protection. It is i-LIDS certified as a primary detection system for monitoring sterile zones.
Deployed on the camera itself and configured via a standard PC or laptop, AXIS Perimeter Defender provides detection capabilities with support for multiple simultaneous detection types.
Detection types include intrusion, loitering, zone crossing and conditional zone crossing. It is also capable of differentiating between people and vehicles. Detection sensitivity is adjustable, and attempts to defeat the analytics by crouching, crawling or rolling will not be successful. The application is also able to cope with attempts to dazzle the camera, such as where vehicle headlights are present.
The application analyses events, dismisses any nuisance activations and notifies operators of critical situations. They can then view detailed video footage to determine the precise nature of the threat. The system is fully scalable, and as analysis occurs on the camera a server is not needed.
Briefcam: Syndex
Syndex from Briefcam is a video synopsis technology that condenses hours of video footage into short overviews that are a few minutes long. Each detected event is flagged within the synopsis video with a time stamp, and multiple events are displayed simultaneously.
When an end user wants to investigate an incident they simply watch the short synopsis video until they see the individual or object they wish to investigate. Simply clicking on the associated time stamp will then call up the single video clip for full analysis.
Syndex allows rapid video search, review and analysis via a front-end which is both simple to understand and fast to use.
To limit false events, the software filters environmental movements such as branches, shadows, reflections, waves and clouds. The software also has the capacity to manage varying lighting conditions and weather conditions such as snow and rain. Small and subtle objects can be detected, as can objects in the frame appearing against a background of the same colour, through sensitivity to differences in colour and texture.
Finally, if a tracked target disappears from the video then reappears, for example if they pass behind another object, it is treated as one event. However, if the target leaves and re-enters the detection boundaries, the appearances can be logged as separate incidents.
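The continuity rule described above can be sketched for a single tracked coordinate; this is an illustration of the rule, not Briefcam’s implementation:

```python
def group_detections(samples, boundary):
    """Split sightings of one tracked target into events. Missing
    frames (e.g. occlusion behind another object) do NOT split an
    event; a sighting outside `boundary` closes the current event,
    so re-entry is logged separately.
    samples: list of (time, x); boundary: (lo, hi) on x."""
    lo, hi = boundary
    events, current = [], []
    for t, x in samples:
        if lo <= x <= hi:
            current.append(t)          # gaps in time stay in one event
        elif current:
            events.append(current)     # target left the boundary: close event
            current = []
    if current:
        events.append(current)
    return events
```

Real trackers work on 2D positions with appearance matching to re-identify targets after occlusion, but the split-versus-merge decision follows the same boundary logic.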
IPS: IPS AnalyticsManager
IPS Intelligent Video Analytics offers a wide range of video analytics which support security operators in interpreting situations quickly, recognising potential threats in real time and identifying risks and threats as they unfold.
To make these intelligent functions available to any video surveillance system, IPS AnalyticsManager is designed to offer centralised management and simple integration of a wide range of certified video analytics modules. IPS AnalyticsManager facilitates a high level of compatibility and ease of use.
The platform makes use of a ‘zero integration’ interface, which not only makes implementation straightforward but also allows the software to provide a video stream with overlaid metadata which can be displayed in any third party VMS.
With a range of intuitive web tools, IPS AnalyticsManager gives the operator the ability to easily configure and operate the analytics without any requirement for an in-depth technical knowledge. The process simply takes a few mouse clicks.
Currently IPS offers 13 different camera- or server-based video analytics modules via the IPS AnalyticsManager platform. The intelligent video analysis IPS Loitering Detection, for example, is ideal for effective support of security staff. The module detects if a person or group of people dwell in a certain area or at the same spot for a predefined time.
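A dwell-time check of this kind might look like the following sketch; the zone shape, field names and threshold are assumptions for illustration, not IPS’s API:

```python
def loitering_alerts(tracks, zone, dwell_seconds=60):
    """Flag track IDs that dwell inside `zone` longer than the
    configured time; leaving the zone resets the timer.
    tracks: id -> list of (time, (x, y)) samples;
    zone: ((x0, y0), (x1, y1)) axis-aligned rectangle."""
    (x0, y0), (x1, y1) = zone
    alerts = []
    for tid, samples in tracks.items():
        entered = None
        for t, (x, y) in samples:
            if x0 <= x <= x1 and y0 <= y <= y1:
                if entered is None:
                    entered = t
                if t - entered >= dwell_seconds:
                    alerts.append(tid)
                    break
            else:
                entered = None
    return alerts
```

The predefined time would normally be set per zone during configuration, so a few minutes at a perimeter fence can be treated differently from the same dwell at an entrance.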
All IPS analytics have a graphical user interface which allows intuitive setting of the modules, often with visual aids to ensure that the process is simple and quick to implement.
The use of filters enables reliable detection and ensures that adverse weather conditions do not impact on detection performance.
SeeTec: Cayuga Logistics
The SeeTec Logistics Suite is an add-on element to the company’s award-winning Cayuga VMS, and offers solutions for businesses in the logistics sector seeking enhanced reliability and operational control as well as security.
The Logistics Suite includes the SeeTec Cayuga core VMS software, SeeTec Business Application Server and Client, an interface for ERP (enterprise resource planning) and warehouse management systems, as well as connections to different locating systems.
Individual elements are combined to meet project requirements. As a result, surveillance footage is more easily accessible by linking it to data from the ERP system. This can also include information from localisation systems.
The search interface of the Business Application Client permits the combination of different search criteria.
Mobotix: MxActivitySensor 2
MxActivitySensor 2 offers intelligent video motion detection. An integral part of the firmware of recent Mobotix devices, it is a software-based motion analysis tool which applies discrimination to detect the movement of people and objects in the image area. The selected area can be either the full image or a custom-defined area of interest.
MxActivitySensor delivers consistently reliable results, even in applications with large amounts of external interference. The camera distinguishes between movements of vehicles, people or objects that trigger an alarm and innocuous movements that are not relevant for alarms, such as changes in illumination, heavy rain or objects or trees swaying in the wind.
Along with directional discrimination, the latest version has been enhanced with 3D motion detection. This adds a higher degree of perspective-based discrimination and ensures that innocuous sources of motion, such as animals, flying birds, wind-borne debris, etc., do not generate alarm conditions.
MxActivitySensor looks at an entire object rather than portions of an object. This ensures that partial motion – such as foliage or tree branches moving due to environmental conditions – is ignored.
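A whole-object check of this kind can be illustrated by comparing bounding-box positions between frames; the threshold and representation below are hypothetical, not Mobotix’s algorithm:

```python
def object_moved(prev_box, curr_box, min_shift=5):
    """Alarm only when the WHOLE object's bounding box shifts, so
    partial motion inside a static outline (e.g. branches swaying
    within a tree's silhouette) is ignored.
    Boxes are (x, y, w, h) in pixels."""
    dx = abs(curr_box[0] - prev_box[0])
    dy = abs(curr_box[1] - prev_box[1])
    return dx >= min_shift or dy >= min_shift
```

Pixel-difference motion detection would trigger on the moving branches; the box-level test only fires when the object as a whole changes position.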