VisionTrack's AI-powered post-analysis solution can transform commercial fleet safety using video telematics and connected fleet data. With NARA (Notification, Analysis and Risk Assessment), vehicle operators can assess video footage from their vehicles far more easily and significantly reduce road deaths and injuries.
“Our cloud-based NARA software is a true game changer in the world of video telematics as it will help save time, costs and, most importantly, lives by providing proactive risk intervention and accurate incident validation,” explains Richard Kent, President of Global Sales at VisionTrack. “NARA proactively removes false positives and monitors driver behaviour without the need for human involvement. With traditional video telematics solutions, commercial fleets can experience hundreds of triggered events a day; removing that manual review burden enables them to operate more efficiently whilst not compromising road safety.”
The company says that NARA is device agnostic, so it can be integrated with existing connected vehicle technology – whether VisionTrack or third-party hardware. It also adds a further layer of analysis on top of AI vehicle cameras running edge-based AI, which are often limited by the processing capacity of the device.
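To illustrate what "device agnostic" typically means in practice, the sketch below shows a common adapter pattern: each camera vendor's feed is wrapped behind one shared interface before cloud-side analysis. All class and method names here are hypothetical illustrations, not VisionTrack's actual API.

```python
# Hypothetical sketch of a device-agnostic ingestion layer.
# Names (CameraAdapter, VendorACamera, analyse) are illustrative only.
from abc import ABC, abstractmethod


class CameraAdapter(ABC):
    """Common interface every camera integration must satisfy."""

    @abstractmethod
    def fetch_clip(self, event_id: str) -> bytes:
        """Return the raw video clip for a triggered event."""


class VendorACamera(CameraAdapter):
    def fetch_clip(self, event_id: str) -> bytes:
        # A real integration would call this vendor's API here.
        return b""


def analyse(adapter: CameraAdapter, event_id: str) -> int:
    # Cloud-side analysis runs the same way regardless of which
    # device produced the footage; here we just measure the clip.
    clip = adapter.fetch_clip(event_id)
    return len(clip)  # placeholder for real analysis output
```

The benefit of this design is that the analysis layer never needs to know which hardware generated an event, which is what allows third-party cameras to be supported alongside first-party ones.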
NARA represents a major step forward for video telematics: it uses computer vision models combined with sensor fusion to assess footage of driving events, near misses and collisions. This keeps the review process manageable and timely while removing dependence on human availability and the risk of human error. Vehicle operators can then make the best use of video telematics insight to protect road users and help prevent collisions.
During the testing phase, an 1,100-strong logistics fleet was found to be generating on average 2,000 priority videos a week, which would typically take someone over eight hours to review. NARA reduced the time needed to review events requiring human validation to just minutes per day. As a result, the operator is now targeting more efficient risk management whilst supporting its road safety strategy.
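The scale of that saving can be sanity-checked with simple arithmetic. The sketch below uses only the figures quoted above (2,000 priority videos a week, roughly eight hours of review); the fraction of events left needing human validation is an assumed value, since the article only says the remainder takes "minutes per day".

```python
# Illustrative arithmetic based on the trial figures quoted in the article.
WEEKLY_VIDEOS = 2000
WEEKLY_REVIEW_HOURS = 8

# Average human review time per clip implied by those figures.
seconds_per_clip = WEEKLY_REVIEW_HOURS * 3600 / WEEKLY_VIDEOS
print(f"~{seconds_per_clip:.1f} s of review per clip")  # ~14.4 s

# Assumed (hypothetical) share of events still needing human validation
# after automated filtering.
FRACTION_NEEDING_REVIEW = 0.02

remaining_minutes_per_day = (
    WEEKLY_VIDEOS * FRACTION_NEEDING_REVIEW * seconds_per_clip / 60 / 7
)
print(f"~{remaining_minutes_per_day:.1f} min/day of human review")
```

Even under these rough assumptions, filtering out the bulk of triggered events turns a full working day of weekly review into a residual daily task of a minute or two, consistent with the "minutes per day" claim.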
Advanced object recognition uses deep learning algorithms to automatically identify different types of vehicles, cyclists and pedestrians. With high accuracy, it can distinguish between collisions, near misses and the false positives generated by harsh driving, potholes or speed humps. The software will also include an Occupant Safety Rating that uses a range of parameters to calculate the percentage probability of injury and immediately identify whether a driver needs assistance.
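The triage idea described above can be sketched in miniature. NARA itself uses computer-vision models with sensor fusion; the rule-based stand-in below, with invented thresholds and field names, only illustrates how an event might be separated into a collision, a near miss, or a false positive caused by a pothole or speed hump.

```python
# Simplified, hypothetical event triage. Thresholds and fields are
# illustrative assumptions, not VisionTrack's actual model or values.
from dataclasses import dataclass


@dataclass
class Event:
    peak_g: float          # peak acceleration magnitude (g)
    object_detected: bool  # a vision model saw another road user
    min_distance_m: float  # closest approach to that object (metres)


def classify(event: Event) -> str:
    if event.peak_g >= 2.5 and event.object_detected:
        return "collision"
    if event.object_detected and event.min_distance_m < 1.0:
        return "near miss"
    # Hard jolt with no road user nearby: e.g. pothole or speed hump.
    return "false positive"


print(classify(Event(3.1, True, 0.0)))    # collision
print(classify(Event(1.2, True, 0.4)))    # near miss
print(classify(Event(2.8, False, 99.0)))  # false positive
```

A production system would replace these hand-set rules with learned models over video and sensor data, but the point stands: only events classified as genuine incidents need ever reach a human reviewer.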
“As a true advocate of road safety, having already pledged our support to global initiative Vision Zero, we are passionate about helping the industry achieve its target of eliminating all traffic fatalities. Our vision is to create a world where all road users are kept safe from harm. We are embracing the latest advances in machine learning and computer vision to further enhance our industry-leading IoT platform, Autonomise.ai, and AI video telematics solutions,” concludes Kent.