Video cameras are smarter than ever. Video analytics functionality is available inside most cameras now on the market. Smarter cameras enable a system with distributed intelligence and also help to manage bandwidth and storage – on-camera intelligence can determine which video is important enough to tie up network resources and, ultimately, to be retained or viewed. But on-camera video analytics have their limitations, and additional video intelligence at the server can add a new range of functionality to a system. We asked this week’s Expert Panel Roundtable: Given the rise in edge-based video analytics, what is the continuing role for server-based analytics systems?
Jumbi Edulbehram,
Regional President, Americas, Oncam
19 Nov 2015:
While there are many advantages of running analytics at the edge – such as bandwidth reduction – server-based analytics continue to play a significant role. First, “running analytics” involves more than just metadata extraction; the data that are extracted have to be stored in a database, and then queries have to be run against that database. Storing large databases and running (often multiple) queries are tasks best performed on servers. Some analytics, such as facial recognition and license plate recognition (LPR), by definition require facial or license plate data to be compared against records stored in a database, which generally resides on the server. If the video analysis involves multiple cameras – e.g. tracking a subject across cameras – then again, server-based analytics play an important role. Another often-forgotten aspect is cost. Cameras that are capable of running analytics are generally more expensive, so rather than buying analytics-capable cameras, users may be better off buying more cost-effective cameras and running the analytics on a server.
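To illustrate the server-side half of the workflow Edulbehram describes, here is a minimal sketch of storing edge-extracted metadata in a database and querying it. The schema, field names and the SQLite backend are assumptions for illustration only, not any particular product's design.

```python
import sqlite3

# A minimal sketch (hypothetical schema and field names): metadata events
# extracted at the edge are stored centrally and then queried on the server.
conn = sqlite3.connect("analytics.db")
conn.execute("""
    CREATE TABLE IF NOT EXISTS events (
        camera_id  TEXT,
        event_type TEXT,   -- e.g. 'motion', 'face', 'lpr'
        value      TEXT,   -- e.g. a plate number or face-template ID
        ts         REAL    -- UNIX timestamp
    )
""")

def store_event(camera_id, event_type, value, ts):
    """Persist one metadata event pushed up from an edge device."""
    conn.execute("INSERT INTO events VALUES (?, ?, ?, ?)",
                 (camera_id, event_type, value, ts))
    conn.commit()

def plates_seen(plate, since_ts):
    """Example query: every camera that reported a given license plate."""
    cur = conn.execute(
        "SELECT camera_id, ts FROM events "
        "WHERE event_type = 'lpr' AND value = ? AND ts >= ? ORDER BY ts",
        (plate, since_ts))
    return cur.fetchall()
```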
Mark Pritchard,
Marketing Director – EMEA, Pelco by Schneider Electric
19 Nov 2015:
Edge-based analytics are readily available within most professional cameras on the market today, and they may be the best choice for an installation with many remote locations and limited bandwidth. That said, they tend to be limited in their functionality, mainly because of the computing power available within the camera. This means advanced algorithms are not supported, and users will not be able to run multiple analytic rules on the same camera. Server-based analytics provide a number of advantages. One is performance; increased computing power allows more advanced algorithms to be run. Another is choice of functionality, combined with the ability to run multiple analytics on the same camera or to run an analytic rule across multiple cameras. Server-based analytic systems are also more flexible. The future will see central server-based systems used for the more complex installations and wherever users need a wider choice of analytics and better system management.
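The point about running several rules per camera, or one rule across many cameras, can be sketched simply. The rule names and detection fields below are hypothetical examples, assuming a server that already receives per-frame detections from each stream.

```python
# A minimal illustration (hypothetical rule names and fields): with server-side
# horsepower, several analytic rules can be evaluated on the same camera
# stream, and the same rule can be reused across many cameras.

def line_cross_rule(detections):
    return [d for d in detections if d.get("crossed_line")]

def loitering_rule(detections):
    return [d for d in detections if d.get("dwell_seconds", 0) > 60]

RULES_PER_CAMERA = {
    "cam-entrance": [line_cross_rule, loitering_rule],  # multiple rules, one camera
    "cam-carpark":  [loitering_rule],                   # same rule, another camera
}

def evaluate(camera_id, detections):
    """Run every rule configured for a camera and collect the alarms."""
    alarms = []
    for rule in RULES_PER_CAMERA.get(camera_id, []):
        alarms.extend(rule(detections))
    return alarms
```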
Fredrik Nilsson,
General Manager, North America, Axis Communications
19 Nov 2015:
The advantages of analytics at the edge have been obvious for a long time: better system scalability and access to uncompressed data. But the reality has been different, with most analytics actually being run at the server level. That is quickly changing as processing power at the edge increases and as camera-level system architecture and ease of use make it possible. Does that mean that all analytics on the server side will go away? Most likely not. While the heavy lifting of the analytics will be done at the edge, the resulting information will be provided as metadata along with the video stream, with further analysis performed as needed on a central server or, in the future, most likely in the cloud.
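A rough sketch of that division of labour is shown below: the camera publishes compact metadata alongside its video, and the server (or cloud) decides which events merit deeper analysis. The field names and JSON encoding are assumptions, not any vendor's metadata format.

```python
import json
import time

# Edge side: what a camera might publish alongside its video stream
# (illustrative fields only).
def edge_detection_to_metadata(camera_id, obj_class, bbox, confidence):
    return json.dumps({
        "camera": camera_id,
        "class": obj_class,       # e.g. 'person', 'vehicle'
        "bbox": bbox,             # [x, y, w, h] in pixels
        "confidence": confidence,
        "ts": time.time(),
    })

# Server/cloud side: decide whether the event warrants further analysis.
def server_side_triage(metadata_json, deep_analysis):
    event = json.loads(metadata_json)
    if event["class"] == "person" and event["confidence"] > 0.8:
        return deep_analysis(event)   # e.g. re-identification across cameras
    return None
```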
Dave Poulin,
Director – Business Operations Security & Evidence Management Solutions, Panasonic Corporation of North America
19 Nov 2015:
As with all security specifications, let the facility and the end use be your guide to the type of system you deploy. Server-based analytic systems allow greater ease of use and centralised operation for enterprise customers. These software solutions unify larger, multi-recorder, multi-site networked video systems and allow the user to connect up to 100 network video recorders and digital video recorders (the exact number depends on the system), as well as encoders and directly connected IP and analogue cameras. For solutions with advanced analytics capabilities such as facial recognition, server-based systems utilise deep integration with databases to verify a person’s face in live or recorded video streams, providing instant matching against enrolled faces. Real-time processing capacity is also enhanced, and these systems can conduct high-speed searches against hundreds of registered reference faces. Storage capacity is also greater, with servers able to hold millions of detected face images, depending on the configuration.
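The server-side matching step Poulin describes can be sketched as comparing a live face embedding against a gallery of enrolled faces. The embedding size, the sample identities and the similarity threshold below are arbitrary assumptions; real systems rely on a vendor-specific recogniser to produce the vectors.

```python
import numpy as np

# Hypothetical enrolled gallery: face_id -> unit-length embedding.
ENROLLED = {
    "alice": np.random.randn(128),
    "bob":   np.random.randn(128),
}
ENROLLED = {k: v / np.linalg.norm(v) for k, v in ENROLLED.items()}

def best_match(live_embedding, threshold=0.6):
    """Return the closest enrolled identity, or None if below the threshold."""
    live = live_embedding / np.linalg.norm(live_embedding)
    scores = {face_id: float(np.dot(live, ref))   # cosine similarity
              for face_id, ref in ENROLLED.items()}
    face_id, score = max(scores.items(), key=lambda kv: kv[1])
    return (face_id, score) if score >= threshold else None
```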
Per Björkdahl,
Chairman, ONVIF
19 Nov 2015:
I think it is fair to say that, as the capabilities of edge analytics increase, so will the capabilities of server-based analytics. There are no doubt some cost and performance advantages to performing the analytics on the edge, which I believe will pave the way for a broader use of video analytics. But it is my belief that the capacity of server-based analytics will also increase, perhaps even more than that of the edge, which will lead to new and more complex uses of video analytics. One area of improvement could be a combined approach, with pre-analytics performed on the edge and the final analysis on the server.
Simon Lambert,
Principal Consultant, Lambert & Associates
19 Nov 2015:
For mass-market “edge VCA” cameras, size, power budget and acceptable price are constraints that limit their number-crunching prowess. There are only so many TFLOPS of processing, GB of RAM and TB of SSD they can carry on a wall bracket. This can limit the image sizes, frame rates and DSP required for effective performance. By the way, am I the only person who thinks that primary recording in-camera means data is stored in an “unsafe” location? (Otherwise, why are there security cameras?) If video is delivered to the central server for recording and analytics, then virtually limitless horsepower becomes available, securely, and with economies of scale. Data from many cameras and other data sources can be combined there, enabling more complex algorithms, machine learning and better decisions – faster, too. Video can then be automatically tagged for more useful archiving. And, not forgetting the edge’s advantages, the important, digested data can then be fed back to it.
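The kind of cross-camera combination Lambert points to, which only a central server sees enough data to perform, might look like the following sketch. The event structure and time window are hypothetical examples rather than a specific product's format.

```python
from collections import defaultdict

def correlate(events, window_seconds=300):
    """Group events by subject (e.g. a plate or track ID) and report subjects
    seen by more than one camera within the time window."""
    by_subject = defaultdict(list)
    for e in events:                      # e = {"subject", "camera", "ts"}
        by_subject[e["subject"]].append(e)

    multi_camera = {}
    for subject, hits in by_subject.items():
        hits.sort(key=lambda e: e["ts"])
        cameras = {h["camera"] for h in hits
                   if h["ts"] - hits[0]["ts"] <= window_seconds}
        if len(cameras) > 1:              # same subject, several cameras
            multi_camera[subject] = sorted(cameras)
    return multi_camera
```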
There is a limit to the processing power that can realistically be offered inside a video camera. There are also intelligent system functions that depend on data supplied by more than one camera. As demands on systems become more complex, the required intelligence may be most effectively supplied at the server level. More likely, however, tomorrow’s systems will utilise a combination of edge-based and server-based (even cloud-based) analytics. Each has its unique role, and functionality available by combining the two may prove in the end to be greater than the sum of the parts. The market will need both edge-based and server-based analytics for the foreseeable future.
Article published courtesy of SourceSecurity.com, a division of Notting Hill Media Limited.
Copyright © Notting Hill Media Limited