Computer Vision or IoT - Picking the Right Technology
Computer vision has evolved rapidly in recent years due to improvements in Machine Learning, edge and cloud computing power, and visual hardware. Some classify computer vision as a subset of IoT. This blog highlights the growing needs of enterprises and organizations, and the advantages computer vision has over traditional IoT devices for data collection, analytics, and decision-making, especially amid the current undersupply of cellular IoT chips.
IoT is a collection of interconnected devices that mainly use the internet to communicate and share data with each other. Though they have been around for years, IoT devices are typically designed and deployed for single-purpose applications. Dedicated IoT sensors are often deployed to collect static data for monitoring, such as pressure, counts, vibration, wind, and temperature. Today, approximately 20 billion devices are connected to the internet, a figure estimated to exceed 30 billion by 2024.
Computer vision, however, is revolutionizing the way businesses operate. Organizations seek opportunities to drive down costs, improve operational efficiency, uncover new revenue opportunities, and create safer and more productive work environments. With over 214 million IP cameras deployed in 2021, and growing at 12%, companies are leveraging both existing and new cameras for data extraction and collection. In the US alone, the camera market was valued at US$28.02 billion in 2021 and is expected to reach US$45.54 billion by 2027.
Organizations are looking to computer vision to replace manual inspections and the monitoring of video feeds and images, identifying, detecting, and measuring at a scale that is impossible without AI. In a world where the companies with the most comprehensive data win the market, organizations seek connected, granular data points for insight generation and data analytics.
Will the chip shortage impact IoT?
Most IoT devices with cellular connectivity use cellular IoT chipsets, and the global chip shortage has had a significant impact on them. According to an article published on FutureIoT.tech, the chip shortage will slow IoT growth by 10% to 15% in 2022, and the problem is not expected to be resolved until mid-2023. Furthermore, iot-analytics.com reports that 2021 saw an undersupply of 20 million cellular IoT chips, driven by Covid-19-induced supply issues.
Cloud computing, on the other hand, has driven chip-volume efficiencies by removing the need for fleets of cellular-dependent IoT devices. For instance, running multiple cameras on one server has a significant advantage over deploying multiple cellular-dependent devices.
Computer Vision's edge over IoT
Computer vision can be deployed in several ways. Dedicated cameras with embedded CV capabilities can be installed; however, more customers see value in applying AI to existing or new cameras already deployed today. Technologies like Unleash Live supercharge the video feed from any camera (CCTV, IP cameras, phones, drones, and traffic cameras) with real-time computer vision, turning a camera sensor into a smart sensor.
Whereas IoT devices are built for single-purpose applications, computer vision is multi-purpose: a variety of data points can be captured from a single video feed, customized to customer needs, and evolved over time. The power of computer vision lies in combining the detection of objects with the characteristics of those detections relative to the real-world environment. This table shows the insights that can be collected via various IoT devices and visual sensors.
Computer vision has the added benefit of flexibly improving and expanding the extracted data through ongoing improvements in AI models. These models not only extend the type of data captured but can be applied in any number to the same video stream for analysis across multiple use cases, effectively amplifying the value of the camera sensor. As organizations evolve, so do their needs for uncovering hidden data within video feeds, and computer vision models can be retrained to become increasingly robust, accurate, and efficient. This factor alone extends the longevity of deployed computer vision solutions, which constantly evolve with the needs of organizations.
A comparative case study: people counting in public and private spaces
One of the most common methods of understanding space capacity and counting people today is installing dedicated hardware, specifically WiFi sniffers that detect mobile devices with their WiFi switched on. These sniffers use the devices' unique identifiers, their MAC addresses, to determine the number of active mobile devices in a region and infer the number of people present. This approach assumes that each person carries exactly one mobile device and that its WiFi is switched on.
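The logic behind the sniffer approach can be sketched in a few lines. This is a hypothetical illustration (the capture format and the `estimate_people` helper are invented for clarity, not taken from any sniffer product), and it also exposes the method's weakness: modern phones randomize their MAC addresses, which further skews the count.

```python
# Hypothetical sketch of WiFi-sniffer people counting: deduplicate the
# MAC addresses seen in probe requests and assume one device per person.

def estimate_people(probe_requests, devices_per_person=1.0):
    """Infer a head count from sniffed probe requests.

    probe_requests: iterable of (mac_address, timestamp) tuples.
    devices_per_person: fudge factor; the naive assumption is 1.0.
    """
    unique_macs = {mac for mac, _ in probe_requests}
    return round(len(unique_macs) / devices_per_person)

captures = [
    ("aa:bb:cc:00:00:01", 1000),
    ("aa:bb:cc:00:00:01", 1005),  # same phone seen twice
    ("aa:bb:cc:00:00:02", 1007),
    ("aa:bb:cc:00:00:03", 1009),
]
print(estimate_people(captures))  # 3 unique devices -> 3 people
```

Any person with WiFi off, or with a randomized MAC that changes between probes, breaks the one-device-one-person assumption.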
Deploying cutting-edge visual analytics, Unleash live solves this problem with computer vision. By leveraging existing CCTV cameras, trained AI models identify individual people to provide counts within designated areas. Our work with cities, local councils, and retailers reveals that simply counting people is insufficient: many of our clients want to better understand people's behaviors and how they interact with the environment. By deploying our AI apps, we show that computer vision can address these needs by not only detecting people but also tracking their path of travel, the direction they are heading, and the duration spent within areas of interest. These data points help data analysts validate and understand the efficacy of planned spaces, spatial campaigns, and the data modeling of people's movements.
Privacy and security are of the utmost importance at Unleash Live. Anonymization is automatically applied to detect people, and no Personally Identifiable Information (PII) is captured.
The output video also allows data analytics teams to visually assess the detections and gain a deeper understanding of what is happening in the scene, in context. This is achieved without deploying additional hardware, as Unleash live works with already-integrated CCTV cameras.
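To make the dwell-time idea concrete, here is a minimal sketch of how time spent in an area of interest could be derived from anonymized tracker output. The data format, frame rate, and `dwell_seconds` helper are assumptions for illustration; they are not Unleash live's actual API.

```python
# Illustrative sketch: computing per-track dwell time inside a
# rectangular area of interest from anonymized (track_id, frame, x, y)
# tracker output. Tracks carry no PII, only an anonymous integer id.

FPS = 25  # assumed camera frame rate

def in_zone(x, y, zone):
    """zone is an axis-aligned rectangle (x0, y0, x1, y1) in pixels."""
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

def dwell_seconds(track_points, zone, fps=FPS):
    """Return seconds each anonymous track spent inside the zone."""
    frames_inside = {}
    for track_id, frame, x, y in track_points:
        if in_zone(x, y, zone):
            frames_inside[track_id] = frames_inside.get(track_id, 0) + 1
    return {tid: n / fps for tid, n in frames_inside.items()}

zone = (100, 100, 300, 300)                    # area of interest
points = (
    [(1, f, 150, 200) for f in range(50)]      # track 1: 50 frames inside
    + [(2, f, 400, 50) for f in range(50)]     # track 2: never enters
)
print(dwell_seconds(points, zone))  # {1: 2.0} -> two seconds at 25 fps
```

The same per-frame track data also yields path of travel and direction of heading by comparing successive positions.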
Sensor-based condition monitoring vs external monitoring
In high-value asset infrastructure maintenance projects, condition monitoring plays a significant role in the ongoing maintenance of distributed fixed assets. Traditionally, visual inspection with the naked eye was the only option for condition monitoring, requiring rigorous manual handling, increased asset downtime, and significant manpower. Historically, inspections were conducted by rope-access teams rappelling down 70 m tall wind turbine blades for image capture and visual inspection. Although IoT sensors that measure vibration and pressure may be present to detect other forms of damage, they cannot detect critical surface faults and cracks caused by weather, environment, and debris.
Our customers use Autofly with drones to capture automated, consistent images of turbine blades and process them automatically with our Windfarm AI App. The app detects a range of faults, such as leading-edge erosion, pinholes, surface damage, paint damage, VG panel, and split separation, and classifies the severity of each fault based on the nature of the damage combined with its distance from the turbine hub or tip.
Wind turbine blade fault images are captured, uploaded, and analyzed automatically
The AI app clearly highlights the faults, and all data is presented in a digital library for reporting and import into condition assessment tools. This solution improves the workflow from capture, storage, analysis, and reporting to support asset maintenance safely, quickly, and consistently. This approach is revolutionizing the maintenance landscape through scalability by increasing the speed of inspections and safety for the teams involved.
IoT vs computer vision and counting cars
For cities, councils, and traffic authorities such as a Department of Transport (DOT), there are a few different processes by which IoT devices can count vehicles. Most rely on antiquated sensors: physical tube counters laid across roads, weight sensors built into the road surface, or microphone sensors. Though these solutions are considered effective for vehicle counting, they have several limitations. Deployment is costly yet offers limited coverage given the vast extent of the road network, and accuracy is relatively low: lighter vehicles such as motorbikes are often missed, and trucks with more than four wheels can lead to miscounts.
However, the need to understand traffic at a more granular level is increasing. It is no longer sufficient to rely on such systems for simple vehicle counts; traffic engineers and cities want a variety of data points: vehicle count, vehicle type, duration, origin, destination, and lane usage. Using computer vision, Unleash live developed the Vehicle Counting AI app to achieve outstanding accuracy in counting vehicles. Additional features identify vehicle types and record each detection with its timestamp and frame number, enabling video analysis across time. Computer vision can not only provide the data needed for detailed traffic analysis; it can also be applied to the many traffic cameras already monitoring roads today.
Creating polygon zones or bi-directional lines gives the AI app the additional ability to count vehicles only when they enter a given zone or travel in a particular direction. You can read more about creating zones and bi-directional lines in Unleash live AI Apps in this article.
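The bi-directional line idea can be sketched with basic geometry: a vehicle is counted when its tracked centre point switches sides of a virtual line, and the side it switches from gives the direction of travel. This is a simplified illustration of the general technique, not Unleash live's actual implementation, and the direction labels are arbitrary.

```python
# Hypothetical sketch of bi-directional line counting. The sign of the
# 2D cross product tells which side of the line a point is on; a sign
# change between consecutive track points is a crossing.

def side_of_line(p, a, b):
    """Signed area test: >0 on one side of line a->b, <0 on the other."""
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def count_crossings(track, line):
    """track: successive centre points of one tracked vehicle.
    Returns (forward, backward) crossing counts over line = (a, b)."""
    a, b = line
    forward = backward = 0
    prev = side_of_line(track[0], a, b)
    for p in track[1:]:
        cur = side_of_line(p, a, b)
        if prev < 0 <= cur:
            forward += 1
        elif prev >= 0 > cur:
            backward += 1
        prev = cur
    return forward, backward

line = ((0, 100), (200, 100))                  # horizontal counting line
northbound = [(50, 150), (50, 120), (50, 90)]  # crosses the line once
print(count_crossings(northbound, line))       # one crossing, one way
```

Polygon zones work on the same principle, with a point-in-polygon test replacing the side-of-line test.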
Faster, better insights
Reports generated by AI apps are converted into insightful analytics that leaders use to make critical business decisions.
There are many other comparisons and use cases favoring computer vision over IoT, such as speed detection, or temperature detection using thermal imaging instead of a thermometer. In robotics and autonomous vehicles, computer vision is on the verge of replacing proximity sensors; a prime example is Tesla dropping radar sensors in favour of vision-only Autopilot.
Better together
Unleash live works with organizations in the Smart Cities, Transport, and Energy sectors to help enterprises solve unique challenges, often using various IoT devices side by side with computer vision. For example, the Unleash live platform features GIS modeling that uses drone visual data to produce point clouds and 3D visualizations.
In another example, Unleash live combines GPS sensors with camera devices to detect objects and localize them in geo-coordinates through our Atlas Fusion feature. This offers the advantage of object-detection analytics and geo-tagged spatial information within a single platform. For instance, detecting potholes is one thing, but localizing the detected potholes geographically is a whole new level of insightful data.
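As a rough illustration of the underlying idea (the Atlas Fusion internals are not described here, so this is an assumed simplification): given the camera's GPS fix, its compass bearing, and an estimated ground distance to a detection, the detection can be projected to its own geo-coordinates with a standard forward great-circle calculation.

```python
# Illustrative sketch: projecting a detection to geo-coordinates from
# the camera's GPS position, bearing, and an estimated distance.
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, spherical model

def locate_detection(cam_lat, cam_lon, bearing_deg, distance_m):
    """Point `distance_m` metres from the camera along `bearing_deg`
    (degrees clockwise from north), as (lat, lon) in degrees."""
    lat1 = math.radians(cam_lat)
    lon1 = math.radians(cam_lon)
    brng = math.radians(bearing_deg)
    d = distance_m / EARTH_RADIUS_M  # angular distance in radians
    lat2 = math.asin(math.sin(lat1) * math.cos(d)
                     + math.cos(lat1) * math.sin(d) * math.cos(brng))
    lon2 = lon1 + math.atan2(
        math.sin(brng) * math.sin(d) * math.cos(lat1),
        math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

# A pothole detected ~50 m due north of a camera in Sydney:
lat, lon = locate_detection(-33.8688, 151.2093, 0.0, 50.0)
print(round(lat, 5), round(lon, 5))  # slightly north of the camera
```

In practice the distance estimate would come from the camera's calibration and the detection's position in the frame; the geodesy above is the easy part.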
Conclusion
Computer vision has many advantages over various IoT devices, though certain IoT devices still offer impressive benefits in particular use cases. For broader, multi-faceted insights, computer vision is the superior choice: it delivers greater accuracy and a wider range of insights, at lower cost and with significantly less hardware handling. The true ‘winner’, however, is when computer vision and IoT operate together to achieve the best possible results.