Question: For those without the time to watch the AWS session, can you describe in 2 sentences or less the challenge and solution offered by Unleash live for Miami-Dade County?
James: Miami-Dade Metrorail decided to suspend fares and open up ticketing gates during Covid-19 for the safety of transit riders and employees, in order to lessen the need for non-essential interactions and promote social distancing. As a result, they lost ticketing data and could not measure demand for their service. Unleash live utilized their existing in-station cameras to provide real-time commuter volumes and visibility for operational planning.
Question: Does Unleash live only work with transportation agencies and cities?
James: We are a video analytics platform that connects a wide range of camera types to the cloud, and with over 20 A.I. applications currently available in our app store (and growing fast) we service a wide range of industries such as mining, power utilities and wind farms. We primarily focus on any organization with expensive assets dispersed over large geographies requiring visual inspections. In the case of transport and cities, the highly valuable asset is people. In wind farms, it's turbine inspections, and in mining, it may be conveyor belts.
Our solution is not limited to a fixed set of applications; use cases are constantly growing, because the economics of using the same cameras to detect and inspect multiple needs is something CFOs and CIOs absolutely love.
Question: How do you deal with the obvious question of privacy?
James: So you really need to think of us as an insights company, using video to reduce the manual labor of conducting inspections. We essentially augment the role of inspectors by using computer vision to reduce the need for humans to conduct the inspection or watch video footage.
You can see that our core offering is about operational efficiencies, especially for enterprises. We do not have any algorithms that do facial recognition as this is not in our business model, or in our ethics. There are plenty of companies out there that offer this type of service, but not us.
To safeguard people's privacy, we take a number of steps. We concurrently run a facial anonymizer on all videos that include people, removing identity to comply with GDPR. Additionally, many customers ask to have the visual data deleted at the point of capture. We extract valuable metadata for insights generation and often remove the video footage, or store it in accordance with our customers' guidelines and policies. You can read more about how we think about privacy on our blog.
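As an illustration of the keep-metadata, drop-footage approach described above, here is a minimal sketch (hypothetical code, not Unleash live's actual pipeline; the per-frame detection format and field names are assumptions):

```python
from dataclasses import dataclass


@dataclass
class FrameResult:
    """Per-frame detector output: we keep only a count, never the pixels."""
    timestamp: float
    person_count: int


def extract_metadata(frames):
    """Reduce raw per-frame detections to anonymous aggregate insights.

    `frames` is a list of (timestamp, bounding_boxes) pairs from a person
    detector. The video frames themselves are never retained; only counts
    survive, so no individual identity can be reconstructed afterwards.
    """
    results = [FrameResult(t, len(boxes)) for t, boxes in frames]
    peak = max((r.person_count for r in results), default=0)
    return {"peak_occupancy": peak, "frames_analyzed": len(results)}
```

Once the aggregate dictionary is produced, the source footage can be deleted or retained per the customer's policy, since the insights no longer depend on it.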
Question: Do your customers need a dedicated person to use your platform?
James: In short, no. In many instances, our customers do not actually touch our platform. The cameras are ingested directly into our secure AWS infrastructure. Video is analyzed and pushed out based on predetermined customer specifications.
The key here is integrating the data into our customers' workflows. We don't want them having to buy new hardware or, worse, having to train their team on new software. We provide real-time alerts through SMS and Slack, plus API integrations into their ERPs or their existing BI software, such as Tableau and PowerBI, so they can get on with their business more flexibly.
Read more on the different types of outputs we offer.
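To illustrate the kind of real-time alert output mentioned above, here is a minimal sketch of posting a crowding alert to a Slack incoming webhook (hypothetical code; the webhook URL, station name and threshold are illustrative assumptions, not part of Unleash live's product):

```python
import json
from typing import Optional
from urllib import request

# Hypothetical placeholder: a real Slack incoming-webhook URL goes here.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"


def build_alert(station: str, count: int, threshold: int) -> Optional[dict]:
    """Return a Slack message payload when occupancy exceeds a threshold,
    or None when no alert is needed."""
    if count <= threshold:
        return None
    return {"text": f"Crowding alert: {station} has {count} people (threshold {threshold})."}


def send_alert(payload: dict) -> None:
    """POST the payload to a Slack incoming webhook as JSON."""
    req = request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    request.urlopen(req)
```

The same payload-building logic could feed an SMS gateway or a BI tool's ingestion API instead; only the transport in `send_alert` would change.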
Question: How do I know if the data you extract is accurate?
James: I would start by saying that training computer vision algorithms is an iterative process: it requires hours of footage and thousands of images. At Unleash live we adapt our algorithms to a wide range of situations so the error probability is minimized. For example, in the case of Miami-Dade County, we refined our algorithm to cater for a range of scenarios: we streamed footage from various station cameras with different locations, angles, resolutions, frame rates and lighting conditions, then validated the results to make sure there was no duplication or double counting of people. Lastly, we placed bi-directional virtual gates at specific zones to ensure we had the most accurate numbers for those zones.
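The bi-directional virtual gate idea can be sketched as a simple line-crossing counter (an illustrative simplification; a real system tracks each person across frames before applying the gate):

```python
def count_gate_crossings(track, gate_y):
    """Count bi-directional crossings of a horizontal virtual gate.

    `track` is the chronological list of centroid y-coordinates for one
    tracked person. A downward crossing past `gate_y` counts as an entry,
    an upward crossing as an exit. Counting crossings, rather than raw
    detections, avoids double counting someone who lingers on one side.
    """
    entries = exits = 0
    for prev, curr in zip(track, track[1:]):
        if prev < gate_y <= curr:   # moved from above the line to below it
            entries += 1
        elif prev >= gate_y > curr:  # moved from below the line back above it
            exits += 1
    return entries, exits
```

Summing entries and exits over all tracks in a zone gives the directional commuter volumes for that zone.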
Question: What makes Unleash live different?
James: Simply put, it's the speed with which we turn visual data from cameras into insights for our customers, the breadth and speed of deploying new use cases on your feeds, and the flexibility of how we serve that data up.
Question: Are you limited in what you can detect? Is it just people?
James: Not at all. We can detect people, vehicles, objects and attributes. It all starts with the visual data. Our customers typically have existing cameras and know what they're looking to get insights on, whether it's faults on a power line, cracks on a turbine blade, or overcrowding on a platform. Where an AI App does not already exist, we work with customers to develop one, test it and deploy it in under 6 weeks. That's fast.
Question: How does it help for Unleash live to be on AWS?
James: We partnered with AWS because it gives us the flexibility to continuously refine and optimize our algorithms, and to add or remove as many cameras as our customers want. In addition, because our architecture is serverless, we can adjust cloud computing capacity to our customers' needs on the fly. Essentially, our customers do not have to worry about server racks or forecast how much compute they need. We serve this all up as an on-demand service.
Question: Many transport providers have complicated and sometimes legacy CCTV systems. What are the limitations for such cloud services?
James: There really are no limitations! Our platform is designed to work easily with existing legacy systems, and we offer various ways of connecting: directly to IP cameras, via Video Management Software such as Milestone, Genetec or Avigilon, or by deploying new cameras. Unleash was designed to work flexibly with a variety of inputs, and if customers really wanted to, they could also upload footage for post-processing analysis. Often our customers see the value in visual data and have deployed new dedicated cameras to extract more of it.
Question: What is a normal project process for a transport provider to get started?
James: We start by understanding customers' needs. We then assess available camera locations and types to best meet those needs. For Miami we selected the cameras with the best view of the ticketing gates, then worked in collaboration with their IT teams to connect the cameras while adhering to safety and security protocols. We then ran streaming tests until we were satisfied with the quality, and began applying the specific algorithms to generate outputs.
The deployment time depends upon the complexity of the project. Connecting our customer’s cameras can take anywhere from 5 minutes to a couple of hours, but it all depends on their network integrity, connectivity, and speed.
Over the course of the project we continuously refine the algorithms and monitor for optimal performance.
Question: What is your experience working with government and transport entities to date?
James: We have been fortunate to work with forward-thinking cities and counties like Miami-Dade County, where they see the value in visual analytics and data-informed decision making, and recognize the increasing proliferation of cameras and connected devices.
In many instances, transport authorities and cities can be slow to innovate and are afraid to tread new paths. Covid-19, however, has pushed cities globally to think more about remote operations and to leverage technology to solve old and new problems. Since the Unleash live transport solution was built for easy adoption, our customers have been able to start small with minimal time or commercial investment, and have found that our technology is well established, easily accessible, and backed by a tried and tested methodology.
Contact us to learn more or book a time with one of our specialist team members to learn how your business can take advantage of live streaming video and AI analytics.