IBM Nixes General Purpose Facial Recognition Technology


Love it or hate it, video technology is here to stay.

There’s plenty to love: The ability to make face-to-face calls in a time of physical distancing, remote work flexibility and telemedicine are just a few of its redeeming qualities. On the flipside, worries about corporate and governmental power to monitor users, displacement of labor and ethical threats from artificial intelligence can be off-putting.

Take Zoom as an example. The popular video-conferencing platform has garnered notoriety for its shortcomings. Zoom’s free conferencing app sent sensitive data to Facebook without notifying users, has been riddled with Zoombombing—unwanted intrusions into video conferences—and was scolded for misleading claims about end-to-end encryption capabilities. 

Zoom’s CEO apologized for falling short on privacy and security expectations and provided an itemized list of issues the company is working to address. Eric S. Yuan stated in a blog post that the platform was built primarily for enterprise customers. “We did not design the product with the foresight that, in a matter of weeks, every person in the world would suddenly be working, studying, and socializing from home,” he wrote.

Yuan noted that Zoom now has a much broader set of users employing its product in a “myriad of unexpected ways” that have presented challenges that were not anticipated when the platform was conceived. “These new, mostly consumer use cases have helped us uncover unforeseen issues with our platform,” he said.

Zoom’s spot in the limelight is largely due to the unprecedented nature of the pandemic. Stay-at-home rules under COVID-19 rapidly changed operations in significant ways across the globe. Based solely on the numbers, it’s reasonable to assume vulnerabilities such as systemic design flaws in internet-connected applications won’t disappear anytime soon. The number of active IoT devices is expected to grow to 24.1 billion in 2030, up from 7.6 billion at the end of 2019, according to Transforma Insights.

There’s no shortage of high-profile use cases that warrant scrutiny, and these serve to emphasize the value of risk assessments during the design phase and subsequent research into blind spots, noted Bahman Sistany, senior security analyst and researcher with Irdeto, a firm specializing in digital platform security and IoT-connected industries.

When Machine Design reported on building resiliency into firmware vulnerabilities earlier this year, the article (featuring Sistany’s insights) pointed to at least one software flaw that left a surveillance company’s network security wide open to having thousands of Wi-Fi passwords and usernames stolen. The breach pertained to Ring’s video doorbell communications and associated app, which sent users’ login information to the doorbell using an unencrypted Wi-Fi network during setup and gave potential hackers a window for attack.

Another notable case surfaced at the end of May, when a Florida Tech computer science student, Blake Janes, was awarded a $3,133.70 bug bounty from Google for identifying the flaw in its Nest series of devices. Janes alerted Google’s Nest, Ring (owned by Amazon), Merkury, Blink, Samsung and several other manufacturers that the mechanism for removing user accounts on their camera systems does not work as intended.

In the paper, “Never Ending Story: Authentication and Access Control Design Flaws in Shared IoT Devices,” Janes and his co-authors showed vendors how a shared account that had supposedly been removed from their camera systems could remain active, with continued access to the video feed. The problem stems from the fact that the same features that provide convenience can be abused to maliciously monitor the auditory, visual and location data shared between users. The expediency of closing a garage door remotely or checking in on the kids while away on a business trip is thwarted when the technology is abused by a disgruntled spouse who stalks an intimate partner electronically.

The Florida Tech research team noted the breach occurs largely because decisions about granting access are made in the cloud, not locally on either the camera or the smartphones involved. Of the 19 IoT devices the team evaluated against a user-interface-bound adversary attack, 16 suffered from flaws that enable unauthorized access after credential modification or revocation. In other words, malicious actors could retain access to camera systems indefinitely, covertly recording audio and video in a substantial invasion of privacy or an instance of electronic stalking.

Manufacturers prefer this cloud-based approach because it lets cameras transmit data without having to connect directly to every smartphone. But what may look like a harmless design shortcut can turn into a serious invasion of privacy when bad actors have a field day hacking a company’s security devices.

In the following Q&A, Machine Design once again turned to Sistany to weigh in on what the access control breaches mean for vendors and what it would take to remediate underlying problems.    

Machine Design: Do you think the video camera security flaw was overlooked or missed?

Bahman Sistany: Yes. This specific scenario demonstrates that propagation of state changes due to revocation or changes to access control lists was not properly handled in some of the tested IoT camera systems. However, it is important to note the difficulty of testing all scenarios: Interaction of the set of APIs with multiple backend services, in multiple ways, could create a huge number of permutations that need to be tested or formally reasoned about (where applicable). This highlights the importance of research such as that done by the Florida Tech team.

MD: What could the vendors do to fix the issue?

Sistany: The paper recommends that a companion app be used to display anomalous device behavior. The authors additionally recommend the use of credential insight algorithms to detect anomalies, especially when sharing credentials. While these recommendations may help mitigate the threat, better design by vendors, along with better testing of different scenarios, is more important.

The main flaw being exploited in the attack has to do with the divergence of the IoT API servers and the content servers. Revocation and other access control changes, initiated by a user through the companion app, are reflected in the API servers—but there is a lag until the content server is synched. This is reminiscent of the old TOCTOU (time-of-check to time-of-use) class of bugs caused by a race condition, involving a change to the state of one part of a system (the API server) and the use of the results of that change (in the content server).
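The race Sistany describes can be illustrated with a minimal sketch. All class and method names here are hypothetical, not any vendor's actual code: an API server holds the authoritative access list, while a content server authorizes against a lazily synced cached copy.

```python
class ApiServer:
    """Holds the authoritative access-control list (ACL)."""
    def __init__(self):
        self.acl = {"front-door-cam": {"owner", "guest"}}

    def revoke(self, camera, user):
        # The user's revocation lands here immediately...
        self.acl[camera].discard(user)


class ContentServer:
    """Streams video, but authorizes against a lazily synced ACL copy."""
    def __init__(self, api):
        self.api = api
        self.cached_acl = {c: set(u) for c, u in api.acl.items()}

    def sync(self):
        # ...but only reaches the content server at the next sync.
        self.cached_acl = {c: set(u) for c, u in self.api.acl.items()}

    def get_feed(self, camera, user):
        # Flaw: the stale cache, not the API server, decides access.
        if user in self.cached_acl.get(camera, set()):
            return "video-frames"
        return None


api = ApiServer()
content = ContentServer(api)

api.revoke("front-door-cam", "guest")               # owner revokes the share
print(content.get_feed("front-door-cam", "guest"))  # "video-frames": stale window
content.sync()
print(content.get_feed("front-door-cam", "guest"))  # None: revocation finally lands
```

Between the `revoke` call and the `sync`, the revoked account keeps its feed, which is the stale window the attack exploits.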

A better design would reduce the lag between the two services in a way that the change in the access control is reflected in the content server almost immediately (or at least in a shorter window).
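One way to get that shorter window, sketched here under the assumption of a push-based design (the class and method names are illustrative, not from any vendor's codebase): the API server notifies its content servers synchronously before acknowledging the revocation, so there is no stale window for access removal.

```python
class ContentServer:
    """Authorizes from a local ACL cache that the API server keeps current."""
    def __init__(self):
        self.cached_acl = {"front-door-cam": {"owner", "guest"}}

    def invalidate(self, camera, user):
        self.cached_acl.get(camera, set()).discard(user)

    def get_feed(self, camera, user):
        if user in self.cached_acl.get(camera, set()):
            return "video-frames"
        return None


class ApiServer:
    """Pushes access-control changes instead of waiting for a lazy sync."""
    def __init__(self, content_servers):
        self.acl = {"front-door-cam": {"owner", "guest"}}
        self.content_servers = content_servers

    def revoke(self, camera, user):
        self.acl[camera].discard(user)
        # Propagate before acknowledging the change: revocation takes
        # effect on every content server within the same request.
        for server in self.content_servers:
            server.invalidate(camera, user)


content = ContentServer()
api = ApiServer([content])
api.revoke("front-door-cam", "guest")
print(content.get_feed("front-door-cam", "guest"))  # None: revoked immediately
```

The asymmetry Sistany suggests falls out naturally here: revocations are pushed eagerly, while grants could still tolerate a lazy sync, since a delayed grant is an inconvenience rather than a privacy breach.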

MD: The paper notes that manufacturers designed their systems so users would not have to repeatedly respond to access requests. How can they remedy this vulnerability? 

Sistany: There is of course a need for balance between implementing good security practices on the one hand and high usability and responsiveness on the other. Frequent access requests and other questions to users, who at times won’t even know the right answer, quickly become annoying and ultimately result in less security.

However, in this scenario, there is no need for frequent access requests. A clear UI, combined with a no-lag system in which revocation and other access control changes are treated as high priority (limiting access should take precedence over granting it), would avoid frequent user involvement.

MD: Would cyber attackers be able to conduct malicious acts with access to the active user accounts? 

Sistany: The attack model the researchers are using is one where the attacker’s capabilities are those of a naive user—the so-called UI-bound adversary, who uses only the user interface without having to install anything else. Furthermore, the model assumes the attacker was a legitimate user at one point (e.g., an ex-partner) whose access is now being revoked or reduced, so a general cyberattack scenario would not apply here.

MD: What sort of encryption should be employed to keep bad actors from hacking the system?

Sistany: The flaws highlighted are not necessarily caused by weak encryption but rather by relaxed encryption (as in no encryption) and relaxed access control. For example, according to the paper, Xiaomi and Nest failed to ensure the security and privacy of camera feeds by storing decrypted content in a cache. The encryption services provided by the managed cloud frameworks should simply be used according to best practices and not relaxed.

MD: How can companies keep their systems safe?

Sistany: This is an ongoing process, and vendors need to be vigilant against potential vulnerabilities and design their systems based on the latest security and privacy research and best practices. In addition, AI and ML systems could help vendors deploy adaptive systems that change behavior as they learn how specific users use the system, and as they learn to differentiate between legitimate and illegitimate use.

MD: How can users ensure privacy and safe usage at their end? 

Sistany: With respect to the current scenario, a user can easily audit his or her system and see how long revocation or other changes take to materialize. This could be done either by creating another user account and testing how long a revocation takes to take effect, or by sharing credentials across multiple devices and testing how long a credential update takes to propagate.

Ultimately, vendors that have gone through independent and reputable audits of their own will certainly be more trusted by users who want to ensure their privacy and safety. 
