Printers are still a big risk

In a recent blog post, Tenable researchers described vulnerabilities in HP business printers.  This is just one vendor, but I’m sure a close look at other printer products would reveal problems across most (if not all) manufacturers.

As we evaluate and manage IoT risk, we should never forget about printers, fax machines, and other devices that have long lived on our networks.  How are you controlling the risk?

Video and speech are not long for our trust

Social engineering and biometrics forgeries are common.  However, with training, we can usually detect bad activity or mitigate biometrics risk with a second authentication factor.  But things are changing.  In less than five years, fake voices and videos that sound and look like people we know might propagate across the Internet.  How this changes fake news and social engineering depends on how we manage it now… not in five years.

You might be thinking that I am talking about technology like that used in movies like Avatar.  Not so.  Take a moment and watch this video in which researchers use actual faces to fool the viewer.  Anyone watching without knowing to look for something out of the ordinary would likely never realize they were being hoodwinked.

This is dangerous in at least two ways.  First, fake news is already causing serious issues in politics and social interactions.  Any group with the technology to create fake videos can convince millions that an adversary said or did something he or she never did.  The subject of the attack faces a nearly impossible task: convincing those millions that something never happened when their own eyes tell them differently.

The second risk lies in the use of videos and speech to communicate with employees or family.  How will we know which video conferences are real and which are fake?  How will we know if a video sent to our email or SMS device represents a real person?

We can’t stop the tech.  It is already too far down the road, so we should begin today letting our employees know about these new technologies.  If we allow speech, facial, or video recognition for authentication, we must ensure another strong factor is also used.  If we believe our employees will use video messages or video conferencing for sensitive conversations, we must have something in place to verify the authenticity of those messages.  And our employees must be trained to look for anomalies in whatever makes it through our filters.  How we do this is not yet clear.  Something else we need to address… soon.
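To ground the second-factor point, here is a minimal sketch of pairing a biometric match with a time-based one-time password (TOTP, RFC 6238) so that a spoofed face or voice alone never grants access.  The biometric score and its 0.9 threshold are hypothetical stand-ins for whatever recognizer is in use; the TOTP math uses only the Python standard library.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // period
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per RFC 4226
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

def authenticate(biometric_score: float, submitted_code: str, secret_b32: str) -> bool:
    """Grant access only when BOTH factors pass.

    biometric_score is a hypothetical 0.0-1.0 match score from the
    face/voice recognizer; 0.9 is an illustrative threshold.
    """
    biometric_ok = biometric_score >= 0.9
    totp_ok = hmac.compare_digest(submitted_code, totp(secret_b32))
    return biometric_ok and totp_ok

# A strong face match still fails without the correct one-time code.
secret = base64.b32encode(b"shared-enrollment-secret").decode()
print(authenticate(0.97, totp(secret), secret))  # True: both factors pass
print(authenticate(0.97, "000000", secret))      # False: wrong one-time code
```

Even a perfect deepfake of a user’s face fails the second check without the user’s enrolled TOTP secret.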

Biometrics theft is not necessarily the end of the world

Theft of biometrics data is becoming more frequent.  A recent example is the breach of Avanti point-of-sale systems.  Although this is a problem, it isn’t likely as high risk as many believe.  Stolen biometrics data is harder to use than many assume; in most cases, the effort required is too high given the attacker’s financial returns.  So possible theft of biometrics data shouldn’t be a reason to stop using biometrics as an authentication factor.

When a user registers a physical attribute with a biometrics solution, the attribute’s characteristics are converted to a numeric value.  This value is encrypted and stored.  According to Larry Greenemeier, in an article written for Scientific American, “Misuse of stolen digital fingerprint files is hardly that straightforward and would involve cracking encryption codes, reverse-engineering data files and several other complicated procedures that are probably not worth the effort.”
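As a rough illustration of that flow, here is a minimal sketch, assuming the sensor yields a numeric feature vector: the vector is serialized, encrypted at rest, and later compared against a fresh reading by distance.  The feature values, the Fernet scheme from Python’s cryptography package, and the 0.5 distance threshold are illustrative assumptions, not any vendor’s actual pipeline.

```python
import json
import math

from cryptography.fernet import Fernet  # pip install cryptography

# Enrollment: a hypothetical feature vector derived from the physical attribute.
enrolled_vector = [0.12, 0.87, 0.33, 0.91, 0.45]

key = Fernet.generate_key()  # in practice, held in an HSM or key vault
vault = Fernet(key)

# The numeric template is stored only in encrypted form.
stored_template = vault.encrypt(json.dumps(enrolled_vector).encode())

def verify(candidate_vector, threshold=0.5):
    """Decrypt the stored template and compare by Euclidean distance.

    The 0.5 threshold is illustrative; real systems tune it against
    false-accept and false-reject rates.
    """
    template = json.loads(vault.decrypt(stored_template))
    return math.dist(template, candidate_vector) <= threshold

print(verify([0.11, 0.88, 0.35, 0.90, 0.44]))  # True: close to the enrollment
print(verify([0.90, 0.10, 0.80, 0.20, 0.75]))  # False: a different person
```

A thief who copies stored_template without the key holds only ciphertext, which is exactly Greenemeier’s point about the effort involved.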

The biggest problem is not the actual risk.  It is the public’s perception of the risk.  We have enough challenges getting many people to accept biometrics without spreading misinformation about the risk.  Yes, we need to protect biometrics data.  Yes, theft of this data elevates risk.  However, biometrics alone should never be used to protect highly sensitive information, and the effort needed to exploit stolen customer biometrics data is likely too high for it to become a common attack.

There is an exception, however, that might elevate the risk above acceptable levels.  What if the attacker steals the imprint information passing between the sensor and the biometrics verification algorithm?  Any solution selected to protect our customers or our highly sensitive information must be designed in ways that make this kind of attack highly improbable.
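As one illustrative design (assumed names and parameters throughout, not any product’s actual protocol), the sensor can tag each fresh reading with a nonce, a timestamp, and an HMAC under a per-device key, letting the verifier reject anything forged, stale, or previously seen.  A production design would also encrypt the reading itself; this sketch shows only the integrity and freshness checks.

```python
import hashlib
import hmac
import json
import os
import time

DEVICE_KEY = os.urandom(32)  # assumed: provisioned into the sensor at manufacture
seen_nonces = set()          # verifier-side replay cache
MAX_AGE_SECONDS = 5

def sensor_send(reading: bytes) -> dict:
    """Sensor side: wrap a fresh imprint with a nonce, timestamp, and HMAC tag."""
    payload = json.dumps({
        "reading": reading.hex(),
        "nonce": os.urandom(16).hex(),
        "ts": int(time.time()),
    })
    tag = hmac.new(DEVICE_KEY, payload.encode(), hashlib.sha256).hexdigest()
    return {"payload": payload, "tag": tag}

def verifier_accept(message: dict) -> bool:
    """Verifier side: reject forged, stale, or replayed imprint data."""
    expected = hmac.new(DEVICE_KEY, message["payload"].encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, message["tag"]):
        return False  # tampered with in transit
    fields = json.loads(message["payload"])
    if abs(time.time() - fields["ts"]) > MAX_AGE_SECONDS:
        return False  # too old: likely a replay
    if fields["nonce"] in seen_nonces:
        return False  # exact replay of a captured message
    seen_nonces.add(fields["nonce"])
    return True       # fresh, authentic reading

msg = sensor_send(b"\x01\x02\x03\x04")  # stand-in for real imprint bytes
print(verifier_accept(msg))  # True: first, fresh delivery
print(verifier_accept(msg))  # False: the captured message replayed
```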