
Anatomy of a phishing campaign

This is the story of a phishing email that came across my desk. It's good to take a look at what the bad guys are doing sometimes. It's often not rocket science but it's handy to keep an eye on the simple techniques used. And if this isn't your day job you probably don't get forwarded a huge number of phishing emails, malware to analyse or dodgy sites to investigate. In fact hopefully you do your best to avoid all of those things.

The Attack Chain

So this particular phishing campaign started, as many others do, with a simple phishing email.

It's not an aggressive email, it's not selling itself too hard, no spelling mistakes, no funny looking URLs and it's pretty simple. There's only one link to click on.

Just a quick note here about clicking on links in nefarious emails. Don't do it unless you are ready to. This link could trigger some malware, it could be unique to the targeted email (so the attacker knows the email address is valid), it could point to a website containing illegal material or something that might provide you with a trip to HR to explain yourself.

Get the email to an isolated machine, one not sitting behind your corporate or personal IP address, and be ready to burn the machine if it all goes wrong.

This link goes to a pretty good copy of the Office 365 login page.

The website even had an SSL certificate. Sure, the URL was a random site that had nothing to do with Office 365, but we've been telling users to check for the green padlock for years and this random URL had one. The certificate was a free one from Let's Encrypt and the domain was a free domain, but the site looked pretty good.

When something like this comes to me, my first concern is to defend the people under my charge. There are plenty of tools out there to analyse websites, but in the first instance I tend to just use the network tab of the inspector in Firefox to track what the site is doing.

Analysing the site

These sorts of sites don't tend to be well configured. The attacker has a limited amount of time before their site will get taken down so they focus on getting what they want out of it. Fortunately in this case they left directory browsing enabled so we can see all of their files.
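Spotting an open directory listing from a response body is easy to automate, because the auto-index pages that Apache and nginx generate share a few telltale markers. A rough heuristic sketch in Python (the marker strings are the common ones I know of, not an exhaustive list):

```python
def looks_like_directory_listing(html: str) -> bool:
    """Heuristic check for Apache/nginx auto-index pages.

    These pages usually carry an 'Index of /...' title and a
    'Parent Directory' link; a real scanner would check much more.
    """
    markers = ["<title>Index of /", ">Index of /", "Parent Directory",
               "[To Parent Directory]"]
    return any(marker in html for marker in markers)
```

Point it at the body of a GET on the parent directory of the phishing URL and you'll know whether there's anything worth browsing.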

There's a zip file there that's got my interest, so let's download that. The zip file turns out to contain a complete copy of the fake Office 365 login page. Importantly, it contains the PHP source code, so we can see what it is doing.

I've uploaded the zip file here for anyone to take a look at. It's mostly fairly straightforward but there are a couple of interesting bits.

Defending Itself

Many of the pages call blocker.php at the beginning. This is a script aimed at reducing the chances of the site being analysed by Google and the like and flagged as a phishing site.

First it looks up the hostname of the client. If the hostname contains a word from a banned list, it sends a 404 error.

$hostname = gethostbyaddr($_SERVER['REMOTE_ADDR']);
$blocked_words = array("above","google","softlayer","amazonaws","cyveillance","phishtank","dreamhost","netpilot","calyxinstitute","tor-exit", "paypal");
foreach($blocked_words as $word) {
    if (substr_count($hostname, $word) > 0) {
        header("HTTP/1.0 404 Not Found");
        die("<h1>404 Not Found</h1>The page that you have requested could not be found.");
    }
}


Secondly it checks against a list of banned IP address patterns (regular expressions, matched against the client IP; note the unescaped dots, which match any character) and again returns a 404. These ranges again belong to major companies that might analyse malware and phishing sites.

$bannedIP = array("^66.102.*.*", "^38.100.*.*", "^107.170.*.*", "^149.20.*.*", "^38.105.*.*", "^74.125.*.*",  "^66.150.14.*", "^54.176.*.*", "^38.100.*.*", "^184.173.*.*", "^66.249.*.*", "^128.242.*.*", "^72.14.192.*", "^208.65.144.*", "^74.125.*.*", "^209.85.128.*", "^216.239.32.*", "^74.125.*.*", "^207.126.144.*", "^173.194.*.*", "^64.233.160.*", "^72.14.192.*", "^66.102.*.*", "^64.18.*.*", "^194.52.68.*", "^194.72.238.*", "^62.116.207.*", "^212.50.193.*", "^69.65.*.*", "^50.7.*.*", "^131.212.*.*", "^46.116.*.* ", "^62.90.*.*", "^89.138.*.*", "^82.166.*.*", "^85.64.*.*", "^85.250.*.*", "^89.138.*.*", "^93.172.*.*", "^109.186.*.*", "^194.90.*.*", "^212.29.192.*", "^212.29.224.*", "^212.143.*.*", "^212.150.*.*", "^212.235.*.*", "^217.132.*.*", "^50.97.*.*", "^217.132.*.*", "^209.85.*.*", "^66.205.64.*", "^204.14.48.*", "^64.27.2.*", "^67.15.*.*", "^202.108.252.*", "^193.47.80.*", "^64.62.136.*", "^66.221.*.*", "^64.62.175.*", "^198.54.*.*", "^192.115.134.*", "^216.252.167.*", "^193.253.199.*", "^69.61.12.*", "^64.37.103.*", "^38.144.36.*", "^64.124.14.*", "^206.28.72.*", "^209.73.228.*", "^158.108.*.*", "^168.188.*.*", "^66.207.120.*", "^167.24.*.*", "^192.118.48.*", "^67.209.128.*", "^12.148.209.*", "^12.148.196.*", "^193.220.178.*", "", "^198.25.*.*", "^64.106.213.*");
if(in_array($_SERVER['REMOTE_ADDR'],$bannedIP)) {
     header('HTTP/1.0 404 Not Found');
} else {
     foreach($bannedIP as $ip) {
          if(preg_match('/' . $ip . '/',$_SERVER['REMOTE_ADDR'])){
               header('HTTP/1.0 404 Not Found');
               die("<h1>404 Not Found</h1>The page that you have requested could not be found.");
          }
     }
}

The third section looks at the user agent string and rejects any kind of bot. It also rejects PycURL, which is interesting because it is often used by scripts that analyse links automatically.

if(strpos($_SERVER['HTTP_USER_AGENT'], 'google') or
   strpos($_SERVER['HTTP_USER_AGENT'], 'msnbot') or
   strpos($_SERVER['HTTP_USER_AGENT'], 'Yahoo! Slurp') or
   strpos($_SERVER['HTTP_USER_AGENT'], 'YahooSeeker') or
   strpos($_SERVER['HTTP_USER_AGENT'], 'Googlebot') or
   strpos($_SERVER['HTTP_USER_AGENT'], 'bingbot') or
   strpos($_SERVER['HTTP_USER_AGENT'], 'crawler') or
   strpos($_SERVER['HTTP_USER_AGENT'], 'PycURL') or
   strpos($_SERVER['HTTP_USER_AGENT'], 'facebookexternalhit') !== false) {
    header('HTTP/1.0 404 Not Found');
    exit;
}

This blocking script allows people targeted by the campaign to see the site but hides it from automatic analysis by the big players.
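Stripped of the PHP, the user-agent check boils down to a substring test. A rough Python equivalent of the same logic (word list lifted from the snippet above; this glosses over PHP's strpos/false comparison quirks, but the intent is identical) makes it easy to see which clients get the fake 404:

```python
# Word list copied from the attacker's blocker.php user-agent check.
BLOCKED_UA_WORDS = ["google", "msnbot", "Yahoo! Slurp", "YahooSeeker",
                    "Googlebot", "bingbot", "crawler", "PycURL",
                    "facebookexternalhit"]

def would_block(user_agent: str) -> bool:
    """True if the blocker would answer this client with a fake 404."""
    return any(word in user_agent for word in BLOCKED_UA_WORDS)
```

An ordinary browser user agent sails straight through, which is exactly the point: real victims see the page, automated scanners see a 404.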

When I tested how well it works, Google and Bing were unable to crawl the site and even Google Translate couldn't see the pages, so it's pretty effective.

Retrieving Credentials

Creds are both stored on the website as a text file (in this case 2.txt) and also emailed to an account. I'm not going to put the details of the email account here because it might have been a compromised email address in the first place.

Retrieving the creds from the emails would be one thing; however, that does leave the attacker open to being tracked down. Google could hand over the IP address history of the person logging in to retrieve the creds, and Gmail has a pretty good track record of keeping detailed logs and handing them over.

Retrieving the creds directly from the website leaves a much smaller log trail. Sure, the server knows the IP address of each and every access, but it would be pretty hard to prove that any particular person was behind the campaign just because they visited a URL that was open to the internet without any authentication.

Phishing campaigns are not careful with your details. They don't care if your details get into the wrong hands (they are the wrong hands), so assume if your details have been phished that any number of people on the internet now have them.

While monitoring the site, other versions of the script started putting creds into 3.txt. Presumably this was a third phishing campaign.

Fuzzing a phishing site for files like 1.txt, 2.txt, one.log or two.dat is likely to reveal the credentials compromised so far. If you are protecting a company, you may be able to see if anyone working for you was hit.
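That kind of fuzz is only a few lines of Python. The candidate filenames below are just the patterns seen on this site plus obvious variations, and the whole thing is a sketch; only run something like this against a site you are authorised to probe:

```python
import itertools
from urllib import request, error

# Candidate dump-file names based on the patterns seen here; illustrative only.
STEMS = ["1", "2", "3", "one", "two", "three"]
EXTS = [".txt", ".log", ".dat"]

def candidate_paths():
    """Cartesian product of stems and extensions, e.g. '2.txt', 'two.dat'."""
    return [stem + ext for stem, ext in itertools.product(STEMS, EXTS)]

def probe(base_url, timeout=5):
    """Return the candidate paths that answer HTTP 200 under base_url."""
    found = []
    for path in candidate_paths():
        url = base_url.rstrip("/") + "/" + path
        try:
            with request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    found.append(path)
        except (error.HTTPError, error.URLError):
            pass  # 404s and dead hosts just mean "not here"
    return found
```

In practice you would wire this up to whatever wordlist your fuzzer of choice uses; the point is just how little effort it takes to find these open credential dumps.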

As I monitored the site, they also uploaded fake Dropbox, Gmail, iCloud and Salesforce login pages, so I'm guessing further campaigns were planned.


This phishing campaign seemed to be quite targeted. Rather than mass mailing an entire organisation, the targeted individuals - based on the emails found - were all CFOs of major organisations. At least that's what their LinkedIn profiles all said.

It would seem that more effort went into finding the right email addresses than correctly setting up the web server.

Site Management

Seller150.php isn't referenced by any of the files used in the phishing pages and seems pretty big for a PHP file, so let's take a look at that one.

Yup, it's the WSO webshell. I've uploaded it here and it's interesting for two reasons.

Firstly, it's obfuscated PHP. The chances are that a quick scan for malicious files wouldn't have found this.
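That said, obfuscated PHP tends to lean on the same few primitives, so even a crude pattern count flags a lot of webshells. A toy illustration of that idea in Python (the pattern list is my own, not what any real scanner uses):

```python
import re

# Idioms common in obfuscated PHP: eval of decoded payloads, variable
# function calls. Purely illustrative; real scanners use richer signatures.
SUSPICIOUS_PATTERNS = [
    r"eval\s*\(",
    r"gzinflate\s*\(",
    r"gzuncompress\s*\(",
    r"base64_decode\s*\(",
    r"str_rot13\s*\(",
    r"\$\w+\s*\(\s*\$\w+",   # variable function call: $a($b)
]

def suspicion_score(php_source: str) -> int:
    """Count how many suspicious idioms appear in a PHP source string."""
    return sum(1 for pattern in SUSPICIOUS_PATTERNS
               if re.search(pattern, php_source))
```

A classic one-liner shell like eval(gzinflate(base64_decode($p))) trips three of these at once, while ordinary PHP scores zero.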

Secondly, they haven't bothered with a password. WSO has some pretty clever ways to hide itself - hiding in 404 error messages, password boxes that don't appear until a key like TAB is pressed, etc. They didn't bother with any of it.

In case you don't know, a web shell like this is a huge hole in your security: it allows arbitrary command execution, file management and much more. It's a great tool in some ways. There are many versions out there, and this one again uses header information to hide from Google and bots that might try to analyse or classify it.

End game

Well, once someone has a shell on your server, it's game over. Not much else to be said there really.

So to sum up, phishing campaigns are using SSL sites that look realistic, are poorly maintained, and share your creds with the whole internet.

