When Did Google Start Policing the Internet?
2010-09-01 19:19:48
About a month ago, one of the machines in the office got a nasty virus. We managed to get it cleaned up pretty quickly and moved on. What no one considered was that once one machine is compromised, lots of other, more subtle security issues might still be lurking even though the initial mess is cleaned up.
Last week we found one of those other compromises on the Rackspace Cloud server. Apparently the virus rifled through the designer's saved FTP passwords and sent them off to another server. We should have changed her passwords on the FTP site, but didn't, and whoever controlled that server rewrote a whole mess of JavaScript files, appending a line of code that downloaded malware onto visitors' machines. The malware itself wasn't hosted on our server, but the line of code to deliver it was definitely appended to our scripts.
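Injections like this are usually easy to spot once you know to look: a line tacked onto the end of otherwise-normal script files. Here's a minimal sketch of the kind of scan that would have caught ours sooner. The patterns and the `www.example.com` whitelist are hypothetical placeholders, not the actual strings from our incident:

```python
import os
import re

# Hypothetical signatures of injected code. Real injections vary, but an
# appended document.write of a hidden iframe, or a <script src=...> pointing
# at an unfamiliar domain, are common shapes. The whitelisted domain below
# (www.example.com) is a placeholder for your own site's domains.
SUSPICIOUS = [
    re.compile(r"document\.write\(['\"]<iframe", re.I),
    re.compile(r"<script[^>]+src=['\"]https?://(?!www\.example\.com)", re.I),
]

def scan_file(path):
    """Return the 1-based line numbers of suspicious lines in one file."""
    hits = []
    with open(path, encoding="utf-8", errors="replace") as f:
        for n, line in enumerate(f, 1):
            if any(p.search(line) for p in SUSPICIOUS):
                hits.append(n)
    return hits

def scan_tree(root):
    """Walk a directory tree and map each tainted .js file to its hit lines."""
    report = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            if name.endswith(".js"):
                path = os.path.join(dirpath, name)
                hits = scan_file(path)
                if hits:
                    report[path] = hits
    return report
```

Run against the web root, it prints nothing for clean trees and flags the exact file and line otherwise. It's no substitute for restoring from a known-good repository, but it's a cheap cron job.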
No problem -- change the passwords, restore the files from our repository in the office and all is good. Until today.
It turns out that Google crawled the site while the malware link was live, and today I was surprised to get a call from a client telling me that one of the sites we manage for them is being blocked as a 'Reported Attack Page!'
I'm all for free enterprise, but I'm not entirely comfortable with a single company, Google in this case, deciding what's good or bad and forcing me to create an account to get my site out of hock. In our case, the developer had already corrected the problem, but Google didn't automatically go back and notice the fix. There's no indication how long it will take Google to decide that the site is no longer compromised, and no real path to contesting their decision beyond asking them to, pretty please, look again.
Granted, as Firefox says, 'Google scans millions of websites and identifies those that are, or recently were, hosting or distributing badware. If Google later determines a site is clean, Firefox no longer reports it as an Attack Site.'
I'm not sure what the alternative is. After all, my own team managed to open a crack wide enough for the bad guys to stick a knife in and link back to their malicious servers, so how can we expect any better from non-technical people? More and more of them are managing servers with automated tools that can go sour really fast.
But… this doesn't feel the same as dealing with the committee-driven blacklist groups for open relays. It takes a lot longer to get out of hock, and I'm having to hand over a lot more of my own information to a commercial enterprise before they'll let the world see my websites again, even though we fixed the problem faster than they blocked it…