
Lolita City, and other alleged child porn websites, attacked by Anonymous



The hacktivist collective Anonymous has declared war on internet paedophiles, attacking websites it accuses of carrying child abuse images and videos, and declaring that anyone who hosts, promotes or supports child pornography is a target.
In an operation dubbed "Operation DarkNet" or "OpDarkNet", the loosely-knit group has claimed responsibility for taking more than 40 websites accused of sharing child abuse material offline, and has published details of 1,589 alleged paedophiles who had been using the websites.
In particular the hackers targeted a site called "Lolita City", and crashed the servers of its web hosting service Freedom Hosting. In a statement, Anonymous called "Lolita City" one of the "largest child pornography websites to date containing more than 100GB of child pornography".
Here is part of the statement from Anonymous that was published on the internet:
Anonymous statement
Did the Anonymous hackers do the right thing?
I don't think so. Their intentions may have been good, but take-downs of illegal websites and sharing networks should be done by the authorities, not internet vigilantes.
When 'amateurs' attack there is always the risk that they will compromise an existing investigation, prevent the police from gathering the evidence they need for a successful prosecution, or make it difficult to argue that evidence has not been corrupted by hackers.
The Anonymous hackers may feel they have done the right thing, but they may actually have inadvertently put more children at risk through their actions.
In addition, it's easy to see how releasing usernames could put entirely innocent parties at risk. After all, how likely is it that members of such websites will be using their own real names as usernames? A published handle that happens to match an innocent person's name could direct suspicion at entirely the wrong people.
If anyone discovers evidence of child abuse online they should report it to the appropriate authorities, not take the law into their own hands.
At the same time, I recognise that members of the public may feel frustrated that action isn't taken quickly enough against online paedophiles, and not realise how long it can take for an investigation to take place and evidence to be gathered.
So, it's an interesting question - do you think Anonymous did the right thing by shutting down child porn websites?
How to properly report online child abuse
If you have information about online child abuse that you wish to report to the authorities, visit the websites of the Virtual Global Taskforce, CEOP (the Child Exploitation and Online Protection Centre) or the IWF (Internet Watch Foundation), each of which provides a reporting mechanism.

Leak of kids' social services info earns Aberdeen City Council £100k fine



Aberdeen City Council has been hit with a £100,000 fine (about $150k) by the Information Commissioner's Office (ICO), after an employee took sensitive files home and accidentally uploaded them to a public website.
The data, which included information on vulnerable children and details of alleged crimes, was on display for three months before it was spotted and taken down.
The incident began in November 2011, when an unnamed female council employee worked on council files at home on her own second-hand computer. These files apparently included minutes of meetings and detailed reports relating to the care of children.
The investigation into the incident failed to pin down whether the documents were accessed via remote access to council email or carried home on a USB stick. At some point after being copied to the My Documents folder on her laptop, however, the files were posted online by unspecified software, thought to have been installed by a previous owner of the machine and either started automatically or accidentally activated by the hapless employee.
Once online, the files went unnoticed until February 2012, when another council employee stumbled across them while searching for their own name; they were then promptly removed from the website. The ICO report does not specify exactly where the four files were posted.
The ICO found huge gaps in the council's policies regarding home working, which seem to have focused entirely on health and safety with no regard for the security of sensitive data, and even those policies which had been drafted were not being enforced:

In this case Aberdeen City Council failed to monitor how personal information was being used and had no guidance to help home workers look after the information. On a wider level, the council also had no checks in place to see whether the council’s existing data protection guidance was being followed.
The Data Protection Act, found to have been breached in this case, allows for fines of up to £500,000 for the most serious data breaches.
This case highlights a wealth of common problems with working from home and BYOD (Bring Your Own Device) practices. Any business or institution dealing with sensitive data - which is just about anyone really - needs to think carefully about how that data is secured when it's being accessed remotely by staff, just as much as when handing it over to third parties.
Strict and comprehensive policies need to be put in place, clearly demonstrated to staff and strongly enforced with both technical and regulatory controls.
The rules need to cover what data can be accessed, from where and by whom, how data is accessed, transferred and handled, and what systems can be used to work on data.
The BYOD issue usually focuses on smartphones and tablets being brought in to work, but personal laptops remain the default tool to enable home working. Imposing the same level of application control, anti-malware and other security features is far more difficult than in systems built and monitored by dedicated IT staff.
So staff training is also vital - from the sound of this case, where the employee in question appears to have been unaware of what was running on her pre-owned laptop, it seems that IT skills were not considered an important part of her job, but people need to take more care to know what the tools they are using are capable of before they blindly trust them with information which could be incredibly sensitive to leakage.
Since the Aberdeen incident, auditing and assessment by the ICO earlier this year have noted some improvements, although there is still some way to go to achieve a satisfactory level of security.
Hopefully this good-sized fine will be an eye-opener to anyone dealing with personal information, particularly local government where data sensitivity is high but IT infrastructure tends to be disparate and creaky and skills are often minimal.
They need to wake up to the dangers of home-working and BYOD, and make sure they do all they can to minimise the risk.

Cyberextortion by US gov, or simple P2P security lapse by medical firm?



The ongoing data leak saga between medical firm LabMD and "The Man," in the form of the Federal Trade Commission (FTC) of the United States, has entered its next stage.
This is a curious story that would be amusing were its import not so serious.
If everyone who has contributed to the story is to be believed, it unfolded over a five year period, and goes something like this (remember, this is not necessarily what happened, but what has been variously alleged):
  • In 2008, Tiversa, a "Peer to Peer (P2P) intelligence services" company out of Pittsburgh, Pennsylvania, finds a stash of Personally Identifiable Information (PII) from over 9000 patients of LabMD. Apparently, a 1,718-page spreadsheet of health insurance billing information was accessible via a P2P file sharing network.
  • LabMD, out of Atlanta, Georgia, declines to deal with Tiversa's complaint, on the grounds that Tiversa is using the data in its possession to pressure LabMD into inking a deal for security consultancy services.
  • In 2009, Tiversa decides to hand over the data to the authorities.
  • The FTC gets involved in 2010, asking LabMD to provide documents so it can review the case.
  • LabMD digs its heels in, refusing to agree to a so-called consent decree imposing a security audit every two years for the next 20 years.
  • In 2011, the FTC begins a formal investigation.
  • LabMD files a petition to quash the investigation, on the grounds that Tiversa is not an objective witness.
  • The FTC disagrees, though not without one dissenting opinion stating that "the commission should avoid even the appearance of bias or impropriety by not relying on [Tiversa's] evidence or information in this investigation."
  • On 29 August 2013, the FTC files a formal complaint against LabMD, for "failing to protect consumers' privacy."
  • On 17 September 2013 (which, of course, is the one part of the story that hasn't actually happened yet), Michael J. Daugherty, the CEO of LabMD, will publish a book about the saga so far, The Devil Inside the Beltway [*].
Daugherty's doughtily-named book claims to document "a government power grab and intimidation that if not for the fact that it is all real, would make for a brilliant novel."
The book's marketing material says that what "began with medical files taken without authorization from a laboratory, turned into a government supported extortion attempt," and vows "to ensure that this does not happen to any other American."
Wow!
I'm going to sit on the fence here, and decline to take sides (I'll leave that to you, our readers, in the comments below).
Instead, I'll just point out that there is one thing that doesn't seem to be in doubt: the fact that the offending data was, indeed, grabbable via P2P, five long years ago.
And, as the FTC very plainly points out in its latest communication on this issue:
P2P software is commonly used to share music, videos, and other materials with other users of compatible software. The software allows users to choose files to make available to others, but also creates a significant security risk that files with sensitive data will be inadvertently shared. Once a file has been made available on a P2P network and downloaded by another user, it can be shared by that user across the network even if the original source of the file is no longer connected.
How serious, then, can it possibly be that this data "got out" back in 2008?
How long does the risk last after a data leak?
Well, according to the FTC:
[I]n 2012 the Police Department [in Sacramento, California,] found LabMD documents in the possession of identity thieves. These documents contained personal information, including names, Social Security numbers, and in some instances, bank account information, of at least 500 consumers. The complaint alleges that a number of these Social Security numbers are being or have been used by more than one person with different names, which may be an indicator of identity theft.

Twitter makes good on promise to make abuse reports easier and more obvious

Twitter has lived up to its promise, made a month ago, to make it easier and more obvious how to report abusive messages published on its microblogging site.
The combination of Twitter's short messages, high volumes and "always logged in" style of use makes it easy for internet pests (and worse) to pepper victims with the internet abuse equivalent of birdshot from an auto-repeating shotgun that never runs out of ammunition.
UK journalist Caroline Criado-Perez found this out to her personal alarm recently.
She'd run a campaign promoting the idea that a well-known British woman should be included on UK banknotes.
With social reformer Elizabeth Fry giving way to Sir Winston Churchill on the £5 note, Criado-Perez thought that another woman might usefully be drafted in to take the place of one of the men on the other banknotes.

When it was announced that the author Jane Austen would grace the £10 note, not everyone was happy with the result, and Criado-Perez bore the brunt of at least one detractor's considerable anger.
She was swamped with a giant wave of abusive Tweets, reaching a peak rate of close to one a minute and allegedly including threats of sexual violence, for which a 21-year-old man was arrested in Manchester, UK.
A petition quickly started to urge Twitter to make it easier for victims of this sort of online rage to report their problems.
It worked: Twitter agreed, and now it's easier to do something about problem tweets.
Just click on the ...More tag under a Tweet, and you'll see Report Tweet:

→ You have to be logged in to get the Report option, which makes sense: tying complaints to a specific account makes it easier for Twitter to deal with abusers of the abuse button, and prevents the abuse queue from being flooded with anonymously-reported complaints. And you can't report your own Tweets, which makes sense too. (If you think they should be deleted, just delete them!)
The next step is to choose your reason for reporting the Tweet, which defaults to Abusive:

Note that you can also report two other common Twittersphere problems in a similar way, namely Spam and Compromise. (The latter is a good way to help your friends if you realise before they do that someone has nabbed their password and is now misusing their account.)
There's still more to abuse reports, since you need to say what sort of badness the offending Tweet has displayed:

And then you are asked to provide yet more information, such as this for the Report an ad option:

It sounds long-winded when shown here, but the system does make you think about what you want to report, and it's definitely helpful to Twitter to pre-filter the abuse reports into various categories so that its responses can be prioritised.
You can argue that Twitter ought to have had this all along, and decry the microbloggers for being slow to the abuse-prevention table.
Or you can chalk it up as a victory for common sense: say "Well done" to the organisers of the petition for not trying to prove their point by more, well, pointed means (such as hacking back, or some sort of counterabuse), and say, "Thanks, Twitter, for listening and reacting quickly."

Stolen cellphone databases switched on by major US carriers



A friend was walking down a Manhattan sidewalk a year ago, staring into his iPhone in the now-ubiquitous, data-engrossed trance of a smartphone user.
A group of teenagers walked up to him. One gently plucked the phone from my friend's hand and jogged away, leaving him blinking, thinking for a brief moment that it was all just a joke.
It wasn't. That's the last he saw of that gizmo.
The CTIA, a wireless industry trade group, on Wednesday moved to stop smartphone thieves like those teenagers in their tracks by switching on databases to block stolen phones from being used on the four major US networks: AT&T, T-Mobile, Verizon and Sprint.
The initiative was first announced in April, when the US Federal Communications Commission teamed with police chiefs from major US cities and CTIA representatives to announce a database that would put a lid on the burgeoning number of smartphone thefts.
As MSNBC reported in October, New York City Police say that more than 40 percent of all robberies now involve cellphones.
As goes New York, so goes the rest of the country. Cellphone thefts in Los Angeles are up 27 percent over last year. Transit system authorities in cities such as Boston and San Francisco are launching ad campaigns to alert riders to the danger of thieves preying on those who casually use, and get engrossed in, their phones in public.
Carriers have up until now blocked SIM cards on stolen phones, preventing unauthorized calls from going through.
That was easy to get around: thieves would simply install a new SIM card and sell the phone on the second-hand market.
The new databases will instead block the International Mobile Equipment Identity (IMEI) number, a unique identifier that stays in the phone regardless of the SIM card being used.
Chris Guttman-McCabe, vice president of regulatory affairs at CTIA, told IDG that the goal is to shut down the market for stolen phones:
"The goal is to not only protect the consumer by cancelling the service, but by ultimately protecting the consumer by drying up the after market for stolen phones."
The CTIA says that as of Wednesday, AT&T and T-Mobile will offer a joint database, given that they use more or less the same network technology - GSM - and their handsets can easily be used on each other's networks.
Verizon and Sprint use a different network technology, CDMA, and will offer their own databases.
Guttman-McCabe told IDG that by the end of November 2013, the four carriers will combine their databases so that "the vast majority" of US cellphone users will be covered.
He added that smaller carriers such as Nex-Tech and Cellcom plan to implement the database, while work is under way to link the US database with an international database maintained by the GSM Association, to prevent stolen phones from being shipped overseas and used on foreign networks.
The cost of a stolen phone goes beyond the hardware itself: there's also the sensitive data stored on it, which can include contacts, photos, music, email, bank account numbers, and stored passwords.
Being able to prevent a stolen phone from being used to place unauthorized calls is a good step, but as the CTIA emphasized, consumers should still take steps to protect their phone data.
CTIA President and CEO Steve Largent said in a statement:
"While the GSM and CDMA databases are important, consumers also play a key role in protecting their information and preventing smartphone theft. By using passwords or PINs, as well as remote wiping capabilities, consumers can help to dry up the aftermarket for stolen devices. Today’s average wireless user stores a lot of personal information on a mobile device, such as pictures, video, banking and other sensitive data. It's important consumers know that by taking simple precautions, such as downloading a few apps, they can protect their information from unauthorized users."
The organization has guidelines here on how to prevent smartphone theft and protect personal information.
One thing I'd add to the CTIA's guidelines is to know your phone's identification number, given that your carrier may not have it on file.
The IMEI might be located on the box the phone came in, or you can find it by removing the cover from the back of the phone and taking out the battery. The number should be printed on the inside compartment.
In many cases you can obtain your IMEI by dialing *#06#. Vendors such as Apple have provided advice on how to find out the IMEI number on their phones.
That won't do you much good if the phone has already been stolen, though, so it's a good idea to write the number down or record it somewhere else safe.
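As an aside, the last of an IMEI's 15 digits is a check digit computed with the Luhn algorithm, so if you do keep a record of your IMEIs, a quick sanity check is easy to script. Here's a minimal sketch in Python (the sample number is a commonly cited test IMEI, not a real handset's):

```python
def imei_is_valid(imei: str) -> bool:
    """Check an IMEI's Luhn check digit (15 digits, last one is the check)."""
    digits = [int(c) for c in imei if c.isdigit()]
    if len(digits) != 15:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:      # double every second digit, working right to left
            d *= 2
            if d > 9:
                d -= 9      # digit-wise sum of the doubled value
        total += d
    return total % 10 == 0

print(imei_is_valid("490154203237518"))   # sample IMEI -> True
```

A passing check only means the number is well-formed, of course; it says nothing about whether it actually belongs to your phone.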
If all goes according to plan, phone thieves will soon walk away with nothing more than a useless brick.

Database of illegal downloaders - are British ISPs to become the "music NSA"?



The major UK broadband providers are being asked to create a database of customers who illegally download films, music and other protected content from the internet.
This latest move is likely born of frustration with the Digital Economy Act 2010, which was designed to give more power in the fight against piracy but has seen delays push its full implementation back to 2014 at the earliest.
If Virgin Media, BT, BSkyB and TalkTalk sign off on the proposal, it's anticipated that the data they collate could then be used to serve warning letters, apply for disconnections or prosecute repeat offenders.
Curbing digital piracy will be one of the topics discussed when record labels and their trade association, the BPI, meet with Prime Minister David Cameron at a Downing Street breakfast on September 12.
Film and music companies will ask broadband providers to sign up to a voluntary code which will, arguably, see them tasked with policing the internet on the behalf of the content creation industry. The Guardian reports that negotiations have already been happening for months with the BPI and the British Video Association, of which the BBC and Hollywood studios are members.
The voluntary code, should it be adopted, will see internet service providers (ISPs) tasked with creating a database of repeat offenders. These offenders would be sent warning letters stating that their internet address had been used for illegal downloads.
The letters would warn of further consequences for continued copyright infringement and would point users towards legal services for their film and musical needs.
Should the offenders ignore the letters then sanctions would be imposed, such as having access to certain sites blocked, slowing of internet connections or even prosecution.
There are some potential issues for ISPs should they adopt these measures though. Firstly, if they were to create and maintain such a database then who would pay for it? Would they pick up the tab or would it be funded by the content creators themselves?
Personally I suspect it would be option three – the consumer – who would see an increase in their broadband costs, irrespective of whether they themselves had downloaded anything illegally or not.
Secondly, keeping a database of warning notices could put the broadband providers on the wrong side of the Data Protection Act which states that companies can only store information about individuals for commercial reasons.
A spokesperson for TalkTalk told the Guardian that while they would, "like to reach a voluntary agreement" their "customers' rights always come first" and they would "never agree to anything that would compromise them."
A spokeswoman for Virgin Media also had similar concerns, commenting that the current proposal is "unworkable."
When I contacted the BPI and asked them for their views on both of these issues I was told the planned meeting at No.10 was solely in response to an invitation from David Cameron after he attended a BPI 40th anniversary event in June. The only comment a spokesperson would give me was:
Record labels are key investors in British music, and, contrary to some media reports, we expect the forthcoming meeting with the Prime Minister to focus on a range of positive measures that will enable further investment in British talent, promote exports and support the continuing growth of the UK’s digital music market.
I'll leave you to ponder what this tells us along with a quote from Loz Kaye, leader of Pirate Party UK, who said:
The content industry seems intent on turning Internet Service Providers into the music NSA.
Harsh words indeed, but ones that may well resonate with people who already have concerns about the government's digital policies, especially in the wake of surveillance claims and attempts to censor certain types of content on the internet.

Lawyers report steep rise in employee data theft cases



UK law firm EMW has reported a sharp rise in confidential data theft cases brought before the High Court.
The bulk of the cases involve information taken by employees from their places of work, with blame for the rise being put on the availability of cloud storage services, and also on increases in remote working.
2012 saw 167 cases involving confidential data theft at the High Court, up 58% from the 106 seen in 2011.
Although some reports have flagged a whopping 250% increase over the 45 recorded in 2010, this was a bit of an anomaly, sharply down on the 95 seen in the previous year.
Nevertheless, the general upward sweep over the last few years seems clear, and will be quite alarming for many businesses.

The bulk of the cases logged were civil cases brought by firms against former employees found to have taken company data, which might include anything from client and contact lists to financial info to technical product designs.
According to a report in the Telegraph, these cases rack up an average of £30,000 in legal costs, to say nothing of the value of the lost data, which can be hard to put a price on and is practically impossible to "retrieve" once it has left company networks.
The availability of Dropbox and similar cloud storage services, which enable disgruntled staff to transfer huge amounts of data very rapidly with minimal preparation, is cited as a major factor in the spike.
Other commentators have emphasised the greater ease of stealing data thanks to the rise in remote working, and the remote access to company databases needed by homeworkers.
In the movies our data-exfiltrating hero has to crouch behind a desk, sweatily watching a progress bar tick towards completion on his USB stick copying, while footsteps thud ominously closer down the corridor. In reality, data can be copied or uploaded in comfort and safety from an armchair in front of the TV, with no risk of being observed, at least physically.
Of course this sort of thing should be monitored by data loss prevention (DLP) systems, and restricted by tight controls on who has access to sensitive data, especially remotely.
DLP controls can watch for specific files, file types or even tiny fragments of data crossing fixed boundaries, or can limit the amount of data that can be transferred from point to point in a given time period. Device control can prevent the use of removable media such as USB drives or CD burners, while web filtering can block access to cloud services which might be open to abuse.
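To make the content-matching idea concrete, here's a toy sketch in Python: a couple of rough regular expressions run over files in a hypothetical outbound staging folder called "outbox". Real DLP products use far richer fingerprinting, Luhn validation for card numbers, contextual rules and so on; this is an illustration only.

```python
import re
from pathlib import Path

# Toy patterns for illustration - real DLP engines validate matches properly.
PATTERNS = {
    "possible card number": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
    "possible UK NI number": re.compile(r"\b[A-Z]{2}\d{6}[A-D]\b"),
}

def scan(path: Path) -> list:
    """Return the names of any sensitive-looking patterns found in a file."""
    text = path.read_text(errors="ignore")
    return [name for name, rx in PATTERNS.items() if rx.search(text)]

# "outbox" is a hypothetical staging directory for outbound transfers
for f in Path("outbox").glob("*"):
    if f.is_file():
        hits = scan(f)
        if hits:
            print(f"BLOCK {f}: {', '.join(hits)}")
```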
This sort of thing should deter most "disgruntled" (or simply avaricious) employees from making off with sensitive data, documents, or even whole databases.
It's not entirely clear whether the upturn seen in the EMW figures reflects a failure to deploy or properly implement such technologies, allowing more data to be stolen, or in fact shows an improvement in their quality, ensuring more would-be data thieves are caught in the act and prosecuted.
Either way, it seems that employee data theft remains a problem which needs addressing. Data needs to be properly monitored and protected, whoever is working with it and whether it is inside company networks or being accessed by remote workers.
One approach might be to make sure you keep all your employees happy at all times, although that might prove impractical. On the other hand, the threat of heavy penalties for data theft doesn't have a 100% success rate either.

Faces, gestures, heartbeats - how will the passwords of the future work?



Researchers regularly come up with revolutionary ideas to replace the clunky, fiddly and mostly rather insecure passwords we use for almost all of our authentication needs.
The latest schemes to hit the headlines involve using features of our bodies, internal or external, to reassure our devices that we are who we claim to be.
Will any of them ever become the new standard for authentication? Are we going to be stuck with passwords forever, or is there a brighter future out there somewhere?
Security folk talk a lot about passwords. How long or complex they need to be, how bad people tend to be at choosing them and not reusing them, how they should be recorded and stored, how easily they can be cracked.
Occasionally a shiny new idea pops up - most recently we saw biostamps and swallowable dongles - but they generally disappear again just as quickly, leaving us stuck with the status quo.

In your face

In the news this week, Australian researchers have been promoting their work on facial recognition as a means of authentication.
As an idea this seems obvious: faces are the main means we use to identify each other in the real world, and if we want to avoid being identified, a mask is the standard first step. So it makes sense to have computers recognise our faces, or at least bits of our faces, too.
It's an approach that has become fairly common of late, with PC login systems and mobile apps trying to use our faces to authenticate us to various things. Only a few weeks ago we heard about a Finnish company's plans to use faces in place of credit cards.
In general these schemes have proven less than perfect, either easily fooled by photos, similar-looking people or technical tricks, or failing to authenticate real users thanks to bad hair days or bad moods affecting how we look.
Similar issues have blighted fingerprint-based authentication, which remains too unstable and unreliable for general use.
It's not yet entirely clear what will separate the University of Queensland researchers' work from the crowd, beyond vague mentions of improved accuracy and security, and the ability to work from a single initial still image and recognise the face from different angles and in different lighting conditions - which sounds like a must for any decent recognition system.
Either way, they don't expect to have a working prototype for at least another year.

The way you move

The good thing about the face recognition approach is that it's relatively low-tech, relying on the user-facing camera, which has become a standard feature of most of the devices we use.
Another potential password replacement emerging from the world of smartphones and tablets is gesture-based authentication. Hand movements repeated often enough can lead to muscle-memory, so quite complex patterns can become quite easy to reproduce reliably and accurately.
This is the basis of a venerable form of authentication: the signature. Swipes should be harder to compromise though, as, unlike signatures, they leave few visible traces to be copied - other than a few greasy smears, perhaps.
Android phones have long had swipe-pattern unlock features, and Windows 8 includes a system based on a few swipes around a picture. Some research presented at the recent Usenix conference has poked some serious holes in this approach though, showing that people are just as bad at picking hard-to-guess shapes as they are at choosing passwords.
A combination of face recognition and gestures, recognising patterns of unusual facial expressions, has also been proposed but is widely seen as no more than a gimmick, provoking humorous images of people gurning and grimacing into their webcams.

In a heartbeat

All of these use physical features, aspects of how our bodies look or move, in contrast to the purely cerebral requirements of passwords, which reside only in our minds (in theory at least - they may also reside on post-it notes attached to our monitors).
The biostamp idea proposed a hybrid of body and technology.
Another spin on this hybrid approach uses a bracelet device which measures heart rhythms to check who we are, and then connects to our devices via Bluetooth to pass on that confirmation.
The "Nymi" bracelet, developed by a Canadian startup, certainly sounds like a promising idea.
The actual authentication takes place only when the bracelet is first put on, requiring a quick touch of some sensors, and from then on will continue to confirm you're you until it's removed.
It includes motion sensors, so the basic authentication can also be combined with movements and gestures to create multi-factor passwords, using both the body and the mind of the attached user. Gestures could be used to unlock cars, for example.
I'm no expert on heart rhythm patterns, but according to the developers they're as unique as fingerprints. Just how resilient the authentication will be to stress, fitness, aging and so on may well be a major factor in the success of the idea.
There are also security concerns of course. The connection to the authenticating devices will have to be very secure, and the bracelet will have to ensure it remains connected to a live wrist; as with biostamps, if it can simply be slid (or hacked) off and still work, it'll be no good.
Also like biostamps, there's a potential issue with proximity; if it's simply broadcasting a "yes" to any request for ID, it would seem trivial to sneak up behind someone and steal their login.
The gesture system might help here, to ensure the user actually wants to be identified, and it should also be fairly simple (and unintrusive) to require re-authentication for major transactions - a simple touch of the wristband checks the heart pattern.
It's also a relatively hi-tech solution, requiring dedicated hardware. The cost is not prohibitively high though; pre-orders are already available at under $80, although it's not clear how much of that would be subsidised by the device and service providers the makers hope to attract.
With mass adoption and the cost reductions that would bring, it wouldn't be unreasonable to expect governments to hand one out to every citizen to cover all their ID needs, although here we stray into civil rights territory - not a huge leap from there to barcodes on our foreheads, some will say.

In the future

Over the years the password systems we use have seen various improvements, both in usability (ranging from simple but nowadays indispensable systems for replacing forgotten passwords to the latest secure password management utilities) and security, for example two-factor authentication schemes using dongles or smartphones combined with our computers.
All have helped in some ways, but have also introduced further opportunities for insecurity - recovery systems can be tricked, management tools can have vulnerabilities or simply be insecurely designed, and two-factor approaches can be defeated by man-in-the-mobile techniques.
Despite all the problems - the insecurities on one side and the impeded workflows on the other - passwords remain the simplest solution to the authentication problem. Finding a panacea to replace them is going to be difficult.
What it really comes down to is how we define who we are, whether we are the contents of our brains, the shapes, textures and rhythms of our bodies, or the tools and devices we create and use. Perhaps an approach which uses aspects of all of these will best cover all our needs.
A lot depends on popular uptake of course, perhaps more than actual technical innovation, but it could just be that one of these new techniques will become the passwords of the future.

How to find out everything that Facebook *really* knows about you



Max Schrems, a 24-year-old law student from Vienna and a meticulous document requester and researcher, is now sitting on a pile of 1,200 pages that comprise his personal-data Facebook dossier.
He secured the data by using a European requirement that entities with data about individuals make it available to those individuals if they request it.
After Mr. Schrems made the request, Facebook handed over a CD containing data that’s now fueling 22 complaints that the law student has filed against Facebook with the Irish Data Protection Commissioner (according to Facebook, European users have a relationship with the Irish Facebook subsidiary).
Watch the following German TV news report (with English subtitles) which features Schrems:
The complaints, which Mr. Schrems began to file in August, concern the alleged illegality of the following practices (for the full set and PDFs of the filed complaints, go to Kim Cameron's Identity Weblog):
* Pokes: Retained even after a user removes them.
* Shadow Profiles: Facebook is collecting data about people without their knowledge, using it to substitute existing profiles and to create profiles of non-users.
* Tags: Used without specific user consent. Users have to "untag" themselves (opt-out).
* Synchronizing: Facebook is gathering personal data - e.g., via its iPhone app or the "friend finder" - and using it without the consent of the data subjects.
* Deleted Postings: Postings that have been deleted showed up in the set of data Mr. Schrems received from Facebook.
* Postings on other Users' Pages: Users can't see the settings under which content is distributed that they post on other's pages.
* Messages: Messages, including Chat Messages, are stored by Facebook even after the user deletes them. This means that all direct communication on Facebook can never be deleted.
Facebook not deleting posts - complaint
According to the Europe vs. Facebook website, the complaints have brought about an audit of Facebook’s Irish headquarters, scheduled for the coming week.
"The Irish DPC will go into the premises of Facebook in Dublin and audit the Company for 4 to 5 days," according to the site. "We hope that this will bring more evidence for the complaints we filed before."
News of Schrems’ legal activities, along with demands for users’ own personal dossiers, went viral at the end of last month. Reddit users stampeded, swamping Facebook with requests for personal data after going through the Reddit submission’s four-step tutorial on how to do so.
Here are the steps on how you can request your personal data from Facebook:
1. Open this site: http://www.facebook.com/help/contact_us.php?id=166828260073047
Request your personal data from Facebook
2. Enter your personal information
3. Make a reference to the following law:
"Section 4 DPA + Art. 12 Directive 95/46/EG"
4. Click on "Send"
Facebook cried uncle, sending an email claiming that it could not comply with the requests within a 40-day period.
In addition to filing the complaints, Mr. Schrems has worked to bring together a crowd of like-minded individuals via the Europe Vs. Facebook website, and has set up a YouTube channel.
Of course, a Facebook page, Europe vs. Facebook, has also been created. The page had 447 members as of this posting.
Remember how Mark Zuckerberg, in the early days of creating Facebook, called users dumb f*cks for trusting him with their private information?
After 7+ years of The Facebook bloating into a private-data behemoth (or boondoggle, depending on your attitude about privacy), one user has finally arisen from the land of dumb f*ckery to strip the label from his own online persona and paste it instead across the data-gobbling gut of Facebook itself.

Another 5 Tips To Help Keep You Safe On Facebook


1. Stop search engines from indexing your profile

Facebook's great for keeping in touch with friends and family but you might not want just anyone finding your profile via Google or other search engines. Here's how to fix that:
Click on the cog icon at the top right of your screen and then click Privacy Settings.
Privacy Settings
Now that you are in the Privacy Settings and Tools area of Facebook, find 'Who can look me up?' and the setting that says 'Do you want other search engines to link to your timeline?'
Who can look me up
This is likely on by default, so click Edit and then remove the tick from the box which says 'Let other search engines link to your timeline'.
Search engines off
Note: It may take a bit of time for search engines to stop showing the link to your timeline in their results so don't expect it to disappear immediately from search results.

2. Block someone on Facebook

Just as in real life, some people on the web can prove challenging for a number of reasons. If you don't want someone to see your profile or things you write on Facebook, you can block them – and here's how to do just that.
Click on the padlock icon that you see in the top right hand corner of the screen. Now click on How do I stop someone from bothering me?
How do i stop someone from bothering me
Now either enter a name or email address and click Block.
Block someone
The person you block won't get any notification that they've been blocked, and they will no longer be able to initiate conversations with you or see anything that you post on your timeline.

3. Public computer? Use a one-time password

If you would like to use Facebook from a public location, such as a computer in an internet cafe or library, you can use a one-time password to access your Facebook account, keeping your actual password safe. This password is sent to you by text message and will expire after 20 minutes.
Note: you do have to link your mobile number with your Facebook account in order to use this function.
All you need to do is send "otp" as a text message to the number listed next to your country and mobile carrier on the one-time password list on Facebook. If you're in the US, you can send the same message to 32665. Unfortunately, it isn't available everywhere, and the number of countries and carriers is fairly limited at the moment.
After you've sent the message, you will receive a reply from Facebook with your OTP, a one-time password of eight characters (or with instructions on how to link your mobile to your Facebook account).
You can now login to Facebook in the normal way, substituting this temporary password for your regular one.
*Always* remember to sign out of Facebook once you are finished, especially if you are signed in on a public computer. If you do leave your account signed in the next person to use the computer will have access to it, even without your password.
→ Even though we've just showed you how to use OTPs, we recommend avoiding public computers, such as those in libraries and internet cafes, as much as your digital lifestyle will permit. At the least, work on the (admittedly pessimistic) assumption that anything you type in or view on screen may be sent to cybercrooks, and stick to things you don't mind being public.

4. Block an app from accessing your information

If you already have an app installed on Facebook but you now want to prevent it from accessing your personal information then blocking it is quite simple.
Click on the cog icon found at the top right of the screen and then click on Account Settings.
Account settings
Look to the left pane and click on the fifth option from the top: Blocking.
Blocking
Then look for the last option - Block apps.
Block apps
All you need to do is put in the name of the app you want to block and then press enter.

5. Remove something from your timeline

If you or someone else has put something on your timeline which you want to remove, it's pretty easy to do.
Firstly, navigate to your timeline and find the story you wish to block from appearing. Next, move your mouse to the top right corner of the story and you will see what looks like an arrowhead appear. Click on that and you'll be shown a box.
You now have two options here. You can choose Hide from timeline, which will stop the post from showing on your page (though it will still appear in news feeds and search).
Hide from timeline
Or you can remove it completely by clicking on Delete.
Delete post
This is just a small selection of tips to help you safeguard your Facebook profile.
If you have any others please do add them in the comments below.
And if you would like to stay up to date on the latest Facebook scams and other internet threats, you can find us on Facebook.

Xbox Live customers not hacked but phished


Xbox Live customers are the latest gamers to fall victim to a cyber attack.
Thousands of accounts have been hit across 35 countries, with most victims losing between £100 and £200, according to The Sun newspaper.
But the Sun's claim that cybercriminals had "hacked into thousands of Xbox Live accounts to steal millions of pounds" is not entirely accurate: the users were actually victims of a phishing attack.
The fraudsters sent emails to users with links to bogus websites offering free Microsoft points which are used to buy games. The gamers were then invited to enter their personal details, such as addresses, emails and credit card information.
Small amounts were then taken from the victims' accounts over a few weeks which made it harder to detect the thefts.
Other victims were targeted by people befriending them online and duping them into giving them their password and other personal details.
Victims only realised they had been conned when they tried to access their online profile and saw they’d become "locked out", meaning someone else had used their account.
Xbox Live operator Microsoft is looking into the cyber thefts, according to the BBC, which quotes a Microsoft spokesman:
"We take the security of the Xbox Live service seriously and work to improve it against evolving threats.
Very occasionally, though, we are contacted by members regarding alleged unauthorized access to their accounts by outside individuals.
We work closely with impacted members directly to resolve any unauthorized changes to their accounts and, as always, highly recommend all Xbox Live users follow our account security guidance in order to protect their account details."
Xbox Live customers are just the latest gamers to be affected.
Steam, the online empire of computer game giant Valve Corporation, was hit earlier this month.
And just last month 93,000 Sony accounts were hacked. This follows the attacks earlier this year where up to 70 million people had their personal data stolen and the Sony PlayStation network was forced offline.

Nokia is dead. Long live Nokia...


I'm sure you've heard the news.
Nokia, once the 200kg gorilla of the Finnish economy - heck, the 400kg gorilla if you like [*] - is to become part of Microsoft.
More or less, anyway.
Microsoft's press release isn't as clear as I'd hoped, though that may be more a consequence of my poor fluency in US legalese than an objective assessment of its comprehensibility.
The wording says that Microsoft has decided to "purchase substantially all of Nokia's Devices & Services business, license Nokia's patents, and license and use Nokia's mapping services."

What's planned

Substantively, if not substantially, and at least as far as handsets are concerned, it looks as though:
• Microsoft will acquire outright the Lumia and Asha phones and brands.
• Microsoft will license Nokia's budget handsets.
Lumias are high-end smartphones in both features and price: they have lots of memory, great cameras, cool looks, and the latest Windows Phone operating system.
Ashas are high-end feature phones: they're stripped down to a price, which makes them good value for money, and they run what's left of Symbian. ("Low-end smartphones," as the marketing department might say.)
In short, Nokia is dead. Long live Nokia!
Keeping the budget handsets, sorry, basic feature phones, as Nokia products under the Nokia brand makes a lot of sense to me.
These devices still sell hugely well in the developing world, where the equivalent of $10 can get you up and running in minutes with a prepaid mobile and an activated SIM card.
Better yet, a charge will easily last you days or even weeks, rather than hours or days - a huge plus for those with only irregular access to mains electricity.
Why confuse a large and lucrative market by reinventing a phone like the Nokia 1280 as a Microsoft device?

What about security?

Through Lumia and Asha, Microsoft is now explicitly moving into the handset business as well as the mobile operating system business.
You'll be able to shop in Microsoft's catalogue for a Microsoft phone that runs a Microsoft OS and is locked down to apps bought from Microsoft's online software store.
Suddenly, Microsoft in Redmond sounds a lot closer to Apple in Cupertino.

What next?

The burning question, of course, is, "What will this acquisition do to or for mobile security?"
Over the next two or three years, my feeling is, "Almost nothing."
That sounds bad, since it implies things won't get better; in reality, it's good, because Windows Phone 8 isn't attracting much interest from cybercriminals at the moment, and that probably won't change.
Of course, it'll still be possible to get yourself into as much trouble on a Microsoft Lumia smartphone as you could on an Android or iOS device.
If you upload the right file to the wrong person, or lose a smartphone without having encrypted or locked it, or type in your banking password on an imposter site, you may end up in harm's way regardless of your operating system.

Looking back

And finally, we have one thing left to do: to look back at the once-dominant market position occupied by Nokia, and ask, "What did Nokia ever do for us?"
Some of us at Naked Security discussed this at some length, with our rose-tinted spectacles on, and we think we have correctly identified the Top Three Legacies of the Nokia era:
1. Snake. (Why would you ever need or want another game for a phone-sized device?)
2. The Nokia Tune. (Want to bet it enjoys a bit of a nostalgia-driven comeback for a while?)
3. S-M-S in Morse code to announce a message has arrived. (You did know that's what it was, didn't you?)
Download: nokia-tune.mp3

Google coding glitch locks Apple iOS users out of on-line accounts

Google has once again found itself all over the IT news for a spot of bother with its security software.
The good news is that the problem isn't quite as dramatic as the recent code verification bugs in Android, because it doesn't open any security holes.
In fact, it doesn't affect Android users at all.
It's a fault, apparently, or was until the app was withdrawn, in the Google Authenticator software in Apple's App Store.
The bad news is that if you were affected, you'd have found quite the opposite of security holes: you'd have been locked out of your own accounts.
To explain: the Google Authenticator app is a software-based Two Factor Authentication (2FA) token.
More precisely, it's a One Time Password (OTP) generator, commonly used to implement the second factor in a 2FA login process.
To protect an account with the Authenticator, you prime the app with a random secret key generated by the server hosting your account; this secret key is saved on the server side, too.
The secret key may be provided as a barcode you simply scan in, or as a character code you type in by hand.

Later on, when you want to login, for example from your laptop, you type in your username and regular password in the regular way, and then read off the relevant one time password displayed by the Authenticator app:

This completes the 2FA process, with your username and regular password being the first factor, and the OTP the second.
To make the OTP unique for every login, either a counter (which is bumped up by one every time you try to login) or the current time (to the nearest 30 seconds) is mixed together with the secret key, and hashed to create the OTP.
→ Google Authenticator has some features specific to Google accounts, but can be used with many third party sites as well. It is based on open standards called HOTP (HMAC-Based One-Time Password Algorithm, RFC4226) and TOTP (Time-based One-time Password Algorithm, RFC6238).
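In fact, the algorithm is compact enough to sketch in a few lines of Python. This is a minimal illustration of the HOTP/TOTP maths from the RFCs above, not Google's actual code, and the secret shown is a made-up example:

```python
import base64, hashlib, hmac, struct, time

def hotp(secret_b32: str, counter: int, digits: int = 6) -> str:
    """RFC 4226: HMAC the counter with the shared secret, then truncate."""
    key = base64.b32decode(secret_b32.upper().replace(" ", ""))
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238: TOTP is just HOTP with the counter derived from the clock."""
    return hotp(secret_b32, int(time.time()) // period, digits)

print(totp("JBSWY3DPEHPK3PXP"))   # made-up secret; prints a 6-digit OTP
```

The server, holding the same secret, performs the identical calculation and simply compares the result, usually allowing a time step or two of clock drift either way.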
The big deal in this, of course, is that both you and the server need to have and to hold the secret keys, from this day forward, for better for worse, for richer for poorer, in sickness and in health...
...because if either of you forgets the secret key that goes with an account, you won't be able to come up with matching OTPs next time you try to log in, and that will be that.
As the Authenticator app itself warns you when you try to delete an account on its list:

Removing this account will remove your ability to generate codes, however, it will not turn off 2-factor authentication.
Before removing: turn off 2-factor authentication for this account, or ensure you have an alternate mechanism for generating codes.
Sadly, removing all your accounts is exactly what happened during a recent upgrade to the iOS version of the Authenticator.
Update. The iOS version is back in the iTunes store, with the bug fixed. Seems that the accounts weren't physically deleted. They were just "visually deleted," i.e. not displayed. [Added 2013-09-07T16:34Z. ]
As I said, at least it wasn't a security hole, though that's probably cold comfort to anyone who ended up locked out of their own accounts.
And remember that a bug of this sort, no matter how regrettable, is not the most likely way you'll lose access to accounts that you've protected with Google Authenticator.
You'd be just as stuck if you went on an overseas trip and left your mobile device behind by mistake, or if someone stole it, or if you accidentally dropped it over the side of a Harbour Ferry.
So, to reduce the risk of a Denial of Service against yourself, no matter how much you trust the Google Authenticator software:
  • Keep backup copies of the barcodes or starting keys for any account you add to the Google Authenticator. (NB. Don't store the backups on the laptop you're protecting with 2FA in the first place! Encrypt them and store them offline, and preferably offsite - one way to script that is sketched below.)
  • Consider using alternative OTP software, instead of or as well as the Authenticator, that makes it easier to take a secure local backup of the secret keys for your accounts after they've been activated.
  • Generate account recovery codes for services on which you will be activating 2FA, and keep them in a safe place.
Backup is still important, even in the modern Cloud Era!
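If you'd like to script the "encrypt them and store them offline" step from the first point above, here's one way it might look, using the third-party Python cryptography package (an assumption on my part - any reputable encryption tool will do, and the seed shown is a made-up example):

```python
import base64, getpass, os
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

def key_from_passphrase(passphrase: bytes, salt: bytes) -> bytes:
    """Derive a Fernet key from a passphrase using PBKDF2."""
    kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32,
                     salt=salt, iterations=200_000)
    return base64.urlsafe_b64encode(kdf.derive(passphrase))

seeds = b"example.com: JBSWY3DPEHPK3PXP\n"   # your OTP seeds, one per line
salt = os.urandom(16)
passphrase = getpass.getpass("Backup passphrase: ").encode()
token = Fernet(key_from_passphrase(passphrase, salt)).encrypt(seeds)

# Store the salt alongside the ciphertext; keep the file offline and offsite
with open("otp-seeds.enc", "wb") as f:
    f.write(salt + token)
```

To restore, read the 16-byte salt back off the front of the file, re-derive the key from your passphrase, and call decrypt() on the remainder.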

15 years jail time for Romanian card heist ringleader, 5 for light-fingered company president

Adrian-Tiberiu Oprea, the Romanian ringleader of a gang which heisted payment card data from hundreds of Subway branches in the US, has been sentenced to a hefty 15 years in jail for his crimes.



Oprea pleaded guilty in May to his part in the scheme, in which the crew compromised vulnerable point-of-sale systems, planted malware on them and harvested details of payment cards fed in or swiped.
Several hundred businesses were hit, including 250 Subway franchises. Details were gathered for over 100,000 cards, with money stolen and clean-up costs coming to over $17.5 million.
The sentence was announced this week by a New Hampshire court. Oprea's sidekick Iulian Dolan got a comparatively light seven-year sentence after pleading guilty a year ago, while another co-conspirator, Cezar Butu, got 21 months back in January.
Several of the gang were apparently tricked and lured to the US by federal agents offering free casino visits or posing as amorous waitresses. It sounds like their visits will be rather longer than they expected, not to mention considerably less pleasant.
In other sentencing news, a former president of logistics firm Exel has been given 63 months (or five-and-a-bit years) in jail by a Texas federal judge for his part in "hacking" his former employers' computer systems to access customer data.
Michael Musacchio is alleged to have used the data to start up his own rival business, stealing files from Exel with the help of two fellow employees who went on to join him in his new venture.
Given the description, it sounds likely that the hacking involved little more than using an account, which should have been shut down, and moving data out of the company network, which should really have been prevented by stricter policies and better protections.
Prosecutors wanted Musacchio to face 15 years, and have argued he should pay $10 million in restitution against a loss of business for Exel, which some estimates put at up to $166 million.
Musacchio's legal team suggest the losses could be much lower, at between $71k and $200k. The final figure will be decided in the next few months.
Also in Texas, a Dallas judge has imposed a gagging order on Barrett Brown, who's up on federal charges for alleged involvement in the Anonymous heist of data from government contractor firm Stratfor back in 2011.
The order means Brown and his legal team cannot publicly discuss anything involving the case - even what the charges brought against him are.
The reasoning behind the order is to avoid biasing a potential jury in a case which apparently carries a rather aggressive potential penalty of up to 100 years in jail. The trial itself is not due to start until next April, although Brown has been in custody since last year.
Meanwhile, over in South Africa police have rounded up a gang of 54 believed to be involved in a phishing scam in the country, thought to have netted 15 million Rand (US$1.5 million).
Most have since been released on bail, but the 9 main suspects have been remanded in custody.
All in all, a busy week for the cybercrime cops; hopefully some of these sentences will deter a few would-be digital crooks and put them back on the straight and narrow.

Get ready: Microsoft Patch Tuesday looms large with 14 patches and 8 remote code execution holes


In the coming week, Friday falls on the thirteenth day of the month.
That used to be a bad omen in computer security circles, because of the association with computer viruses that deliberately chose that date to unleash their warheads.
These days, however, it doesn't tell you much more than that Tuesday is the Tenth, making it the second Tuesday of the month, and thus a Patch Tuesday.
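If you'd rather not squint at a calendar, the date arithmetic is easy to automate; here's a minimal sketch in Python (find the first Tuesday of the month, then add seven days):

  import datetime

  def patch_tuesday(year, month):
      # Weekday of the 1st: Monday=0 ... Sunday=6, so Tuesday=1
      first = datetime.date(year, month, 1)
      # Days forward from the 1st to the first Tuesday
      offset = (1 - first.weekday()) % 7
      # The second Tuesday is exactly one week later
      return first + datetime.timedelta(days=offset + 7)

  print(patch_tuesday(2013, 9))   # -> 2013-09-10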
Get ready: September's Patch Tuesday has 14 bulletins, eight of which are listed as fixing remote code execution vulnerabilities.
The biggie is Bulletin Three, a "spare no versions" Internet Explorer (IE) update.
From IE 6 on Windows XP to IE 10 on Windows 8, including Windows RT, this one hits the Patch Trifecta: it is considered critical, permits remote code execution, and requires a reboot.
At the other end of the risk scale, Server Core installations benefit once again from their reduced attack surface area, with no critical or remotable vulnerabilities reported.
(Windows Server 2008 R2 Service Pack 1 Server Core will, however, require a reboot to fix an Elevation of Privilege bug listed as important.)
There are four sorts of security flaw patched this month, so let's take this opportunity to revise the implications of each vulnerability type.
Remote code execution
An RCE is the most serious sort of vulnerability.
It means that content supplied from outside your network, such as a web page or email, can trick your computer into running executable code that would usually require explicit download and installation.
This bypasses any security warnings or "are you sure" dialogs, and can lead to what's called a drive-by download, where just visiting a webpage or viewing an image could lead to infection with malware.
RCE example: Anatomy of a buffer overflow.
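If you want a feel for just how little an attacker needs once such a hole exists, here's a deliberately contrived Python illustration, with eval() standing in for any bug that lets outside data reach an execution path (a toy example, nothing to do with this month's Windows flaws):

  # DANGER: a program that evals attacker-supplied text is an RCE by design
  user_input = '__import__("os").system("echo attacker code ran here")'
  eval(user_input)   # runs a shell command of the sender's choosing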
Elevation of privilege
EoP vulnerabilities allow a user or process to perform activities usually reserved for more privileged accounts.
Often, an EoP will allow regular users to convert themselves temporarily into an administrator, which pretty much means that all security bets are off.
With administrator privileges, untrusted users may be able to change file access permissions, add backdoor accounts, dump confidential databases, bypass many of the security protections on the network, and even alter logfiles to hide their tracks.
If an EoP vulnerability is combined with an RCE, an attacker may be able to take over your account while you're browsing, and then make the leap to Administrator once they're in.
EoP example: Apple neglects OS X privilege escalation bug.
Information disclosure
An information disclosure vulnerability, or leak, happens when software inadvertently lets you retrieve data that ought to be protected.
If passwords or similar data are leaked, this could facilitate future attacks; if confidential data is recovered, this could lead to corporate embarrassment or even data breach penalties.
Leak example: Anatomy of a cryptographic oracle - the BREACH attack.
Denial of service
A DoS is just what it sounds like: by needlessly consuming computing resources, or by deliberately provoking a crash of vulnerable software, you compromise the availability of a system.
DoSes are often considered to be at the bottom of the severity scale, since they don't usually allow unauthorised access or lead directly to the exfiltration of confidential data.
Nevertheless, DoSes can be very costly, because they may hamper your ability to do business online, cost you revenue, or mask other parts of an attack.
DoS example: Apple apps turned upside down writing right to left - you're only 6 characters from a crash!
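As a toy illustration of the resource-consumption variety, a single pathological regular expression can pin a CPU for as long as you like; in this contrived Python example, each extra 'a' roughly doubles the running time:

  import re, time

  evil = re.compile(r"(a+)+$")      # nested quantifiers backtrack badly
  start = time.time()
  evil.match("a" * 24 + "b")        # can never match, so ~2^24 paths are tried
  print("took", round(time.time() - start, 1), "seconds")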

Anatomy of a phish - a "generic mass targeted attack" against WordPress admins

Naked Security reader Lisa Goodlin is a website designer and a WordPress user.
That's not exactly a secret.
If you happen to visit one of the sites she looks after, you'll probably see her name and a link to her own website discreetly placed at the bottom of every page, as I've done on this site I made up to use as an example:

And why not?
It's not just handy for Lisa as a spot of advertising, it's handy for anyone who spots a problem with the site and wants to report it.
So that tells you she's a web designer; finding out that she's probably also a WordPress user is similarly easy (and it's a good guess anyway, since WordPress is a very popular content management system for blogs and websites).
Just try adding /wp-admin to the website's fully qualified domain name, and see if you end up redirected to a WordPress login page, something like this:

Once you get this far, you can be pretty sure that:
  1. info@lisagoodlin.com is a working address that will reach someone in the business of caring for websites.
  2. luresite.example is one of lisagoodlin.com's customers.
  3. Sending emails to (1) about WordPress issues on site (2) would not be entirely out of the ordinary.
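Automating that check is trivial, which is why crawlers can harvest lists of WordPress-managed sites in bulk. Here's a minimal sketch in Python using the popular requests module (luresite.example is, of course, made up):

  import requests

  def looks_like_wordpress(domain):
      # Request /wp-admin but don't follow the redirect ourselves
      resp = requests.get("http://" + domain + "/wp-admin",
                          allow_redirects=False, timeout=10)
      # WordPress bounces unauthenticated visitors to wp-login.php
      return "wp-login.php" in resp.headers.get("Location", "")

  print(looks_like_wordpress("luresite.example"))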
And that's exactly what phishers did to Lisa, in what I like to call a "generic mass targeted attack."
We'll assume that they don't know Lisa from a bar of soap, and that they aren't targeting her because she's Lisa Goodlin. (Sorry, Lisa: I don't mean to imply you are unimportant!)
They're targeting Lisa simply because their web crawler identified her business as a website design company that uses WordPress.
That gives them a way to phish her more believably than just hitting her up randomly, out of the blue.

What happens next

The phishers' rogue back end server is surprisingly simple.
On a compromised web server belonging to an innocent third party, the crooks have set up some PHP scripts that simulate a wp-admin login page.
Visiting a realistic looking URL like this (don't bother trying it: 192.0.2.0/24 is an IP range reserved for documentation only):
http://192.0.2.62/blog/wp-login.php?
  redirect_to=http://luresite.example/wp-admin/&reauth=1
produces a realistic looking login screen like this, tailored with the text luresite.example:

Of course, it should be obvious that something is wrong, not least because the domain luresite.example looks familiar but the server you're actually visiting, 192.0.2.62, does not.
Nevertheless, if you're in a hurry, or just trying to tidy up a few loose ends for your customers before bedtime, you might not look carefully enough at the URL, and instead rely on two other factors:
  • The presence of the text luresite.example, which lends familiarity because it's your customer.
  • The look and feel of the login screen, which is visually correct because it's ripped off from WordPress.
If you fall for the phish, the username and password you enter are sent to the crooks, not to the luresite.example server.
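Remember that it's the hostname in the URL, not any domain mentioned in the query string, that decides where your password goes; the deception is easy to unpick programmatically, as this short Python sketch shows:

  from urllib.parse import urlsplit, parse_qs

  url = ("http://192.0.2.62/blog/wp-login.php?"
         "redirect_to=http://luresite.example/wp-admin/&reauth=1")

  parts = urlsplit(url)
  print("Credentials will go to:", parts.hostname)    # 192.0.2.62
  print("Lure domain in query:", parse_qs(parts.query)["redirect_to"][0])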

Casting the bait

The next step the phishers need to take is to persuade you to click through to the login page.
And what better way of attracting a WordPress user's attention than by means of a notification about a pending website comment?
Any switched-on web site operator who has enabled comments on a customer's site will be putting regular and frequent effort into keeping the comments flowing: it's a great way to attract and build an online community, and it's fun, too.
Using comment bait is exactly what Lisa's phishers did; fortunately, their creativity and attention to detail fell apart at this point, and she received an email like this:

It was for amusement rather than pedagogic value that Lisa sent the phish to us - as she herself put it, "'Sing in'! Yes, let's all get together and sing Kumbayah!"
But it wouldn't take much effort for the crooks to produce something significantly more believable.

What to do?

You probably frequently see emails that are obviously bogus but which nevertheless make you think, "However did they know that?"
It might be a DHL scam just after you make an online purchase from a company that uses DHL, or a promised tax refund soon after you submit your annual return, or (as in this case) an email that happens to match both your content management system and your customer.
Whenever this happens, I suggest you stop and actually take the time to answer that question.
Treat the rhetorical question literally and you'll quickly realise that there are often many ways that "they could have known."
In Lisa's case, it was simply that her domain name was listed on a website that happens to use WordPress.
Here are some other steps you can take:
  • Don't use login links provided in emails. It's too easy to make a mistake.
  • Consider managing your customers' websites from inside their networks via a full-blown Virtual Private Network (VPN), so you don't need to leave the website administration portal visible to the world.
  • Consider using two factor authentication for remote logins, so that your password alone isn't enough for the crooks.
  • Remember that "Sing ins" are for church choirs and choral societies, not for WordPress administrators.

More about two factor authentication

By the way, for a discussion of how two factor authentication helps protect you in cases of this sort, you might like to listen to this Techknow podcast:

Download: sophos-techknow-two-factor-authentication.mp3
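And if you're curious what's inside those one-time codes, here's a minimal sketch of the time-based algorithm (TOTP, RFC 6238) that most authenticator apps implement, using only Python's standard library; the secret shown is a made-up example, not a real account:

  import base64, hashlib, hmac, struct, time

  def totp(secret_b32, digits=6, step=30):
      # Decode the shared secret and compute the 30-second time counter
      key = base64.b32decode(secret_b32)
      counter = struct.pack(">Q", int(time.time()) // step)
      # HMAC-SHA1 plus "dynamic truncation", per RFCs 4226 and 6238
      mac = hmac.new(key, counter, hashlib.sha1).digest()
      offset = mac[-1] & 0x0F
      code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
      return str(code % (10 ** digits)).zfill(digits)

  print(totp("JBSWY3DPEHPK3PXP"))   # example secret only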

LinkedIn DNS hijacked, traffic rerouted for an hour, and users’ cookies read in plain text




App.net cofounder Bryan Berg noticed that LinkedIn was DNS-hijacked tonight and that traffic was rerouted to a shady India-based site, http://www.confluence-networks.com.
That’s bad for LinkedIn, but there’s worse news for you.
According to Berg, that site does not use SSL (Secure Sockets Layer), which means that anyone who visited LinkedIn in the last hour or so sent the rogue site their long-lived session cookies in plain text … a potential security risk.
DNS hijacking is the process of redirecting a domain name to a different IP address. IP addresses are strings of numbers that identify a server, but they're long and hard to remember. The DNS system allows us to use simple, easy-to-remember domain names such as linkedin.com, and it then translates them to IP addresses like 216.52.242.86.
(You can also use that IP address, by the way, in your browser.)
You can hijack a company’s DNS on the client side by hacking individual computers’ network configurations and on the Internet side by hacking a DNS server — or by installing a rogue DNS server that masquerades as a real DNS server. Alternatively, if you can access a company’s domain records, you can change the IP address associated with that company’s web services.
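You can watch the name-to-number translation happen for yourself; here's a minimal sketch in Python (the addresses returned will vary with your location and LinkedIn's load balancing):

  import socket

  # Resolve the name the same way your browser would
  hostname, aliases, addresses = socket.gethostbyname_ex("www.linkedin.com")
  print(hostname, addresses)

  # If your DNS had been hijacked, the addresses printed here would
  # belong to the attacker's server instead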
DownRightNow shows that LinkedIn had a service interruption starting at about 6 p.m. tonight and lasting until now.
However, I'm able to access the actual LinkedIn service right now, so the site must be up and available again for at least some users; maybe the DNS hijack only ever affected a percentage of them.

Facebook vulnerability that allowed any photo to be deleted earns $12,500 bounty


An Indian electronics and communications engineer who describes himself as a "security enthusiast with a passion for ethical hacking" has discovered a Facebook vulnerability that could have allowed for any photo on the site to be deleted without the owner's knowledge.



Arul Kumar, a 21-year-old from Tamil Nadu, discovered that he could delete any Facebook image within a minute, even from verified pages, all without any interaction from the user.
For his efforts in reporting the vulnerability to Facebook's whitehat bug bounty program, Kumar received a reward of $12,500.
The vulnerability that he discovered was based around exploiting the mobile version of the social network's Support Dashboard, a portal that allows users to track the progress of any reports they make to the site, including highlighting photos that they believe should be removed.
When such a request is submitted, and Facebook does not remove the photo in question, the user has the option of messaging the image owner directly with a photo removal request.
Doing so causes Facebook to generate a photo removal link which is then sent to the recipient of the message (the photo owner). The owner can then opt to click on that link to remove the image.
Kumar discovered that a couple of parameters within this message – 'photo_id' and 'Owners Profile_id' – could be easily modified.
With this information he then sent a photo removal request for an unrelated image on another account that he controlled. By changing the two parameters in the message received by the second account, Kumar could then choose to delete any image from any user on the network.
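To make that concrete: the attack amounted to nothing more than rewriting two values in a URL's query string before following the link. Here's a sketch of the general idea in Python; the URL and parameter names below are illustrative stand-ins, not Facebook's real endpoint:

  from urllib.parse import urlencode, urlsplit, parse_qs, urlunsplit

  # A hypothetical removal link of the kind described above
  link = "https://m.example.com/photo_remove?photo_id=1111&profile_id=2222"

  parts = urlsplit(link)
  params = parse_qs(parts.query)

  # Swap in the victim's identifiers instead of the attacker's own
  params["photo_id"] = ["3333"]
  params["profile_id"] = ["4444"]

  print(urlunsplit(parts._replace(query=urlencode(params, doseq=True))))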
The victim of this photo removal technique would not be involved in the process in any way and wouldn't receive any messages from Facebook – indeed the first they would know of this would be when they logged in to discover their photo(s) had disappeared.
Kumar explained that the exploit could be used to remove photos belonging to any verified user, page or group, as well as from statuses, photo albums, suggested posts and even comments.
As part of the process of responsible disclosure Kumar forwarded details of the bug to the Facebook security team who, at first, could not delete any photos by following his instructions:
Yeah I messed around with this for the last 40 minutes but cannot delete any victims photos. All I can do is if the victim clicks the links and chooses to remove the the [sic] photo it will be removed which is not a security vuln obviously.
Kumar then explained his bug by using a demo account, as well as sending Facebook a proof of concept video in which he showed how he could have removed Mark Zuckerberg's own photos from his album.
This time, Emrakul from Facebook's security team was able to see the vulnerability:
Ok found the bug, fixing the bug. The fix should be live sometime early tomorrow.
I will let you know when it is live so you can retest. Wanted to say your video was very good and helpful, I wish all bug reports had such a video :)
Unlike Khalil Shreateh, who two weeks ago became frustrated with Facebook's bug reporting process and hacked Mark Zuckerberg's own timeline, Kumar reported his bug in a way that shows just how responsible disclosure should work.
By following Facebook's whitehat guidelines he was able to pick up his deserved bounty.