This article came up the other day, and it’s a little
jargon-heavy, but it covers a very important topic for the average user.
This is particularly important because Microsoft Security Essentials (MSE) is
one of the most popular virus protection programs, and now that it’s built into
Windows 8 that is only likely to increase. I know that since its release, I
have always recommended it to friends and family. It’s free, easy to use, and
updates along with your Windows OS. Now, based on the article headline, you
might think that I’m changing my mind, but I’m not, and here’s why.
The main point here is the difference between the goals of the
certification test and the goals of Microsoft’s own testing, summarized most
succinctly in the article: Blackbird looks at the viruses the software misses,
broken down by category, whereas Microsoft designs its software around
consumer impact. To me, this means that while the certification test is
important and useful, it is also biased by being more artificial. Microsoft’s
own tests are artificial by definition too, but the company is striving for
real-world results.
The second major thing to look at when talking about all
these malware percentages is the encounter rate. What this means is
that a piece of malware might be capable of infecting your machine, but you
may never actually come into contact with it. And this hits on one of the
points I have always emphasized: security is about the person using the
machine, not the machine itself. Even if MSE were 100% effective, a person
could still mess up a machine. If you practice safe browsing habits, such as
not opening links in spam emails or visiting untrustworthy websites, you’re
unlikely to encounter this malware in the first place. The point is, no one
should depend entirely on antivirus software to protect them. There are many
other ways for your data to be compromised, such as phishing and other social
engineering scams.
At the end of the day, my takeaway is this: I’d rather
Microsoft fail an artificial certification and continue to focus on real-world
tests than have the company design its software for an artificial setting,
scoring 100% in the lab while completely failing the consumer in the real
world. (Ahem, video card benchmarks, cough.)