We often get complaints from customers about bugs in our software. This is not uncommon; I don't think there is an application without bugs. Still, your application most likely passed some form of QA control (hopefully) before it shipped.
That QA can range from a few weeks of intensive testing to a simple sanity check by the developer, just to see that the application loads after release. Ideally we want to be as thorough as we can and write comprehensive test scenarios that cover as many aspects of the program as possible. However, no matter how many tests you run, something will always get through. Moreover, the cost-per-value goes up as you write more and more tests: it's easy to ensure that 70% of your application works fine, harder to cover the next 20%, and harder still as you approach 100%.
So how do you increase the effectiveness of the QA tests without adding cost?
One method is testing in non-sterile environments. Usually, when we develop and QA-test our application, we use "clean" machines: we want to check whether the new feature works, and we don't want interference.
However, our customers will not use "clean" machines! They will use the same computers they use on a daily basis, the same crappy computers they have been using for months and years. They will have anti-virus programs, different browsers, different service packs and languages.
All of these things are more than capable of interfering with our application, and when we test, we should test with them present.
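To make this concrete, here is a minimal, hypothetical sketch of the kind of bug a "dirty" machine surfaces and a "clean" one never does. The function names and sample values are my own invention for illustration: a naive price parser works fine on the developer's en-US machine but fails for a customer whose language settings use a comma as the decimal separator.

```python
def parse_price(text):
    """Naive parser: silently assumes '.' is the decimal separator,
    which happens to hold on the developer's clean machine."""
    return float(text)

def parse_price_locale_aware(text, decimal_sep="."):
    """More robust variant: normalize the separator before parsing."""
    return float(text.replace(decimal_sep, "."))

# Inputs that a QA pass on machines with different language
# settings would surface (hypothetical sample data):
samples = {
    "en-US": ("19.99", "."),   # developer's machine: works
    "de-DE": ("19,99", ","),   # German locale: comma separator
}

for loc, (raw, sep) in samples.items():
    try:
        naive = parse_price(raw)
    except ValueError:
        naive = None  # the failure a clean machine never shows
    robust = parse_price_locale_aware(raw, sep)
    print(loc, naive, robust)
```

The point is not this particular parser; it is that the failing input only ever appears on a machine configured the way a real customer's machine is configured.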
A one-time effort to create "dirty" machines can save countless hours spent trying to fix a bug after the application has been released to clients.
I remember more than a few times when an anti-virus program on a client's machine caused my program to stop working, and this is exactly the kind of thing that could have been caught in QA.
So, to summarize: developing on a clean machine is fine, but testing should definitely be conducted on used machines, ones that really simulate a computer in real life.