The trouble with "quality control"

"Quality control" has been mentioned here a few times recently so, why not I open a thread about it?

For those who are unaware, it's a term relating to how software/hardware updates are tested, managed and maintained, especially before they are released to a consumer's device. It has become a buzzword in Windows 10-related discussions because we know how bad Microsoft has been with its updates, which are always breaking stuff, and it soon turned out that most updates are effectively released in a beta state. I ask this though... why? Does Microsoft expect all of us to be technicians, or even digital detectives, to solve whatever the hell happens after a few updates are forced on us?

I'm also beginning to think that poor quality control might not just be a Microsoft issue... other devices may be vulnerable to lazy-ass companies too. One example... years ago, an update for an Asus tablet I once owned (which ran Android 5) totally bricked it, and more recently, my BT TV set-top box has had some unexpected issues as well, such as a problem with its Live Pause feature where the only way to fix it was a factory reset. Subtitles or audio description would also be randomly turned on at times, and it was a pain in the ass to turn them back off. I might be assuming things, but the Asus tablet was most likely a victim of bad QC (and it's still in one of my drawers, long forgotten, never to see the light of day again).

Discuss.

Comments

  • I have nothing to say about Android, but I do have lots to say about cable boxes.

    I do not care much about television these days, but I do know that at one time, most cable boxes in North America (and a few in the UK and South Africa, among other markets) were made by Scientific Atlanta. The software that most of their boxes ran in the 2000s was very spartan and looked like something out of the 1980s, but was decently stable; I do believe that most stability issues were due to hardware failure/flaws, much like NT5+/Linux/Unix.

    And then SA sold out to Cisco, who then sold it to Technicolor (more of a holding company than anything else, actually). With more powerful hardware now available to support HDTV and multi-tuner PVR/DVR service, many companies began to spruce up the software layer above the firmware/software that controls the playback of video/audio streams.

    However, some of this software was released in a very buggy state; my cable system's new software front-end would originally kick users out of a playing recording if the "guide" button was invoked. This and various other early issues were fixed long ago, but 1-2% of the recordings still fail.

    A cable system in a neighbouring province introduced its own software (the day before W10 came out; what a coincidence) where forced upgrades wiped existing recordings and settings. Some older units were simply too slow to run the software at an acceptable speed (with no option to roll back) and crashes are still common.

    The underlying software/firmware is supposed to be maintained by the manufacturer of the cable boxes, and Technicolor does not care about those legacy products. So recording failures and other minor playback issues persist.

    I am far less forgiving of bugs in cable boxes and other locked-down appliances, as there are relatively few hardware/software configurations compared to what Microsoft has to deal with.

  • The problem these days is there is a HUGE disconnect between manufacturers and customers.

    Look at some of the early Apple Macintosh testing videos - you literally have an employee watching someone use the computer, gauging every last thing about their experience, how they feel about it, and why they think things should work certain ways. Those employees may very well have been some of the programmers designing and writing parts of the code.

    These days, analysis is done by remotely collecting automated "metrics" about which buttons are pressed. The people designing the software are from the marketing department. This is why screen real estate gets sold off for advertising, and proper programs turn into cloudy "Software as a Service". "Testing" is all done with automated scripts and little or no manual testing; if some edge case breaks and there was no automated test for it, it will not be caught (see the sketch at the end of this comment). Then all of you become the actual testers - except when you find problems, no one cares. The companies won't care unless sales drop, and most consumers these days are so used to it that a product could periodically rape them up the ass and they would still buy it.

    This applies not just to computers but to almost every product, including things like clothing and food. It's why stuff falls apart after a month rather than lasting for years like it used to. It's why products come in a dozen wacky varieties except the no-nonsense one you want. It's why you bring a product home and find they changed it and it no longer does what you need. It's why every appliance has to somehow be internet-dependent and send your private information back to the manufacturer.

    Technically, the blame lies on both sides of the fence. Companies might tighten their quality control if consumers refused to buy their defective products. But when there is little or no competition, customers are usually left with no choice.
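
    To make the "untested edge cases get shipped" point concrete, here's a minimal sketch in Python. The apply_discount function and its single test are entirely hypothetical - not taken from any real product - but they show how an automated suite that only covers the case someone thought of lets a broken edge case go straight to the customer.

    # Hypothetical example: the only "QC" is one automated happy-path test.
    def apply_discount(price, percent):
        """Return the price after a percentage discount."""
        return price - price * percent / 100

    def test_apply_discount():
        # The automated suite covers the one case the developer thought of...
        assert apply_discount(100.0, 20) == 80.0

    # ...but nobody wrote a test for bad input, so this ships as-is:
    # apply_discount(100.0, 120) returns -20.0, a negative price, and the
    # first person to notice is a paying customer.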

  • The problem, in terms of Microsoft, is the lack thereof.

    Satya Nadella, in his infinite wisdom, believed crowd-sourcing (making end-users beta testers) and developer-side testing would cut costs compared to having an entire lab dedicated to this. And to his benefit, it did.

    However, this comes at a cost. Developers are going to cut corners when testing. They have deadlines to meet, and only care to see that, on the surface, the change they made works. Then they move on. I can attest to this myself: every time I write a large program, I too only check that what I just did works, only to find out later down the line that I broke something. But I digress.
    So then all of this is pooled into the source depot along with everyone else's code, which may or may not develop bugs from the change. A build is made, and the only way to see that something is wrong at this point is through build errors or warnings, which likely won't show up for this kind of bug (see the sketch at the end of this comment). The compiler output is then refined and made usable for testing.

    Previously, a massive crew would test this escrow build with as many possibilities and circumstances as possible, and if it was good enough, it went to RTM.
    This is no longer the case. Instead, only a surface-level "yup, everything works" check is made before shipping it out. The end user is expected to find the bugs; after all, there are near-infinite combinations of circumstances across end users.
    However, this idea fails when you realize there are people who pollute the Feedback Hub with:
    Please bring back Windows 7 Aero!
    Please stop tracking me!
    Why did you preload Candy Crush?
    Windows 10 is so slow.
    Why doesn't [insert really outdated hardware] work with Windows 10?

    These wind up burying the less-reported, legitimate issues, such as:
    Latest KB[insert long number] renders dedicated audio hardware unusable even with latest driver.
    Update KB[long number] causes network dropouts.
    Windows 10 [whatever number it's on] causes higher-than-normal SSD wear compared to previous version.
    Windows 10 [number] upgrade process deleted files, had to restore from backup.

    You get the picture.
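
    To illustrate the "it builds, so nothing flagged it" problem from earlier in this comment, here's a hypothetical Python sketch (the function names are made up, not anyone's real code). A developer changes a shared helper, checks that their own new caller looks right, and merges. Nothing fails at build time, but an existing caller elsewhere in the code is now silently wrong - exactly the kind of thing only a proper test pass, or an end user, would catch.

    # Hypothetical shared helper, changed to return free space in megabytes
    # instead of bytes. The developer's own new code works fine with it.
    def get_free_space():
        free_bytes = 123_456_789  # stand-in for a real disk query
        return free_bytes // (1024 * 1024)  # now MB; it used to be bytes

    def new_feature_dialog():
        # The one thing the developer actually checked: looks right, ship it.
        print(f"{get_free_space()} MB free")

    def old_update_check(download_size_bytes):
        # Pre-existing caller still assumes bytes, so it now concludes that
        # almost no download will ever fit. Nothing here fails to build;
        # it just quietly does the wrong thing out in the field.
        return get_free_space() >= download_size_bytes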
