The trouble with "quality control"

"Quality control" has been mentioned here a few times recently so, why not I open a thread about it?

For those who are unaware, it's a term for how software/hardware updates are managed and maintained, and especially how they are released onto consumers' devices. It has become a buzzword in Windows 10-related discussions because we know how bad Microsoft has been with its updates, always breaking stuff, and it soon turned out that most updates are effectively released in a beta state. I ask this though... why? Does Microsoft expect all of us to be technicians or even digital detectives, solving whatever the hell happens after a few updates get forced onto our machines?

I'm also beginning to think that poor quality control might not just be a Microsoft issue... other devices may be vulnerable to lazy-ass companies too. A couple of examples: years ago, an update for an Asus tablet I once owned (running Android 5) totally bricked it, and more recently my BT TV set-top box has had some unexpected issues as well, such as a problem with its Live Pause feature where the only fix was a factory reset. At times, subtitles or audio description would also randomly turn themselves on, and it was a pain in the ass to turn them back off. I might be assuming things, but the Asus tablet was most likely a victim of bad QC (and it's still in one of my drawers, long forgotten, never to see the light of day again).

Discuss.

Comments

  • I have nothing to say about Android, but I do have lots to say about cable boxes.

    I do not care much about television these days, but I do know that at one time, most cable boxes in North America (and a few in the UK and South Africa, among other markets) were made by Scientific Atlanta. The software that most of their boxes ran in the 2000s was very spartan and looked like something out of the 1980s, but was decently stable; I do believe that most stability issues were due to hardware failure/flaws, much like NT5+/Linux/Unix.

    And then SA was sold to Cisco, which later sold it on to Technicolor (more of a holding company than anything else, really). With more powerful hardware now available to support HDTV and multi-tuner PVR/DVR service, many companies began to spruce up the software layer sitting above the firmware that controls playback of the video/audio streams.

    However, some of this software was released in a very buggy state; my cable system's new software front-end would originally kick users out of a playing recording if the "guide" button was invoked. This and various other early issues were fixed long ago, but 1-2% of the recordings still fail.

    A cable system in a neighbouring province introduced its own software (the day before W10 came out; what a coincidence) where forced upgrades wiped existing recordings and settings. Some older units were simply too slow to run the software at an acceptable speed (with no option to roll back) and crashes are still common.

    The underlying software/firmware is supposed to be maintained by the manufacturer of the cable boxes, and Technicolor does not care about those legacy products. So recording failures and other minor playback issues persist.

    I am far less forgiving of bugs in cable boxes and other locked-down appliances, as there are relatively few hardware/software configurations compared to what Microsoft has to deal with.

  • The problem these days is there is a HUGE disconnect between manufacturers and customers.

    Look at some of the early Apple Macintosh testing videos - you literally have an employee watching someone use the computer, gauging every last thing about their experience, how they feel about it, and why they think things should work certain ways. Those employees may very well have been some of the programmers designing and writing parts of the code.

    These days, analysis is done by remotely collecting automated "metrics" about which buttons are pressed. The people designing the software are from the marketing department. This is why screen real estate gets sold off for advertising, and why proper programs turn into cloudy "Software as a Service". "Testing" is all done with automated scripts and little or no manual testing, so if some edge case breaks and there was no automated test for it, it will not be caught. Then all of you become the actual testers - except when you find problems, no one cares. The companies won't care unless sales drop, and most consumers these days are so used to it that a product could repeatedly screw them over and they would still buy it.

    This applies not just to computers but to almost every product, including things like clothing and food. It's why stuff falls apart after a month rather than lasting for years like it used to. It's why products come in a dozen wacky varieties except the no-nonsense one you want. It's why you bring a product home and find they changed it and it no longer does what you need. It's why every appliance has to somehow be internet-dependent and send your private information back to the manufacturer.

    Technically, the blame lies on both sides of the fence. Companies might tighten their quality control if consumers refused to buy their defective products. But when there is little or no competition, customers are usually left with no choice.

  • The problem, in Microsoft's case, is the lack thereof.

    Satya Nadella, in his infinite wisdom, believed crowd-sourcing (making end users into beta testers) and developer-side testing would cut costs compared to keeping an entire lab dedicated to this. And to his credit, it did.

    However, this comes at a cost. Developers are going to cut corners on testing. They have deadlines to meet, and only care to see that, on the surface, the change they made works. Then they move on. I can attest to this myself: every time I write a large program, I too only check that what I just did works, only to find out later down the line that I broke something. But I digress.
    So all of this is then pooled into the source depot along with everyone else's code, which may or may not develop bugs from the change. A build is made, and the only way left to see if something is wrong is through build errors or warnings - which likely won't show up. The compiler output is then refined and made usable for testing.

    Previously, a massive crew would test this escrow build under as many possibilities and circumstances as possible, and if it was good enough, RTM.
    This is no longer the case. Instead, only a surface-level "yup, everything works" check is made before shipping it out (a rough sketch of this failure mode follows at the end of this comment). The end user is expected to find the bugs; after all, there are near-infinite combinations of circumstances for each end user.
    However, this idea fails when you realize there are people who pollute the Feedback Hub with:
    Please bring back Windows 7 Aero!
    Please stop tracking me!
    Why did you preload Candy Crush?
    Windows 10 is so slow.
    Why doesn't [insert really outdated hardware] work with Windows 10?

    These wind up drowning out less frequently posted but legitimate issues such as:
    Latest KB[insert long number] renders dedicated audio hardware unusable even with latest driver.
    Update KB[long number] causes network dropouts.
    Windows 10 [whatever number it's on] causes higher-than-normal SSD wear compared to previous version.
    Windows 10 [number] upgrade process deleted files, had to restore from backup.

    You get the picture.
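
    To put the "it compiled and my quick check passed, ship it" problem in concrete terms, here's a rough Python sketch. The function and the build strings are made up purely for illustration - nothing to do with actual Windows source - but it shows how a change can produce no build errors or warnings and pass the author's happy-path check, while an edge case nobody wrote a test for only blows up once it reaches users.

        # Hypothetical illustration of the failure mode described above: the code
        # imports cleanly, the author's quick check passes, and the untested edge
        # case is only discovered in the field.

        def parse_build(text: str) -> tuple[int, ...]:
            """Turn a build string like '10.0.19041' into a sortable tuple of ints."""
            return tuple(int(part) for part in text.split("."))

        # The developer's surface-level "yup, it works" check before moving on:
        assert parse_build("10.0.19041") == (10, 0, 19041)
        print("happy path: OK")

        # The edge case no automated test covers; no compiler error or warning
        # hints at it, so it ships and an end user finds it instead.
        try:
            parse_build("10.0.19041.vb_release")  # a made-up branch-tagged string
        except ValueError as exc:
            print(f"untested edge case blows up: {exc}")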

  • Bringing this up again only to mention this... it recently emerged that Microsoft may go with a "permanent" remote-working plan (one they'd probably keep even when the pandemic finally ends). So, with its employees relying on Zoom and the like while sitting on dining chairs doing their work, this raises the question... how will this affect quality control for future Windows updates?
  • Well with how bad it's been so far, it can't get much worse....... can it?
  • If you have to ask that, you have clearly never used Oracle products. :p

    And just a reminder, eventually all of these new remote jobs will be sent to India.
  • I think we should be using WebRTC or creating an open standard for video calling BEFORE doing that. If Zoom suddenly implodes, it will take weeks to get everything working again and nobody will get any work done.

    What would Wally do? :p
  • @SomeGuy
    If you have to ask that, you have clearly never used Oracle products. :P
    I've used Oracle VirtualBox... does that count? :D
  • >we should be using WebRTC

    No, this is not something we should be doing. Why are we accepting and encouraging stuff like this? Web browsers are becoming insanely complex and we keep trying to shoehorn stuff in where it shouldn't be. Back in the day, we used to call this "feature creep" and it was heavily frowned upon. Web browsers are having things stuffed into them so frequently that being even 2 versions behind can make some websites unusable. This is a major problem and it needs to stop.
  • Thank god I'm using Firefox ESR rather than the mainline release. I've never liked the idea of "rolling releases" for browsers, ever since they first started over ten years ago. Have one major version out and then another just a month later... what's the point in that? :/
  • Well, wherever possible, I completely disable all updates because of the lack of quality control in the industry (one common way of doing this is sketched at the end of this post). I used to update Windows 7 and 8.1 regularly until the CPU vulnerabilities came along and the mitigation patches started causing slowdowns. I stopped updating then.

    Then Windows 10 came along, and auto-updating it is OUT OF THE QUESTION. That said, I do perform in-place manual upgrades once a build is stable, has been thoroughly tested and patched, and has aged a bit. I don't mind staying on an older Windows 10 version indefinitely; I don't care about support ending or about security vulnerabilities.

    I've been using computers since Windows 3.0 and know how to keep them secure without patching everything; not once have I been infected with malware or hacked.

    Office updates are also always disabled. Windows Store auto app updates: disabled. On Android, Google Play Store auto app updates: always disabled. In fact, I will never get a Pixel phone because of Google's crappy OS updates, which have just as many QC issues.

    I used to be a tester for Classic Shell, and I know what a pain it is to find every bug arising from a specific combination of settings and configuration, but in the end it is worth it. You can't just dump software like that onto end users, who are guinea pigs (no offense), not professional testers. In fact, its developer Ivo and I got sick and tired of Windows 10 breaking Classic Shell with every release (not intentionally, I suspect) due to the way it has to hook into Explorer.exe. This was not an issue with stable releases every three years or so - Vista, 7, 8/8.1 - but with Windows 10 the platform became so fragile that it no longer made sense.

    Recently, I got a video-calling device from Facebook called Portal, for its outstanding camera, and while it's fairly private and secure despite coming from FB, it has forced auto-updates which are not tested at all. A couple of months ago, an update to the device broke manual zooming and panning; the only fix, which nobody had documented anywhere, was to factory reset the device. A few weeks later, another update broke the ability to pin/lock the fullscreen view on a specific person in a group call. Nobody at FB cares to fix it despite my reports.

    We are living in a very unfortunate era compared to the late '90s and early 2000s, when software was generally of higher quality, more customizable, and more rigorously tested. Capitalist companies were less greedy and felt some responsibility towards customers; they had some conscience at least. Today nothing is tested, and even extremely broken and defective hardware is often shipped as-is and breaks down soon after purchase.
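
    For what it's worth, and purely as an illustration rather than my exact setup: on Windows 10 Pro/Enterprise editions that honour the documented "Configure Automatic Updates" group policy (Home editions may ignore policy keys), one common way to kill automatic updates is to write that policy's registry value yourself. A minimal Python sketch, assuming it is run from an elevated prompt:

        # Minimal sketch: set the documented "Configure Automatic Updates" policy to
        # Disabled by writing NoAutoUpdate=1 under the WindowsUpdate\AU policy key.
        # Requires administrator rights; Windows 10 Home may not honour policy keys.

        import winreg

        AU_KEY = r"SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate\AU"

        # Create (or open) the policy key and write the DWORD value.
        with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, AU_KEY, 0,
                                winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "NoAutoUpdate", 0, winreg.REG_DWORD, 1)

        print("Automatic updates policy set to Disabled; reboot or run "
              "'gpupdate /force' so Windows Update picks it up.")
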
  • Also, I would like to add that besides the actual disconnect between manufacturers and customers, there is now an unwanted middleman: the marketing folks, whose decisions are largely driven by "cool" hipster trends and by whatever the investors in large publicly traded corporations have to say. So these marketing folks decide a lot of things that are deliberately the opposite of what customers want, or are intentionally built not to last very long, i.e. designed for planned obsolescence within a few years so they can drive the next generation of sales. There are no laws against this in the US, so companies that were once straightforward and honest are today downright customer-hostile and dishonest.
  • One of the biggest issues is that many software companies have switched to a rapid-release cycle to support their subscription models.
    It's wrong. Especially when it is forced.

    An operating system should not be on a rapid-release cycle. A production program should not be on a rapid-release cycle.

    Example: 3D programs. There's a reason many studios stay on older versions of their software. Sure, part of it is not wanting to work out a new licensing agreement. But that aside, it would be a pain in the ass to work through massive asset libraries to make sure they still behave correctly on each new release and update. And when releases are getting pushed out as quickly as they are, good luck.
    And operating systems? Hell no. I shouldn't have to put up examples of how well Windows 10's rapid-release cycle is panning out, now do I?