Application performance is a measure of the real-world performance and availability of applications. The term is often used in the context of remote and cloud computing applications, but considerable effort is also spent tuning applications that run locally to improve their performance: for example, file access time, database queries, infrastructure, graphics, human interfaces, and network access.
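As a minimal sketch of measuring one such local factor, file access time, the snippet below times how long it takes to read a file in fixed-size blocks. The function name, the block size, and the use of Python's `time.perf_counter` are illustrative assumptions, not anything prescribed by this article:

```python
import time

def time_file_read(path, block_size=4096):
    """Return wall-clock seconds spent reading a file in fixed-size blocks.

    Hypothetical helper for illustration; block_size of 4096 bytes is an
    arbitrary choice matching a common filesystem page size.
    """
    start = time.perf_counter()
    with open(path, "rb") as f:
        # Read until EOF; f.read() returns b"" when the file is exhausted.
        while f.read(block_size):
            pass
    return time.perf_counter() - start
```

Repeating such a measurement with different block sizes, or on different storage devices, is one simple way to see where local tuning effort pays off.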
Most applications today are brownfield rather than greenfield systems: they are built on top of existing code rather than written from scratch. In addition, many programs rely on incorporating reusable components as part of a cost-reduction strategy. Reusing components such as operating systems, databases, and access control makes a lot of sense, since the functionality of these components is not an area of expertise for the developers of the system. For example, if the system is a medical device, then designing new virtual page swap software has nothing to do with medical technology or medicine. Another example might be developing middleware for accessing other computers on the network. This kind of development might seem “fun,” but it is not within the scope of the project that is developing the system. These reused components also have the benefit that they are exercised by a wide range of other projects and systems, accessed by many people. Collectively, these communities can find and fix software defects faster than any single project could locally.
I don't believe there is a company that could pay for all the testing that user communities provide. Linux is a great example.1)
It is also true that having this many users makes Linux a huge target for malicious activity, but pretending that any OS is invulnerable to attack is a fool's game.