The risk workflow in this case is very important, and there are multiple angles to consider. Let's start with a simple one.
The first items to consider are the issues that arise during the development of the software. Already, two types of risk exist. First, there are the risks that exist in the application itself, which should be known and captured in the risk register. The business owner must accept these risks, because ultimately it is the business owner who decides how to prioritize them, and whether or not to fix them, depending on the priorities of the business.
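To make the idea of a risk register concrete, here is a minimal sketch of what a register entry and an explicit acceptance step could look like. The field names and statuses are illustrative assumptions, not a standard schema:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class Status(Enum):
    OPEN = "open"          # captured, but no decision made yet
    ACCEPTED = "accepted"  # business owner has signed off on living with it
    FIXING = "fixing"
    FIXED = "fixed"

@dataclass
class RiskItem:
    risk_id: str
    title: str
    severity: str                       # e.g. "low" / "medium" / "high"
    status: Status = Status.OPEN
    accepted_by: Optional[str] = None   # the business owner who accepted it
    accepted_on: Optional[date] = None

def accept_risk(item: RiskItem, owner: str) -> RiskItem:
    """Record the business owner's explicit acceptance of a risk."""
    item.status = Status.ACCEPTED
    item.accepted_by = owner
    item.accepted_on = date.today()
    return item
```

The point of the `accepted_by` field is accountability: the register should show not just that a risk exists, but who made the decision to live with it.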
Secondly, there are the risks that the company is willing to disclose to its customers. A mix of legal, governance, business, and even marketing stakeholders must decide on disclosure, because in most companies security is still a marketing-driven exercise: there is no regulation or requirement to disclose the truth.
There will be a list of security issues that the business is willing to expose to customers, especially to clients who have signed NDAs, since those clients cannot disclose or talk about those issues publicly.
If you work on the software producer's security team, you should know every risk that exists. To do so, you must go through the process of capturing, prioritizing, and understanding that information, and then you must convince your business owners to formally accept those risks. That is the workflow.
Another grey area of responsibility is insecure-by-design features that are enabled by default, or that are so key to the value of the software that most clients will enable them.
This process raises a lot of interesting ethical dilemmas. If you have a risk that no customer will notice, that is unlikely to be attacked, and that nobody will pay attention to, do you need to fix it?
Of course, you know you should fix the risk because it is the right thing to do. But you also know that business is about making decisions, including risk decisions, and a lot of companies would choose not to address this kind of issue.
In fact, some companies even have a perverse reward system in which staff are rewarded for not knowing: by pleading ignorance of a problem, they evade accountability.
I would argue that ignorance is no longer a valid excuse, especially if you are dealing with vulnerabilities or problems that are widely known, and have been disclosed in different places.
If you are a client of such software packages, the situation is tricky. If the product is open source, you have the code and you can do a security review. Of course, this doesn't mean you will actually do one, but the option is there.
In most traditional products, the code is proprietary and you won't be able to access it. You are therefore dependent on the vendor for information, which is usually limited. Your only option is to pay for an independent security review, which is something I have often done.
When you do discover a security problem with a product, you log it in your own risk register, recording the problems you discovered in that product.
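For third-party products, a register entry needs a few extra fields to track the vendor and the state of the disclosure. A hypothetical entry (all names, dates, and values below are illustrative, not a standard schema) might look like:

```python
# Hypothetical third-party risk entry; the field names are assumptions
# chosen to track the vendor relationship, not an industry standard.
third_party_risk = {
    "risk_id": "TP-001",
    "product": "ExampleVendor CMS",   # illustrative product name
    "vendor": "ExampleVendor",
    "summary": "Session tokens are not invalidated on logout",
    "discovered_by": "independent security review",
    "reported_to_vendor": True,
    "vendor_already_knew": False,     # did they have it in their own register?
    "vendor_fix_available": False,
}
```

The `vendor_already_knew` field matters because, as discussed next, a vendor that is surprised by your report is telling you something about the maturity of their own risk process.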
You report the problem to the vendor. Of course, the vendor should already know about the problem, and should have a solution for it. If they don't know about it, this highlights the fact that they don't have a risk process, and they weren't paying attention to that problem.
If you are the vendor, and such issues are disclosed to you, you are now in a very weak position. Your client now has more power, and more leverage, than you do. In a way, you have switched from being the senior player to the junior player at the table.
A big problem we have as an industry is that when client A discloses a problem, the vendor has no legal responsibility to disclose it to client B. The vendor can run a business and marketing exercise for client A to keep them happy, offer them more licenses, and limit the damage, because they don't have to tell all the other clients.
The client should ask the following questions:
- Do you know about this?
- How many more of these do you have?
- Why didn't you tell me in the first place?
If clients asked these questions, and vendors answered them, we would have a much better working market, and a much better model.
If I am working for the client and we find issues, I ask a lot of questions about how fit for purpose the software is, especially if I start to find basic vulnerabilities and I know that the vendor doesn't have a security team or a good security posture.
This is an interesting, multi-dimensional problem, and risk workflows, together with the mapping of risks, will be a great way to measure it.
Look at the insurance industry, and its need to understand the real risks created by applications. Although one might argue that not all these risks should be disclosed publicly, at least until they are fixed, the industry should at least disclose how many have been identified.
It goes back to the concept of a labelling system, where you create labels for software. In the same way that a label on a food product lists its ingredients, a security label on a software product should list the security vulnerabilities that exist within it.
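A sketch of what such a label could contain, by analogy with a food ingredient list. The schema below is purely illustrative (there is no agreed standard for security labels), and at a minimum it could disclose counts per severity rather than the details of unfixed issues:

```python
import json

# Hypothetical "security label" for a software release. All field names
# and values are illustrative assumptions, not an existing standard.
security_label = {
    "product": "ExampleApp",
    "version": "2.3.1",
    "known_issues": {    # counts only; details withheld until fixed
        "high": 1,
        "medium": 4,
        "low": 9,
    },
    "independent_review_performed": True,
}
print(json.dumps(security_label, indent=2))
```

Even a coarse label like this would let a prospective client ask the questions listed above from an informed position.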