SDL Coverage of Relevant Regulations, Certifications, and Compliance Frameworks
One key criterion for success early in the SDL is whether all key regulations, compliance frameworks, and certifications for the product (or its libraries) have been identified. This success factor depends on understanding product objectives and customer uses. It is easy to assume that certain regulations will not apply because particular use cases are not considered valid. Customers, however, often see things differently. From one viewpoint, a cloud product that a customer uses to interact with other customers might not need to comply with HIPAA. For the customer, though, it is crucial that such a product, even if it is not itself HIPAA-compliant, at least does not create issues that could result in the customer's noncompliance.
For nearly 30 years, Brook has regularly reminded us that any piece of software its users find even moderately useful will be employed for tasks that the makers of the software never imagined. When thinking through use cases, it is very important not to preclude those imaginative uses in a way that also excludes potential customers.
Compliance frameworks are another area to watch. Depending on how the product is used (in-house or in the cloud), customers expect different permutations. If customers are pursuing ISO 27001 certification and are using your product in a cloud environment, they will expect a demonstrable and verifiable operational and product security posture. If customers pay for your service with credit cards, not only their environment but yours may fall under the PCI standards. Though we are focusing on product security here, operational security is equally important.
Finally, many times, while covering regulations, compliance frameworks, and certifications, security and development teams fail to look closely at dependencies. For example, if the product needs to comply with the Federal Information Processing Standards (FIPS), how will using an open source library affect compliance? If the product needs to obtain Certification A, will dependent software make or break this certification? These questions need to be carefully considered to prevent future firefighting.
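The dependency questions above can be made concrete with lightweight tooling in the build pipeline. The following sketch is purely illustrative: the package names and the FIPS allow-list are hypothetical examples (a real allow-list would come from your compliance team and the CMVP validated-modules list), not an authoritative inventory.

```python
# Illustrative sketch: flag crypto-providing dependencies that are not on the
# compliance team's approved (e.g., FIPS 140-validated) list. All names and
# versions below are hypothetical examples.

# Crypto-providing packages the product is known to pull in.
CRYPTO_DEPENDENCIES = {
    "openssl-fips-module": "3.0",   # hypothetical validated module
    "homegrown-crypto": "1.2",      # hypothetical unvalidated library
}

# (name, version) pairs the compliance team has verified as FIPS-validated.
FIPS_APPROVED = {("openssl-fips-module", "3.0")}

def audit_fips(dependencies, approved):
    """Return the sorted (name, version) pairs that break FIPS compliance."""
    return sorted(
        (name, ver) for name, ver in dependencies.items()
        if (name, ver) not in approved
    )

for name, ver in audit_fips(CRYPTO_DEPENDENCIES, FIPS_APPROVED):
    print(f"NOT FIPS-approved: {name} {ver}")
```

Running such a check on every build keeps the dependency question from being revisited only during certification crunches.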
Over the last few years, customers of software vendors have increasingly requested independent audits to verify the security and quality of software they have purchased or are evaluating for purchase. Software vulnerabilities have been tied to a growing number of high-profile data breaches, and more customers now require independent, visible proof that the software they buy is secure. This, in turn, has pressured software companies to build secure development processes into the SDLC and so avoid the very costly discovery of vulnerabilities post-release, which is often a sign of an immature, ineffective, or nonexistent software security program. Because so much post-release code contains security vulnerabilities and privacy issues that should have been caught during development, third-party assessment of post-release or near-release code has become the industry norm, whether or not the company producing the software has a reputation for producing secure code. In some cases it is demanded by a prospective or current customer; in others, it is conducted proactively by the company producing the code.
Even for companies with outstanding software security programs, software applications can drift in and out of compliance with policies or regulatory requirements over time for a variety of reasons. For example, new functionality or a new use case in a new version of the application may introduce vulnerabilities or attack surfaces that take the application out of compliance. The requirements themselves may also change over time. Many companies use third-party code reviews to identify these situations rather than spend the limited resources of their internal teams.
Third-party testing should include testing the entire stack, not just your product. That means performing testing as outlined in earlier chapters as well as continuous post-release testing. At a minimum, post-release testing should include annual penetration (pen) testing (application and software stack). Any new code released after initial release should follow the SDL requirements outlined in previous chapters.
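The "at least annually" cadence is easy to lose track of across a large stack. As a minimal sketch (component names, dates, and the record structure are hypothetical, not from any standard), a team could track the last pen-test date per component and flag anything overdue:

```python
# Illustrative sketch: flag components of the software stack that are overdue
# for their annual penetration test. Components and dates are made-up examples.
from datetime import date, timedelta

PEN_TEST_INTERVAL = timedelta(days=365)  # "at least annually"

last_pen_test = {
    "web-frontend": date(2023, 11, 2),
    "api-service": date(2022, 6, 15),
    "payment-stack": date(2023, 1, 20),
}

def overdue_components(records, today, interval=PEN_TEST_INTERVAL):
    """Return the names of components whose last pen test is older than interval."""
    return sorted(name for name, tested in records.items()
                  if today - tested > interval)

print(overdue_components(last_pen_test, date(2023, 12, 1)))
```

In practice this record would live in a tracking system rather than source code, but the overdue check itself stays this simple.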
The biggest challenge is to do this in a timely and cost-effective manner while also protecting the source code and other intellectual property during the process. Some of the choices for third-party testing include the following. 
- 1. Hand over source code to a third party for inspection. This is not a real option for those who want to protect the most precious intellectual property a software development organization possesses: its source code.
- 2. Contract manual penetration testing services that can also perform deep-dive code and software architectural design reviews for each new release. To avoid the risk of source code leaving the control of the company that is developing it, contractors must be required to work onsite in a controlled environment, under special nondisclosure agreements and specific guidelines. These typically include a source-code protection policy and IP-protection guidelines. An alternative to this approach is to employ a company whose tools require exposure of the binary code only. In this case, the contractor inspects the application at the same level at which it is attacked, the binaries, and can ensure that all threats are detected. This type of testing can be done onsite or remotely, as a service.
- 3. Purchase and install on-premises tools, and train development teams to use them and to function as lower-level software security architects, extending the software security group to conduct the "people side" of the software security architectural review. Then invite auditors into your organization to document your processes. Many mature software security organizations have done this. A mature software security program such as the one described in this book will help scale this work and reduce the need for additional headcount. Building this into your SDL/SDLC process is a cost-effective, efficient, and manageable way to do it.
- 4. Require third-party suppliers of code in your application to do the same. In today's software development environments, the majority of organizations use code developed elsewhere, either commercial off-the-shelf (COTS) or open source software. Just as with internally developed software, a third party should prepare an attestation report per the software application owner's requirements, which may include an attack-surface review; a review of cryptography; architecture risk analysis; technology-specific security testing; binary analysis, if source code is unavailable; source code analysis, if it is; and fuzz testing, in addition to a general pen testing routine.

Third-party reviews are often critical to demonstrating "security" to end users and customers. The software team should create a preferred list of vendors, vetted both for their skills and for their ability to handle sensitive information. Since these vendors will be handling sensitive security information, verify that they use full disk encryption, communicate securely, dispose of all customer data as soon as testing ends, and so on. Whenever security testing is needed, one of these vendors should be selected to perform it. Security testing of the entire software stack and product portfolio should be performed at least annually.
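The vendor-vetting criteria above lend themselves to a simple checklist. The sketch below is only illustrative: the control names and vendors are hypothetical, and a real program would track evidence for each control, not just a flag.

```python
# Illustrative sketch: vet security-testing vendors against the handling
# criteria discussed in the text (full disk encryption, secure communications,
# data disposal on completion). All names are hypothetical examples.

REQUIRED_CONTROLS = {
    "full_disk_encryption",
    "secure_communications",
    "data_disposal_on_completion",
    "nda_signed",
}

vendors = {
    "AcmePenTest": {"full_disk_encryption", "secure_communications",
                    "data_disposal_on_completion", "nda_signed"},
    "BudgetAudits": {"secure_communications", "nda_signed"},
}

def vet(vendor_controls, required=REQUIRED_CONTROLS):
    """Return the set of required controls the vendor is missing."""
    return required - vendor_controls

for name, controls in sorted(vendors.items()):
    missing = vet(controls)
    status = "approved" if not missing else f"rejected, missing: {sorted(missing)}"
    print(f"{name}: {status}")
```

A vendor lands on the preferred list only when the missing set is empty; re-running the check at contract renewal keeps the list current.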