The practice of developing application software can be described as the process of creating computer programs that perform specific functions. Generally speaking, each program is designed to assist end users with a particular operational process. This may be related to productivity, creativity, communication, and so on. The resulting features are sometimes delivered as a standalone, dedicated toolset – for example, a basic smartphone app. Larger bodies of work may be integrated with others that perform a wide range of functions within an application suite. Regardless of size or scale, the application is introduced to the technology environment when it is released into production.
By their nature, most software applications routinely access, modify, display and transmit information, some of which may be considered sensitive. Depending on the size of an application's user base, applications that access data can have widespread reach within an organisation, and sometimes an external audience too. These attributes make software applications a potential cause of data corruption or leakage to unauthorised parties.
Because potentially risky applications may be used by many people across an organisation, development must be carried out in accordance with industry best practices for secure coding. This is especially important if the application interacts with other in-house applications or data sets. Riskier still is information that can be accessed by external parties, e.g. via a web application.
Policies for application development (internal, external and including web-based administrative access to applications) must be included in your company’s ISMS. Policies must ensure vulnerability prevention, covering the following:
- Development Standards
- Testing and Change Control
- Management of Patches and Modification
The core principle behind the software development policies of an ISMS is the need to observe industry standards or best practices relating to secure coding. In short, security must always be an integral part of the process when developing application software. The intent is that developers always specifically address security in software coding standards and consider known vulnerabilities when developing applications.
Many common vulnerabilities are well known within the development community, and there are readily available reference sources for areas of special risk, such as public-facing web applications. A good place to start when reviewing best-practice coding is the Open Web Application Security Project® (OWASP) website, which maintains the Top 10 list of security risks to web applications. The Top 10 covers areas such as access control and authentication, cryptography, design flaws, known vulnerabilities and data integrity. Though the list is concerned with what are considered critical security risks for developers of web applications, the principles are valid for all types of application development.
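To make one of those risk categories concrete, here is a minimal sketch of injection – a long-standing entry on the OWASP Top 10. The table, data and user input below are illustrative assumptions, not from any real application; the point is the contrast between concatenating user input into a query and passing it as a parameter.

```python
import sqlite3

# Hypothetical user table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # crafted input from an attacker

# Unsafe: string concatenation lets the input rewrite the query logic.
unsafe = conn.execute(
    "SELECT role FROM users WHERE name = '" + user_input + "'"
).fetchall()

# Safe: a parameterised query treats the input purely as data.
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()

print(unsafe)  # → [('admin',)] – the injected condition matched every row
print(safe)    # → [] – no user is literally named "alice' OR '1'='1"
```

The same principle – keep code and data separate – underlies defences against most injection variants, whatever the language or database.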
Professional development standards must therefore be observed at all times during the software development lifecycle. An important part of this is a mandatory review of custom code for vulnerabilities prior to release – a code review by knowledgeable personnel other than the original developer. This peer review helps prevent the introduction of vulnerabilities by applying a ‘second pair of eyes’ to the code, and should ensure none of the known issues make it into the application.
During development, a test environment is often used to validate new code. Separation of the development and test environments must be maintained to ensure the integrity of each, and to avoid cross-contamination in the event of an incident. To support this, there must also be a clear separation of duties across users of the different environments.
As the development cycle progresses to testing, there must be proper management of test data used to validate software. This is primarily to ensure there is no accidental inclusion of ‘live’ data which could be exposed during the testing phase. Any custom accounts and passwords used to aid testing must be removed before activating new code in production.
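One way to enforce the removal of custom test accounts and passwords is an automated pre-release check. The sketch below assumes a naming convention (prefixes like `test_`, `dummy_` or `temp_`) and a sample config file; both are hypothetical, and a real pipeline would scan actual release artefacts.

```python
import re

# Assumed convention: test credentials carry a test_/dummy_/temp_ prefix.
FORBIDDEN = re.compile(r"\b(test|dummy|temp)_(user|account|password)\b", re.IGNORECASE)

def leftover_test_credentials(text: str) -> list:
    """Return any lines that still reference test accounts or passwords."""
    return [line for line in text.splitlines() if FORBIDDEN.search(line)]

# Illustrative config fragment about to be promoted to production.
sample_config = """\
db_user = app_service
db_password = ${VAULT_DB_PASSWORD}
admin = test_user
"""

flagged = leftover_test_credentials(sample_config)
print(flagged)  # → ['admin = test_user']
```

A check like this would typically run as a release gate, failing the build if any line is flagged rather than relying on a manual tidy-up.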
Several additional considerations apply when testing software that accesses sensitive information, such as payment card details. Refer to documentation on the PCI DSS framework for more details.
When it comes to releasing software patches and modifications, a formal Change/Release Control process must be followed. The process includes:
- Functionality testing of the new code
- Impact analysis of introducing the new code to the production environment
- Documented approval to promote the new code
The process should also include backout procedures to handle the failure of new features, or a security breach caused by new code. As in the development and testing phases, separation of duties and roles in the software release process must be maintained to ensure each environment is adequately protected.
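The backout idea can be sketched in a few lines: promote a new release only after a post-release check, and restore the recorded known-good version on failure. The version strings and health-check functions here are invented for illustration; a real rollout would sit inside your deployment tooling.

```python
def deploy(new_version, current_version, health_check):
    """Promote new_version; back out to current_version if checks fail."""
    active = new_version
    if not health_check(active):
        # Backout procedure: restore the recorded known-good release.
        active = current_version
    return active

# Simulated outcomes: one release passes validation, one fails.
ok = deploy("v2.0", "v1.9", lambda v: True)
bad = deploy("v2.0", "v1.9", lambda v: False)
print(ok, bad)  # → v2.0 v1.9
```

The essential point is that the previous version is captured *before* promotion, so backing out is a pre-approved step rather than an improvised recovery.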
More To Come...
Look out for the next instalment in this topic.
In the meantime, browse more Thistle Tech posts by clicking this button.
If you need assistance with this topic,
or advice on any other aspect of what we do,
feel free to contact us using this button.