While the CIA Triad may sound like some sort of menacing criminal gang, it is in fact an essential tool of cybersecurity.
A lack of security requirements is still a major problem in many software projects today. In this article we will discuss the main security requirements and their relationship to secure design concepts and principles.
The first question is: how do we obtain the security requirements, and who should be involved?
There are several sources from which security requirements can be obtained. They can be classified into internal and external sources. Internal sources can be organizational sources, such as policies, standards, guidelines, models and practices, and obviously business requirements. External sources can be local laws or compliance requirements with certain standards, such as PCI DSS.
The business owners are the main players in all of this. They define the objectives and are responsible for determining the acceptable risk level, i.e. the residual risk. End users and customers also play an important role in determining the security requirements of the software and must be actively involved in the requirements elicitation process. Last but not least, the DevOps and Security teams are the units responsible for ensuring that the software is reliable, resilient and recoverable.
Principal Types of Requirements
There are different types of requirements; here are the main ones.

The first type concerns the functions usually visible to users, such as the authentication method, transaction integrity checks, profiles and roles, the methods of interfacing with other external software, and the laws and regulations in force in the context in which the software operates.

The second type is divided into two families: requirements that refine the functional ones, such as the credential caching mode, the choice of cryptographic protocols or the sanitization of inputs, and requirements relating to the architecture, such as the separation of the administrative interface from the other modules, the choice of third-party libraries, the interception of all possible errors so that they are properly handled, or the management of the software update system.

The third type concerns actual development and depends on the language and frameworks used. In general, these requirements cover the standards for naming classes, methods, variables and constants, the management of comments so that they exclude critical information, the management of memory to avoid overflows, and the functions that must not be used because they are deprecated.
But we must always remember that the main purpose of security in software is to defend data.
In the requirements phase, data must be classified as public or non-public and considered from its creation until its final storage, through any processing and movement.
Data states fall into three categories:

- In transit: when data is transmitted over networks or communication channels.
- In use: when data is kept in memory or on external media for processing; these are usually temporary situations that end with the deletion of the data.
- At rest: when data is permanently stored, whether transactional, i.e. data with a defined lifetime such as the data of an order or a payment, or non-transactional, i.e. data whose importance remains over time, such as client information.
CIA Triad And AAA Model
The CIA triad describes the three key objectives of cybersecurity regarding data: Confidentiality, Integrity and Availability. The AAA (or Triple-A) model, one of the main methods through which these objectives are achieved, is composed of Authentication, Authorization and Accountability.

Let's see, for each of these, the related concept, requirements and possible implementation design.
Confidentiality is the security concept that has to do with protecting against unauthorized disclosure of information. Confidentiality not only ensures the secrecy of the data, but also helps to preserve its privacy.
Confidentiality requirements are those that refer to protection against unauthorized disclosure of sensitive information to unauthorized users. At this stage it is important to understand and classify the data used as public and non-public.
Some examples are:
- Personal health information must be protected from disclosure using approved and secure encryption mechanisms.
- Passwords and other sensitive input fields must be masked.
- Passwords must not be stored in clear text in backend systems; when stored, they must be hashed with a salt, using algorithms equivalent or superior to SHA-256.
- A secure version of Transport Layer Security must be adopted to protect against possible Man-in-the-Middle threats for all flows where credit card data is transmitted.
- The use of a transport protocol such as the File Transfer Protocol to transmit sensitive data in clear text outside the organization should not be allowed.
Disclosure protection can be achieved in several ways using encryption and masking techniques. Masking is useful for disclosure protection when data is displayed on screen or printed forms; but to ensure Confidentiality when data is transmitted or stored, encryption techniques are mainly used.
Generally therefore, to ensure Confidentiality as a security requirement, we have:
- Secret Writing: This is the most common form used to hide data. It includes Overt mechanisms, such as public or private key encryption and hashing, and Covert mechanisms, such as steganography and digital watermarking.
The distinction between Overt and Covert lies in how they achieve disclosure protection. The goal of the first is to make information humanly indecipherable or incomprehensible even if disclosed, while the goal of the second is to hide the existence of the information itself within other media or forms.
- Masking: The weakest form of Confidentiality requirement, mainly used to prevent shoulder surfing attacks. An example of this mechanism is when data is partially overwritten with asterisks.
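The masking mechanism can be illustrated in a few lines of Python; the function name and the last-four-digits rule are illustrative choices, as seen on card-number displays:

```python
def mask_pan(pan: str, visible: int = 4) -> str:
    """Overwrite all but the last `visible` characters with asterisks."""
    return "*" * (len(pan) - visible) + pan[-visible:]

print(mask_pan("4111111111111111"))  # → ************1111
```

Note that masking only protects the displayed form; the full value still exists in memory, so it complements encryption rather than replacing it.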
Integrity has to do mainly with two aspects. It must ensure that the data is transmitted, processed and stored unaltered with respect to the origin and must ensure that the software works reliably as expected.
Integrity requirements involve maintaining data consistency, accuracy, and reliability throughout the life cycle. The data must not be modified during transport and measures must be taken to ensure that the data cannot be modified by unauthorized persons (in violation of Confidentiality).
Some examples are:
- All input forms and query string inputs need to be validated against a predefined set of acceptable parameters before the software processes them.
- The published software should be accompanied by a checksum, usually a hash value, so that the end user will be able to verify its accuracy, completeness and consistency by recalculating the checksum.
- All non-human actors, such as systems and batch processes, must be identified and monitored, and data must be prevented from being altered as it passes through those systems unless the modification has been expressly authorized.
Integrity design ensures that there are no unauthorized changes to software or data. Software and data integrity can be achieved using one or more of the following techniques:
- Hashing: Hash functions are used to condense inputs of variable length into an irreversible output of fixed size known as a digest or hash value. The best known algorithms are the Message Digest family (latest version MD5) and the Secure Hash Algorithm family (known as SHA).
- Referential Integrity: It ensures that data is not left in an orphaned state by using primary keys, that is, columns or combinations of columns in a table that uniquely identify each row.
- Resource Locking: It is used in those cases where two or more simultaneous operations on an object are not allowed. Bad design could cause deadlocks.
- Code signing: It certifies the authenticity of the software being installed. It also protects the brand and intellectual property.
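The hashing technique, and the published-checksum requirement mentioned earlier, can be sketched as follows; the artifact bytes are placeholders for a real release file:

```python
import hashlib

def sha256_digest(data: bytes) -> str:
    """Condense arbitrary-length input into a fixed-size hex digest."""
    return hashlib.sha256(data).hexdigest()

release = b"release artifact bytes"
published_checksum = sha256_digest(release)  # shipped alongside the download

# The end user recomputes the digest; any alteration changes it completely.
downloaded = b"release artifact bytes"
print(sha256_digest(downloaded) == published_checksum)  # → True

tampered = b"release artifact bytez"
print(sha256_digest(tampered) == published_checksum)    # → False
```

A single changed byte yields a completely different digest, which is what makes the checksum useful for verifying accuracy, completeness and consistency.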
Availability is the concept of security linked to access to the software or information it manages. While the overall purpose of a Business Continuity Plan may be to ensure that downtime and its impact on the business is minimized, the concept of Availability also has to do with security. The software or data it manages must be accessible only to those authorized and only when necessary, thus maintaining the concept of Confidentiality.
The requirements for Availability may seem closer to Business Continuity and Disaster Recovery issues than to security, but it is important to underline that incorrect software design can lead to data loss and system instability caused by Denial of Service.
What we define in this phase will then be subject to stress tests and performance tests, or in general the Business Impact Analysis.
We must therefore evaluate two fundamental times:
- Maximum Tolerable Downtime: The maximum acceptable amount of time during which a system can be in a state that prevents the normal delivery of its services without impacting the business.
- Recovery Time Objective: The amount of time required to restore the software to normal operation.
Some examples are:
- The software must ensure high availability of five nines (99.999%) as defined in the Service Level Agreement (SLA).
- The software should support, for example, 300 concurrent users at any given time.
- Software and data should be replicated between data centers to provide load balancing and redundancy.
- Mission critical functionality in the software should be restored to normal operation within 1 hour of the outage, mission essential functionality should be restored within 4 hours of the outage, and mission support capabilities should be restored within 24 hours of the outage.
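As a worked example, the five-nines SLA above translates into a concrete yearly downtime budget, which is useful when negotiating the Maximum Tolerable Downtime and Recovery Time Objective:

```python
def yearly_downtime_minutes(availability: float) -> float:
    """Downtime budget implied by an availability target over one (non-leap) year."""
    minutes_per_year = 365 * 24 * 60  # 525,600 minutes
    return minutes_per_year * (1 - availability)

print(round(yearly_downtime_minutes(0.99999), 2))  # five nines ≈ 5.26 minutes/year
print(round(yearly_downtime_minutes(0.999), 1))    # three nines ≈ 525.6 minutes/year
```

Five nines leaves barely five minutes of outage per year, which is why such an SLA almost always implies replication and automatic failover rather than manual recovery.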
Furthermore, the requirements for Availability should always be part of the SLAs (Service Level Agreements); if they are not, the software is potentially susceptible to DoS attacks, with a high probability that it has never been tested for that purpose.
The output of the Business Impact Analysis can be used to determine how to design the software for its availability. Although no code has been written at design time, you can look at configuration requirements such as connection pooling, the use of cursors, and loop constructs.
Among the approaches that can be used are:
- Replication: Mainly used to keep the Maximum Tolerable Downtime and the Recovery Time Objective at acceptable levels. Having redundant data and systems allows you to eliminate single points of failure; it is also used to reduce the workload. It must be carefully designed to maintain data integrity.
- Failover and Failback: It refers to the automatic transition from active transactional software, server, system, hardware or network component to a standby (or redundant) system. It differs from switchover which is a manual process.
- Scalability: It is the ability of the system or software to handle an increasing amount of work without reducing its functionality or performance. The two primary design methods for scaling are vertical scaling and horizontal scaling.
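The failover approach above can be sketched as a caller that automatically switches to a standby when the primary fails; `fetch_primary` and `fetch_standby` are hypothetical stand-ins for real replicated backends:

```python
def failover_call(primary, standby):
    """Try the active system first; fail over to the redundant one on error."""
    try:
        return primary()
    except ConnectionError:
        return standby()  # automatic transition, as opposed to a manual switchover

def fetch_primary():
    # Simulates the active node being unreachable.
    raise ConnectionError("primary datacenter unreachable")

def fetch_standby():
    return "served by standby"

print(failover_call(fetch_primary, fetch_standby))  # → served by standby
```

A real design must also handle failback and ensure the standby's data is consistent with the primary's, which is where replication and integrity requirements meet.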
Authentication is a security concept that answers the question "Are you who you claim to be?". Not only does it ensure that the identity of an entity (person or asset) is specified in the format expected by the software, it also validates or verifies the identity information that has been provided.
The Authentication requirements concern the methodologies that must be used to allow users to be recognized by the system. They are essential for managing access to resources and data.
The evaluation of these requirements should also take usability into account: excessive security measures in non-critical environments could lead to customer losses.
Some examples are:
- The software will only be reachable in the Intranet environment and authenticated users should not need to provide a username and password once they are logged into the network.
- The software will need to support Single Sign-On (SSO) with authorized third party vendors.
- Both intranet users and internet users should be able to access the software.
- The authentication policy enforces two-factor or stronger authentication for all financial processing software.
There are several ways in which Authentication can be implemented in the software. Each has its pros and cons when it comes to security. In any case, it is important to consider multi-factor authentication and single sign-on, as well as determining the type of authentication required as specified in the requirements documentation.
Let's see the most common implementations:
- Anonymous: Anonymous authentication is the means of accessing public areas of the system without requiring credentials. While this may be required from a privacy standpoint, the security implications are severe as anonymous authentication cannot connect a user or system to the actions they take.
- Basic: One of the specifications of the HTTP 1.0 protocol. The credentials are transmitted in Base64-encoded form. It should be avoided, as the credentials can be easily decoded.
- Digest: Unlike Basic, the credentials themselves are not sent; instead, a message digest of the original credential is transmitted, computed with the MD5 hashing function and a nonce.
- Integrated: Mainly used in Intranet applications or for network services such as Active Directory, through the use of Microsoft's NT LAN Manager suite or Kerberos, a network protocol for authentication via cryptography that uses token management.
- Client Certificates: These certificates are issued by a Certification Authority (CA) which guarantees the validity of the holder. They are then used to validate and process requests for access to resources.
- Forms: Common in web applications, this authentication generally requires the user to enter a username and password. The data is sent in clear text, so it is necessary to implement the security of the communication channel, for example by using the Transport Layer Security.
- Token: Usually used in conjunction with Forms authentication, it involves the creation of a token after verifying the username and password. This token will then be used for every request, so no username and password are sent every time. It is particularly useful in Single Sign-On.
- Smart Cards: This is authentication based on ownership, that is, something you have. The embedded microchip contains the information needed to authenticate. The advantage is that an attacker would have to physically steal the smart card in order to exploit it (for example with the Proxmark 3 RDV4).
- Biometrics: Biological features are used, that is, something you are, such as blood vessel patterns in the retina, facial features and fingerprints. The disadvantage of this authentication is that biological traits can potentially change over time, and it is therefore necessary to periodically update this data. It is usually used for physical access and not for the remote use of resources.
- Multifactor: The implementation of multiple factors considerably increases security. It combines something you know (password or PIN), something you have (smart card, bank token) and something you are (fingerprint, retina).
- Single Sign-On: It allows users to log into a system and, after being authenticated, launch other applications without having to provide their identification information again. Common technologies used are Kerberos and the Security Assertion Markup Language (SAML).
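To see why the Basic scheme listed above must never travel over an unprotected channel: Base64 is an encoding, not encryption, and anyone who observes the `Authorization` header recovers the credentials in one call. A short demonstration (the username and password are made up):

```python
import base64

# What the client sends in the Authorization header under Basic auth.
header = "Basic " + base64.b64encode(b"alice:p4ssw0rd").decode()
print(header)

# What any eavesdropper can do with it: decode, no key required.
encoded = header.split(" ", 1)[1]
username, password = base64.b64decode(encoded).decode().split(":", 1)
print(username, password)  # → alice p4ssw0rd
```

This is why Basic authentication is only acceptable, if at all, inside a TLS-protected channel.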
Authorization is the concept of security by which access to objects is controlled on the basis of rights and privileges that are granted to the requester. The Authorization process must take place after the Authentication process and never before.
The Authorization requirements define the rights and privileges of the entities, authenticated or not, to access and perform operations on the resources made available by the system.
Some examples are:
- Access to highly sensitive files will be restricted to users with authorization levels called "secret" or "top secret".
- Unauthenticated users will not be able to send messages or communications of any kind to other users.
- Users with the "Administrator", "Auditor" and "Team Leader" roles will be able to access the application logs view.
- All unauthenticated users will inherit the read-only permissions of the "Guest" user role, while authenticated users will by default have read and write permissions as part of the "General" user role. Only members of the "Administrator" role will have the full rights of a general user as well as permissions for administrative operations.
When planning to integrate the Authorization, pay particular attention to its impact on performance and the principles of Separation of Duties and Least Privilege. Checking access rights each time, to enforce the Complete Mediation principle, can lead to degraded performance and reduced user experience.
There are some well-known access control models, including:
- Discretionary Access Control (DAC): Access is guaranteed on the basis of privileges and rights defined by the owner of the resource.
- Mandatory Access Control (MAC): Access is managed through special labels containing a classification and a category that are assigned to both the object and the subject. Used in government.
- Role Based Access Control (RBAC): Access is guaranteed based on the roles assigned to users. In this case the subject is not directly assigned to the object. It is the most common model in the application field but also one of the most difficult to design, considering the principles of Least Privilege and Separation of Duties.
We must be careful not to confuse roles and groups. Roles are defined by job function and are simply collections of permissions, while groups are collections of users; permissions can be applied to both groups and individual users.
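The RBAC model described above can be sketched in a few lines: users are assigned roles, roles carry permissions, and the access check never binds a subject to an object directly. Role and permission names here are illustrative:

```python
# Role → permissions: the only place where access rights are defined.
ROLE_PERMISSIONS = {
    "Administrator": {"logs:read", "users:write", "orders:write"},
    "Auditor": {"logs:read"},
    "General": {"orders:read", "orders:write"},
}

# User → roles: subjects get rights only through role membership.
USER_ROLES = {
    "alice": {"Auditor"},
    "bob": {"General"},
}

def is_authorized(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles carries the permission."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(is_authorized("alice", "logs:read"))  # → True
print(is_authorized("bob", "logs:read"))    # → False (Least Privilege)
```

Keeping permissions only on roles is what makes Least Privilege and Separation of Duties auditable: changing a user's rights means changing role membership, not scattering grants across the codebase.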
Accountability is the concept of security by which business-critical state transactions are recorded through logs and audit processes. The logs can be used to reconstruct the history of events that have occurred, which can be used for troubleshooting or as evidence for forensic analysis. It also guarantees non-repudiation if used in conjunction with Authentication.
Auditing requirements are those that help reconstruct events caused by user actions and/or software errors and exceptions.
All critical business transactions and administrative features should be identified and recorded, including items that correspond to the WHO, WHAT, WHERE and WHEN paradigm.
For example, PCI DSS requirement 10.2 indicates that we must register:
- All access to sensitive and/or confidential data.
- All actions performed by users with administrative or root privileges.
- All access to the audit logs themselves.
- Invalid logical access attempts.
- The use and changes of identification and authentication mechanisms.
- The initialization of the audit log.
- The creation and destruction of objects at the operating system level.
Although often overlooked, design for Accountability has proven to be extremely important in the event of a breach, primarily for forensic purposes, and therefore should be considered in software design right from the start.
Let's see some good practices:
- Recorded software operations data should follow the WHO, WHAT, WHERE and WHEN paradigm. As part of WHO, it is important not to forget non-human actors such as batch processes and services, daemons or logins from other external services.
- Because logs should be appended to rather than overwritten, storage constraints and capacity requirements are important design considerations.
- Design decisions to retain, archive and dispose of records should not contradict external regulatory or internal retention requirements.
- Sensitive data must never be recorded in plain text form.
- The protection mechanisms of the logs themselves must be taken into account.
- Joint control with other security controls such as authentication can provide non-repudiation.
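The WHO/WHAT/WHERE/WHEN paradigm from the practices above can be sketched as a structured, append-friendly log entry; the field names and actor identifiers are illustrative:

```python
import json
from datetime import datetime, timezone

def audit_record(who: str, what: str, where: str) -> str:
    """Build one WHO/WHAT/WHERE/WHEN audit entry as a single JSON line."""
    entry = {
        "who": who,      # user, or a non-human actor such as a batch process
        "what": what,    # the action performed (never sensitive data in clear)
        "where": where,  # host or module where the event occurred
        "when": datetime.now(timezone.utc).isoformat(),
    }
    return json.dumps(entry)

# One line per event, appended (not overwritten) to the log store.
print(audit_record("batch-billing", "read:customer-file", "app-server-01"))
```

One self-contained line per event keeps the log easy to append, ship and parse, and the UTC timestamp avoids ambiguity when reconstructing events across systems during forensic analysis.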
In The End
There are many other topics still to be covered and explored, but for those there are several guides and courses that can easily be found on the internet.
What I wanted to convey is that requirements, whether security-related or generic, are fundamental for an accurate design.
There is no doubt that it is very difficult to collect requirements from the various players involved while staying within the time and cost constraints imposed by the business, but the time saved by not having to rework incorrect implementations is worth it.
Not to mention that an incorrect design (and implementation) could lead to blocking problems that would impact the whole business. And this is the argument to use with managers: where we think about security, they think about money.