Information technology general controls (Wikipedia Lab Guide)

Information Technology General Controls (ITGC) - A Deep Dive Study Guide
1) Introduction and Scope
Information Technology General Controls (ITGC), also known as General Computer Controls (GCC), represent the fundamental, overarching security and operational mechanisms that govern an organization's entire Information Technology (IT) ecosystem. These are distinct from application-specific controls, which focus on the integrity and security of individual software programs. ITGCs establish a broad, systemic framework designed to ensure the secure, reliable, and compliant operation of all IT systems, infrastructure, applications, and data. Their scope encompasses the complete lifecycle of IT assets, from initial conception, development, and deployment through ongoing operations, maintenance, and eventual decommissioning.
The primary technical and business objectives of ITGCs are to:
- Ensure Application Integrity and Security: Guarantee that applications are developed, implemented, and maintained using secure coding practices, robust configuration management, and rigorous testing, thereby preventing the introduction of exploitable vulnerabilities or data corruption. This involves understanding and mitigating common software weaknesses such as buffer overflows, race conditions, and injection vulnerabilities.
- Maintain Data Integrity and Confidentiality: Protect the accuracy, completeness, consistency, and confidentiality of data throughout its lifecycle (creation, storage, processing, transmission, destruction). This requires robust access controls, encryption, and data loss prevention mechanisms, often leveraging cryptographic primitives and secure protocols.
- Secure Infrastructure: Safeguard the underlying hardware, network devices, operating systems, middleware, and cloud environments that form the foundation for applications and data. This involves hardening systems by reducing the attack surface and implementing network segmentation to isolate critical assets.
- Promote Operational Reliability and Availability: Ensure consistent, predictable, and resilient system performance and availability through effective monitoring, change management, incident response, and disaster recovery planning. This often involves implementing redundant systems and robust fault tolerance mechanisms.
- Facilitate Regulatory and Compliance Adherence: Establish and maintain a robust control environment that meets stringent regulatory mandates (e.g., Sarbanes-Oxley Act (SOX), General Data Protection Regulation (GDPR), Health Insurance Portability and Accountability Act (HIPAA)) and industry best practices. This requires meticulous documentation and auditable processes.
This study guide provides a deep technical exploration of common ITGC categories, offering practical insights, detailed architectural explanations, and defensive engineering considerations for cybersecurity professionals, system administrators, auditors, developers, and compliance officers.
2) Deep Technical Foundations
The efficacy of ITGCs is rooted in fundamental principles of information security and robust systems engineering. A thorough understanding of these principles is indispensable for their effective implementation, auditing, and maintenance.
2.1) Defense-in-Depth Principle
ITGCs are a practical embodiment of the defense-in-depth strategy. This layered security approach posits that a compromise in one security control layer should not lead to a complete system breach. Instead, multiple, independent layers of defense should exist, each designed to mitigate different types of threats. This involves:
- Physical Security: Protecting the physical IT infrastructure (e.g., data centers, server rooms, network closets) from unauthorized physical access, environmental hazards (fire, flood, power loss), and theft. This includes access control mechanisms (biometrics, keycards), surveillance (CCTV), and environmental monitoring (temperature, humidity, smoke detection).
- Network Security: Implementing controls at the network perimeter and within internal network segments to manage traffic flow, segment networks, and prevent unauthorized access. This encompasses firewalls (stateful packet inspection, application-layer gateways), Intrusion Detection/Prevention Systems (IDS/IPS) utilizing signature-based and anomaly-based detection, Virtual Private Networks (VPNs) for secure remote access, and network segmentation (VLANs, subnets, micro-segmentation) to limit lateral movement.
- Host-Based Security: Securing individual servers, workstations, and endpoints through measures like host-based firewalls, Endpoint Detection and Response (EDR) solutions, Host-based Intrusion Detection Systems (HIDS), secure operating system configurations (hardening) such as disabling unnecessary services and applying security patches promptly, and mandatory access control (MAC) frameworks like SELinux or AppArmor.
- Application Security: Ensuring secure coding practices, input validation, output encoding, robust authentication and authorization mechanisms, and secure configuration of applications themselves. This includes understanding and mitigating vulnerabilities outlined by the OWASP Top 10 (e.g., Injection, Broken Authentication, Sensitive Data Exposure).
- Data Security: Protecting data at rest (e.g., full-disk encryption using AES-256, database Transparent Data Encryption - TDE), in transit (e.g., TLS 1.2/1.3 for web traffic, IPsec VPNs for network tunnels), and in use (e.g., secure processing environments, Data Loss Prevention (DLP) systems that monitor data exfiltration patterns).
- Administrative and Procedural Controls: Establishing clear policies, procedures, standards, and guidelines for IT operations, personnel management (background checks, security awareness training), and security incident response. This includes formal processes for access reviews, incident handling, and disaster recovery.
2.2) Principle of Least Privilege
A cornerstone of logical access controls, the principle of least privilege mandates that users, processes, and systems are granted only the minimum set of permissions and access rights necessary to perform their intended, legitimate functions. This principle is critical for:
- Minimizing the Attack Surface: Reducing the number of potential entry points and privileges an attacker could exploit. If a process only has read access to a file, it cannot modify or delete it, even if compromised.
- Limiting Blast Radius: Containing the damage caused by a compromised credential, insider threat, or malicious process. A compromised user account with limited privileges will cause less harm than one with administrative rights.
- Preventing Accidental Misuse: Reducing the likelihood of authorized users unintentionally performing unauthorized actions that could lead to data corruption or system instability.
Example (Conceptual - Linux Process Permissions and SELinux):
A web server process (e.g., nginx or apache) running as user www-data should ideally have:
- Read-only access (r-x) to web content directories (/var/www/html).
- Write access (rw-) only to its own log directories (/var/log/nginx).
- No direct write access to system configuration files (/etc/nginx/nginx.conf) or sensitive user data databases.
This is enforced through file system permissions (e.g., chmod, chown) and potentially more granular controls like SELinux or AppArmor, which define security contexts for processes and files, restricting interactions even if standard Unix permissions would allow them.
```shell
# Example of setting restrictive file permissions
sudo chown -R www-data:www-data /var/www/html
sudo chmod -R 755 /var/www/html   # Owner: rwx, Group: r-x, Others: r-x
sudo chown -R www-data:www-data /var/log/nginx
sudo chmod -R 750 /var/log/nginx  # Owner: rwx, Group: r-x, Others: ---

# Example of SELinux context (conceptual)
# An SELinux policy might define that processes running in the 'httpd_t' domain
# can only read files labeled 'httpd_sys_content_t' and write to files labeled 'httpd_log_t'.
# This prevents a compromised web server process from reading /etc/shadow, for instance.
```
2.3) Separation of Duties (SoD)
To prevent fraud, errors, and unauthorized actions, conflicting responsibilities within critical IT processes should be assigned to different individuals or distinct roles. This ensures that no single individual has the power to complete a sensitive transaction or process from initiation to finalization without oversight.
Example (Technical Context - Database Administration and Change Management):
- Process: Creating a new production database user with elevated privileges.
- Conflicting Duties:
- Initiation/Creation: A junior DBA creates the user account and assigns initial, broad permissions.
- Approval: A senior DBA or Security Officer reviews the user's necessity and the assigned privileges. This review should be documented, potentially within a ticketing system.
- Implementation/Granting: The senior DBA (or a separate automated process triggered by approval) applies the approved and modified set of specific, least-privileged permissions. The junior DBA should not have the ability to grant these final permissions.
- Verification: An automated monitoring system (e.g., SIEM rule) or a periodic manual audit reviews user accounts and their permissions for compliance with policy.
Implementing SoD often requires careful role design in Identity and Access Management (IAM) systems and robust workflow approvals in change management and privileged access management (PAM) tools. For instance, a PAM system might require two distinct approvals (e.g., from a manager and a security officer) before granting temporary elevated privileges to a system administrator.
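The dual-approval pattern described above can be sketched in a few lines of Python. This is an illustrative model only; the `PrivilegeRequest` class, the role names `"manager"` and `"security_officer"`, and the two-approver rule are assumptions, not the API of any real PAM product.

```python
from dataclasses import dataclass, field

@dataclass
class PrivilegeRequest:
    requester: str
    privilege: str
    approvals: dict = field(default_factory=dict)  # approver name -> role

    def approve(self, approver: str, role: str) -> None:
        # SoD: the person asking for privileges may never approve the request.
        if approver == self.requester:
            raise ValueError("Requesters may not approve their own request (SoD)")
        self.approvals[approver] = role

    def is_granted(self) -> bool:
        # Require at least two distinct approvers covering both mandatory roles.
        roles = set(self.approvals.values())
        return len(self.approvals) >= 2 and {"manager", "security_officer"} <= roles

req = PrivilegeRequest("alice", "temporary sudo on prod-db-01")
req.approve("bob", "manager")
print(req.is_granted())   # False: a security officer has not yet approved
req.approve("carol", "security_officer")
print(req.is_granted())   # True: both required roles have signed off
```

A production workflow would additionally log every approval (see Section 2.4) and time-box the granted privilege.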
2.4) Auditability and Logging
Effective ITGCs necessitate comprehensive, immutable, and securely stored logs of system events, access attempts, configuration modifications, and security-related incidents. These logs serve as an indispensable audit trail, enabling:
- Detection of Suspicious Activities: Identifying anomalous patterns that may indicate a compromise or policy violation (e.g., multiple failed login attempts followed by a successful one from an unusual IP address, large data transfers at odd hours, unexpected configuration changes). This often involves real-time log analysis by SIEM systems.
- Incident Investigation: Reconstructing the sequence of events leading up to and during a security incident. This involves correlating logs from various sources (OS, applications, network devices, firewalls, IDS/IPS).
- Verification of Control Effectiveness: Confirming that controls are functioning as intended and that policies are being adhered to (e.g., verifying that only authorized personnel can access sensitive data logs, checking that failed login attempts are logged and alerted upon).
- Forensic Analysis: Providing evidence for legal proceedings or internal investigations. Logs must be collected with sufficient detail and maintained in a tamper-evident manner.
Logs must be protected from tampering (e.g., by writing to read-only media, using cryptographic signing of log entries, or forwarding to an immutable SIEM system), retained for appropriate periods (often dictated by compliance requirements like GDPR's data retention policies or SOX's audit trail mandates), and aggregated into a centralized Security Information and Event Management (SIEM) system for correlation, alerting, and analysis.
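One way to make a log tamper-evident, as mentioned above, is to chain entries cryptographically so that each record's hash covers its predecessor. The sketch below is a minimal illustration of that idea using SHA-256; the record layout is an assumption, not a standard format.

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first entry

def append_entry(chain: list, event: dict) -> None:
    """Append an event whose hash covers the previous entry's hash."""
    prev_hash = chain[-1]["hash"] if chain else GENESIS
    payload = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
    chain.append({"event": event, "prev": prev_hash, "hash": digest})

def verify_chain(chain: list) -> bool:
    """Recompute every hash; any modified or reordered entry breaks the chain."""
    prev_hash = GENESIS
    for entry in chain:
        payload = json.dumps(entry["event"], sort_keys=True)
        expected = hashlib.sha256((prev_hash + payload).encode()).hexdigest()
        if entry["prev"] != prev_hash or entry["hash"] != expected:
            return False
        prev_hash = entry["hash"]
    return True

log: list = []
append_entry(log, {"user": "alice", "action": "login", "result": "success"})
append_entry(log, {"user": "alice", "action": "read", "file": "report.pdf"})
print(verify_chain(log))   # True

log[0]["event"]["result"] = "failure"   # simulate after-the-fact tampering
print(verify_chain(log))   # False: the chain no longer validates
```

Real deployments combine this with write-once storage or log forwarding so an attacker cannot simply rebuild the whole chain.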
Log Data Example (Syslog - Linux Authentication Failure):
```
Mar 30 10:00:05 webserver sshd[12345]: Failed password for invalid user testuser from 192.168.1.100 port 54321 ssh2
Mar 30 10:00:06 webserver sshd[12345]: Failed password for invalid user testuser from 192.168.1.100 port 54321 ssh2
Mar 30 10:00:07 webserver sshd[12345]: Failed password for invalid user testuser from 192.168.1.100 port 54321 ssh2
Mar 30 10:00:08 webserver sshd[12345]: Connection closed by 192.168.1.100 port 54321 [preauth]
```
A SIEM rule could detect this pattern (multiple failed logins from the same IP within a short timeframe) and trigger an alert for a potential brute-force attack.
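The correlation logic such a SIEM rule applies can be sketched in Python: parse the sshd log lines, count failures per source IP, and flag IPs over a threshold. The regex and threshold are illustrative assumptions; a real rule would also window the counts by time.

```python
import re
from collections import Counter

# Matches the sshd "Failed password" lines shown above and captures the source IP.
FAILED_RE = re.compile(r"Failed password for (?:invalid user )?(\S+) from (\S+)")

def detect_bruteforce(lines, threshold: int = 3) -> set:
    """Return the set of source IPs with at least `threshold` failed logins."""
    failures = Counter()
    for line in lines:
        m = FAILED_RE.search(line)
        if m:
            failures[m.group(2)] += 1   # group(2) is the source IP
    return {ip for ip, count in failures.items() if count >= threshold}

log_lines = [
    "Mar 30 10:00:05 webserver sshd[12345]: Failed password for invalid user testuser from 192.168.1.100 port 54321 ssh2",
    "Mar 30 10:00:06 webserver sshd[12345]: Failed password for invalid user testuser from 192.168.1.100 port 54321 ssh2",
    "Mar 30 10:00:07 webserver sshd[12345]: Failed password for invalid user testuser from 192.168.1.100 port 54321 ssh2",
]
print(detect_bruteforce(log_lines))  # {'192.168.1.100'}
```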
3) Internal Mechanics / Architecture Details
Let's delve into the technical architecture and mechanics underpinning key ITGC categories.
3.1) Logical Access Controls
This category focuses on verifying user or system identity and authorizing their access to IT resources based on established policies.
3.1.1) Authentication: Verifying Identity
The process of confirming that a user or system is who they claim to be.
- Password-Based Authentication:
- Hashing Algorithms: Secure password storage relies on strong, computationally intensive, and cryptographically secure hashing algorithms. These algorithms are designed to be one-way functions, making it computationally infeasible to derive the original password from the hash. Modern recommendations include Argon2 (the winner of the Password Hashing Competition), scrypt, and bcrypt, all of which incorporate a tunable work factor (computational cost) and salting; Argon2 and scrypt additionally add memory hardness. Older algorithms like MD5 and SHA-1 are unsuitable for password hashing: they are fast to compute, which makes brute-force and rainbow-table attacks practical, and both have known collision vulnerabilities.
- Salt: A unique, random value generated for each password before hashing. The salt is stored alongside the hash (it need not be secret). This prevents attackers from using pre-computed rainbow tables for common passwords, as the same password with different salts will produce different hashes.
- Formula: StoredHash = HASH(Password + Salt)
- Example (Python - using bcrypt):

```python
import bcrypt

def hash_password(password: str) -> bytes:
    """Hashes a password using bcrypt, automatically generating and including a salt."""
    # bcrypt.gensalt() generates a salt and includes it in the returned hash string.
    return bcrypt.hashpw(password.encode('utf-8'), bcrypt.gensalt())

def verify_password(stored_hash: bytes, provided_password: str) -> bool:
    """Verifies a provided password against a stored bcrypt hash."""
    # bcrypt.checkpw extracts the salt from the stored_hash and compares hashes.
    return bcrypt.checkpw(provided_password.encode('utf-8'), stored_hash)

# --- Usage ---
my_password = "SuperSecretP@ssword1!"
stored_hash = hash_password(my_password)
print(f"Stored Hash (includes salt): {stored_hash.decode('utf-8')}")
# Example: $2b$12$abcdefghijklmnopqrstuv.XYZABCDEFGH...

# Simulate login attempt
if verify_password(stored_hash, "SuperSecretP@ssword1!"):
    print("Authentication successful!")
else:
    print("Authentication failed.")

# Example of a failed attempt
if verify_password(stored_hash, "WrongPassword"):
    print("This should not print.")
else:
    print("Authentication failed (as expected).")
```
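The effect of salting can also be demonstrated with the standard library alone, using PBKDF2-HMAC-SHA256 (a widely accepted alternative when bcrypt is unavailable). This is a sketch; the iteration count below is kept low for illustration and should be raised to current OWASP-recommended values in production.

```python
import hashlib
import hmac
import os

ITERATIONS = 100_000  # illustrative; production counts should follow OWASP guidance

def hash_password(password: str, salt: bytes = None) -> tuple:
    """Return (salt, digest) using PBKDF2-HMAC-SHA256 with a random per-password salt."""
    salt = salt if salt is not None else os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    return salt, digest

def verify_password(password: str, salt: bytes, stored: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, ITERATIONS)
    # compare_digest avoids timing side channels during comparison.
    return hmac.compare_digest(candidate, stored)

salt1, h1 = hash_password("SuperSecretP@ssword1!")
salt2, h2 = hash_password("SuperSecretP@ssword1!")
print(h1 != h2)  # True: same password, different salts, different hashes
print(verify_password("SuperSecretP@ssword1!", salt1, h1))  # True
print(verify_password("WrongPassword", salt1, h1))          # False
```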
- Multi-Factor Authentication (MFA): Enhances security by requiring verification from at least two distinct categories of credentials.
- Something You Know: Passwords, PINs, security questions (though less secure and prone to social engineering).
- Something You Have: Physical tokens (e.g., YubiKey, RSA SecurID hardware tokens), mobile authenticator apps (e.g., Google Authenticator, Authy generating Time-based One-Time Passwords - TOTP), software tokens, smart cards.
- Something You Are: Biometric data (fingerprint, facial recognition, iris scan).
- Protocol Snippet (TOTP - RFC 6238 Conceptual Flow):

```
# Server-side:
# 1. User registers a TOTP authenticator app.
# 2. Server generates a shared secret key (e.g., 160-bit random value, Base32 encoded).
# 3. Server securely stores this secret key associated with the user (e.g., in a database).
# 4. Server initiates a challenge (e.g., requests OTP from user during login).
# 5. Server gets the current Unix time (t).
# 6. Server calculates the time step: T = floor((t - T0) / X), where T0 is the Unix epoch start (often 0) and X is the step interval (e.g., 30s).
# 7. Server computes HMAC-SHA1(secret_key, T) using the stored secret and the current time step.
# 8. Server performs Dynamic Truncation on the HMAC output to get a 6-8 digit code (based on specific bits of the HMAC output).
# 9. Server compares the user-provided OTP with its computed OTP. A small window (e.g., +/- 1 time step) is often allowed for clock drift.

# Client-side (Authenticator App):
# 1. User inputs the shared secret key (via QR code scan or manual entry).
# 2. App gets the current Unix time (t).
# 3. App calculates the time step: T = floor((t - T0) / X).
# 4. App uses the stored secret key and T to compute HMAC-SHA1, then truncates to get the OTP.
# 5. App displays the 6-8 digit code to the user, which updates every X seconds.
```
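The flow above can be made concrete with a short, runnable implementation using only the standard library. It follows RFC 6238 defaults (HMAC-SHA-1, 30-second steps, 6 digits); the secret shown is the RFC's published test key, not a real credential.

```python
import base64
import hmac
import struct
import time

def totp(secret_b32: str, t: int = None, step: int = 30, digits: int = 6) -> str:
    """Compute a TOTP code per RFC 6238 (HMAC-SHA-1, dynamic truncation)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((time.time() if t is None else t) // step)  # time step T
    msg = struct.pack(">Q", counter)                          # 8-byte big-endian counter
    mac = hmac.new(key, msg, "sha1").digest()
    offset = mac[-1] & 0x0F                                   # dynamic truncation offset
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test secret "12345678901234567890" (Base32-encoded below).
# At t=59 the time step is floor(59/30) = 1; the 6-digit code is "287082"
# (the RFC's 8-digit vector 94287082, truncated to 6 digits).
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59))  # 287082
```

Note how the server-side verification window (+/- 1 step) would simply compare the user's code against `totp(secret, t - 30)`, `totp(secret, t)`, and `totp(secret, t + 30)`.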
- Certificate-Based Authentication: Utilizes Public Key Infrastructure (PKI) where users or devices are issued digital certificates (typically X.509) by a trusted Certificate Authority (CA). These certificates contain the entity's public key and identity information, signed by the CA's private key. Authentication involves verifying the certificate's validity (not expired, not revoked via CRL or OCSP), its chain of trust up to a trusted root CA, and matching the presented certificate's private key to the public key within the certificate.
- Protocol Snippet (TLS Handshake - Mutual TLS Authentication - mTLS):

```
Client: Client Hello (cipher suites, extensions)
Server: Server Hello (selected cipher suite, session ID)
Server: Certificate (Server's X.509 certificate)
Server: Server Key Exchange (optional, e.g., for Diffie-Hellman parameters)
Server: Certificate Request (Server requests client's certificate)
Server: Server Hello Done
Client: --- Verifies Server Certificate ---
Client: Certificate (Client's X.509 certificate)
Client: Client Key Exchange (e.g., client's public key or DH shared secret)
Client: Certificate Verify (Client signs a message with its private key to prove possession)
Client: Change Cipher Spec (Indicates client will now use encrypted communication)
Client: Finished (Encrypted and MAC'd hash of previous handshake messages)
Server: --- Verifies Client Certificate (chain, revocation, identity) ---
Server: --- Verifies Certificate Verify signature using client cert's public key ---
Server: Change Cipher Spec
Server: Finished
# If all checks pass, client and server are mutually authenticated and session keys are established.
```
- Single Sign-On (SSO): A mechanism that allows a user to authenticate once with an Identity Provider (IdP) and gain access to multiple independent Service Providers (SPs) without re-authenticating for each.
- Protocols:
- SAML (Security Assertion Markup Language): An XML-based standard for exchanging authentication and authorization data between parties. Commonly used in enterprise SSO.
- OAuth 2.0: An authorization framework that allows users to grant third-party applications limited access to their data on other services. While primarily for authorization, it's often used in conjunction with OpenID Connect.
- OpenID Connect (OIDC): An identity layer built on top of OAuth 2.0, providing authentication and basic profile information about the user.
- SAML Flow (Simplified Browser-Based SSO):
  1. User Access Request: User attempts to access an SP (e.g., https://app.example.com).
  2. SP Redirect to IdP: The SP, not recognizing the user, redirects the user's browser to the IdP (e.g., https://idp.example.com/sso/saml/login) with a SAML Authentication Request (an XML document typically signed by the SP).
  3. IdP Authentication: The user authenticates with the IdP (e.g., enters username/password, completes MFA).
  4. IdP Generates Assertion: Upon successful authentication, the IdP generates a SAML Assertion, a signed XML document containing user identity attributes (e.g., NameID, emailAddress, roles) and proof of authentication. The assertion is signed with the IdP's private key.
  5. IdP Redirects to SP with Assertion: The IdP redirects the user's browser back to the SP, embedding the SAML Assertion (often via HTTP POST binding).
  6. SP Validation and Access Grant: The SP receives the assertion, verifies its digital signature using the IdP's public key (obtained from the IdP's metadata), parses the attributes, and grants the user access.
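The SP-side validation step involves more than the signature check alone. The sketch below illustrates the surrounding checks (audience restriction, request correlation, expiry); it is a conceptual model, not a real SAML library. The XML-DSig signature verification is stubbed out as a boolean, since real deployments delegate it to a library such as python3-saml, and the field names are illustrative.

```python
from datetime import datetime, timezone

def validate_assertion(assertion: dict, expected_audience: str,
                       expected_request_id: str, now: datetime,
                       signature_ok: bool) -> bool:
    """Conceptual SP-side assertion checks; signature_ok stands in for XML-DSig."""
    if not signature_ok:
        return False  # signature must verify against the IdP's public key
    if assertion["audience"] != expected_audience:
        return False  # assertion was issued for a different SP
    if assertion["in_response_to"] != expected_request_id:
        return False  # not a reply to the AuthnRequest this SP sent
    if now >= assertion["not_on_or_after"]:
        return False  # expired; limits replay of captured assertions
    return True

assertion = {
    "audience": "https://app.example.com",
    "in_response_to": "id-12345",
    "not_on_or_after": datetime(2026, 1, 1, tzinfo=timezone.utc),
}
now = datetime(2025, 6, 1, tzinfo=timezone.utc)
print(validate_assertion(assertion, "https://app.example.com", "id-12345", now, True))   # True
print(validate_assertion(assertion, "https://other.example.com", "id-12345", now, True)) # False
```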
3.1.2) Authorization: Granting Permissions
The process of determining what an authenticated user or system is permitted to do with specific resources.
- Role-Based Access Control (RBAC): Permissions are grouped into roles, and users are assigned to one or more roles. This simplifies permission management by abstracting permissions from individual users.
- Technical Implementation (Database Example - PostgreSQL):

```sql
-- Create roles representing user groups or job functions
CREATE ROLE report_viewer;
CREATE ROLE data_analyst;
CREATE ROLE system_administrator;

-- Grant specific privileges to roles on database objects
-- Grant SELECT on the 'sales_data' table to the 'report_viewer' role.
GRANT SELECT ON TABLE public.sales_data TO report_viewer;

-- Grant SELECT and INSERT on 'customer_records' to 'data_analyst'.
GRANT SELECT, INSERT ON TABLE public.customer_records TO data_analyst;

-- Grant ALL privileges on the entire database to 'system_administrator' (use with extreme caution).
-- In practice, this should be broken down further.
GRANT ALL PRIVILEGES ON DATABASE my_production_db TO system_administrator;

-- Assign users to roles
CREATE USER alice WITH PASSWORD 'secure_password_alice';
GRANT report_viewer TO alice;
-- Alice can now perform actions allowed by the report_viewer role.

CREATE USER bob WITH PASSWORD 'secure_password_bob';
GRANT data_analyst TO bob;
GRANT report_viewer TO bob;
-- Bob can also view reports, inheriting permissions from both roles.
```
- Access Control Lists (ACLs): A list of permissions associated with an object (e.g., file, directory, network port). Each entry in the ACL specifies a trustee (user or group) and the operations they are allowed or denied.
- File System ACL (Linux getfacl and setfacl Example):

```shell
# Assume a file 'sensitive_data.txt' with default permissions -rw-r-----
ls -l sensitive_data.txt
# Output: -rw-r----- 1 owner group 1024 Mar 30 10:00 sensitive_data.txt

getfacl sensitive_data.txt
# Output:
# # file: sensitive_data.txt
# # owner: owner
# # group: group
# user::rw-
# group::r--
# other::---

# Grant write access to a specific user 'analyst_user'
# '-m' means modify; 'u:analyst_user:rw-' specifies the type (user), name, and permissions.
setfacl -m u:analyst_user:rw- sensitive_data.txt

getfacl sensitive_data.txt
# Output:
# # file: sensitive_data.txt
# # owner: owner
# # group: group
# user::rw-
# user:analyst_user:rw-   <-- Added entry for analyst_user with read/write permissions.
# group::r--
# mask::rw-               <-- The mask caps the effective permissions of named user and group entries.
# other::---
```
- Network ACLs (Firewall Rules): Firewall rules function as network ACLs, defining permitted or denied traffic based on source/destination IP, ports, and protocols.
- Attribute-Based Access Control (ABAC): A more dynamic and granular approach where access decisions are made based on policies that evaluate attributes of the user (e.g., role, department, security clearance), the resource being accessed (e.g., data classification, owner), the action being performed (e.g., read, write, delete), and the environment (e.g., time of day, location, device posture).
- Example Policy (Conceptual - using Open Policy Agent - OPA): This policy would be evaluated by an authorization engine (like OPA) which receives input data describing the request context.

```
package authz

# Define a rule for accessing a specific confidential report.
# This rule checks multiple attributes of the request, resource, user, and environment.
allow {
    input.request.action == "read"
    input.resource.name == "confidential_report.pdf"
    input.resource.classification == "Confidential"

    # Check user attributes: department and security clearance level.
    user_attributes := input.user.attributes
    user_attributes.department == "Finance"
    user_attributes.security_level >= "Confidential"  # User must have at least 'Confidential' clearance.

    # Check environment attributes: network segment.
    environment_attributes := input.environment.attributes
    environment_attributes.network_segment == "CorporateLAN"  # Access only allowed from the corporate network.
}

# Deny all other requests by default. This is a common security practice.
default allow = false
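The same decision logic can be mirrored in plain Python for illustration. The request shape and clearance ordering below are assumptions drawn from the conceptual policy; note that the Python version uses an explicit clearance ranking, because comparing level names as strings would give unreliable (lexicographic) results.

```python
# Explicit clearance ordering, lowest to highest (an assumption for this sketch).
CLEARANCE_ORDER = ["Public", "Internal", "Confidential", "Secret"]

def clearance_at_least(user_level: str, required: str) -> bool:
    return CLEARANCE_ORDER.index(user_level) >= CLEARANCE_ORDER.index(required)

def allow(request: dict) -> bool:
    """ABAC decision: every attribute condition must hold; deny by default."""
    return (
        request["action"] == "read"
        and request["resource"]["name"] == "confidential_report.pdf"
        and request["resource"]["classification"] == "Confidential"
        and request["user"]["department"] == "Finance"
        and clearance_at_least(request["user"]["security_level"], "Confidential")
        and request["environment"]["network_segment"] == "CorporateLAN"
    )

req = {
    "action": "read",
    "resource": {"name": "confidential_report.pdf", "classification": "Confidential"},
    "user": {"department": "Finance", "security_level": "Secret"},
    "environment": {"network_segment": "CorporateLAN"},
}
print(allow(req))  # True: all attribute conditions are satisfied
req["environment"]["network_segment"] = "GuestWiFi"
print(allow(req))  # False: wrong network segment, denied by default
```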
3.2) System Development Life Cycle (SDLC) Controls
These controls ensure that security is an integral part of the software development process, from requirements gathering to deployment and maintenance.
- Secure Coding Standards and Guidelines: Adherence to established best practices to prevent common vulnerabilities. Examples include:
- OWASP (Open Web Application Security Project) Secure Coding Practices: Providing guidelines for preventing vulnerabilities like SQL Injection, Cross-Site Scripting (XSS), Broken Authentication, Insecure Deserialization, etc.
- Input Validation: Crucial for preventing injection attacks. All external input (user input, API calls, file reads, network packets) must be validated against an expected format, type, length, and character set. This is often done using allow-lists (whitelisting) rather than deny-lists (blacklisting) for maximum security.
- Example (Python - Strict Validation with Pydantic):

```python
from pydantic import BaseModel, EmailStr, Field, validator, ValidationError

class UserRegistration(BaseModel):
    username: str = Field(..., min_length=3, max_length=50)
    email: EmailStr
    age: int = Field(..., gt=18, lt=120)  # User must be an adult

    # Custom validator to ensure the username contains only alphanumeric characters.
    @validator('username')
    def username_must_be_alphanumeric(cls, v):
        if not v.isalnum():
            raise ValueError('Username must contain only alphanumeric characters')
        return v

def process_user_registration(data: dict) -> bool:
    try:
        user_data = UserRegistration(**data)
        # Proceed with database insertion using validated data.
        print(f"Registering user: {user_data.username}, Email: {user_data.email}, Age: {user_data.age}")
        return True
    except ValidationError as e:
        print(f"Validation Error: {e}")
        return False

# --- Usage ---
valid_data = {"username": "testuser123", "email": "test@example.com", "age": 30}
process_user_registration(valid_data)
# Output: Registering user: testuser123, Email: test@example.com, Age: 30

invalid_data_username = {"username": "test user", "email": "test@example.com", "age": 30}
process_user_registration(invalid_data_username)
# Output: Validation Error: 1 validation error for UserRegistration...

invalid_data_email = {"username": "testuser123", "email": "invalid-email", "age": 30}
process_user_registration(invalid_data_email)
# Output: Validation Error: 1 validation error for UserRegistration...
```
- Output Encoding: Preventing XSS by encoding user-supplied data before rendering it in HTML, JavaScript, or other contexts. This transforms potentially malicious characters (e.g., <, >, &) into their HTML entity equivalents (&lt;, &gt;, &amp;).
  - Example (Python Flask - Jinja2 handles encoding by default):

```python
from flask import Flask, request, render_template_string

app = Flask(__name__)

@app.route('/')
def index():
    user_input = request.args.get('name', 'Guest')
    # The Jinja2 templating engine automatically encodes user_input to prevent XSS.
    # If user_input is "<script>alert('XSS')</script>", it will be rendered as
    # &lt;script&gt;alert('XSS')&lt;/script&gt; in the HTML.
    template = """
    <!DOCTYPE html>
    <html>
    <head><title>Greeting</title></head>
    <body>
      <h1>Hello, {{ user_input }}!</h1>
    </body>
    </html>
    """
    return render_template_string(template, user_input=user_input)

if __name__ == '__main__':
    # To test: navigate to http://127.0.0.1:5000/?name=<script>alert('XSS')</script>
    # The browser will display the literal string, not execute the script.
    app.run(debug=True)
```
- Static Application Security Testing (SAST): Tools that analyze source code, byte code, or binary code without executing it to identify potential security flaws. They can detect patterns indicative of vulnerabilities (e.g., unsanitized user input passed directly to a database query, use of weak cryptographic algorithms, potential buffer overflows).
- Example Tool: SonarQube, Checkmarx, Veracode, Bandit (for Python).
- Example (Bandit output, illustrative):

```shell
# Run Bandit on a Python project
bandit -r my_python_app/

# Example Output Snippet (illustrative):
# [main]  INFO    Running bandit at 2023-10-27 10:30:00
# [main]  INFO    Using Bandit version 1.7.4
#
# >> Issue: [B324] Use of weak MD5 hash for security. Consider using SHA-256 or stronger.
#    Severity: Medium
#    Location: my_python_app/utils.py:50
#
# >> Issue: [B105] Possible hardcoded password: 'admin123'
#    Severity: High
#    Location: my_python_app/utils.py:65
```
- Dynamic Application Security Testing (DAST): Tools that test applications in a running state by simulating external attacks. They interact with the application's interfaces (e.g., web UI, APIs) to discover vulnerabilities like SQL injection, XSS, insecure configurations, and broken access controls. DAST tools act as black-box testers.
- Example Tool: OWASP ZAP, Burp Suite, Nessus (can also perform DAST).
- Dependency Scanning (Software Composition Analysis - SCA): Tools that identify third-party libraries and components used in an application and check them against databases of known vulnerabilities (CVEs). This is critical for managing supply chain risk.
- Example (npm audit for Node.js):

```shell
# After 'npm install'
npm audit --production

# Example Output Snippet:
# # npm audit report
#
# 1 high severity vulnerability
#
# react-scripts  <=4.0.3
# Severity: High
# Cross-site Scripting (XSS) - https://npmjs.org/advisories/react-scripts
#
# fix available via `npm audit fix`
```
- Example (pip-audit for Python):

```shell
# Install pip-audit
pip install pip-audit

# Run pip-audit against installed packages
pip-audit

# Example Output Snippet:
# Installed package  Vulnerability ID  Affected Versions  Installed Version  Fixed Versions
# -----------------------------------------------------------------------------------------
# requests           PYSEC-2021-123    <2.27.0            2.26.0             2.27.0
# urllib3            PYSEC-2023-101    <1.26.17           1.26.15            1.26.17
```
3.3) Program Change Management Controls
These controls govern the process of modifying existing software and systems to ensure changes are authorized, tested, and implemented without introducing new risks.
- Formal Change Request Process: A documented procedure for submitting, reviewing, approving, and tracking all proposed changes. This typically involves:
- Change Request (CR) Form: Detailing the proposed change, justification, scope, affected systems, rollback plan, testing strategy, and risk assessment.
- Change Advisory Board (CAB): A committee responsible for reviewing and approving or rejecting change requests based on risk, impact, resource availability, and alignment with business objectives. This enforces SoD by having multiple stakeholders review changes.
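The change request lifecycle described above can be modeled as a small state machine that refuses illegal transitions, such as implementing a change that was never approved. This is an illustrative sketch; the states, transitions, and mandatory rollback-plan rule are assumptions modeled on a typical CAB process, not any specific ITSM tool.

```python
from enum import Enum

class State(Enum):
    SUBMITTED = "submitted"
    APPROVED = "approved"
    REJECTED = "rejected"
    IMPLEMENTED = "implemented"

# Only these transitions are legal; anything else is a control violation.
VALID_TRANSITIONS = {
    State.SUBMITTED: {State.APPROVED, State.REJECTED},
    State.APPROVED: {State.IMPLEMENTED},
    State.REJECTED: set(),
    State.IMPLEMENTED: set(),
}

class ChangeRequest:
    def __init__(self, summary: str, rollback_plan: str):
        # A rollback plan is mandatory before a CR can even be submitted.
        if not rollback_plan:
            raise ValueError("A rollback plan is mandatory before submission")
        self.summary = summary
        self.rollback_plan = rollback_plan
        self.state = State.SUBMITTED

    def transition(self, new_state: State) -> None:
        if new_state not in VALID_TRANSITIONS[self.state]:
            raise ValueError(f"Illegal transition {self.state.value} -> {new_state.value}")
        self.state = new_state

cr = ChangeRequest("Upgrade nginx to 1.24", "Reinstall 1.22 package; restore config backup")
cr.transition(State.APPROVED)      # CAB approval recorded
cr.transition(State.IMPLEMENTED)   # implementation only after approval
print(cr.state)  # State.IMPLEMENTED
```

In practice the transition method would also record who performed each transition, enforcing SoD by rejecting an approver who is also the requester.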
- Version Control Systems (VCS): Essential for tracking code changes, facilitating collaboration, and enabling rollback to previous stable versions.
- Tools: Git, Subversion (SVN).
- Git Workflow Example (Gitflow - simplified):

```shell
# 1. Start a new feature from the main development branch ('develop').
git checkout develop
git pull origin develop
git checkout -b feature/user-profile-editing

# 2. Make code changes, commit frequently with descriptive messages.
git add .
git commit -m "Feat: Implement basic user profile display"
# ... more changes ...
git commit -m "Feat: Add profile editing form"

# 3. Push the feature branch to the remote repository.
git push origin feature/user-profile-editing

# 4. Create a Pull Request (PR) for code review and merging into 'develop'.
#    This is where SoD is enforced: developers don't merge their own code.
#    The PR process involves peer review, automated tests (CI), and security scans.

# 5. After approval and successful CI, merge into 'develop'.
git checkout develop
git merge --no-ff feature/user-profile-editing
# --no-ff forces a merge commit, preserving the feature branch's history in the graph.
git push origin develop

# 6. For release: a 'release' branch is created from 'develop', stabilized, then
#    merged into 'main' (production) and back into 'develop'.
#    Hotfixes are branched from 'main' and merged back to both 'main' and 'develop'.
```
- Testing and Quality Assurance (QA): Comprehensive testing phases are critical before deployment:
- Unit Testing: Verifying individual code components in isolation.
- Integration Testing: Testing interactions between components or services.
- System Testing: Validating the complete, integrated system end-to-end against its functional and security requirements.
Source
- Wikipedia page: https://en.wikipedia.org/wiki/Information_technology_general_controls
