Sensitive Information Logging

ID

swift.sensitive_information_logging

Severity

low

Resource

Information Leak

Language

Swift

Tags

CWE:532, MASVS:storage-2, MASWE:0001, NIST.SP.800-53, OWASP:2021:A1, PCI-DSS:3.4

Description

Logging sensitive data such as passwords, authentication tokens, API keys, credit card numbers, or personally identifiable information (PII) can lead to serious security vulnerabilities when logs are accessed by unauthorized parties, stored insecurely, or transmitted to external logging services.

iOS and macOS applications commonly use various logging mechanisms that write messages to the console, system logs, or external analytics platforms. When developers inadvertently log sensitive data during debugging or normal operation, this information becomes vulnerable to:

  • Physical device access: Attackers with physical access can extract logs using Xcode, Console app, or device forensics tools

  • Crash reports: Logs may be included in crash reports sent to developers or third-party services

  • Cloud logging services: Third-party analytics and logging SDKs may transmit logs over insecure channels

  • Backup extraction: Device backups may include log files containing sensitive information

  • Malware access: Malicious apps with appropriate permissions can read system logs

Common logging functions in Swift/iOS:

  • print() - Swift’s basic console output (writes to standard output)

  • NSLog() - Foundation logging that writes to Apple System Log

  • os_log() - Modern unified logging system (C API)

  • OSLog / Logger - Swift-native APIs for unified logging (Logger requires iOS 14 / macOS 11 or later)

  • os_signpost() - Performance logging with signposts
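Of these, only the unified-logging family understands privacy redaction. A minimal sketch contrasting `print()` with the modern `Logger` API (the subsystem and category strings are illustrative):

```swift
import Foundation
#if canImport(os)
import os
#endif

let username = "alice"

// print() writes the interpolated value verbatim to standard output:
let consoleLine = "User login attempt: \(username)"
print(consoleLine)

#if canImport(os)
if #available(iOS 14.0, macOS 11.0, *) {
    // Logger routes the message through the unified logging system instead;
    // dynamic values appear as <private> in Console unless marked .public.
    let logger = Logger(subsystem: "com.example.app", category: "demo")
    logger.info("User login attempt: \(username, privacy: .private)")
}
#endif
```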

Rationale

The following example demonstrates vulnerable code that logs sensitive information:

import Foundation
import os.log

class UserAuthentication {
    let logger = Logger(subsystem: "com.example.app", category: "auth")

    // VULNERABLE: Logging password in plaintext
    func login_vulnerable1(username: String, password: String) {
        print("User login attempt: \(username), password: \(password)")  // FLAW

        // Authentication logic...
        if authenticate(username: username, password: password) {
            print("Login successful for \(username)")
        }
    }

    // VULNERABLE: Logging authentication token
    func storeToken_vulnerable2(token: String) {
        NSLog("Storing auth token: %@", token)  // FLAW
        UserDefaults.standard.set(token, forKey: "authToken")
    }

    // VULNERABLE: Logging API key
    func configureAPI_vulnerable3() {
        let apiKey = "sk_live_1234567890abcdef"
        logger.info("Configuring API with key: \(apiKey)")  // FLAW

        // API configuration...
    }

    // VULNERABLE: Logging credit card information
    func processPayment_vulnerable4(cardNumber: String, cvv: String) {
        print("Processing payment for card: \(cardNumber), CVV: \(cvv)")  // FLAW

        // Payment processing...
    }

    // VULNERABLE: Logging personal health information
    func logHealthData_vulnerable5(patientId: String, diagnosis: String) {
        os_log("Patient %@ diagnosed with %@", patientId, diagnosis)  // FLAW
    }

    private func authenticate(username: String, password: String) -> Bool {
        // Authentication logic
        return true
    }
}

// VULNERABLE: Logging social security number
class UserProfile {
    func updateSSN_vulnerable6(ssn: String) {
        print("Updating SSN: \(ssn)")  // FLAW
        // Update logic...
    }
}

// VULNERABLE: Logging location coordinates
class LocationTracker {
    func trackLocation_vulnerable7(latitude: Double, longitude: Double) {
        NSLog("User location: lat=%.6f, lon=%.6f", latitude, longitude)  // FLAW
    }
}

This code has several critical security problems:

  1. Password exposure: login_vulnerable1() logs the user’s password in plaintext, making it visible in system logs and potentially accessible to other apps or attackers with device access.

  2. Token leakage: storeToken_vulnerable2() logs authentication tokens that could be used to impersonate users or access protected resources.

  3. API key disclosure: configureAPI_vulnerable3() logs API keys that could allow unauthorized access to backend services.

  4. Financial data exposure: processPayment_vulnerable4() logs complete credit card details including CVV, violating PCI-DSS compliance requirements.

  5. Health information leakage: logHealthData_vulnerable5() logs protected health information (PHI), violating HIPAA regulations.

  6. PII disclosure: updateSSN_vulnerable6() logs social security numbers, exposing highly sensitive personal identifiers.

  7. Location privacy violation: trackLocation_vulnerable7() logs precise GPS coordinates, compromising user privacy and potentially revealing home/work addresses.

Attack Scenarios

Scenario 1: Physical Device Access

An attacker gains temporary physical access to a device (a lost phone, a borrowed device, or a repair scenario). They can then connect the device to a computer, open the Console app or Xcode's debugging console, and search for terms such as "password" or "token" to harvest sensitive information from the app's logs.

Scenario 2: Malicious App

A seemingly benign app with log reading permissions silently extracts sensitive data from system logs:

// Malicious app code (macOS): shells out to the `log` CLI
let process = Process()
process.executableURL = URL(fileURLWithPath: "/usr/bin/log")  // launchPath is deprecated
process.arguments = ["show", "--predicate", "process == 'VictimApp'"]
try? process.run()
// Exfiltrates logs containing passwords, tokens, keys...

Scenario 3: Third-Party Analytics SDK

An analytics SDK configured to capture debug logs automatically transmits sensitive information to external servers:

// App uses third-party SDK that captures all logs
Analytics.captureDebugLogs = true  // Dangerous if sensitive data is logged
// All print() and NSLog() statements sent to analytics platform

Remediation

Recommended Approach: Use OSLog Privacy Levels

Modern Swift logging with Logger and OSLog provides built-in privacy controls through string interpolation privacy markers:

import os.log

class SecureAuthentication {
    let logger = Logger(subsystem: "com.example.app", category: "auth")

    // SECURE: Using privacy markers to redact sensitive data
    func login_secure1(username: String, password: String) {
        // The password interpolation is replaced with <private> in log output
        logger.info("User login attempt: \(username, privacy: .public), password: \(password, privacy: .private)")

        if authenticate(username: username, password: password) {
            logger.info("Login successful for \(username, privacy: .private)")
        }
    }

    // SECURE: Marking tokens as private
    func storeToken_secure2(token: String) {
        logger.info("Storing auth token: \(token, privacy: .private)")
        UserDefaults.standard.set(token, forKey: "authToken")
    }

    // SECURE: Complete redaction of API keys
    func configureAPI_secure3() {
        let apiKey = getAPIKey()
        logger.info("Configuring API with key: <redacted>")
        // Never log the actual key, even with .private
    }

    // SECURE: Don't log payment card details at all
    func processPayment_secure4(cardNumber: String, cvv: String) {
        logger.info("Processing payment for card ending in: \(String(cardNumber.suffix(4)))")
        // Log only last 4 digits, never CVV
    }

    // SECURE: Hash identifiers before logging
    func logHealthData_secure5(patientId: String, diagnosis: String) {
        let hashedId = hashIdentifier(patientId)
        logger.info("Patient \(hashedId, privacy: .public) updated")
        // Never log diagnosis details
    }

    private func authenticate(username: String, password: String) -> Bool {
        return true
    }

    private func getAPIKey() -> String {
        return ""
    }

    private func hashIdentifier(_ id: String) -> String {
        // Placeholder: use a stable cryptographic hash (e.g. SHA-256) in
        // production; String.hashValue is randomly seeded per process launch.
        return String(id.hashValue)
    }
}

Privacy Level Options:

Privacy Level      Behavior                                    Use Case
.private           Redacted in log viewers (shows <private>)   Sensitive data: passwords, tokens, PII
.public            Visible in all log contexts                 Non-sensitive data: public IDs, states
.auto (default)    System decides (conservative default)       General logging
.sensitive         Even more restricted than .private          Highly sensitive data
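The table can be summarized as a small classification helper. This is a hypothetical sketch: `LogDataKind` and `privacyLevelName` are illustrative names, not part of the OSLog API; in real code each case would map to an OSLogPrivacy value.

```swift
// Hypothetical mapping mirroring the table above (illustrative names only).
enum LogDataKind {
    case highlySensitive  // e.g. raw credentials, secrets
    case sensitive        // passwords, tokens, PII
    case nonSensitive     // public IDs, states
    case general          // everything else
}

func privacyLevelName(for kind: LogDataKind) -> String {
    switch kind {
    case .highlySensitive: return "sensitive"
    case .sensitive:       return "private"
    case .nonSensitive:    return "public"
    case .general:         return "auto"
    }
}

print(privacyLevelName(for: .sensitive))  // "private"
```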

Best Practices

Never log sensitive data in production:

#if DEBUG
    print("Debug info: \(value)")
#endif
// Better: Use proper logging with privacy levels

Use conditional compilation for debug logging:

func debugLog(_ message: String) {
    #if DEBUG
        logger.debug("\(message)")
    #endif
}

Sanitize or hash sensitive values before logging:

import CryptoKit  // needed for SHA256 (Apple platforms; use swift-crypto's Crypto on Linux)

func logUserAction(userId: String, action: String) {
    // Hex-encode the digest for a compact, stable identifier
    let hashedId = SHA256.hash(data: Data(userId.utf8))
        .map { String(format: "%02x", $0) }
        .joined()
    logger.info("User \(hashedId, privacy: .public) performed \(action)")
}

Log only necessary information:

// Bad: Logging entire user object with all fields
logger.info("User data: \(user)")

// Good: Log only the action, not sensitive details
logger.info("User profile updated")

Review third-party SDK configurations:

// Ensure analytics SDKs don't capture sensitive logs
Analytics.configure {
    $0.captureDebugLogs = false
    $0.logLevel = .error  // Only critical errors
}

Implement log scrubbing for legacy code:

class SecureLogger {
    // Match "keyword: value" / "keyword=value" pairs so the secret itself is
    // removed; matching only the keyword would leave the value in the log.
    private static let sensitivePatterns = [
        "(?i)(password|token|api.?key|ssn|credit.?card)\\s*[:=]\\s*\\S+"
    ]

    static func safePrint(_ message: String) {
        var sanitized = message
        for pattern in sensitivePatterns {
            sanitized = sanitized.replacingOccurrences(
                of: pattern,
                with: "$1=[REDACTED]",
                options: .regularExpression
            )
        }
        print(sanitized)
    }
}

// SecureLogger.safePrint("login with password: hunter2")
// prints "login with password=[REDACTED]"

Use structured logging instead of string interpolation:

// Structured logging makes it easier to control what gets logged.
// Note: the metadata API shown here is from the swift-log package's
// Logger, not os.Logger (which has no metadata parameter).
logger.info("User action", metadata: [
    "action": "login",
    "userId": .stringConvertible(hashedUserId),
    "timestamp": .stringConvertible(Date())
])

Comparison: Logging Approaches

Approach                        Security Level    Recommendation
print() with sensitive data     ⚠️ Dangerous       Never use in production
NSLog() with sensitive data     ⚠️ Dangerous       Avoid for sensitive info
os_log() without privacy        ⚠️ Risky           Add privacy markers
Logger with .private marker     ✅ Secure          Recommended for iOS 14+
No logging of sensitive data    ✅ Most Secure     Best practice

Configuration

This detector analyzes logging statements for potentially sensitive data based on variable names, function parameters, and context. It checks for:

  • Variable names containing keywords like: password, token, key, secret, ssn, credit, api, auth

  • Function parameters passed to logging functions

  • String literals in log messages

The detector recognizes privacy markers (.private, .public, etc.) in OSLog string interpolation as neutralizations, assuming the developer is handling sensitive data appropriately.
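As a rough illustration of the name-based check, the toy heuristic below flags variable names containing the listed keywords. This is a simplification for explanation only, not the detector's actual implementation, which also inspects parameters, literals, and context.

```swift
// Toy approximation of the keyword heuristic described above.
let sensitiveKeywords = ["password", "token", "key", "secret",
                         "ssn", "credit", "api", "auth"]

func looksSensitive(_ variableName: String) -> Bool {
    let lowered = variableName.lowercased()
    return sensitiveKeywords.contains { lowered.contains($0) }
}

print(looksSensitive("authToken"))   // true: contains "auth" and "token"
print(looksSensitive("pageIndex"))   // false
```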

You can configure the types of sensitive data to detect:

properties:
  sensitiveKinds:
    - access_control        # Passwords, tokens, auth credentials
    - crypto               # Cryptographic keys, secrets
    - financial            # Credit cards, bank accounts
    - health               # Medical information, PHI
    - location             # GPS coordinates, addresses
    - personal_identifiable_information  # SSN, ID numbers, names

References