Security Knowledge Base

Security FAQ

Find answers to common security, compliance, and architecture questions about the Obsidian iD platform.

Domain 1: Architecture & Isolation

Obsidian ensures your organization's data remains completely separate from other customers through multiple layers of protection. Every piece of data in our system is tagged with your organization's unique identifier, and our application automatically filters all database queries to only return data belonging to your organization. The same boundary is enforced at the database layer through PostgreSQL Row-Level Security, so even if there were a bug in our application code, the database itself prevents cross-customer data access. Additionally, when you log in, your session is permanently bound to your selected organization, so you cannot accidentally access another organization's data during your session.

Obsidian uses row-level tenant isolation, which means all customers share the same database infrastructure for efficiency and cost-effectiveness, but each row of data is tagged with an organization identifier. Every time the application retrieves data, it automatically includes a filter to only return records belonging to your organization. This approach provides strong isolation while allowing us to efficiently manage and scale the platform. Your data is logically separated from other organizations' data at the most granular level possible—individual database rows.
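
The unconditional per-row filter described above can be sketched in a few lines of Python. This is a simplified illustration with invented names (`TenantScopedRepository`, `Record`), not the platform's actual implementation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Record:
    org_id: str    # every row carries its organization's identifier
    payload: str

class TenantScopedRepository:
    """Hypothetical sketch: every read is filtered by the caller's org_id."""

    def __init__(self, rows: list[Record]):
        self._rows = rows  # stands in for a shared multi-tenant table

    def find_all(self, org_id: str) -> list[Record]:
        # The tenant filter is applied unconditionally; callers cannot opt out.
        return [r for r in self._rows if r.org_id == org_id]

repo = TenantScopedRepository([
    Record("org-a", "invoice-1"),
    Record("org-b", "invoice-2"),
])
```

The design point is that the filter lives inside the data-access layer, so no caller can issue an unfiltered query by accident.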

We employ a defense-in-depth strategy to ensure your data never leaks to other organizations. Every API request you make includes a cryptographically signed token containing your organization's identity, which our security guards validate before processing any request. All database operations automatically filter by your organization, and every data access attempt is logged for security monitoring. Additionally, our login system is designed to prevent attackers from discovering which organizations or users exist on our platform—even failed login attempts return the same response whether an account exists or not, preventing reconnaissance attacks.

Obsidian enforces tenant boundaries through both application code and database-level security. At the application layer, every service automatically filters all data access by your organization's identifier, and security guards verify your tenant context before any operation. At the database layer, our RlsContextGuard sets PostgreSQL Row-Level Security session variables before every database operation, ensuring the database itself enforces tenant isolation. This guard uses atomic operations to set and verify context, and follows a fail-closed security model—if context verification fails for any reason, the request is blocked. This defense-in-depth approach means tenant isolation is maintained even in the event of an application-level bug.
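
A fail-closed context guard of the kind described above might behave like this sketch. The `FakeDbSession` and function names are invented stand-ins; a real guard would issue `SET LOCAL app.current_tenant = ...` against PostgreSQL inside the transaction:

```python
class TenantContextError(Exception):
    """Raised when tenant context cannot be verified; the request is blocked."""

class FakeDbSession:
    # Stand-in for a PostgreSQL connection, so the sketch is runnable.
    def __init__(self):
        self.settings: dict[str, str] = {}

    def execute_set(self, key: str, value: str) -> None:
        self.settings[key] = value

    def execute_show(self, key: str):
        return self.settings.get(key)

def with_tenant_context(session, org_id: str) -> bool:
    """Fail-closed sketch: set the RLS variable, then verify it round-trips."""
    session.execute_set("app.current_tenant", org_id)
    if session.execute_show("app.current_tenant") != org_id:
        raise TenantContextError("tenant context verification failed")
    return True
```

The essential property is the fail-closed default: any discrepancy between what was set and what the database reports blocks the request rather than proceeding without isolation.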

Absolutely. Obsidian is designed as a true multi-tenant platform where organizations share infrastructure for efficiency while maintaining complete trust boundaries. Your organization's data, settings, roles, and audit logs are all completely isolated from other organizations. Importantly, if a user belongs to multiple organizations (for example, a consultant working with several clients), their permissions are entirely independent in each organization. Being an administrator in one organization grants zero privileges in another—each organization is a completely separate trust domain.

Obsidian provides dedicated service account support for machine-to-machine authentication, fully scoped to your organization. Service accounts have their own entity type with dedicated credentials and role assignments, separate from human user accounts. Through the admin API, you can create, manage, and delete service accounts, rotate their credentials on a schedule, and revoke individual credentials immediately if compromised. Each service account can be assigned specific roles to follow the principle of least privilege. Additionally, API keys provide a simpler alternative for basic machine authentication. All service account activity is tenant-scoped and audited.

Background processes in Obsidian, such as session cleanup, security enforcement, and webhook delivery, maintain proper tenant isolation by explicitly tracking which organization each operation belongs to. All background job activities are logged with your organization's identifier for full auditability. While the current implementation ensures data isolation, we are working on enhanced job isolation features that will prevent one organization's heavy processing loads from affecting another organization's performance—a concept known as "noisy neighbor" prevention.

Obsidian has robust protections against privilege escalation between organizations. When you authenticate, your session is permanently locked to the organization you selected—there is no way to "upgrade" that session to access another organization. Your roles and permissions are entirely organization-specific; being an administrator in one organization provides absolutely no special access in another. If you legitimately need to access multiple organizations (as a member of both), you must either re-authenticate or use the secure tenant-switching flow, which creates a distinct session for the other organization.

Your organization identity is securely embedded in the cryptographically signed access token you receive when logging in. This token is issued for a specific organization and cannot be modified. Every API request you make includes this token, and our servers extract your organization identity from it—there's no way for request parameters or headers to override this. For users who belong to multiple organizations, we present a tenant selection screen during login so you explicitly choose which organization's context you want to work in. This approach ensures your organization identity is tamper-proof and consistently applied.
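
To illustrate why request headers cannot override the organization identity, here is a minimal sketch that uses an HMAC-signed token as a simplified stand-in for the real JWT. All names, claims, and the signing scheme are assumptions for illustration only:

```python
import base64
import hashlib
import hmac
import json

SECRET = b"server-side-signing-key"  # illustrative only; never hardcode keys

def issue_token(user_id: str, org_id: str) -> str:
    """Issue a signed token whose org claim cannot be altered by the client."""
    body = base64.urlsafe_b64encode(
        json.dumps({"sub": user_id, "org": org_id}).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def org_from_request(token: str, headers: dict) -> str:
    """Derive the tenant from the signed token; request headers are ignored."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise PermissionError("invalid token signature")
    claims = json.loads(base64.urlsafe_b64decode(body))
    # Tenant identity comes from the signed token only; an X-Org-Id
    # header (or any other client-supplied value) has no effect.
    return claims["org"]
```

Tampering with the body invalidates the signature, so the organization claim stays tamper-proof end to end.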

A confused-deputy attack occurs when a trusted component is tricked into misusing its authority on behalf of an attacker. Obsidian protects against this through multiple mechanisms. First, every token specifies its intended audience (administrative, developer, or customer portal), and requests are rejected if the token doesn't match the target system. Second, before any operation modifies a resource, we verify that both the requesting user and the resource belong to the same organization. Third, all state-changing operations (creating, updating, or deleting data) require a CSRF token that proves the request originated from your actual browser session, not from a malicious site attempting to hijack your credentials.
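
The first two checks, audience matching and same-organization resource verification, can be sketched as a small authorization helper (hypothetical names, simplified claims):

```python
def authorize(token_claims: dict, target_audience: str, resource_org: str) -> None:
    """Sketch of two confused-deputy defenses; raises on any mismatch."""
    # 1. The token must have been issued for this portal/system.
    if token_claims.get("aud") != target_audience:
        raise PermissionError("token audience mismatch")
    # 2. The requester and the resource must belong to the same organization.
    if token_claims.get("org") != resource_org:
        raise PermissionError("resource belongs to a different organization")
```

Either check failing rejects the request before any state change, which is what prevents a token minted for one context from exercising authority in another.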

Domain 2: Identity Model & User Types

Obsidian supports multiple types of identities to meet different use cases. Human users are the primary identity type, with full support for passwords, multi-factor authentication, and session management. For automated systems and integrations, we provide dedicated service accounts with their own credential management, role assignments, and lifecycle controls—including credential rotation and revocation. API keys offer a simpler alternative for basic machine authentication. We also support OAuth clients for third-party applications that need to act on behalf of users. This comprehensive identity model covers human, machine, and application use cases.

Obsidian maintains a clear separation between platform-level administrators (who manage the entire Obsidian installation) and organization administrators (who manage individual customer organizations). Platform owners have access to system-wide settings and can access any organization for support purposes, while organization administrators have full control only within their own organization. These two admin types use different access portals and their access tokens contain different claims, ensuring platform administrators cannot accidentally be confused with organization administrators and vice versa.

Yes, a single user can belong to multiple organizations while maintaining just one set of login credentials. This is ideal for consultants, contractors, or employees who work across multiple companies. When you log in, Obsidian checks all your active memberships and, if you belong to more than one organization, presents a selection screen so you can choose which organization to access. Each organization maintains completely separate permissions for you: being an administrator in one organization has no bearing on your access level in another. You can switch between organizations by using the secure tenant-switching flow or by logging in again and selecting a different organization.

If you belong to multiple organizations, you have a single global identity with one email address, one password, and one set of multi-factor authentication methods. When you join additional organizations, a membership record is created linking your global identity to that organization with a specific role. This design means you only need to remember one password, and if you enable MFA, it protects all your organization memberships. One of your organizations is marked as your "primary" organization, which becomes the default when accessing Obsidian without specifically selecting an organization.

Your identity in Obsidian is globally unique across the entire platform. Your email address can only be registered once, regardless of how many organizations you belong to. This means there's a single "you" in the system, and when you're invited to additional organizations, those organizations are linked to your existing identity rather than creating a duplicate. This prevents confusion, ensures consistent credentials, and allows you to use the same MFA setup everywhere. What varies between organizations is your membership status and role—those are specific to each organization.

Obsidian clearly distinguishes between internal team members (your organization's employees) and external users (your customers or end-users in B2B2C scenarios). Internal users access the administrative and developer portals with full management capabilities, while external users access a customer-facing portal with limited, appropriate functionality. Each user's type is encoded in their access token, and the system enforces that customer-type users cannot access administrative functions. This separation ensures your customers get a streamlined experience without seeing internal tools, and your internal team has full administrative capabilities.

Obsidian clearly distinguishes between human users, system-level operations, service accounts, and API key-based access in our audit trails and access controls. Platform-level operations are tracked as system actions, while dedicated service accounts provide formal application identities with their own credentials, roles, and lifecycle management. API keys offer simpler machine authentication for basic integrations. Our audit logs clearly show whether an action was performed by a human user, a service account, an API key, or an internal system process, ensuring clear accountability across all identity types.

When a user is removed from your organization, we don't permanently delete their record. Instead, we mark them as deleted, which prevents them from logging in or appearing in user lists while preserving the complete audit trail of their past actions. Your audit logs will continue to show who performed historical actions, maintaining compliance and forensic capability. You can also temporarily suspend users (making them inactive) without deleting them, allowing for easy reinstatement if needed.

Obsidian takes steps to prevent orphaned identities—users who exist but have no valid organization memberships. When users are removed from organizations, their membership status is updated to 'removed' rather than deleted, maintaining referential integrity. Inactive and removed memberships are excluded from login discovery, so orphaned users effectively cannot access the system. We're planning to add automated cleanup processes to detect and report identities that have lost all their valid memberships, ensuring cleaner data management.

Yes, Obsidian supports users who don't have traditional passwords. Users can authenticate exclusively through social login providers (like Google or GitHub) without ever setting a password. Similarly, users can rely solely on magic links (passwordless email authentication). These users have valid identities and can access all features appropriate to their role, but they cannot use the password login option since they never set one. This flexibility allows organizations to choose the authentication methods that best suit their security policies and user preferences.

Domain 3: Authentication Methods

Obsidian provides multiple authentication options to meet diverse organizational needs. Standard email and password authentication is available with industry-standard bcrypt hashing. For organizations preferring passwordless authentication, we offer Shadow Links (magic links sent via email). Social login through providers like Google and GitHub is supported for user convenience. Enterprise customers can implement SAML-based Single Sign-On to integrate with their existing identity providers like Okta, Azure AD, or OneLogin. The system intelligently detects which authentication method to present based on the user's configuration when they enter their email address.

Yes, Obsidian fully supports passwordless authentication through our Shadow Link feature. When you choose passwordless login, we send a secure, single-use link to your email address. Clicking this link authenticates you and creates your session—no password required. These links expire after 5 minutes for security and can only be used once. Shadow Links can be used as your primary login method (completely eliminating passwords) or as a second factor alongside your password. This approach eliminates password-related security risks like weak passwords, password reuse, and credential stuffing attacks.

Magic links in Obsidian are secured with multiple layers of protection. Each link contains a cryptographically random token that's computationally impossible to guess. These tokens expire after just 5 minutes, minimizing the window for any potential interception. Each token can only be used once—after you click the link, it's immediately invalidated. We never store the actual token; only a cryptographic hash is saved, so even if our database were compromised, attackers couldn't use the stored data. Finally, we rate-limit magic link requests to prevent abuse, allowing only 5 requests per hour for any given email address.

Absolutely not. Obsidian never stores or logs passwords in plaintext under any circumstances. When you set a password, it's immediately transformed using the bcrypt hashing algorithm—a one-way function designed specifically for passwords. Even we cannot reverse this hash to see your password. All password transmission occurs over encrypted HTTPS connections. Our logging system explicitly excludes password fields, so even debugging logs cannot accidentally capture credentials. When login attempts are audited, we record whether the attempt succeeded or failed, but never the password that was attempted.

Obsidian uses industry-leading cryptographic algorithms for all credential types. Passwords are hashed using bcrypt, specifically designed for password security with adjustable computational cost to stay ahead of hardware advances. Session tokens use a dual-hash approach—SHA-256 for fast lookups and bcrypt for security—ensuring both performance and protection. MFA secrets for authenticator apps are encrypted with AES-256-GCM, the same encryption standard used by government agencies. All backup codes and one-time passwords are stored as one-way SHA-256 hashes. These algorithm choices align with current industry best practices and cryptographic recommendations.
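
The dual-hash session-token scheme can be sketched as follows. Because bcrypt is not in the Python standard library, PBKDF2 stands in for the slow hash here; the structure (fast SHA-256 lookup plus slow, salted verification hash) is the point, not the specific algorithm:

```python
import hashlib
import hmac
import secrets

def _slow_hash(token: str, salt: bytes) -> bytes:
    # Stand-in for bcrypt: PBKDF2-HMAC keeps this sketch stdlib-only.
    return hashlib.pbkdf2_hmac("sha256", token.encode(), salt, 100_000)

class SessionTokenStore:
    """Sketch: SHA-256 gives O(1) lookup, the slow hash resists brute force."""

    def __init__(self) -> None:
        self._by_lookup: dict[str, tuple[bytes, bytes]] = {}

    def store(self, token: str) -> None:
        lookup = hashlib.sha256(token.encode()).hexdigest()  # fast index key
        salt = secrets.token_bytes(16)
        self._by_lookup[lookup] = (salt, _slow_hash(token, salt))

    def verify(self, token: str) -> bool:
        lookup = hashlib.sha256(token.encode()).hexdigest()
        entry = self._by_lookup.get(lookup)
        if entry is None:
            return False
        salt, expected = entry
        return hmac.compare_digest(_slow_hash(token, salt), expected)
```

The fast hash finds the row; the slow hash is what an attacker with a database dump would actually have to break.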

Credential stuffing—where attackers use stolen username/password combinations from other sites—is actively defended against through multiple mechanisms. First, rate limiting restricts how many login attempts can be made from a single source, slowing down automated attacks. Second, after a configurable number of failed attempts, accounts are temporarily locked, stopping attackers even if they have valid credentials. Third, our risk scoring engine analyzes login patterns—unusual IP addresses, impossible geographic travel, or abnormal behavior trigger additional scrutiny. Finally, our login flow returns identical responses whether an email exists or not, preventing attackers from building lists of valid usernames.

Yes, rate limiting is applied differently based on the sensitivity of each authentication endpoint. Standard login attempts are limited to 10 per minute, providing reasonable allowance for mistyped passwords while blocking automated attacks. The email identification endpoint has stricter limits to prevent email address harvesting. MFA challenge endpoints allow only 5 attempts per hour, since legitimate users rarely need multiple attempts. Password reset requests are also rate-limited to prevent abuse. These limits apply per-user and per-IP, ensuring that attackers cannot simply rotate accounts to bypass restrictions.

Obsidian actively monitors for brute-force password attacks through multiple detection mechanisms. Every failed login attempt is recorded, and when attempts from the same source or against the same account exceed thresholds, defensive measures activate. These include progressive delays (each subsequent attempt must wait longer), temporary account lockouts, and security alerts. We track attempts both by account (protecting individual users) and by IP address (blocking distributed attacks). When brute-force patterns are detected, high-severity audit events are generated that can trigger alerts to your security team.

Organizations can control several authentication policy aspects for their users. You can mandate multi-factor authentication for all users in your organization, set password complexity requirements, and configure whether SSO is required. Users who don't meet your organization's authentication requirements will be prompted or blocked accordingly. While not all authentication methods can be individually whitelisted or blacklisted yet, the most critical controls—MFA requirements and SSO enforcement—are fully available. We're planning to add more granular authentication method controls in future updates.

Currently, authentication method requirements apply uniformly to all users within an organization rather than varying by role. For example, if you require MFA, all users must use MFA—you cannot currently require hardware security keys only for administrators while allowing app-based MFA for regular users. This feature is on our roadmap as we recognize that many enterprise security policies require stronger authentication for privileged roles. The planned implementation would allow you to specify required authentication methods (such as hardware tokens or specific MFA types) on a per-role basis.

Domain 4: MFA (Multi-Factor Authentication)

Obsidian offers flexible multi-factor authentication options to balance security and user convenience. Time-based One-Time Passwords (TOTP) work with standard authenticator apps like Google Authenticator, Microsoft Authenticator, or Authy. For users who prefer not to install an app, Email OTP sends a 6-digit code to your registered email. Shadow Link MFA provides a clickable email link as the second factor. Every user also receives 10 single-use backup codes for account recovery if they lose access to their primary MFA method. Hardware security key support through WebAuthn is being finalized, and SMS and push notification options are on our roadmap.

MFA protects you at multiple points in Obsidian. When you log in with your password, you'll immediately be prompted for your second factor before gaining access. For particularly sensitive actions—such as disabling MFA, changing security settings, or accessing critical data—we require step-up authentication even if you're already logged in. This means you'll need to re-verify your identity with your password and MFA code if you attempt these actions and your last authentication was more than 5 minutes ago. This approach ensures that even if someone gains access to an unlocked session, they cannot perform the most damaging actions without additional verification.

Obsidian includes a comprehensive risk scoring engine that evaluates each login attempt based on multiple factors including device familiarity, IP address reputation, geographic location, failed attempt history, and behavioral patterns. We also detect "impossible travel": a login from New York followed by one from Singapore 10 minutes later is physically implausible, and we use Haversine great-circle distance calculations to assess geographic plausibility accurately. Based on the computed risk score, the system automatically determines the appropriate response: normal access for low-risk logins, suggested or required MFA for medium-risk logins, and outright blocking for high-risk attempts. These thresholds are configurable per-tenant, and trusted signals (like using a known device or passkey authentication) reduce risk scores.
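
The impossible-travel check can be illustrated directly, since the Haversine formula is standard. The 1000 km/h speed threshold below is an invented example value, not the platform's actual setting:

```python
import math

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two (lat, lon) points, in kilometres."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def is_impossible_travel(dist_km: float, elapsed_s: float,
                         max_speed_kmh: float = 1000.0) -> bool:
    # Example threshold: flag anything faster than a commercial jet.
    hours = max(elapsed_s / 3600.0, 1e-9)
    return dist_km / hours > max_speed_kmh
```

New York to Singapore is roughly 15,300 km, so covering it in 10 minutes implies a speed no traveler can reach, and the login pair is flagged.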

Your MFA secrets, such as TOTP seeds for authenticator apps, are encrypted at rest using AES-256-GCM—the same encryption standard trusted by governments and financial institutions. The encryption key itself is stored separately and securely, never alongside the encrypted data. Each encrypted secret includes a version identifier, allowing us to rotate encryption keys without disrupting your MFA setup. Even if an attacker gained access to the database, they would only find encrypted data that's computationally infeasible to decrypt without the encryption key.

When you set up MFA, we require proof that you've successfully configured your authenticator app before enabling protection on your account. The setup process generates your secret and displays a QR code, but your account remains unprotected at this stage. Only after you successfully enter a valid code from your authenticator app do we mark your MFA as verified and begin requiring it for future logins. This prevents situations where users accidentally lock themselves out by enabling MFA without properly configuring their authenticator app.

Automated systems and integrations authenticate differently from human users—they don't go through the interactive MFA challenge flow. Obsidian provides dedicated service accounts with their own credential type, specifically designed for machine-to-machine communication. These credentials include built-in rotation and revocation capabilities, and each service account can be assigned specific roles following the principle of least privilege. API keys provide a simpler alternative for basic integrations. For maximum security, service account credentials should be rotated regularly, and unused credentials should be revoked promptly.

MFA fatigue attacks involve bombarding users with authentication requests until they approve one out of frustration. While our current MFA methods require explicit user action (entering a code), we implement protections that would apply if we add push-notification MFA in the future. Rate limiting restricts how frequently MFA can be attempted, and all MFA challenges are single-use—once a challenge expires or is used, a completely new one must be generated. We log all MFA attempts so suspicious patterns can be detected and investigated. We're planning to add user notifications when unusual MFA activity is detected on their account.

Organizations have full control over their MFA policies. You can make MFA mandatory for all users, optional (user's choice), or completely disabled (not recommended). If you mandate MFA, you can set a grace period during which users are reminded to enable MFA but not yet blocked. After the grace period, a soft lock prevents access to most features until MFA is configured. Organizations can also specify which MFA methods are acceptable—for example, requiring TOTP apps while disabling less secure options. This flexibility allows you to balance security requirements with your users' needs and technical capabilities.

Applications and scripts that communicate with Obsidian's API don't go through interactive MFA flows. Instead, they authenticate using API keys (for simple integrations) or OAuth client credentials (for more sophisticated applications). These credentials are long-lived secrets that should be managed carefully. If you're building a command-line tool for human users, the MFA challenge can be presented and responded to programmatically—the user would enter their MFA code in the terminal, and your application would submit it to complete authentication. This provides flexibility for various integration patterns while maintaining security.

During MFA setup, you receive 10 one-time backup codes specifically designed for account recovery. Store these codes securely (printed in a safe, password manager, etc.). If you lose access to your authenticator app, these codes let you log in and reconfigure MFA. If you've also lost your backup codes, organization administrators can disable MFA on your account after verifying your identity through out-of-band channels (like a video call or in-person verification). Platform administrators have additional recovery options for complete lockout scenarios. We recommend periodically verifying that your backup codes are accessible and that your authenticator app is properly backed up.

Domain 5: Sessions & Tokens

Each time you log in to Obsidian, a session record is created that tracks your authenticated state. This session links your identity to your organization and includes security features like CSRF protection tokens and expiration timestamps. Sessions have both an absolute expiration (maximum lifetime regardless of activity) and an idle timeout (expires if you're inactive for too long). The session also records your access context—whether you're using the admin portal, developer portal, or customer portal—ensuring you can't cross boundaries between these environments without re-authenticating.

Obsidian uses a hybrid approach that combines the benefits of both stateful and stateless authentication. Your refresh token is stateful—stored in our database—allowing us to instantly revoke your session if needed (for logout, security incidents, or admin action). Your access token is stateless—a self-contained JWT—providing fast API authentication without database lookups on every request. These access tokens are short-lived (15 minutes by default), limiting the window during which a stolen token could be used. For immediate revocation of access tokens (before they naturally expire), we maintain a blacklist of invalidated token IDs.

Sessions can be revoked instantly through several mechanisms. When you log out, your session is immediately marked as revoked, and subsequent requests with that refresh token will be rejected. For immediate access token invalidation, we add the token's unique identifier to a blacklist that's checked on every authenticated request. Organization administrators can revoke any user's sessions from the admin portal—useful if an employee leaves or a device is stolen. Platform administrators can revoke sessions across any organization. Session revocation takes effect immediately; there's no delay or cache that would allow continued access after revocation.

When an administrator changes a user's roles or permissions, the update takes effect at their next token refresh. Because access tokens are self-contained and short-lived (15 minutes by default), users will receive their updated permissions within this window without any manual action. If immediate permission enforcement is required (for example, revoking access from a compromised account), administrators can explicitly revoke the user's session, forcing them to re-authenticate and receive their updated (or removed) permissions immediately. We're planning to add automatic permission refresh triggers for critical permission changes.

Session fixation attacks occur when an attacker forces a victim to use a session ID that the attacker knows. Obsidian prevents this by always generating a completely new session when you authenticate—you cannot "bring" a pre-existing session through the login process. Session tokens are generated using cryptographically secure random number generators, making them impossible to predict. We optionally bind sessions to device fingerprints, so even if a session token were intercepted, it couldn't be used from a different device. Cookies are configured with security flags that prevent JavaScript access and cross-site usage.

Yes, refresh tokens are rotated with each use for enhanced security. When you use your refresh token to obtain new access tokens, you receive both a new access token and a completely new refresh token. The old refresh token is invalidated. This rotation limits the damage from token theft—a stolen refresh token can only be used once before being replaced. If someone tries to use an already-rotated token, we detect this as potential theft and may take protective action like invalidating all sessions for that user.

If a refresh token is stolen and the attacker uses it, the legitimate user's next refresh attempt will fail because their token was already rotated. This mismatch—where a supposedly-valid token has already been replaced—is a strong signal of potential token theft. When we detect this, we log a security event and may invalidate all of the user's sessions as a precaution, requiring them to re-authenticate with their password and MFA. Each token use is logged with IP address and device information, helping identify suspicious patterns like the same token being used from multiple locations simultaneously.

Every access token specifies its intended audience—whether it's for administrative functions, developer portals, or customer-facing applications. This prevents tokens issued for one purpose from being used for another. For example, if you log in through the customer portal, your token is marked as a "customer" token and will be rejected by the admin portal. This separation ensures that even if a token for a less privileged context is compromised, it cannot be used to access more sensitive administrative or developer functions.

Obsidian handles token expiration consistently even when requests are routed to different servers. Session state is stored centrally in our database, so all API servers have the same view of session validity. Access tokens contain embedded expiration timestamps that any server can validate independently. We account for minor clock differences between servers (clock skew) to prevent false expirations. When your session expires, it's expired everywhere simultaneously—there's no window where some servers think you're authenticated and others don't.

Obsidian provides cryptographic device binding for maximum session security. When enabled for your organization, every API request must include a device signature generated using ECDSA P-256 public-key cryptography. The DeviceBindingGuard verifies that the request originates from a registered device by validating the digital signature against the stored public key. The signed payload includes the request method, URL, timestamp, and nonce to prevent replay attacks. If a request arrives with an invalid or missing device signature, it is immediately rejected. This feature is configurable per-tenant—you can enable it for high-security environments while keeping it optional for others. Additionally, device information (browser, OS, location) is captured at session creation for forensic analysis.

See also:
Domain 6

Authorization Model

Obsidian combines two powerful authorization approaches. Role-Based Access Control (RBAC) lets you assign roles like "Admin," "Editor," or "Viewer" that grant bundles of permissions—this is intuitive and easy to manage for most use cases. Attribute-Based Access Control (ABAC) extends this with fine-grained policies that can consider any attribute—user department, resource ownership, time of day, location, or custom fields. For example, you might allow all users to read documents, but only allow users to edit documents they created or that belong to their department. This hybrid model gives you both simplicity (through roles) and flexibility (through policies).
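
The document-editing example above can be sketched as two layered checks: RBAC grants the base permission, then ABAC conditions narrow it. Role names, permission strings, and attributes here are illustrative:

```python
ROLE_PERMISSIONS = {
    "Editor": {"documents:read", "documents:write"},
    "Viewer": {"documents:read"},
}

def can_edit_document(user, document):
    # RBAC layer: some assigned role must grant the write permission.
    granted = set().union(*(ROLE_PERMISSIONS[r] for r in user["roles"]))
    if "documents:write" not in granted:
        return False
    # ABAC layer: attribute conditions refine the role grant.
    return (document["created_by"] == user["id"]
            or document["department"] == user["department"])

editor = {"id": "u1", "roles": ["Editor"], "department": "legal"}
assert can_edit_document(editor, {"created_by": "u2", "department": "legal"})
assert not can_edit_document(editor, {"created_by": "u2", "department": "hr"})
```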

See also:

In Obsidian, roles and permissions work together hierarchically. A **permission** (also called a privilege) is the most basic unit—a specific action you can perform, like `users:read` or `settings:write`. A **role** is a named collection of permissions—for example, an "Admin" role might include all permissions while a "Viewer" role only includes read permissions. You assign roles to users, and those users automatically receive all the permissions in that role. **Policies** add conditional logic—they can modify or override permissions based on context, such as "allow editing only if you're the document owner." This layered approach keeps access control both manageable and flexible.

See also:

Currently, permissions are always assigned through roles—you cannot grant individual permissions directly to a user without that permission being part of a role. This design is intentional and follows industry best practices: it prevents permission sprawl and makes access audits much simpler. Instead of tracking "User A has 47 individual permissions," you track "User A has the Editor role." If you need to create a unique permission set, you can create a custom role with exactly those permissions. We may add direct permission grants in the future for advanced use cases, but the role-based model handles the vast majority of scenarios cleanly.

See also:

When you attempt an action, Obsidian evaluates applicable policies in real-time. First, we load all policies for your organization (these are cached for performance). We then filter to policies relevant to your specific action—for example, policies about document editing when you try to edit a document. These filtered policies are evaluated in priority order; a high-priority policy's decision takes precedence. Each policy contains rules that check conditions like "user department equals resource department" or "request time is within business hours." If the first matching policy allows the action, you proceed; if it denies, you're blocked. If no policy matches, access is denied by default (fail-closed security).
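
The evaluation pipeline described above (filter to relevant policies, sort by priority, first match wins, fail closed) reduces to a short loop. The policy shape and rule callables here are a sketch, not Obsidian's actual policy engine:

```python
def evaluate(policies, request):
    # Step 1: keep only policies relevant to this resource type.
    relevant = [p for p in policies if p["resource"] == request["resource"]]
    # Step 2: evaluate in priority order; the first policy whose rules
    # all match determines the outcome.
    for policy in sorted(relevant, key=lambda p: p["priority"]):
        if all(rule(request) for rule in policy["rules"]):
            return policy["effect"]
    # Step 3: no matching policy means deny (fail-closed security).
    return "deny"

policies = [
    {"resource": "document", "priority": 10, "effect": "allow",
     "rules": [lambda r: r["user_dept"] == r["resource_dept"]]},
]
assert evaluate(policies, {"resource": "document",
                           "user_dept": "legal", "resource_dept": "legal"}) == "allow"
assert evaluate(policies, {"resource": "document",
                           "user_dept": "legal", "resource_dept": "hr"}) == "deny"
```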

See also:

Policy decisions are made consistently across all Obsidian servers. Each API server runs identical policy evaluation logic and caches policy definitions for performance. When you modify a policy, the cache is invalidated, and all servers pick up the change on their next evaluation. This centralized approach ensures that access decisions are consistent regardless of which server handles your request. You won't encounter situations where one server allows an action while another denies it—the source of truth is always the central database, and all servers evaluate policies identically.

See also:

Policies in Obsidian can reference a wide variety of attributes for sophisticated access control. You can base decisions on user attributes (their department, role, location), resource attributes (who created it, when, what type it is), or environmental context (current time, the user's IP address, geographic location). This enables powerful rules like "Allow access to financial data only during business hours from corporate IP ranges" or "Allow users to edit resources only in their own department." The policy language supports comparison operators including equals, contains, matches (regex), between (ranges), and set membership, giving you precise control over access decisions.
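
The comparison operators listed above map naturally onto a small dispatch table. Operator names and condition shapes are illustrative stand-ins for the actual policy language:

```python
import re

OPERATORS = {
    "equals":   lambda actual, expected: actual == expected,
    "contains": lambda actual, expected: expected in actual,
    "matches":  lambda actual, pattern: re.fullmatch(pattern, actual) is not None,
    "between":  lambda actual, bounds: bounds[0] <= actual <= bounds[1],
    "in":       lambda actual, allowed: actual in allowed,   # set membership
}

def check(op, actual, expected):
    return OPERATORS[op](actual, expected)

assert check("equals", "finance", "finance")
assert check("between", 14, (9, 17))                        # hour within business hours
assert check("matches", "10.0.5.12", r"10\.0\.\d+\.\d+")    # corporate IP range
assert check("in", "frontend", {"frontend", "backend"})
```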

See also:

Deny rules in Obsidian are powerful tools for creating exceptions and restrictions. Policies are evaluated in priority order (you assign priorities when creating policies), and the first matching policy determines the outcome. This means you can create a high-priority deny rule that overrides more permissive policies. For example, you might have a policy allowing all editors to modify documents, but a higher-priority deny policy blocking modifications to archived documents. If multiple policies could match, the one with the lowest priority number wins. If no policy matches at all, access is denied—Obsidian defaults to denying access rather than allowing it, which is the secure approach.

See also:

Each policy has a priority number, and lower numbers mean higher priority. When evaluating access, Obsidian sorts applicable policies by priority and evaluates them in order. The first policy whose conditions match determines the outcome—remaining policies are not evaluated. This "first match wins" model makes policy behavior predictable: you can create a high-priority (low number) deny policy to create hard blocks, then lower-priority allow policies for general access. If you're unsure what priority to use, common practice is to use multiples of 10 (10, 20, 30) to leave room for inserting new policies between existing ones.
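
A concrete illustration of first-match-wins with the multiples-of-10 convention: a priority-10 deny for archived documents sits in front of a priority-20 allow for editors. Both rules are hypothetical:

```python
policies = [
    {"priority": 10, "effect": "deny",
     "matches": lambda r: r["resource_state"] == "archived"},
    {"priority": 20, "effect": "allow",
     "matches": lambda r: "Editor" in r["roles"]},
]

def decide(request):
    for p in sorted(policies, key=lambda p: p["priority"]):   # lower number first
        if p["matches"](request):
            return p["effect"]   # first match wins; later policies are skipped
    return "deny"                # no match: fail closed

assert decide({"resource_state": "archived", "roles": ["Editor"]}) == "deny"
assert decide({"resource_state": "active", "roles": ["Editor"]}) == "allow"
assert decide({"resource_state": "active", "roles": []}) == "deny"
```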

See also:

Policy changes are tracked in our audit log, so you have a record of who changed what and when. However, we don't currently maintain full version history with the ability to view or restore previous versions of a policy. If you make a mistake, you would need to manually recreate the previous policy state. We're planning to add proper policy versioning that would let you view the history of each policy, compare versions, and roll back to a previous version with one click. For now, we recommend documenting significant policies outside the system and testing policy changes in a non-production environment first.

See also:

Obsidian provides comprehensive policy testing and lifecycle management tools. You can test policies through a dedicated API endpoint that evaluates your policies against simulated requests, letting you see exactly what the decision would be before deploying changes. Our decision logging captures every policy evaluation including which policies matched and why, giving you full visibility into your authorization decisions. For policy management, the import/export services enable you to back up, version, and migrate policies between environments, supporting review workflows and change management. Combined with the policy cache system, you have complete control over your authorization policy lifecycle.

See also:
Domain 7

Roles, Groups, and Permissions

Permissions in Obsidian are highly granular, following a `resource:action` naming convention. Each permission represents a specific action on a specific resource type. You can grant `users:read` (view users) without granting `users:write` (modify users) or `users:delete` (remove users). Beyond the built-in permissions, you can create custom permissions for your application's specific needs. This granularity lets you implement least-privilege access—users get exactly the permissions they need and nothing more.

See also:

Obsidian permissions combine both concepts: they specify an action (what you can do) on a resource (what you can do it to). The format is always `resource:action`, like `documents:read` or `settings:manage`. The `*` wildcard is supported for both—`users:*` means all actions on users, while `*:read` could theoretically mean read access to everything. However, explicit permissions are preferred for clarity. This combined approach gives you precise control while keeping the permission model understandable.
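
Matching a granted `resource:action` string (with `*` wildcards) against a required permission is a simple comparison on each half. A sketch, assuming the two-segment format described above:

```python
def permission_matches(granted: str, required: str) -> bool:
    """True if a granted permission (possibly wildcarded) covers the required one."""
    granted_resource, granted_action = granted.split(":")
    required_resource, required_action = required.split(":")
    return (granted_resource in ("*", required_resource)
            and granted_action in ("*", required_action))

assert permission_matches("users:*", "users:delete")      # all actions on users
assert permission_matches("*:read", "documents:read")     # read on everything
assert not permission_matches("users:read", "users:write")
```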

See also:

Obsidian doesn't currently support formal role inheritance (where "Manager" automatically includes all "Employee" permissions). However, you can achieve similar results by assigning multiple roles to a user—all permissions from all assigned roles are combined. For example, instead of making "Admin" inherit from "Editor," you'd assign both roles to admin users. We're considering adding true role inheritance for convenience, but the current multi-role assignment model is well-understood and provides equivalent capability.

See also:

Groups in Obsidian let you organize users into logical collections—like departments, teams, or project groups. Each group is scoped to your organization and can have any number of members. The real power of groups comes in policy integration: your ABAC policies can reference group membership as an attribute. For example, you could create a policy that allows users to access resources belonging to any group they're a member of. Groups provide a dynamic way to manage access without creating a new role for every team or project.

See also:

Currently, groups in Obsidian have a flat structure—you cannot create subgroups or group hierarchies (like "Engineering" containing "Frontend" and "Backend" subgroups). Each group is independent. If you need to represent hierarchies, you would create separate groups and assign users to all applicable groups. We may add nested groups in the future for organizations with complex structures, but for now, flat groups with multiple-group membership handles most use cases.

See also:

Obsidian uses a smart hybrid approach for performance without sacrificing security. Your basic role permissions are embedded in your access token, so we don't need a database query for every request. These tokens refresh every 15 minutes (by default), picking up any permission changes. For ABAC policy evaluation, policy definitions are cached per organization, but evaluation happens fresh each request using current context. This means role changes take effect within 15 minutes automatically, and policy changes take effect immediately.

See also:

Permission sprawl—where users accumulate more permissions than they need over time—is controlled through several mechanisms. We provide predefined role templates for common use cases, encouraging reuse rather than creating custom roles for each user. All role and permission changes are logged for audit, helping you identify excessive grants during security reviews. Only administrators can assign roles, preventing users from self-granting. We're developing additional governance features including analytics to show which permissions are actually being used (to identify over-provisioning) and automated detection of orphaned permissions.

See also:

Yes, role assignments can have expiration dates. When you assign a role to a user, you can specify a start date (for future activation) and an end date (for automatic expiration). This is perfect for temporary access scenarios—a contractor who needs admin access for a month, or a project team member who should have elevated permissions only during the project. When the end date passes, the role is automatically excluded from the user's effective permissions without any manual intervention needed.
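
Computing a user's effective roles from dated assignments is a filter over start and end dates. The assignment shape here is illustrative:

```python
from datetime import date

assignments = [
    {"role": "Admin",  "starts": date(2024, 6, 1), "ends": date(2024, 6, 30)},
    {"role": "Viewer", "starts": None, "ends": None},   # permanent assignment
]

def effective_roles(assignments, today):
    """Roles whose window (if any) includes today; expired roles drop out automatically."""
    return {a["role"] for a in assignments
            if (a["starts"] is None or a["starts"] <= today)
            and (a["ends"] is None or today <= a["ends"])}

assert effective_roles(assignments, date(2024, 6, 15)) == {"Admin", "Viewer"}
assert effective_roles(assignments, date(2024, 7, 1)) == {"Viewer"}   # Admin expired
```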

See also:

When users join your organization, they're automatically assigned a default role configured in your organization's settings. This ensures new users never start with zero permissions, without requiring manual role assignment for each one. You can configure which role serves as your default—commonly a "Member" or "Viewer" role with basic permissions. The very first user in a new organization is automatically granted the Admin role since someone needs administrative access to set up the organization. Additional roles can be assigned manually by administrators afterward.

See also:

All Obsidian environments (development, staging, production) use the same role structure—roles are defined in the database schema and apply consistently. If you need different role configurations per environment, you would set up each environment's database with different seed data. For example, you might have a "Developer Tools" role in development and staging but not in production. This is managed through environment-specific database seeding rather than environment-aware role toggles in the application. This approach ensures production security isn't accidentally compromised by development-only roles.

See also:
Domain 8

Policy Creation & Management

Only users with explicit policy management permissions can create, modify, or delete policies in your organization. Typically, this is limited to administrators. The `policies:manage` permission is required, which you can include in custom roles if you want to delegate policy management to non-admin users. Importantly, users can only manage policies within their own organization—there's no way to accidentally or intentionally modify another organization's access control rules.

See also:

Every policy change in Obsidian is recorded in the audit log. When you create, update, or delete a policy, an audit event captures who made the change, when it was made, and what changed. Policy deletions are logged at WARNING severity given their impact. This audit trail is essential for compliance and troubleshooting—if access suddenly breaks or is unexpectedly granted, you can review the policy change history to understand what happened.

See also:

Currently, Obsidian doesn't have a built-in rollback feature for policies. If you need to undo a policy change, you would need to manually recreate the previous configuration. We log all policy changes in the audit trail, so you can see what the policy looked like before, but restoring it is a manual process. We're planning to add policy versioning with one-click rollback capability. In the meantime, we recommend keeping documentation of important policies or using infrastructure-as-code approaches where you define policies in version-controlled configuration files.

See also:

Before saving any policy, Obsidian validates its structure to catch syntax errors and invalid configurations. All changes are logged so you can track modifications. For testing, you can evaluate policy decisions without enforcement to see what would happen. However, we don't currently analyze the impact of a policy change before you make it—for example, warning you that "this change will affect 500 users." Impact analysis tooling is on our roadmap. For now, we recommend testing policy changes in a non-production environment and reviewing audit logs after changes to verify expected behavior.

See also:

Currently, when you save a policy, it takes effect immediately—there's no draft state or approval process. This is efficient for rapid iteration but may not meet the needs of heavily regulated environments that require change control processes. We're developing an approval workflow feature where policies could be created in draft form, reviewed by designated approvers, and activated only after approval. This will also include scheduled activation for changes that need to take effect at specific times.

See also:

All policy operations are available through our REST API, enabling infrastructure-as-code approaches. You can create, read, update, and delete policies programmatically, which is perfect for automating policy management, synchronizing policies across environments, or managing policies through version-controlled configuration files. The API uses the same authentication and authorization as the UI—you need appropriate permissions to manage policies. This enables integration with your existing DevOps workflows and change management processes.

See also:

Obsidian provides some basic templating capabilities for policies. New organizations are provisioned with a set of default policies that cover common access patterns. You can duplicate existing policies as a starting point for new ones. However, we don't currently have a formal template library where you could browse and apply pre-built policies for common scenarios (like "standard read-only access" or "owner-only editing"). Template library features are planned for future releases.

See also:

Obsidian validates policy structure when you save—syntax errors, invalid operators, or malformed rules are caught and rejected. We also warn about priority conflicts if two policies have the same priority number. However, we don't currently detect logical issues like conflicting policies (one allows what another denies), redundant rules, or unreachable policies that would never match due to higher-priority ones. Advanced policy analysis is on our roadmap. For complex policy sets, we recommend documenting your intended logic and periodically reviewing policies to identify redundancies.

See also:

Policies can reference several types of contextual data: information about the request (IP address, user agent), user attributes stored in Obsidian (roles, groups, custom fields), and environmental factors (current time, date). However, we don't currently support calling external systems during policy evaluation—for example, checking an external HR system for department information or calling a risk assessment API. This would add latency to every authorization check, so we're carefully considering the best approach. For now, if you need external data in policies, you would sync that data into Obsidian user attributes.

See also:

Several safeguards help prevent overly permissive policies. Only administrators can create or modify policies, reducing the risk of unauthorized changes. All policy changes are logged for review. The priority system lets you create high-priority deny rules that override more permissive policies—useful for hard blocks that shouldn't be bypassed. However, we don't currently analyze policies for excessive permissiveness or alert you if a policy effectively allows everyone to do everything. Policy risk scoring is planned for future releases. Regular security reviews of your policy set are recommended.

See also:
Domain 9

Encryption & Key Management

Obsidian encrypts sensitive data at the application level using AES-256-GCM, the same encryption standard used by governments and financial institutions. This applies particularly to MFA secrets like TOTP seeds, which must be reversible (unlike passwords which are hashed). The encryption key is 256 bits—the strongest commonly available—making brute-force decryption computationally infeasible. Additional database-level encryption may be configured at the infrastructure level depending on your deployment, providing defense in depth.
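
AES-256-GCM encryption of a reversible secret (such as a TOTP seed) can be sketched with the third-party `cryptography` package. The key handling and seed value are illustrative only; production keys live in a key management system, never inline:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)   # 256-bit key, as described above
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # GCM nonce: unique per encryption
totp_seed = b"JBSWY3DPEHPK3PXP"             # illustrative secret, not a real seed

# GCM is authenticated encryption: the ciphertext carries an integrity tag,
# so tampering is detected at decryption time.
ciphertext = aesgcm.encrypt(nonce, totp_seed, None)
assert aesgcm.decrypt(nonce, ciphertext, None) == totp_seed
```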

See also:

All data transmitted to and from Obsidian is encrypted using TLS 1.2 or higher—the same encryption that protects banking and e-commerce websites. This includes API traffic from your users, connections between our servers and the database, and any internal service-to-service communication. TLS encryption ensures that even if someone intercepts network traffic, they cannot read the contents. In production deployments, HTTPS is mandatory; only local development environments may use unencrypted connections for convenience.

See also:

Currently, Obsidian uses a platform-wide encryption key rather than separate keys per organization. This means all encrypted data across the platform is protected by the same key. While this is secure (the key is still only known to the platform), some enterprise security requirements call for per-tenant keys to provide cryptographic isolation. We're developing per-tenant key management that would give each organization a unique encryption key, ensuring that even at the cryptographic level, one organization's data cannot be decrypted with another's key.

See also:

Different types of secrets receive protection appropriate to their use case. Passwords are stored as bcrypt hashes—one-way functions that can verify correctness but never reveal the original password. MFA secrets (which need to be decrypted for TOTP verification) are encrypted with AES-256-GCM. API keys and session tokens are hashed for secure verification without plaintext storage. Backup codes are hashed like passwords since they're only ever compared, never displayed. This layered approach ensures each secret type has appropriate protection.

See also:

Obsidian does not currently support customer-managed keys (sometimes called "bring your own key" or BYOK). Encryption keys are managed by the platform. For organizations requiring regulatory compliance where key control is mandatory, this may be a consideration. Full customer-managed key support—including integration with AWS KMS, Azure Key Vault, or on-premises HSMs—is on our enterprise roadmap. This would allow you to maintain control of encryption keys while still using Obsidian's services, with the ability to revoke access to your data by revoking the key.

See also:

Obsidian supports key rotation—you can generate a new encryption key and transition to it without losing access to previously encrypted data. Each encrypted value includes a version identifier, so the system knows which key to use for decryption. However, rotation is currently manual; we don't automatically rotate keys on a schedule. We're developing automated key rotation that would rotate keys according to a configurable schedule (such as annually) and automatically re-encrypt data with the new key during low-activity periods.
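
Version-tagged ciphertext is what lets old data stay readable after rotation: the stored value records which key encrypted it. A minimal sketch with placeholder keys (the `v<N>:` prefix format is an assumption for illustration):

```python
KEYRING = {1: "old-key", 2: "current-key"}   # placeholder key material
CURRENT_VERSION = 2

def wrap(ciphertext: bytes) -> bytes:
    """Prefix ciphertext with the version of the key that produced it."""
    return f"v{CURRENT_VERSION}:".encode() + ciphertext

def key_for(stored: bytes):
    """Parse the version tag and look up the matching decryption key."""
    version_tag, ciphertext = stored.split(b":", 1)
    version = int(version_tag[1:])
    return KEYRING[version], ciphertext

stored = wrap(b"\x01\x02")
key, ct = key_for(stored)
assert key == "current-key" and ct == b"\x01\x02"
assert key_for(b"v1:legacy")[0] == "old-key"   # pre-rotation data still decryptable
```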

See also:

If an encryption key were compromised, the response is to immediately generate and deploy a new key, then re-encrypt all affected data. For MFA secrets, this would involve users needing to re-enroll their authenticator apps after the transition. We have documented procedures for this scenario, though it requires operational intervention. We're developing automated incident response capabilities that would streamline key compromise handling, including automated re-encryption and user notification workflows. Prevention through strict key access controls remains the primary defense.

See also:

Password protection and secret encryption are handled by completely different systems. Passwords use bcrypt hashing—an irreversible, one-way function where the password can never be recovered, only verified. MFA secrets like TOTP seeds must be decryptable (we need the actual secret to verify time-based codes), so they use reversible AES-256 encryption with a separate key. These systems are independent: a compromise of the encryption key doesn't expose passwords, and there's no key that could expose passwords since they're hashed, not encrypted.

See also:

Envelope encryption—where each piece of data has its own unique key (DEK), and all DEKs are encrypted by a master key (KEK)—is not currently implemented in Obsidian. This pattern is recommended by cloud security best practices because it limits the blast radius of any single key compromise and enables efficient key rotation. We're evaluating implementing envelope encryption for sensitive data, which would improve our security posture and make key rotation more efficient (only the KEK needs rotation, not re-encryption of all data).
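
The DEK/KEK pattern itself is straightforward to sketch (again using the third-party `cryptography` package; this is the general technique, not an Obsidian implementation):

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

kek = AESGCM.generate_key(bit_length=256)       # master key (KEK)

def envelope_encrypt(plaintext: bytes):
    dek = AESGCM.generate_key(bit_length=256)   # fresh data key per record
    data_nonce, key_nonce = os.urandom(12), os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(data_nonce, plaintext, None)
    # Only the wrapped DEK is re-encrypted on KEK rotation, not the data itself.
    wrapped_dek = AESGCM(kek).encrypt(key_nonce, dek, None)
    return {"ct": ciphertext, "data_nonce": data_nonce,
            "wrapped_dek": wrapped_dek, "key_nonce": key_nonce}

def envelope_decrypt(record):
    dek = AESGCM(kek).decrypt(record["key_nonce"], record["wrapped_dek"], None)
    return AESGCM(dek).decrypt(record["data_nonce"], record["ct"], None)

record = envelope_encrypt(b"sensitive value")
assert envelope_decrypt(record) == b"sensitive value"
```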

See also:

Since Obsidian encrypts sensitive data before storing it in the database, that data remains encrypted in any database backup—we don't decrypt, backup, and re-encrypt. Database backups themselves can be additionally encrypted depending on your infrastructure configuration (AWS RDS encryption, Azure encryption, or pg_dump encryption options). Cloud storage for backups typically offers its own encryption layer. We're developing verification tooling to automatically audit that all backup systems have appropriate encryption enabled, providing documented assurance for compliance.

See also:
Domain 10

Audit Logs & Compliance

Obsidian maintains comprehensive audit logs covering all security-relevant events. Authentication events (successful logins, failed attempts, logouts) track who accessed the system and when. MFA events record multi-factor authentication challenges and completions. Session events track session creation, revocation, and expiration. Administrative events capture configuration changes, policy modifications, and special actions like user impersonation. Each event includes the actor, timestamp, affected resource, and relevant metadata. This audit trail is essential for security investigations, compliance audits, and understanding system activity.

See also:

Audit logs in Obsidian are designed as append-only—there's no API or normal mechanism to modify or delete them. Once an event is recorded, it stays recorded. However, true immutability (with cryptographic guarantees that tampering is detectable) is not yet implemented. Someone with direct database access could theoretically modify logs without detection. For compliance frameworks requiring certified immutability, we're developing cryptographic chaining (where each log entry includes a hash of the previous entry) and database-level controls to prevent deletion.
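
The hash-chaining approach mentioned above can be sketched in a few lines: each entry commits to the hash of the previous one, so any later modification breaks the chain. This is the general technique, not Obsidian's shipped implementation:

```python
import hashlib
import json

def append(log, event):
    prev = log[-1]["hash"] if log else "0" * 64
    entry = {"event": event, "prev": prev}
    entry["hash"] = hashlib.sha256(
        (prev + json.dumps(event, sort_keys=True)).encode()).hexdigest()
    log.append(entry)

def verify(log):
    """Recompute the chain; any edited or reordered entry breaks verification."""
    prev = "0" * 64
    for entry in log:
        expected = hashlib.sha256(
            (prev + json.dumps(entry["event"], sort_keys=True)).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != expected:
            return False
        prev = entry["hash"]
    return True

log = []
append(log, {"action": "login", "user": "u1"})
append(log, {"action": "policy.update", "user": "admin"})
assert verify(log)
log[0]["event"]["user"] = "u2"   # tamper with an earlier entry
assert not verify(log)           # the chain exposes the modification
```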

See also:

Audit logs can be exported through our API for analysis, archival, or integration with your SIEM (Security Information and Event Management) system. You can query logs with filters for date range, action type, user, and other criteria. Results are returned in JSON format with pagination for handling large result sets. This enables you to pull audit data into your existing security monitoring infrastructure, create custom reports, or archive logs to long-term storage for compliance retention requirements.

See also:

By default, Obsidian retains audit logs indefinitely—we don't automatically delete old logs. This is the safest default for compliance, as many regulations require long retention periods. Retention periods are configurable per organization in the settings. However, automated enforcement (actually deleting logs after the retention period expires) is not yet implemented. You would need to periodically export and archive old logs if storage is a concern, then potentially use direct database access for cleanup. Automated retention enforcement is on our roadmap.

See also:

Absolutely. Every audit event is tagged with the organization it belongs to. When you view audit logs, you only see events from your organization—there's no way to access another organization's audit trail. Platform-level events (like system health checks) have no tenant association and are only visible to platform administrators. This isolation ensures your audit data is as protected as any other data in the system.

See also:

Obsidian supports real-time audit log streaming via WebSocket connections. Your security monitoring tools can subscribe to audit events and receive them immediately as they occur, enabling real-time alerting and dashboards. This is valuable for security operations centers (SOCs) that need immediate visibility into authentication events, privilege changes, or suspicious activities. The streaming connection is authenticated and tenant-scoped, ensuring you only receive events from your organization.

See also:

Audit logs are protected through several mechanisms. There's no API endpoint to modify or delete log entries. The logging service uses a "fire-and-forget" pattern where business logic cannot suppress or modify audit events. In production, database users don't have DELETE permissions on audit tables. However, true tamper-evidence (where any modification would be cryptographically detectable) is not yet implemented. For the highest security requirements, we're developing hash chaining and external backup capabilities to ensure audit integrity is verifiable.

See also:

Administrative actions receive special treatment in audit logs. Events are tagged with the actor type—USER, ADMIN, SYSTEM, API_KEY, or SERVICE—making it easy to filter for administrative actions. Admin actions typically have elevated severity (WARNING or CRITICAL) for visibility in log analysis. Sensitive operations like user impersonation generate explicit audit events that include who performed the impersonation, who was impersonated, and why. This enables clear oversight of privileged access.

See also:

Our audit API supports flexible querying with multiple filter options. You can filter by action type (login events, policy changes), actor (specific user's actions), resource (all events affecting a specific entity), severity level, and time range. Results are paginated using cursor-based pagination, which handles large result sets efficiently. This API enables custom reporting, integration with SIEM systems, investigation workflows, and automated compliance checking.
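
From the client side, cursor-based pagination is a loop that follows the cursor until it runs out. The page shape (`items`, `next_cursor`) and the local `fetch_page` stand-in are illustrative; consult the audit API reference for the real field names:

```python
EVENTS = [{"id": i, "action": "login"} for i in range(1, 8)]   # sample data

def fetch_page(cursor=0, limit=3):
    """Stand-in for one audit API call returning a page plus a cursor."""
    items = EVENTS[cursor:cursor + limit]
    next_cursor = cursor + limit if cursor + limit < len(EVENTS) else None
    return {"items": items, "next_cursor": next_cursor}

def fetch_all():
    cursor, collected = 0, []
    while cursor is not None:               # follow cursors until exhausted
        page = fetch_page(cursor)
        collected.extend(page["items"])
        cursor = page["next_cursor"]
    return collected

assert [e["id"] for e in fetch_all()] == [1, 2, 3, 4, 5, 6, 7]
```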

See also:

Obsidian's audit logging meets many SOC2 and ISO 27001 requirements out of the box: comprehensive event capture, accurate timestamps, actor identification, and tenant isolation. However, achieving full certification compliance may require additional measures that we're developing. These include tamper-evident storage (cryptographic proof that logs haven't been modified), automated retention policy enforcement, and external backup/archival to meet data protection requirements. We recommend discussing specific compliance requirements with our team to understand the current state versus your needs.

See also:
Domain 11

Threat Modeling & Abuse Scenarios

Obsidian is designed with security as a core principle, addressing threats identified by OWASP and industry IAM security standards. We protect against authentication bypass through MFA and rate limiting, session hijacking through CSRF tokens and token binding, privilege escalation through RBAC and tenant isolation, and data leakage through encryption and strict tenant scoping. While these protections are comprehensive, we don't currently maintain a formal, published threat model document. For enterprise customers requiring formal threat analysis, we're developing comprehensive threat modeling documentation following industry frameworks like STRIDE.

See also:

Privilege confusion attacks attempt to trick the system into granting access intended for a different context. We prevent this through multiple mechanisms. Your access token includes an explicit audience claim that must match the target system. Your session is permanently bound to your organization, preventing context switches. Every resource is tagged with its owning organization, and guards verify your context matches before allowing access. This ensures that even if an attacker attempts to present credentials in the wrong context, access will be denied.

See also:

Horizontal privilege escalation—accessing another user's resources at the same privilege level—is prevented through strict validation. Every database query includes your organization's identifier as a filter. Before any operation on a resource, we verify that the resource belongs to your organization. We don't rely on "security by obscurity" with resource IDs; even if you know another resource's ID, attempting to access it will be rejected and logged. This multi-layer approach ensures you can only access resources within your organization.

See also:

Vertical privilege escalation—gaining higher privileges than assigned—is prevented through strict role enforcement. Before any operation, your permissions are verified against the required privileges. Platform-level operations require the `is_platform_owner` flag, which cannot be self-assigned. The most sensitive operations require recent authentication (step-up auth), preventing exploitation of stale sessions. Role assignments can only be performed by administrators, and those assignments are logged. There's no "self-grant" mechanism that could allow users to elevate their own privileges.

See also:

Insider threats—malicious or compromised internal users—are addressed through several controls. All administrative actions are logged at elevated severity for monitoring. Default roles follow least-privilege principles, limiting what even trusted users can access. Session limits prevent credential sharing. However, automated detection of anomalous insider behavior (like an admin accessing unusual amounts of data) is not yet implemented. We're developing behavioral analytics to detect potential insider threats and planning approval workflows for sensitive operations.

See also:

If a tenant administrator's account is compromised, the damage is contained to that single organization. The attacker cannot access platform-level systems, other organizations' data, or elevate to platform administrator. Within the affected organization, they would have administrator access until contained. Response options include immediate session revocation (stopping the attacker), account lockout/deactivation, and full audit trail review to understand what was accessed. The attack surface is limited to one organization, and recovery options are available without affecting other organizations.

See also:

All role and permission changes are logged with detailed information about who made the change and what was changed. Bulk operations (granting many roles at once) are logged at WARNING severity for visibility. However, automated anomaly detection—alerting when permission grants deviate from normal patterns—is not yet implemented. Currently, detection requires reviewing audit logs manually or through your SIEM. We're developing intelligent alerting that would automatically flag unusual permission patterns.

See also:

Replay attacks—capturing and re-using valid requests—are defended against at multiple levels. Access tokens expire quickly (15 minutes by default), limiting the window for replay. Each token has a unique identifier (JTI), and revoked tokens can be blacklisted before natural expiration. Refresh tokens are rotated on each use, so a captured token can only be used once. State-changing operations require CSRF tokens that are validated and single-use. Even if an attacker captures a request, these mechanisms prevent or severely limit the ability to replay it.
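Two of the defenses above — short expiry and the JTI blacklist — can be sketched as a single check run on every request (a simplified illustration, not Obsidian's implementation):

```typescript
interface AccessToken { jti: string; exp: number } // exp: unix seconds

// Populated when a token is revoked; entries can be pruned once past exp.
const revokedJtis = new Set<string>();

function isTokenUsable(token: AccessToken, nowSeconds: number): boolean {
  if (nowSeconds >= token.exp) return false;    // replay window closed by expiry
  if (revokedJtis.has(token.jti)) return false; // blacklisted before natural expiry
  return true;
}
```

Because every token carries a unique JTI, revocation is surgical: one stolen token can be invalidated without disturbing any other session.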

See also:

Session replay across regions is defended through multiple implemented mechanisms. First, geo-velocity detection identifies "impossible travel"—if your session is used from London and then Tokyo 10 minutes later, that's flagged as suspicious. Second, the DeviceBindingGuard cryptographically binds sessions to specific devices using ECDSA P-256 signatures, meaning a stolen token cannot be replayed from a different device. Third, our risk scoring engine evaluates geographic anomalies as part of overall login risk assessment. Optional strict geo-binding (locking sessions to their originating region) is a potential future enhancement, but the existing device binding provides strong protection against session replay attacks.
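The "impossible travel" idea reduces to comparing implied speed against what any flight could achieve. A self-contained sketch (the 1,000 km/h threshold and function names are illustrative assumptions):

```typescript
interface GeoEvent { lat: number; lon: number; timestampMs: number }

// Great-circle distance via the haversine formula.
function distanceKm(a: GeoEvent, b: GeoEvent): number {
  const R = 6371; // Earth radius, km
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(b.lat - a.lat);
  const dLon = rad(b.lon - a.lon);
  const h =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(a.lat)) * Math.cos(rad(b.lat)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(h));
}

function isImpossibleTravel(prev: GeoEvent, next: GeoEvent, maxKmh = 1000): boolean {
  const hours = (next.timestampMs - prev.timestampMs) / 3_600_000;
  if (hours <= 0) return true; // simultaneous or out-of-order events: suspicious
  return distanceKm(prev, next) / hours > maxKmh;
}

// London -> Tokyo in 10 minutes implies a speed far beyond any flight: flagged.
```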

See also:

If tokens are suspected to be compromised, immediate response options are available. Sessions can be revoked instantly through the admin API, preventing the refresh token from being used. Active access tokens can be blacklisted via the JTI blacklist, invalidating them before natural expiration. The affected user will need to re-authenticate when their current token expires or is blacklisted. The full audit trail shows all token usage, enabling investigation of what was accessed with the compromised credentials. These capabilities enable rapid containment of token theft incidents.

See also:
Domain 12

Authorization Evaluation Internals

Authorization checks happen early in the request lifecycle, before any business logic executes. When a request arrives, it first passes through middleware (parsing, CSRF validation), then through a series of guards that verify authentication (is the token valid?), authorization (does the user have required permissions?), and any custom checks. Only after passing all guards does the request reach the actual business logic. This "fail fast" approach means unauthorized requests are rejected immediately, before consuming unnecessary resources or risking any unintended data access.
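The fail-fast pipeline can be sketched as a chain of predicates that each must approve before the controller runs (a simplified model; guard and handler names are hypothetical):

```typescript
interface RequestContext { token?: string; permissions: string[] }
type Guard = (ctx: RequestContext) => boolean;

const authGuard: Guard = ctx => ctx.token !== undefined; // is a valid token present?
const requirePermission = (p: string): Guard =>
  ctx => ctx.permissions.includes(p);                    // does the user hold it?

function handle(ctx: RequestContext, guards: Guard[], controller: () => string): string {
  for (const g of guards) {
    if (!g(ctx)) return "403 Forbidden"; // first rejection stops the request
  }
  return controller(); // business logic runs only after every guard passes
}

const guards = [authGuard, requirePermission("users:read")];
handle({ token: "t", permissions: ["users:read"] }, guards, () => "200 OK"); // "200 OK"
handle({ permissions: [] }, guards, () => "200 OK"); // "403 Forbidden"
```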

See also:

Authorization is enforced before your request reaches the business logic that would process it. While the route is matched first (so the system knows which endpoint you're trying to access), the authorization guards execute before the actual controller method runs. This means unauthorized requests never execute any business logic—they're rejected as soon as the guard determines the request isn't authorized. This is safer than checking authorization within the business logic, as it eliminates the risk of accidentally executing logic before the check.

See also:

If something goes wrong during authorization evaluation—a database error, an unexpected exception, or any technical failure—the system denies access. This "fail-closed" behavior is a security best practice: when in doubt, deny access and log an error for investigation. You'll never see a situation where a system failure grants access that shouldn't be granted. Failures are logged at error severity for operational monitoring, but from a security perspective, the most important aspect is that authorization never "fails open."

See also:

Obsidian is designed to fail closed—when anything is uncertain, access is denied. If no policy matches your request, access is denied (not granted by default). If an error occurs during evaluation, access is denied. If required context is missing, access is denied. This may occasionally cause inconvenience when a configuration error blocks legitimate access, but it ensures that misconfigurations and failures never accidentally grant access. A lock that fails open is no lock at all; Obsidian's authorization errs on the side of security.
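In code, fail-closed evaluation is a narrow funnel: only an explicit, successful "allow" grants access, and every other path — no match, missing context, or an exception — denies (an illustrative sketch, not the actual evaluator):

```typescript
type Decision = "allow" | "deny";

// policyLookup returns true/false for an explicit decision, undefined when
// no policy matched, and may throw on infrastructure failure.
function evaluate(policyLookup: () => boolean | undefined): Decision {
  try {
    const matched = policyLookup();
    if (matched === undefined) return "deny"; // no matching policy: default deny
    return matched ? "allow" : "deny";
  } catch {
    return "deny"; // any evaluation error fails closed (and is logged)
  }
}

evaluate(() => true);                          // "allow" -- explicit grant only
evaluate(() => undefined);                     // "deny"  -- nothing matched
evaluate(() => { throw new Error("db down"); }); // "deny" -- failure never opens access
```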

See also:

When permissions change, there's a brief window where different parts of the system might see different states. This is managed through several mechanisms. Permissions embedded in your access token remain consistent until token refresh (15 minutes max). Policy caches are invalidated immediately on update. Database operations use transactions to ensure atomicity. For most use cases, this is sufficient. The brief window between permission change and token refresh is mitigated by short token TTLs and the ability to force-revoke sessions for immediate enforcement.

See also:

Authorization decisions involve some degree of eventual consistency—permission changes don't instantly propagate to every active session. We manage this through short access token lifetimes (15 minutes by default), so permissions naturally refresh frequently. Policy caches have configurable TTLs that balance performance with freshness. When immediate enforcement is needed, sessions can be revoked or access tokens blacklisted, forcing users to re-authenticate and receive fresh permissions. For most security-sensitive changes, this provides timely enforcement without sacrificing performance.

See also:

Obsidian is designed to protect all routes by default—you must explicitly opt routes out of authorization using the `@Public()` decorator. This "secure by default" approach means accidentally exposing an unprotected endpoint is much harder than with systems where you must remember to add protection. Guards inspect route decorator metadata to determine the required authorization level. However, we don't currently have an automated route registry or verification tooling that would audit all routes for appropriate protection. This tooling is on our roadmap to provide CI/CD checks that flag any routes without proper authorization configuration.
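"Secure by default" hinges on the direction of the metadata check: absence of a flag means protected. A minimal sketch of what a `@Public()` decorator records and how a guard reads it (names and shapes are hypothetical):

```typescript
const PUBLIC_KEY = "isPublic";

interface Route { path: string; metadata: Record<string, boolean> }

// What an explicit @Public() decorator would record on the route.
function markPublic(route: Route): Route {
  return { ...route, metadata: { ...route.metadata, [PUBLIC_KEY]: true } };
}

// The guard treats missing metadata as "protected" -- forgetting a
// decorator cannot accidentally expose an endpoint.
function requiresAuth(route: Route): boolean {
  return route.metadata[PUBLIC_KEY] !== true;
}

const login = markPublic({ path: "/auth/login", metadata: {} });
const users = { path: "/users", metadata: {} }; // no decorator: protected
requiresAuth(login); // false -- explicitly opted out
requiresAuth(users); // true  -- the default
```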

See also:

Currently, endpoint protection is validated through code review (reviewers check for proper guards), integration tests (tests verify endpoints reject unauthorized requests), and route tracking. However, this relies on human diligence. We're developing automated scanning that would run during CI/CD, flagging any endpoints that lack proper authorization decorators. This would provide a safety net ensuring every endpoint has intentional authorization configuration—either protected or explicitly marked as public with documented justification.

See also:

Multiple safeguards prevent developers from accidentally (or intentionally) bypassing authorization. Global guards apply to all routes by default—a developer can't "forget" to add protection; they'd have to explicitly disable it. Making a route public requires an explicit `@Public()` decorator that's visible in code review. Linting rules can flag suspicious patterns. The PR review process includes security review for authorization-related changes. While no system is foolproof, these layered safeguards make it difficult to accidentally introduce unprotected endpoints.

See also:

Authorization enforcement is currently tested through integration tests (automated tests that verify specific endpoints require proper authentication and authorization), behavioral tests that validate live API behavior, and periodic security audits. However, we don't have comprehensive coverage reporting showing what percentage of endpoints have authorization tests, or automated fuzz testing that might discover bypass vulnerabilities. Enhanced authorization test coverage is on our roadmap to provide greater confidence in our security posture.

See also:
Domain 13

SDK & Client-Side Security

The Obsidian SDK incorporates security features to help developers build secure applications. TypeScript definitions catch common errors at compile time. Token management is handled automatically—the SDK refreshes tokens and manages session lifecycle without developer intervention. CSRF protection is built in; the SDK automatically includes CSRF tokens on state-changing requests. Secure defaults are applied (HttpOnly cookies, secure transport requirements) so developers don't need to remember to enable security features. The SDK is designed to make the secure path the easy path.

See also:

While the SDK handles many security concerns, some responsibilities remain with your application. If your application stores tokens (rather than using HTTP-only cookies), securing that storage is your responsibility. Preventing XSS attacks in your application code requires proper output encoding and content security policies. Input validation (beyond what Obsidian validates) is your responsibility. Network security configuration (TLS setup, certificate management) depends on your infrastructure. The SDK documentation clearly delineates these responsibilities so you know what's handled and what requires your attention.

See also:

The SDK is designed to make misuse difficult. TypeScript prevents calling methods with wrong parameter types or missing required arguments—these errors appear at compile time, not runtime. The API is designed so correct usage is intuitive and incorrect usage feels awkward. Documentation includes security guidelines and best practices. When configuration errors do occur, error messages clearly explain what's wrong and how to fix it. While determined developers can always find ways to misuse a tool, the SDK is designed to guide developers toward secure patterns.

See also:

Trust depends on where the SDK runs. The server-side SDK runs in your controlled backend environment and can be trusted like any of your own code. The browser SDK runs in users' browsers—an environment you don't control—and therefore cannot be fully trusted. This is why Obsidian validates all inputs server-side regardless of what the SDK sends. Never rely on client-side SDK validation for security; it provides user experience benefits but can always be bypassed. The server is the ultimate arbiter of what's allowed.

See also:

We use multiple layers to protect tokens on the client side. Session cookies are marked HttpOnly, preventing JavaScript access—even if an XSS attack executes in your page, it cannot read the session token. Access tokens have short lifetimes, limiting the damage window if somehow exposed. CSRF tokens prevent attackers from using your cookies from a malicious site. The Secure flag ensures tokens only travel over HTTPS, preventing interception on insecure networks. These layered protections significantly reduce token exposure risk.

See also:

The SDK works well in browsers using cookie-based authentication since browsers handle cookie security automatically. For mobile applications (React Native), you'll need to implement secure token storage using platform-specific secure storage mechanisms (iOS Keychain, Android Keystore). Using the SDK from environments you don't control (like third-party servers) is not recommended without additional security measures. We're developing mobile-specific SDK variants with built-in secure storage integration to simplify mobile development.

See also:

Browser SDK security leverages modern web security features. The same-origin proxy pattern routes API calls through your domain, eliminating CORS complexity and keeping authentication cookies secure. CSRF protection requires tokens on all state-changing operations. Cookies are configured with SameSite=Lax (prevents cross-site use), Secure (HTTPS only), and HttpOnly (no JavaScript access). We recommend implementing Content Security Policy headers in your application to further protect against XSS attacks. Together, these measures provide comprehensive browser security.
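The cookie attributes listed above map directly onto the options a framework serializes into a `Set-Cookie` header. A sketch of that serialization (option names follow common HTTP conventions; this is illustrative, not the SDK's code):

```typescript
interface CookieOptions {
  httpOnly: boolean;
  secure: boolean;
  sameSite: "Lax" | "Strict" | "None";
  path: string;
}

const sessionCookieOptions: CookieOptions = {
  httpOnly: true,  // invisible to JavaScript, even under XSS
  secure: true,    // sent only over HTTPS
  sameSite: "Lax", // withheld on cross-site unsafe requests
  path: "/",
};

function serializeCookie(name: string, value: string, o: CookieOptions): string {
  let s = `${name}=${value}; Path=${o.path}; SameSite=${o.sameSite}`;
  if (o.secure) s += "; Secure";
  if (o.httpOnly) s += "; HttpOnly";
  return s;
}

serializeCookie("sid", "abc123", sessionCookieOptions);
// → "sid=abc123; Path=/; SameSite=Lax; Secure; HttpOnly"
```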

See also:

Each token issued by Obsidian contains a unique identifier (JTI) that allows that specific token to be tracked and blacklisted if needed. Short expiration times (15 minutes default) limit the window during which a captured token could be replayed. The JTI blacklist enables immediate invalidation before natural expiration. Refresh tokens are rotated on use, so even if captured between refreshes, they become invalid after legitimate use. These mechanisms make token replay attacks much more difficult.

See also:

SDK versions are published through NPM with standard verification. TypeScript compilation catches incompatibilities between your code and the SDK version you're using. Changelogs document what changed between versions, including breaking changes. However, we don't currently enforce minimum SDK versions at runtime—an old SDK will still work if the underlying API hasn't changed. We're considering adding version negotiation where the server could warn or reject outdated SDK versions that have known security issues.

See also:

If we discovered a security vulnerability in an SDK version, we would immediately deprecate the affected version on NPM and publish a security advisory on GitHub with details and remediation guidance. The recommended safe version would be clearly documented. However, we cannot currently force applications to update—they would continue using the vulnerable version until they choose to update. We're developing capabilities to detect clients using vulnerable SDK versions and potentially block them, though this requires careful balance between security and avoiding service disruption for legitimate users.

See also:
Domain 14

API Security & Abuse Prevention

Obsidian actively monitors for API abuse through multiple mechanisms. Rate limiting restricts how quickly you can call endpoints, blocking automated attacks. All API requests are logged for pattern analysis. Our risk scoring engine evaluates request patterns, flagging suspicious behavior like rapid-fire attempts or unusual access patterns. Repeated authentication failures trigger account lockout. These layered defenses work together to detect and block abuse while allowing normal usage. When abuse is detected, responses include appropriate HTTP status codes that allow well-behaved clients to back off gracefully.

See also:

Rate limits operate at multiple levels for comprehensive protection. IP-based limits catch unauthenticated abuse and distributed attacks. Identity-based limits prevent any single user from monopolizing resources, even if distributing requests across IPs. Tenant-based limits ensure one organization's heavy usage doesn't impact others. These limits are carefully calibrated to allow normal usage patterns while blocking abuse. Legitimate users rarely encounter rate limits; if you do, it's usually a sign of misconfigured automation or a potential security issue worth investigating.
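The layered limits can be sketched as the same counter applied under three different keys, where a request must pass all three (limits and key prefixes are illustrative; a real limiter would also reset per time window):

```typescript
class FixedWindowLimiter {
  private counts = new Map<string, number>();
  constructor(private readonly limit: number) {}

  // Count this request against the key; admit while under the limit.
  allow(key: string): boolean {
    const n = (this.counts.get(key) ?? 0) + 1;
    this.counts.set(key, n);
    return n <= this.limit;
  }
}

const perIp = new FixedWindowLimiter(100);
const perUser = new FixedWindowLimiter(60);
const perTenant = new FixedWindowLimiter(1000);

function admit(ip: string, userId: string, tenantId: string): boolean {
  // All three layers must agree; the first exhausted layer rejects.
  return perIp.allow(`ip:${ip}`) &&
    perUser.allow(`user:${userId}`) &&
    perTenant.allow(`tenant:${tenantId}`);
}
```

Keying by IP, identity, and tenant separately is what closes the gap each single layer leaves: a botnet rotating IPs still hits the identity limit, and a single noisy user still cannot exhaust their whole organization's budget alone.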

See also:

API keys are protected with a "show once" model—the full key is displayed only when created, then only a hash is stored. Even if our database were compromised, attackers couldn't recover your API keys. Keys can be rotated seamlessly; create a new key, update your systems, then revoke the old one—both work during the transition. Keys can have expiration dates for automatic rotation. If an API key is exposed (committed to a repository, shared accidentally), rotate it immediately through the admin portal. We recommend regular rotation even without suspected compromise.
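The "show once" model can be sketched as generate-hash-discard. This illustration uses a plain SHA-256; a production system might prefer an HMAC with a server-side pepper and a constant-time comparison (e.g. `crypto.timingSafeEqual`), and the `obs_` prefix is a made-up example:

```typescript
import { createHash, randomBytes } from "node:crypto";

function createApiKey(): { plaintext: string; storedHash: string } {
  const plaintext = `obs_${randomBytes(24).toString("hex")}`; // displayed once, never stored
  const storedHash = createHash("sha256").update(plaintext).digest("hex");
  return { plaintext, storedHash }; // only storedHash is persisted
}

function verifyApiKey(presented: string, storedHash: string): boolean {
  // Recompute the hash of the presented key and compare with the stored one;
  // the plaintext is never needed server-side after creation.
  return createHash("sha256").update(presented).digest("hex") === storedHash;
}
```

Because only the hash is stored, a database compromise yields nothing directly usable as a credential.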

See also:

API key rotation is designed to be seamless. Start by generating a new key—your old key continues working. Update your systems to use the new key. Once you've verified everything works, explicitly revoke the old key. During the transition period, both keys are valid, ensuring no service disruption. The entire process is audit-logged so you have a record of when keys were created and revoked. We recommend periodic rotation (e.g., quarterly) as a security best practice, not just in response to suspected compromise.

See also:

Currently, all API requests are logged with their source IP address, and unusual IPs contribute to risk scores. However, we don't yet support explicit IP whitelisting where you could configure "this API key only works from these IP addresses." This feature is on our enterprise roadmap. For organizations requiring IP restrictions, consider implementing them at your network firewall or API gateway level. Once implemented, IP restrictions will add another layer of security for API keys used from known, static infrastructure.

See also:

Every API request is validated against strict Data Transfer Object (DTO) definitions. Invalid types, missing required fields, or unexpected data are rejected before reaching business logic. CSRF tokens on state-changing requests verify the request originated from a legitimate source. Resource IDs are validated for ownership—you can't tamper with IDs to access resources belonging to others. Type handling is strict: sending a string where a number is expected results in a validation error, not a silent conversion that could cause unexpected behavior.

See also:

Request integrity is validated at multiple levels. JSON parsing uses strict mode, rejecting malformed JSON immediately. DTO validation verifies the structure matches expected schemas with proper types and constraints. CSRF tokens prove the request comes from your authenticated session, not a malicious site. Content-Type headers are enforced, preventing content-type confusion attacks. Headers, cookies, and body content are all validated before processing. Requests that fail any validation check receive clear error messages explaining the issue.

See also:

Mass assignment attacks—where attackers submit extra fields hoping to modify protected attributes—are prevented by strict allowlisting. Each endpoint's DTO explicitly defines which fields are accepted; any field outside that allowlist is silently dropped before processing. Sensitive entity fields (like `id`, `created_at`, `tenant_id`) are additionally protected at the entity level and cannot be set via the API. Update operations (PATCH) explicitly check which fields are being modified against the allowed lists.
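Stripped to its core, the allowlisting amounts to copying only declared fields out of the request body (a sketch; real DTO frameworks such as class-validator/class-transformer do this declaratively):

```typescript
function pickAllowed(
  body: Record<string, unknown>,
  allowed: readonly string[],
): Record<string, unknown> {
  const accepted: Record<string, unknown> = {};
  for (const field of allowed) {
    if (field in body) accepted[field] = body[field]; // only declared fields survive
  }
  return accepted;
}

// An attacker smuggling protected attributes gets them silently dropped:
pickAllowed(
  { name: "Alice", is_platform_owner: true, tenant_id: "org_b" },
  ["name", "email"],
); // → { name: "Alice" }
```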

See also:

Insecure Direct Object Reference attacks—where attackers guess or enumerate IDs to access other users' resources—are comprehensively prevented. Before any resource is returned, we verify the resource belongs to your organization. We use UUIDs for external references, which are practically impossible to guess or enumerate (unlike sequential integers). All database queries include your organization's ID as a filter, so you can't even accidentally retrieve another organization's data. Authorization checks occur before data retrieval, not after, so unauthorized resources are never loaded.

See also:

Hostile payloads are handled safely at multiple levels. Request body size limits prevent oversized payloads from consuming resources. Validation runs before any processing, rejecting malformed requests immediately. Error responses use generic messages that don't leak implementation details or stack traces to attackers. Malformed and suspicious requests are logged for security monitoring—patterns of malformed requests might indicate an attack in progress. All of this happens before body content reaches business logic, protecting against parser exploits and injection attacks.

See also:
Domain 15

Secrets, Credentials & Sensitive Data

Obsidian treats different types of data with appropriate levels of protection based on their sensitivity. Passwords receive the highest protection (irreversible hashing). Cryptographic secrets like MFA seeds are encrypted at rest. Personal information is tenant-isolated and protected by access controls. Audit data is immutable. While we implement these protections consistently, we're developing formal data classification documentation that explicitly labels data types and their required protections—useful for compliance audits and helping customers understand how their data is handled.

See also:

Secrets never appear in application logs. Our logging system uses structured logging with explicit field selection—this prevents accidentally logging entire objects that might contain sensitive data. Fields known to contain secrets (passwords, tokens, API keys) are explicitly excluded from log output. Even during debugging, secret values are redacted. Audit events record that authentication occurred but never the credentials used. You can safely share application logs with support or store them in log aggregation services without exposing sensitive credentials.
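A minimal sketch of the redaction step (the field list is illustrative; Obsidian's actual logger builds entries from explicitly selected fields rather than scrubbing after the fact):

```typescript
// Fields known to carry secrets -- hypothetical names for illustration.
const SECRET_FIELDS = new Set(["password", "token", "api_key", "mfa_secret"]);

function toLogEntry(event: Record<string, unknown>): Record<string, unknown> {
  const entry: Record<string, unknown> = {};
  for (const [field, value] of Object.entries(event)) {
    entry[field] = SECRET_FIELDS.has(field) ? "[REDACTED]" : value;
  }
  return entry;
}

toLogEntry({ user: "alice", action: "login", password: "hunter2" });
// → { user: "alice", action: "login", password: "[REDACTED]" }
```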

See also:

In production, error responses never include stack traces—users see generic error messages that don't reveal implementation details. For internal logging, stack traces are captured but limited in production environments. We're developing automatic secret scrubbing that would detect and redact patterns matching secrets (like tokens or keys) from any log output, including stack traces. This would provide an additional safety net against accidental exposure in unexpected error conditions.

See also:

Secrets are managed differently per environment. In local development, `.env` files store configuration (these are Git-ignored to prevent accidental commit). In production, secrets are provided through environment variables—the secure standard for containerized and cloud deployments. Some runtime secrets are stored in the encrypted settings table for administrative modification without redeployment. This layered approach keeps development convenient while maintaining production security. Secrets never appear in version control or deployment artifacts.

See also:

Organizations can manage some configuration through tenant settings, including webhook secrets for integrations. However, we don't currently support full tenant-managed secrets like custom encryption keys or organization-provided OAuth credentials for SSO. These features are on our enterprise roadmap. For now, organization-specific configurations like SAML SSO are handled through our configuration system with secrets securely stored and managed by the platform.

See also:

When a secret is suspected compromised, immediate remediation options are available. API keys can be revoked instantly through the admin portal or API—the key becomes invalid immediately. Sessions can be invalidated, forcing the affected user to re-authenticate. Passwords can be force-reset, requiring the user to set a new password. Encryption key rotation requires a more involved process but is documented. The key principle is that compromised credentials should be revoked within minutes of detection, not hours or days.

See also:

Each environment (development, staging, production) uses completely separate secrets. Different encryption keys ensure that data from one environment cannot be decrypted in another. Database credentials are environment-specific. API keys and tokens don't work cross-environment. This isolation means that compromise of development credentials has no impact on production security. It also means development and staging are safe to use for testing without risk to production data.

See also:

Several measures discourage hardcoded secrets. Code review catches obvious violations. Our development patterns document how secrets should be accessed (through environment variables or settings services), providing a clear "right way." Linting can flag suspicious patterns. However, we don't currently have automated secret scanning in pre-commit hooks or CI/CD pipelines. This is a planned enhancement—automated tools like git-secrets or GitHub's secret scanning can catch accidentally committed credentials before they reach the repository.

See also:

CI/CD pipelines receive secrets through secure environment variable injection (GitHub Actions secrets, similar mechanisms on other platforms). Secrets are never stored in the repository or build artifacts. For enhanced enterprise scenarios, we're exploring integration with dedicated secrets management solutions like HashiCorp Vault, which would provide centralized secret storage, automatic rotation, and fine-grained access control across all deployment environments.

See also:

We log authentication events (which involve password verification), administrative access to settings (which might include viewing/modifying configuration), and API key usage. However, we don't have a dedicated "secret access" audit trail that tracks every time the application retrieves an encryption key or secret from storage. For compliance requirements involving comprehensive secret access auditing, we're developing enhanced logging that would capture these internal access patterns and alert on anomalies.

See also:
Domain 16

Backup, Recovery & Disaster Scenarios

Currently, database backups capture all data across all organizations. Data is logically separated by organization ID within these backups, but we don't generate separate backup files per organization. This means restoring one organization's data from backup would require either a full database restore or careful extraction of organization-specific data. We're developing per-organization backup capabilities that would enable isolated backup and restore operations—particularly important for enterprise customers who may need to restore a specific organization without affecting others.

See also:

Backup encryption operates at multiple layers. Data that's encrypted at the application level (like MFA secrets) remains encrypted in backups—we don't decrypt before backup. The backup process itself can be encrypted depending on your infrastructure (pg_dump encryption options, encrypted storage volumes). Cloud storage for backups uses provider encryption (AWS S3 server-side encryption, Azure storage encryption). The combination ensures backups are protected at rest, though the specific implementation depends on your deployment infrastructure.

See also:

Backups are protected through storage access controls (only authorized personnel/systems can access backup storage), encryption (backups are encrypted at rest), and retention policies (old backups are deleted per schedule). However, we don't currently maintain a detailed audit trail of backup access or automatically verify backup integrity. These enhancements are planned—backup access auditing would track who accessed backup data and when, and integrity verification would ensure backups haven't been tampered with.

See also:

Currently, restoring a single organization's data requires either a full database restore (which affects all organizations) or manual extraction of organization-specific data from a backup—a time-consuming process. Automated single-tenant restore is on our roadmap. This would allow rapidly restoring one organization's data to a previous point in time without any impact on other organizations. For organizations requiring recovery SLAs, this is a critical feature we're prioritizing.

See also:

With current architecture, a full database restore would affect all organizations. If we needed to restore organization A's data from yesterday, doing so via full restore would roll back organization B's data to yesterday as well. Mitigations include point-in-time recovery (restoring to a separate database, then extracting specific data) and careful operational procedures. Isolated tenant restore capability is planned (see Q154 above) and would eliminate this risk by allowing organization-specific recovery operations.

See also:

We periodically perform restore tests to verify backups are complete and recoverable. PostgreSQL's backup tools include checksums that detect corruption. However, automated, continuous backup verification isn't currently implemented. We're developing automated backup testing that would regularly restore backups to a test environment and verify data integrity without manual intervention. This provides confidence that when you need a backup, it will work.

See also:

When users are deleted, they're marked with a deletion timestamp rather than removed from the database. All active queries filter out deleted users, so they can't log in or appear in user lists. If a backup is restored, the deletion state is part of that backup—a user deleted before the backup won't be "resurrected." The deletion itself is permanently logged in the audit trail, providing evidence the action occurred. This design ensures deletions persist through backup/restore cycles.
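The soft-delete pattern is simple to sketch: deletion writes a timestamp, and every active-user query filters on it (field and function names are illustrative):

```typescript
interface User { id: string; email: string; deleted_at: number | null }

function softDelete(user: User, nowMs: number): User {
  return { ...user, deleted_at: nowMs }; // the row survives; the state is recorded
}

function activeUsers(users: User[]): User[] {
  return users.filter(u => u.deleted_at === null); // deleted users never surface
}

const alice: User = { id: "u1", email: "a@example.com", deleted_at: null };
const bob = softDelete({ id: "u2", email: "b@example.com", deleted_at: null }, Date.now());
const remaining = activeUsers([alice, bob]); // only alice -- bob stays deleted
```

Because `deleted_at` is ordinary row data, it travels with backups: restoring a backup taken after the deletion restores the deleted state too, which is why deletions survive backup/restore cycles.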

See also:

We provide data export functionality (so users can request a copy of their data) and deletion capabilities (removing user accounts). However, complete GDPR "right to be forgotten" compliance—which may require removing personal data from backups and all derived datasets—is partially implemented. Full PII scrubbing across all tables and backup data handling is on our roadmap. For EU-operating customers with strict GDPR requirements, we recommend discussing specific compliance needs with our team.

See also:

Recovery from database corruption depends on your infrastructure. PostgreSQL's Write-Ahead Log (WAL) enables point-in-time recovery to a moment before corruption occurred. Regular backups provide fallback options. If read replicas are configured, failover can restore service while the primary is recovered. We're developing comprehensive disaster recovery runbooks that document specific procedures for various failure scenarios. For production deployments, we recommend discussing DR requirements to ensure appropriate infrastructure and procedures are in place.

See also:

Partial data loss recovery depends on the nature and extent of loss. For recent data, PostgreSQL's transaction log may contain recoverable changes. Table-level or row-level restoration from backups is possible with manual intervention. Some data may be reconstructable from audit logs (which contain snapshots of changes). Automated tooling for partial recovery is on our roadmap. For critical production systems, we recommend a robust backup strategy and discussing recovery procedures before an incident occurs.

See also:
Domain 17

Compliance & Regulatory Boundaries

Users can request a copy of all their personal data through our GDPR data export feature. The export includes profile information, activity history, and other personal data in a machine-readable JSON format. Export requests are logged in the audit trail for compliance documentation. For organization administrators handling data access requests from end-users, the admin portal provides tools to generate these exports. This satisfies the GDPR Article 15 right of access and Article 20 right to data portability.

See also:

Currently, all data is stored in a single database region. We don't yet support data residency requirements where specific organizations' data must remain within geographic boundaries (like EU data staying in EU data centers). This is a significant feature on our enterprise roadmap. Data residency would allow configuring per-organization storage locations, ensuring data never leaves specified geographic regions. For organizations with strict data residency requirements, please discuss your needs with our team to understand timeline and alternatives.

See also:

Regional data restrictions are not currently supported—see Q162 above for details on our data residency roadmap. When implemented, organizations will be able to specify that their data must be stored in particular regions (EU, US, APAC, etc.), with routing and storage infrastructure ensuring compliance. This is particularly important for organizations operating in regions with data locality laws.

See also:

Obsidian includes foundational elements for HIPAA-style compliance: comprehensive audit logging for access monitoring, fine-grained RBAC for minimum necessary access, and encryption for data protection. However, we don't currently offer formal Business Associate Agreements (BAAs) or PHI-specific data handling workflows. For healthcare customers or those handling protected health information, please discuss your specific requirements with our team. We're developing enhanced healthcare compliance features for future releases.

See also:

You can configure retention periods through tenant settings. However, automated enforcement—actually deleting data when retention periods expire—is not yet implemented. Currently, data remains until manually purged. We're developing automated retention enforcement that would delete expired data according to your configured policies, with appropriate logging for compliance. This is important for organizations with data lifecycle requirements from regulations like GDPR or industry standards.
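The check that automated enforcement would run is simple to sketch: a record is eligible for purging once its age exceeds the configured retention window. The names below (`retentionDays`, `createdAt`) are illustrative assumptions about how such a job might be written, since this feature is not yet implemented.

```typescript
// Sketch of a retention-expiry check, assuming a per-tenant retentionDays
// setting and a createdAt timestamp on each record.
function isExpired(createdAt: Date, retentionDays: number, now: Date = new Date()): boolean {
  const ageMs = now.getTime() - createdAt.getTime();
  return ageMs > retentionDays * 24 * 60 * 60 * 1000;
}

// A purge job would select expired rows, delete them, and write an audit
// entry for each deletion to document compliance.
function selectExpired<T extends { createdAt: Date }>(
  rows: T[],
  retentionDays: number,
  now: Date,
): T[] {
  return rows.filter((r) => isExpired(r.createdAt, retentionDays, now));
}
```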

See also:

For organizations operating under multiple regulatory frameworks (EU GDPR, California CCPA, Canadian PIPEDA, etc.), conflicting requirements can arise. Currently, Obsidian doesn't have a formal framework for detecting or resolving such conflicts—you would need to configure policies that meet your most stringent requirements manually. We're exploring multi-regulation support that would help identify conflicts and suggest resolutions, but this is a complex area requiring careful legal and technical design.

See also:

Organizations can configure several compliance-relevant settings independently: MFA requirements, password complexity policies, session timeouts, and similar security controls. However, we don't currently have comprehensive "compliance profiles" (like "HIPAA-compliant" or "SOX-compliant") that would automatically configure all relevant settings. Each control must be set individually. We're developing compliance profile templates that would allow selecting a compliance framework and automatically applying appropriate configurations.

See also:

You can verify compliance through audit logs (showing policy enforcement in action) and settings inspection (verifying policies are configured correctly). However, we don't have a compliance dashboard that would show an at-a-glance view of your compliance posture or automated checks that verify your configuration meets specific regulatory requirements. These features are planned—they would provide real-time visibility into compliance status and automatically flag configuration gaps.

See also:

During compliance audits, you can provide evidence through exported audit logs, configuration exports, and system documentation. We can provide additional materials on request (architecture diagrams, security documentation). However, we don't currently have pre-built compliance report packages that automatically gather required evidence for specific frameworks (SOC2, ISO 27001, etc.). We're developing audit support packages that would streamline evidence collection and provide auditor-ready documentation.

See also:

You can extract comprehensive compliance evidence through our APIs and tools. Audit logs provide complete event history with filtering options. User access reports show who accessed what and when. Configuration exports document your security settings. GDPR export functionality retrieves user personal data. These can be integrated with your compliance management systems or exported for auditors. While some automation is still in development, the raw data needed for most compliance demonstrations is available for extraction.

See also:
Domain 18

Governance & Change Control

Changes are documented in our changelog with each release. API versioning (v1, v2, etc.) allows us to introduce new versions without immediately breaking existing integrations. However, we don't currently have automated notifications for breaking changes—you would need to review release notes. We're developing proactive notification features that would alert you before and after breaking changes, with deprecation timelines for any affected functionality. In the meantime, we recommend subscribing to release announcements and reviewing changelogs.

See also:

Policies are always evaluated under the platform's latest behavior; there is no way to "pin" an organization to an older policy evaluation model if we update how policies work. Changes to policy evaluation are carefully tested to maintain backward compatibility, but there's no formal version pinning. If policy behavior changed in a breaking way (rare), all organizations would receive the change simultaneously. Policy versioning with pinning capability is a potential future enhancement for organizations requiring tight change control.

See also:

We maintain backward compatibility through API versioning—when breaking changes are necessary, they're introduced in a new API version (v2, v3, etc.) while the previous version continues to work. Changes within a version are additive: we might add new fields but won't remove or rename existing ones. When migration is required, we maintain both old and new patterns during a transition period. This approach minimizes disruption to your integrations while still allowing the platform to evolve.
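The "additive within a version" rule can be illustrated with a pair of response serializers. The record and field names here are hypothetical, but the pattern is the point: the v1 shape stays frozen, and v2 extends it without removing or renaming anything v1 promised.

```typescript
// Hypothetical user record; field names are assumptions for illustration.
interface User {
  id: string;
  fullName: string;
  createdAt: string;
}

// The v1 response shape stays frozen: existing fields are never removed
// or renamed within the version.
function toV1(u: User) {
  return { id: u.id, name: u.fullName };
}

// v2 may add fields (here, createdAt) while still carrying everything
// the v1 contract promised.
function toV2(u: User) {
  return { ...toV1(u), createdAt: u.createdAt };
}
```

Integrations pinned to v1 keep working untouched; clients that want the new field opt in by calling v2.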

See also:

Database schema changes are handled through TypeORM migrations—code files that define exactly what changes will be made. Each migration runs within a transaction, so if something fails, the entire change is rolled back rather than leaving the database in an inconsistent state. Every migration has a "down" function for rollback if needed. Migrations go through code review before being merged, with particular scrutiny for data-affecting changes. This disciplined approach ensures schema changes are safe, predictable, and reversible.
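A migration in this style is a small class with paired `up` and `down` methods. The sketch below uses local stand-ins for TypeORM's `MigrationInterface` and `QueryRunner` so it is self-contained; in a real project those come from the `typeorm` package, and the table and column names here are hypothetical.

```typescript
// Minimal stand-ins for TypeORM's interfaces, so this sketch runs on its
// own. In a real project: import { MigrationInterface, QueryRunner } from "typeorm";
interface QueryRunner {
  query(sql: string): Promise<unknown>;
}
interface MigrationInterface {
  up(queryRunner: QueryRunner): Promise<void>;
  down(queryRunner: QueryRunner): Promise<void>;
}

// Hypothetical migration: the timestamp suffix is TypeORM's naming convention.
export class AddRetentionDays1700000000000 implements MigrationInterface {
  public async up(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE tenant_settings ADD COLUMN retention_days integer`,
    );
  }

  // The "down" path makes the change reversible if a rollback is needed.
  public async down(queryRunner: QueryRunner): Promise<void> {
    await queryRunner.query(
      `ALTER TABLE tenant_settings DROP COLUMN retention_days`,
    );
  }
}
```

Because each migration runs inside a transaction, a failure in `up` leaves the schema exactly as it was, and a reviewed `down` gives operators a tested escape hatch.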

See also:

Obsidian is a SaaS platform where all organizations run on the same version. You cannot opt out of platform updates or stay on an older version. This is the standard SaaS model: it delivers consistent security updates, reduces operational complexity, and lets support assist customers without version fragmentation. We carefully test updates for backward compatibility and follow staged rollout practices. For organizations requiring absolute version control, on-premises deployment may be more appropriate.

See also:

Configuration is stored in the database rather than files, ensuring all servers see identical settings. All configuration changes are audit logged, creating a trail of who changed what. For infrastructure configuration, standard DevOps practices apply (Infrastructure as Code, GitOps). We don't currently have automated drift detection that would alert you if manual changes were made outside normal processes. For organizations requiring strict configuration management, we recommend implementing change control procedures and reviewing audit logs regularly.

See also:

Code changes go through pull request review before merging. However, configuration changes made by administrators within the application are logged but not subject to approval workflows—they take effect immediately. We're developing in-app approval workflows where sensitive changes (like policy modifications or role assignments) could require approval from multiple parties before taking effect. This is important for organizations with separation-of-duties requirements.

See also:

Currently, any administrator with policy management permissions can create, modify, or delete policies without additional approval. For organizations in regulated industries where separation of duties is required, this is a gap. We're developing multi-party approval workflows that would allow requiring multiple administrators to approve policy changes before they take effect. This would support scenarios like "policy changes require approval from both Security and Compliance teams."

See also:

Destructive actions in the UI require confirmation dialogs. All operations are logged for review. Bulk operations are rate-limited to prevent runaway scripts. However, an administrator could still accidentally revoke many sessions or roles without additional approval gates. We're developing enhanced safeguards for bulk operations, including thresholds that would require additional confirmation or approval for actions affecting many records. Additionally, we're exploring undo/rollback capabilities for bulk changes.

See also:

All governance-related actions are comprehensively audited. Configuration changes record what was changed, by whom, and when. Role assignments and revocations are logged with the target user and the roles affected. Policy changes include before/after states. Administrative actions are logged at elevated severity for visibility in monitoring. This audit trail supports compliance requirements, incident investigation, and general governance oversight. You can query these logs through the audit API or export for your compliance tools.

See also:
Domain 19

Platform Admin & Superuser Controls

Platform super-administrators have complete control over the Obsidian installation. They can access any organization's data (for support purposes), modify platform-wide settings, manage users across all organizations, impersonate users for troubleshooting, and view comprehensive audit logs. This power is necessary for platform operations but carries significant responsibility. Platform owner accounts should be limited to essential personnel, protected with strong MFA, and their actions are logged at the highest severity level for oversight.

See also:

All super-admin actions are logged at elevated severity for monitoring. The admin portal is separate from regular user portals, reducing the chance of accidental cross-context actions. Admin endpoints explicitly check for the platform owner flag. However, there are no approval gates for sensitive actions—a super-admin can act unilaterally. We're developing privilege elevation controls that would require approval for the most sensitive actions, along with time-limited elevation (similar to cloud IAM models that grant privileged access as short-lived, re-authenticated sessions).

See also:

"Break glass" access—emergency access that bypasses normal controls when critical—is not formally implemented. Currently, platform owners always have full access. A formal break glass model would require that high-privilege access is normally disabled, only available through an emergency procedure that requires approval, creates extensive audit trails, and automatically expires. This is on our roadmap for organizations requiring the tightest access controls where even platform admins shouldn't have routine access to sensitive operations.

See also:

Since formal break glass procedures aren't implemented, there's no break glass-specific auditing. Platform admin actions are always logged, but there's no distinction between normal admin activity and emergency access. When break glass is implemented, it will include comprehensive auditing: who requested emergency access, who approved it, what actions were taken, and when access was automatically revoked. See Q183 for the planned implementation.

See also:

Platform administrators can impersonate tenant users for support and troubleshooting purposes. When impersonating, the admin sees exactly what the user would see, which is invaluable for diagnosing user-reported issues. These sessions are clearly marked as impersonated both in audit logs and in the UI. Every impersonation is logged at CRITICAL severity with full details. This capability is essential for support but is controlled and audited to prevent abuse—impersonation events should be reviewed regularly.

See also:

Only platform owners can impersonate—tenant admins cannot impersonate their own users (they must use other support mechanisms). Every impersonation creates an audit entry visible in reporting. Impersonated sessions are marked so any actions taken are clearly attributable to impersonation rather than the actual user. Importantly, impersonation respects the target user's permissions—admins can't use impersonation to bypass authorization. They see exactly what the user can access, nothing more.
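The key invariant, that impersonation never grants more than the target user can do, can be sketched in a few lines. All names here (`Session`, `impersonatedBy`, the permission strings) are illustrative assumptions, not the platform's actual types.

```typescript
// Sketch of the impersonation principle: the session carries the target
// user's permissions, never the admin's, and is flagged for auditing.
interface Session {
  userId: string;
  permissions: string[];
  impersonatedBy?: string; // present only on impersonated sessions
}

function startImpersonation(
  adminId: string,
  target: { id: string; permissions: string[] },
): Session {
  return {
    userId: target.id,
    permissions: [...target.permissions], // admin's own permissions are NOT merged in
    impersonatedBy: adminId,              // marks the session in audit logs and the UI
  };
}
```

Because the permission set is copied from the target and nothing else, authorization checks behave identically whether the real user or an impersonating admin is driving the session.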

See also:

Platform administrators can view and modify any organization's policies for support and operational purposes. These modifications are logged as admin actions with high severity. When modifying tenant policies, the admin operates within that tenant's context—the changes are associated with the tenant, not the platform level. This capability is necessary for support scenarios where policies are misconfigured, but it represents significant power that should be monitored through audit log review.

See also:

Platform abuse is prevented primarily through limiting who has platform owner access, comprehensive audit logging, and code-level controls reviewed during development. However, we don't have automated detection of abusive patterns (like an admin accessing unusual amounts of data). We're developing behavioral analytics that would flag anomalous admin activity—for example, an admin who suddenly starts accessing many more tenants than usual. For now, regular audit log review is the primary detection mechanism for potential abuse.

See also:

Platform-level logs (system health, admin actions across tenants) are stored with a null tenant ID, separating them from tenant-specific logs. When you query your organization's logs, platform-level entries are automatically excluded—you only see logs relevant to your organization. Conversely, platform admins can view both platform logs and tenant logs, but tenants can never see each other's logs or platform-level logs. This isolation maintains appropriate visibility boundaries.
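A hedged sketch of this visibility rule: tenant queries filter to their own tenant ID and exclude the `null` (platform) entries, while platform admins see everything. Field names are assumptions for illustration.

```typescript
// Platform-level entries carry tenantId = null; tenant-scoped queries
// must both match the viewer's tenant and exclude null entries.
interface LogEntry {
  tenantId: string | null;
  message: string;
}

function visibleLogs(
  logs: LogEntry[],
  viewer: { tenantId: string | null; isPlatformAdmin: boolean },
): LogEntry[] {
  if (viewer.isPlatformAdmin) return logs; // platform admins see all entries
  return logs.filter(
    (l) => l.tenantId !== null && l.tenantId === viewer.tenantId,
  );
}
```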

See also:

Revoking platform admin access is straightforward and immediate. Setting the platform owner flag to false removes admin capabilities. For complete revocation, all of the user's sessions should also be revoked, forcing immediate re-authentication (which will now have regular, not admin, privileges). The revocation itself is logged at CRITICAL severity. The change takes effect immediately—the next request from the former admin will be evaluated against their new (reduced) permissions. There's no delay or cache that would allow continued access.
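The "no delay or cache" property comes from checking the flag against the authoritative store on every request. This is a simplified sketch of that pattern (the in-memory `Map` stands in for the users table; names are assumptions).

```typescript
// Stand-in for the authoritative user store (e.g., the users table).
const ownerFlags = new Map<string, boolean>();

// The flag is read fresh on each request, never cached in the session,
// so a revocation takes effect on the very next request.
function isPlatformOwner(userId: string): boolean {
  return ownerFlags.get(userId) === true;
}

function handleAdminRequest(userId: string): "ok" | "forbidden" {
  return isPlatformOwner(userId) ? "ok" : "forbidden";
}
```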

See also:
Domain 20

Observability & Detection

Every login attempt is evaluated for suspicious patterns. Our risk scoring engine considers IP reputation, geographic location, login history, and behavioral patterns. Geo-velocity detection flags impossible travel—a login from Tokyo followed by one from London 10 minutes later is highly suspicious. Repeated failed attempts trigger account lockout. The audit log retains all login events, allowing you to query patterns and investigate anomalies. These combined mechanisms detect many common attack patterns like credential stuffing, account takeover attempts, and session hijacking.
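The geo-velocity check reduces to simple arithmetic: great-circle distance between two login locations divided by the time between them. This sketch uses the haversine formula; the 1000 km/h threshold is an illustrative assumption, not the platform's tuned value.

```typescript
// Great-circle distance between two coordinates, in kilometers.
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const toRad = (d: number) => (d * Math.PI) / 180;
  const R = 6371; // mean Earth radius, km
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * R * Math.asin(Math.sqrt(a));
}

// Flag a login pair whose implied travel speed exceeds the threshold.
function isImpossibleTravel(
  prev: { lat: number; lon: number; at: Date },
  next: { lat: number; lon: number; at: Date },
  maxKmh = 1000, // illustrative threshold, roughly airliner speed
): boolean {
  const hours = (next.at.getTime() - prev.at.getTime()) / 3_600_000;
  if (hours <= 0) return true; // simultaneous logins from two places
  return haversineKm(prev.lat, prev.lon, next.lat, next.lon) / hours > maxKmh;
}
```

Tokyo to London is roughly 9,500 km, so covering it in 10 minutes implies a speed far beyond any threshold and the pair is flagged.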

See also:

Security events stream in real-time via WebSocket for dashboards and monitoring tools. The admin portal displays recent security activity. Critical events (like account lockout) can trigger email notifications. However, you cannot currently configure custom alert rules (e.g., "alert me if more than 10 failed logins occur in an hour") or route alerts to tools like Slack or PagerDuty directly from Obsidian. These integrations are on our roadmap. For now, organizations with advanced alerting needs typically export audit events to their SIEM or monitoring platform.

See also:

You can configure webhooks that receive audit events, optionally filtering by event type. This allows pushing events to your own systems for alerting. However, we don't have an in-app alert rule builder where you could configure conditions like "alert when failed logins exceed threshold" without external tooling. Most organizations integrate with their existing SIEM or monitoring platform (Splunk, DataDog, etc.) for custom alert rules. Native alert customization is planned for future releases.
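The event-type filter itself is a small predicate evaluated before delivery. This is a sketch under assumed names (`eventTypes` as an optional allow-list), not the actual webhook schema.

```typescript
// Assumed webhook configuration: an optional allow-list of event types.
interface Webhook {
  url: string;
  eventTypes?: string[]; // omitted → receive every audit event
}

// Deliver when no filter is configured, or when the event type matches.
function shouldDeliver(hook: Webhook, eventType: string): boolean {
  return !hook.eventTypes || hook.eventTypes.includes(eventType);
}
```

Your receiving endpoint can then apply whatever threshold logic you need (counting failed logins per hour, for example) before raising an alert in your own tooling.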

See also:

Some automated responses are built in: repeated failed logins trigger account lockout, and certain risk detections can trigger session revocation. However, you cannot currently configure custom automated responses (like "if suspicious activity detected, require MFA step-up" or "if impossible travel, revoke all sessions"). Security orchestration and automated response (SOAR) capabilities are on our enterprise roadmap. For now, automated responses beyond built-in behaviors require integration with external security tools.
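The built-in lockout behavior can be sketched as a per-user failure counter that trips at a threshold. The threshold of 5 and the reset-on-success rule are illustrative assumptions here, not the platform's configured values.

```typescript
// Sketch of failed-login lockout: N consecutive failures lock the account.
class LockoutTracker {
  private failures = new Map<string, number>();
  constructor(private readonly threshold = 5) {} // illustrative threshold

  // Returns true when this failure crosses the threshold and the
  // account should be locked.
  recordFailure(userId: string): boolean {
    const n = (this.failures.get(userId) ?? 0) + 1;
    this.failures.set(userId, n);
    return n >= this.threshold;
  }

  // A successful login resets the consecutive-failure counter.
  recordSuccess(userId: string): void {
    this.failures.delete(userId);
  }
}
```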

See also:

Events are correlated through shared identifiers—session ID links all events within a session, user ID connects all events for a user, and request IDs can trace individual request flows. However, we don't currently implement full distributed tracing (OpenTelemetry or similar) that would provide complete request traces across all system components. For organizations integrating Obsidian with other systems, these shared identifiers enable basic correlation. Full distributed tracing is planned for enhanced observability.
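Correlation via shared identifiers is just filtering on a common key. The event shape below is an illustrative assumption; the point is that session ID, user ID, and request ID each define a different correlation axis over the same log stream.

```typescript
// Assumed audit-event shape carrying the three shared identifiers.
interface AuditEvent {
  sessionId: string;
  userId: string;
  requestId: string;
  type: string;
}

// All events within one session (e.g., to reconstruct what a session did).
function eventsForSession(events: AuditEvent[], sessionId: string): AuditEvent[] {
  return events.filter((e) => e.sessionId === sessionId);
}

// All events for one user, across sessions.
function eventsForUser(events: AuditEvent[], userId: string): AuditEvent[] {
  return events.filter((e) => e.userId === userId);
}
```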

See also:

Security systems sometimes flag legitimate activity as suspicious. When this happens, administrators can review audit logs to understand what triggered the alert and unlock accounts that were incorrectly locked. Risk thresholds can be tuned to reduce false positives (though this may also reduce true positive detection). We don't currently have machine learning-based false positive reduction that would learn your organization's normal patterns. For now, tuning involves manually adjusting thresholds based on your observation of false positive rates.

See also:

You can export comprehensive audit logs via API and stream events to external systems via webhooks. This covers the security event data most organizations need. However, we don't currently expose Prometheus-format metrics per tenant or provide pre-built integrations with common SIEM platforms. Most organizations connect via webhook or API export to their monitoring tools. Formal SIEM integration documentation and metrics export are on our roadmap to simplify security platform integration.

See also:

Policy evaluation time is recorded with each decision in the audit log, allowing you to analyze which policies might be causing slow evaluations. General API latency metrics capture overall response times. However, we don't have dedicated dashboards or alerts for policy evaluation latency specifically. If policy complexity is causing performance issues, you would discover this through general latency monitoring or by analyzing decision logs. Dedicated policy performance metrics are planned for organizations with complex policy sets.

See also:

Every authorization failure (403 response) is logged with details about what was attempted and denied. Reviewing audit logs reveals patterns of repeated access attempts to unauthorized resources. Rate limiting slows down automated bypass attempts. However, we don't currently have real-time alerting specifically for authorization bypass patterns (like "user X just tried to access 100 different resources they're not permitted to reach"). This pattern detection is on our security monitoring roadmap.

See also:

Silent security failures—where security controls fail without obvious errors—are challenging to detect. We log all exceptions, monitor service health, and track error rates via metrics. However, detecting scenarios like "authentication is succeeding when it shouldn't" or "audit logging silently stopped" requires more sophisticated monitoring. We're developing security regression detection that would verify security controls continue operating correctly. For now, regular security testing and audit log review help catch silent failures.

See also: