mirror of
https://github.com/PR0M3TH3AN/SeedPass.git
synced 2025-09-08 07:18:47 +00:00
.gitignore (vendored, 8 lines changed)
@@ -33,3 +33,11 @@ coverage.xml
# Other
.hypothesis
totp_export.json.enc

# src
src/seedpass.egg-info/PKG-INFO
src/seedpass.egg-info/SOURCES.txt
src/seedpass.egg-info/dependency_links.txt
src/seedpass.egg-info/entry_points.txt
src/seedpass.egg-info/top_level.txt
README.md (347 lines)
# SeedPass

**SeedPass** is a secure password generator and manager built on **Bitcoin's BIP-85 standard**. It uses deterministic key derivation to generate **passwords that are never stored**, but can be easily regenerated when needed. By integrating with the **Nostr network**, SeedPass compresses your encrypted vault and splits it into 50 KB chunks. Each chunk is published as a parameterised replaceable event (`kind 30071`), with a manifest (`kind 30070`) describing the snapshot and deltas (`kind 30072`) capturing changes between snapshots. This allows secure password recovery across devices without exposing your data.
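The snapshot scheme described above can be sketched in a few lines. This is an illustration of the compress-split-manifest idea only: the field names, hashing, and `ver` value here are assumptions, the real event format lives in the SeedPass source, and in practice the vault bytes are already encrypted before chunking.

```python
import json
import time
import zlib
from hashlib import sha256

CHUNK_SIZE = 50 * 1024  # 50 KB per chunk (one kind-30071 event each)

def make_snapshot(vault_bytes: bytes) -> tuple[dict, list[bytes]]:
    """Compress the vault, split it into 50 KB chunks, and build a
    manifest describing the snapshot (illustrative, not the wire format)."""
    blob = zlib.compress(vault_bytes)
    chunks = [blob[i:i + CHUNK_SIZE] for i in range(0, len(blob), CHUNK_SIZE)]
    manifest = {
        "ver": 1,
        "chunks": [sha256(c).hexdigest() for c in chunks],  # chunk identifiers
        "delta_since": int(time.time()),  # timestamp of the latest delta event
    }
    return manifest, chunks

def restore_snapshot(manifest: dict, chunks: list[bytes]) -> bytes:
    """Verify each chunk against the manifest, then reassemble the vault."""
    for digest, chunk in zip(manifest["chunks"], chunks):
        assert sha256(chunk).hexdigest() == digest, "corrupt chunk"
    return zlib.decompress(b"".join(chunks))
```

A client that fetches the manifest first can detect missing or tampered chunks before attempting a restore, which is why each chunk is identified by its hash here.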
[Tip Jar](https://nostrtipjar.netlify.app/?n=npub16y70nhp56rwzljmr8jhrrzalsx5x495l4whlf8n8zsxww204k8eqrvamnp)
**⚠️ Disclaimer**

This software was not developed by an experienced security expert and should be used with caution. There may be bugs and missing features. Each vault chunk is limited to 50 KB and SeedPass periodically publishes a new snapshot to keep accumulated deltas small. The security of the program's memory management and logs has not been evaluated and may leak sensitive information. Loss or exposure of the parent seed places all derived passwords, accounts, and other artifacts at risk.

---
### Supported OS

✔ Windows 10/11 • macOS 12+ • Any modern Linux

SeedPass now uses the `portalocker` library for cross-platform file locking. No WSL or Cygwin required.
## Table of Contents

- [Features](#features)
- **Deterministic Password Generation:** Utilize BIP-85 for generating deterministic and secure passwords.
- **Encrypted Storage:** All seeds, login passwords, and sensitive index data are encrypted locally.
- **Nostr Integration:** Post and retrieve your encrypted password index to/from the Nostr network.
- **Chunked Snapshots:** Encrypted vaults are compressed and split into 50 KB chunks published as `kind 30071` events with a `kind 30070` manifest and `kind 30072` deltas. The manifest's `delta_since` field stores the UNIX timestamp of the latest delta event.
- **Automatic Checksum Generation:** The script generates and verifies a SHA-256 checksum to detect tampering.
- **Multiple Seed Profiles:** Manage separate seed profiles and switch between them seamlessly.
- **Nested Managed Account Seeds:** SeedPass can derive nested managed account seeds.
- **Export 2FA Codes:** Save all stored TOTP entries to an encrypted JSON file for use with other apps.
- **Display TOTP Codes:** Show all active 2FA codes with a countdown timer.
- **Optional External Backup Location:** Configure a second directory where backups are automatically copied.
- **Auto-Lock on Inactivity:** Vault locks after a configurable timeout for additional security.
- **Quick Unlock:** Optionally skip the password prompt after verifying once.
- **Secret Mode:** Copy retrieved passwords directly to your clipboard and automatically clear it after a delay.
- **Tagging Support:** Organize entries with optional tags and find them quickly via search.
- **Manual Vault Export/Import:** Create encrypted backups or restore them using the CLI or API.
- **Parent Seed Backup:** Securely save an encrypted copy of the master seed.
- **Manual Vault Locking:** Instantly clear keys from memory when needed.
- **Vault Statistics:** View counts for entries and other profile metrics.
- **Change Master Password:** Rotate your encryption password at any time.
- **Checksum Verification Utilities:** Verify or regenerate the script checksum.
- **Relay Management:** List, add, remove or reset configured Nostr relays.
- **Offline Mode:** Disable all Nostr communication for local-only operation.
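The core idea behind the first feature — passwords are re-derived on demand rather than stored — can be illustrated with a toy sketch. This is not SeedPass's actual BIP-85 derivation path, KDF, or character mapping; it only shows why determinism means nothing sensitive needs to be written to disk.

```python
import hashlib
import hmac
import string

# Hypothetical character set for illustration; the real generator differs.
ALPHABET = string.ascii_letters + string.digits + "!@#$%^&*"

def derive_password(parent_seed: bytes, index: int, length: int = 16) -> str:
    """Derive a password deterministically from a parent seed and an index.
    The same (seed, index) pair always yields the same password, so the
    password itself never needs to be stored."""
    child = hmac.new(parent_seed, f"pw/{index}".encode(), hashlib.sha512).digest()
    return "".join(ALPHABET[b % len(ALPHABET)] for b in child[:length])
```

Because derivation is a pure function of the seed and index, losing the vault index costs you labels and metadata, but re-entering the parent seed regenerates every password.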
A small on-screen notification area now shows queued messages for 10 seconds before fading.
## Prerequisites

- **Python 3.8+** (3.11 or 3.12 recommended): Install Python from [python.org](https://www.python.org/downloads/) and be sure to check **"Add Python to PATH"** during setup. Using Python 3.13 is currently discouraged because some dependencies do not ship wheels for it yet, which can cause build failures on Windows unless you install the Visual C++ Build Tools.
  *Windows only:* Install the [Visual Studio Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) and select the **C++ build tools** workload.
## Installation

### Quick Installer

Use the automated installer to download SeedPass and its dependencies in one step.
**Windows (PowerShell):**

```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; $scriptContent = (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/PR0M3TH3AN/SeedPass/main/scripts/install.ps1'); & ([scriptblock]::create($scriptContent))
```
Before running the script, install **Python 3.11** or **3.12** from [python.org](https://www.python.org/downloads/windows/) and tick **"Add Python to PATH"**. You should also install the [Visual Studio Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) with the **C++ build tools** workload so dependencies compile correctly.

The Windows installer will attempt to install Git automatically if it is not already available. It also tries to install Python 3 using `winget`, `choco`, or `scoop` when Python is missing and recognizes the `py` launcher if `python` isn't on your PATH. If these tools are unavailable you'll see a link to download Python directly from <https://www.python.org/downloads/windows/>. When Python 3.13 or newer is detected without the Microsoft C++ build tools, the installer now attempts to download Python 3.12 automatically so you don't have to compile packages from source.

**Note:** If this fallback fails, install Python 3.12 manually or install the [Microsoft Visual C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) and rerun the installer.
### Uninstall

Run the matching uninstaller if you need to remove a previous installation or clean up an old `seedpass` command:

**Linux and macOS:**

```bash
bash -c "$(curl -sSL https://raw.githubusercontent.com/PR0M3TH3AN/SeedPass/main/scripts/uninstall.sh)"
```

If you see a warning that an old executable couldn't be removed, delete the file manually.

**Windows (PowerShell):**

```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; $scriptContent = (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/PR0M3TH3AN/SeedPass/main/scripts/uninstall.ps1'); & ([scriptblock]::create($scriptContent))
```
### Manual Setup

Follow these steps to set up SeedPass on your local machine.

1. **Clone the Repository**

   ```bash
   git clone https://github.com/PR0M3TH3AN/SeedPass.git
   ```

   Navigate to the project directory:

   ```bash
   cd SeedPass
   ```

2. **Create a Virtual Environment**

   It's recommended to use a virtual environment to manage your project's dependencies. Create a virtual environment named `venv`:

   ```bash
   python3 -m venv venv
   ```

3. **Activate the Virtual Environment**

   - **Linux/macOS:**

     ```bash
     source venv/bin/activate
     ```

   - **Windows:**

     ```bash
     venv\Scripts\activate
     ```

   Once activated, your terminal prompt should be prefixed with `(venv)`, indicating that the virtual environment is active.

4. **Install Dependencies**

   Install the required Python packages and build dependencies. When upgrading pip, use `python -m pip` inside the virtual environment so that pip can update itself cleanly:

   ```bash
   python -m pip install --upgrade pip
   python -m pip install -r src/requirements.txt
   python -m pip install -e .
   ```

After reinstalling, run `which seedpass` on Linux/macOS or `where seedpass` on Windows to confirm the command resolves to your virtual environment's `seedpass` executable.
#### Linux Clipboard Support

On Linux, `pyperclip` relies on external utilities like `xclip` or `xsel`. SeedPass will attempt to install **xclip** automatically if neither tool is available. If the automatic installation fails, you can install it manually:

```bash
sudo apt-get install xclip
```
## Quick Start

After installing dependencies and activating your virtual environment, install the package in editable mode so the `seedpass` command is available:

```bash
python -m pip install -e .
```

You can then launch SeedPass and create a backup:

```bash
# Start the application (interactive TUI)
seedpass

# Export your index
seedpass export --file "~/seedpass_backup.json"

seedpass list --filter totp

# on an external drive.
```
For additional command examples, see [docs/advanced_cli.md](docs/advanced_cli.md). Details on the REST API can be found in [docs/api_reference.md](docs/api_reference.md).
### Vault JSON Layout
The encrypted index file `seedpass_entries_db.json.enc` begins with `schema_version`.
> **Note**
>
> Opening a vault created by older versions automatically converts the legacy
> `seedpass_passwords_db.json.enc` (Fernet) to AES-GCM as
> `seedpass_entries_db.json.enc`. The original file is kept with a `.fernet`
> extension.
>
> The same migration occurs for a legacy `parent_seed.enc` encrypted with
> Fernet: it is transparently decrypted, re-encrypted with AES-GCM and the old
> file saved as `parent_seed.enc.fernet`.
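The migration flow the note describes — re-encrypt in place, keep the original under a `.fernet` suffix — can be sketched as below. The two callables are hypothetical stand-ins for the real Fernet and AES-GCM routines; this is the file-handling pattern only, not SeedPass's code.

```python
import tempfile
from pathlib import Path

def migrate_legacy_file(path: Path, decrypt_fernet, encrypt_aes_gcm) -> Path:
    """Re-encrypt a legacy Fernet-encrypted file with AES-GCM in place,
    preserving the original bytes under `<name>.fernet`.
    `decrypt_fernet` and `encrypt_aes_gcm` are stand-in callables."""
    legacy_copy = path.with_name(path.name + ".fernet")
    ciphertext = path.read_bytes()
    legacy_copy.write_bytes(ciphertext)                    # keep the old file
    path.write_bytes(encrypt_aes_gcm(decrypt_fernet(ciphertext)))
    return legacy_copy
```

Keeping the original file means a failed or interrupted migration never loses the only copy of the data.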
## Usage

After successfully installing the dependencies, install the package with:

```bash
python -m pip install -e .
```

Once installed, launch the interactive TUI with:

```bash
seedpass
```

You can also run directly from the repository with:

```bash
python src/main.py
```

You can explore other CLI commands using:

```bash
seedpass --help
```

If this command displays `usage: main.py` instead of the Typer help output, an old `seedpass` executable is still on your `PATH`. Remove it with `pip uninstall seedpass` or delete the stale launcher and rerun:

```bash
python -m pip install -e .
```

You can confirm which executable will run with:

```bash
which seedpass  # or 'where seedpass' on Windows
```

For a full list of commands see [docs/advanced_cli.md](docs/advanced_cli.md). The REST API is described in [docs/api_reference.md](docs/api_reference.md).
### Running the Application

1. **Start the Application:**

   ```bash
   seedpass
   ```

   *(or `python src/main.py` when running directly from the repository)*

2. **Follow the Prompts:**
Example menu:

```bash
Select an option:
1. Add Entry
2. Retrieve Entry
3. Search Entries
4. List Entries
5. Modify an Existing Entry
6. 2FA Codes
7. Settings

Enter your choice (1-7) or press Enter to exit:
```
When choosing **Add Entry**, you can now select from:

SeedPass supports storing more than just passwords and 2FA secrets. You can also store:
- **SSH Key** – deterministically derive an Ed25519 key pair for servers or git hosting platforms.
- **Seed Phrase** – store only the BIP-85 index and word count. The mnemonic is regenerated on demand.
- **PGP Key** – derive an OpenPGP key pair from your master seed.
- **Nostr Key Pair** – store the index used to derive an `npub`/`nsec` pair for Nostr clients. When you retrieve one of these entries, SeedPass can display QR codes for the keys. The `npub` is wrapped in the `nostr:` URI scheme so any client can scan it, while the `nsec` QR is shown only after a security warning.
- **Key/Value** – store a simple key and value for miscellaneous secrets or configuration data.
- **Managed Account** – derive a child seed under the current profile. Loading a managed account switches to a nested profile and the header shows `<parent_fp> > Managed Account > <child_fp>`. Press Enter on the main menu to return to the parent profile.
The table below summarizes the extra fields stored for each entry type. Every entry includes a `label`, while only password entries track a `url`.

| Entry Type | Extra Fields |
|-----------------|-------------------------------------------------------------------------------------------------------------------------------------------------------|
| Password | `username`, `url`, `length`, `archived`, optional `notes`, optional `custom_fields` (may include hidden fields), optional `tags` |
| 2FA (TOTP) | `index` or `secret`, `period`, `digits`, `archived`, optional `notes`, optional `tags` |
| SSH Key | `index`, `archived`, optional `notes`, optional `tags` |
| Seed Phrase | `index`, `word_count` *(mnemonic regenerated; never stored)*, `archived`, optional `notes`, optional `tags` |
| PGP Key | `index`, `key_type`, `archived`, optional `user_id`, optional `notes`, optional `tags` |
| Nostr Key Pair | `index`, `archived`, optional `notes`, optional `tags` |
| Key/Value | `value`, `archived`, optional `notes`, optional `custom_fields`, optional `tags` |
| Managed Account | `index`, `word_count`, `fingerprint`, `archived`, optional `notes`, optional `tags` |
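As an illustration of the 2FA row, a TOTP code like the ones SeedPass displays can be computed from a base32 secret with only the standard library, honoring the stored `period` and `digits` fields. This follows RFC 6238 directly; SeedPass's own implementation may differ in detail.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6, now=None) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the current time step, dynamically
    truncated to `digits` decimal digits."""
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((now if now is not None else time.time()) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation offset
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

With the RFC 6238 test secret (`12345678901234567890` in base32) at T=59s this yields `287082` for six digits, matching the published test vectors.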
### Managing Multiple Seeds

SeedPass allows you to manage multiple seed profiles (previously referred to as "fingerprints"). Each seed profile has its own parent seed and associated data, enabling you to compartmentalize your passwords.

- **Add a New Seed Profile:**
  1. From the main menu, select **Settings** then **Profiles** and choose "Add a New Seed Profile".
  2. Choose to enter an existing seed or generate a new one.
  3. If generating a new seed, you'll be provided with a 12-word BIP-85 seed phrase. **Ensure you write this down and store it securely.**

- **Switch Between Seed Profiles:**
  1. From the **Profiles** menu, select "Switch Seed Profile".
  2. You'll see a list of available seed profiles.
  3. Enter the number corresponding to the seed profile you wish to switch to.
  4. Enter the master password associated with that seed profile.

- **List All Seed Profiles:**
  In the **Profiles** menu, choose "List All Seed Profiles" to view all existing profiles.

**Note:** The term "seed profile" is used to represent different sets of seeds you can manage within SeedPass. This provides an intuitive way to handle multiple identities or sets of passwords.
You can manage your relays and sync with Nostr from the **Settings** menu:

Back in the Settings menu you can:
- Select `3` to change your master password.
- Choose `4` to verify the script checksum.
- Select `5` to generate a new script checksum.
- Choose `6` to back up the parent seed.
- Select `7` to export the database to an encrypted file.
- Choose `8` to import a database from a backup file.
- Select `9` to export all 2FA codes.
- Choose `10` to set an additional backup location. A backup is created immediately after the directory is configured.
- Select `11` to set the PBKDF2 iteration count used for encryption.
- Choose `12` to change the inactivity timeout.
- Select `13` to lock the vault and require re-entry of your password.
- Select `14` to view seed profile stats. The summary lists counts for passwords, TOTP codes, SSH keys, seed phrases, and PGP keys. It also shows whether both the encrypted database and the script itself pass checksum validation.
- Choose `15` to toggle Secret Mode and set the clipboard clear delay.
- Select `16` to toggle Offline Mode and disable Nostr synchronization.
- Choose `17` to toggle Quick Unlock for skipping the password prompt after the first unlock.

Press **Enter** at any time to return to the main menu.
You can adjust these settings directly from the command line:

```bash
seedpass config set kdf_iterations 200000
seedpass config set backup_interval 3600
seedpass config set quick_unlock true
seedpass config set nostr_max_retries 2
seedpass config set nostr_retry_delay 1
```

The default configuration uses **50,000** PBKDF2 iterations. Increase this value for stronger password hashing or lower it for faster startup (not recommended). Offline Mode skips all Nostr communication, keeping your data local until you re-enable syncing. Quick Unlock stores a hashed copy of your password in the encrypted config so that after the initial unlock, subsequent operations won't prompt for the password until you exit the program. Avoid enabling Quick Unlock on shared machines.
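What `kdf_iterations` controls can be shown with a generic PBKDF2 sketch: a higher iteration count makes every password guess proportionally more expensive. This is an illustration only — the salt parameter here is generic (SeedPass itself derives per-password uniqueness from BIP-85 child seeds rather than a conventional salt), and it is not the project's actual key-derivation code.

```python
import hashlib

def hash_master_password(password: str, salt: bytes, kdf_iterations: int = 50_000) -> bytes:
    """Stretch a master password with PBKDF2-HMAC-SHA256.
    `kdf_iterations` is the work factor the Settings menu exposes."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, kdf_iterations)
```

Doubling the iteration count roughly doubles both the unlock time on your machine and the cost of each offline guess by an attacker, which is why lowering it is discouraged.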
## Running Tests

SeedPass includes a small suite of unit tests located under `src/tests`. **Before running `pytest`, be sure to install the test requirements.** Activate your virtual environment and run `pip install -r src/requirements.txt` to ensure all testing dependencies are available. Then run the tests with **pytest**. Use `-vv` to see INFO-level log messages from each passing test:

```bash
pip install -r src/requirements.txt
pytest -vv
```
### Exploring Nostr Index Size Limits

`test_nostr_index_size.py` demonstrates how SeedPass rotates snapshots after too many delta events. Each chunk is limited to 50 KB, so the test gradually grows the vault to observe when a new snapshot is triggered. Use the `NOSTR_TEST_DELAY` environment variable to control the delay between publishes when experimenting with large vaults.

```bash
pytest -vv -s -n 0 src/tests/test_nostr_index_size.py --desktop --max-entries=1000
```
Use the helper script below to populate a profile with sample entries for testing:

```bash
python scripts/generate_test_profile.py --profile demo_profile --count 100
```
The script determines the fingerprint from the generated seed and stores the vault under `~/.seedpass/tests/<fingerprint>`. SeedPass only looks for profiles in `~/.seedpass/`, so move or copy the fingerprint directory out of the `tests` subfolder (or adjust `APP_DIR` in `constants.py`) if you want to load it with the main application. The fingerprint is printed after creation and the encrypted index is published to Nostr. Use that same seed phrase to load SeedPass. The app checks Nostr on startup and pulls any newer snapshot so your vault stays in sync across machines.
### Automatically Updating the Script Checksum

SeedPass stores a SHA-256 checksum for the main program in `~/.seedpass/seedpass_script_checksum.txt`. To keep this value in sync with the source code, install the pre-push git hook:

```bash
pre-commit install -t pre-push
```

After running this command, every `git push` will execute `scripts/update_checksum.py`, updating the checksum file automatically.
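The check itself is plain SHA-256 over the script's bytes. A conceptual sketch (not the project's exact code or file paths):

```python
import hashlib
import tempfile
from pathlib import Path

def script_checksum(script: Path) -> str:
    """Hex SHA-256 digest of the script file, the kind of value kept in
    seedpass_script_checksum.txt."""
    return hashlib.sha256(script.read_bytes()).hexdigest()

def verify_checksum(script: Path, checksum_file: Path) -> bool:
    """Compare the script's current digest against the stored one;
    any byte-level tampering changes the digest."""
    return script_checksum(script) == checksum_file.read_text().strip()
```

Regenerating the stored value after a legitimate code change is exactly what the pre-push hook automates.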
If the checksum file is missing, generate it manually:

Mutation testing is disabled in the GitHub workflow due to reliability issues.
- **Revealing the Parent Seed:** The `vault reveal-parent-seed` command and `/api/v1/parent-seed` endpoint print your seed in plain text. Run them only in a secure environment.
- **Checksum Verification:** Always verify the script's checksum to ensure its integrity and protect against unauthorized modifications.
- **Potential Bugs and Limitations:** Be aware that the software may contain bugs and lacks certain features. Snapshot chunks are capped at 50 KB and the client rotates snapshots after enough delta events accumulate. The security of memory management and logs has not been thoroughly evaluated and may pose risks of leaking sensitive information.
- **Multiple Seeds Management:** While managing multiple seeds adds flexibility, it also increases the responsibility to secure each seed and its associated password.
- **No PBKDF2 Salt Required:** SeedPass deliberately omits an explicit PBKDF2 salt. Every password is derived from a unique 512-bit BIP-85 child seed, which already provides stronger per-password uniqueness than a conventional 128-bit salt.
- **Default KDF Iterations:** New profiles start with 50,000 PBKDF2 iterations. Adjust this with `seedpass config set kdf_iterations`.
- **KDF Iteration Caution:** Lowering `kdf_iterations` makes password cracking easier, while a high `backup_interval` leaves fewer recent backups.
- **Offline Mode:** When enabled, SeedPass skips all Nostr operations so your vault stays local until syncing is turned back on.
- **Quick Unlock:** Stores a hashed copy of your password in the encrypted config so you only need to enter it once per session. Avoid this on shared computers.

## Contributing

Contributions are welcome! If you have suggestions for improvements, bug fixes, or new features, please follow these steps:

1. **Fork the Repository:** Click the "Fork" button on the top right of the repository page.
1. **Create a Branch:** Create a new branch for your feature or bugfix.
   ```bash
   git checkout -b feature/YourFeatureName
   ```
1. **Commit Your Changes:** Make your changes and commit them with clear messages.
   ```bash
   git commit -m "Add feature X"
   ```
1. **Push to GitHub:** Push your changes to your forked repository.
   ```bash
   git push origin feature/YourFeatureName
   ```
1. **Create a Pull Request:** Navigate to the original repository and create a pull request describing your changes.

## License

@@ -496,5 +536,4 @@ For any questions, suggestions, or support, please open an issue on the [GitHub

---

*Stay secure and keep your passwords safe with SeedPass!*
2 docs/.gitattributes vendored Normal file
@@ -0,0 +1,2 @@
# Auto detect text files and perform LF normalization
* text=auto
17 docs/.github/workflows/ci.yml vendored Normal file
@@ -0,0 +1,17 @@
name: CI

on:
  pull_request:
  push:
    branches: [main]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 18
      - run: npm install
      - run: npm test
2 docs/.gitignore vendored Normal file
@@ -0,0 +1,2 @@
_site/
node_modules/
@@ -1,25 +1,55 @@
-# SeedPass Documentation
+# Archivox

-This directory contains supplementary guides for using SeedPass.
+Archivox is a lightweight static site generator aimed at producing documentation sites similar to "Read the Docs". Write your content in Markdown, run the generator, and deploy the static files anywhere.

-## Quick Example: Get a TOTP Code
+[](https://github.com/PR0M3TH3AN/Archivox/actions/workflows/ci.yml)

-Run `seedpass entry get <query>` to retrieve a time-based one-time password (TOTP).
-The `<query>` can be a label, title, or index. A progress bar shows the remaining
-seconds in the current period.
+## Features
+- Markdown based pages with automatic navigation
+- Responsive layout with sidebar and search powered by Lunr.js
+- Simple configuration through `config.yaml`
+- Extensible via plugins and custom templates

+## Getting Started
+Install the dependencies and start the development server:

 ```bash
-$ seedpass entry get "email"
-[##########----------] 15s
-Code: 123456
+npm install
+npm run dev
 ```

-To show all stored TOTP codes with their countdown timers, run:
+The site will be available at `http://localhost:8080`. Edit files inside the `content/` directory to update pages.

+To create a new project from the starter template you can run:

 ```bash
-$ seedpass entry totp-codes
+npx create-archivox my-docs --install
 ```

-## CLI and API Reference
+## Building
+When you are ready to publish your documentation run:

-See [advanced_cli.md](advanced_cli.md) for a list of command examples. Detailed information about the REST API is available in [api_reference.md](api_reference.md). When starting the API, set `SEEDPASS_CORS_ORIGINS` if you need to allow requests from specific web origins.
+```bash
+npm run build
+```

+The generated site is placed in the `_site/` folder.

+## Customization
+- **`config.yaml`** – change the site title, theme options and other settings.
+- **`plugins/`** – add JavaScript files exporting hook functions such as `onPageRendered` to extend the build process.
+- **`templates/`** – modify or replace the Nunjucks templates for full control over the HTML.

+## Hosting
+Upload the contents of `_site/` to any static host. For Netlify you can use the provided `netlify.toml`:

+```toml
+[build]
+  command = "npm run build"
+  publish = "_site"
+```

+## Documentation
+See the files under the `docs/` directory for a full guide to Archivox including an integration tutorial for existing projects.

+Archivox is released under the MIT License.
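The plugin mechanism mentioned above (JavaScript files exporting hook functions) can be sketched as a module that returns modified build data. `onParseMarkdown` matches the hook exercised by the test suite below; the draft-notice behaviour and data shape are assumptions for illustration:

```javascript
// Hypothetical Archivox plugin: export hook functions that receive build data
// and return a (possibly modified) copy. onParseMarkdown mirrors the hook used
// in the tests; the draft-notice behaviour is invented for illustration.
const draftPlugin = {
  onParseMarkdown({ content }) {
    // Prepend a notice to every Markdown page before it is rendered.
    return { content: '> Draft documentation\n\n' + content };
  },
};

module.exports = draftPlugin;

console.log(draftPlugin.onParseMarkdown({ content: '# Home' }).content.startsWith('>')); // → true
```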
34 docs/__tests__/buildNav.test.js Normal file
@@ -0,0 +1,34 @@
const { buildNav } = require('../src/generator');

test('generates navigation tree', () => {
  const pages = [
    { file: 'guide/install.md', data: { title: 'Install', order: 1 } },
    { file: 'guide/usage.md', data: { title: 'Usage', order: 2 } },
    { file: 'guide/nested/info.md', data: { title: 'Info', order: 1 } }
  ];
  const tree = buildNav(pages);
  const guide = tree.find(n => n.name === 'guide');
  expect(guide).toBeDefined();
  expect(guide.children.length).toBe(3);
  const install = guide.children.find(c => c.name === 'install.md');
  expect(install.path).toBe('/guide/install.html');
});

test('adds display names and section flags', () => {
  const pages = [
    { file: '02-api.md', data: { title: 'API', order: 2 } },
    { file: '01-guide/index.md', data: { title: 'Guide', order: 1 } },
    { file: '01-guide/setup.md', data: { title: 'Setup', order: 2 } },
    { file: 'index.md', data: { title: 'Home', order: 10 } }
  ];
  const nav = buildNav(pages);
  expect(nav[0].name).toBe('index.md');
  const guide = nav.find(n => n.name === '01-guide');
  expect(guide.displayName).toBe('Guide');
  expect(guide.isSection).toBe(true);
  const api = nav.find(n => n.name === '02-api.md');
  expect(api.displayName).toBe('API');
  // alphabetical within same order
  expect(nav[1].name).toBe('01-guide');
  expect(nav[2].name).toBe('02-api.md');
});
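The shape these tests assert can be sketched independently of the real generator. A minimal, assumed implementation (not Archivox's actual `buildNav`) groups pages by top-level directory and rewrites `.md` to `.html`:

```javascript
// Minimal sketch (NOT the real generator) of building a nav tree like the one
// asserted in the tests above: group pages by top-level directory and record
// an output path with .md rewritten to .html.
function buildNavSketch(pages) {
  const tree = [];
  for (const { file, data } of pages) {
    const parts = file.split('/');
    const htmlPath = '/' + file.replace(/\.md$/, '.html');
    if (parts.length === 1) {
      // Top-level page: leaf node directly in the tree.
      tree.push({ name: file, path: htmlPath, title: data.title });
      continue;
    }
    // Nested page: ensure a section node exists, then attach the leaf.
    let dir = tree.find(n => n.name === parts[0]);
    if (!dir) {
      dir = { name: parts[0], children: [] };
      tree.push(dir);
    }
    dir.children.push({ name: parts[parts.length - 1], path: htmlPath, title: data.title });
  }
  return tree;
}

const nav = buildNavSketch([{ file: 'guide/install.md', data: { title: 'Install' } }]);
console.log(nav[0].children[0].path); // → /guide/install.html
```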
13 docs/__tests__/loadConfig.test.js Normal file
@@ -0,0 +1,13 @@
const fs = require('fs');
const path = require('path');
const loadConfig = require('../src/config/loadConfig');

test('loads configuration and merges defaults', () => {
  const dir = fs.mkdtempSync(path.join(__dirname, 'cfg-'));
  const file = path.join(dir, 'config.yaml');
  fs.writeFileSync(file, 'site:\n title: Test Site\n');
  const cfg = loadConfig(file);
  expect(cfg.site.title).toBe('Test Site');
  expect(cfg.navigation.search).toBe(true);
  fs.rmSync(dir, { recursive: true, force: true });
});
23 docs/__tests__/pluginHooks.test.js Normal file
@@ -0,0 +1,23 @@
const fs = require('fs');
const path = require('path');
const loadPlugins = require('../src/config/loadPlugins');

test('plugin hook modifies data', async () => {
  const dir = fs.mkdtempSync(path.join(require('os').tmpdir(), 'plugins-'));
  const pluginFile = path.join(dir, 'test.plugin.js');
  fs.writeFileSync(
    pluginFile,
    "module.exports = { onParseMarkdown: ({ content }) => ({ content: content + '!!' }) };\n"
  );

  const plugins = loadPlugins({ pluginsDir: dir, plugins: ['test.plugin'] });
  let data = { content: 'hello' };
  for (const plugin of plugins) {
    if (typeof plugin.onParseMarkdown === 'function') {
      const res = await plugin.onParseMarkdown(data);
      if (res !== undefined) data = res;
    }
  }
  expect(data.content).toBe('hello!!');
  fs.rmSync(dir, { recursive: true, force: true });
});
77 docs/__tests__/renderMarkdown.test.js Normal file
@@ -0,0 +1,77 @@
jest.mock('@11ty/eleventy', () => {
  const fs = require('fs');
  const path = require('path');
  return class Eleventy {
    constructor(input, output) {
      this.input = input;
      this.output = output;
    }
    setConfig() {}
    async write() {
      const walk = d => {
        const entries = fs.readdirSync(d, { withFileTypes: true });
        let files = [];
        for (const e of entries) {
          const p = path.join(d, e.name);
          if (e.isDirectory()) files = files.concat(walk(p));
          else if (p.endsWith('.md')) files.push(p);
        }
        return files;
      };
      for (const file of walk(this.input)) {
        const rel = path.relative(this.input, file).replace(/\.md$/, '.html');
        const dest = path.join(this.output, rel);
        fs.mkdirSync(path.dirname(dest), { recursive: true });
        fs.writeFileSync(dest, '<header></header><aside class="sidebar"></aside>');
      }
    }
  };
});

const fs = require('fs');
const path = require('path');
const os = require('os');
const { generate } = require('../src/generator');

function getPaths(tree) {
  const paths = [];
  for (const node of tree) {
    if (node.path) paths.push(node.path);
    if (node.children) paths.push(...getPaths(node.children));
  }
  return paths;
}

test('markdown files render with layout and appear in nav/search', async () => {
  const tmp = fs.mkdtempSync(path.join(os.tmpdir(), 'df-test-'));
  const contentDir = path.join(tmp, 'content');
  const outputDir = path.join(tmp, '_site');
  fs.mkdirSync(path.join(contentDir, 'guide'), { recursive: true });
  fs.writeFileSync(path.join(contentDir, 'index.md'), '# Home\nWelcome');
  fs.writeFileSync(path.join(contentDir, 'guide', 'install.md'), '# Install\nSteps');
  const configPath = path.join(tmp, 'config.yaml');
  fs.writeFileSync(configPath, 'site:\n title: Test\n');

  await generate({ contentDir, outputDir, configPath });

  const indexHtml = fs.readFileSync(path.join(outputDir, 'index.html'), 'utf8');
  const installHtml = fs.readFileSync(path.join(outputDir, 'guide', 'install.html'), 'utf8');
  expect(indexHtml).toContain('<header');
  expect(indexHtml).toContain('<aside class="sidebar"');
  expect(installHtml).toContain('<header');
  expect(installHtml).toContain('<aside class="sidebar"');

  const nav = JSON.parse(fs.readFileSync(path.join(outputDir, 'navigation.json'), 'utf8'));
  const navPaths = getPaths(nav);
  expect(navPaths).toContain('/index.html');
  expect(navPaths).toContain('/guide/install.html');

  const search = JSON.parse(fs.readFileSync(path.join(outputDir, 'search-index.json'), 'utf8'));
  const docs = search.docs.map(d => d.id);
  expect(docs).toContain('index.html');
  expect(docs).toContain('guide/install.html');
  const installDoc = search.docs.find(d => d.id === 'guide/install.html');
  expect(installDoc.body).toContain('Steps');

  fs.rmSync(tmp, { recursive: true, force: true });
});
128 docs/__tests__/responsive.test.js Normal file
@@ -0,0 +1,128 @@
jest.mock('@11ty/eleventy', () => {
  const fs = require('fs');
  const path = require('path');
  return class Eleventy {
    constructor(input, output) {
      this.input = input;
      this.output = output;
    }
    setConfig() {}
    async write() {
      const walk = d => {
        const entries = fs.readdirSync(d, { withFileTypes: true });
        let files = [];
        for (const e of entries) {
          const p = path.join(d, e.name);
          if (e.isDirectory()) files = files.concat(walk(p));
          else if (p.endsWith('.md')) files.push(p);
        }
        return files;
      };
      for (const file of walk(this.input)) {
        const rel = path.relative(this.input, file).replace(/\.md$/, '.html');
        const dest = path.join(this.output, rel);
        fs.mkdirSync(path.dirname(dest), { recursive: true });
        fs.writeFileSync(
          dest,
          `<!DOCTYPE html><html><head><link rel="stylesheet" href="/assets/theme.css"></head><body><header><button id="sidebar-toggle" class="sidebar-toggle">☰</button></header><div class="container"><aside class="sidebar"></aside><main></main></div><script src="/assets/theme.js"></script></body></html>`
        );
      }
    }
  };
});

const fs = require('fs');
const path = require('path');
const http = require('http');
const os = require('os');
const puppeteer = require('puppeteer');
const { generate } = require('../src/generator');

jest.setTimeout(30000);

let server;
let browser;
let port;
let tmp;

beforeAll(async () => {
  tmp = fs.mkdtempSync(path.join(os.tmpdir(), 'df-responsive-'));
  const contentDir = path.join(tmp, 'content');
  const outputDir = path.join(tmp, '_site');
  fs.mkdirSync(contentDir, { recursive: true });
  fs.writeFileSync(path.join(contentDir, 'index.md'), '# Home\n');
  await generate({ contentDir, outputDir });
  fs.cpSync(path.join(__dirname, '../assets'), path.join(outputDir, 'assets'), { recursive: true });

  server = http.createServer((req, res) => {
    let filePath = path.join(outputDir, req.url === '/' ? 'index.html' : req.url);
    if (req.url.startsWith('/assets')) {
      filePath = path.join(outputDir, req.url);
    }
    fs.readFile(filePath, (err, data) => {
      if (err) {
        res.writeHead(404);
        res.end('Not found');
        return;
      }
      const ext = path.extname(filePath).slice(1);
      const type = { html: 'text/html', js: 'text/javascript', css: 'text/css' }[ext] || 'application/octet-stream';
      res.writeHead(200, { 'Content-Type': type });
      res.end(data);
    });
  });
  await new Promise(resolve => {
    server.listen(0, () => {
      port = server.address().port;
      resolve();
    });
  });

  browser = await puppeteer.launch({ args: ['--no-sandbox', '--disable-setuid-sandbox'] });
});

afterAll(async () => {
  if (browser) await browser.close();
  if (server) server.close();
  fs.rmSync(tmp, { recursive: true, force: true });
});

test('sidebar opens on small screens', async () => {
  const page = await browser.newPage();
  await page.setViewport({ width: 500, height: 800 });
  await page.goto(`http://localhost:${port}/`);
  await page.waitForSelector('#sidebar-toggle');
  await page.click('#sidebar-toggle');
  await new Promise(r => setTimeout(r, 300));
  const bodyClass = await page.evaluate(() => document.body.classList.contains('sidebar-open'));
  const sidebarLeft = await page.evaluate(() => getComputedStyle(document.querySelector('.sidebar')).left);
  expect(bodyClass).toBe(true);
  expect(sidebarLeft).toBe('0px');
});

test('clicking outside closes sidebar on small screens', async () => {
  const page = await browser.newPage();
  await page.setViewport({ width: 500, height: 800 });
  await page.goto(`http://localhost:${port}/`);
  await page.waitForSelector('#sidebar-toggle');
  await page.click('#sidebar-toggle');
  await new Promise(r => setTimeout(r, 300));
  await page.click('main');
  await new Promise(r => setTimeout(r, 300));
  const bodyClass = await page.evaluate(() => document.body.classList.contains('sidebar-open'));
  expect(bodyClass).toBe(false);
});

test('sidebar toggles on large screens', async () => {
  const page = await browser.newPage();
  await page.setViewport({ width: 1024, height: 800 });
  await page.goto(`http://localhost:${port}/`);
  await page.waitForSelector('#sidebar-toggle');
  await new Promise(r => setTimeout(r, 300));
  let sidebarWidth = await page.evaluate(() => getComputedStyle(document.querySelector('.sidebar')).width);
  expect(sidebarWidth).toBe('240px');
  await page.click('#sidebar-toggle');
  await new Promise(r => setTimeout(r, 300));
  sidebarWidth = await page.evaluate(() => getComputedStyle(document.querySelector('.sidebar')).width);
  expect(sidebarWidth).toBe('0px');
});
3475 docs/assets/lunr.js Normal file
File diff suppressed because it is too large. Load Diff
160 docs/assets/theme.css Normal file
@@ -0,0 +1,160 @@
:root {
  --bg-color: #ffffff;
  --text-color: #333333;
  --sidebar-bg: #f3f3f3;
  --sidebar-width: 240px;
}
[data-theme="dark"] {
  --bg-color: #222222;
  --text-color: #eeeeee;
  --sidebar-bg: #333333;
}
body {
  margin: 0;
  background: var(--bg-color);
  color: var(--text-color);
  font-family: Arial, sans-serif;
  display: flex;
  flex-direction: column;
  min-height: 100vh;
}
.header {
  display: flex;
  align-items: center;
  padding: 0.5rem 1rem;
  background: var(--sidebar-bg);
  position: sticky;
  top: 0;
  z-index: 1100;
}
.search-input {
  margin-left: auto;
  padding: 0.25rem;
}
.search-results {
  display: none;
  position: absolute;
  right: 1rem;
  top: 100%;
  background: var(--bg-color);
  border: 1px solid #ccc;
  width: 250px;
  max-height: 200px;
  overflow-y: auto;
  z-index: 100;
}
.search-results a {
  display: block;
  padding: 0.25rem;
  color: var(--text-color);
  text-decoration: none;
}
.search-results a:hover {
  background: var(--sidebar-bg);
}
.search-results .no-results {
  padding: 0.25rem;
}
.logo { text-decoration: none; color: var(--text-color); font-weight: bold; }
.sidebar-toggle,
.theme-toggle { background: none; border: none; font-size: 1.2rem; margin-right: 1rem; cursor: pointer; }
.container { display: flex; flex: 1; }
.sidebar {
  width: var(--sidebar-width);
  background: var(--sidebar-bg);
  padding: 1rem;
  box-sizing: border-box;
}
.sidebar ul { list-style: none; padding: 0; margin: 0; }
.sidebar li { margin: 0.25rem 0; }
.sidebar a { text-decoration: none; color: var(--text-color); display: block; padding: 0.25rem 0; }
.sidebar nav { font-size: 0.9rem; }
.nav-link:hover { text-decoration: underline; }
.nav-link.active { font-weight: bold; }
.nav-section summary {
  list-style: none;
  cursor: pointer;
  position: relative;
  display: flex;
  align-items: center;
}
.nav-section summary::-webkit-details-marker { display: none; }
.nav-section summary::before {
  content: '▸';
  display: inline-block;
  margin-right: 0.25rem;
  transition: transform 0.2s ease;
}
.nav-section[open] > summary::before { transform: rotate(90deg); }
.nav-level { padding-left: 1rem; margin-left: 0.5rem; border-left: 2px solid #ccc; }
.sidebar ul ul { padding-left: 1rem; margin-left: 0.5rem; border-left: 2px solid #ccc; }
main {
  flex: 1;
  padding: 2rem;
}
.breadcrumbs a { color: var(--text-color); text-decoration: none; }
.footer {
  text-align: center;
  padding: 1rem;
  background: var(--sidebar-bg);
  position: relative;
}
.footer-links {
  margin-bottom: 0.5rem;
}
.footer-links a {
  margin: 0 0.5rem;
  text-decoration: none;
  color: var(--text-color);
}
.footer-permanent-links {
  position: absolute;
  right: 0.5rem;
  bottom: 0.25rem;
  font-size: 0.8rem;
  opacity: 0.7;
}
.footer-permanent-links a {
  margin-left: 0.5rem;
  text-decoration: none;
  color: var(--text-color);
}

.sidebar-overlay {
  display: none;
}
@media (max-width: 768px) {
  body.sidebar-open .sidebar-overlay {
    display: block;
    position: fixed;
    top: 0;
    left: 0;
    right: 0;
    bottom: 0;
    background: rgba(0, 0, 0, 0.3);
    z-index: 999;
  }
}
@media (max-width: 768px) {
  .sidebar {
    position: fixed;
    left: -100%;
    top: 0;
    height: 100%;
    overflow-y: auto;
    transition: none;
    z-index: 1000;
  }
  body.sidebar-open .sidebar { left: 0; }
}

@media (min-width: 769px) {
  .sidebar {
    transition: width 0.2s ease;
  }
  body:not(.sidebar-open) .sidebar {
    width: 0;
    padding: 0;
    overflow: hidden;
  }
}
107 docs/assets/theme.js Normal file
@@ -0,0 +1,107 @@
document.addEventListener('DOMContentLoaded', () => {
  const sidebarToggle = document.getElementById('sidebar-toggle');
  const themeToggle = document.getElementById('theme-toggle');
  const searchInput = document.getElementById('search-input');
  const searchResults = document.getElementById('search-results');
  const sidebar = document.getElementById('sidebar');
  const sidebarOverlay = document.getElementById('sidebar-overlay');
  const root = document.documentElement;

  function setTheme(theme) {
    root.dataset.theme = theme;
    localStorage.setItem('theme', theme);
  }
  const stored = localStorage.getItem('theme');
  if (stored) setTheme(stored);

  if (window.innerWidth > 768) {
    document.body.classList.add('sidebar-open');
  }

  sidebarToggle?.addEventListener('click', () => {
    document.body.classList.toggle('sidebar-open');
  });

  sidebarOverlay?.addEventListener('click', () => {
    document.body.classList.remove('sidebar-open');
  });

  themeToggle?.addEventListener('click', () => {
    const next = root.dataset.theme === 'dark' ? 'light' : 'dark';
    setTheme(next);
  });

  // search
  let lunrIndex;
  let docs = [];
  async function loadIndex() {
    if (lunrIndex) return;
    try {
      const res = await fetch('/search-index.json');
      const data = await res.json();
      lunrIndex = lunr.Index.load(data.index);
      docs = data.docs;
    } catch (e) {
      console.error('Search index failed to load', e);
    }
  }

  function highlight(text, q) {
    // Escape regex metacharacters in the query before building the pattern.
    const re = new RegExp('(' + q.replace(/[.*+?^${}()|[\]\\]/g, '\\$&') + ')', 'gi');
    return text.replace(re, '<mark>$1</mark>');
  }

  searchInput?.addEventListener('input', async e => {
    const q = e.target.value.trim();
    await loadIndex();
    if (!lunrIndex || !q) {
      searchResults.style.display = 'none';
      searchResults.innerHTML = '';
      return;
    }
    const matches = lunrIndex.search(q);
    searchResults.innerHTML = '';
    if (!matches.length) {
      searchResults.innerHTML = '<div class="no-results">No matches found</div>';
      searchResults.style.display = 'block';
      return;
    }
    matches.forEach(m => {
      const doc = docs.find(d => d.id === m.ref);
      if (!doc) return;
      const a = document.createElement('a');
      a.href = doc.url;
      const snippet = doc.body ? doc.body.slice(0, 160) + (doc.body.length > 160 ? '...' : '') : '';
      a.innerHTML = '<strong>' + highlight(doc.title, q) + '</strong><br><small>' + highlight(snippet, q) + '</small>';
      searchResults.appendChild(a);
    });
    searchResults.style.display = 'block';
  });

  document.addEventListener('click', e => {
    if (!searchResults.contains(e.target) && e.target !== searchInput) {
      searchResults.style.display = 'none';
    }
    if (
      window.innerWidth <= 768 &&
      document.body.classList.contains('sidebar-open') &&
      sidebar &&
      !sidebar.contains(e.target) &&
      e.target !== sidebarToggle
    ) {
      document.body.classList.remove('sidebar-open');
    }
  });

  // breadcrumbs
  const bc = document.getElementById('breadcrumbs');
  if (bc) {
    const parts = location.pathname.split('/').filter(Boolean);
    let path = '';
    bc.innerHTML = '<a href="/">Home</a>';
    parts.forEach((p) => {
      path += '/' + p;
      bc.innerHTML += ' / <a href="' + path + '">' + p.replace(/-/g, ' ') + '</a>';
    });
  }
});
45 docs/bin/create-archivox.js Executable file
@@ -0,0 +1,45 @@
#!/usr/bin/env node
const fs = require('fs');
const path = require('path');
const { execSync } = require('child_process');

function copyDir(src, dest) {
  fs.mkdirSync(dest, { recursive: true });
  for (const entry of fs.readdirSync(src, { withFileTypes: true })) {
    const srcPath = path.join(src, entry.name);
    const destPath = path.join(dest, entry.name);
    if (entry.isDirectory()) {
      copyDir(srcPath, destPath);
    } else {
      fs.copyFileSync(srcPath, destPath);
    }
  }
}

function main() {
  const args = process.argv.slice(2);
  const install = args.includes('--install');
  const targetArg = args.find(a => !a.startsWith('-')) || '.';
  const targetDir = path.resolve(process.cwd(), targetArg);

  const templateDir = path.join(__dirname, '..', 'starter');
  copyDir(templateDir, targetDir);

  const pkgPath = path.join(targetDir, 'package.json');
  if (fs.existsSync(pkgPath)) {
    const pkg = JSON.parse(fs.readFileSync(pkgPath, 'utf8'));
    const version = require('../package.json').version;
    if (pkg.dependencies && pkg.dependencies.archivox)
      pkg.dependencies.archivox = `^${version}`;
    pkg.name = path.basename(targetDir);
    fs.writeFileSync(pkgPath, JSON.stringify(pkg, null, 2));
  }

  if (install) {
    execSync('npm install', { cwd: targetDir, stdio: 'inherit' });
  }

  console.log(`Archivox starter created at ${targetDir}`);
}

main();
15 docs/build-docs.js Executable file
@@ -0,0 +1,15 @@
#!/usr/bin/env node
const path = require('path');
const { generate } = require('./src/generator');

(async () => {
  try {
    const contentDir = path.join(__dirname, 'docs', 'content');
    const configPath = path.join(__dirname, 'docs', 'config.yaml');
    const outputDir = path.join(__dirname, '_site');
    await generate({ contentDir, outputDir, configPath });
  } catch (err) {
    console.error(err);
    process.exit(1);
  }
})();
12 docs/docs/config.yaml Normal file
@@ -0,0 +1,12 @@
site:
  title: "SeedPass Docs"
  description: "One seed to rule them all."

navigation:
  search: true

footer:
  links:
    - text: "SeedPass"
      url: "https://seedpass.me/"
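Values from this file are merged over built-in defaults, which is what the `loadConfig` test above asserts (an unspecified `navigation.search` stays `true`). A minimal sketch of that merge, with the default values and function names assumed for illustration:

```javascript
// Sketch of the default-merging behaviour the loadConfig test exercises:
// user-supplied values override defaults, unspecified keys keep theirs.
// The concrete defaults and the mergeConfig name are assumptions.
const defaults = {
  site: { title: 'Archivox' },
  navigation: { search: true },
};

function mergeConfig(user) {
  return {
    site: { ...defaults.site, ...user.site },
    navigation: { ...defaults.navigation, ...user.navigation },
  };
}

const cfg = mergeConfig({ site: { title: 'Test Site' } });
console.log(cfg.site.title, cfg.navigation.search); // → Test Site true
```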
@@ -70,10 +70,11 @@ Manage the entire vault for a profile.
| Action | Command | Examples |
| :--- | :--- | :--- |
| Export the vault | `vault export` | `seedpass vault export --file backup.json` |
| Import a vault | `vault import` | `seedpass vault import --file backup.json` *(also syncs with Nostr)* |
| Change the master password | `vault change-password` | `seedpass vault change-password` |
| Lock the vault | `vault lock` | `seedpass vault lock` |
| Show profile statistics | `vault stats` | `seedpass vault stats` |
| Reveal or back up the parent seed | `vault reveal-parent-seed` | `seedpass vault reveal-parent-seed --file backup.enc` |

### Nostr Commands

@@ -90,8 +91,9 @@ Manage profile‑specific settings.

| Action | Command | Examples |
| :--- | :--- | :--- |
| Get a setting value | `config get` | `seedpass config get kdf_iterations` |
| Set a setting value | `config set` | `seedpass config set backup_interval 3600` |
| Toggle offline mode | `config toggle-offline` | `seedpass config toggle-offline` |

### Fingerprint Commands

@@ -157,10 +159,11 @@ Code: 123456
### `vault` Commands

- **`seedpass vault export`** – Export the entire vault to an encrypted JSON file.
- **`seedpass vault import`** – Import a vault from an encrypted JSON file and automatically sync via Nostr.
- **`seedpass vault change-password`** – Change the master password used for encryption.
- **`seedpass vault lock`** – Clear sensitive data from memory and require reauthentication.
- **`seedpass vault stats`** – Display statistics about the active seed profile.
- **`seedpass vault reveal-parent-seed`** – Print the parent seed or write an encrypted backup with `--file`.

### `nostr` Commands

@@ -169,9 +172,10 @@ Code: 123456

### `config` Commands

-- **`seedpass config get <key>`** – Retrieve a configuration value such as `inactivity_timeout`, `secret_mode`, or `auto_sync`.
-- **`seedpass config set <key> <value>`** – Update a configuration option. Example: `seedpass config set inactivity_timeout 300`.
+- **`seedpass config get <key>`** – Retrieve a configuration value such as `kdf_iterations`, `backup_interval`, `inactivity_timeout`, `secret_mode_enabled`, `clipboard_clear_delay`, `additional_backup_path`, `relays`, `quick_unlock`, `nostr_max_retries`, `nostr_retry_delay`, or password policy fields like `min_uppercase`.
+- **`seedpass config set <key> <value>`** – Update a configuration option. Example: `seedpass config set kdf_iterations 200000`. Use keys like `min_uppercase`, `min_lowercase`, `min_digits`, `min_special`, `nostr_max_retries`, `nostr_retry_delay`, or `quick_unlock` to adjust settings.
- **`seedpass config toggle-secret-mode`** – Interactively enable or disable Secret Mode and set the clipboard delay.
- **`seedpass config toggle-offline`** – Enable or disable offline mode to skip Nostr operations.

### `fingerprint` Commands

@@ -206,5 +210,6 @@ Shut down the server with `seedpass api stop`.

- Use the `--help` flag for details on any command.
- Set a strong master password and regularly export encrypted backups.
-- Adjust configuration values like `inactivity_timeout` or `secret_mode` through the `config` commands.
+- Adjust configuration values like `kdf_iterations`, `backup_interval`, `inactivity_timeout`, `secret_mode_enabled`, `nostr_max_retries`, `nostr_retry_delay`, or `quick_unlock` through the `config` commands.
+- Customize password complexity with `config set min_uppercase 3`, `config set min_digits 4`, and similar commands.
- `entry get` is script‑friendly and can be piped into other commands.
@@ -31,13 +31,20 @@ Keep this token secret. Every request must include it in the `Authorization` header.

- `GET /api/v1/totp/export` – Export all TOTP entries as JSON.
- `GET /api/v1/totp` – Return current TOTP codes and remaining time.
- `GET /api/v1/stats` – Return statistics about the active seed profile.
- `GET /api/v1/notifications` – Retrieve and clear queued notifications. Messages appear in the persistent notification box but remain queued until fetched.
- `GET /api/v1/parent-seed` – Reveal the parent seed or save it with `?file=`.
- `GET /api/v1/nostr/pubkey` – Fetch the Nostr public key for the active seed.
- `POST /api/v1/checksum/verify` – Verify the checksum of the running script.
- `POST /api/v1/checksum/update` – Update the stored script checksum.
- `POST /api/v1/change-password` – Change the master password for the active profile.
- `POST /api/v1/vault/import` – Import a vault backup from a file or path.
- `POST /api/v1/vault/export` – Export the vault and download the encrypted file.
- `POST /api/v1/vault/backup-parent-seed` – Save an encrypted backup of the parent seed.
- `POST /api/v1/vault/lock` – Lock the vault and clear sensitive data from memory.
- `GET /api/v1/relays` – List configured Nostr relays.
- `POST /api/v1/relays` – Add a relay URL.
- `DELETE /api/v1/relays/{idx}` – Remove the relay at the given index (1‑based).
- `POST /api/v1/relays/reset` – Reset the relay list to defaults.
- `POST /api/v1/shutdown` – Stop the server gracefully.

**Security Warning:** Accessing `/api/v1/parent-seed` exposes your master seed in plain text. Use it only from a trusted environment.
@@ -96,6 +103,22 @@ curl -X PUT http://127.0.0.1:8000/api/v1/config/inactivity_timeout \
  -d '{"value": 300}'
```

To raise the PBKDF2 work factor or change how often backups are written:

```bash
curl -X PUT http://127.0.0.1:8000/api/v1/config/kdf_iterations \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"value": 200000}'

curl -X PUT http://127.0.0.1:8000/api/v1/config/backup_interval \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"value": 3600}'
```

Using fewer iterations or a long interval reduces security, so adjust these values carefully.
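The same `config` endpoints can also be driven from Python's standard library. A minimal sketch (the token and values are placeholders; the endpoint shapes match the curl examples above):

```python
import json
import urllib.request


def build_config_request(key: str, value, token: str,
                         base: str = "http://127.0.0.1:8000") -> urllib.request.Request:
    """Build a PUT request for /api/v1/config/<key> with a JSON body."""
    return urllib.request.Request(
        f"{base}/api/v1/config/{key}",
        data=json.dumps({"value": value}).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="PUT",
    )


def put_config(key: str, value, token: str) -> None:
    # urlopen raises HTTPError on non-2xx responses
    urllib.request.urlopen(build_config_request(key, value, token))


# Example (requires a running server and a real token):
# put_config("kdf_iterations", 200000, token="<token>")
```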

### Toggling Secret Mode

Send both `enabled` and `delay` values to `/api/v1/secret-mode`:
@@ -115,7 +138,119 @@ Change the active seed profile via `POST /api/v1/fingerprint/select`:

curl -X POST http://127.0.0.1:8000/api/v1/fingerprint/select \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"fingerprint": "abc123"}'
```

### Exporting the Vault

Download an encrypted vault backup via `POST /api/v1/vault/export`:

```bash
curl -X POST http://127.0.0.1:8000/api/v1/vault/export \
  -H "Authorization: Bearer <token>" \
  -o backup.json
```

### Importing a Vault

Restore a backup with `POST /api/v1/vault/import`. Use `-F` to upload a file:

```bash
curl -X POST http://127.0.0.1:8000/api/v1/vault/import \
  -H "Authorization: Bearer <token>" \
  -F file=@backup.json
```

### Locking the Vault

Clear sensitive data from memory using `/api/v1/vault/lock`:

```bash
curl -X POST http://127.0.0.1:8000/api/v1/vault/lock \
  -H "Authorization: Bearer <token>"
```

### Backing Up the Parent Seed

Trigger an encrypted seed backup with `/api/v1/vault/backup-parent-seed`:

```bash
curl -X POST http://127.0.0.1:8000/api/v1/vault/backup-parent-seed \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"path": "seed_backup.enc"}'
```

### Retrieving Vault Statistics

Get profile stats such as entry counts with `GET /api/v1/stats`:

```bash
curl -H "Authorization: Bearer <token>" \
  http://127.0.0.1:8000/api/v1/stats
```

### Checking Notifications

Get queued messages with `GET /api/v1/notifications`:

```bash
curl -H "Authorization: Bearer <token>" \
  http://127.0.0.1:8000/api/v1/notifications
```

The TUI displays these alerts in a persistent notification box for 10 seconds, but the endpoint returns all queued messages even if they have already disappeared from the screen.

### Changing the Master Password

Update the vault password via `POST /api/v1/change-password`:

```bash
curl -X POST http://127.0.0.1:8000/api/v1/change-password \
  -H "Authorization: Bearer <token>"
```

### Verifying the Script Checksum

Check that the running script matches the stored checksum:

```bash
curl -X POST http://127.0.0.1:8000/api/v1/checksum/verify \
  -H "Authorization: Bearer <token>"
```

### Updating the Script Checksum

Regenerate the stored checksum using `/api/v1/checksum/update`:

```bash
curl -X POST http://127.0.0.1:8000/api/v1/checksum/update \
  -H "Authorization: Bearer <token>"
```

### Managing Relays

List, add, or remove Nostr relays:

```bash
# list
curl -H "Authorization: Bearer <token>" http://127.0.0.1:8000/api/v1/relays

# add
curl -X POST http://127.0.0.1:8000/api/v1/relays \
  -H "Authorization: Bearer <token>" \
  -H "Content-Type: application/json" \
  -d '{"url": "wss://relay.example.com"}'

# remove first relay
curl -X DELETE http://127.0.0.1:8000/api/v1/relays/1 \
  -H "Authorization: Bearer <token>"

# reset to defaults
curl -X POST http://127.0.0.1:8000/api/v1/relays/reset \
  -H "Authorization: Bearer <token>"
```

### Enabling CORS
docs/docs/content/01-getting-started/04-migrations.md (new file, 47 lines)
@@ -0,0 +1,47 @@
# Index Migrations

SeedPass stores its password index in an encrypted JSON file. Each index contains a `schema_version` field so the application knows how to upgrade older files.

## How migrations work

When the vault loads the index, `Vault.load_index()` checks the version and applies migrations defined in `password_manager/migrations.py`. The `apply_migrations()` function iterates through registered migrations until the file reaches `LATEST_VERSION`.

If an old file lacks `schema_version`, it is treated as version 0 and upgraded to the latest format. Attempting to load an index from a future version will raise an error.

## Upgrading an index

1. The JSON is decrypted and parsed.
2. `apply_migrations()` applies any necessary steps, such as injecting the `schema_version` field on first upgrade.
3. After migration, the updated index is saved back to disk.

This process happens automatically; users only need to open their vault to upgrade older indices.
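The loop described above can be sketched as follows. The registry contents and step bodies are illustrative, not SeedPass's actual migration code; only the overall shape (version 0 default, future-version error, step-by-step upgrade) follows the description:

```python
LATEST_VERSION = 2

# Hypothetical registry: each entry maps a version to a function that
# returns the data upgraded to the next version.
MIGRATIONS = {
    0: lambda data: {**data, "schema_version": 1},   # inject the field
    1: lambda data: {**data, "schema_version": 2},   # example later step
}


def apply_migrations(data: dict) -> dict:
    """Upgrade an index dict until it reaches LATEST_VERSION."""
    version = data.get("schema_version", 0)  # missing field -> version 0
    if version > LATEST_VERSION:
        raise ValueError(f"index is from a future version: {version}")
    while version < LATEST_VERSION:
        data = MIGRATIONS[version](data)
        version = data["schema_version"]
    return data
```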

### Legacy Fernet migration

Older versions stored the vault index in a file named `seedpass_passwords_db.json.enc` encrypted with Fernet. When opening such a vault, SeedPass now automatically decrypts the legacy file, re‑encrypts it using AES‑GCM, and saves it under the new name `seedpass_entries_db.json.enc`. The original Fernet file is preserved as `seedpass_entries_db.json.enc.fernet` and the legacy checksum file, if present, is renamed to `seedpass_entries_db_checksum.txt.fernet`.

No additional command is required – simply open your existing vault and the conversion happens transparently.

### Parent seed backup migration

If your vault contains a `parent_seed.enc` file that was encrypted with Fernet, SeedPass performs a similar upgrade. Upon loading the vault, the application decrypts the old file, re‑encrypts it with AES‑GCM, and writes the result back to `parent_seed.enc`. The legacy Fernet file is preserved as `parent_seed.enc.fernet` so you can revert if needed. No manual steps are required – simply unlock your vault and the conversion runs automatically.
docs/docs/content/index.md (new file, 546 lines)
@@ -0,0 +1,546 @@
# SeedPass

**SeedPass** is a secure password generator and manager built on **Bitcoin's BIP-85 standard**. It uses deterministic key derivation to generate **passwords that are never stored**, but can be easily regenerated when needed. By integrating with the **Nostr network**, SeedPass compresses your encrypted vault and splits it into 50 KB chunks. Each chunk is published as a parameterised replaceable event (`kind 30071`), with a manifest (`kind 30070`) describing the snapshot and deltas (`kind 30072`) capturing changes between snapshots. This allows secure password recovery across devices without exposing your data.

[Tip Jar](https://nostrtipjar.netlify.app/?n=npub16y70nhp56rwzljmr8jhrrzalsx5x495l4whlf8n8zsxww204k8eqrvamnp)

---

**⚠️ Disclaimer**

This software was not developed by an experienced security expert and should be used with caution. There may be bugs and missing features. Each vault chunk is limited to 50 KB and SeedPass periodically publishes a new snapshot to keep accumulated deltas small. The security of the program's memory management and logs has not been evaluated and may leak sensitive information. Loss or exposure of the parent seed places all derived passwords, accounts, and other artifacts at risk.

---
### Supported OS

✔ Windows 10/11 • macOS 12+ • Any modern Linux
SeedPass now uses the `portalocker` library for cross-platform file locking. No WSL or Cygwin required.

## Table of Contents

- [Features](#features)
- [Prerequisites](#prerequisites)
- [Installation](#installation)
  - [1. Clone the Repository](#1-clone-the-repository)
  - [2. Create a Virtual Environment](#2-create-a-virtual-environment)
  - [3. Activate the Virtual Environment](#3-activate-the-virtual-environment)
  - [4. Install Dependencies](#4-install-dependencies)
- [Usage](#usage)
  - [Running the Application](#running-the-application)
  - [Managing Multiple Seeds](#managing-multiple-seeds)
  - [Additional Entry Types](#additional-entry-types)
- [Security Considerations](#security-considerations)
- [Contributing](#contributing)
- [License](#license)
- [Contact](#contact)

## Features

- **Deterministic Password Generation:** Utilize BIP-85 for generating deterministic and secure passwords.
- **Encrypted Storage:** All seeds, login passwords, and sensitive index data are encrypted locally.
- **Nostr Integration:** Post and retrieve your encrypted password index to/from the Nostr network.
- **Chunked Snapshots:** Encrypted vaults are compressed and split into 50 KB chunks published as `kind 30071` events with a `kind 30070` manifest and `kind 30072` deltas. The manifest's `delta_since` field stores the UNIX timestamp of the latest delta event.
- **Automatic Checksum Generation:** The script generates and verifies a SHA-256 checksum to detect tampering.
- **Multiple Seed Profiles:** Manage separate seed profiles and switch between them seamlessly.
- **Nested Managed Account Seeds:** Derive child seeds for managed accounts nested under a profile.
- **Interactive TUI:** Navigate through menus to add, retrieve, and modify entries as well as configure Nostr settings.
- **SeedPass 2FA:** Generate TOTP codes with a real-time countdown progress bar.
- **2FA Secret Issuance & Import:** Derive new TOTP secrets from your seed or import existing `otpauth://` URIs.
- **Export 2FA Codes:** Save all stored TOTP entries to an encrypted JSON file for use with other apps.
- **Display TOTP Codes:** Show all active 2FA codes with a countdown timer.
- **Optional External Backup Location:** Configure a second directory where backups are automatically copied.
- **Auto‑Lock on Inactivity:** Vault locks after a configurable timeout for additional security.
- **Quick Unlock:** Optionally skip the password prompt after verifying once. Startup delay is unaffected.
- **Secret Mode:** Copy retrieved passwords directly to your clipboard and automatically clear it after a delay.
- **Tagging Support:** Organize entries with optional tags and find them quickly via search.
- **Manual Vault Export/Import:** Create encrypted backups or restore them using the CLI or API.
- **Parent Seed Backup:** Securely save an encrypted copy of the master seed.
- **Manual Vault Locking:** Instantly clear keys from memory when needed.
- **Vault Statistics:** View counts for entries and other profile metrics.
- **Change Master Password:** Rotate your encryption password at any time.
- **Checksum Verification Utilities:** Verify or regenerate the script checksum.
- **Relay Management:** List, add, remove or reset configured Nostr relays.
- **Offline Mode:** Disable network sync to work entirely locally.
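The deterministic idea behind this feature list, one master seed plus an index reproducibly yielding each secret, can be illustrated with a stdlib-only sketch. This is *not* the real BIP-85 derivation, just the principle that the same inputs always regenerate the same password:

```python
import base64
import hashlib
import hmac


def derive_password(master_seed: bytes, index: int, length: int = 16) -> str:
    """Same seed + same index -> same password, every time.

    Illustrative only: real BIP-85 derives entropy through an HMAC-SHA512
    path on a BIP-32 key, not this simplified scheme.
    """
    material = hmac.new(master_seed,
                        f"password/{index}".encode(),
                        hashlib.sha512).digest()
    return base64.b85encode(material).decode()[:length]


first = derive_password(b"example master seed", 0)
again = derive_password(b"example master seed", 0)
# first == again: regenerable on any device, so nothing needs to be stored
```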
## Prerequisites

- **Python 3.8+** (3.11 or 3.12 recommended): Install Python from [python.org](https://www.python.org/downloads/) and be sure to check **"Add Python to PATH"** during setup. Using Python 3.13 is currently discouraged because some dependencies do not ship wheels for it yet, which can cause build failures on Windows unless you install the Visual C++ Build Tools.
  *Windows only:* Install the [Visual Studio Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) and select the **C++ build tools** workload.

## Installation

### Quick Installer

Use the automated installer to download SeedPass and its dependencies in one step.

**Linux and macOS:**
```bash
bash -c "$(curl -sSL https://raw.githubusercontent.com/PR0M3TH3AN/SeedPass/main/scripts/install.sh)"
```
*Install the beta branch:*
```bash
bash -c "$(curl -sSL https://raw.githubusercontent.com/PR0M3TH3AN/SeedPass/main/scripts/install.sh)" _ -b beta
```

**Windows (PowerShell):**
```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; $scriptContent = (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/PR0M3TH3AN/SeedPass/main/scripts/install.ps1'); & ([scriptblock]::create($scriptContent))
```
Before running the script, install **Python 3.11** or **3.12** from [python.org](https://www.python.org/downloads/windows/) and tick **"Add Python to PATH"**. You should also install the [Visual Studio Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) with the **C++ build tools** workload so dependencies compile correctly.

The Windows installer will attempt to install Git automatically if it is not already available. It also tries to install Python 3 using `winget`, `choco`, or `scoop` when Python is missing, and recognizes the `py` launcher if `python` isn't on your PATH. If these tools are unavailable you'll see a link to download Python directly from <https://www.python.org/downloads/windows/>. When Python 3.13 or newer is detected without the Microsoft C++ build tools, the installer now attempts to download Python 3.12 automatically so you don't have to compile packages from source.

**Note:** If this fallback fails, install Python 3.12 manually or install the [Microsoft Visual C++ Build Tools](https://visualstudio.microsoft.com/visual-cpp-build-tools/) and rerun the installer.

### Uninstall

Run the matching uninstaller if you need to remove a previous installation or clean up an old `seedpass` command:

**Linux and macOS:**
```bash
bash -c "$(curl -sSL https://raw.githubusercontent.com/PR0M3TH3AN/SeedPass/main/scripts/uninstall.sh)"
```
If the script warns that it couldn't remove an executable, delete that file manually.

**Windows (PowerShell):**
```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; $scriptContent = (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/PR0M3TH3AN/SeedPass/main/scripts/uninstall.ps1'); & ([scriptblock]::create($scriptContent))
```

*Install the beta branch:*
```powershell
Set-ExecutionPolicy Bypass -Scope Process -Force; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.ServicePointManager]::SecurityProtocol -bor 3072; $scriptContent = (New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/PR0M3TH3AN/SeedPass/main/scripts/install.ps1'); & ([scriptblock]::create($scriptContent)) -Branch beta
```

### Manual Setup

Follow these steps to set up SeedPass on your local machine.

### 1. Clone the Repository

First, clone the SeedPass repository from GitHub:

```bash
git clone https://github.com/PR0M3TH3AN/SeedPass.git
```

Navigate to the project directory:

```bash
cd SeedPass
```

### 2. Create a Virtual Environment

It's recommended to use a virtual environment to manage your project's dependencies. Create a virtual environment named `venv`:

```bash
python3 -m venv venv
```

### 3. Activate the Virtual Environment

Activate the virtual environment using the appropriate command for your operating system.

- **On Linux and macOS:**

  ```bash
  source venv/bin/activate
  ```

- **On Windows:**

  ```bash
  venv\Scripts\activate
  ```

Once activated, your terminal prompt should be prefixed with `(venv)` indicating that the virtual environment is active.

### 4. Install Dependencies

Install the required Python packages and build dependencies using `pip`.
When upgrading pip, use `python -m pip` inside the virtual environment so that pip can update itself cleanly:

```bash
python -m pip install --upgrade pip
python -m pip install -r src/requirements.txt
python -m pip install -e .
```

#### Linux Clipboard Support

On Linux, `pyperclip` relies on external utilities like `xclip` or `xsel`. SeedPass will attempt to install **xclip** automatically if neither tool is available. If the automatic installation fails, you can install it manually:

```bash
sudo apt-get install xclip
```

## Quick Start

After installing dependencies, activate your virtual environment and install the package so the `seedpass` command is available, then launch SeedPass and create a backup:

```bash
# Start the application
seedpass

# Export your index
seedpass export --file "~/seedpass_backup.json"

# Later you can restore it
seedpass import --file "~/seedpass_backup.json"
# Import also performs a Nostr sync to pull any changes

# Quickly find or retrieve entries
seedpass search "github"
seedpass search --tags "work,personal"
seedpass get "github"
# Retrieve a TOTP entry
seedpass entry get "email"
# The code is printed and copied to your clipboard

# Sort or filter the list view
seedpass list --sort label
seedpass list --filter totp

# Use the Settings menu to configure an extra backup directory
# on an external drive.
```

For additional command examples, see [docs/advanced_cli.md](docs/advanced_cli.md).
Details on the REST API can be found in [docs/api_reference.md](docs/api_reference.md).

### Vault JSON Layout

The encrypted index file `seedpass_entries_db.json.enc` begins with `schema_version` `2` and stores an `entries` map keyed by entry numbers.

```json
{
  "schema_version": 2,
  "entries": {
    "0": {
      "label": "example.com",
      "length": 8,
      "type": "password",
      "notes": ""
    }
  }
}
```
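Once decrypted, working with the index is ordinary JSON handling. A small sketch of checking the version and walking the layout shown above (the sample payload is the same as the example):

```python
import json

raw = """
{
  "schema_version": 2,
  "entries": {
    "0": {"label": "example.com", "length": 8,
          "type": "password", "notes": ""}
  }
}
"""

index = json.loads(raw)
# Real code would refuse or migrate unknown versions; here we just check.
assert index["schema_version"] == 2

# Entry numbers are JSON object keys (strings), so sort them numerically.
for num, entry in sorted(index["entries"].items(), key=lambda kv: int(kv[0])):
    print(f"{num}: {entry['label']} ({entry['type']})")
```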
## Usage

After successfully installing the dependencies, launch the interactive TUI with:

```bash
seedpass
```

You can also run directly from the repository using:

```bash
python src/main.py
```

You can explore other CLI commands using:
```bash
seedpass --help
```
For a full list of commands see [docs/advanced_cli.md](docs/advanced_cli.md). The REST API is described in [docs/api_reference.md](docs/api_reference.md).

### Running the Application

1. **Start the Application:**

   ```bash
   seedpass
   ```
   *(or `python src/main.py` if running directly from the repository)*

2. **Follow the Prompts:**

   - **Seed Profile Selection:** If you have existing seed profiles, you'll be prompted to select one or add a new one.
   - **Enter Your Password:** This password is crucial as it is used to encrypt and decrypt your parent seed and seed index data.
   - **Select an Option:** Navigate through the menu by entering the number corresponding to your desired action.

   Example menu:

   ```bash
   Select an option:
   1. Add Entry
   2. Retrieve Entry
   3. Search Entries
   4. List Entries
   5. Modify an Existing Entry
   6. 2FA Codes
   7. Settings

   Enter your choice (1-7) or press Enter to exit:
   ```

When choosing **Add Entry**, you can now select from:

- **Password**
- **2FA (TOTP)**
- **SSH Key**
- **Seed Phrase**
- **Nostr Key Pair**
- **PGP Key**
- **Key/Value**
- **Managed Account**

### Adding a 2FA Entry

1. From the main menu choose **Add Entry** and select **2FA (TOTP)**.
2. Pick **Make 2FA** to derive a new secret from your seed or **Import 2FA** to paste an existing `otpauth://` URI or secret.
3. Provide a label for the account (for example, `GitHub`).
4. SeedPass automatically chooses the next available derivation index when deriving.
5. Optionally specify the TOTP period and digit count.
6. SeedPass displays the URI and secret, along with a QR code you can scan to import it into your authenticator app.
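Once the secret exists, the codes themselves follow standard TOTP (RFC 6238). A stdlib-only sketch of how any authenticator turns a base32 secret into a code; this is generic RFC 6238 logic, not SeedPass's internal derivation:

```python
import base64
import hashlib
import hmac
import struct
import time


def totp(secret_b32: str, period: int = 30, digits: int = 6, now=None) -> str:
    """Compute an RFC 6238 TOTP code (HMAC-SHA1 flavour)."""
    # Pad the base32 secret to a multiple of 8 characters before decoding.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    counter = int((time.time() if now is None else now) // period)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)


# RFC 6238 Appendix B test vector: ASCII secret "12345678901234567890",
# base32-encoded, at T=59 with 8 digits yields 94287082.
```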

### Modifying a 2FA Entry

1. From the main menu choose **Modify an Existing Entry** and enter the index of the 2FA code you want to edit.
2. SeedPass will show the current label, period, digit count, and archived status.
3. Enter new values or press **Enter** to keep the existing settings.
4. When retrieving a 2FA entry you can press **E** to edit the label, period or digit count, or **A** to archive/unarchive it.
5. The updated entry is saved back to your encrypted vault.
6. Archived entries are hidden from lists but can be viewed or restored from the **List Archived** menu.
7. When editing an archived entry you'll be prompted to restore it after saving your changes.

### Using Secret Mode

When **Secret Mode** is enabled, SeedPass copies retrieved passwords directly to your clipboard instead of displaying them on screen. The clipboard clears automatically after the delay you choose.

1. From the main menu open **Settings** and select **Toggle Secret Mode**.
2. Choose how many seconds to keep passwords on the clipboard.
3. Retrieve an entry and SeedPass will confirm the password was copied.

### Additional Entry Types

SeedPass supports storing more than just passwords and 2FA secrets. You can also create entries for:

- **SSH Key** – deterministically derive an Ed25519 key pair for servers or git hosting platforms.
- **Seed Phrase** – store only the BIP-85 index and word count. The mnemonic is regenerated on demand.
- **PGP Key** – derive an OpenPGP key pair from your master seed.
- **Nostr Key Pair** – store the index used to derive an `npub`/`nsec` pair for Nostr clients. When you retrieve one of these entries, SeedPass can display QR codes for the keys. The `npub` is wrapped in the `nostr:` URI scheme so any client can scan it, while the `nsec` QR is shown only after a security warning.
- **Key/Value** – store a simple key and value for miscellaneous secrets or configuration data.
- **Managed Account** – derive a child seed under the current profile. Loading a managed account switches to a nested profile and the header shows `<parent_fp> > Managed Account > <child_fp>`. Press Enter on the main menu to return to the parent profile.

The table below summarizes the extra fields stored for each entry type. Every entry includes a `label`, while only password entries track a `url`.

| Entry Type | Extra Fields |
| :--- | :--- |
| Password | `username`, `url`, `length`, `archived`, optional `notes`, optional `custom_fields` (may include hidden fields), optional `tags` |
| 2FA (TOTP) | `index` or `secret`, `period`, `digits`, `archived`, optional `notes`, optional `tags` |
| SSH Key | `index`, `archived`, optional `notes`, optional `tags` |
| Seed Phrase | `index`, `word_count` *(mnemonic regenerated; never stored)*, `archived`, optional `notes`, optional `tags` |
| PGP Key | `index`, `key_type`, `archived`, optional `user_id`, optional `notes`, optional `tags` |
| Nostr Key Pair | `index`, `archived`, optional `notes`, optional `tags` |
| Key/Value | `value`, `archived`, optional `notes`, optional `custom_fields`, optional `tags` |
| Managed Account | `index`, `word_count`, `fingerprint`, `archived`, optional `notes`, optional `tags` |

### Managing Multiple Seeds

SeedPass allows you to manage multiple seed profiles (previously referred to as "fingerprints"). Each seed profile has its own parent seed and associated data, enabling you to compartmentalize your passwords.

- **Add a New Seed Profile:**
  - From the main menu, select **Settings** then **Profiles** and choose "Add a New Seed Profile".
  - Choose to enter an existing seed or generate a new one.
  - If generating a new seed, you'll be provided with a 12-word BIP-85 seed phrase. **Ensure you write this down and store it securely.**

- **Switch Between Seed Profiles:**
  - From the **Profiles** menu, select "Switch Seed Profile".
  - You'll see a list of available seed profiles.
  - Enter the number corresponding to the seed profile you wish to switch to.
  - Enter the master password associated with that seed profile.

- **List All Seed Profiles:**
  - In the **Profiles** menu, choose "List All Seed Profiles" to view all existing profiles.

**Note:** The term "seed profile" is used to represent different sets of seeds you can manage within SeedPass. This provides an intuitive way to handle multiple identities or sets of passwords.

### Configuration File and Settings

SeedPass keeps per-profile settings in an encrypted file named `seedpass_config.json.enc` inside each profile directory under `~/.seedpass/`. This file stores your chosen Nostr relays and the optional settings PIN. New profiles start with the following default relays:

```
wss://relay.snort.social
wss://nostr.oxtr.dev
wss://relay.primal.net
```

You can manage your relays and sync with Nostr from the **Settings** menu:

1. From the main menu, choose `6` (**Settings**).
2. Select `2` (**Nostr**) to open the Nostr submenu.
3. Choose `1` to back up your encrypted index to Nostr.
4. Select `2` to restore the index from Nostr.
5. Choose `3` to view your current relays.
6. Select `4` to add a new relay URL.
7. Choose `5` to remove a relay by number.
8. Select `6` to reset to the default relay list.
9. Choose `7` to display your Nostr public key.
10. Select `8` to return to the Settings menu.

Back in the Settings menu you can:

* Select `3` to change your master password.
* Choose `4` to verify the script checksum.
* Select `5` to generate a new script checksum.
* Choose `6` to back up the parent seed.
* Select `7` to export the database to an encrypted file.
* Choose `8` to import a database from a backup file. This also performs a Nostr sync automatically.
* Select `9` to export all 2FA codes.
* Choose `10` to set an additional backup location. A backup is created immediately after the directory is configured.
* Select `11` to change the inactivity timeout.
* Choose `12` to lock the vault and require re-entry of your password.
* Select `13` to view seed profile stats. The summary lists counts for passwords, TOTP codes, SSH keys, seed phrases, and PGP keys. It also shows whether both the encrypted database and the script itself pass checksum validation.
* Choose `14` to toggle Secret Mode and set the clipboard clear delay.
* Select `15` to toggle Offline Mode and work locally without contacting Nostr.
* Choose `16` to toggle Quick Unlock so subsequent actions skip the password prompt. Startup delay is unchanged.
* Select `17` to return to the main menu.

## Running Tests

SeedPass includes a small suite of unit tests located under `src/tests`. **Before running `pytest`, install the test requirements.** Activate your virtual environment and run `pip install -r src/requirements.txt` so all testing dependencies are available, then run the tests with **pytest**. Use `-vv` to see INFO-level log messages from each passing test:

```bash
pip install -r src/requirements.txt
pytest -vv
```

`test_fuzz_key_derivation.py` uses Hypothesis to generate random passwords, seeds, and configuration data. It performs round-trip encryption tests with the `EncryptionManager` to catch edge cases automatically. These fuzz tests run in CI alongside the rest of the suite.

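The property those fuzz tests check — decrypting an encrypted value must return the original — can be sketched with a toy cipher. This is illustrative only: `xor_cipher` is a hypothetical stand-in, and SeedPass's real `EncryptionManager` uses proper authenticated encryption:

```python
import os

def xor_cipher(data: bytes, key: bytes) -> bytes:
    """Toy symmetric cipher: XOR each byte with the repeating key."""
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

# Round-trip property: decrypt(encrypt(m)) == m for random inputs,
# the same shape of invariant Hypothesis exercises against EncryptionManager.
for _ in range(100):
    key = os.urandom(32)
    message = os.urandom(os.urandom(1)[0])  # random length 0-255
    assert xor_cipher(xor_cipher(message, key), key) == message
print("round-trip property held for 100 random inputs")
```

Hypothesis automates exactly this loop, but also shrinks any failing input to a minimal counterexample.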
### Exploring Nostr Index Size Limits

`test_nostr_index_size.py` demonstrates how SeedPass rotates snapshots after too many delta events. Each chunk is limited to 50 KB, so the test gradually grows the vault to observe when a new snapshot is triggered. Use the `NOSTR_TEST_DELAY` environment variable to control the delay between publishes when experimenting with large vaults.

```bash
pytest -vv -s -n 0 src/tests/test_nostr_index_size.py --desktop --max-entries=1000
```

### Generating a Test Profile

Use the helper script below to populate a profile with sample entries for testing:

```bash
python scripts/generate_test_profile.py --profile demo_profile --count 100
```

The script determines the fingerprint from the generated seed and stores the vault under `~/.seedpass/tests/<fingerprint>`. SeedPass only discovers profiles inside `~/.seedpass/`, so copy the fingerprint directory out of the `tests` subfolder (or adjust `APP_DIR` in `constants.py`) if you want to use the generated seed with the main application. The fingerprint is printed after creation, and the encrypted index is published to Nostr. Use that same seed phrase to load SeedPass. The app checks Nostr on startup and pulls any newer snapshot so your vault stays in sync across machines. Synchronization also runs in the background after unlocking or when switching profiles.

### Automatically Updating the Script Checksum

SeedPass stores a SHA-256 checksum for the main program in `~/.seedpass/seedpass_script_checksum.txt`. To keep this value in sync with the source code, install the pre-push git hook:

```bash
pre-commit install -t pre-push
```

After running this command, every `git push` will execute `scripts/update_checksum.py`, updating the checksum file automatically.

If the checksum file is missing, generate it manually:

```bash
python scripts/update_checksum.py
```

To run mutation tests locally, generate coverage data first and then execute `mutmut`:

```bash
pytest --cov=src src/tests
python -m mutmut run --paths-to-mutate src --tests-dir src/tests --runner "python -m pytest -q" --use-coverage --no-progress
python -m mutmut results
```

Mutation testing is disabled in the GitHub workflow due to reliability issues and should be run in a desktop environment instead.

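Conceptually, the value the hook maintains is an ordinary SHA-256 digest of the script file, which you can reproduce with the standard library. This is a sketch — `scripts/update_checksum.py` is the authoritative implementation, and the demo file below is a throwaway placeholder:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_of_file(path: Path) -> str:
    """Stream the file in chunks so large scripts never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Demo against a throwaway file; real usage would point at the SeedPass script.
with tempfile.NamedTemporaryFile(delete=False) as tmp:
    tmp.write(b"print('hello seedpass')\n")
print(sha256_of_file(Path(tmp.name)))
```

Comparing this digest with the stored checksum file is all that checksum verification does.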
## Security Considerations

**Important:** The password you use to encrypt your parent seed is also required to decrypt the seed index data retrieved from Nostr. **It is imperative to remember this password** and to use it with the same seed; losing it means you won't be able to access your stored index. Secure your 12-word seed **and** your master password.

- **Backup Your Data:** Regularly back up your encrypted data and checksum files to prevent data loss.
- **Backup the Settings PIN:** Your settings PIN is stored in the encrypted configuration file. Keep a copy of this file or remember the PIN, as losing it will require deleting the file and reconfiguring your relays.
- **Protect Your Passwords:** Do not share your master password or seed phrases with anyone, and ensure they are strong and unique.
- **Revealing the Parent Seed:** The `vault reveal-parent-seed` command and `/api/v1/parent-seed` endpoint print your seed in plain text. Run them only in a secure environment.
- **No PBKDF2 Salt Needed:** SeedPass deliberately omits an explicit PBKDF2 salt. Every password is derived from a unique 512-bit BIP-85 child seed, which already provides stronger per-password uniqueness than a conventional 128-bit salt.
- **Checksum Verification:** Always verify the script's checksum to ensure its integrity and protect against unauthorized modifications.
- **Potential Bugs and Limitations:** Be aware that the software may contain bugs and lacks certain features. Snapshot chunks are capped at 50 KB, and the client rotates snapshots after enough delta events accumulate. The security of memory management and logs has not been thoroughly evaluated and may pose risks of leaking sensitive information.
- **Multiple Seeds Management:** While managing multiple seeds adds flexibility, it also increases the responsibility to secure each seed and its associated password.
- **Default KDF Iterations:** New profiles start with 50,000 PBKDF2 iterations. Use `seedpass config set kdf_iterations` to change this.
- **Offline Mode:** Disable Nostr sync to keep all operations local until you re-enable networking.
- **Quick Unlock:** Stores a hashed copy of your password so future actions skip the prompt. Startup delay is unchanged. Use with caution on shared systems.

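To get a feel for what the KDF iteration count controls, here is a minimal PBKDF2 derivation using Python's standard library. This is illustrative only — the salt handling, key length, and hash primitive shown are assumptions, not SeedPass's actual parameters:

```python
import hashlib

# Assumptions: SHA-256 PRF, 32-byte output, and the 50,000-iteration default
# mentioned above. The "salt" here is a placeholder; SeedPass derives
# per-password uniqueness from BIP-85 child seeds instead.
password = b"correct horse battery staple"
salt = b"placeholder-child-seed-material"
key = hashlib.pbkdf2_hmac("sha256", password, salt, 50_000, dklen=32)
print(key.hex())
```

Raising the iteration count makes each unlock attempt proportionally slower, which is the cost an attacker pays per guess.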
## Contributing

Contributions are welcome! If you have suggestions for improvements, bug fixes, or new features, please follow these steps:

1. **Fork the Repository:** Click the "Fork" button on the top right of the repository page.

2. **Create a Branch:** Create a new branch for your feature or bugfix.

   ```bash
   git checkout -b feature/YourFeatureName
   ```

3. **Commit Your Changes:** Make your changes and commit them with clear messages.

   ```bash
   git commit -m "Add feature X"
   ```

4. **Push to GitHub:** Push your changes to your forked repository.

   ```bash
   git push origin feature/YourFeatureName
   ```

5. **Create a Pull Request:** Navigate to the original repository and create a pull request describing your changes.

## License

This project is licensed under the [MIT License](LICENSE). See the [LICENSE](LICENSE) file for details.

## Contact

For any questions, suggestions, or support, please open an issue on the [GitHub repository](https://github.com/PR0M3TH3AN/SeedPass/issues) or contact the maintainer directly on [Nostr](https://primal.net/p/npub15jnttpymeytm80hatjqcvhhqhzrhx6gxp8pq0wn93rhnu8s9h9dsha32lx).

*Stay secure and keep your passwords safe with SeedPass!*

---
11
docs/docs/package.json
Normal file
@@ -0,0 +1,11 @@
{
  "name": "docs",
  "private": true,
  "scripts": {
    "dev": "eleventy --serve",
    "build": "node node_modules/archivox/src/generator/index.js"
  },
  "dependencies": {
    "archivox": "^1.0.0"
  }
}
@@ -1,25 +0,0 @@
# Index Migrations

SeedPass stores its password index in an encrypted JSON file. Each index contains a `schema_version` field so the application knows how to upgrade older files.

## How migrations work

When the vault loads the index, `Vault.load_index()` checks the version and applies migrations defined in `password_manager/migrations.py`. The `apply_migrations()` function iterates through registered migrations until the file reaches `LATEST_VERSION`.

If an old file lacks `schema_version`, it is treated as version 0 and upgraded to the latest format. Attempting to load an index from a future version will raise an error.

## Upgrading an index

1. The JSON is decrypted and parsed.
2. `apply_migrations()` applies any necessary steps, such as injecting the `schema_version` field on first upgrade.
3. After migration, the updated index is saved back to disk.

This process happens automatically; users only need to open their vault to upgrade older indices.
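The version-stepping loop the removed document describes can be sketched in a few lines. Names like `MIGRATIONS` and the migration bodies here are illustrative; the real registry lives in `password_manager/migrations.py` and may differ:

```python
# Sketch of the migration loop: step the index version-by-version until it
# reaches LATEST_VERSION, treating a missing schema_version as version 0.
LATEST_VERSION = 2

MIGRATIONS = {
    0: lambda data: {**data, "schema_version": 1},  # inject the version field
    1: lambda data: {**data, "schema_version": 2},  # hypothetical later step
}

def apply_migrations(data: dict) -> dict:
    version = data.get("schema_version", 0)
    if version > LATEST_VERSION:
        raise ValueError(f"index comes from future version {version}")
    while version < LATEST_VERSION:
        data = MIGRATIONS[version](data)
        version = data["schema_version"]
    return data

print(apply_migrations({"entries": []})["schema_version"])
```

A legacy file without `schema_version` walks through every registered step; a file from a newer release is rejected rather than silently mangled.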
3
docs/netlify.toml
Normal file
@@ -0,0 +1,3 @@
[build]
command = "node build-docs.js"
publish = "_site"
6357
docs/package-lock.json
generated
Normal file
File diff suppressed because it is too large
25
docs/package.json
Normal file
@@ -0,0 +1,25 @@
{
  "name": "archivox",
  "version": "1.0.0",
  "description": "Archivox static site generator",
  "scripts": {
    "dev": "eleventy --serve",
    "build": "node src/generator/index.js",
    "test": "jest"
  },
  "dependencies": {
    "@11ty/eleventy": "^2.0.1",
    "gray-matter": "^4.0.3",
    "marked": "^11.1.1",
    "lunr": "^2.3.9",
    "js-yaml": "^4.1.0"
  },
  "devDependencies": {
    "jest": "^29.6.1",
    "puppeteer": "^24.12.1"
  },
  "license": "MIT",
  "bin": {
    "create-archivox": "./bin/create-archivox.js"
  }
}
7
docs/plugins/analytics.js
Normal file
@@ -0,0 +1,7 @@
module.exports = {
  onPageRendered: async ({ html, file }) => {
    // Example: inject analytics script into each page
    const snippet = '\n<script>console.log("Page viewed: ' + file + '")</script>';
    return { html: html.replace('</body>', `${snippet}</body>`) };
  }
};
70
docs/src/config/loadConfig.js
Normal file
@@ -0,0 +1,70 @@
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');

function deepMerge(target, source) {
  for (const key of Object.keys(source)) {
    if (
      source[key] &&
      typeof source[key] === 'object' &&
      !Array.isArray(source[key])
    ) {
      target[key] = deepMerge(target[key] || {}, source[key]);
    } else if (source[key] !== undefined) {
      target[key] = source[key];
    }
  }
  return target;
}

function loadConfig(configPath = path.join(process.cwd(), 'config.yaml')) {
  let raw = {};
  if (fs.existsSync(configPath)) {
    try {
      raw = yaml.load(fs.readFileSync(configPath, 'utf8')) || {};
    } catch (e) {
      console.error(`Failed to parse ${configPath}: ${e.message}`);
      process.exit(1);
    }
  }

  const defaults = {
    site: {
      title: 'Archivox',
      description: '',
      logo: '',
      favicon: ''
    },
    navigation: {
      search: true
    },
    footer: {},
    theme: {
      name: 'minimal',
      darkMode: false
    },
    features: {},
    pluginsDir: 'plugins',
    plugins: []
  };

  const config = deepMerge(defaults, raw);

  const errors = [];
  if (
    !config.site ||
    typeof config.site.title !== 'string' ||
    !config.site.title.trim()
  ) {
    errors.push('site.title is required in config.yaml');
  }

  if (errors.length) {
    errors.forEach(err => console.error(`Config error: ${err}`));
    process.exit(1);
  }

  return config;
}

module.exports = loadConfig;
24
docs/src/config/loadPlugins.js
Normal file
@@ -0,0 +1,24 @@
const path = require('path');
const fs = require('fs');

function loadPlugins(config) {
  const dir = path.resolve(process.cwd(), config.pluginsDir || 'plugins');
  const names = Array.isArray(config.plugins) ? config.plugins : [];
  const plugins = [];
  for (const name of names) {
    const file = path.join(dir, name.endsWith('.js') ? name : `${name}.js`);
    if (fs.existsSync(file)) {
      try {
        const mod = require(file);
        plugins.push(mod);
      } catch (e) {
        console.error(`Failed to load plugin ${name}:`, e);
      }
    } else {
      console.warn(`Plugin not found: ${file}`);
    }
  }
  return plugins;
}

module.exports = loadPlugins;
235
docs/src/generator/index.js
Normal file
@@ -0,0 +1,235 @@
// Generator entry point for Archivox
const fs = require('fs');
const path = require('path');
const matter = require('gray-matter');
const lunr = require('lunr');
const marked = require('marked');
const { lexer } = marked;
const loadConfig = require('../config/loadConfig');
const loadPlugins = require('../config/loadPlugins');

function formatName(name) {
  return name
    .replace(/^\d+[-_]?/, '')
    .replace(/\.md$/, '');
}

async function readDirRecursive(dir) {
  const entries = await fs.promises.readdir(dir, { withFileTypes: true });
  const files = [];
  for (const entry of entries) {
    const res = path.resolve(dir, entry.name);
    if (entry.isDirectory()) {
      files.push(...await readDirRecursive(res));
    } else {
      files.push(res);
    }
  }
  return files;
}

function buildNav(pages) {
  const tree = {};
  for (const page of pages) {
    const rel = page.file.replace(/\\/g, '/');
    if (rel === 'index.md') {
      if (!tree.children) tree.children = [];
      tree.children.push({
        name: 'index.md',
        children: [],
        page: page.data,
        path: `/${rel.replace(/\.md$/, '.html')}`,
        order: page.data.order || 0
      });
      continue;
    }
    const parts = rel.split('/');
    let node = tree;
    for (let i = 0; i < parts.length; i++) {
      const part = parts[i];
      const isLast = i === parts.length - 1;
      const isIndex = isLast && part === 'index.md';
      if (isIndex) {
        node.page = page.data;
        node.path = `/${rel.replace(/\.md$/, '.html')}`;
        node.order = page.data.order || 0;
        break;
      }
      if (!node.children) node.children = [];
      let child = node.children.find(c => c.name === part);
      if (!child) {
        child = { name: part, children: [] };
        node.children.push(child);
      }
      node = child;
      if (isLast) {
        node.page = page.data;
        node.path = `/${rel.replace(/\.md$/, '.html')}`;
        node.order = page.data.order || 0;
      }
    }
  }

  function finalize(node, isRoot = false) {
    if (node.page && node.page.title) {
      node.displayName = node.page.title;
    } else if (node.name) {
      node.displayName = formatName(node.name);
    }
    if (node.children) {
      node.children.forEach(c => finalize(c));
      node.children.sort((a, b) => {
        const orderDiff = (a.order || 0) - (b.order || 0);
        if (orderDiff !== 0) return orderDiff;
        return (a.displayName || '').localeCompare(b.displayName || '');
      });
      node.isSection = node.children.length > 0;
    } else {
      node.isSection = false;
    }
    if (isRoot && node.children) {
      const idx = node.children.findIndex(c => c.name === 'index.md');
      if (idx > 0) {
        const [first] = node.children.splice(idx, 1);
        node.children.unshift(first);
      }
    }
  }

  finalize(tree, true);
  return tree.children || [];
}

async function generate({ contentDir = 'content', outputDir = '_site', configPath } = {}) {
  const config = loadConfig(configPath);
  const plugins = loadPlugins(config);

  async function runHook(name, data) {
    for (const plugin of plugins) {
      if (typeof plugin[name] === 'function') {
        const res = await plugin[name](data);
        if (res !== undefined) data = res;
      }
    }
    return data;
  }

  if (!fs.existsSync(contentDir)) {
    console.error(`Content directory not found: ${contentDir}`);
    return;
  }

  const files = await readDirRecursive(contentDir);
  const pages = [];
  const assets = [];
  const searchDocs = [];

  for (const file of files) {
    const rel = path.relative(contentDir, file);
    if (file.endsWith('.md')) {
      const srcStat = await fs.promises.stat(file);
      const outPath = path.join(outputDir, rel.replace(/\.md$/, '.html'));
      if (fs.existsSync(outPath)) {
        const outStat = await fs.promises.stat(outPath);
        if (srcStat.mtimeMs <= outStat.mtimeMs) {
          continue; // skip unchanged
        }
      }
      let raw = await fs.promises.readFile(file, 'utf8');
      const mdObj = await runHook('onParseMarkdown', { file: rel, content: raw });
      if (mdObj && mdObj.content) raw = mdObj.content;
      const parsed = matter(raw);
      const tokens = lexer(parsed.content || '');
      const firstHeading = tokens.find(t => t.type === 'heading');
      const title = parsed.data.title || (firstHeading ? firstHeading.text : path.basename(rel, '.md'));
      const headings = tokens.filter(t => t.type === 'heading').map(t => t.text).join(' ');
      const htmlBody = require('marked').parse(parsed.content || '');
      const bodyText = htmlBody.replace(/<[^>]+>/g, ' ');
      pages.push({ file: rel, data: { ...parsed.data, title } });
      searchDocs.push({ id: rel.replace(/\.md$/, '.html'), url: '/' + rel.replace(/\.md$/, '.html'), title, headings, body: bodyText });
    } else {
      assets.push(rel);
    }
  }

  const nav = buildNav(pages);
  await fs.promises.mkdir(outputDir, { recursive: true });
  await fs.promises.writeFile(path.join(outputDir, 'navigation.json'), JSON.stringify(nav, null, 2));
  await fs.promises.writeFile(path.join(outputDir, 'config.json'), JSON.stringify(config, null, 2));

  const searchIndex = lunr(function() {
    this.ref('id');
    this.field('title');
    this.field('headings');
    this.field('body');
    searchDocs.forEach(d => this.add(d));
  });
  await fs.promises.writeFile(
    path.join(outputDir, 'search-index.json'),
    JSON.stringify({ index: searchIndex.toJSON(), docs: searchDocs }, null, 2)
  );

  const nunjucks = require('nunjucks');
  const env = new nunjucks.Environment(
    new nunjucks.FileSystemLoader('templates')
  );
  env.addGlobal('navigation', nav);
  env.addGlobal('config', config);

  for (const page of pages) {
    const outPath = path.join(outputDir, page.file.replace(/\.md$/, '.html'));
    await fs.promises.mkdir(path.dirname(outPath), { recursive: true });
    const srcPath = path.join(contentDir, page.file);
    const raw = await fs.promises.readFile(srcPath, 'utf8');
    const { content, data } = matter(raw);
    const body = require('marked').parse(content);

    const pageContext = {
      title: data.title || page.data.title,
      content: body,
      page: { url: '/' + page.file.replace(/\.md$/, '.html') }
    };

    let html = env.render('layout.njk', pageContext);
    const result = await runHook('onPageRendered', { file: page.file, html });
    if (result && result.html) html = result.html;
    await fs.promises.writeFile(outPath, html);
  }

  for (const asset of assets) {
    const srcPath = path.join(contentDir, asset);
    const destPath = path.join(outputDir, asset);
    await fs.promises.mkdir(path.dirname(destPath), { recursive: true });
    try {
      const sharp = require('sharp');
      if (/(png|jpg|jpeg)/i.test(path.extname(asset))) {
        await sharp(srcPath).toFile(destPath);
        continue;
      }
    } catch (e) {
      // sharp not installed, fallback
    }
    await fs.promises.copyFile(srcPath, destPath);
  }

  // Copy the main assets directory (theme, js, etc.)
  // Always resolve assets relative to the Archivox package so it works
  // regardless of the current working directory or config location.
  const mainAssetsSrc = path.resolve(__dirname, '../../assets');
  const mainAssetsDest = path.join(outputDir, 'assets');

  if (fs.existsSync(mainAssetsSrc)) {
    console.log(`Copying main assets from ${mainAssetsSrc} to ${mainAssetsDest}`);
    // Use fs.promises.cp for modern Node.js, it's like `cp -R`
    await fs.promises.cp(mainAssetsSrc, mainAssetsDest, { recursive: true });
  }
}

module.exports = { generate, buildNav };

if (require.main === module) {
  generate().catch(err => {
    console.error(err);
    process.exit(1);
  });
}
6
docs/starter/config.yaml
Normal file
@@ -0,0 +1,6 @@
site:
  title: "Archivox Docs"
  description: "Simple static docs."

navigation:
  search: true
3
docs/starter/content/01-getting-started/01-install.md
Normal file
@@ -0,0 +1,3 @@
# Install

Run `npm install` then `npm run build` to generate your site.
3
docs/starter/content/01-getting-started/index.md
Normal file
@@ -0,0 +1,3 @@
# Getting Started

This section helps you begin with Archivox.
3
docs/starter/content/index.md
Normal file
@@ -0,0 +1,3 @@
# Welcome to Archivox

This is your new documentation site. Start editing files in the `content/` folder.
11
docs/starter/package.json
Normal file
@@ -0,0 +1,11 @@
{
  "name": "my-archivox-site",
  "private": true,
  "scripts": {
    "dev": "eleventy --serve",
    "build": "node node_modules/archivox/src/generator/index.js"
  },
  "dependencies": {
    "archivox": "*"
  }
}
23
docs/templates/layout.njk
vendored
Normal file
@@ -0,0 +1,23 @@
<!DOCTYPE html>
<html lang="en" data-theme="{% if config.theme.darkMode %}dark{% else %}light{% endif %}">
<head>
  <meta charset="UTF-8" />
  <meta name="viewport" content="width=device-width, initial-scale=1" />
  <title>{{ title | default(config.site.title) }}</title>
  <link rel="stylesheet" href="/assets/theme.css" />
</head>
<body>
  {% include "partials/header.njk" %}
  <div id="sidebar-overlay" class="sidebar-overlay"></div>
  <div class="container">
    {% include "partials/sidebar.njk" %}
    <main id="content">
      <nav id="breadcrumbs" class="breadcrumbs"></nav>
      {{ content | safe }}
    </main>
  </div>
  {% include "partials/footer.njk" %}
  <script src="/assets/lunr.js"></script>
  <script src="/assets/theme.js"></script>
</body>
</html>
14
docs/templates/partials/footer.njk
vendored
Normal file
@@ -0,0 +1,14 @@
<footer class="footer">
  {% if config.footer.links %}
  <nav class="footer-links">
    {% for link in config.footer.links %}
    <a href="{{ link.url }}">{{ link.text }}</a>
    {% endfor %}
  </nav>
  {% endif %}
  <p>© {{ config.site.title }}</p>
  <div class="footer-permanent-links">
    <a href="https://github.com/PR0M3TH3AN/Archivox">GitHub</a>
    <a href="https://nostrtipjar.netlify.app/?n=npub15jnttpymeytm80hatjqcvhhqhzrhx6gxp8pq0wn93rhnu8s9h9dsha32lx">Tip Jar</a>
  </div>
</footer>
7
docs/templates/partials/header.njk
vendored
Normal file
@@ -0,0 +1,7 @@
<header class="header">
  <button id="sidebar-toggle" class="sidebar-toggle" aria-label="Toggle navigation">☰</button>
  <a href="/" class="logo">{{ config.site.title }}</a>
  <input id="search-input" class="search-input" type="search" placeholder="Search..." aria-label="Search" />
  <button id="theme-toggle" class="theme-toggle" aria-label="Toggle dark mode">🌓</button>
  <div id="search-results" class="search-results"></div>
</header>
29
docs/templates/partials/sidebar.njk
vendored
Normal file
@@ -0,0 +1,29 @@
{% macro renderNav(items, pageUrl) %}
<ul>
  {% for item in items %}
  <li>
    {% if item.children and item.children.length %}
    {% set sectionPath = item.path | replace('index.html', '') %}
    <details class="nav-section" {% if pageUrl.startsWith(sectionPath) %}open{% endif %}>
      <summary>
        <a href="{{ item.path }}" class="nav-link{% if item.path === pageUrl %} active{% endif %}">
          {{ item.displayName or item.page.title }}
        </a>
      </summary>
      {{ renderNav(item.children, pageUrl) }}
    </details>
    {% else %}
    <a href="{{ item.path }}" class="nav-link{% if item.path === pageUrl %} active{% endif %}">
      {{ item.displayName or item.page.title }}
    </a>
    {% endif %}
  </li>
  {% endfor %}
</ul>
{% endmacro %}

<aside class="sidebar" id="sidebar">
  <nav>
    {{ renderNav(navigation, page.url) }}
  </nav>
</aside>
@@ -1,31 +0,0 @@
from pathlib import Path
from cryptography.fernet import Fernet

from password_manager.encryption import EncryptionManager
from password_manager.vault import Vault
from password_manager.entry_management import EntryManager
from password_manager.backup import BackupManager
from constants import initialize_app


def main() -> None:
    """Demonstrate basic EntryManager usage."""
    initialize_app()
    key = Fernet.generate_key()
    enc = EncryptionManager(key, Path("."))
    vault = Vault(enc, Path("."))
    backup_mgr = BackupManager(Path("."))
    manager = EntryManager(vault, backup_mgr)

    index = manager.add_entry(
        "Example Website",
        16,
        username="user123",
        url="https://example.com",
    )
    print(manager.retrieve_entry(index))
    manager.list_all_entries()


if __name__ == "__main__":
    main()
@@ -1,15 +0,0 @@
from password_manager.manager import PasswordManager
from nostr.client import NostrClient
from constants import initialize_app


def main() -> None:
    """Show how to initialise PasswordManager with Nostr support."""
    initialize_app()
    manager = PasswordManager()
    manager.nostr_client = NostrClient(encryption_manager=manager.encryption_manager)
    # Sample actions could be called on ``manager`` here.


if __name__ == "__main__":
    main()
@@ -40,6 +40,8 @@
        </li>
        <li role="none"><a href="#disclaimer" role="menuitem">Disclaimer</a>
        </li>
        <li role="none"><a href="https://beta-seedpass-docs.netlify.app/" role="menuitem">Docs</a>
        </li>
      </ul>
    </div>
  </nav>
@@ -1,7 +1,8 @@
aiohappyeyeballs==2.6.1
aiohttp==3.12.13
aiosignal==1.3.2
aiohttp==3.12.14
aiosignal==1.4.0
attrs==25.3.0
argon2-cffi==23.1.0
base58==2.1.1
bcrypt==4.3.0
bech32==1.2.0
@@ -32,6 +33,7 @@ monero==1.1.1
multidict==6.6.3
mutmut==2.4.4
nostr-sdk==0.42.1
orjson==3.10.18
packaging==25.0
parso==0.8.4
pgpy==0.6.0
@@ -1,10 +1,17 @@
#!/usr/bin/env python3
"""Generate a SeedPass test profile with realistic entries.

This script populates a profile directory with a variety of entry types.
This script populates a profile directory with a variety of entry types,
including key/value pairs and managed accounts.
If the profile does not exist, a new BIP-39 seed phrase is generated and
stored encrypted. A clear text copy is written to ``seed_phrase.txt`` so
it can be reused across devices.

Profiles are saved under ``~/.seedpass/tests/`` by default. SeedPass
only detects a profile automatically when it resides directly under
``~/.seedpass/``. Copy the generated fingerprint directory from the
``tests`` subfolder to ``~/.seedpass`` (or adjust ``APP_DIR`` in
``constants.py``) to use the test seed with the main application.
"""

from __future__ import annotations
@@ -46,7 +53,9 @@ import gzip
DEFAULT_PASSWORD = "testpassword"


def initialize_profile(profile_name: str) -> tuple[str, EntryManager, Path, str]:
def initialize_profile(
    profile_name: str,
) -> tuple[str, EntryManager, Path, str, ConfigManager]:
    """Create or load a profile and return the seed phrase, manager, directory and fingerprint."""
    initialize_app()
    seed_txt = APP_DIR / f"{profile_name}_seed.txt"
@@ -96,9 +105,11 @@ def initialize_profile(profile_name: str) -> tuple[str, EntryManager, Path, str]
    # Store the default password hash so the profile can be opened
    hashed = bcrypt.hashpw(DEFAULT_PASSWORD.encode(), bcrypt.gensalt()).decode()
    cfg_mgr.set_password_hash(hashed)
    # Ensure stored iterations match the PBKDF2 work factor used above
    cfg_mgr.set_kdf_iterations(100_000)
    backup_mgr = BackupManager(profile_dir, cfg_mgr)
    entry_mgr = EntryManager(vault, backup_mgr)
    return seed_phrase, entry_mgr, profile_dir, fingerprint
    return seed_phrase, entry_mgr, profile_dir, fingerprint, cfg_mgr


def random_secret(length: int = 16) -> str:
@@ -111,7 +122,7 @@ def populate(entry_mgr: EntryManager, seed: str, count: int) -> None:
    start_index = entry_mgr.get_next_index()
    for i in range(count):
        idx = start_index + i
        kind = idx % 7
        kind = idx % 9
        if kind == 0:
            entry_mgr.add_entry(
                label=f"site-{idx}.example.com",
@@ -133,18 +144,33 @@ def populate(entry_mgr: EntryManager, seed: str, count: int) -> None:
            )
        elif kind == 5:
            entry_mgr.add_nostr_key(f"nostr-{idx}", notes=f"Nostr key {idx}")
        else:
        elif kind == 6:
            entry_mgr.add_pgp_key(
                f"pgp-{idx}",
                seed,
                user_id=f"user{idx}@example.com",
                notes=f"PGP key {idx}",
            )
        elif kind == 7:
            entry_mgr.add_key_value(
                f"kv-{idx}",
                random_secret(20),
                notes=f"Key/Value {idx}",
            )
        else:
            entry_mgr.add_managed_account(
                f"acct-{idx}",
                seed,
                notes=f"Managed account {idx}",
            )


def main() -> None:
    parser = argparse.ArgumentParser(
        description="Create or extend a SeedPass test profile"
        description=(
            "Create or extend a SeedPass test profile (default PBKDF2 iterations:"
            " 100,000)"
        )
    )
    parser.add_argument(
        "--profile",
@@ -159,7 +185,7 @@ def main() -> None:
    )
    args = parser.parse_args()

    seed, entry_mgr, dir_path, fingerprint = initialize_profile(args.profile)
    seed, entry_mgr, dir_path, fingerprint, cfg_mgr = initialize_profile(args.profile)
    print(f"Using profile directory: {dir_path}")
    print(f"Parent seed: {seed}")
    if fingerprint:
@@ -173,6 +199,7 @@ def main() -> None:
        entry_mgr.vault.encryption_manager,
        fingerprint or dir_path.name,
        parent_seed=seed,
        config_manager=cfg_mgr,
    )
    asyncio.run(client.publish_snapshot(encrypted))
    print("[+] Data synchronized to Nostr.")
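The `kind = idx % 9` change above widens the round-robin so the populate loop now cycles through nine entry types instead of seven. The dispatch pattern can be sketched as follows; the handler labels here are illustrative placeholders, not SeedPass APIs:

```python
# Minimal sketch of the round-robin dispatch used by populate():
# each index picks one of nine entry-type handlers via `idx % 9`.
def make_handlers():
    def handler(kind):
        # Bind `kind` now to avoid Python's late-binding closure pitfall.
        return lambda idx: f"{kind}-{idx}"
    kinds = ["site", "k1", "k2", "k3", "k4", "nostr", "pgp", "kv", "acct"]
    return [handler(k) for k in kinds]

def populate_labels(start_index, count):
    handlers = make_handlers()
    return [handlers[(start_index + i) % 9](start_index + i) for i in range(count)]

print(populate_labels(0, 10))
# → ['site-0', 'k1-1', 'k2-2', 'k3-3', 'k4-4', 'nostr-5', 'pgp-6', 'kv-7', 'acct-8', 'site-9']
```

Because the bucket is derived from the entry index, extending a profile later continues the cycle where it left off rather than restarting at the first type.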
@@ -255,6 +255,11 @@ if ($LASTEXITCODE -ne 0) {
    Write-Error "Dependency installation failed."
}

& "$VenvDir\Scripts\python.exe" -m pip install -e .
if ($LASTEXITCODE -ne 0) {
    Write-Error "Failed to install SeedPass package"
}

# 5. Create launcher script
Write-Info "Creating launcher script..."
if (-not (Test-Path $LauncherDir)) { New-Item -ItemType Directory -Path $LauncherDir | Out-Null }
@@ -263,11 +268,17 @@ $LauncherContent = @"
@echo off
setlocal
call "%~dp0..\venv\Scripts\activate.bat"
python "%~dp0..\src\main.py" %*
"%~dp0..\venv\Scripts\python.exe" -m seedpass.cli %*
endlocal
"@
Set-Content -Path $LauncherPath -Value $LauncherContent -Force

$existingSeedpass = Get-Command seedpass -ErrorAction SilentlyContinue
if ($existingSeedpass -and $existingSeedpass.Source -ne $LauncherPath) {
    Write-Warning "Another 'seedpass' command was found at $($existingSeedpass.Source)."
    Write-Warning "Ensure '$LauncherDir' comes first in your PATH or remove the old installation."
}

# 6. Add launcher directory to User's PATH if needed
Write-Info "Checking if '$LauncherDir' is in your PATH..."
$UserPath = [System.Environment]::GetEnvironmentVariable("Path", "User")
@@ -281,4 +292,5 @@ if (($UserPath -split ';') -notcontains $LauncherDir) {
}

Write-Success "Installation/update complete!"
Write-Info "To run the application, please open a NEW terminal window and type: seedpass"
Write-Info "To launch the interactive TUI, open a NEW terminal window and run: seedpass"
Write-Info "'seedpass' resolves to: $(Get-Command seedpass | Select-Object -ExpandProperty Source)"
@@ -119,21 +119,29 @@ main() {
    print_info "Installing/updating Python dependencies from src/requirements.txt..."
    pip install --upgrade pip
    pip install -r src/requirements.txt
    pip install -e .
    deactivate

    # 7. Create launcher script
    print_info "Creating launcher script at '$LAUNCHER_PATH'..."
    mkdir -p "$LAUNCHER_DIR"
    cat > "$LAUNCHER_PATH" << EOF2
    cat > "$LAUNCHER_PATH" << EOF2
#!/bin/bash
source "$VENV_DIR/bin/activate"
exec python3 "$INSTALL_DIR/src/main.py" "\$@"
exec "$VENV_DIR/bin/seedpass" "\$@"
EOF2
    chmod +x "$LAUNCHER_PATH"

    existing_cmd=$(command -v seedpass 2>/dev/null || true)
    if [ -n "$existing_cmd" ] && [ "$existing_cmd" != "$LAUNCHER_PATH" ]; then
        print_warning "Another 'seedpass' command was found at $existing_cmd."
        print_warning "Ensure '$LAUNCHER_DIR' comes first in your PATH or remove the old installation."
    fi

    # 8. Final instructions
    print_success "Installation/update complete!"
    print_info "You can now run the application by typing: seedpass"
    print_info "You can now launch the interactive TUI by typing: seedpass"
    print_info "'seedpass' resolves to: $(command -v seedpass)"
    if [[ ":$PATH:" != *":$LAUNCHER_DIR:"* ]]; then
        print_warning "Directory '$LAUNCHER_DIR' is not in your PATH."
        print_warning "Please add 'export PATH=\"$HOME/.local/bin:$PATH\"' to your shell's config file (e.g., ~/.bashrc, ~/.zshrc) and restart your terminal."
scripts/uninstall.ps1 (new file, 41 lines)
@@ -0,0 +1,41 @@
#
# SeedPass Uninstaller for Windows
#
# Removes the SeedPass application files but preserves user data under ~/.seedpass

$AppRootDir = Join-Path $env:USERPROFILE ".seedpass"
$InstallDir = Join-Path $AppRootDir "app"
$LauncherDir = Join-Path $InstallDir "bin"
$LauncherName = "seedpass.cmd"

function Write-Info { param([string]$Message) Write-Host "[INFO] $Message" -ForegroundColor Cyan }
function Write-Success { param([string]$Message) Write-Host "[SUCCESS] $Message" -ForegroundColor Green }
function Write-Warning { param([string]$Message) Write-Host "[WARNING] $Message" -ForegroundColor Yellow }
function Write-Error { param([string]$Message) Write-Host "[ERROR] $Message" -ForegroundColor Red }

Write-Info "Removing SeedPass installation..."

if (Test-Path $InstallDir) {
    Remove-Item -Recurse -Force $InstallDir
    Write-Info "Deleted '$InstallDir'"
} else {
    Write-Info "Installation directory not found."
}

$LauncherPath = Join-Path $LauncherDir $LauncherName
if (Test-Path $LauncherPath) {
    Remove-Item -Force $LauncherPath
    Write-Info "Removed launcher '$LauncherPath'"
} else {
    Write-Info "Launcher not found."
}

Write-Info "Attempting to uninstall any global 'seedpass' package with pip..."
try {
    pip uninstall -y seedpass | Out-Null
} catch {
    try { pip3 uninstall -y seedpass | Out-Null } catch {}
}

Write-Success "SeedPass uninstalled. User data under '$AppRootDir' was left intact."
scripts/uninstall.sh (new file, 70 lines)
@@ -0,0 +1,70 @@
#!/bin/bash
#
# SeedPass Uninstaller for Linux and macOS
#
# Removes the SeedPass application files but preserves user data under ~/.seedpass

set -e

APP_ROOT_DIR="$HOME/.seedpass"
INSTALL_DIR="$APP_ROOT_DIR/app"
LAUNCHER_PATH="$HOME/.local/bin/seedpass"

print_info() { echo -e "\033[1;34m[INFO]\033[0m $1"; }
print_success() { echo -e "\033[1;32m[SUCCESS]\033[0m $1"; }
print_warning() { echo -e "\033[1;33m[WARNING]\033[0m $1"; }
print_error() { echo -e "\033[1;31m[ERROR]\033[0m $1"; }

# Remove any stale 'seedpass' executables that may still be on the PATH.
remove_stale_executables() {
    IFS=':' read -ra DIRS <<< "$PATH"
    for dir in "${DIRS[@]}"; do
        candidate="$dir/seedpass"
        if [ -f "$candidate" ] && [ "$candidate" != "$LAUNCHER_PATH" ]; then
            print_info "Removing old executable '$candidate'..."
            if rm -f "$candidate"; then
                rm_status=0
            else
                rm_status=$?
            fi
            if [ $rm_status -ne 0 ] && [ -f "$candidate" ]; then
                print_warning "Failed to remove $candidate – try deleting it manually"
            fi
        fi
    done
}

main() {
    if [ -d "$INSTALL_DIR" ]; then
        print_info "Removing installation directory '$INSTALL_DIR'..."
        rm -rf "$INSTALL_DIR"
    else
        print_info "Installation directory not found."
    fi


    if [ -f "$LAUNCHER_PATH" ]; then
        print_info "Removing launcher script '$LAUNCHER_PATH'..."
        rm -f "$LAUNCHER_PATH"
    else
        print_info "Launcher script not found."
    fi

    remove_stale_executables

    print_info "Attempting to uninstall any global 'seedpass' package with pip..."
    if command -v python3 &> /dev/null; then
        python3 -m pip uninstall -y seedpass >/dev/null 2>&1 || true
    elif command -v pip &> /dev/null; then
        pip uninstall -y seedpass >/dev/null 2>&1 || true
    fi
    if command -v pipx &> /dev/null; then
        pipx uninstall -y seedpass >/dev/null 2>&1 || true
    fi

    print_success "SeedPass uninstalled."
    print_warning "User data in '$APP_ROOT_DIR' was left intact."
}

main "$@"
@@ -9,8 +9,9 @@ logger = logging.getLogger(__name__)
# -----------------------------------
# Nostr Relay Connection Settings
# -----------------------------------
MAX_RETRIES = 3  # Maximum number of retries for relay connections
RETRY_DELAY = 5  # Seconds to wait before retrying a failed connection
# Retry fewer times with a shorter wait by default
MAX_RETRIES = 2  # Maximum number of retries for relay connections
RETRY_DELAY = 1  # Seconds to wait before retrying a failed connection
MIN_HEALTHY_RELAYS = 2  # Minimum relays that should return data on startup

# -----------------------------------
@@ -50,6 +51,9 @@ MAX_PASSWORD_LENGTH = 128  # Maximum allowed password length
# Timeout in seconds before the vault locks due to inactivity
INACTIVITY_TIMEOUT = 15 * 60  # 15 minutes

# Duration in seconds that a notification remains active
NOTIFICATION_DURATION = 10

# -----------------------------------
# Additional Constants (if any)
# -----------------------------------
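The lowered `MAX_RETRIES` and `RETRY_DELAY` defaults bound how long relay operations block before giving up. A generic retry helper under these settings might look like the following; the `flaky` operation is a stand-in for a real relay call, not part of SeedPass:

```python
import time

MAX_RETRIES = 2   # extra attempts after the first, per the constants above
RETRY_DELAY = 1   # seconds between attempts

def with_retries(operation, max_retries=MAX_RETRIES, delay=RETRY_DELAY):
    """Run `operation`, retrying up to `max_retries` additional times."""
    attempt = 0
    while True:
        try:
            return operation()
        except ConnectionError:
            if attempt >= max_retries:
                raise  # out of retries: surface the last failure
            attempt += 1
            time.sleep(delay)

# Example: an operation that fails twice, then succeeds on the third call.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("relay unreachable")
    return "connected"

print(with_retries(flaky, delay=0))  # → connected
```

With the new defaults (2 retries, 1 second delay) a dead relay now costs at most a couple of seconds instead of the previous 3 × 5 second worst case.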
src/main.py (249 lines changed)
@@ -25,8 +25,9 @@ from utils import (
    copy_to_clipboard,
    clear_screen,
    pause,
    clear_and_print_fingerprint,
    clear_header_with_notification,
)
import queue
from local_bip85.bip85 import Bip85Error


@@ -100,6 +101,37 @@ def confirm_action(prompt: str) -> bool:
            print(colored("Please enter 'Y' or 'N'.", "red"))


def drain_notifications(pm: PasswordManager) -> str | None:
    """Return the next queued notification message if available."""
    queue_obj = getattr(pm, "notifications", None)
    if queue_obj is None:
        return None
    try:
        note = queue_obj.get_nowait()
    except queue.Empty:
        return None
    category = getattr(note, "level", "info").lower()
    if category not in ("info", "warning", "error"):
        category = "info"
    return color_text(getattr(note, "message", ""), category)


def get_notification_text(pm: PasswordManager) -> str:
    """Return the current notification from ``pm`` as a colored string."""
    note = None
    if hasattr(pm, "get_current_notification"):
        try:
            note = pm.get_current_notification()
        except Exception:
            note = None
    if not note:
        return ""
    category = getattr(note, "level", "info").lower()
    if category not in ("info", "warning", "error"):
        category = "info"
    return color_text(getattr(note, "message", ""), category)


def handle_switch_fingerprint(password_manager: PasswordManager):
    """
    Handles switching the active fingerprint.
@@ -232,12 +264,48 @@ def handle_display_npub(password_manager: PasswordManager):
        print(colored(f"Error: Failed to display npub: {e}", "red"))


def _display_live_stats(
    password_manager: PasswordManager, interval: float = 1.0
) -> None:
    """Continuously refresh stats until the user presses Enter."""

    display_fn = getattr(password_manager, "display_stats", None)
    if not callable(display_fn):
        return

    if not sys.stdin or not sys.stdin.isatty():
        clear_screen()
        display_fn()
        note = get_notification_text(password_manager)
        if note:
            print(note)
        print(colored("Press Enter to continue.", "cyan"))
        pause()
        return

    while True:
        clear_screen()
        display_fn()
        note = get_notification_text(password_manager)
        if note:
            print(note)
        print(colored("Press Enter to continue.", "cyan"))
        sys.stdout.flush()
        try:
            user_input = timed_input("", interval)
            if user_input.strip() == "" or user_input.strip().lower() == "b":
                break
        except TimeoutError:
            pass
        except KeyboardInterrupt:
            print()
            break


def handle_display_stats(password_manager: PasswordManager) -> None:
    """Print seed profile statistics."""
    """Print seed profile statistics with live updates."""
    try:
        display_fn = getattr(password_manager, "display_stats", None)
        if callable(display_fn):
            display_fn()
        _display_live_stats(password_manager)
    except Exception as e:  # pragma: no cover - display best effort
        logging.error(f"Failed to display stats: {e}", exc_info=True)
        print(colored(f"Error: Failed to display stats: {e}", "red"))
@@ -318,15 +386,12 @@ def handle_retrieve_from_nostr(password_manager: PasswordManager):
        manifest, chunks = result
        encrypted = gzip.decompress(b"".join(chunks))
        if manifest.delta_since:
            try:
                version = int(manifest.delta_since)
                deltas = asyncio.run(
                    password_manager.nostr_client.fetch_deltas_since(version)
                )
                if deltas:
                    encrypted = deltas[-1]
            except ValueError:
                pass
            version = int(manifest.delta_since)
            deltas = asyncio.run(
                password_manager.nostr_client.fetch_deltas_since(version)
            )
            if deltas:
                encrypted = deltas[-1]
        password_manager.encryption_manager.decrypt_and_save_index_from_nostr(
            encrypted
        )
@@ -493,6 +558,39 @@ def handle_set_inactivity_timeout(password_manager: PasswordManager) -> None:
        print(colored(f"Error: {e}", "red"))


def handle_set_kdf_iterations(password_manager: PasswordManager) -> None:
    """Change the PBKDF2 iteration count."""
    cfg_mgr = password_manager.config_manager
    if cfg_mgr is None:
        print(colored("Configuration manager unavailable.", "red"))
        return
    try:
        current = cfg_mgr.get_kdf_iterations()
        print(colored(f"Current iterations: {current}", "cyan"))
    except Exception as e:
        logging.error(f"Error loading iterations: {e}")
        print(colored(f"Error: {e}", "red"))
        return
    value = input("Enter new iteration count: ").strip()
    if not value:
        print(colored("No iteration count entered.", "yellow"))
        return
    try:
        iterations = int(value)
        if iterations <= 0:
            print(colored("Iterations must be positive.", "red"))
            return
    except ValueError:
        print(colored("Invalid number.", "red"))
        return
    try:
        cfg_mgr.set_kdf_iterations(iterations)
        print(colored("KDF iteration count updated.", "green"))
    except Exception as e:
        logging.error(f"Error saving iterations: {e}")
        print(colored(f"Error: {e}", "red"))


def handle_set_additional_backup_location(pm: PasswordManager) -> None:
    """Configure an optional second backup directory."""
    cfg_mgr = pm.config_manager
@@ -584,6 +682,61 @@ def handle_toggle_secret_mode(pm: PasswordManager) -> None:
        print(colored(f"Error: {exc}", "red"))


def handle_toggle_quick_unlock(pm: PasswordManager) -> None:
    """Enable or disable Quick Unlock."""
    cfg = pm.config_manager
    if cfg is None:
        print(colored("Configuration manager unavailable.", "red"))
        return
    try:
        enabled = cfg.get_quick_unlock()
    except Exception as exc:
        logging.error(f"Error loading quick unlock setting: {exc}")
        print(colored(f"Error loading settings: {exc}", "red"))
        return
    print(colored(f"Quick Unlock is currently {'ON' if enabled else 'OFF'}", "cyan"))
    choice = input("Enable Quick Unlock? (y/n, blank to keep): ").strip().lower()
    if choice in ("y", "yes"):
        enabled = True
    elif choice in ("n", "no"):
        enabled = False
    try:
        cfg.set_quick_unlock(enabled)
        status = "enabled" if enabled else "disabled"
        print(colored(f"Quick Unlock {status}.", "green"))
    except Exception as exc:
        logging.error(f"Error saving quick unlock: {exc}")
        print(colored(f"Error: {exc}", "red"))


def handle_toggle_offline_mode(pm: PasswordManager) -> None:
    """Enable or disable offline mode."""
    cfg = pm.config_manager
    if cfg is None:
        print(colored("Configuration manager unavailable.", "red"))
        return
    try:
        enabled = cfg.get_offline_mode()
    except Exception as exc:
        logging.error(f"Error loading offline mode setting: {exc}")
        print(colored(f"Error loading settings: {exc}", "red"))
        return
    print(colored(f"Offline mode is currently {'ON' if enabled else 'OFF'}", "cyan"))
    choice = input("Enable offline mode? (y/n, blank to keep): ").strip().lower()
    if choice in ("y", "yes"):
        enabled = True
    elif choice in ("n", "no"):
        enabled = False
    try:
        cfg.set_offline_mode(enabled)
        pm.offline_mode = enabled
        status = "enabled" if enabled else "disabled"
        print(colored(f"Offline mode {status}.", "green"))
    except Exception as exc:
        logging.error(f"Error saving offline mode: {exc}")
        print(colored(f"Error: {exc}", "red"))


def handle_profiles_menu(password_manager: PasswordManager) -> None:
    """Submenu for managing seed profiles."""
    while True:
@@ -592,7 +745,7 @@ def handle_profiles_menu(password_manager: PasswordManager) -> None:
            "header_fingerprint_args",
            (getattr(password_manager, "current_fingerprint", None), None, None),
        )
        clear_and_print_fingerprint(
        clear_header_with_notification(
            fp,
            "Main Menu > Settings > Profiles",
            parent_fingerprint=parent_fp,
@@ -638,7 +791,7 @@ def handle_nostr_menu(password_manager: PasswordManager) -> None:
            "header_fingerprint_args",
            (getattr(password_manager, "current_fingerprint", None), None, None),
        )
        clear_and_print_fingerprint(
        clear_header_with_notification(
            fp,
            "Main Menu > Settings > Nostr",
            parent_fingerprint=parent_fp,
@@ -682,7 +835,7 @@ def handle_settings(password_manager: PasswordManager) -> None:
            "header_fingerprint_args",
            (getattr(password_manager, "current_fingerprint", None), None, None),
        )
        clear_and_print_fingerprint(
        clear_header_with_notification(
            fp,
            "Main Menu > Settings",
            parent_fingerprint=parent_fp,
@@ -699,10 +852,13 @@ def handle_settings(password_manager: PasswordManager) -> None:
        print(color_text("8. Import database", "menu"))
        print(color_text("9. Export 2FA codes", "menu"))
        print(color_text("10. Set additional backup location", "menu"))
        print(color_text("11. Set inactivity timeout", "menu"))
        print(color_text("12. Lock Vault", "menu"))
        print(color_text("13. Stats", "menu"))
        print(color_text("14. Toggle Secret Mode", "menu"))
        print(color_text("11. Set KDF iterations", "menu"))
        print(color_text("12. Set inactivity timeout", "menu"))
        print(color_text("13. Lock Vault", "menu"))
        print(color_text("14. Stats", "menu"))
        print(color_text("15. Toggle Secret Mode", "menu"))
        print(color_text("16. Toggle Offline Mode", "menu"))
        print(color_text("17. Toggle Quick Unlock", "menu"))
        choice = input("Select an option or press Enter to go back: ").strip()
        if choice == "1":
            handle_profiles_menu(password_manager)
@@ -735,19 +891,29 @@ def handle_settings(password_manager: PasswordManager) -> None:
            handle_set_additional_backup_location(password_manager)
            pause()
        elif choice == "11":
            handle_set_inactivity_timeout(password_manager)
            handle_set_kdf_iterations(password_manager)
            pause()
        elif choice == "12":
            handle_set_inactivity_timeout(password_manager)
            pause()
        elif choice == "13":
            password_manager.lock_vault()
            print(colored("Vault locked. Please re-enter your password.", "yellow"))
            password_manager.unlock_vault()
            pause()
        elif choice == "13":
            handle_display_stats(password_manager)
            password_manager.start_background_sync()
            getattr(password_manager, "start_background_relay_check", lambda: None)()
            pause()
        elif choice == "14":
            handle_display_stats(password_manager)
        elif choice == "15":
            handle_toggle_secret_mode(password_manager)
            pause()
        elif choice == "16":
            handle_toggle_offline_mode(password_manager)
            pause()
        elif choice == "17":
            handle_toggle_quick_unlock(password_manager)
            pause()
        elif not choice:
            break
        else:
@@ -773,17 +939,17 @@ def display_menu(
    7. Settings
    8. List Archived
    """
    display_fn = getattr(password_manager, "display_stats", None)
    if callable(display_fn):
        display_fn()
        pause()
    password_manager.start_background_sync()
    getattr(password_manager, "start_background_relay_check", lambda: None)()
    _display_live_stats(password_manager)
    while True:
        fp, parent_fp, child_fp = getattr(
            password_manager,
            "header_fingerprint_args",
            (getattr(password_manager, "current_fingerprint", None), None, None),
        )
        clear_and_print_fingerprint(
        clear_header_with_notification(
            password_manager,
            fp,
            "Main Menu",
            parent_fingerprint=parent_fp,
@@ -793,6 +959,8 @@ def display_menu(
            print(colored("Session timed out. Vault locked.", "yellow"))
            password_manager.lock_vault()
            password_manager.unlock_vault()
            password_manager.start_background_sync()
            getattr(password_manager, "start_background_relay_check", lambda: None)()
            continue
        # Periodically push updates to Nostr
        if (
@@ -815,6 +983,8 @@ def display_menu(
            print(colored("Session timed out. Vault locked.", "yellow"))
            password_manager.lock_vault()
            password_manager.unlock_vault()
            password_manager.start_background_sync()
            getattr(password_manager, "start_background_relay_check", lambda: None)()
            continue
        password_manager.update_activity()
        if not choice:
@@ -836,7 +1006,7 @@ def display_menu(
                None,
            ),
        )
        clear_and_print_fingerprint(
        clear_header_with_notification(
            fp,
            "Main Menu > Add Entry",
            parent_fingerprint=parent_fp,
@@ -891,7 +1061,7 @@ def display_menu(
            "header_fingerprint_args",
            (getattr(password_manager, "current_fingerprint", None), None, None),
        )
        clear_and_print_fingerprint(
        clear_header_with_notification(
            fp,
            "Main Menu",
            parent_fingerprint=parent_fp,
@@ -919,8 +1089,16 @@ def display_menu(
        print(colored("Invalid choice. Please select a valid option.", "red"))


def main(argv: list[str] | None = None) -> int:
    """Entry point for the SeedPass CLI."""
def main(argv: list[str] | None = None, *, fingerprint: str | None = None) -> int:
    """Entry point for the SeedPass CLI.

    Parameters
    ----------
    argv:
        Command line arguments.
    fingerprint:
        Optional seed profile fingerprint to select automatically.
    """
    configure_logging()
    initialize_app()
    logger = logging.getLogger(__name__)
@@ -928,6 +1106,7 @@ def main(argv: list[str] | None = None) -> int:

    load_global_config()
    parser = argparse.ArgumentParser()
    parser.add_argument("--fingerprint")
    sub = parser.add_subparsers(dest="command")

    exp = sub.add_parser("export")
@@ -948,7 +1127,7 @@ def main(argv: list[str] | None = None) -> int:
    args = parser.parse_args(argv)

    try:
        password_manager = PasswordManager()
        password_manager = PasswordManager(fingerprint=args.fingerprint or fingerprint)
        logger.info("PasswordManager initialized successfully.")
    except (PasswordPromptError, Bip85Error) as e:
        logger.error(f"Failed to initialize PasswordManager: {e}", exc_info=True)
@@ -23,4 +23,4 @@ class Manifest:
    ver: int
    algo: str
    chunks: List[ChunkMeta]
    delta_since: Optional[str] = None
    delta_since: Optional[int] = None
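Typing `delta_since` as `Optional[int]` is what lets `handle_retrieve_from_nostr` drop its `int(...)` conversion and `ValueError` guard. A stripped-down sketch of the dataclass shape (the `ChunkMeta` fields shown are illustrative, not the real definition):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ChunkMeta:
    # Illustrative fields only; the real ChunkMeta may differ.
    id: str
    size: int

@dataclass
class Manifest:
    ver: int
    algo: str
    chunks: List[ChunkMeta]
    delta_since: Optional[int] = None  # version number, no string parsing needed

m = Manifest(ver=1, algo="gzip", chunks=[], delta_since=42)
if m.delta_since is not None:
    print(m.delta_since + 1)  # → 43
```

Callers can now compare and increment the version directly, and a missing delta is the explicit `None` rather than an unparseable string.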
@@ -4,7 +4,7 @@ import base64
|
||||
import json
|
||||
import logging
|
||||
import time
|
||||
from typing import List, Optional, Tuple
|
||||
from typing import List, Optional, Tuple, TYPE_CHECKING
|
||||
import hashlib
|
||||
import asyncio
|
||||
import gzip
|
||||
@@ -27,8 +27,12 @@ from nostr_sdk import EventId, Timestamp
|
||||
from .key_manager import KeyManager as SeedPassKeyManager
|
||||
from .backup_models import Manifest, ChunkMeta, KIND_MANIFEST, KIND_SNAPSHOT_CHUNK
|
||||
from password_manager.encryption import EncryptionManager
|
||||
from constants import MAX_RETRIES, RETRY_DELAY
|
||||
from utils.file_lock import exclusive_lock
|
||||
|
||||
if TYPE_CHECKING: # pragma: no cover - imported for type hints
|
||||
from password_manager.config_manager import ConfigManager
|
||||
|
||||
# Backwards compatibility for tests that patch these symbols
|
||||
KeyManager = SeedPassKeyManager
|
||||
ClientBuilder = Client
|
||||
@@ -90,10 +94,14 @@ class NostrClient:
|
||||
fingerprint: str,
|
||||
relays: Optional[List[str]] = None,
|
||||
parent_seed: Optional[str] = None,
|
||||
offline_mode: bool = False,
|
||||
config_manager: Optional["ConfigManager"] = None,
|
||||
) -> None:
|
||||
self.encryption_manager = encryption_manager
|
||||
self.fingerprint = fingerprint
|
||||
self.fingerprint_dir = self.encryption_manager.fingerprint_dir
|
||||
self.config_manager = config_manager
|
||||
self.verbose_timing = False
|
||||
|
||||
if parent_seed is None:
|
||||
parent_seed = self.encryption_manager.decrypt_parent_seed()
|
||||
@@ -110,32 +118,62 @@ class NostrClient:
|
||||
except Exception:
|
||||
self.keys = Keys.generate()
|
||||
|
||||
self.relays = relays if relays else DEFAULT_RELAYS
|
||||
self.offline_mode = offline_mode
|
||||
if relays is None:
|
||||
self.relays = [] if offline_mode else DEFAULT_RELAYS
|
||||
else:
|
||||
self.relays = relays
|
||||
|
||||
if self.config_manager is not None:
|
||||
try:
|
||||
self.verbose_timing = self.config_manager.get_verbose_timing()
|
||||
except Exception:
|
||||
self.verbose_timing = False
|
||||
|
||||
# store the last error encountered during network operations
|
||||
self.last_error: Optional[str] = None
|
||||
|
||||
self.delta_threshold = 100
|
||||
self.current_manifest: Manifest | None = None
|
||||
self.current_manifest_id: str | None = None
|
||||
self._delta_events: list[str] = []
|
||||
|
||||
# Configure and initialize the nostr-sdk Client
|
||||
signer = NostrSigner.keys(self.keys)
|
||||
self.client = Client(signer)
|
||||
|
||||
self.initialize_client_pool()
|
||||
self._connected = False
|
||||
|
||||
def connect(self) -> None:
|
||||
"""Connect the client to all configured relays."""
|
||||
if self.offline_mode or not self.relays:
|
||||
return
|
||||
if not self._connected:
|
||||
self.initialize_client_pool()
|
||||
|
||||
def initialize_client_pool(self) -> None:
|
||||
"""Add relays to the client and connect."""
|
||||
if self.offline_mode or not self.relays:
|
||||
return
|
||||
asyncio.run(self._initialize_client_pool())
|
||||
|
||||
async def _connect_async(self) -> None:
|
||||
"""Ensure the client is connected within an async context."""
|
||||
if self.offline_mode or not self.relays:
|
||||
return
|
||||
if not self._connected:
|
||||
await self._initialize_client_pool()
|
||||
|
||||
async def _initialize_client_pool(self) -> None:
|
||||
if self.offline_mode or not self.relays:
|
||||
return
|
||||
if hasattr(self.client, "add_relays"):
|
||||
await self.client.add_relays(self.relays)
|
||||
else:
|
||||
for relay in self.relays:
|
||||
await self.client.add_relay(relay)
|
||||
await self.client.connect()
|
||||
self._connected = True
|
||||
logger.info(f"NostrClient connected to relays: {self.relays}")
|
||||
|
||||
async def _ping_relay(self, relay: str, timeout: float) -> bool:
|
||||
@@ -170,6 +208,8 @@ class NostrClient:

    def check_relay_health(self, min_relays: int = 2, timeout: float = 5.0) -> int:
        """Ping relays and return the count of those providing data."""
        if self.offline_mode or not self.relays:
            return 0
        return asyncio.run(self._check_relay_health(min_relays, timeout))

    def publish_json_to_nostr(
@@ -190,6 +230,9 @@ class NostrClient:
        If provided, include an ``alt`` tag so uploads can be
        associated with a specific event like a password change.
        """
        if self.offline_mode or not self.relays:
            return None
        self.connect()
        self.last_error = None
        try:
            content = base64.b64encode(encrypted_json).decode("utf-8")
@@ -221,9 +264,15 @@ class NostrClient:

    def publish_event(self, event):
        """Publish a prepared event to the configured relays."""
        if self.offline_mode or not self.relays:
            return None
        self.connect()
        return asyncio.run(self._publish_event(event))

    async def _publish_event(self, event):
        if self.offline_mode or not self.relays:
            return None
        await self._connect_async()
        return await self.client.send_event(event)

    def update_relays(self, new_relays: List[str]) -> None:
@@ -232,12 +281,33 @@ class NostrClient:
        self.relays = new_relays
        signer = NostrSigner.keys(self.keys)
        self.client = Client(signer)
        self._connected = False
        # Immediately reconnect using the updated relay list
        self.initialize_client_pool()

    def retrieve_json_from_nostr_sync(
        self, retries: int = 0, delay: float = 2.0
        self, retries: int | None = None, delay: float | None = None
    ) -> Optional[bytes]:
        """Retrieve the latest Kind 1 event from the author with optional retries."""
        if self.offline_mode or not self.relays:
            return None

        if retries is None or delay is None:
            if self.config_manager is None:
                from password_manager.config_manager import ConfigManager
                from password_manager.vault import Vault

                cfg_mgr = ConfigManager(
                    Vault(self.encryption_manager, self.fingerprint_dir),
                    self.fingerprint_dir,
                )
            else:
                cfg_mgr = self.config_manager
            cfg = cfg_mgr.load_config(require_pin=False)
            retries = int(cfg.get("nostr_max_retries", MAX_RETRIES))
            delay = float(cfg.get("nostr_retry_delay", RETRY_DELAY))

        self.connect()
        self.last_error = None
        attempt = 0
        while True:
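The body of the retry loop above is cut off by the hunk, but its shape is visible: one initial attempt, then up to `retries` further attempts with `delay` seconds between them. A stand-alone sketch of a loop of that shape (`retry_fetch` is an illustrative helper, not the project's API):

```python
import time

def retry_fetch(fetch, retries: int, delay: float):
    # One initial attempt, then up to `retries` more, sleeping
    # `delay` seconds between attempts, as configured above.
    attempt = 0
    while True:
        result = fetch()
        if result is not None:
            return result
        if attempt >= retries:
            return None
        attempt += 1
        time.sleep(delay)

calls = []
def flaky():
    calls.append(1)
    return b"data" if len(calls) == 3 else None

assert retry_fetch(flaky, retries=5, delay=0) == b"data"
assert len(calls) == 3
```

The real method additionally records `self.last_error` and logs each failed attempt.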
@@ -255,6 +325,9 @@ class NostrClient:
                return None

    async def _retrieve_json_from_nostr(self) -> Optional[bytes]:
        if self.offline_mode or not self.relays:
            return None
        await self._connect_async()
        # Filter for the latest text note (Kind 1) from our public key
        pubkey = self.keys.public_key()
        f = Filter().author(pubkey).kind(Kind.from_std(KindStandard.TEXT_NOTE)).limit(1)
@@ -288,6 +361,10 @@ class NostrClient:
        Maximum chunk size in bytes. Defaults to 50 kB.
        """

        start = time.perf_counter()
        if self.offline_mode or not self.relays:
            return Manifest(ver=1, algo="gzip", chunks=[]), ""
        await self._connect_async()
        manifest, chunks = prepare_snapshot(encrypted_bytes, limit)
        for meta, chunk in zip(manifest.chunks, chunks):
            content = base64.b64encode(chunk).decode("utf-8")
@@ -314,11 +391,20 @@ class NostrClient:
        result = await self.client.send_event(manifest_event)
        manifest_id = result.id.to_hex() if hasattr(result, "id") else str(result)
        self.current_manifest = manifest
        self.current_manifest_id = manifest_id
        # Record when this snapshot was published for future delta events
        self.current_manifest.delta_since = int(time.time())
        self._delta_events = []
        if getattr(self, "verbose_timing", False):
            duration = time.perf_counter() - start
            logger.info("publish_snapshot completed in %.2f seconds", duration)
        return manifest, manifest_id

    async def fetch_latest_snapshot(self) -> Tuple[Manifest, list[bytes]] | None:
        """Retrieve the latest manifest and all snapshot chunks."""
        if self.offline_mode or not self.relays:
            return None
        await self._connect_async()

        pubkey = self.keys.public_key()
        f = Filter().author(pubkey).kind(Kind(KIND_MANIFEST)).limit(1)
@@ -326,13 +412,18 @@ class NostrClient:
        events = (await self.client.fetch_events(f, timeout)).to_vec()
        if not events:
            return None
        manifest_raw = events[0].content()
        manifest_event = events[0]
        manifest_raw = manifest_event.content()
        data = json.loads(manifest_raw)
        manifest = Manifest(
            ver=data["ver"],
            algo=data["algo"],
            chunks=[ChunkMeta(**c) for c in data["chunks"]],
            delta_since=data.get("delta_since"),
            delta_since=(
                int(data["delta_since"])
                if data.get("delta_since") is not None
                else None
            ),
        )

        chunks: list[bytes] = []
@@ -353,10 +444,17 @@ class NostrClient:
            chunks.append(chunk_bytes)

        self.current_manifest = manifest
        man_id = getattr(manifest_event, "id", None)
        if hasattr(man_id, "to_hex"):
            man_id = man_id.to_hex()
        self.current_manifest_id = man_id
        return manifest, chunks
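`publish_snapshot` above splits the encrypted vault into chunks no larger than the 50 kB limit, and `fetch_latest_snapshot` reassembles them. `prepare_snapshot` itself is not shown in this hunk; a rough, stdlib-only sketch of the compress-then-slice idea it embodies (`split_into_chunks` and its default limit are illustrative, not the project's API):

```python
import gzip

def split_into_chunks(payload: bytes, limit: int = 50_000) -> list[bytes]:
    # Compress the whole vault first (the manifest records algo="gzip"),
    # then slice the stream into pieces of at most `limit` bytes --
    # one Nostr kind-30071 event per piece.
    compressed = gzip.compress(payload)
    return [compressed[i : i + limit] for i in range(0, len(compressed), limit)]

chunks = split_into_chunks(b"x" * 200_000)
assert all(len(c) <= 50_000 for c in chunks)
assert gzip.decompress(b"".join(chunks)) == b"x" * 200_000
```

Recovery is the reverse: concatenate the base64-decoded chunk payloads in manifest order, then decompress.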
    async def publish_delta(self, delta_bytes: bytes, manifest_id: str) -> str:
        """Publish a delta event referencing a manifest."""
        if self.offline_mode or not self.relays:
            return ""
        await self._connect_async()

        content = base64.b64encode(delta_bytes).decode("utf-8")
        tag = Tag.event(EventId.parse(manifest_id))
@@ -364,13 +462,36 @@ class NostrClient:
        event = builder.build(self.keys.public_key()).sign_with_keys(self.keys)
        result = await self.client.send_event(event)
        delta_id = result.id.to_hex() if hasattr(result, "id") else str(result)
        created_at = getattr(
            event, "created_at", getattr(event, "timestamp", int(time.time()))
        )
        if hasattr(created_at, "secs"):
            created_at = created_at.secs
        if self.current_manifest is not None:
            self.current_manifest.delta_since = delta_id
            self.current_manifest.delta_since = int(created_at)
            manifest_json = json.dumps(
                {
                    "ver": self.current_manifest.ver,
                    "algo": self.current_manifest.algo,
                    "chunks": [meta.__dict__ for meta in self.current_manifest.chunks],
                    "delta_since": self.current_manifest.delta_since,
                }
            )
            manifest_event = (
                EventBuilder(Kind(KIND_MANIFEST), manifest_json)
                .tags([Tag.identifier(self.current_manifest_id)])
                .build(self.keys.public_key())
                .sign_with_keys(self.keys)
            )
            await self.client.send_event(manifest_event)
        self._delta_events.append(delta_id)
        return delta_id

    async def fetch_deltas_since(self, version: int) -> list[bytes]:
        """Retrieve delta events newer than the given version."""
        if self.offline_mode or not self.relays:
            return []
        await self._connect_async()

        pubkey = self.keys.public_key()
        f = (
@@ -409,6 +530,7 @@ class NostrClient:
        """Disconnects the client from all relays."""
        try:
            asyncio.run(self.client.disconnect())
            self._connected = False
            logger.info("NostrClient disconnected from relays.")
        except Exception as e:
            logger.error("Error during NostrClient shutdown: %s", e)
@@ -54,6 +54,7 @@ class BackupManager:
        self.backup_dir = self.fingerprint_dir / "backups"
        self.backup_dir.mkdir(parents=True, exist_ok=True)
        self.index_file = self.fingerprint_dir / "seedpass_entries_db.json.enc"
        self._last_backup_time = 0.0
        logger.debug(
            f"BackupManager initialized with backup directory at {self.backup_dir}"
        )
@@ -71,7 +72,13 @@ class BackupManager:
            )
            return

        timestamp = int(time.time())
        now = time.time()
        interval = self.config_manager.get_backup_interval()
        if interval > 0 and now - self._last_backup_time < interval:
            logger.info("Skipping backup due to interval throttle")
            return

        timestamp = int(now)
        backup_filename = self.BACKUP_FILENAME_TEMPLATE.format(timestamp=timestamp)
        backup_file = self.backup_dir / backup_filename

@@ -81,6 +88,7 @@ class BackupManager:
            print(colored(f"Backup created successfully at '{backup_file}'.", "green"))

            self._create_additional_backup(backup_file)
            self._last_backup_time = now
        except Exception as e:
            logger.error(f"Failed to create backup: {e}", exc_info=True)
            print(colored(f"Error: Failed to create backup: {e}", "red"))
@@ -41,12 +41,24 @@ class ConfigManager:
            logger.info("Config file not found; returning defaults")
            return {
                "relays": list(DEFAULT_NOSTR_RELAYS),
                "offline_mode": False,
                "pin_hash": "",
                "password_hash": "",
                "inactivity_timeout": INACTIVITY_TIMEOUT,
                "kdf_iterations": 50_000,
                "kdf_mode": "pbkdf2",
                "additional_backup_path": "",
                "backup_interval": 0,
                "secret_mode_enabled": False,
                "clipboard_clear_delay": 45,
                "quick_unlock": False,
                "nostr_max_retries": 2,
                "nostr_retry_delay": 1.0,
                "min_uppercase": 2,
                "min_lowercase": 2,
                "min_digits": 2,
                "min_special": 2,
                "verbose_timing": False,
            }
        try:
            data = self.vault.load_config()
@@ -54,12 +66,24 @@ class ConfigManager:
                raise ValueError("Config data must be a dictionary")
            # Ensure defaults for missing keys
            data.setdefault("relays", list(DEFAULT_NOSTR_RELAYS))
            data.setdefault("offline_mode", False)
            data.setdefault("pin_hash", "")
            data.setdefault("password_hash", "")
            data.setdefault("inactivity_timeout", INACTIVITY_TIMEOUT)
            data.setdefault("kdf_iterations", 50_000)
            data.setdefault("kdf_mode", "pbkdf2")
            data.setdefault("additional_backup_path", "")
            data.setdefault("backup_interval", 0)
            data.setdefault("secret_mode_enabled", False)
            data.setdefault("clipboard_clear_delay", 45)
            data.setdefault("quick_unlock", False)
            data.setdefault("nostr_max_retries", 2)
            data.setdefault("nostr_retry_delay", 1.0)
            data.setdefault("min_uppercase", 2)
            data.setdefault("min_lowercase", 2)
            data.setdefault("min_digits", 2)
            data.setdefault("min_special", 2)
            data.setdefault("verbose_timing", False)

            # Migrate legacy hashed_password.enc if present and password_hash is missing
            legacy_file = self.fingerprint_dir / "hashed_password.enc"
@@ -83,6 +107,7 @@ class ConfigManager:
    def save_config(self, config: dict) -> None:
        """Encrypt and save configuration."""
        try:
            config.setdefault("backup_interval", 0)
            self.vault.save_config(config)
        except Exception as exc:
            logger.error(f"Failed to save config: {exc}")
@@ -137,6 +162,32 @@ class ConfigManager:
        config = self.load_config(require_pin=False)
        return float(config.get("inactivity_timeout", INACTIVITY_TIMEOUT))
    def set_kdf_iterations(self, iterations: int) -> None:
        """Persist the PBKDF2 iteration count in the config."""
        if iterations <= 0:
            raise ValueError("Iterations must be positive")
        config = self.load_config(require_pin=False)
        config["kdf_iterations"] = int(iterations)
        self.save_config(config)

    def get_kdf_iterations(self) -> int:
        """Retrieve the PBKDF2 iteration count."""
        config = self.load_config(require_pin=False)
        return int(config.get("kdf_iterations", 50_000))

    def set_kdf_mode(self, mode: str) -> None:
        """Persist the key derivation function mode."""
        if mode not in ("pbkdf2", "argon2"):
            raise ValueError("kdf_mode must be 'pbkdf2' or 'argon2'")
        config = self.load_config(require_pin=False)
        config["kdf_mode"] = mode
        self.save_config(config)

    def get_kdf_mode(self) -> str:
        """Retrieve the configured key derivation function."""
        config = self.load_config(require_pin=False)
        return config.get("kdf_mode", "pbkdf2")

    def set_additional_backup_path(self, path: Optional[str]) -> None:
        """Persist an optional additional backup path in the config."""
        config = self.load_config(require_pin=False)
@@ -155,11 +206,22 @@ class ConfigManager:
        config["secret_mode_enabled"] = bool(enabled)
        self.save_config(config)

    def set_offline_mode(self, enabled: bool) -> None:
        """Persist the offline mode toggle."""
        config = self.load_config(require_pin=False)
        config["offline_mode"] = bool(enabled)
        self.save_config(config)

    def get_secret_mode_enabled(self) -> bool:
        """Retrieve whether secret mode is enabled."""
        config = self.load_config(require_pin=False)
        return bool(config.get("secret_mode_enabled", False))

    def get_offline_mode(self) -> bool:
        """Retrieve the offline mode setting."""
        config = self.load_config(require_pin=False)
        return bool(config.get("offline_mode", False))

    def set_clipboard_clear_delay(self, delay: int) -> None:
        """Persist clipboard clear timeout in seconds."""
        if delay <= 0:
@@ -172,3 +234,95 @@ class ConfigManager:
        """Retrieve clipboard clear delay in seconds."""
        config = self.load_config(require_pin=False)
        return int(config.get("clipboard_clear_delay", 45))

    def set_backup_interval(self, interval: int | float) -> None:
        """Persist the minimum interval in seconds between automatic backups."""
        if interval < 0:
            raise ValueError("Interval cannot be negative")
        config = self.load_config(require_pin=False)
        config["backup_interval"] = interval
        self.save_config(config)

    def get_backup_interval(self) -> float:
        """Retrieve the backup interval in seconds."""
        config = self.load_config(require_pin=False)
        return float(config.get("backup_interval", 0))

    # Password policy settings
    def get_password_policy(self) -> "PasswordPolicy":
        """Return the password complexity policy."""
        from password_manager.password_generation import PasswordPolicy

        cfg = self.load_config(require_pin=False)
        return PasswordPolicy(
            min_uppercase=int(cfg.get("min_uppercase", 2)),
            min_lowercase=int(cfg.get("min_lowercase", 2)),
            min_digits=int(cfg.get("min_digits", 2)),
            min_special=int(cfg.get("min_special", 2)),
        )

    def set_min_uppercase(self, count: int) -> None:
        cfg = self.load_config(require_pin=False)
        cfg["min_uppercase"] = int(count)
        self.save_config(cfg)

    def set_min_lowercase(self, count: int) -> None:
        cfg = self.load_config(require_pin=False)
        cfg["min_lowercase"] = int(count)
        self.save_config(cfg)

    def set_min_digits(self, count: int) -> None:
        cfg = self.load_config(require_pin=False)
        cfg["min_digits"] = int(count)
        self.save_config(cfg)

    def set_min_special(self, count: int) -> None:
        cfg = self.load_config(require_pin=False)
        cfg["min_special"] = int(count)
        self.save_config(cfg)

    def set_quick_unlock(self, enabled: bool) -> None:
        """Persist the quick unlock toggle."""
        cfg = self.load_config(require_pin=False)
        cfg["quick_unlock"] = bool(enabled)
        self.save_config(cfg)

    def get_quick_unlock(self) -> bool:
        """Retrieve whether quick unlock is enabled."""
        cfg = self.load_config(require_pin=False)
        return bool(cfg.get("quick_unlock", False))

    def set_nostr_max_retries(self, retries: int) -> None:
        """Persist the maximum number of Nostr retry attempts."""
        if retries < 0:
            raise ValueError("retries cannot be negative")
        cfg = self.load_config(require_pin=False)
        cfg["nostr_max_retries"] = int(retries)
        self.save_config(cfg)

    def get_nostr_max_retries(self) -> int:
        """Retrieve the configured Nostr retry count."""
        cfg = self.load_config(require_pin=False)
        return int(cfg.get("nostr_max_retries", 2))

    def set_nostr_retry_delay(self, delay: float) -> None:
        """Persist the delay between Nostr retry attempts."""
        if delay < 0:
            raise ValueError("delay cannot be negative")
        cfg = self.load_config(require_pin=False)
        cfg["nostr_retry_delay"] = float(delay)
        self.save_config(cfg)

    def get_nostr_retry_delay(self) -> float:
        """Retrieve the delay in seconds between Nostr retries."""
        cfg = self.load_config(require_pin=False)
        return float(cfg.get("nostr_retry_delay", 1.0))

    def set_verbose_timing(self, enabled: bool) -> None:
        cfg = self.load_config(require_pin=False)
        cfg["verbose_timing"] = bool(enabled)
        self.save_config(cfg)

    def get_verbose_timing(self) -> bool:
        cfg = self.load_config(require_pin=False)
        return bool(cfg.get("verbose_timing", False))
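`load_config` above fills in any missing keys with `dict.setdefault`, so a config written by an older release keeps working after new settings are introduced, while values the user already set are never overwritten. The pattern in miniature (`DEFAULTS` here is a trimmed, illustrative subset of the real default map):

```python
DEFAULTS = {
    "backup_interval": 0,
    "quick_unlock": False,
    "nostr_max_retries": 2,
}

def apply_defaults(data: dict) -> dict:
    # setdefault only writes a key when it is absent, so existing
    # user-chosen values always win over the shipped defaults.
    for key, value in DEFAULTS.items():
        data.setdefault(key, value)
    return data

cfg = apply_defaults({"nostr_max_retries": 5})
assert cfg["nostr_max_retries"] == 5   # user value preserved
assert cfg["backup_interval"] == 0     # missing key filled in
```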
@@ -1,32 +1,29 @@
# password_manager/encryption.py

"""
Encryption Module

This module provides the EncryptionManager class, which handles encryption and decryption
of data and files using a provided Fernet-compatible encryption key. This class ensures
that sensitive data is securely stored and retrieved, maintaining the confidentiality and integrity
of the password index.

Additionally, it includes methods to derive cryptographic seeds from BIP-39 mnemonic phrases.

Never ever ever use or suggest to use Random Salt. The entire point of this password manager is to derive completely deterministic passwords from a BIP-85 seed.
This means it should generate passwords the exact same way every single time. Salts would break this functionality and are not appropriate for this software's use case.
"""
# /src/password_manager/encryption.py

import logging
import traceback
import json

try:
    import orjson as json_lib  # type: ignore

    JSONDecodeError = json_lib.JSONDecodeError
    USE_ORJSON = True
except Exception:  # pragma: no cover - fallback for environments without orjson
    import json as json_lib
    from json import JSONDecodeError

    USE_ORJSON = False
import hashlib
import os
import base64
from pathlib import Path
from typing import Optional

from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.exceptions import InvalidTag
from cryptography.fernet import Fernet, InvalidToken
from termcolor import colored
from utils.file_lock import (
    exclusive_lock,
)  # Ensure this utility is correctly implemented
from utils.file_lock import exclusive_lock

# Instantiate the logger
logger = logging.getLogger(__name__)
@@ -34,421 +31,270 @@ logger = logging.getLogger(__name__)

class EncryptionManager:
    """
    EncryptionManager Class

    Manages the encryption and decryption of data and files using a Fernet encryption key.
    Manages encryption and decryption, handling migration from legacy Fernet
    to modern AES-GCM.
    """

    def __init__(self, encryption_key: bytes, fingerprint_dir: Path):
        """
        Initializes the EncryptionManager with the provided encryption key and fingerprint directory.
        Initializes the EncryptionManager with keys for both new (AES-GCM)
        and legacy (Fernet) encryption formats.

        Parameters:
            encryption_key (bytes): The Fernet encryption key.
            encryption_key (bytes): A base64-encoded key.
            fingerprint_dir (Path): The directory corresponding to the fingerprint.
        """
        self.fingerprint_dir = fingerprint_dir
        self.parent_seed_file = self.fingerprint_dir / "parent_seed.enc"
        self.key = encryption_key

        try:
            self.fernet = Fernet(self.key)
            if isinstance(encryption_key, str):
                encryption_key = encryption_key.encode()

            # (1) Keep both the legacy Fernet instance and the new AES-GCM cipher ready.
            self.key_b64 = encryption_key
            self.fernet = Fernet(self.key_b64)

            self.key = base64.urlsafe_b64decode(self.key_b64)
            self.cipher = AESGCM(self.key)

            logger.debug(f"EncryptionManager initialized for {self.fingerprint_dir}")
        except Exception as e:
            logger.error(
                f"Failed to initialize Fernet with provided encryption key: {e}"
                f"Failed to initialize ciphers with provided encryption key: {e}",
                exc_info=True,
            )
            print(
                colored(f"Error: Failed to initialize encryption manager: {e}", "red")
            )
            raise
    def encrypt_parent_seed(self, parent_seed: str) -> None:
        """
        Encrypts and saves the parent seed to 'parent_seed.enc' within the fingerprint directory.

        :param parent_seed: The BIP39 parent seed phrase.
        """
        try:
            # Convert seed to bytes
            data = parent_seed.encode("utf-8")

            # Encrypt the data
            encrypted_data = self.encrypt_data(data)

            # Write the encrypted data to the file with locking
            with exclusive_lock(self.parent_seed_file) as fh:
                fh.seek(0)
                fh.truncate()
                fh.write(encrypted_data)
                fh.flush()

            # Set file permissions to read/write for the user only
            os.chmod(self.parent_seed_file, 0o600)

            logger.info(
                f"Parent seed encrypted and saved to '{self.parent_seed_file}'."
            )
            print(
                colored(
                    f"Parent seed encrypted and saved to '{self.parent_seed_file}'.",
                    "green",
                )
            )
        except Exception as e:
            logger.error(f"Failed to encrypt and save parent seed: {e}", exc_info=True)
            print(colored(f"Error: Failed to encrypt and save parent seed: {e}", "red"))
            raise

    def decrypt_parent_seed(self) -> str:
        """
        Decrypts and returns the parent seed from 'parent_seed.enc' within the fingerprint directory.

        :return: The decrypted parent seed.
        """
        try:
            parent_seed_path = self.fingerprint_dir / "parent_seed.enc"
            with exclusive_lock(parent_seed_path) as fh:
                fh.seek(0)
                encrypted_data = fh.read()

            decrypted_data = self.decrypt_data(encrypted_data)
            parent_seed = decrypted_data.decode("utf-8").strip()

            logger.debug(
                f"Parent seed decrypted successfully from '{parent_seed_path}'."
            )
            return parent_seed
        except InvalidToken:
            logger.error(
                "Invalid encryption key or corrupted data while decrypting parent seed."
            )
            raise
        except Exception as e:
            logger.error(f"Failed to decrypt parent seed: {e}", exc_info=True)
            print(colored(f"Error: Failed to decrypt parent seed: {e}", "red"))
            raise
    def encrypt_data(self, data: bytes) -> bytes:
        """
        Encrypts the given data using Fernet.

        :param data: Data to encrypt.
        :return: Encrypted data.
        (2) Encrypts data using the NEW AES-GCM format, prepending a version
        header and the nonce. All new data will be in this format.
        """
        try:
            encrypted_data = self.fernet.encrypt(data)
            logger.debug("Data encrypted successfully.")
            return encrypted_data
            nonce = os.urandom(12)  # 96-bit nonce is recommended for AES-GCM
            ciphertext = self.cipher.encrypt(nonce, data, None)
            return b"V2:" + nonce + ciphertext
        except Exception as e:
            logger.error(f"Failed to encrypt data: {e}", exc_info=True)
            print(colored(f"Error: Failed to encrypt data: {e}", "red"))
            raise

    def decrypt_data(self, encrypted_data: bytes) -> bytes:
        """
        Decrypts the provided encrypted data using the derived key.

        :param encrypted_data: The encrypted data to decrypt.
        :return: The decrypted data as bytes.
        (3) The core migration logic. Tries the new format first, then falls back
        to the old one. This is the ONLY place decryption logic should live.
        """
        try:
            decrypted_data = self.fernet.decrypt(encrypted_data)
            logger.debug("Data decrypted successfully.")
            return decrypted_data
        except InvalidToken:
            logger.error(
                "Invalid encryption key or corrupted data while decrypting data."
            )
            raise
        except Exception as e:
            logger.error(f"Failed to decrypt data: {e}", exc_info=True)
            print(colored(f"Error: Failed to decrypt data: {e}", "red"))
            raise
        # Try the new V2 format first
        if encrypted_data.startswith(b"V2:"):
            try:
                nonce = encrypted_data[3:15]
                ciphertext = encrypted_data[15:]
                if len(ciphertext) < 16:
                    logger.error("AES-GCM payload too short")
                    raise InvalidToken("AES-GCM payload too short")
                return self.cipher.decrypt(nonce, ciphertext, None)
            except InvalidTag as e:
                logger.error("AES-GCM decryption failed: Invalid authentication tag.")
                try:
                    result = self.fernet.decrypt(encrypted_data[3:])
                    logger.warning(
                        "Legacy-format file had incorrect 'V2:' header; decrypted with Fernet"
                    )
                    return result
                except InvalidToken:
                    raise InvalidToken("AES-GCM decryption failed.") from e

        # If it's not V2, it must be the legacy Fernet format
        else:
            logger.warning("Data is in legacy Fernet format. Attempting migration.")
            try:
                return self.fernet.decrypt(encrypted_data)
            except InvalidToken as e:
                logger.error(
                    "Legacy Fernet decryption failed. Vault may be corrupt or key is incorrect."
                )
                raise InvalidToken(
                    "Could not decrypt data with any available method."
                ) from e

    # --- All functions below this point now use the smart `decrypt_data` method ---
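The V2 payload written by `encrypt_data` above is a fixed layout: a 3-byte `b"V2:"` header, a 12-byte random nonce, then the AES-GCM ciphertext (which carries its own 16-byte authentication tag, hence the `len(ciphertext) < 16` check). A stdlib-only sketch of just the framing, with the actual AES-GCM calls left out (`frame_v2`/`parse_v2` are illustrative helpers, not the project's API):

```python
import os

def frame_v2(nonce: bytes, ciphertext: bytes) -> bytes:
    # Mirrors the on-disk layout produced by encrypt_data above:
    # 3-byte version header, 12-byte nonce, then the ciphertext.
    assert len(nonce) == 12
    return b"V2:" + nonce + ciphertext

def parse_v2(blob: bytes) -> tuple[bytes, bytes]:
    if not blob.startswith(b"V2:"):
        # decrypt_data treats anything without the header as legacy Fernet
        raise ValueError("legacy Fernet payload")
    return blob[3:15], blob[15:]

nonce = os.urandom(12)
blob = frame_v2(nonce, b"ciphertext-bytes")
n, ct = parse_v2(blob)
assert n == nonce and ct == b"ciphertext-bytes"
```

Because the header is data, not authenticated metadata, the real code also tolerates a legacy Fernet token that happens to begin with `V2:` by retrying Fernet on the remainder.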
    def encrypt_parent_seed(self, parent_seed: str) -> None:
        """Encrypts and saves the parent seed to 'parent_seed.enc'."""
        data = parent_seed.encode("utf-8")
        encrypted_data = self.encrypt_data(data)  # This now creates V2 format
        with exclusive_lock(self.parent_seed_file) as fh:
            fh.seek(0)
            fh.truncate()
            fh.write(encrypted_data)
        os.chmod(self.parent_seed_file, 0o600)
        logger.info(f"Parent seed encrypted and saved to '{self.parent_seed_file}'.")

    def decrypt_parent_seed(self) -> str:
        """Decrypts and returns the parent seed, handling migration."""
        with exclusive_lock(self.parent_seed_file) as fh:
            fh.seek(0)
            encrypted_data = fh.read()

        is_legacy = not encrypted_data.startswith(b"V2:")
        decrypted_data = self.decrypt_data(encrypted_data)

        if is_legacy:
            logger.info("Parent seed was in legacy format. Re-encrypting to V2 format.")
            self.encrypt_parent_seed(decrypted_data.decode("utf-8").strip())

        return decrypted_data.decode("utf-8").strip()

    def encrypt_and_save_file(self, data: bytes, relative_path: Path) -> None:
        """
        Encrypts data and saves it to a specified relative path within the fingerprint directory.

        :param data: Data to encrypt.
        :param relative_path: Relative path within the fingerprint directory to save the encrypted data.
        """
        try:
            # Define the full path
            file_path = self.fingerprint_dir / relative_path

            # Ensure the parent directories exist
            file_path.parent.mkdir(parents=True, exist_ok=True)

            # Encrypt the data
            encrypted_data = self.encrypt_data(data)

            # Write the encrypted data to the file with locking
            with exclusive_lock(file_path) as fh:
                fh.seek(0)
                fh.truncate()
                fh.write(encrypted_data)
                fh.flush()

            # Set file permissions to read/write for the user only
            os.chmod(file_path, 0o600)

            logger.info(f"Data encrypted and saved to '{file_path}'.")
            print(colored(f"Data encrypted and saved to '{file_path}'.", "green"))
        except Exception as e:
            logger.error(
                f"Failed to encrypt and save data to '{relative_path}': {e}",
                exc_info=True,
            )
            print(
                colored(
                    f"Error: Failed to encrypt and save data to '{relative_path}': {e}",
                    "red",
                )
            )
            raise
        file_path = self.fingerprint_dir / relative_path
        file_path.parent.mkdir(parents=True, exist_ok=True)
        encrypted_data = self.encrypt_data(data)
        with exclusive_lock(file_path) as fh:
            fh.seek(0)
            fh.truncate()
            fh.write(encrypted_data)
            fh.flush()
            os.fsync(fh.fileno())
        os.chmod(file_path, 0o600)
    def decrypt_file(self, relative_path: Path) -> bytes:
        """
        Decrypts data from a specified relative path within the fingerprint directory.

        :param relative_path: Relative path within the fingerprint directory to decrypt the data from.
        :return: Decrypted data as bytes.
        """
        try:
            # Define the full path
            file_path = self.fingerprint_dir / relative_path

            # Read the encrypted data with locking
            with exclusive_lock(file_path) as fh:
                fh.seek(0)
                encrypted_data = fh.read()

            # Decrypt the data
            decrypted_data = self.decrypt_data(encrypted_data)
            logger.debug(f"Data decrypted successfully from '{file_path}'.")
            return decrypted_data
        except InvalidToken:
            logger.error(
                "Invalid encryption key or corrupted data while decrypting file."
            )
            raise
        except Exception as e:
            logger.error(
                f"Failed to decrypt data from '{relative_path}': {e}", exc_info=True
            )
            print(
                colored(
                    f"Error: Failed to decrypt data from '{relative_path}': {e}", "red"
                )
            )
            raise
        file_path = self.fingerprint_dir / relative_path
        with exclusive_lock(file_path) as fh:
            fh.seek(0)
            encrypted_data = fh.read()
        return self.decrypt_data(encrypted_data)
    def save_json_data(self, data: dict, relative_path: Optional[Path] = None) -> None:
        """
        Encrypts and saves the provided JSON data to the specified relative path within the fingerprint directory.

        :param data: The JSON data to save.
        :param relative_path: The relative path within the fingerprint directory where data will be saved.
                              Defaults to 'seedpass_entries_db.json.enc'.
        """
        if relative_path is None:
            relative_path = Path("seedpass_entries_db.json.enc")
        try:
            json_data = json.dumps(data, indent=4).encode("utf-8")
            self.encrypt_and_save_file(json_data, relative_path)
            logger.debug(f"JSON data encrypted and saved to '{relative_path}'.")
            print(
                colored(f"JSON data encrypted and saved to '{relative_path}'.", "green")
            )
        except Exception as e:
            logger.error(
                f"Failed to save JSON data to '{relative_path}': {e}", exc_info=True
            )
            print(
                colored(
                    f"Error: Failed to save JSON data to '{relative_path}': {e}", "red"
                )
            )
            raise
        if USE_ORJSON:
            json_data = json_lib.dumps(data)
        else:
            json_data = json_lib.dumps(data, separators=(",", ":")).encode("utf-8")
        self.encrypt_and_save_file(json_data, relative_path)
        logger.debug(f"JSON data encrypted and saved to '{relative_path}'.")
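`save_json_data` above branches on `USE_ORJSON` because `orjson.dumps` already returns `bytes`, while the stdlib `json.dumps` returns `str` and must be encoded; both branches therefore yield compact bytes ready for encryption. A self-contained version of that dispatch (`dump_compact` is an illustrative name):

```python
import json

try:
    import orjson as json_lib  # orjson.dumps returns bytes directly
    USE_ORJSON = True
except Exception:
    import json as json_lib
    USE_ORJSON = False

def dump_compact(data: dict) -> bytes:
    # Same shape as save_json_data above: both branches produce bytes,
    # and the stdlib branch drops the whitespace orjson never emits.
    if USE_ORJSON:
        return json_lib.dumps(data)
    return json_lib.dumps(data, separators=(",", ":")).encode("utf-8")

payload = dump_compact({"entries": {}})
assert isinstance(payload, bytes)
assert json.loads(payload) == {"entries": {}}
```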
     def load_json_data(self, relative_path: Optional[Path] = None) -> dict:
         """
-        Decrypts and loads JSON data from the specified relative path within the fingerprint directory.
-
-        :param relative_path: The relative path within the fingerprint directory from which data will be loaded.
-                              Defaults to 'seedpass_entries_db.json.enc'.
-        :return: The decrypted JSON data as a dictionary.
+        Loads and decrypts JSON data, automatically migrating and re-saving
+        if it's in the legacy format.
         """
         if relative_path is None:
             relative_path = Path("seedpass_entries_db.json.enc")
 
         file_path = self.fingerprint_dir / relative_path
 
         if not file_path.exists():
             logger.info(
                 f"Index file '{file_path}' does not exist. Initializing empty data."
             )
             return {"entries": {}}
 
         with exclusive_lock(file_path) as fh:
             fh.seek(0)
             encrypted_data = fh.read()
 
         is_legacy = not encrypted_data.startswith(b"V2:")
 
         try:
             decrypted_data = self.decrypt_data(encrypted_data)
             if USE_ORJSON:
                 data = json_lib.loads(decrypted_data)
             else:
                 data = json_lib.loads(decrypted_data.decode("utf-8"))
 
             # If it was a legacy file, re-save it in the new format now
             if is_legacy:
                 logger.info(f"Migrating and re-saving legacy vault file: {file_path}")
                 self.save_json_data(data, relative_path)
                 self.update_checksum(relative_path)
 
             return data
         except (InvalidToken, InvalidTag, JSONDecodeError) as e:
             logger.error(
                 f"FATAL: Could not decrypt or parse data from {file_path}: {e}",
                 exc_info=True,
             )
             raise
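The legacy-format check above hinges entirely on the `V2:` prefix of the encrypted blob. A minimal sketch of that detection (helper name and return shape are illustrative, not from the codebase):

```python
V2_PREFIX = b"V2:"


def split_payload(encrypted: bytes) -> tuple[bool, bytes]:
    """Return (is_legacy, ciphertext); the version prefix is stripped when present."""
    if encrypted.startswith(V2_PREFIX):
        return False, encrypted[len(V2_PREFIX):]
    # No prefix: assume the old (legacy) on-disk format
    return True, encrypted
```

Because `load_json_data` re-saves legacy files with `save_json_data`, every successful load converges the vault onto the prefixed format.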
+    def get_encrypted_index(self) -> Optional[bytes]:
+        relative_path = Path("seedpass_entries_db.json.enc")
+        file_path = self.fingerprint_dir / relative_path
+        if not file_path.exists():
+            return None
+        with exclusive_lock(file_path) as fh:
+            fh.seek(0)
+            return fh.read()
+    def decrypt_and_save_index_from_nostr(
+        self, encrypted_data: bytes, relative_path: Optional[Path] = None
+    ) -> None:
+        """Decrypts data from Nostr and saves it, automatically using the new format."""
+        if relative_path is None:
+            relative_path = Path("seedpass_entries_db.json.enc")
+        try:
+            decrypted_data = self.decrypt_data(
+                encrypted_data
+            )  # This now handles both formats
+            if USE_ORJSON:
+                data = json_lib.loads(decrypted_data)
+            else:
+                data = json_lib.loads(decrypted_data.decode("utf-8"))
+            self.save_json_data(data, relative_path)  # This always saves in V2 format
+            self.update_checksum(relative_path)
+            logger.info("Index file from Nostr was processed and saved successfully.")
+            print(colored("Index file updated from Nostr successfully.", "green"))
+        except Exception as e:
+            logger.error(
+                f"Failed to decrypt and save data from Nostr: {e}",
+                exc_info=True,
+            )
+            print(
+                colored(
+                    f"Error: Failed to decrypt and save data from Nostr: {e}",
+                    "red",
+                )
+            )
-            print(
-                colored(
-                    f"Info: Index file '{file_path}' not found. Initializing new password database.",
-                    "yellow",
-                )
-            )
-            return {"entries": {}}
-
-        try:
-            decrypted_data = self.decrypt_file(relative_path)
-            json_content = decrypted_data.decode("utf-8").strip()
-            data = json.loads(json_content)
-            logger.debug(f"JSON data loaded and decrypted from '{file_path}': {data}")
-            return data
-        except json.JSONDecodeError as e:
-            logger.error(
-                f"Failed to decode JSON data from '{file_path}': {e}", exc_info=True
-            )
-            raise
-        except InvalidToken:
-            logger.error(
-                "Invalid encryption key or corrupted data while decrypting JSON data."
-            )
-            raise
-        except Exception as e:
-            logger.error(
-                f"Failed to load JSON data from '{file_path}': {e}", exc_info=True
-            )
-            raise
     def update_checksum(self, relative_path: Optional[Path] = None) -> None:
-        """
-        Updates the checksum file for the specified file within the fingerprint directory.
-
-        :param relative_path: The relative path within the fingerprint directory for which the checksum will be updated.
-                              Defaults to 'seedpass_entries_db.json.enc'.
-        """
+        """Updates the checksum file for the specified file."""
         if relative_path is None:
             relative_path = Path("seedpass_entries_db.json.enc")
-        try:
-            file_path = self.fingerprint_dir / relative_path
-            logger.debug("Calculating checksum of the encrypted file bytes.")
 
+        file_path = self.fingerprint_dir / relative_path
+        if not file_path.exists():
+            return
 
+        try:
             with exclusive_lock(file_path) as fh:
                 fh.seek(0)
                 encrypted_bytes = fh.read()
 
             checksum = hashlib.sha256(encrypted_bytes).hexdigest()
             logger.debug(f"New checksum: {checksum}")
 
             checksum_file = file_path.parent / f"{file_path.stem}_checksum.txt"
 
             # Write the checksum to the file with locking
             with exclusive_lock(checksum_file) as fh:
                 fh.seek(0)
                 fh.truncate()
                 fh.write(checksum.encode("utf-8"))
                 fh.flush()
                 os.fsync(fh.fileno())
 
             # Set file permissions to read/write for the user only
             os.chmod(checksum_file, 0o600)
 
             logger.debug(
                 f"Checksum for '{file_path}' updated and written to '{checksum_file}'."
             )
             print(colored(f"Checksum for '{file_path}' updated.", "green"))
         except Exception as e:
             logger.error(
                 f"Failed to update checksum for '{relative_path}': {e}", exc_info=True
             )
             print(
                 colored(
                     f"Error: Failed to update checksum for '{relative_path}': {e}",
                     "red",
                 )
             )
             raise
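The checksum step reduces to hashing the encrypted file bytes with SHA-256 and writing the hex digest to a sidecar file. The hashing core, standalone (function name is illustrative):

```python
import hashlib


def checksum_bytes(encrypted_bytes: bytes) -> str:
    """SHA-256 hex digest of the raw encrypted bytes, as stored in the checksum file."""
    return hashlib.sha256(encrypted_bytes).hexdigest()
```

Hashing the ciphertext rather than the plaintext means integrity can be verified without decrypting the vault.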
-    def get_encrypted_index(self) -> Optional[bytes]:
-        """
-        Retrieves the encrypted password index file content.
-
-        :return: Encrypted data as bytes or None if the index file does not exist.
-        """
-        try:
-            relative_path = Path("seedpass_entries_db.json.enc")
-            if not (self.fingerprint_dir / relative_path).exists():
-                # Missing index is normal on first run
-                logger.info(
-                    f"Index file '{relative_path}' does not exist in '{self.fingerprint_dir}'."
-                )
-                return None
-
-            file_path = self.fingerprint_dir / relative_path
-            with exclusive_lock(file_path) as fh:
-                fh.seek(0)
-                encrypted_data = fh.read()
-
-            logger.debug(f"Encrypted index data read from '{relative_path}'.")
-            return encrypted_data
-        except Exception as e:
-            logger.error(
-                f"Failed to read encrypted index file '{relative_path}': {e}",
-                exc_info=True,
-            )
-            print(
-                colored(
-                    f"Error: Failed to read encrypted index file '{relative_path}': {e}",
-                    "red",
-                )
-            )
-            return None
-    def decrypt_and_save_index_from_nostr(
-        self, encrypted_data: bytes, relative_path: Optional[Path] = None
-    ) -> None:
-        """
-        Decrypts the encrypted data retrieved from Nostr and updates the local index file.
-
-        :param encrypted_data: The encrypted data retrieved from Nostr.
-        :param relative_path: The relative path within the fingerprint directory to update.
-                              Defaults to 'seedpass_entries_db.json.enc'.
-        """
-        if relative_path is None:
-            relative_path = Path("seedpass_entries_db.json.enc")
-        try:
-            decrypted_data = self.decrypt_data(encrypted_data)
-            data = json.loads(decrypted_data.decode("utf-8"))
-            self.save_json_data(data, relative_path)
-            self.update_checksum(relative_path)
-            logger.info("Index file updated from Nostr successfully.")
-            print(colored("Index file updated from Nostr successfully.", "green"))
-        except Exception as e:
-            logger.error(
-                f"Failed to decrypt and save data from Nostr: {e}", exc_info=True
-            )
-            print(
-                colored(
-                    f"Error: Failed to decrypt and save data from Nostr: {e}", "red"
-                )
-            )
-            # Re-raise the exception to inform the calling function of the failure
-            raise
     # ... validate_seed and derive_seed_from_mnemonic can remain the same ...
     def validate_seed(self, seed_phrase: str) -> bool:
         """
         Validates the seed phrase format using BIP-39 standards.
 
         :param seed_phrase: The BIP39 seed phrase to validate.
         :return: True if valid, False otherwise.
         """
         try:
             words = seed_phrase.split()
             if len(words) != 12:
                 logger.error("Seed phrase does not contain exactly 12 words.")
                 print(
-                    colored("Error: Seed phrase must contain exactly 12 words.", "red")
+                    colored(
+                        "Error: Seed phrase must contain exactly 12 words.",
+                        "red",
+                    )
                 )
                 return False
             # Additional validation can be added here (e.g., word list checks)
             logger.debug("Seed phrase validated successfully.")
             return True
         except Exception as e:
@@ -457,13 +303,6 @@ class EncryptionManager:
             return False
 
     def derive_seed_from_mnemonic(self, mnemonic: str, passphrase: str = "") -> bytes:
         """
         Derives a cryptographic seed from a BIP39 mnemonic (seed phrase).
 
         :param mnemonic: The BIP39 mnemonic phrase.
         :param passphrase: An optional passphrase for additional security.
         :return: The derived seed as bytes.
         """
         try:
             if not isinstance(mnemonic, str):
                 if isinstance(mnemonic, list):
@@ -15,7 +15,14 @@ completely deterministic passwords from a BIP-85 seed, ensuring that passwords a
 the same way every time. Salts would break this functionality and are not suitable for this software.
 """
 
-import json
+try:
+    import orjson as json_lib  # type: ignore
+
+    USE_ORJSON = True
+except Exception:  # pragma: no cover - fallback when orjson is missing
+    import json as json_lib
+
+    USE_ORJSON = False
 import logging
 import hashlib
 import sys
@@ -28,6 +35,7 @@ from password_manager.migrations import LATEST_VERSION
 from password_manager.entry_types import EntryType
 from password_manager.totp import TotpManager
 from utils.fingerprint import generate_fingerprint
+from utils.checksum import canonical_json_dumps
 
 from password_manager.vault import Vault
 from password_manager.backup import BackupManager
@@ -53,9 +61,18 @@ class EntryManager:
         self.index_file = self.fingerprint_dir / "seedpass_entries_db.json.enc"
         self.checksum_file = self.fingerprint_dir / "seedpass_entries_db_checksum.txt"
 
+        self._index_cache: dict | None = None
+
         logger.debug(f"EntryManager initialized with index file at {self.index_file}")
 
-    def _load_index(self) -> Dict[str, Any]:
+    def clear_cache(self) -> None:
+        """Clear the cached index data."""
+        self._index_cache = None
+
+    def _load_index(self, force_reload: bool = False) -> Dict[str, Any]:
+        if not force_reload and self._index_cache is not None:
+            return self._index_cache
+
         if self.index_file.exists():
             try:
                 data = self.vault.load_index()
@@ -81,6 +98,7 @@ class EntryManager:
                 entry.pop("words", None)
                 entry.setdefault("tags", [])
                 logger.debug("Index loaded successfully.")
+                self._index_cache = data
                 return data
             except Exception as e:
                 logger.error(f"Failed to load index: {e}")
@@ -89,11 +107,14 @@ class EntryManager:
         logger.info(
             f"Index file '{self.index_file}' not found. Initializing new entries database."
         )
-        return {"schema_version": LATEST_VERSION, "entries": {}}
+        data = {"schema_version": LATEST_VERSION, "entries": {}}
+        self._index_cache = data
+        return data
 
     def _save_index(self, data: Dict[str, Any]) -> None:
         try:
             self.vault.save_index(data)
+            self._index_cache = data
             logger.debug("Index saved successfully.")
         except Exception as e:
             logger.error(f"Failed to save index: {e}")
@@ -106,7 +127,7 @@ class EntryManager:
         :return: The next index number as an integer.
         """
         try:
-            data = self.vault.load_index()
+            data = self._load_index()
             if "entries" in data and isinstance(data["entries"], dict):
                 indices = [int(idx) for idx in data["entries"].keys()]
                 next_index = max(indices) + 1 if indices else 0
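The `_index_cache` change above is a plain memoization pattern: return the cached dict unless `force_reload` is set, refresh the cache on save, and invalidate it with `clear_cache()`. A generic sketch of the same idea (class name and the `loads` counter are illustrative, not from the diff):

```python
from typing import Any, Callable, Dict, Optional


class IndexCache:
    """Memoize a loaded index; invalidate explicitly with clear()."""

    def __init__(self, loader: Callable[[], Dict[str, Any]]):
        self._loader = loader
        self._cache: Optional[Dict[str, Any]] = None
        self.loads = 0  # counts backing-store reads, for illustration

    def get(self, force_reload: bool = False) -> Dict[str, Any]:
        if not force_reload and self._cache is not None:
            return self._cache
        self.loads += 1
        self._cache = self._loader()
        return self._cache

    def clear(self) -> None:
        self._cache = None
```

The payoff in the diff is that repeated `_load_index()` calls across `get_next_index`, search, and export paths stop re-decrypting the vault on every access.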
@@ -143,7 +164,7 @@ class EntryManager:
         """
         try:
             index = self.get_next_index()
-            data = self.vault.load_index()
+            data = self._load_index()
 
             data.setdefault("entries", {})
             data["entries"][str(index)] = {
@@ -177,7 +198,7 @@ class EntryManager:
 
     def get_next_totp_index(self) -> int:
         """Return the next available derivation index for TOTP secrets."""
-        data = self.vault.load_index()
+        data = self._load_index()
         entries = data.get("entries", {})
         indices = [
             int(v.get("index", 0))
@@ -204,7 +225,7 @@ class EntryManager:
     ) -> str:
         """Add a new TOTP entry and return the provisioning URI."""
         entry_id = self.get_next_index()
-        data = self.vault.load_index()
+        data = self._load_index()
         data.setdefault("entries", {})
 
         if secret is None:
@@ -266,7 +287,7 @@ class EntryManager:
         if index is None:
             index = self.get_next_index()
 
-        data = self.vault.load_index()
+        data = self._load_index()
         data.setdefault("entries", {})
         data["entries"][str(index)] = {
             "type": EntryType.SSH.value,
@@ -312,7 +333,7 @@ class EntryManager:
         if index is None:
             index = self.get_next_index()
 
-        data = self.vault.load_index()
+        data = self._load_index()
         data.setdefault("entries", {})
         data["entries"][str(index)] = {
             "type": EntryType.PGP.value,
@@ -364,7 +385,7 @@ class EntryManager:
         if index is None:
             index = self.get_next_index()
 
-        data = self.vault.load_index()
+        data = self._load_index()
         data.setdefault("entries", {})
         data["entries"][str(index)] = {
             "type": EntryType.NOSTR.value,
@@ -394,7 +415,7 @@ class EntryManager:
 
         index = self.get_next_index()
 
-        data = self.vault.load_index()
+        data = self._load_index()
         data.setdefault("entries", {})
         data["entries"][str(index)] = {
             "type": EntryType.KEY_VALUE.value,
@@ -452,7 +473,7 @@ class EntryManager:
         if index is None:
             index = self.get_next_index()
 
-        data = self.vault.load_index()
+        data = self._load_index()
         data.setdefault("entries", {})
         data["entries"][str(index)] = {
             "type": EntryType.SEED.value,
@@ -524,7 +545,7 @@ class EntryManager:
         account_dir = self.fingerprint_dir / "accounts" / fingerprint
         account_dir.mkdir(parents=True, exist_ok=True)
 
-        data = self.vault.load_index()
+        data = self._load_index()
         data.setdefault("entries", {})
         data["entries"][str(index)] = {
             "type": EntryType.MANAGED_ACCOUNT.value,
@@ -599,7 +620,7 @@ class EntryManager:
 
     def export_totp_entries(self, parent_seed: str) -> dict[str, list[dict[str, Any]]]:
         """Return all TOTP secrets and metadata for external use."""
-        data = self.vault.load_index()
+        data = self._load_index()
         entries = data.get("entries", {})
         exported: list[dict[str, Any]] = []
         for entry in entries.values():
@@ -649,7 +670,7 @@ class EntryManager:
         :return: A dictionary containing the entry details or None if not found.
         """
         try:
-            data = self.vault.load_index()
+            data = self._load_index()
             entry = data.get("entries", {}).get(str(index))
 
             if entry:
@@ -706,7 +727,7 @@ class EntryManager:
         :param value: (Optional) New value for key/value entries.
         """
         try:
-            data = self.vault.load_index()
+            data = self._load_index()
             entry = data.get("entries", {}).get(str(index))
 
             if not entry:
@@ -723,6 +744,93 @@ class EntryManager:
 
         entry_type = entry.get("type", entry.get("kind", EntryType.PASSWORD.value))
 
+        provided_fields = {
+            "username": username,
+            "url": url,
+            "archived": archived,
+            "notes": notes,
+            "label": label,
+            "period": period,
+            "digits": digits,
+            "value": value,
+            "custom_fields": custom_fields,
+            "tags": tags,
+        }
+
+        allowed = {
+            EntryType.PASSWORD.value: {
+                "username",
+                "url",
+                "label",
+                "archived",
+                "notes",
+                "custom_fields",
+                "tags",
+            },
+            EntryType.TOTP.value: {
+                "label",
+                "period",
+                "digits",
+                "archived",
+                "notes",
+                "custom_fields",
+                "tags",
+            },
+            EntryType.KEY_VALUE.value: {
+                "label",
+                "value",
+                "archived",
+                "notes",
+                "custom_fields",
+                "tags",
+            },
+            EntryType.MANAGED_ACCOUNT.value: {
+                "label",
+                "value",
+                "archived",
+                "notes",
+                "custom_fields",
+                "tags",
+            },
+            EntryType.SSH.value: {
+                "label",
+                "archived",
+                "notes",
+                "custom_fields",
+                "tags",
+            },
+            EntryType.PGP.value: {
+                "label",
+                "archived",
+                "notes",
+                "custom_fields",
+                "tags",
+            },
+            EntryType.NOSTR.value: {
+                "label",
+                "archived",
+                "notes",
+                "custom_fields",
+                "tags",
+            },
+            EntryType.SEED.value: {
+                "label",
+                "archived",
+                "notes",
+                "custom_fields",
+                "tags",
+            },
+        }
+
+        allowed_fields = allowed.get(entry_type, set())
+        invalid = {
+            k for k, v in provided_fields.items() if v is not None
+        } - allowed_fields
+        if invalid:
+            raise ValueError(
+                f"Entry type '{entry_type}' does not support fields: {', '.join(sorted(invalid))}"
+            )
+
         if entry_type == EntryType.TOTP.value:
             if label is not None:
                 entry["label"] = label
@@ -796,6 +904,7 @@ class EntryManager:
             print(
                 colored(f"Error: Failed to modify entry at index {index}: {e}", "red")
             )
             raise
 
     def archive_entry(self, index: int) -> None:
         """Mark the specified entry as archived."""
@@ -818,7 +927,7 @@ class EntryManager:
         ``True``.
         """
         try:
-            data = self.vault.load_index()
+            data = self._load_index()
             entries_data = data.get("entries", {})
 
             if not entries_data:
@@ -929,7 +1038,7 @@ class EntryManager:
         self, query: str
     ) -> List[Tuple[int, str, Optional[str], Optional[str], bool]]:
         """Return entries matching the query across common fields."""
-        data = self.vault.load_index()
+        data = self._load_index()
         entries_data = data.get("entries", {})
 
         if not entries_data:
@@ -1018,11 +1127,11 @@ class EntryManager:
         :param index: The index number of the entry to delete.
         """
         try:
-            data = self.vault.load_index()
+            data = self._load_index()
             if "entries" in data and str(index) in data["entries"]:
                 del data["entries"][str(index)]
                 logger.debug(f"Deleted entry at index {index}.")
-                self.vault.save_index(data)
+                self._save_index(data)
                 self.update_checksum()
                 self.backup_manager.create_backup()
                 logger.info(f"Entry at index {index} deleted successfully.")
@@ -1053,9 +1162,9 @@ class EntryManager:
         Updates the checksum file for the password database to ensure data integrity.
         """
         try:
-            data = self.vault.load_index()
-            json_content = json.dumps(data, indent=4)
-            checksum = hashlib.sha256(json_content.encode("utf-8")).hexdigest()
+            data = self._load_index()
+            canonical = canonical_json_dumps(data)
+            checksum = hashlib.sha256(canonical.encode("utf-8")).hexdigest()
 
             # The checksum file path already includes the fingerprint directory
             checksum_path = self.checksum_file
@@ -1099,6 +1208,7 @@ class EntryManager:
                 )
             )
 
+        self.clear_cache()
         self.update_checksum()
 
         except Exception as e:
@@ -1152,7 +1262,7 @@ class EntryManager:
     ) -> list[tuple[int, str, str]]:
         """Return a list of entry index, type, and display labels."""
         try:
-            data = self.vault.load_index()
+            data = self._load_index()
             entries_data = data.get("entries", {})
 
             summaries: list[tuple[int, str, str]] = []
File diff suppressed because it is too large
@@ -21,6 +21,7 @@ import random
 import traceback
 import base64
 from typing import Optional
+from dataclasses import dataclass
 from termcolor import colored
 from pathlib import Path
 import shutil
@@ -48,6 +49,16 @@ from password_manager.encryption import EncryptionManager
 logger = logging.getLogger(__name__)
 
 
+@dataclass
+class PasswordPolicy:
+    """Minimum complexity requirements for generated passwords."""
+
+    min_uppercase: int = 2
+    min_lowercase: int = 2
+    min_digits: int = 2
+    min_special: int = 2
+
+
 class PasswordGenerator:
     """
     PasswordGenerator Class
@@ -58,7 +69,11 @@ class PasswordGenerator:
     """
 
     def __init__(
-        self, encryption_manager: EncryptionManager, parent_seed: str, bip85: BIP85
+        self,
+        encryption_manager: EncryptionManager,
+        parent_seed: str,
+        bip85: BIP85,
+        policy: PasswordPolicy | None = None,
     ):
         """
         Initializes the PasswordGenerator with the encryption manager, parent seed, and BIP85 instance.
@@ -72,6 +87,7 @@ class PasswordGenerator:
         self.encryption_manager = encryption_manager
         self.parent_seed = parent_seed
         self.bip85 = bip85
+        self.policy = policy or PasswordPolicy()
 
         # Derive seed bytes from parent_seed using BIP39 (handled by EncryptionManager)
         self.seed_bytes = self.encryption_manager.derive_seed_from_mnemonic(
@@ -224,11 +240,11 @@ class PasswordGenerator:
             f"Current character counts - Upper: {current_upper}, Lower: {current_lower}, Digits: {current_digits}, Special: {current_special}"
         )
 
-        # Set minimum counts
-        min_upper = 2
-        min_lower = 2
-        min_digits = 2
-        min_special = 2
+        # Set minimum counts from policy
+        min_upper = self.policy.min_uppercase
+        min_lower = self.policy.min_lowercase
+        min_digits = self.policy.min_digits
+        min_special = self.policy.min_special
 
         # Initialize derived key index
         dk_index = 0
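The `PasswordPolicy` dataclass introduced above only carries minimum character-class counts; enforcement is checking a candidate password against those minimums. The `meets_policy` helper below is a sketch of that check, not the generator's actual loop (which re-derives characters deterministically until the counts are satisfied):

```python
from dataclasses import dataclass
import string


@dataclass
class PasswordPolicy:
    """Minimum complexity requirements for generated passwords."""

    min_uppercase: int = 2
    min_lowercase: int = 2
    min_digits: int = 2
    min_special: int = 2


def meets_policy(password: str, policy: PasswordPolicy) -> bool:
    """True when the password satisfies every minimum character-class count."""
    return (
        sum(c.isupper() for c in password) >= policy.min_uppercase
        and sum(c.islower() for c in password) >= policy.min_lowercase
        and sum(c.isdigit() for c in password) >= policy.min_digits
        and sum(c in string.punctuation for c in password) >= policy.min_special
    )
```

Making the minimums a constructor parameter (rather than the old hard-coded `2`s) lets callers tighten or relax complexity per profile without touching the derivation logic.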
@@ -74,7 +74,7 @@ def export_backup(
         "created_at": int(time.time()),
         "fingerprint": vault.fingerprint_dir.name,
         "encryption_mode": PortableMode.SEED_ONLY.value,
-        "cipher": "fernet",
+        "cipher": "aes-gcm",
         "checksum": checksum,
         "payload": base64.b64encode(payload_bytes).decode("utf-8"),
     }
@@ -90,7 +90,11 @@ def export_backup(
     enc_file.write_bytes(encrypted)
     os.chmod(enc_file, 0o600)
     try:
-        client = NostrClient(vault.encryption_manager, vault.fingerprint_dir.name)
+        client = NostrClient(
+            vault.encryption_manager,
+            vault.fingerprint_dir.name,
+            config_manager=backup_manager.config_manager,
+        )
         asyncio.run(client.publish_snapshot(encrypted))
     except Exception:
         logger.error("Failed to publish backup via Nostr", exc_info=True)
@@ -30,6 +30,17 @@ class Vault:
     # ----- Password index helpers -----
     def load_index(self) -> dict:
         """Return decrypted password index data as a dict, applying migrations."""
+        legacy_file = self.fingerprint_dir / "seedpass_passwords_db.json.enc"
+        if legacy_file.exists() and not self.index_file.exists():
+            legacy_checksum = (
+                self.fingerprint_dir / "seedpass_passwords_db_checksum.txt"
+            )
+            legacy_file.rename(self.index_file)
+            if legacy_checksum.exists():
+                legacy_checksum.rename(
+                    self.fingerprint_dir / "seedpass_entries_db_checksum.txt"
+                )
+
         data = self.encryption_manager.load_json_data(self.index_file)
         from .migrations import apply_migrations, LATEST_VERSION
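The legacy-rename logic added to `Vault.load_index` can be exercised on its own. This standalone sketch reproduces the rename behavior under the same file names (the function itself is illustrative; the filenames match the diff):

```python
from pathlib import Path


def migrate_legacy_index(fingerprint_dir: Path) -> None:
    """Rename the old *passwords* database (and its checksum) to the new *entries* names."""
    legacy = fingerprint_dir / "seedpass_passwords_db.json.enc"
    target = fingerprint_dir / "seedpass_entries_db.json.enc"
    if legacy.exists() and not target.exists():
        legacy_checksum = fingerprint_dir / "seedpass_passwords_db_checksum.txt"
        legacy.rename(target)
        if legacy_checksum.exists():
            legacy_checksum.rename(
                fingerprint_dir / "seedpass_entries_db_checksum.txt"
            )
```

The `not target.exists()` guard makes the migration idempotent: once the new file is in place, an old leftover can never overwrite it.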
@@ -5,7 +5,7 @@ bip-utils>=2.5.0
 bech32==1.2.0
 coincurve>=18.0.0
 mnemonic
-aiohttp
+aiohttp>=3.12.14
 bcrypt
 pytest>=7.0
 pytest-cov
@@ -30,3 +30,5 @@ uvicorn>=0.35.0
 httpx>=0.28.1
 requests>=2.32
 python-multipart
+orjson
+argon2-cffi
@@ -6,9 +6,10 @@ import os
 import tempfile
 from pathlib import Path
 import secrets
+import queue
 from typing import Any, List, Optional
 
-from fastapi import FastAPI, Header, HTTPException, Request
+from fastapi import FastAPI, Header, HTTPException, Request, Response
 import asyncio
 import sys
 from fastapi.middleware.cors import CORSMiddleware
@@ -28,6 +29,20 @@ def _check_token(auth: str | None) -> None:
         raise HTTPException(status_code=401, detail="Unauthorized")
 
 
+def _reload_relays(relays: list[str]) -> None:
+    """Reload the Nostr client with a new relay list."""
+    assert _pm is not None
+    try:
+        _pm.nostr_client.close_client_pool()
+    except Exception:
+        pass
+    try:
+        _pm.nostr_client.relays = relays
+        _pm.nostr_client.initialize_client_pool()
+    except Exception:
+        pass
+
+
 def start_server(fingerprint: str | None = None) -> str:
     """Initialize global state and return the API token.
 
@@ -37,9 +52,10 @@ def start_server(fingerprint: str | None = None) -> str:
         Optional seed profile fingerprint to select before starting the server.
     """
     global _pm, _token
-    _pm = PasswordManager()
-    if fingerprint:
-        _pm.select_fingerprint(fingerprint)
+    if fingerprint is None:
+        _pm = PasswordManager()
+    else:
+        _pm = PasswordManager(fingerprint=fingerprint)
     _token = secrets.token_urlsafe(16)
     print(f"API token: {_token}")
     origins = [
@@ -192,16 +208,19 @@ def update_entry(
     """
     _check_token(authorization)
     assert _pm is not None
-    _pm.entry_manager.modify_entry(
-        entry_id,
-        username=entry.get("username"),
-        url=entry.get("url"),
-        notes=entry.get("notes"),
-        label=entry.get("label"),
-        period=entry.get("period"),
-        digits=entry.get("digits"),
-        value=entry.get("value"),
-    )
+    try:
+        _pm.entry_manager.modify_entry(
+            entry_id,
+            username=entry.get("username"),
+            url=entry.get("url"),
+            notes=entry.get("notes"),
+            label=entry.get("label"),
+            period=entry.get("period"),
+            digits=entry.get("digits"),
+            value=entry.get("value"),
+        )
+    except ValueError as e:
+        raise HTTPException(status_code=400, detail=str(e))
     return {"status": "ok"}
@@ -253,6 +272,7 @@ def update_config(
         "additional_backup_path": cfg.set_additional_backup_path,
         "secret_mode_enabled": cfg.set_secret_mode_enabled,
         "clipboard_clear_delay": lambda v: cfg.set_clipboard_clear_delay(int(v)),
+        "quick_unlock": cfg.set_quick_unlock,
     }
 
     action = mapping.get(key)
@@ -360,6 +380,21 @@ def get_profile_stats(authorization: str | None = Header(None)) -> dict:
     return _pm.get_profile_stats()
 
 
+@app.get("/api/v1/notifications")
+def get_notifications(authorization: str | None = Header(None)) -> List[dict]:
+    """Return and clear queued notifications."""
+    _check_token(authorization)
+    assert _pm is not None
+    notes = []
+    while True:
+        try:
+            note = _pm.notifications.get_nowait()
+        except queue.Empty:
+            break
+        notes.append({"level": note.level, "message": note.message})
+    return notes
+
+
|
||||
def get_parent_seed(
|
||||
authorization: str | None = Header(None), file: str | None = None
|
||||
@@ -383,6 +418,63 @@ def get_nostr_pubkey(authorization: str | None = Header(None)) -> Any:
|
||||
return {"npub": _pm.nostr_client.key_manager.get_npub()}
|
||||
|
||||
|
||||
@app.get("/api/v1/relays")
|
||||
def list_relays(authorization: str | None = Header(None)) -> dict:
|
||||
"""Return the configured Nostr relays."""
|
||||
_check_token(authorization)
|
||||
assert _pm is not None
|
||||
cfg = _pm.config_manager.load_config(require_pin=False)
|
||||
return {"relays": cfg.get("relays", [])}
|
||||
|
||||
|
||||
@app.post("/api/v1/relays")
|
||||
def add_relay(data: dict, authorization: str | None = Header(None)) -> dict[str, str]:
|
||||
"""Add a relay URL to the configuration."""
|
||||
_check_token(authorization)
|
||||
assert _pm is not None
|
||||
url = data.get("url")
|
||||
if not url:
|
||||
raise HTTPException(status_code=400, detail="Missing url")
|
||||
cfg = _pm.config_manager.load_config(require_pin=False)
|
||||
relays = cfg.get("relays", [])
|
||||
if url in relays:
|
||||
raise HTTPException(status_code=400, detail="Relay already present")
|
||||
relays.append(url)
|
||||
_pm.config_manager.set_relays(relays, require_pin=False)
|
||||
_reload_relays(relays)
|
||||
return {"status": "ok"}
|
||||
|
||||
|
||||
@app.delete("/api/v1/relays/{idx}")
|
||||
def remove_relay(idx: int, authorization: str | None = Header(None)) -> dict[str, str]:
|
||||
"""Remove a relay by its index (1-based)."""
|
||||
_check_token(authorization)
|
||||
assert _pm is not None
|
||||
cfg = _pm.config_manager.load_config(require_pin=False)
|
||||
relays = cfg.get("relays", [])
|
||||
if not (1 <= idx <= len(relays)):
|
||||
raise HTTPException(status_code=400, detail="Invalid index")
|
||||
if len(relays) == 1:
|
||||
raise HTTPException(status_code=400, detail="At least one relay required")
|
||||
relays.pop(idx - 1)
|
||||
_pm.config_manager.set_relays(relays, require_pin=False)
|
||||
_reload_relays(relays)
|
||||
return {"status": "ok"}
|
||||
|
||||
|
||||
@app.post("/api/v1/relays/reset")
|
||||
def reset_relays(authorization: str | None = Header(None)) -> dict[str, str]:
|
||||
"""Reset relay list to defaults."""
|
||||
_check_token(authorization)
|
||||
assert _pm is not None
|
||||
from nostr.client import DEFAULT_RELAYS
|
||||
|
||||
relays = list(DEFAULT_RELAYS)
|
||||
_pm.config_manager.set_relays(relays, require_pin=False)
|
||||
_reload_relays(relays)
|
||||
return {"status": "ok"}
@app.post("/api/v1/checksum/verify")
def verify_checksum(authorization: str | None = Header(None)) -> dict[str, str]:
    """Verify the SeedPass script checksum."""
@@ -401,6 +493,18 @@ def update_checksum(authorization: str | None = Header(None)) -> dict[str, str]:
    return {"status": "ok"}


@app.post("/api/v1/vault/export")
def export_vault(authorization: str | None = Header(None)):
    """Export the vault and return the encrypted file."""
    _check_token(authorization)
    assert _pm is not None
    path = _pm.handle_export_database()
    if path is None:
        raise HTTPException(status_code=500, detail="Export failed")
    data = Path(path).read_bytes()
    return Response(content=data, media_type="application/octet-stream")


@app.post("/api/v1/vault/import")
async def import_vault(
    request: Request, authorization: str | None = Header(None)
@@ -429,6 +533,23 @@ async def import_vault(
    if not path:
        raise HTTPException(status_code=400, detail="Missing file or path")
    _pm.handle_import_database(Path(path))
    _pm.sync_vault()
    return {"status": "ok"}


@app.post("/api/v1/vault/backup-parent-seed")
def backup_parent_seed(
    data: dict | None = None, authorization: str | None = Header(None)
) -> dict[str, str]:
    """Backup and reveal the parent seed."""
    _check_token(authorization)
    assert _pm is not None
    path = None
    if data is not None:
        p = data.get("path")
        if p:
            path = Path(p)
    _pm.handle_backup_reveal_parent_seed(path)
    return {"status": "ok"}
@@ -9,7 +9,12 @@ from password_manager.entry_types import EntryType
 import uvicorn
 from . import api as api_module

-app = typer.Typer(help="SeedPass command line interface")
+import importlib
+
+app = typer.Typer(
+    help="SeedPass command line interface",
+    invoke_without_command=True,
+)

 # Global option shared across all commands
 fingerprint_option = typer.Option(
@@ -39,18 +44,24 @@ app.add_typer(api_app, name="api")


 def _get_pm(ctx: typer.Context) -> PasswordManager:
     """Return a PasswordManager optionally selecting a fingerprint."""
-    pm = PasswordManager()
     fp = ctx.obj.get("fingerprint")
-    if fp:
-        # `select_fingerprint` will initialize managers
-        pm.select_fingerprint(fp)
+    if fp is None:
+        pm = PasswordManager()
+    else:
+        pm = PasswordManager(fingerprint=fp)
     return pm


-@app.callback()
+@app.callback(invoke_without_command=True)
 def main(ctx: typer.Context, fingerprint: Optional[str] = fingerprint_option) -> None:
-    """SeedPass CLI entry point."""
+    """SeedPass CLI entry point.
+
+    When called without a subcommand this launches the interactive TUI.
+    """
     ctx.obj = {"fingerprint": fingerprint}
+    if ctx.invoked_subcommand is None:
+        tui = importlib.import_module("main")
+        raise typer.Exit(tui.main(fingerprint=fingerprint))
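The `invoke_without_command` change above implements a common CLI pattern: run the named subcommand when one is given, otherwise fall back to the interactive TUI. Stripped of the Typer machinery, the dispatch rule is just this (a stdlib-only sketch; the names and command table are hypothetical):

```python
def dispatch(argv: list[str], commands: dict, fallback) -> str:
    """Run the named subcommand when one is given, else the fallback TUI."""
    if argv and argv[0] in commands:
        return commands[argv[0]](*argv[1:])
    return fallback()


# Hypothetical command table standing in for the Typer sub-apps
commands = {"entry": lambda *args: "entry:" + ",".join(args)}
```

An unknown first argument also falls through to the fallback here; Typer would instead report an error, which is one reason the real code leans on `invoke_without_command` rather than hand-rolled dispatch.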
@entry_app.command("list")
@@ -139,6 +150,7 @@ def entry_add(
    pm = _get_pm(ctx)
    index = pm.entry_manager.add_entry(label, length, username, url)
    typer.echo(str(index))
    pm.sync_vault()


@entry_app.command("add-totp")
@@ -161,6 +173,7 @@ def entry_add_totp(
        digits=digits,
    )
    typer.echo(uri)
    pm.sync_vault()


@entry_app.command("add-ssh")
@@ -179,6 +192,7 @@ def entry_add_ssh(
        notes=notes,
    )
    typer.echo(str(idx))
    pm.sync_vault()


@entry_app.command("add-pgp")
@@ -201,6 +215,7 @@ def entry_add_pgp(
        notes=notes,
    )
    typer.echo(str(idx))
    pm.sync_vault()


@entry_app.command("add-nostr")
@@ -218,6 +233,7 @@ def entry_add_nostr(
        notes=notes,
    )
    typer.echo(str(idx))
    pm.sync_vault()


@entry_app.command("add-seed")
@@ -238,6 +254,7 @@ def entry_add_seed(
        notes=notes,
    )
    typer.echo(str(idx))
    pm.sync_vault()


@entry_app.command("add-key-value")
@@ -251,6 +268,7 @@ def entry_add_key_value(
    pm = _get_pm(ctx)
    idx = pm.entry_manager.add_key_value(label, value, notes=notes)
    typer.echo(str(idx))
    pm.sync_vault()


@entry_app.command("add-managed-account")
@@ -269,6 +287,7 @@ def entry_add_managed_account(
        notes=notes,
    )
    typer.echo(str(idx))
    pm.sync_vault()
@entry_app.command("modify")
@@ -287,16 +306,21 @@ def entry_modify(
 ) -> None:
     """Modify an existing entry."""
     pm = _get_pm(ctx)
-    pm.entry_manager.modify_entry(
-        entry_id,
-        username=username,
-        url=url,
-        notes=notes,
-        label=label,
-        period=period,
-        digits=digits,
-        value=value,
-    )
+    try:
+        pm.entry_manager.modify_entry(
+            entry_id,
+            username=username,
+            url=url,
+            notes=notes,
+            label=label,
+            period=period,
+            digits=digits,
+            value=value,
+        )
+    except ValueError as e:
+        typer.echo(str(e))
+        raise typer.Exit(code=1)
+    pm.sync_vault()
@entry_app.command("archive")
@@ -305,6 +329,7 @@ def entry_archive(ctx: typer.Context, entry_id: int) -> None:
    pm = _get_pm(ctx)
    pm.entry_manager.archive_entry(entry_id)
    typer.echo(str(entry_id))
    pm.sync_vault()


@entry_app.command("unarchive")
@@ -313,6 +338,7 @@ def entry_unarchive(ctx: typer.Context, entry_id: int) -> None:
    pm = _get_pm(ctx)
    pm.entry_manager.restore_entry(entry_id)
    typer.echo(str(entry_id))
    pm.sync_vault()


@entry_app.command("totp-codes")
@@ -350,6 +376,7 @@ def vault_import(
    """Import a vault from an encrypted JSON file."""
    pm = _get_pm(ctx)
    pm.handle_import_database(Path(file))
    pm.sync_vault()
    typer.echo(str(file))


@@ -434,6 +461,21 @@ def config_set(ctx: typer.Context, key: str, value: str) -> None:
        "relays": lambda v: cfg.set_relays(
            [r.strip() for r in v.split(",") if r.strip()], require_pin=False
        ),
        "kdf_iterations": lambda v: cfg.set_kdf_iterations(int(v)),
        "kdf_mode": lambda v: cfg.set_kdf_mode(v),
        "backup_interval": lambda v: cfg.set_backup_interval(float(v)),
        "nostr_max_retries": lambda v: cfg.set_nostr_max_retries(int(v)),
        "nostr_retry_delay": lambda v: cfg.set_nostr_retry_delay(float(v)),
        "min_uppercase": lambda v: cfg.set_min_uppercase(int(v)),
        "min_lowercase": lambda v: cfg.set_min_lowercase(int(v)),
        "min_digits": lambda v: cfg.set_min_digits(int(v)),
        "min_special": lambda v: cfg.set_min_special(int(v)),
        "quick_unlock": lambda v: cfg.set_quick_unlock(
            v.lower() in ("1", "true", "yes", "y", "on")
        ),
        "verbose_timing": lambda v: cfg.set_verbose_timing(
            v.lower() in ("1", "true", "yes", "y", "on")
        ),
    }

    action = mapping.get(key)
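The `quick_unlock` and `verbose_timing` entries in the mapping repeat the same truthy-string check. That check is worth seeing in isolation, since every boolean config value passed on the command line goes through it; a small sketch (the helper name is hypothetical):

```python
# Strings the config mapping's lambdas treat as "true"
TRUTHY = ("1", "true", "yes", "y", "on")


def parse_bool(value: str) -> bool:
    """Interpret a CLI string the same way the mapping's lambdas do."""
    return value.lower() in TRUTHY
```

Anything outside the whitelist, including "off" and "0", is treated as false rather than rejected.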
@@ -501,6 +543,41 @@ def config_toggle_secret_mode(ctx: typer.Context) -> None:
    typer.echo(f"Secret mode {status}.")


@config_app.command("toggle-offline")
def config_toggle_offline(ctx: typer.Context) -> None:
    """Enable or disable offline mode."""
    pm = _get_pm(ctx)
    cfg = pm.config_manager
    try:
        enabled = cfg.get_offline_mode()
    except Exception as exc:  # pragma: no cover - pass through errors
        typer.echo(f"Error loading settings: {exc}")
        raise typer.Exit(code=1)

    typer.echo(f"Offline mode is currently {'ON' if enabled else 'OFF'}")
    choice = (
        typer.prompt(
            "Enable offline mode? (y/n, blank to keep)", default="", show_default=False
        )
        .strip()
        .lower()
    )
    if choice in ("y", "yes"):
        enabled = True
    elif choice in ("n", "no"):
        enabled = False

    try:
        cfg.set_offline_mode(enabled)
        pm.offline_mode = enabled
    except Exception as exc:  # pragma: no cover - pass through errors
        typer.echo(f"Error: {exc}")
        raise typer.Exit(code=1)

    status = "enabled" if enabled else "disabled"
    typer.echo(f"Offline mode {status}.")


@fingerprint_app.command("list")
def fingerprint_list(ctx: typer.Context) -> None:
    """List available seed profiles."""
@@ -573,3 +650,7 @@ def api_stop(ctx: typer.Context, host: str = "127.0.0.1", port: int = 8000) -> N
        )
    except Exception as exc:  # pragma: no cover - best effort
        typer.echo(f"Failed to stop server: {exc}")


if __name__ == "__main__":
    app()
@@ -1,4 +1,6 @@
import sys
import time
import json
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parents[1]))
@@ -161,6 +163,7 @@ class DummySendResult:
class DummyRelayClient:
    def __init__(self):
        self.counter = 0
        self.ts_counter = 0
        self.manifests: list[DummyEvent] = []
        self.chunks: dict[str, DummyEvent] = {}
        self.deltas: list[DummyEvent] = []
@@ -183,11 +186,19 @@ class DummyRelayClient:
        if isinstance(event, DummyEvent):
            event.id = eid
        if event.kind == KIND_MANIFEST:
            try:
                data = json.loads(event.content())
                event.delta_since = data.get("delta_since")
            except Exception:
                event.delta_since = None
            self.manifests.append(event)
        elif event.kind == KIND_SNAPSHOT_CHUNK:
            ident = event.tags[0] if event.tags else str(self.counter)
            self.chunks[ident] = event
        elif event.kind == KIND_DELTA:
            if not hasattr(event, "created_at"):
                self.ts_counter += 1
                event.created_at = self.ts_counter
            self.deltas.append(event)
        return DummySendResult(eid)
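`DummyRelayClient` routes events by kind the same way the real sync does: manifests describe a snapshot, chunk events carry the 50 KB pieces of the encrypted vault, and delta events record changes between snapshots. A sketch of that routing, using the kind numbers stated in the project README (assumed to match the real `KIND_*` constants):

```python
# Event kinds per the SeedPass README (assumed to match the real constants)
KIND_MANIFEST = 30070        # manifest describing a vault snapshot
KIND_SNAPSHOT_CHUNK = 30071  # one 50 KB chunk of the encrypted vault
KIND_DELTA = 30072           # changes captured between snapshots


def classify(kind: int) -> str:
    """Name the role an event plays in a vault sync."""
    return {
        KIND_MANIFEST: "manifest",
        KIND_SNAPSHOT_CHUNK: "chunk",
        KIND_DELTA: "delta",
    }.get(kind, "unknown")
```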
@@ -30,6 +30,7 @@ def client(monkeypatch):
            set_additional_backup_path=lambda v: None,
            set_secret_mode_enabled=lambda v: None,
            set_clipboard_clear_delay=lambda v: None,
            set_quick_unlock=lambda v: None,
        ),
        fingerprint_manager=SimpleNamespace(list_fingerprints=lambda: ["fp"]),
        nostr_client=SimpleNamespace(
@@ -158,6 +159,22 @@ def test_update_config(client):
    assert res.headers.get("access-control-allow-origin") == "http://example.com"


def test_update_config_quick_unlock(client):
    cl, token = client
    called = {}

    api._pm.config_manager.set_quick_unlock = lambda v: called.setdefault("val", v)
    headers = {"Authorization": f"Bearer {token}", "Origin": "http://example.com"}
    res = cl.put(
        "/api/v1/config/quick_unlock",
        json={"value": True},
        headers=headers,
    )
    assert res.status_code == 200
    assert res.json() == {"status": "ok"}
    assert called.get("val") is True


def test_change_password_route(client):
    cl, token = client
    called = {}
@@ -4,6 +4,8 @@ import pytest

from seedpass import api
from test_api import client
from helpers import dummy_nostr_client
from nostr.client import NostrClient, DEFAULT_RELAYS


def test_create_and_modify_totp_entry(client):
@@ -93,6 +95,19 @@ def test_create_and_modify_ssh_entry(client):
    assert calls["modify"][1]["notes"] == "x"


def test_update_entry_error(client):
    cl, token = client

    def modify(*a, **k):
        raise ValueError("nope")

    api._pm.entry_manager.modify_entry = modify
    headers = {"Authorization": f"Bearer {token}"}
    res = cl.put("/api/v1/entry/1", json={"username": "x"}, headers=headers)
    assert res.status_code == 400
    assert res.json() == {"detail": "nope"}


def test_update_config_secret_mode(client):
    cl, token = client
    called = {}
@@ -218,6 +233,7 @@ def test_vault_import_via_path(client, tmp_path):
        called["path"] = path

    api._pm.handle_import_database = import_db
    api._pm.sync_vault = lambda: called.setdefault("sync", True)
    file_path = tmp_path / "b.json"
    file_path.write_text("{}")

@@ -230,6 +246,7 @@ def test_vault_import_via_path(client, tmp_path):
    assert res.status_code == 200
    assert res.json() == {"status": "ok"}
    assert called["path"] == file_path
    assert called.get("sync") is True


def test_vault_import_via_upload(client, tmp_path):
@@ -240,6 +257,7 @@ def test_vault_import_via_upload(client, tmp_path):
        called["path"] = path

    api._pm.handle_import_database = import_db
    api._pm.sync_vault = lambda: called.setdefault("sync", True)
    file_path = tmp_path / "c.json"
    file_path.write_text("{}")

@@ -253,6 +271,7 @@ def test_vault_import_via_upload(client, tmp_path):
    assert res.status_code == 200
    assert res.json() == {"status": "ok"}
    assert isinstance(called.get("path"), Path)
    assert called.get("sync") is True


def test_vault_lock_endpoint(client):
@@ -300,3 +319,85 @@ def test_secret_mode_endpoint(client):
    assert res.json() == {"status": "ok"}
    assert called["enabled"] is True
    assert called["delay"] == 12


def test_vault_export_endpoint(client, tmp_path):
    cl, token = client
    out = tmp_path / "out.json"
    out.write_text("data")

    api._pm.handle_export_database = lambda: out

    headers = {"Authorization": f"Bearer {token}"}
    res = cl.post("/api/v1/vault/export", headers=headers)
    assert res.status_code == 200
    assert res.content == b"data"


def test_backup_parent_seed_endpoint(client, tmp_path):
    cl, token = client
    called = {}

    def backup(path=None):
        called["path"] = path

    api._pm.handle_backup_reveal_parent_seed = backup
    path = tmp_path / "seed.enc"
    headers = {"Authorization": f"Bearer {token}"}
    res = cl.post(
        "/api/v1/vault/backup-parent-seed",
        json={"path": str(path)},
        headers=headers,
    )
    assert res.status_code == 200
    assert res.json() == {"status": "ok"}
    assert called["path"] == path


def test_relay_management_endpoints(client, dummy_nostr_client, monkeypatch):
    cl, token = client
    nostr_client, _ = dummy_nostr_client
    relays = ["wss://a", "wss://b"]

    def load_config(require_pin=False):
        return {"relays": relays.copy()}

    called = {}

    def set_relays(new, require_pin=False):
        called["set"] = new

    api._pm.config_manager.load_config = load_config
    api._pm.config_manager.set_relays = set_relays
    monkeypatch.setattr(
        NostrClient,
        "initialize_client_pool",
        lambda self: called.setdefault("init", True),
    )
    monkeypatch.setattr(
        nostr_client, "close_client_pool", lambda: called.setdefault("close", True)
    )
    api._pm.nostr_client = nostr_client
    api._pm.nostr_client.relays = relays.copy()

    headers = {"Authorization": f"Bearer {token}"}

    res = cl.get("/api/v1/relays", headers=headers)
    assert res.status_code == 200
    assert res.json() == {"relays": relays}

    res = cl.post("/api/v1/relays", json={"url": "wss://c"}, headers=headers)
    assert res.status_code == 200
    assert called["set"] == ["wss://a", "wss://b", "wss://c"]

    api._pm.config_manager.load_config = lambda require_pin=False: {
        "relays": ["wss://a", "wss://b", "wss://c"]
    }
    res = cl.delete("/api/v1/relays/2", headers=headers)
    assert res.status_code == 200
    assert called["set"] == ["wss://a", "wss://c"]

    res = cl.post("/api/v1/relays/reset", headers=headers)
    assert res.status_code == 200
    assert called.get("init") is True
    assert api._pm.nostr_client.relays == list(DEFAULT_RELAYS)
src/tests/test_api_notifications.py — new file, 45 lines
@@ -0,0 +1,45 @@
from test_api import client
from types import SimpleNamespace
import queue
import seedpass.api as api


def test_notifications_endpoint(client):
    cl, token = client
    api._pm.notifications = queue.Queue()
    api._pm.notifications.put(SimpleNamespace(message="m1", level="INFO"))
    api._pm.notifications.put(SimpleNamespace(message="m2", level="WARNING"))
    res = cl.get("/api/v1/notifications", headers={"Authorization": f"Bearer {token}"})
    assert res.status_code == 200
    assert res.json() == [
        {"level": "INFO", "message": "m1"},
        {"level": "WARNING", "message": "m2"},
    ]
    assert api._pm.notifications.empty()


def test_notifications_endpoint_clears_queue(client):
    cl, token = client
    api._pm.notifications = queue.Queue()
    api._pm.notifications.put(SimpleNamespace(message="hi", level="INFO"))
    res = cl.get("/api/v1/notifications", headers={"Authorization": f"Bearer {token}"})
    assert res.status_code == 200
    assert res.json() == [{"level": "INFO", "message": "hi"}]
    assert api._pm.notifications.empty()
    res = cl.get("/api/v1/notifications", headers={"Authorization": f"Bearer {token}"})
    assert res.json() == []


def test_notifications_endpoint_does_not_clear_current(client):
    cl, token = client
    api._pm.notifications = queue.Queue()
    msg = SimpleNamespace(message="keep", level="INFO")
    api._pm.notifications.put(msg)
    api._pm._current_notification = msg
    api._pm.get_current_notification = lambda: api._pm._current_notification

    res = cl.get("/api/v1/notifications", headers={"Authorization": f"Bearer {token}"})
    assert res.status_code == 200
    assert res.json() == [{"level": "INFO", "message": "keep"}]
    assert api._pm.notifications.empty()
    assert api._pm.get_current_notification() is msg
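These tests exercise an endpoint that drains the notification queue into JSON, oldest first, leaving the queue empty afterwards. The drain logic in isolation might look like the following sketch (not the actual handler; the function name is hypothetical):

```python
import queue
from types import SimpleNamespace


def drain_notifications(q: queue.Queue) -> list[dict]:
    """Pop every queued notification, oldest first, into JSON-ready dicts."""
    out: list[dict] = []
    while True:
        try:
            note = q.get_nowait()
        except queue.Empty:
            break
        out.append({"level": note.level, "message": note.message})
    return out


# Demo mirroring the first test above
notes = queue.Queue()
notes.put(SimpleNamespace(message="m1", level="INFO"))
notes.put(SimpleNamespace(message="m2", level="WARNING"))
drained = drain_notifications(notes)
```

A second call on the now-empty queue returns `[]`, which is exactly the behaviour `test_notifications_endpoint_clears_queue` pins down.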
@@ -2,6 +2,7 @@ import sys
from pathlib import Path
from tempfile import TemporaryDirectory
from types import SimpleNamespace
import queue

from helpers import create_vault, TEST_SEED, TEST_PASSWORD

@@ -37,6 +38,7 @@ def test_archive_entry_from_retrieve(monkeypatch):
    pm.nostr_client = SimpleNamespace()
    pm.fingerprint_dir = tmp_path
    pm.secret_mode_enabled = False
    pm.notifications = queue.Queue()

    index = entry_mgr.add_entry("example.com", 8)

@@ -68,6 +70,7 @@ def test_restore_entry_from_retrieve(monkeypatch):
    pm.nostr_client = SimpleNamespace()
    pm.fingerprint_dir = tmp_path
    pm.secret_mode_enabled = False
    pm.notifications = queue.Queue()

    index = entry_mgr.add_entry("example.com", 8)
    entry_mgr.archive_entry(index)

@@ -2,6 +2,7 @@ import sys
from pathlib import Path
from tempfile import TemporaryDirectory
from types import SimpleNamespace
import queue

import pytest

@@ -67,6 +68,7 @@ def test_view_archived_entries_cli(monkeypatch):
    pm.nostr_client = SimpleNamespace()
    pm.fingerprint_dir = tmp_path
    pm.is_dirty = False
    pm.notifications = queue.Queue()

    idx = entry_mgr.add_entry("example.com", 8)

@@ -98,6 +100,7 @@ def test_view_archived_entries_view_only(monkeypatch, capsys):
    pm.nostr_client = SimpleNamespace()
    pm.fingerprint_dir = tmp_path
    pm.is_dirty = False
    pm.notifications = queue.Queue()

    idx = entry_mgr.add_entry("example.com", 8)

@@ -131,6 +134,7 @@ def test_view_archived_entries_removed_after_restore(monkeypatch, capsys):
    pm.nostr_client = SimpleNamespace()
    pm.fingerprint_dir = tmp_path
    pm.is_dirty = False
    pm.notifications = queue.Queue()

    idx = entry_mgr.add_entry("example.com", 8)

@@ -145,5 +149,6 @@ def test_view_archived_entries_removed_after_restore(monkeypatch, capsys):

    monkeypatch.setattr("builtins.input", lambda *_: "")
    pm.handle_view_archived_entries()
    out = capsys.readouterr().out
    assert "No archived entries found." in out
    note = pm.notifications.get_nowait()
    assert note.level == "WARNING"
    assert note.message == "No archived entries found."
@@ -22,6 +22,8 @@ def test_auto_sync_triggers_post(monkeypatch):
        update_activity=lambda: None,
        lock_vault=lambda: None,
        unlock_vault=lambda: None,
        start_background_sync=lambda: None,
        start_background_relay_check=lambda: None,
    )

    called = False

src/tests/test_background_relay_check.py — new file, 41 lines
@@ -0,0 +1,41 @@
import time
from types import SimpleNamespace
import queue
from pathlib import Path
import sys

sys.path.append(str(Path(__file__).resolve().parents[1]))

from password_manager.manager import PasswordManager
from constants import MIN_HEALTHY_RELAYS


def test_background_relay_check_runs_async(monkeypatch):
    pm = PasswordManager.__new__(PasswordManager)
    pm._current_notification = None
    pm._notification_expiry = 0.0
    called = {"args": None}
    pm.nostr_client = SimpleNamespace(
        check_relay_health=lambda min_relays: called.__setitem__("args", min_relays)
        or min_relays
    )

    pm.start_background_relay_check()
    time.sleep(0.05)

    assert called["args"] == MIN_HEALTHY_RELAYS


def test_background_relay_check_warns_when_unhealthy(monkeypatch):
    pm = PasswordManager.__new__(PasswordManager)
    pm._current_notification = None
    pm._notification_expiry = 0.0
    pm.notifications = queue.Queue()
    pm.nostr_client = SimpleNamespace(check_relay_health=lambda mr: mr - 1)

    pm.start_background_relay_check()
    time.sleep(0.05)

    note = pm.notifications.get_nowait()
    assert note.level == "WARNING"
    assert str(MIN_HEALTHY_RELAYS - 1) in note.message
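These tests imply that `start_background_relay_check` spawns a worker thread which calls `check_relay_health(MIN_HEALTHY_RELAYS)` and queues a WARNING when the healthy count falls below the threshold. A standalone sketch of that shape, inferred from the tests rather than taken from the real implementation (the free function, its signature, and the constant's value are all assumptions):

```python
import queue
import threading
from types import SimpleNamespace

MIN_HEALTHY_RELAYS = 2  # assumed value; the real constant lives in constants.py


def start_background_relay_check(client, notifications: queue.Queue) -> threading.Thread:
    """Check relay health off the main thread; queue a warning if unhealthy."""

    def worker():
        healthy = client.check_relay_health(MIN_HEALTHY_RELAYS)
        if healthy < MIN_HEALTHY_RELAYS:
            notifications.put(
                SimpleNamespace(
                    level="WARNING",
                    message=f"Only {healthy} healthy relays reachable",
                )
            )

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    return t


# Demo: an always-unhealthy client, joined so the result is deterministic
notes: queue.Queue = queue.Queue()
unhealthy_client = SimpleNamespace(check_relay_health=lambda mr: mr - 1)
start_background_relay_check(unhealthy_client, notes).join()
```

Returning the thread and joining it makes the sketch deterministic; the real tests instead sleep briefly, which is why the daemon flag matters there.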
src/tests/test_background_sync_always.py — new file, 70 lines
@@ -0,0 +1,70 @@
import sys
from types import SimpleNamespace
from pathlib import Path

sys.path.append(str(Path(__file__).resolve().parents[1]))

from password_manager.manager import PasswordManager
import password_manager.manager as manager_module


def test_switch_fingerprint_triggers_bg_sync(monkeypatch, tmp_path):
    pm = PasswordManager.__new__(PasswordManager)
    fingerprint = "fp1"
    fm = SimpleNamespace(
        list_fingerprints=lambda: [fingerprint],
        current_fingerprint=None,
        get_current_fingerprint_dir=lambda: tmp_path / fingerprint,
    )
    pm.fingerprint_manager = fm
    pm.current_fingerprint = None
    pm.encryption_manager = object()
    pm.config_manager = SimpleNamespace(get_quick_unlock=lambda: False)

    monkeypatch.setattr("builtins.input", lambda *_a, **_k: "1")
    monkeypatch.setattr(
        "password_manager.manager.prompt_existing_password", lambda *_a, **_k: "pw"
    )
    monkeypatch.setattr(
        PasswordManager, "setup_encryption_manager", lambda *a, **k: True
    )
    monkeypatch.setattr(PasswordManager, "initialize_bip85", lambda *a, **k: None)
    monkeypatch.setattr(PasswordManager, "initialize_managers", lambda *a, **k: None)
    monkeypatch.setattr(
        "password_manager.manager.NostrClient", lambda *a, **kw: object()
    )

    calls = {"count": 0}

    def fake_bg(self=None):
        calls["count"] += 1

    monkeypatch.setattr(PasswordManager, "start_background_sync", fake_bg)

    assert pm.handle_switch_fingerprint()
    assert calls["count"] == 1


def test_exit_managed_account_triggers_bg_sync(monkeypatch, tmp_path):
    pm = PasswordManager.__new__(PasswordManager)
    pm.profile_stack = [("rootfp", tmp_path, "seed")]
    pm.config_manager = SimpleNamespace(get_quick_unlock=lambda: False)

    monkeypatch.setattr(manager_module, "derive_index_key", lambda seed: b"k")
    monkeypatch.setattr(
        manager_module, "EncryptionManager", lambda *a, **kw: SimpleNamespace()
    )
    monkeypatch.setattr(manager_module, "Vault", lambda *a, **kw: SimpleNamespace())
    monkeypatch.setattr(PasswordManager, "initialize_bip85", lambda *a, **kw: None)
    monkeypatch.setattr(PasswordManager, "initialize_managers", lambda *a, **kw: None)
    monkeypatch.setattr(PasswordManager, "update_activity", lambda *a, **kw: None)

    calls = {"count": 0}

    def fake_bg(self=None):
        calls["count"] += 1

    monkeypatch.setattr(PasswordManager, "start_background_sync", fake_bg)

    pm.exit_managed_account()
    assert calls["count"] == 1
src/tests/test_backup_interval.py — new file, 34 lines
@@ -0,0 +1,34 @@
import time
from pathlib import Path
from tempfile import TemporaryDirectory

from helpers import create_vault, TEST_SEED, TEST_PASSWORD

from password_manager.backup import BackupManager
from password_manager.config_manager import ConfigManager


def test_backup_interval(monkeypatch):
    with TemporaryDirectory() as tmpdir:
        fp_dir = Path(tmpdir)
        vault, _ = create_vault(fp_dir, TEST_SEED, TEST_PASSWORD)
        cfg_mgr = ConfigManager(vault, fp_dir)
        cfg_mgr.set_backup_interval(10)
        backup_mgr = BackupManager(fp_dir, cfg_mgr)

        vault.save_index({"entries": {}})

        monkeypatch.setattr(time, "time", lambda: 1000)
        backup_mgr.create_backup()
        first = fp_dir / "backups" / "entries_db_backup_1000.json.enc"
        assert first.exists()

        monkeypatch.setattr(time, "time", lambda: 1005)
        backup_mgr.create_backup()
        second = fp_dir / "backups" / "entries_db_backup_1005.json.enc"
        assert not second.exists()

        monkeypatch.setattr(time, "time", lambda: 1012)
        backup_mgr.create_backup()
        third = fp_dir / "backups" / "entries_db_backup_1012.json.enc"
        assert third.exists()
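The test pins `time.time` to show the gating rule: a backup written at t=1000 suppresses another at t=1005 but not at t=1012, given a 10-second interval. Stripped of the file I/O, that rule reduces to a one-line predicate (a sketch; the function name is hypothetical):

```python
def should_backup(last_ts: float, now: float, interval: float) -> bool:
    """A new backup is written only once ``interval`` seconds have passed."""
    return now - last_ts >= interval


# Replay the test's timeline: first backup at 1000, interval of 10 seconds
last = 1000.0
written = []
for now in (1005.0, 1012.0):
    if should_backup(last, now, 10.0):
        written.append(now)
        last = now
```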
@@ -14,6 +14,12 @@ runner = CliRunner()
        ("secret_mode_enabled", "true", "set_secret_mode_enabled", True),
        ("clipboard_clear_delay", "10", "set_clipboard_clear_delay", 10),
        ("additional_backup_path", "", "set_additional_backup_path", None),
        ("backup_interval", "5", "set_backup_interval", 5.0),
        ("kdf_iterations", "123", "set_kdf_iterations", 123),
        ("kdf_mode", "argon2", "set_kdf_mode", "argon2"),
        ("quick_unlock", "true", "set_quick_unlock", True),
        ("nostr_max_retries", "3", "set_nostr_max_retries", 3),
        ("nostr_retry_delay", "1.5", "set_nostr_retry_delay", 1.5),
        (
            "relays",
            "wss://a.com, wss://b.com",
|
||||
|
107
src/tests/test_cli_doc_examples.py
Normal file
107
src/tests/test_cli_doc_examples.py
Normal file
@@ -0,0 +1,107 @@
|
||||
import re
|
||||
import shlex
|
||||
import sys
|
||||
from pathlib import Path
|
||||
from types import SimpleNamespace
|
||||
|
||||
sys.path.append(str(Path(__file__).resolve().parents[1] / "src"))
|
||||
|
||||
from typer.testing import CliRunner
|
||||
from seedpass import cli
|
||||
from password_manager.entry_types import EntryType
|
||||
|
||||
|
||||
class DummyPM:
|
||||
def __init__(self):
|
||||
self.entry_manager = SimpleNamespace(
|
||||
list_entries=lambda sort_by="index", filter_kind=None, include_archived=False: [
|
||||
(1, "Label", "user", "url", False)
|
||||
],
|
||||
search_entries=lambda q: [(1, "GitHub", "user", "", False)],
|
||||
retrieve_entry=lambda idx: {"type": EntryType.PASSWORD.value, "length": 8},
|
||||
get_totp_code=lambda idx, seed: "123456",
|
||||
add_entry=lambda label, length, username, url: 1,
|
||||
add_totp=lambda label, seed, index=None, secret=None, period=30, digits=6: "totp://",
|
||||
add_ssh_key=lambda label, seed, index=None, notes="": 2,
|
||||
add_pgp_key=lambda label, seed, index=None, key_type="ed25519", user_id="", notes="": 3,
|
||||
add_nostr_key=lambda label, index=None, notes="": 4,
|
||||
add_seed=lambda label, seed, index=None, words_num=24, notes="": 5,
|
||||
add_key_value=lambda label, value, notes="": 6,
|
||||
add_managed_account=lambda label, seed, index=None, notes="": 7,
|
||||
modify_entry=lambda *a, **kw: None,
|
||||
archive_entry=lambda i: None,
|
||||
restore_entry=lambda i: None,
|
||||
export_totp_entries=lambda seed: {"entries": []},
|
||||
)
|
||||
self.password_generator = SimpleNamespace(
|
||||
generate_password=lambda length, index=None: "pw"
|
||||
)
|
||||
self.parent_seed = "seed"
|
||||
self.handle_display_totp_codes = lambda: None
|
||||
self.handle_export_database = lambda path: None
|
||||
self.handle_import_database = lambda path: None
|
||||
self.change_password = lambda: None
|
||||
self.lock_vault = lambda: None
|
||||
self.get_profile_stats = lambda: {"n": 1}
|
||||
self.handle_backup_reveal_parent_seed = lambda path=None: None
|
||||
self.handle_verify_checksum = lambda: None
|
||||
self.handle_update_script_checksum = lambda: None
|
||||
self.add_new_fingerprint = lambda: None
|
||||
self.fingerprint_manager = SimpleNamespace(
|
||||
list_fingerprints=lambda: ["fp"], remove_fingerprint=lambda fp: None
|
||||
)
|
||||
self.nostr_client = SimpleNamespace(
|
||||
            key_manager=SimpleNamespace(get_npub=lambda: "npub")
        )
        self.sync_vault = lambda: "event"
        self.config_manager = SimpleNamespace(
            load_config=lambda require_pin=False: {"inactivity_timeout": 30},
            set_inactivity_timeout=lambda v: None,
            set_kdf_iterations=lambda v: None,
            set_backup_interval=lambda v: None,
            set_secret_mode_enabled=lambda v: None,
            set_clipboard_clear_delay=lambda v: None,
            set_additional_backup_path=lambda v: None,
            set_relays=lambda v, require_pin=False: None,
            set_nostr_max_retries=lambda v: None,
            set_nostr_retry_delay=lambda v: None,
            set_offline_mode=lambda v: None,
            get_secret_mode_enabled=lambda: True,
            get_clipboard_clear_delay=lambda: 30,
            get_offline_mode=lambda: False,
        )
        self.secret_mode_enabled = True
        self.clipboard_clear_delay = 30
        self.select_fingerprint = lambda fp: None


def load_doc_commands() -> list[str]:
    text = Path("docs/docs/content/01-getting-started/01-advanced_cli.md").read_text()
    cmds = set(re.findall(r"`seedpass ([^`<>]+)`", text))
    cmds = {c for c in cmds if "<" not in c and ">" not in c}
    cmds.discard("vault export")
    cmds.discard("vault import")
    return sorted(cmds)


runner = CliRunner()


def _setup(monkeypatch):
    monkeypatch.setattr(cli, "PasswordManager", lambda: DummyPM())
    monkeypatch.setattr(cli.uvicorn, "run", lambda *a, **kw: None)
    monkeypatch.setattr(cli.api_module, "start_server", lambda fp: "token")
    monkeypatch.setitem(
        sys.modules, "requests", SimpleNamespace(post=lambda *a, **kw: None)
    )
    monkeypatch.setattr(cli.typer, "prompt", lambda *a, **kw: "")


import pytest


@pytest.mark.parametrize("command", load_doc_commands())
def test_doc_cli_examples(monkeypatch, command):
    _setup(monkeypatch)
    result = runner.invoke(cli.app, shlex.split(command))
    assert result.exit_code == 0
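The `load_doc_commands` helper above scans the documentation for backtick-quoted commands and filters out placeholders. A standalone sketch of that extraction (the sample markdown here is invented for illustration):

```python
import re

# Invented sample of the documentation format the helper scans.
doc = """
Run `seedpass entry list` to list entries.
Use `seedpass vault export` or `seedpass get <label>` as needed.
"""

# Collect backtick-quoted commands, then drop placeholder forms and
# the explicitly excluded import/export verbs.
cmds = set(re.findall(r"`seedpass ([^`<>]+)`", doc))
cmds = {c for c in cmds if "<" not in c and ">" not in c}
cmds.discard("vault export")
cmds.discard("vault import")
print(sorted(cmds))  # ['entry list']
```

Note that `seedpass get <label>` never matches at all: the character class `[^`<>]+` cannot consume `<`, so the closing backtick is unreachable and the candidate is rejected by the regex itself.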
@@ -11,6 +11,22 @@ runner = CliRunner()
@pytest.mark.parametrize(
    "command,method,cli_args,expected_args,expected_kwargs,stdout",
    [
        (
            "add",
            "add_entry",
            [
                "Label",
                "--length",
                "16",
                "--username",
                "user",
                "--url",
                "https://example.com",
            ],
            ("Label", 16, "user", "https://example.com"),
            {},
            "1",
        ),
        (
            "add-totp",
            "add_totp",
@@ -99,10 +115,14 @@ def test_entry_add_commands(
        called["kwargs"] = kwargs
        return stdout

    def sync_vault():
        called["sync"] = True

    pm = SimpleNamespace(
        entry_manager=SimpleNamespace(**{method: func}),
        parent_seed="seed",
        select_fingerprint=lambda fp: None,
        sync_vault=sync_vault,
    )
    monkeypatch.setattr(cli, "PasswordManager", lambda: pm)
    result = runner.invoke(app, ["entry", command] + cli_args)
@@ -110,3 +130,4 @@ def test_entry_add_commands(
    assert stdout in result.stdout
    assert called["args"] == expected_args
    assert called["kwargs"] == expected_kwargs
    assert called.get("sync") is True
@@ -45,7 +45,7 @@ def test_cli_export_creates_file(monkeypatch, tmp_path):
    }
    vault.save_index(data)

-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -83,7 +83,7 @@ def test_cli_import_round_trip(monkeypatch, tmp_path):

    vault.save_index({"schema_version": 4, "entries": {}})

-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -46,6 +46,8 @@ def _make_pm(called, locked=None):
        update_activity=update,
        lock_vault=lock,
        unlock_vault=unlock,
        start_background_sync=lambda: None,
        start_background_relay_check=lambda: None,
    )
    return pm, locked
@@ -28,7 +28,7 @@ def make_pm(search_results, entry=None, totp_code="123456"):

def test_search_command(monkeypatch, capsys):
    pm = make_pm([(0, "Example", "user", "", False)])
-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -41,7 +41,7 @@ def test_search_command(monkeypatch, capsys):
def test_get_command(monkeypatch, capsys):
    entry = {"type": EntryType.PASSWORD.value, "length": 8}
    pm = make_pm([(0, "Example", "user", "", False)], entry=entry)
-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -55,7 +55,7 @@ def test_totp_command(monkeypatch, capsys):
    entry = {"type": EntryType.TOTP.value, "period": 30, "index": 0}
    pm = make_pm([(0, "Example", None, None, False)], entry=entry)
    called = {}
-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -72,7 +72,7 @@ def test_totp_command(monkeypatch, capsys):

def test_search_command_no_results(monkeypatch, capsys):
    pm = make_pm([])
-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -85,7 +85,7 @@ def test_search_command_no_results(monkeypatch, capsys):
def test_get_command_multiple_matches(monkeypatch, capsys):
    matches = [(0, "Example", "user", "", False), (1, "Ex2", "bob", "", False)]
    pm = make_pm(matches)
-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -98,7 +98,7 @@ def test_get_command_multiple_matches(monkeypatch, capsys):
def test_get_command_wrong_type(monkeypatch, capsys):
    entry = {"type": EntryType.TOTP.value}
    pm = make_pm([(0, "Example", "user", "", False)], entry=entry)
-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -111,7 +111,7 @@ def test_get_command_wrong_type(monkeypatch, capsys):
def test_totp_command_multiple_matches(monkeypatch, capsys):
    matches = [(0, "GH", None, None, False), (1, "Git", None, None, False)]
    pm = make_pm(matches)
-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -124,7 +124,7 @@ def test_totp_command_multiple_matches(monkeypatch, capsys):
def test_totp_command_wrong_type(monkeypatch, capsys):
    entry = {"type": EntryType.PASSWORD.value, "length": 8}
    pm = make_pm([(0, "Example", "user", "", False)], entry=entry)
-    monkeypatch.setattr(main, "PasswordManager", lambda: pm)
+    monkeypatch.setattr(main, "PasswordManager", lambda *a, **k: pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)
@@ -132,3 +132,21 @@ def test_totp_command_wrong_type(monkeypatch, capsys):
    assert rc == 1
    out = capsys.readouterr().out
    assert "Entry is not a TOTP entry" in out


def test_main_fingerprint_option(monkeypatch):
    """Ensure the argparse CLI forwards the fingerprint to PasswordManager."""
    called = {}

    def fake_pm(fingerprint=None):
        called["fp"] = fingerprint
        return make_pm([])

    monkeypatch.setattr(main, "PasswordManager", fake_pm)
    monkeypatch.setattr(main, "configure_logging", lambda: None)
    monkeypatch.setattr(main, "initialize_app", lambda: None)
    monkeypatch.setattr(main.signal, "signal", lambda *a, **k: None)

    rc = main.main(["--fingerprint", "abc", "search", "q"])
    assert rc == 0
    assert called.get("fp") == "abc"
40
src/tests/test_cli_toggle_offline_mode.py
Normal file
@@ -0,0 +1,40 @@
from types import SimpleNamespace
from typer.testing import CliRunner

from seedpass.cli import app
from seedpass import cli

runner = CliRunner()


def _make_pm(called, enabled=False):
    cfg = SimpleNamespace(
        get_offline_mode=lambda: enabled,
        set_offline_mode=lambda v: called.setdefault("enabled", v),
    )
    pm = SimpleNamespace(
        config_manager=cfg,
        offline_mode=enabled,
        select_fingerprint=lambda fp: None,
    )
    return pm


def test_toggle_offline_updates(monkeypatch):
    called = {}
    pm = _make_pm(called)
    monkeypatch.setattr(cli, "PasswordManager", lambda: pm)
    result = runner.invoke(app, ["config", "toggle-offline"], input="y\n")
    assert result.exit_code == 0
    assert called == {"enabled": True}
    assert "Offline mode enabled." in result.stdout


def test_toggle_offline_keep(monkeypatch):
    called = {}
    pm = _make_pm(called, enabled=True)
    monkeypatch.setattr(cli, "PasswordManager", lambda: pm)
    result = runner.invoke(app, ["config", "toggle-offline"], input="\n")
    assert result.exit_code == 0
    assert called == {"enabled": True}
    assert "Offline mode enabled." in result.stdout
@@ -23,6 +23,8 @@ def test_config_defaults_and_round_trip():
    assert cfg["pin_hash"] == ""
    assert cfg["password_hash"] == ""
    assert cfg["additional_backup_path"] == ""
    assert cfg["quick_unlock"] is False
    assert cfg["kdf_iterations"] == 50_000

    cfg_mgr.set_pin("1234")
    cfg_mgr.set_relays(["wss://example.com"], require_pin=False)
@@ -146,3 +148,51 @@ def test_secret_mode_round_trip():
    cfg2 = cfg_mgr.load_config(require_pin=False)
    assert cfg2["secret_mode_enabled"] is True
    assert cfg2["clipboard_clear_delay"] == 99


def test_kdf_iterations_round_trip():
    with TemporaryDirectory() as tmpdir:
        vault, _ = create_vault(Path(tmpdir), TEST_SEED, TEST_PASSWORD)
        cfg_mgr = ConfigManager(vault, Path(tmpdir))

        assert cfg_mgr.get_kdf_iterations() == 50_000

        cfg_mgr.set_kdf_iterations(200_000)
        assert cfg_mgr.get_kdf_iterations() == 200_000


def test_backup_interval_round_trip():
    with TemporaryDirectory() as tmpdir:
        vault, _ = create_vault(Path(tmpdir), TEST_SEED, TEST_PASSWORD)
        cfg_mgr = ConfigManager(vault, Path(tmpdir))

        assert cfg_mgr.get_backup_interval() == 0

        cfg_mgr.set_backup_interval(15)
        assert cfg_mgr.get_backup_interval() == 15


def test_quick_unlock_round_trip():
    with TemporaryDirectory() as tmpdir:
        vault, _ = create_vault(Path(tmpdir), TEST_SEED, TEST_PASSWORD)
        cfg_mgr = ConfigManager(vault, Path(tmpdir))

        assert cfg_mgr.get_quick_unlock() is False

        cfg_mgr.set_quick_unlock(True)
        assert cfg_mgr.get_quick_unlock() is True


def test_nostr_retry_settings_round_trip():
    with TemporaryDirectory() as tmpdir:
        vault, _ = create_vault(Path(tmpdir), TEST_SEED, TEST_PASSWORD)
        cfg_mgr = ConfigManager(vault, Path(tmpdir))

        cfg = cfg_mgr.load_config(require_pin=False)
        assert cfg["nostr_max_retries"] == 2
        assert cfg["nostr_retry_delay"] == 1.0

        cfg_mgr.set_nostr_max_retries(5)
        cfg_mgr.set_nostr_retry_delay(3.5)
        assert cfg_mgr.get_nostr_max_retries() == 5
        assert cfg_mgr.get_nostr_retry_delay() == 3.5
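The round-trip tests above all follow the same save-then-reload pattern: assert the default, set a new value through the manager, and read it back from disk. A minimal stand-in showing that shape (plain JSON on disk with no encryption; `MiniConfig` is a hypothetical illustration, not the real `ConfigManager`, which persists through the encrypted vault):

```python
import json
import tempfile
from pathlib import Path


class MiniConfig:
    """Hypothetical stand-in for ConfigManager's getter/setter round trip."""

    DEFAULTS = {"kdf_iterations": 50_000, "backup_interval": 0, "quick_unlock": False}

    def __init__(self, path: Path) -> None:
        self.path = path

    def load(self) -> dict:
        # Fall back to defaults until the first write creates the file.
        if self.path.exists():
            return json.loads(self.path.read_text())
        return dict(self.DEFAULTS)

    def set(self, key: str, value) -> None:
        cfg = self.load()
        cfg[key] = value
        self.path.write_text(json.dumps(cfg))


with tempfile.TemporaryDirectory() as tmp:
    cfg = MiniConfig(Path(tmp) / "config.json")
    assert cfg.load()["kdf_iterations"] == 50_000   # default before any write
    cfg.set("kdf_iterations", 200_000)
    assert cfg.load()["kdf_iterations"] == 200_000  # value persisted on disk
```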
@@ -3,7 +3,8 @@ import sys
from pathlib import Path
from tempfile import TemporaryDirectory

-from cryptography.fernet import Fernet
+import os
+import base64

sys.path.append(str(Path(__file__).resolve().parents[1]))

@@ -14,7 +15,7 @@ from utils.checksum import verify_and_update_checksum
def test_encryption_checksum_workflow():
    with TemporaryDirectory() as tmpdir:
        tmp_path = Path(tmpdir)
-        key = Fernet.generate_key()
+        key = base64.urlsafe_b64encode(os.urandom(32))
        manager = EncryptionManager(key, tmp_path)

        data = {"value": 1}
@@ -3,7 +3,8 @@ import sys
from pathlib import Path
from tempfile import TemporaryDirectory

-from cryptography.fernet import Fernet
+import os
+import base64

sys.path.append(str(Path(__file__).resolve().parents[1]))

@@ -12,7 +13,7 @@ from password_manager.encryption import EncryptionManager

def test_json_save_and_load_round_trip():
    with TemporaryDirectory() as tmpdir:
-        key = Fernet.generate_key()
+        key = base64.urlsafe_b64encode(os.urandom(32))
        manager = EncryptionManager(key, Path(tmpdir))

        data = {"hello": "world", "nums": [1, 2, 3]}
@@ -27,7 +28,7 @@ def test_json_save_and_load_round_trip():

def test_encrypt_and_decrypt_file_binary_round_trip():
    with TemporaryDirectory() as tmpdir:
-        key = Fernet.generate_key()
+        key = base64.urlsafe_b64encode(os.urandom(32))
        manager = EncryptionManager(key, Path(tmpdir))

        payload = b"binary secret"
@@ -103,7 +103,7 @@ def test_legacy_entry_defaults_to_password():
    data["entries"][str(index)].pop("type", None)
    enc_mgr.save_json_data(data, entry_mgr.index_file)

-    loaded = entry_mgr._load_index()
+    loaded = entry_mgr._load_index(force_reload=True)
    assert loaded["entries"][str(index)]["type"] == "password"
@@ -3,7 +3,8 @@ import sys
from pathlib import Path
from tempfile import TemporaryDirectory

-from cryptography.fernet import Fernet
+import os
+import base64

sys.path.append(str(Path(__file__).resolve().parents[1]))

@@ -24,7 +25,7 @@ def test_generate_fingerprint_deterministic():

def test_encryption_round_trip():
    with TemporaryDirectory() as tmpdir:
-        key = Fernet.generate_key()
+        key = base64.urlsafe_b64encode(os.urandom(32))
        manager = EncryptionManager(key, Path(tmpdir))
        data = b"secret data"
        rel_path = Path("testfile.enc")
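The repeated swap of `Fernet.generate_key()` for `base64.urlsafe_b64encode(os.urandom(32))` in the hunks above works because both expressions produce the same key format: 32 random bytes encoded as 44 bytes of urlsafe base64. A stdlib-only check of that shape:

```python
import base64
import os

# Same shape Fernet.generate_key() returns: 44 urlsafe-base64 bytes
# that decode back to exactly 32 bytes of key material.
key = base64.urlsafe_b64encode(os.urandom(32))

assert isinstance(key, bytes)
assert len(key) == 44
assert len(base64.urlsafe_b64decode(key)) == 32
print("key format ok")
```

Because the format matches, `Fernet(key)` accepts such a key directly, so the tests no longer need `cryptography` just to mint test keys.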
65
src/tests/test_full_sync_roundtrip.py
Normal file
@@ -0,0 +1,65 @@
import asyncio
from pathlib import Path
from tempfile import TemporaryDirectory

from helpers import create_vault, dummy_nostr_client

from password_manager.entry_management import EntryManager
from password_manager.backup import BackupManager
from password_manager.config_manager import ConfigManager
from password_manager.manager import PasswordManager, EncryptionMode


def _init_pm(dir_path: Path, client) -> PasswordManager:
    vault, enc_mgr = create_vault(dir_path)
    cfg_mgr = ConfigManager(vault, dir_path)
    backup_mgr = BackupManager(dir_path, cfg_mgr)
    entry_mgr = EntryManager(vault, backup_mgr)

    pm = PasswordManager.__new__(PasswordManager)
    pm.encryption_mode = EncryptionMode.SEED_ONLY
    pm.encryption_manager = enc_mgr
    pm.vault = vault
    pm.entry_manager = entry_mgr
    pm.backup_manager = backup_mgr
    pm.config_manager = cfg_mgr
    pm.nostr_client = client
    pm.fingerprint_dir = dir_path
    pm.is_dirty = False
    return pm


def test_full_sync_roundtrip(dummy_nostr_client):
    client, relay = dummy_nostr_client
    with TemporaryDirectory() as tmpdir:
        base = Path(tmpdir)
        dir_a = base / "A"
        dir_b = base / "B"
        dir_a.mkdir()
        dir_b.mkdir()

        pm_a = _init_pm(dir_a, client)
        pm_b = _init_pm(dir_b, client)

        # Manager A publishes initial snapshot
        pm_a.entry_manager.add_entry("site1", 12)
        pm_a.sync_vault()
        manifest_id = relay.manifests[-1].id

        # Manager B retrieves snapshot
        pm_b.sync_index_from_nostr_if_missing()
        entries = pm_b.entry_manager.list_entries()
        assert [e[1] for e in entries] == ["site1"]

        # Manager A publishes delta with second entry
        pm_a.entry_manager.add_entry("site2", 12)
        delta_bytes = pm_a.vault.get_encrypted_index() or b""
        asyncio.run(client.publish_delta(delta_bytes, manifest_id))
        delta_ts = relay.deltas[-1].created_at
        assert relay.manifests[-1].delta_since == delta_ts

        # Manager B fetches delta and updates
        pm_b.sync_index_from_nostr()
        pm_b.entry_manager.clear_cache()
        labels = [e[1] for e in pm_b.entry_manager.list_entries()]
        assert sorted(labels) == ["site1", "site2"]
65
src/tests/test_full_sync_roundtrip_new.py
Normal file
@@ -0,0 +1,65 @@
import asyncio
from pathlib import Path
from tempfile import TemporaryDirectory

from helpers import create_vault, dummy_nostr_client

from password_manager.entry_management import EntryManager
from password_manager.backup import BackupManager
from password_manager.config_manager import ConfigManager
from password_manager.manager import PasswordManager, EncryptionMode


def _init_pm(dir_path: Path, client) -> PasswordManager:
    vault, enc_mgr = create_vault(dir_path)
    cfg_mgr = ConfigManager(vault, dir_path)
    backup_mgr = BackupManager(dir_path, cfg_mgr)
    entry_mgr = EntryManager(vault, backup_mgr)

    pm = PasswordManager.__new__(PasswordManager)
    pm.encryption_mode = EncryptionMode.SEED_ONLY
    pm.encryption_manager = enc_mgr
    pm.vault = vault
    pm.entry_manager = entry_mgr
    pm.backup_manager = backup_mgr
    pm.config_manager = cfg_mgr
    pm.nostr_client = client
    pm.fingerprint_dir = dir_path
    pm.is_dirty = False
    return pm


def test_full_sync_roundtrip(dummy_nostr_client):
    client, relay = dummy_nostr_client
    with TemporaryDirectory() as tmpdir:
        base = Path(tmpdir)
        dir_a = base / "A"
        dir_b = base / "B"
        dir_a.mkdir()
        dir_b.mkdir()

        pm_a = _init_pm(dir_a, client)
        pm_b = _init_pm(dir_b, client)

        # Manager A publishes initial snapshot
        pm_a.entry_manager.add_entry("site1", 12)
        pm_a.sync_vault()
        manifest_id = relay.manifests[-1].id

        # Manager B retrieves snapshot
        pm_b.sync_index_from_nostr_if_missing()
        entries = pm_b.entry_manager.list_entries()
        assert [e[1] for e in entries] == ["site1"]

        # Manager A publishes delta with second entry
        pm_a.entry_manager.add_entry("site2", 12)
        delta_bytes = pm_a.vault.get_encrypted_index() or b""
        asyncio.run(client.publish_delta(delta_bytes, manifest_id))
        delta_ts = relay.deltas[-1].created_at
        assert relay.manifests[-1].delta_since == delta_ts

        # Manager B fetches delta and updates
        pm_b.sync_index_from_nostr()
        pm_b.entry_manager.clear_cache()
        labels = [e[1] for e in pm_b.entry_manager.list_entries()]
        assert sorted(labels) == ["site1", "site2"]
63
src/tests/test_fuzz_key_derivation.py
Normal file
@@ -0,0 +1,63 @@
import os
from pathlib import Path

from hypothesis import given, strategies as st, settings, HealthCheck
from mnemonic import Mnemonic

from utils.key_derivation import (
    derive_key_from_password,
    derive_key_from_password_argon2,
    derive_index_key,
)
from password_manager.encryption import EncryptionManager


cfg_values = st.one_of(
    st.integers(min_value=0, max_value=100),
    st.text(min_size=0, max_size=20),
    st.booleans(),
)


@given(
    password=st.text(min_size=8, max_size=32),
    seed_bytes=st.binary(min_size=16, max_size=16),
    config=st.dictionaries(st.text(min_size=1, max_size=10), cfg_values, max_size=5),
    mode=st.sampled_from(["pbkdf2", "argon2"]),
)
@settings(
    deadline=None,
    max_examples=20,
    suppress_health_check=[HealthCheck.function_scoped_fixture],
)
def test_fuzz_key_round_trip(password, seed_bytes, config, mode, tmp_path: Path):
    """Ensure EncryptionManager round-trips arbitrary data."""
    seed_phrase = Mnemonic("english").to_mnemonic(seed_bytes)
    if mode == "argon2":
        key = derive_key_from_password_argon2(
            password, time_cost=1, memory_cost=8, parallelism=1
        )
    else:
        key = derive_key_from_password(password, iterations=1)

    enc_mgr = EncryptionManager(key, tmp_path)

    # Parent seed round trip
    enc_mgr.encrypt_parent_seed(seed_phrase)
    assert enc_mgr.decrypt_parent_seed() == seed_phrase

    # JSON data round trip
    enc_mgr.save_json_data(config, Path("config.enc"))
    loaded = enc_mgr.load_json_data(Path("config.enc"))
    assert loaded == config

    # Binary data round trip
    blob = os.urandom(32)
    enc_mgr.encrypt_and_save_file(blob, Path("blob.enc"))
    assert enc_mgr.decrypt_file(Path("blob.enc")) == blob

    # Index key derived from seed also decrypts
    index_key = derive_index_key(seed_phrase)
    idx_mgr = EncryptionManager(index_key, tmp_path)
    idx_mgr.save_json_data(config)
    assert idx_mgr.load_json_data() == config
@@ -24,7 +24,8 @@ def test_initialize_profile_creates_directories(monkeypatch):
    assert spec.loader is not None
    spec.loader.exec_module(gtp)

-    seed, mgr, dir_path, fingerprint = gtp.initialize_profile("test")
+    seed, mgr, dir_path, fingerprint, cfg_mgr = gtp.initialize_profile("test")
    assert cfg_mgr is not None

    assert constants.APP_DIR.exists()
    assert (constants.APP_DIR / "test_seed.txt").exists()
72
src/tests/test_generate_test_profile_sync.py
Normal file
@@ -0,0 +1,72 @@
import importlib
import importlib.util
from pathlib import Path
from tempfile import TemporaryDirectory
import asyncio
import gzip

from helpers import dummy_nostr_client


def load_script():
    script_path = (
        Path(__file__).resolve().parents[2] / "scripts" / "generate_test_profile.py"
    )
    spec = importlib.util.spec_from_file_location("generate_test_profile", script_path)
    module = importlib.util.module_from_spec(spec)
    assert spec.loader is not None
    spec.loader.exec_module(module)
    return module


def test_generate_test_profile_sync(monkeypatch, dummy_nostr_client):
    client, _relay = dummy_nostr_client
    with TemporaryDirectory() as tmpdir:
        tmp_path = Path(tmpdir)
        monkeypatch.setattr(Path, "home", lambda: tmp_path)

        constants = importlib.import_module("constants")
        importlib.reload(constants)
        gtp = load_script()

        monkeypatch.setattr(gtp, "NostrClient", lambda *a, **k: client)

        seed, entry_mgr, dir_path, fingerprint, cfg_mgr = gtp.initialize_profile("test")
        gtp.populate(entry_mgr, seed, 5)

        encrypted = entry_mgr.vault.get_encrypted_index()
        nc = gtp.NostrClient(
            entry_mgr.vault.encryption_manager,
            fingerprint,
            parent_seed=seed,
            config_manager=cfg_mgr,
        )
        asyncio.run(nc.publish_snapshot(encrypted))

        from nostr.client import NostrClient as RealClient

        class DummyKeys:
            def private_key_hex(self):
                return "1" * 64

            def public_key_hex(self):
                return "2" * 64

        class DummyKeyManager:
            def __init__(self, *a, **k):
                self.keys = DummyKeys()

        monkeypatch.setattr("nostr.client.KeyManager", DummyKeyManager)
        client2 = RealClient(
            entry_mgr.vault.encryption_manager,
            fingerprint,
            parent_seed=seed,
            config_manager=cfg_mgr,
        )
        result = asyncio.run(client2.fetch_latest_snapshot())

        assert result is not None
        _manifest, chunks = result
        assert _manifest.delta_since is None
        retrieved = gzip.decompress(b"".join(chunks))
        assert retrieved == encrypted
@@ -34,6 +34,8 @@ def test_inactivity_triggers_lock(monkeypatch):
        update_activity=update_activity,
        lock_vault=lock_vault,
        unlock_vault=unlock_vault,
        start_background_sync=lambda: None,
        start_background_relay_check=lambda: None,
    )

    monkeypatch.setattr(main, "timed_input", lambda *_: "")
@@ -70,6 +72,8 @@ def test_input_timeout_triggers_lock(monkeypatch):
        update_activity=update_activity,
        lock_vault=lock_vault,
        unlock_vault=unlock_vault,
        start_background_sync=lambda: None,
        start_background_relay_check=lambda: None,
    )

    responses = iter([TimeoutError(), ""])
33
src/tests/test_index_cache.py
Normal file
@@ -0,0 +1,33 @@
from pathlib import Path
from tempfile import TemporaryDirectory
from unittest.mock import patch

from helpers import create_vault, TEST_SEED, TEST_PASSWORD
from password_manager.entry_management import EntryManager
from password_manager.backup import BackupManager
from password_manager.config_manager import ConfigManager


def test_index_caching():
    with TemporaryDirectory() as tmpdir:
        vault, _ = create_vault(Path(tmpdir), TEST_SEED, TEST_PASSWORD)
        cfg_mgr = ConfigManager(vault, Path(tmpdir))
        backup_mgr = BackupManager(Path(tmpdir), cfg_mgr)
        entry_mgr = EntryManager(vault, backup_mgr)

        # create initial entry so the index file exists
        entry_mgr.add_entry("init", 8)
        entry_mgr.clear_cache()

        with patch.object(vault, "load_index", wraps=vault.load_index) as mocked:
            idx = entry_mgr.add_entry("example.com", 8)
            assert mocked.call_count == 1

            entry = entry_mgr.retrieve_entry(idx)
            assert entry["label"] == "example.com"
            assert mocked.call_count == 1

            entry_mgr.clear_cache()
            entry = entry_mgr.retrieve_entry(idx)
            assert entry["label"] == "example.com"
            assert mocked.call_count == 2
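The caching test above asserts that `load_index` runs once until `clear_cache` invalidates the cache. A minimal sketch of that load-once pattern (names are illustrative, not the real `EntryManager` API):

```python
class CachedIndex:
    """Illustrative load-once cache with an explicit invalidation hook."""

    def __init__(self, loader):
        self._loader = loader
        self._cache = None
        self.load_calls = 0

    def get(self):
        # Only hit the loader on a cold cache; subsequent reads are free.
        if self._cache is None:
            self._cache = self._loader()
            self.load_calls += 1
        return self._cache

    def clear_cache(self):
        self._cache = None


idx = CachedIndex(lambda: {"entries": {}})
idx.get()
idx.get()
assert idx.load_calls == 1   # second read served from cache
idx.clear_cache()
idx.get()
assert idx.load_calls == 2   # cleared cache forces a reload
```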
75
src/tests/test_kdf_modes.py
Normal file
@@ -0,0 +1,75 @@
import bcrypt
from pathlib import Path
from tempfile import TemporaryDirectory
from types import SimpleNamespace

from utils.key_derivation import (
    derive_key_from_password,
    derive_key_from_password_argon2,
    derive_index_key,
)
from password_manager.encryption import EncryptionManager
from password_manager.vault import Vault
from password_manager.config_manager import ConfigManager
from password_manager.manager import PasswordManager, EncryptionMode

TEST_SEED = "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about"
TEST_PASSWORD = "pw"


def _setup_profile(tmp: Path, mode: str):
    argon_kwargs = dict(time_cost=1, memory_cost=8, parallelism=1)
    if mode == "argon2":
        seed_key = derive_key_from_password_argon2(TEST_PASSWORD, **argon_kwargs)
    else:
        seed_key = derive_key_from_password(TEST_PASSWORD, iterations=1)
    EncryptionManager(seed_key, tmp).encrypt_parent_seed(TEST_SEED)

    index_key = derive_index_key(TEST_SEED)
    enc_mgr = EncryptionManager(index_key, tmp)
    vault = Vault(enc_mgr, tmp)
    cfg_mgr = ConfigManager(vault, tmp)
    cfg = cfg_mgr.load_config(require_pin=False)
    cfg["password_hash"] = bcrypt.hashpw(
        TEST_PASSWORD.encode(), bcrypt.gensalt()
    ).decode()
    cfg["kdf_mode"] = mode
    cfg["kdf_iterations"] = 1
    cfg_mgr.save_config(cfg)
    return cfg_mgr


def _make_pm(tmp: Path, cfg: ConfigManager):
    pm = PasswordManager.__new__(PasswordManager)
    pm.encryption_mode = EncryptionMode.SEED_ONLY
    pm.config_manager = cfg
    pm.fingerprint_dir = tmp
    pm.current_fingerprint = "fp"
    pm.verify_password = lambda pw: True
    return pm


def test_setup_encryption_manager_kdf_modes(monkeypatch):
    with TemporaryDirectory() as td:
        tmp = Path(td)
        argon_kwargs = dict(time_cost=1, memory_cost=8, parallelism=1)
        for mode in ("pbkdf2", "argon2"):
            path = tmp / mode
            path.mkdir()
            cfg = _setup_profile(path, mode)
            pm = _make_pm(path, cfg)
            monkeypatch.setattr(
                "password_manager.manager.prompt_existing_password",
                lambda *_: TEST_PASSWORD,
            )
            if mode == "argon2":
                monkeypatch.setattr(
                    "password_manager.manager.derive_key_from_password_argon2",
                    lambda pw: derive_key_from_password_argon2(pw, **argon_kwargs),
                )
            monkeypatch.setattr(PasswordManager, "initialize_bip85", lambda self: None)
            monkeypatch.setattr(
                PasswordManager, "initialize_managers", lambda self: None
            )
            assert pm.setup_encryption_manager(path, exit_on_fail=False)
            assert pm.parent_seed == TEST_SEED
@@ -2,6 +2,7 @@ import logging
import pytest
from utils.key_derivation import (
    derive_key_from_password,
    derive_key_from_password_argon2,
    derive_index_key_seed_only,
    derive_index_key,
)
@@ -33,3 +34,11 @@ def test_seed_only_key_deterministic():
def test_derive_index_key_seed_only():
    seed = "abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon abandon about"
    assert derive_index_key(seed) == derive_index_key_seed_only(seed)


def test_argon2_key_deterministic():
    pw = "correct horse battery staple"
    k1 = derive_key_from_password_argon2(pw, time_cost=1, memory_cost=8, parallelism=1)
    k2 = derive_key_from_password_argon2(pw, time_cost=1, memory_cost=8, parallelism=1)
    assert k1 == k2
    assert len(k1) == 44
60
src/tests/test_last_used_fingerprint.py
Normal file
@@ -0,0 +1,60 @@
import importlib
from pathlib import Path
from tempfile import TemporaryDirectory

import constants
import password_manager.manager as manager_module
from utils.fingerprint_manager import FingerprintManager
from password_manager.manager import EncryptionMode

from helpers import TEST_SEED


def test_last_used_fingerprint(monkeypatch):
    with TemporaryDirectory() as tmpdir:
        tmp_path = Path(tmpdir)
        monkeypatch.setattr(Path, "home", lambda: tmp_path)

        importlib.reload(constants)
        importlib.reload(manager_module)

        fm = FingerprintManager(constants.APP_DIR)
        fp = fm.add_fingerprint(TEST_SEED)
        assert fm.current_fingerprint == fp

        # Ensure persistence on reload
        fm2 = FingerprintManager(constants.APP_DIR)
        assert fm2.current_fingerprint == fp

        def init_fm(self):
            self.fingerprint_manager = fm2

        monkeypatch.setattr(
            manager_module.PasswordManager, "initialize_fingerprint_manager", init_fm
        )
        monkeypatch.setattr(
            manager_module.PasswordManager,
            "setup_encryption_manager",
            lambda *a, **k: True,
        )
        monkeypatch.setattr(
            manager_module.PasswordManager, "initialize_bip85", lambda self: None
        )
        monkeypatch.setattr(
            manager_module.PasswordManager, "initialize_managers", lambda self: None
        )
        monkeypatch.setattr(
            manager_module.PasswordManager,
            "sync_index_from_nostr_if_missing",
            lambda self: None,
        )
        monkeypatch.setattr(
            manager_module.PasswordManager, "verify_password", lambda *a, **k: True
        )
        monkeypatch.setattr(
            "builtins.input",
            lambda *a, **k: (_ for _ in ()).throw(AssertionError("prompted")),
        )

        pm = manager_module.PasswordManager()
        assert pm.current_fingerprint == fp
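The test above depends on the fingerprint manager persisting the last-used fingerprint to disk, so that a freshly constructed instance restores it without prompting the user. A minimal sketch of that persistence pattern (`FingerprintStore` and its file name are hypothetical, not the actual `FingerprintManager` API):

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory


class FingerprintStore:
    """Remembers the last-used fingerprint in a small JSON file."""

    def __init__(self, app_dir):
        self.path = Path(app_dir) / "fingerprints.json"
        data = json.loads(self.path.read_text()) if self.path.exists() else {}
        self.current_fingerprint = data.get("last_used")

    def add_fingerprint(self, fingerprint: str) -> str:
        self.current_fingerprint = fingerprint
        self.path.write_text(json.dumps({"last_used": fingerprint}))
        return fingerprint


with TemporaryDirectory() as tmp:
    fp = FingerprintStore(tmp).add_fingerprint("a1b2c3d4")
    # A second instance pointed at the same directory reads the value back.
    assert FingerprintStore(tmp).current_fingerprint == fp
```

This mirrors the test's two-instance check: one object writes, a new object reads, and no prompt is ever needed.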
src/tests/test_legacy_migration.py (new file, 42 lines)
@@ -0,0 +1,42 @@
import json
import hashlib
from pathlib import Path

from helpers import create_vault, TEST_SEED, TEST_PASSWORD
from utils.key_derivation import derive_index_key
from cryptography.fernet import Fernet


def test_legacy_index_migrates(tmp_path: Path):
    vault, _ = create_vault(tmp_path, TEST_SEED, TEST_PASSWORD)

    key = derive_index_key(TEST_SEED)
    data = {
        "schema_version": 4,
        "entries": {
            "0": {
                "label": "a",
                "length": 8,
                "type": "password",
                "kind": "password",
                "notes": "",
                "custom_fields": [],
                "origin": "",
                "tags": [],
            }
        },
    }
    enc = Fernet(key).encrypt(json.dumps(data).encode())
    legacy_file = tmp_path / "seedpass_passwords_db.json.enc"
    legacy_file.write_bytes(enc)
    (tmp_path / "seedpass_passwords_db_checksum.txt").write_text(
        hashlib.sha256(enc).hexdigest()
    )

    loaded = vault.load_index()
    assert loaded == data

    new_file = tmp_path / "seedpass_entries_db.json.enc"
    assert new_file.exists()
    assert not legacy_file.exists()
    assert not (tmp_path / "seedpass_passwords_db_checksum.txt").exists()
@@ -52,7 +52,9 @@ def test_handle_add_totp(monkeypatch, capsys):
        ]
    )
    monkeypatch.setattr("builtins.input", lambda *args, **kwargs: next(inputs))
    monkeypatch.setattr(pm, "sync_vault", lambda: None)
    monkeypatch.setattr(
        pm, "start_background_vault_sync", lambda *a, **k: pm.sync_vault(*a, **k)
    )

    pm.handle_add_totp()
    out = capsys.readouterr().out
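The `iter([...])` / `next(inputs)` idiom above scripts every `input()` prompt the handler will issue, in order. The same trick works without pytest; a tiny stand-in for `monkeypatch` (hypothetical helper, values illustrative):

```python
import builtins


def run_with_inputs(func, answers):
    """Call func while input() returns successive items from answers."""
    it = iter(answers)
    original = builtins.input
    builtins.input = lambda *a, **k: next(it)
    try:
        return func()
    finally:
        builtins.input = original  # always restore the real input()


got = run_with_inputs(
    lambda: [input("label: "), input("secret: ")],
    ["Example", "JBSWY3DPEHPK3PXP"],
)
assert got == ["Example", "JBSWY3DPEHPK3PXP"]
```

If the code under test prompts more times than the list has answers, `next()` raises `StopIteration`, which surfaces the missing-answer bug immediately instead of hanging on real input.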
@@ -4,6 +4,7 @@ from pathlib import Path
sys.path.append(str(Path(__file__).resolve().parents[1]))

from password_manager.manager import PasswordManager, EncryptionMode
import queue


class FakeBackupManager:
@@ -20,6 +21,7 @@ class FakeBackupManager:
def _make_pm():
    pm = PasswordManager.__new__(PasswordManager)
    pm.encryption_mode = EncryptionMode.SEED_ONLY
    pm.notifications = queue.Queue()
    return pm


@@ -56,8 +58,9 @@ def test_handle_verify_checksum_missing(monkeypatch, tmp_path, capsys):

    monkeypatch.setattr("password_manager.manager.verify_checksum", raise_missing)
    pm.handle_verify_checksum()
    out = capsys.readouterr().out.lower()
    assert "generate script checksum" in out
    note = pm.notifications.get_nowait()
    assert note.level == "WARNING"
    assert "generate script checksum" in note.message.lower()


def test_backup_and_restore_database(monkeypatch, capsys):
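The `handle_verify_checksum` test expects a missing checksum file to produce a WARNING pointing the user at the "generate script checksum" action, rather than a hard failure. A hedged sketch of that behaviour (hypothetical helper; not the project's actual `verify_checksum` signature):

```python
import hashlib
from pathlib import Path
from tempfile import TemporaryDirectory


def verify_script_checksum(script: Path, checksum_file: Path) -> bool:
    if not checksum_file.exists():
        # A missing digest is recoverable: tell the user how to create one.
        print("Warning: no checksum found - run 'generate script checksum' first")
        return False
    digest = hashlib.sha256(script.read_bytes()).hexdigest()
    return digest == checksum_file.read_text().strip()


with TemporaryDirectory() as tmp:
    script = Path(tmp) / "manager.py"
    script.write_text("print('hi')")
    sums = Path(tmp) / "checksum.txt"
    assert verify_script_checksum(script, sums) is False  # no checksum yet
    sums.write_text(hashlib.sha256(script.read_bytes()).hexdigest())
    ok = verify_script_checksum(script, sums)
    assert ok is True
```

Returning `False` with guidance, instead of raising, matches the test's expectation that the manager degrades gracefully when the digest file is absent.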
src/tests/test_manager_current_notification.py (new file, 45 lines)
@@ -0,0 +1,45 @@
import queue

from pathlib import Path
import sys

sys.path.append(str(Path(__file__).resolve().parents[1]))

from password_manager.manager import PasswordManager, Notification
from constants import NOTIFICATION_DURATION


def _make_pm():
    pm = PasswordManager.__new__(PasswordManager)
    pm.notifications = queue.Queue()
    pm._current_notification = None
    pm._notification_expiry = 0.0
    return pm


def test_notify_sets_current(monkeypatch):
    pm = _make_pm()
    current = {"val": 100.0}
    monkeypatch.setattr("password_manager.manager.time.time", lambda: current["val"])
    pm.notify("hello")
    note = pm._current_notification
    assert hasattr(note, "message")
    assert note.message == "hello"
    assert pm._notification_expiry == 100.0 + NOTIFICATION_DURATION
    assert pm.notifications.qsize() == 1


def test_get_current_notification_ttl(monkeypatch):
    pm = _make_pm()
    now = {"val": 0.0}
    monkeypatch.setattr("password_manager.manager.time.time", lambda: now["val"])
    pm.notify("note1")

    assert pm.get_current_notification().message == "note1"
    assert pm.notifications.qsize() == 1

    now["val"] += NOTIFICATION_DURATION - 1
    assert pm.get_current_notification().message == "note1"

    now["val"] += 2
    assert pm.get_current_notification() is None
Some files were not shown because too many files have changed in this diff.