Compare commits


22 Commits

Author SHA1 Message Date
Helmut K. C. Tessarek 2697fe8aba feat: add feature flag export-attachments (#5784) 2025-05-01 17:40:26 +02:00
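For operators, enabling the new flag is just another comma-separated entry in the existing setting; an illustrative value (flag names taken from the .env.template hunk further down):

EXPERIMENTAL_CLIENT_FEATURE_FLAGS=export-attachments,ssh-agent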
Stefan Melmuk 674e444d67 respond with cipher json when deleting attachments (#5823) 2025-05-01 17:28:23 +02:00
Timshel 0d16da440d On member invite and edit, access_all is not sent anymore (#5673)
* On member invite and edit, access_all is not sent anymore

* Use MembershipType ordering for access_all check

Fixes #5711
2025-04-16 17:52:26 +02:00
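A minimal sketch of the idea behind the ordering check (illustrative only; vaultwarden defines its own MembershipType with a custom order): when Owner > Admin > Manager > User, a single comparison expresses "Admin and above always imply access_all", which is how the send_invite/edit_member hunks below compute the flag.

// Sketch only, not the real vaultwarden types.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
enum MembershipType {
    User,    // lowest
    Manager,
    Admin,
    Owner,   // highest
}

fn access_all_implied(new_type: MembershipType) -> bool {
    // Admin and Owner always get access_all
    new_type >= MembershipType::Admin
}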
Mathijs van Veluw 66cf179bca Updates and general fixes (#5762)
Updated all the crates to the latest version.
We can unpin mimalloc, since the musl issues have been fixed
Also fixed a RUSTSEC advisory for tokio: https://osv.dev/vulnerability/RUSTSEC-2025-0023

Fixed some clippy lints reported by nightly.

Ensure lints are also run on the macro crate.
This resulted in some lints being triggered, which I fixed.

Updated some GHA uses.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-09 21:21:10 +02:00
Mathijs van Veluw 025bb90f8f Fix debian docker building (#5752)
In previous attempts to get mysqlclient-sys to build and work, I added some extra build variables.
These are not needed if pkg-config is configured correctly.
The same goes for OpenSSL, by the way.

This PR configures pkg-config the right way and allows the crates to build using the right lib paths automatically.
Because of this change, the lib/include paths are no longer needed for most architectures; only i386 still requires them.

Also updated crates again.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-05 17:58:32 +02:00
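The resulting pattern (visible in the Dockerfile hunks further down) is to export a target-prefixed pkg-config plus its search path instead of hard-coding OpenSSL lib/include dirs; a sketch with a placeholder triple:

export PKG_CONFIG=/usr/bin/<triple>-pkg-config
export PKG_CONFIG_ALLOW_CROSS=1
export PKG_CONFIG_PATH=/usr/lib/<triple>/pkgconfig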
Mathijs van Veluw d5039d9c17 Add Docker Templates pre-commit check (#5749)
Added the same Docker template check that already runs via GitHub Actions to the pre-commit checks.
This should catch these mistakes before they are committed and pushed.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-04 19:02:19 +02:00
Daniel García e7c796a660 Verify templates in CI (#5748)
* Verify templates in CI

* No need to install packages

* Remove unnecessary fetch depth
2025-04-04 18:14:19 +02:00
Daniel bbbd2f6d15 Update Rust to 1.86.0 (#5744)
- also raise MSRV to 1.84.0

- fix `Dockerfile` template
- remove the no longer needed `-vvv` argument for `cargo build`
2025-04-04 18:04:36 +02:00
Mathijs van Veluw a2d7895586 Really fix building (#5745)
Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-04 16:53:09 +02:00
Mathijs van Veluw 8a0cb1137e Fix mysqlclient-sys building (#5743)
Because of some issues with mysqlclient-sys we need to use buildtime bindgen.
This also required some extra environment variables to point bindgen to the correct files and version.

Also update some other crates.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-04 16:37:57 +02:00
Timshel f960bf59bb Fix invited user registration without SMTP (#5712) 2025-04-04 13:54:28 +02:00
Mathijs van Veluw 3a1f1bae00 Update deps and web-vault (#5742)
- Updated crates
  Pinned mimalloc, since it has issues with musl
- Updated web-vault to v2025.3.1
- Updated bootstrap

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-04-04 12:18:09 +02:00
Mathijs van Veluw 8dfe805954 Update Rust, Crates and other deps (#5709)
- Updated Rust to v1.85.1
- Updated crates and fixed breaking changes
- Updated datatables js
- Updated GitHub Actions

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-03-19 17:39:53 +01:00
Mathijs van Veluw 07b869b3ef Some fixes for the new web-vault and updates (#5703)
- Added a new org policy
- Some new lint fixes
- Crate updates
  Switched to `pastey`, since `paste` is unmaintained.

Signed-off-by: BlackDex <black.dex@gmail.com>
2025-03-17 23:02:02 +01:00
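pastey keeps the same macro surface as paste, so the swap is mechanical; a minimal sketch assuming only the pastey crate as a dependency:

use pastey::paste;

paste! {
    // [<...>] concatenates identifiers, producing `fn get_value()`.
    fn [<get_ value>]() -> i32 {
        42
    }
}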
Daniel García 2a18665288 Implement new registration flow with email verification (#5215)
* Implement registration with required verified email

* Optional name, emergency access, and signups_allowed

* Implement org invite, remove unneeded invite accept

* fix invitation logic for new registration flow (#5691)

* fix invitation logic for new registration flow

* clarify email_2fa_enforce_on_verified_invite

---------

Co-authored-by: Stefan Melmuk <509385+stefan0xC@users.noreply.github.com>
2025-03-17 16:28:01 +01:00
Josh 71952a4ab5 Add AnonAddy/SimpleLogin self host feature flag (#5694)
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2025-03-15 19:57:04 +01:00
Ben Sherman 994d157064 Add support for mutual-tls feature flag (#5698)
* Add support for mutual-tls feature flag

* Fix formatting

---------

Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2025-03-15 19:46:42 +01:00
Timshel 1dae6093c9 Use subtle to replace deprecated ring::constant_time::verify_slices_are_equal (#5680) 2025-03-15 19:33:17 +01:00
Daniel 6edceb5f7a Update Rust to 1.85.0 (#5634)
- also update the crates
2025-02-24 12:12:34 +01:00
Stefan Melmuk 359a4a088a allow CLI to upload files with truncated filenames (#5618)
Due to a bug in the CLI, the filename in the form-data is incomplete if
the encrypted filename happens to contain a /
2025-02-19 10:40:59 +01:00
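The relaxed check (shown in the Send hunk below) boils down to also accepting a trailing segment of the expected name when the request comes from the CLI; an illustrative helper, not the actual code:

// Hypothetical helper mirroring the logic in post_send_file_v2_data.
fn filename_matches(expected: &str, received: &str, is_cli: bool) -> bool {
    received == expected || (is_cli && expected.ends_with(received))
}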
Mathijs van Veluw 3baffeee9a Fix db issues with Option<> values and upd crates (#5594)
Some tables were lacking an option to convert Option<> values to NULL.
This commit fixes that.

Also updated the crates to the latest version available.
2025-02-14 17:58:57 +01:00
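For background, this is general Diesel behavior rather than anything vaultwarden-specific: a column declared Nullable<T> maps to Option<T>, and writing None stores SQL NULL, so the fix amounts to declaring columns nullable wherever Option values are written. A sketch with a made-up table:

diesel::table! {
    examples (id) {
        id -> Integer,
        // Nullable<Text> <-> Option<String>; inserting None yields NULL
        note -> Nullable<Text>,
    }
}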
Daniel d5c353427d Update crates & fix CVE-2025-25188 (#5576) 2025-02-12 10:21:12 +01:00
47 changed files with 1624 additions and 840 deletions
+7 -2
@@ -229,7 +229,8 @@
# SIGNUPS_ALLOWED=true
## Controls if new users need to verify their email address upon registration
## Note that setting this option to true prevents logins until the email address has been verified!
## On new client versions, this will require the user to verify their email at signup time.
## On older clients, it will require the user to verify their email before they can log in.
## The welcome email will include a verification link, and login attempts will periodically
## trigger another verification email to be sent.
# SIGNUPS_VERIFY=false
@@ -353,6 +354,10 @@
## - "inline-menu-positioning-improvements": Enable the use of inline menu password generator and identity suggestions in the browser extension.
## - "ssh-key-vault-item": Enable the creation and use of SSH key vault items. (Needs clients >=2024.12.0)
## - "ssh-agent": Enable SSH agent support on Desktop. (Needs desktop >=2024.12.0)
## - "anon-addy-self-host-alias": Enable configuring self-hosted Anon Addy alias generator. (Needs Android >=2025.2.0)
## - "simple-login-self-host-alias": Enable configuring self-hosted Simple Login alias generator. (Needs Android >=2025.2.0)
## - "mutual-tls": Enable the use of mutual TLS on Android (Client >= 2025.2.0)
## - "export-attachments": Enable support for exporting attachments (Clients >=2025.4.0)
# EXPERIMENTAL_CLIENT_FEATURE_FLAGS=fido2-vault-credentials
## Require new device emails. When a user logs in an email is required to be sent.
@@ -486,7 +491,7 @@
## Maximum attempts before an email token is reset and a new email will need to be sent.
# EMAIL_ATTEMPTS_LIMIT=3
##
## Setup email 2FA regardless of any organization policy
## Setup email 2FA on registration regardless of any organization policy
# EMAIL_2FA_ENFORCE_ON_VERIFIED_INVITE=false
## Automatically setup email 2FA as fallback provider when needed
# EMAIL_2FA_AUTO_FALLBACK=false
+2
@@ -1,3 +1,5 @@
/.github @dani-garcia @BlackDex
/.github/** @dani-garcia @BlackDex
/.github/CODEOWNERS @dani-garcia @BlackDex
/.github/workflows/** @dani-garcia @BlackDex
/SECURITY.md @dani-garcia @BlackDex
+3 -3
@@ -80,7 +80,7 @@ jobs:
# Only install the clippy and rustfmt components on the default rust-toolchain
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@c5a29ddb4d9d194e7c84ec8c3fba61b1c31fee8c # master @ Jan 30, 2025, 8:16 PM GMT+1
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master @ Mar 18, 2025, 8:14 PM GMT+1
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -90,7 +90,7 @@ jobs:
# Install the any other channel to be used for which we do not execute clippy and rustfmt
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@c5a29ddb4d9d194e7c84ec8c3fba61b1c31fee8c # master @ Jan 30, 2025, 8:16 PM GMT+1
uses: dtolnay/rust-toolchain@56f84321dbccf38fb67ce29ab63e4754056677e0 # master @ Mar 18, 2025, 8:14 PM GMT+1
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -115,7 +115,7 @@ jobs:
# Enable Rust Caching
- name: Rust Caching
uses: Swatinem/rust-cache@f0deed1e0edfc6a9be95417288c0e1099b1eeec3 # v2.7.7
uses: Swatinem/rust-cache@9d47c6ad4b02e050fd481d890b2ea34778fd09d6 # v2.7.8
with:
# Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
# Like changing the build host from Ubuntu 20.04 to 22.04 for example.
+28
@@ -0,0 +1,28 @@
name: Check templates
permissions: {}
on: [ push, pull_request ]
jobs:
docker-templates:
permissions:
contents: read
runs-on: ubuntu-24.04
timeout-minutes: 30
steps:
# Checkout the repo
- name: "Checkout"
uses: actions/checkout@11bd71901bbe5b1630ceea73d27597364c9af683 #v4.2.2
with:
persist-credentials: false
# End Checkout the repo
- name: Run make to rebuild templates
working-directory: docker
run: make
- name: Check for unstaged changes
working-directory: docker
run: git diff --exit-code
continue-on-error: false
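The guard here is git diff --exit-code, which exits non-zero when `make` regenerated any template output, failing the job whenever a Dockerfile is stale relative to its template.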
+1 -1
@@ -14,7 +14,7 @@ jobs:
steps:
# Start Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5 # v3.8.0
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
# https://github.com/moby/buildkit/issues/3969
# Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
with:
+14 -14
@@ -70,13 +70,13 @@ jobs:
steps:
- name: Initialize QEMU binfmt support
uses: docker/setup-qemu-action@53851d14592bedcffcf25ea515637cff71ef929a # v3.3.0
uses: docker/setup-qemu-action@29109295f81e9208d7d86ff1c6c12d2833863392 # v3.6.0
with:
platforms: "arm64,arm"
# Start Docker Buildx
- name: Setup Docker Buildx
uses: docker/setup-buildx-action@6524bf65af31da8d45b59e8c27de4bd072b392f5 # v3.8.0
uses: docker/setup-buildx-action@b5ca514318bd6ebac0fb2aedd5d36ec1b5c232a2 # v3.10.0
# https://github.com/moby/buildkit/issues/3969
# Also set max parallelism to 2, the default of 4 breaks GitHub Actions and causes OOMKills
with:
@@ -120,7 +120,7 @@ jobs:
# Login to Docker Hub
- name: Login to Docker Hub
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
@@ -136,7 +136,7 @@ jobs:
# Login to GitHub Container Registry
- name: Login to GitHub Container Registry
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: ghcr.io
username: ${{ github.repository_owner }}
@@ -153,7 +153,7 @@ jobs:
# Login to Quay.io
- name: Login to Quay.io
uses: docker/login-action@9780b0c442fbb1117ed29e0efdff1e18412f7567 # v3.3.0
uses: docker/login-action@74a5d142397b4f367a81961eba4e8cd7edddf772 # v3.4.0
with:
registry: quay.io
username: ${{ secrets.QUAY_USERNAME }}
@@ -192,7 +192,7 @@ jobs:
- name: Bake ${{ matrix.base_image }} containers
id: bake_vw
uses: docker/bake-action@7bff531c65a5cda33e52e43950a795b91d450f63 # v6.3.0
uses: docker/bake-action@4ba453fbc2db7735392b93edf935aaf9b1e8f747 # v6.5.0
env:
BASE_TAGS: "${{ env.BASE_TAGS }}"
SOURCE_COMMIT: "${{ env.SOURCE_COMMIT }}"
@@ -220,7 +220,7 @@ jobs:
# Attest container images
- name: Attest - docker.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_DOCKERHUB_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@520d128f165991a6c774bcb264f323e3d70747f4 # v2.2.0
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
with:
subject-name: ${{ vars.DOCKERHUB_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
@@ -228,7 +228,7 @@ jobs:
- name: Attest - ghcr.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_GHCR_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@520d128f165991a6c774bcb264f323e3d70747f4 # v2.2.0
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
with:
subject-name: ${{ vars.GHCR_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
@@ -236,7 +236,7 @@ jobs:
- name: Attest - quay.io - ${{ matrix.base_image }}
if: ${{ env.HAVE_QUAY_LOGIN == 'true' && steps.bake_vw.outputs.metadata != ''}}
uses: actions/attest-build-provenance@520d128f165991a6c774bcb264f323e3d70747f4 # v2.2.0
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
with:
subject-name: ${{ vars.QUAY_REPO }}
subject-digest: ${{ env.DIGEST_SHA }}
@@ -290,31 +290,31 @@ jobs:
# Upload artifacts to Github Actions and Attest the binaries
- name: "Upload amd64 artifact ${{ matrix.base_image }}"
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 #v4.6.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-amd64-${{ matrix.base_image }}
path: vaultwarden-amd64-${{ matrix.base_image }}
- name: "Upload arm64 artifact ${{ matrix.base_image }}"
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 #v4.6.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-arm64-${{ matrix.base_image }}
path: vaultwarden-arm64-${{ matrix.base_image }}
- name: "Upload armv7 artifact ${{ matrix.base_image }}"
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 #v4.6.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv7-${{ matrix.base_image }}
path: vaultwarden-armv7-${{ matrix.base_image }}
- name: "Upload armv6 artifact ${{ matrix.base_image }}"
uses: actions/upload-artifact@65c4c4a1ddee5b72f698fdd19549f0f0fb45cf08 #v4.6.0
uses: actions/upload-artifact@ea165f8d65b6e75b540449e92b4886f43607fa02 # v4.6.2
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv6-${{ matrix.base_image }}
path: vaultwarden-armv6-${{ matrix.base_image }}
- name: "Attest artifacts ${{ matrix.base_image }}"
uses: actions/attest-build-provenance@520d128f165991a6c774bcb264f323e3d70747f4 # v2.2.0
uses: actions/attest-build-provenance@c074443f1aee8d4aeeae555aebba3282517141b2 # v2.2.3
with:
subject-path: vaultwarden-*
# End Upload artifacts to Github Actions
+1 -1
@@ -36,7 +36,7 @@ jobs:
persist-credentials: false
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@18f2510ee396bbf400402947b394f2dd8c87dbb0 # v0.29.0
uses: aquasecurity/trivy-action@6c175e9c4083a92bbca2f9724c8a5e33bc2d97a5 # v0.30.0
env:
TRIVY_DB_REPOSITORY: docker.io/aquasec/trivy-db:2,public.ecr.aws/aquasecurity/trivy-db:2,ghcr.io/aquasecurity/trivy-db:2
TRIVY_JAVA_DB_REPOSITORY: docker.io/aquasec/trivy-java-db:1,public.ecr.aws/aquasecurity/trivy-java-db:1,ghcr.io/aquasecurity/trivy-java-db:1
+11 -3
@@ -1,7 +1,7 @@
---
repos:
- repo: https://github.com/pre-commit/pre-commit-hooks
rev: v4.6.0
rev: v5.0.0
hooks:
- id: check-yaml
- id: check-json
@@ -31,7 +31,7 @@ repos:
language: system
args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--"]
types_or: [rust, file]
files: (Cargo.toml|Cargo.lock|rust-toolchain|.*\.rs$)
files: (Cargo.toml|Cargo.lock|rust-toolchain.toml|rustfmt.toml|.*\.rs$)
pass_filenames: false
- id: cargo-clippy
name: cargo clippy
@@ -40,5 +40,13 @@ repos:
language: system
args: ["--features", "sqlite,mysql,postgresql,enable_mimalloc", "--", "-D", "warnings"]
types_or: [rust, file]
files: (Cargo.toml|Cargo.lock|rust-toolchain|clippy.toml|.*\.rs$)
files: (Cargo.toml|Cargo.lock|rust-toolchain.toml|rustfmt.toml|.*\.rs$)
pass_filenames: false
- id: check-docker-templates
name: check docker templates
description: Check if the Docker templates are updated
language: system
entry: sh
args:
- "-c"
- "cd docker && make"
Generated
+634 -358
File diff suppressed because it is too large.
+41 -32
@@ -1,11 +1,12 @@
workspace = { members = ["macros"] }
[workspace]
members = ["macros"]
[package]
name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.83.0"
rust-version = "1.84.0"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
@@ -44,7 +45,7 @@ syslog = "7.0.0"
macros = { path = "./macros" }
# Logging
log = "0.4.25"
log = "0.4.27"
fern = { version = "0.7.1", features = ["syslog-7", "reopen-1"] }
tracing = { version = "0.1.41", features = ["log"] } # Needed to have lettre and webauthn-rs trace logging to work
@@ -52,12 +53,12 @@ tracing = { version = "0.1.41", features = ["log"] } # Needed to have lettre and
dotenvy = { version = "0.15.7", default-features = false }
# Lazy initialization
once_cell = "1.20.2"
once_cell = "1.21.3"
# Numerical libraries
num-traits = "0.2.19"
num-derive = "0.4.2"
bigdecimal = "0.4.7"
bigdecimal = "0.4.8"
# Web framework
rocket = { version = "0.5.1", features = ["tls", "json"], default-features = false }
@@ -71,43 +72,44 @@ dashmap = "6.1.0"
# Async futures
futures = "0.3.31"
tokio = { version = "1.43.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
tokio = { version = "1.44.2", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal", "net"] }
# A generic serialization/deserialization framework
serde = { version = "1.0.217", features = ["derive"] }
serde_json = "1.0.138"
serde = { version = "1.0.219", features = ["derive"] }
serde_json = "1.0.140"
# A safe, extensible ORM and Query builder
diesel = { version = "2.2.7", features = ["chrono", "r2d2", "numeric"] }
diesel = { version = "2.2.9", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.2.0"
diesel_logger = { version = "0.4.0", optional = true }
derive_more = { version = "2.0.0", features = ["from", "into", "as_ref", "deref", "display"] }
derive_more = { version = "2.0.1", features = ["from", "into", "as_ref", "deref", "display"] }
diesel-derive-newtype = "2.1.2"
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.31.0", features = ["bundled"], optional = true }
libsqlite3-sys = { version = "0.32.0", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = "0.9.0"
ring = "0.17.8"
ring = "0.17.14"
subtle = "2.6.1"
# UUID generation
uuid = { version = "1.12.1", features = ["v4"] }
uuid = { version = "1.16.0", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.39", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.10.1"
time = "0.3.37"
chrono = { version = "0.4.40", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.10.3"
time = "0.3.41"
# Job scheduler
job_scheduler_ng = "2.0.5"
# Data encoding library Hex/Base32/Base64
data-encoding = "2.7.0"
data-encoding = "2.8.0"
# JWT library
jsonwebtoken = "9.3.0"
jsonwebtoken = "9.3.1"
# TOTP library
totp-lite = "2.0.1"
@@ -122,47 +124,48 @@ webauthn-rs = "0.3.2"
url = "2.5.4"
# Email libraries
lettre = { version = "0.11.12", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
lettre = { version = "0.11.15", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
email_address = "0.2.9"
# HTML Template library
handlebars = { version = "6.3.0", features = ["dir_source"] }
handlebars = { version = "6.3.2", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.12.12", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
hickory-resolver = "0.24.2"
reqwest = { version = "0.12.15", features = ["native-tls-alpn", "stream", "json", "gzip", "brotli", "socks", "cookies"] }
hickory-resolver = "0.25.1"
# Favicon extraction libraries
html5gum = "0.7.0"
regex = { version = "1.11.1", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.1"
bytes = "1.10.0"
bytes = "1.10.1"
# Cache function results (Used for version check and favicon fetching)
cached = { version = "0.54.0", features = ["async"] }
cached = { version = "0.55.1", features = ["async"] }
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.18.1"
cookie_store = "0.21.1"
# Used by U2F, JWT and PostgreSQL
openssl = "0.10.70"
openssl = "0.10.72"
# CLI argument parsing
pico-args = "0.5.0"
# Macro ident concatenation
paste = "1.0.15"
governor = "0.8.0"
pastey = "0.1.0"
governor = "0.10.0"
# Check client versions for specific features.
semver = "1.0.25"
semver = "1.0.26"
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.43", features = ["secure"], default-features = false, optional = true }
which = "7.0.1"
mimalloc = { version = "0.1.46", features = ["secure"], default-features = false, optional = true }
which = "7.0.3"
# Argon2 library with support for the PHC format
argon2 = "0.5.3"
@@ -213,7 +216,7 @@ codegen-units = 16
# Linting config
# https://doc.rust-lang.org/rustc/lints/groups.html
[lints.rust]
[workspace.lints.rust]
# Forbid
unsafe_code = "forbid"
non_ascii_idents = "forbid"
@@ -243,11 +246,14 @@ if_let_rescope = "allow"
tail_expr_drop_order = "allow"
# https://rust-lang.github.io/rust-clippy/stable/index.html
[lints.clippy]
[workspace.lints.clippy]
# Warn
dbg_macro = "warn"
todo = "warn"
# Ignore/Allow
result_large_err = "allow"
# Deny
case_sensitive_file_extension_comparisons = "deny"
cast_lossless = "deny"
@@ -278,3 +284,6 @@ unused_async = "deny"
unused_self = "deny"
verbose_file_reads = "deny"
zero_sized_map_values = "deny"
[lints]
workspace = true
+2 -2
@@ -48,8 +48,8 @@ fn main() {
fn run(args: &[&str]) -> Result<String, std::io::Error> {
let out = Command::new(args[0]).args(&args[1..]).output()?;
if !out.status.success() {
use std::io::{Error, ErrorKind};
return Err(Error::new(ErrorKind::Other, "Command not successful"));
use std::io::Error;
return Err(Error::other("Command not successful"));
}
Ok(String::from_utf8(out.stdout).unwrap().trim().to_string())
}
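A side note on the change above: std::io::Error::other, stable since Rust 1.74, replaces the Error::new(ErrorKind::Other, ...) boilerplate; a minimal sketch:

use std::io::Error;

fn fail() -> Result<(), Error> {
    // Equivalent to Error::new(ErrorKind::Other, ...), just shorter.
    Err(Error::other("Command not successful"))
}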
+3 -3
@@ -1,11 +1,11 @@
---
vault_version: "v2025.1.1"
vault_image_digest: "sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918"
vault_version: "v2025.3.1"
vault_image_digest: "sha256:5b11739052c26dc3c2135b28dc5b072bc607f870a3e81fbbcc72e0cd1f124bcd"
# Cross Compile Docker Helper Scripts v1.6.1
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
# https://github.com/tonistiigi/xx | https://hub.docker.com/r/tonistiigi/xx/tags
xx_image_digest: "sha256:9c207bead753dda9430bdd15425c6518fc7a03d866103c516a2c6889188f5894"
rust_version: 1.84.1 # Rust version to be used
rust_version: 1.86.0 # Rust version to be used
debian_version: bookworm # Debian release name to be used
alpine_version: "3.21" # Alpine version to be used
# For which platforms/architectures will we try to build images
+10 -10
@@ -19,23 +19,23 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2025.1.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.1.1
# [docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918]
# $ docker pull docker.io/vaultwarden/web-vault:v2025.3.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.3.1
# [docker.io/vaultwarden/web-vault@sha256:5b11739052c26dc3c2135b28dc5b072bc607f870a3e81fbbcc72e0cd1f124bcd]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918
# [docker.io/vaultwarden/web-vault:v2025.1.1]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:5b11739052c26dc3c2135b28dc5b072bc607f870a3e81fbbcc72e0cd1f124bcd
# [docker.io/vaultwarden/web-vault:v2025.3.1]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918 AS vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:5b11739052c26dc3c2135b28dc5b072bc607f870a3e81fbbcc72e0cd1f124bcd AS vault
########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support other platforms than linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.84.1 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.84.1 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.84.1 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.84.1 AS build_armv6
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.86.0 AS build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.86.0 AS build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.86.0 AS build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.86.0 AS build_armv6
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
+17 -17
@@ -19,15 +19,15 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2025.1.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.1.1
# [docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918]
# $ docker pull docker.io/vaultwarden/web-vault:v2025.3.1
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2025.3.1
# [docker.io/vaultwarden/web-vault@sha256:5b11739052c26dc3c2135b28dc5b072bc607f870a3e81fbbcc72e0cd1f124bcd]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918
# [docker.io/vaultwarden/web-vault:v2025.1.1]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:5b11739052c26dc3c2135b28dc5b072bc607f870a3e81fbbcc72e0cd1f124bcd
# [docker.io/vaultwarden/web-vault:v2025.3.1]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:cb6b2095a4afc1d9d243a33f6d09211f40e3d82c7ae829fd025df5ff175a4918 AS vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:5b11739052c26dc3c2135b28dc5b072bc607f870a3e81fbbcc72e0cd1f124bcd AS vault
########################## Cross Compile Docker Helper Scripts ##########################
## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
@@ -36,7 +36,7 @@ FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:9c207bead753dda9430bd
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.84.1-slim-bookworm AS build
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.86.0-slim-bookworm AS build
COPY --from=xx / /
ARG TARGETARCH
ARG TARGETVARIANT
@@ -89,24 +89,24 @@ RUN USER=root cargo new --bin /app
WORKDIR /app
# Environment variables for Cargo on Debian based builds
ARG ARCH_OPENSSL_LIB_DIR \
ARCH_OPENSSL_INCLUDE_DIR
ARG TARGET_PKG_CONFIG_PATH
RUN source /env-cargo && \
if xx-info is-cross ; then \
# Some special variables if needed to override some build paths
if [[ -n "${ARCH_OPENSSL_LIB_DIR}" && -n "${ARCH_OPENSSL_INCLUDE_DIR}" ]]; then \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_LIB_DIR=${ARCH_OPENSSL_LIB_DIR}" >> /env-cargo && \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_INCLUDE_DIR=${ARCH_OPENSSL_INCLUDE_DIR}" >> /env-cargo ; \
fi && \
# We can't use xx-cargo since that uses clang, which doesn't work for our libraries.
# Because of this we generate the needed environment variables here which we can load in the needed steps.
echo "export CC_$(echo "${CARGO_TARGET}" | tr '[:upper:]' '[:lower:]' | tr - _)=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
echo "export CARGO_TARGET_$(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_LINKER=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
echo "export PKG_CONFIG=/usr/bin/$(xx-info)-pkg-config" >> /env-cargo && \
echo "export CROSS_COMPILE=1" >> /env-cargo && \
echo "export OPENSSL_INCLUDE_DIR=/usr/include/$(xx-info)" >> /env-cargo && \
echo "export OPENSSL_LIB_DIR=/usr/lib/$(xx-info)" >> /env-cargo ; \
echo "export PKG_CONFIG_ALLOW_CROSS=1" >> /env-cargo && \
# For some architectures `xx-info` returns a triple which doesn't match the path on disk
# In those cases you can override this by setting the `TARGET_PKG_CONFIG_PATH` build-arg
if [[ -n "${TARGET_PKG_CONFIG_PATH}" ]]; then \
echo "export TARGET_PKG_CONFIG_PATH=${TARGET_PKG_CONFIG_PATH}" >> /env-cargo ; \
else \
echo "export PKG_CONFIG_PATH=/usr/lib/$(xx-info)/pkgconfig" >> /env-cargo ; \
fi && \
echo "# End of env-cargo" >> /env-cargo ; \
fi && \
# Output the current contents of the file
cat /env-cargo
+10 -10
@@ -109,24 +109,24 @@ WORKDIR /app
{% if base == "debian" %}
# Environment variables for Cargo on Debian based builds
ARG ARCH_OPENSSL_LIB_DIR \
ARCH_OPENSSL_INCLUDE_DIR
ARG TARGET_PKG_CONFIG_PATH
RUN source /env-cargo && \
if xx-info is-cross ; then \
# Some special variables if needed to override some build paths
if [[ -n "${ARCH_OPENSSL_LIB_DIR}" && -n "${ARCH_OPENSSL_INCLUDE_DIR}" ]]; then \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_LIB_DIR=${ARCH_OPENSSL_LIB_DIR}" >> /env-cargo && \
echo "export $(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_OPENSSL_INCLUDE_DIR=${ARCH_OPENSSL_INCLUDE_DIR}" >> /env-cargo ; \
fi && \
# We can't use xx-cargo since that uses clang, which doesn't work for our libraries.
# Because of this we generate the needed environment variables here which we can load in the needed steps.
echo "export CC_$(echo "${CARGO_TARGET}" | tr '[:upper:]' '[:lower:]' | tr - _)=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
echo "export CARGO_TARGET_$(echo "${CARGO_TARGET}" | tr '[:lower:]' '[:upper:]' | tr - _)_LINKER=/usr/bin/$(xx-info)-gcc" >> /env-cargo && \
echo "export PKG_CONFIG=/usr/bin/$(xx-info)-pkg-config" >> /env-cargo && \
echo "export CROSS_COMPILE=1" >> /env-cargo && \
echo "export OPENSSL_INCLUDE_DIR=/usr/include/$(xx-info)" >> /env-cargo && \
echo "export OPENSSL_LIB_DIR=/usr/lib/$(xx-info)" >> /env-cargo ; \
echo "export PKG_CONFIG_ALLOW_CROSS=1" >> /env-cargo && \
# For some architectures `xx-info` returns a triple which doesn't match the path on disk
# In those cases you can override this by setting the `TARGET_PKG_CONFIG_PATH` build-arg
if [[ -n "${TARGET_PKG_CONFIG_PATH}" ]]; then \
echo "export TARGET_PKG_CONFIG_PATH=${TARGET_PKG_CONFIG_PATH}" >> /env-cargo ; \
else \
echo "export PKG_CONFIG_PATH=/usr/lib/$(xx-info)/pkgconfig" >> /env-cargo ; \
fi && \
echo "# End of env-cargo" >> /env-cargo ; \
fi && \
# Output the current contents of the file
cat /env-cargo
+1 -10
@@ -133,8 +133,7 @@ target "debian-386" {
platforms = ["linux/386"]
tags = generate_tags("", "-386")
args = {
ARCH_OPENSSL_LIB_DIR = "/usr/lib/i386-linux-gnu"
ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/i386-linux-gnu"
TARGET_PKG_CONFIG_PATH = "/usr/lib/i386-linux-gnu/pkgconfig"
}
}
@@ -142,20 +141,12 @@ target "debian-ppc64le" {
inherits = ["debian"]
platforms = ["linux/ppc64le"]
tags = generate_tags("", "-ppc64le")
args = {
ARCH_OPENSSL_LIB_DIR = "/usr/lib/powerpc64le-linux-gnu"
ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/powerpc64le-linux-gnu"
}
}
target "debian-s390x" {
inherits = ["debian"]
platforms = ["linux/s390x"]
tags = generate_tags("", "-s390x")
args = {
ARCH_OPENSSL_LIB_DIR = "/usr/lib/s390x-linux-gnu"
ARCH_OPENSSL_INCLUDE_DIR = "/usr/include/s390x-linux-gnu"
}
}
// ==== End of unsupported Debian architecture targets ===
+5 -2
@@ -9,5 +9,8 @@ path = "src/lib.rs"
proc-macro = true
[dependencies]
quote = "1.0.38"
syn = "2.0.98"
quote = "1.0.40"
syn = "2.0.100"
[lints]
workspace = true
+4 -6
@@ -1,5 +1,3 @@
extern crate proc_macro;
use proc_macro::TokenStream;
use quote::quote;
@@ -12,7 +10,7 @@ pub fn derive_uuid_from_param(input: TokenStream) -> TokenStream {
fn impl_derive_uuid_macro(ast: &syn::DeriveInput) -> TokenStream {
let name = &ast.ident;
let gen = quote! {
let gen_derive = quote! {
#[automatically_derived]
impl<'r> rocket::request::FromParam<'r> for #name {
type Error = ();
@@ -27,7 +25,7 @@ fn impl_derive_uuid_macro(ast: &syn::DeriveInput) -> TokenStream {
}
}
};
gen.into()
gen_derive.into()
}
#[proc_macro_derive(IdFromParam)]
@@ -39,7 +37,7 @@ pub fn derive_id_from_param(input: TokenStream) -> TokenStream {
fn impl_derive_safestring_macro(ast: &syn::DeriveInput) -> TokenStream {
let name = &ast.ident;
let gen = quote! {
let gen_derive = quote! {
#[automatically_derived]
impl<'r> rocket::request::FromParam<'r> for #name {
type Error = ();
@@ -54,5 +52,5 @@ fn impl_derive_safestring_macro(ast: &syn::DeriveInput) -> TokenStream {
}
}
};
gen.into()
gen_derive.into()
}
+1 -1
@@ -1,4 +1,4 @@
[toolchain]
channel = "1.84.1"
channel = "1.86.0"
components = [ "rustfmt", "clippy" ]
profile = "minimal"
+1 -1
@@ -618,7 +618,7 @@ async fn has_http_access() -> bool {
use cached::proc_macro::cached;
/// Cache this function to prevent API call rate limit. Github only allows 60 requests per hour, and we use 3 here already.
/// It will cache this function for 300 seconds (5 minutes) which should prevent the exhaustion of the rate limit.
#[cached(time = 300, sync_writes = true)]
#[cached(time = 300, sync_writes = "default")]
async fn get_release_info(has_http_access: bool, running_within_container: bool) -> (String, String, String) {
// If the HTTP Check failed, do not even attempt to check for new versions since we were not able to connect with github.com anyway.
if has_http_access {
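The attribute change tracks the cached crate's newer API, where sync_writes takes a mode string instead of a bool (per the crate's 0.55 changelog): "default" serializes writers behind one lock, while "by_key" locks per cache key. A hedged sketch with a made-up function:

use cached::proc_macro::cached;

// Concurrent callers with equal arguments wait for a single computation
// instead of racing to fill the cache entry.
#[cached(time = 300, sync_writes = "default")]
fn release_info(channel: String) -> String {
    format!("latest release for {channel}")
}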
+95 -13
@@ -70,18 +70,31 @@ pub fn routes() -> Vec<rocket::Route> {
#[serde(rename_all = "camelCase")]
pub struct RegisterData {
email: String,
kdf: Option<i32>,
kdf_iterations: Option<i32>,
kdf_memory: Option<i32>,
kdf_parallelism: Option<i32>,
#[serde(alias = "userSymmetricKey")]
key: String,
#[serde(alias = "userAsymmetricKeys")]
keys: Option<KeysData>,
master_password_hash: String,
master_password_hint: Option<String>,
name: Option<String>,
token: Option<String>,
#[allow(dead_code)]
organization_user_id: Option<MembershipId>,
// Used only from the register/finish endpoint
email_verification_token: Option<String>,
accept_emergency_access_id: Option<EmergencyAccessId>,
accept_emergency_access_invite_token: Option<String>,
#[serde(alias = "token")]
org_invite_token: Option<String>,
}
#[derive(Debug, Deserialize)]
@@ -124,13 +137,78 @@ async fn is_email_2fa_required(member_id: Option<MembershipId>, conn: &mut DbCon
#[post("/accounts/register", data = "<data>")]
async fn register(data: Json<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, conn).await
_register(data, false, conn).await
}
pub async fn _register(data: Json<RegisterData>, mut conn: DbConn) -> JsonResult {
let data: RegisterData = data.into_inner();
pub async fn _register(data: Json<RegisterData>, email_verification: bool, mut conn: DbConn) -> JsonResult {
let mut data: RegisterData = data.into_inner();
let email = data.email.to_lowercase();
let mut email_verified = false;
let mut pending_emergency_access = None;
// First, validate the provided verification tokens
if email_verification {
match (
&data.email_verification_token,
&data.accept_emergency_access_id,
&data.accept_emergency_access_invite_token,
&data.organization_user_id,
&data.org_invite_token,
) {
// Normal user registration, when email verification is required
(Some(email_verification_token), None, None, None, None) => {
let claims = crate::auth::decode_register_verify(email_verification_token)?;
if claims.sub != data.email {
err!("Email verification token does not match email");
}
// During this call we don't get the name, so extract it from the claims
if claims.name.is_some() {
data.name = claims.name;
}
email_verified = claims.verified;
}
// Emergency access registration
(None, Some(accept_emergency_access_id), Some(accept_emergency_access_invite_token), None, None) => {
if !CONFIG.emergency_access_allowed() {
err!("Emergency access is not enabled.")
}
let claims = crate::auth::decode_emergency_access_invite(accept_emergency_access_invite_token)?;
if claims.email != data.email {
err!("Claim email does not match email")
}
if &claims.emer_id != accept_emergency_access_id {
err!("Claim emer_id does not match accept_emergency_access_id")
}
pending_emergency_access = Some((accept_emergency_access_id, claims));
email_verified = true;
}
// Org invite
(None, None, None, Some(organization_user_id), Some(org_invite_token)) => {
let claims = decode_invite(org_invite_token)?;
if claims.email != data.email {
err!("Claim email does not match email")
}
if &claims.member_id != organization_user_id {
err!("Claim org_user_id does not match organization_user_id")
}
email_verified = true;
}
_ => {
err!("Registration is missing required parameters")
}
}
}
// Check if the length of the username exceeds 50 characters (same as upstream Bitwarden)
// This also prevents issues with very long usernames causing too large JWTs. See #2419
if let Some(ref name) = data.name {
@@ -144,20 +222,17 @@ pub async fn _register(data: Json<RegisterData>, mut conn: DbConn) -> JsonResult
let password_hint = clean_password_hint(&data.master_password_hint);
enforce_password_hint_setting(&password_hint)?;
let mut verified_by_invite = false;
let mut user = match User::find_by_mail(&email, &mut conn).await {
Some(mut user) => {
Some(user) => {
if !user.password_hash.is_empty() {
err!("Registration not allowed or user already exists")
}
if let Some(token) = data.token {
if let Some(token) = data.org_invite_token {
let claims = decode_invite(&token)?;
if claims.email == email {
// Verify the email address when signing up via a valid invite token
verified_by_invite = true;
user.verified_at = Some(Utc::now().naive_utc());
email_verified = true;
user
} else {
err!("Registration email does not match invite email")
@@ -181,7 +256,10 @@ pub async fn _register(data: Json<RegisterData>, mut conn: DbConn) -> JsonResult
// Order is important here; the invitation check must come first
// because the vaultwarden admin can invite anyone, regardless
// of other signup restrictions.
if Invitation::take(&email, &mut conn).await || CONFIG.is_signup_allowed(&email) {
if Invitation::take(&email, &mut conn).await
|| CONFIG.is_signup_allowed(&email)
|| pending_emergency_access.is_some()
{
User::new(email.clone())
} else {
err!("Registration not allowed or user already exists")
@@ -216,8 +294,12 @@ pub async fn _register(data: Json<RegisterData>, mut conn: DbConn) -> JsonResult
user.public_key = Some(keys.public_key);
}
if email_verified {
user.verified_at = Some(Utc::now().naive_utc());
}
if CONFIG.mail_enabled() {
if CONFIG.signups_verify() && !verified_by_invite {
if CONFIG.signups_verify() && !email_verified {
if let Err(e) = mail::send_welcome_must_verify(&user.email, &user.uuid).await {
error!("Error sending welcome email: {:#?}", e);
}
@@ -226,7 +308,7 @@ pub async fn _register(data: Json<RegisterData>, mut conn: DbConn) -> JsonResult
error!("Error sending welcome email: {:#?}", e);
}
if verified_by_invite && is_email_2fa_required(data.organization_user_id, &mut conn).await {
if email_verified && is_email_2fa_required(data.organization_user_id, &mut conn).await {
email::activate_email_2fa(&user, &mut conn).await.ok();
}
}
+9 -8
@@ -1376,7 +1376,7 @@ async fn delete_attachment_post_admin(
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
) -> JsonResult {
delete_attachment(cipher_id, attachment_id, headers, conn, nt).await
}
@@ -1387,7 +1387,7 @@ async fn delete_attachment_post(
headers: Headers,
conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
) -> JsonResult {
delete_attachment(cipher_id, attachment_id, headers, conn, nt).await
}
@@ -1398,7 +1398,7 @@ async fn delete_attachment(
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
) -> JsonResult {
_delete_cipher_attachment_by_id(&cipher_id, &attachment_id, &headers, &mut conn, &nt).await
}
@@ -1409,7 +1409,7 @@ async fn delete_attachment_admin(
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
) -> JsonResult {
_delete_cipher_attachment_by_id(&cipher_id, &attachment_id, &headers, &mut conn, &nt).await
}
@@ -1818,7 +1818,7 @@ async fn _delete_cipher_attachment_by_id(
headers: &Headers,
conn: &mut DbConn,
nt: &Notify<'_>,
) -> EmptyResult {
) -> JsonResult {
let Some(attachment) = Attachment::find_by_id(attachment_id, conn).await else {
err!("Attachment doesn't exist")
};
@@ -1847,11 +1847,11 @@ async fn _delete_cipher_attachment_by_id(
)
.await;
if let Some(org_id) = cipher.organization_uuid {
if let Some(ref org_id) = cipher.organization_uuid {
log_event(
EventType::CipherAttachmentDeleted as i32,
&cipher.uuid,
&org_id,
org_id,
&headers.user.uuid,
headers.device.atype,
&headers.ip.ip,
@@ -1859,7 +1859,8 @@ async fn _delete_cipher_attachment_by_id(
)
.await;
}
Ok(())
let cipher_json = cipher.to_json(&headers.host, &headers.user.uuid, None, CipherSyncType::User, conn).await;
Ok(Json(json!({"cipher":cipher_json})))
}
/// This will hold all the necessary data to improve a full sync of all the ciphers
+3
@@ -205,6 +205,9 @@ fn config() -> Json<Value> {
feature_states.insert("key-rotation-improvements".to_string(), true);
feature_states.insert("flexible-collections-v-1".to_string(), false);
feature_states.insert("email-verification".to_string(), true);
feature_states.insert("unauth-ui-refresh".to_string(), true);
Json(json!({
// Note: The clients use this version to handle backwards compatibility concerns
// This means they expect a version that closely matches the Bitwarden server version
+14 -23
@@ -997,8 +997,6 @@ struct InviteData {
r#type: NumberOrString,
collections: Option<Vec<CollectionData>>,
#[serde(default)]
access_all: bool,
#[serde(default)]
permissions: HashMap<String, Value>,
}
@@ -1012,7 +1010,7 @@ async fn send_invite(
if org_id != headers.org_id {
err!("Organization not found", "Organization id's do not match");
}
let mut data: InviteData = data.into_inner();
let data: InviteData = data.into_inner();
// HACK: We need the raw user-type to be sure custom role is selected to determine the access_all permission
// The from_str() will convert the custom role type into a manager role type
@@ -1030,13 +1028,11 @@ async fn send_invite(
// HACK: This converts the Custom role which has the `Manage all collections` box checked into an access_all flag
// Since the parent checkbox is not sent to the server we need to check and verify the child checkboxes
// If the box is not checked, the user will still be a manager, but not with the access_all permission
if raw_type.eq("4")
&& data.permissions.get("editAnyCollection") == Some(&json!(true))
&& data.permissions.get("deleteAnyCollection") == Some(&json!(true))
&& data.permissions.get("createNewCollections") == Some(&json!(true))
{
data.access_all = true;
}
let access_all = new_type >= MembershipType::Admin
|| (raw_type.eq("4")
&& data.permissions.get("editAnyCollection") == Some(&json!(true))
&& data.permissions.get("deleteAnyCollection") == Some(&json!(true))
&& data.permissions.get("createNewCollections") == Some(&json!(true)));
let mut user_created: bool = false;
for email in data.emails.iter() {
@@ -1074,7 +1070,6 @@ async fn send_invite(
};
let mut new_member = Membership::new(user.uuid.clone(), org_id.clone());
let access_all = data.access_all;
new_member.access_all = access_all;
new_member.atype = new_type;
new_member.status = member_status;
@@ -1525,8 +1520,6 @@ struct EditUserData {
collections: Option<Vec<CollectionData>>,
groups: Option<Vec<GroupId>>,
#[serde(default)]
access_all: bool,
#[serde(default)]
permissions: HashMap<String, Value>,
}
@@ -1552,7 +1545,7 @@ async fn edit_member(
if org_id != headers.org_id {
err!("Organization not found", "Organization id's do not match");
}
let mut data: EditUserData = data.into_inner();
let data: EditUserData = data.into_inner();
// HACK: We need the raw user-type to be sure custom role is selected to determine the access_all permission
// The from_str() will convert the custom role type into a manager role type
@@ -1565,13 +1558,11 @@ async fn edit_member(
// HACK: This converts the Custom role which has the `Manage all collections` box checked into an access_all flag
// Since the parent checkbox is not sent to the server we need to check and verify the child checkboxes
// If the box is not checked, the user will still be a manager, but not with the access_all permission
if raw_type.eq("4")
&& data.permissions.get("editAnyCollection") == Some(&json!(true))
&& data.permissions.get("deleteAnyCollection") == Some(&json!(true))
&& data.permissions.get("createNewCollections") == Some(&json!(true))
{
data.access_all = true;
}
let access_all = new_type >= MembershipType::Admin
|| (raw_type.eq("4")
&& data.permissions.get("editAnyCollection") == Some(&json!(true))
&& data.permissions.get("deleteAnyCollection") == Some(&json!(true))
&& data.permissions.get("createNewCollections") == Some(&json!(true)));
let mut member_to_edit = match Membership::find_by_uuid_and_org(&member_id, &org_id, &mut conn).await {
Some(member) => member,
@@ -1617,7 +1608,7 @@ async fn edit_member(
}
}
member_to_edit.access_all = data.access_all;
member_to_edit.access_all = access_all;
member_to_edit.atype = new_type as i32;
// Delete all the odd collections
@@ -1626,7 +1617,7 @@ async fn edit_member(
}
// If no accessAll, add the collections received
if !data.access_all {
if !access_all {
for col in data.collections.iter().flatten() {
match Collection::find_by_uuid_and_org(&col.id, &org_id, &mut conn).await {
None => err!("Collection not found in Organization"),
+5 -1
@@ -378,7 +378,11 @@ async fn post_send_file_v2_data(
};
match data.data.raw_name() {
Some(raw_file_name) if raw_file_name.dangerous_unsafe_unsanitized_raw() == send_data.fileName => (),
Some(raw_file_name)
if raw_file_name.dangerous_unsafe_unsanitized_raw() == send_data.fileName
// be less strict only if using CLI, cf. https://github.com/dani-garcia/vaultwarden/issues/5614
|| (headers.device.is_cli() && send_data.fileName.ends_with(raw_file_name.dangerous_unsafe_unsanitized_raw().as_str())
) => {}
Some(raw_file_name) => err!(
"Send file name does not match.",
format!(
+61 -2
@@ -24,7 +24,7 @@ use crate::{
};
pub fn routes() -> Vec<Route> {
routes![login, prelogin, identity_register]
routes![login, prelogin, identity_register, register_verification_email, register_finish]
}
#[post("/connect/token", data = "<data>")]
@@ -714,7 +714,66 @@ async fn prelogin(data: Json<PreloginData>, conn: DbConn) -> Json<Value> {
#[post("/accounts/register", data = "<data>")]
async fn identity_register(data: Json<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, conn).await
_register(data, false, conn).await
}
#[derive(Debug, Deserialize)]
#[serde(rename_all = "camelCase")]
struct RegisterVerificationData {
email: String,
name: Option<String>,
// receiveMarketingEmails: bool,
}
#[derive(rocket::Responder)]
enum RegisterVerificationResponse {
NoContent(()),
Token(Json<String>),
}
#[post("/accounts/register/send-verification-email", data = "<data>")]
async fn register_verification_email(
data: Json<RegisterVerificationData>,
mut conn: DbConn,
) -> ApiResult<RegisterVerificationResponse> {
let data = data.into_inner();
if !CONFIG.is_signup_allowed(&data.email) {
err!("Registration not allowed or user already exists")
}
let should_send_mail = CONFIG.mail_enabled() && CONFIG.signups_verify();
let token_claims =
crate::auth::generate_register_verify_claims(data.email.clone(), data.name.clone(), should_send_mail);
let token = crate::auth::encode_jwt(&token_claims);
if should_send_mail {
let user = User::find_by_mail(&data.email, &mut conn).await;
if user.filter(|u| u.private_key.is_some()).is_some() {
// There is still a timing side channel here in that the code
// paths that send mail take noticeably longer than ones that
// don't. Add a randomized sleep to mitigate this somewhat.
use rand::{rngs::SmallRng, Rng, SeedableRng};
let mut rng = SmallRng::from_os_rng();
let delta: i32 = 100;
let sleep_ms = (1_000 + rng.random_range(-delta..=delta)) as u64;
tokio::time::sleep(tokio::time::Duration::from_millis(sleep_ms)).await;
} else {
mail::send_register_verify_email(&data.email, &token).await?;
}
Ok(RegisterVerificationResponse::NoContent(()))
} else {
// If email verification is not required, return the token directly
// the clients will use this token to finish the registration
Ok(RegisterVerificationResponse::Token(Json(token)))
}
}
#[post("/accounts/register/finish", data = "<data>")]
async fn register_finish(data: Json<RegisterData>, conn: DbConn) -> JsonResult {
_register(data, true, conn).await
}
// https://github.com/bitwarden/jslib/blob/master/common/src/models/request/tokenRequest.ts
+3 -3
@@ -495,7 +495,7 @@ impl WebSocketUsers {
pub async fn send_auth_request(
&self,
user_id: &UserId,
auth_request_uuid: &String,
auth_request_uuid: &str,
acting_device_id: &DeviceId,
conn: &mut DbConn,
) {
@@ -504,7 +504,7 @@ impl WebSocketUsers {
return;
}
let data = create_update(
vec![("Id".into(), auth_request_uuid.clone().into()), ("UserId".into(), user_id.to_string().into())],
vec![("Id".into(), auth_request_uuid.to_owned().into()), ("UserId".into(), user_id.to_string().into())],
UpdateType::AuthRequest,
Some(acting_device_id.clone()),
);
@@ -513,7 +513,7 @@ impl WebSocketUsers {
}
if CONFIG.push_enabled() {
push_auth_request(user_id.clone(), auth_request_uuid.to_string(), conn).await;
push_auth_request(user_id.clone(), auth_request_uuid.to_owned(), conn).await;
}
}
+32
@@ -35,6 +35,7 @@ static JWT_ADMIN_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|admin", CONFIG.
static JWT_SEND_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|send", CONFIG.domain_origin()));
static JWT_ORG_API_KEY_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|api.organization", CONFIG.domain_origin()));
static JWT_FILE_DOWNLOAD_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|file_download", CONFIG.domain_origin()));
static JWT_REGISTER_VERIFY_ISSUER: Lazy<String> = Lazy::new(|| format!("{}|register_verify", CONFIG.domain_origin()));
static PRIVATE_RSA_KEY: OnceCell<EncodingKey> = OnceCell::new();
static PUBLIC_RSA_KEY: OnceCell<DecodingKey> = OnceCell::new();
@@ -145,6 +146,10 @@ pub fn decode_file_download(token: &str) -> Result<FileDownloadClaims, Error> {
decode_jwt(token, JWT_FILE_DOWNLOAD_ISSUER.to_string())
}
pub fn decode_register_verify(token: &str) -> Result<RegisterVerifyClaims, Error> {
decode_jwt(token, JWT_REGISTER_VERIFY_ISSUER.to_string())
}
#[derive(Debug, Serialize, Deserialize)]
pub struct LoginJwtClaims {
// Not before
@@ -315,6 +320,33 @@ pub fn generate_file_download_claims(cipher_id: CipherId, file_id: AttachmentId)
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct RegisterVerifyClaims {
// Not before
pub nbf: i64,
// Expiration time
pub exp: i64,
// Issuer
pub iss: String,
// Subject
pub sub: String,
pub name: Option<String>,
pub verified: bool,
}
pub fn generate_register_verify_claims(email: String, name: Option<String>, verified: bool) -> RegisterVerifyClaims {
let time_now = Utc::now();
RegisterVerifyClaims {
nbf: time_now.timestamp(),
exp: (time_now + TimeDelta::try_minutes(30).unwrap()).timestamp(),
iss: JWT_REGISTER_VERIFY_ISSUER.to_string(),
sub: email,
name,
verified,
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct BasicJwtClaims {
// Not before
+11 -5
@@ -104,7 +104,7 @@ macro_rules! make_config {
let mut builder = ConfigBuilder::default();
$($(
builder.$name = make_config! { @getenv paste::paste!(stringify!([<$name:upper>])), $ty };
builder.$name = make_config! { @getenv pastey::paste!(stringify!([<$name:upper>])), $ty };
)+)+
builder
@@ -133,7 +133,7 @@ macro_rules! make_config {
builder.$name = v.clone();
if self.$name.is_some() {
overrides.push(paste::paste!(stringify!([<$name:upper>])).into());
overrides.push(pastey::paste!(stringify!([<$name:upper>])).into());
}
}
)+)+
@@ -231,7 +231,7 @@ macro_rules! make_config {
element.insert("default".into(), serde_json::to_value(def.$name).unwrap());
element.insert("type".into(), (_get_form_type(stringify!($ty))).into());
element.insert("doc".into(), (_get_doc(concat!($($doc),+))).into());
element.insert("overridden".into(), (overridden.contains(&paste::paste!(stringify!([<$name:upper>])).into())).into());
element.insert("overridden".into(), (overridden.contains(&pastey::paste!(stringify!([<$name:upper>])).into())).into());
element
}),
)+
@@ -484,7 +484,8 @@ make_config! {
disable_icon_download: bool, true, def, false;
/// Allow new signups |> Controls whether new users can register. Users can be invited by the vaultwarden admin even if this is disabled
signups_allowed: bool, true, def, true;
/// Require email verification on signups. This will prevent logins from succeeding until the address has been verified
/// Require email verification on signups. On new client versions, this will require verification at signup time. On older clients,
/// this will prevent logins from succeeding until the address has been verified
signups_verify: bool, true, def, false;
/// If signups require email verification, automatically re-send verification email if it hasn't been sent for a while (in seconds)
signups_verify_resend_time: u64, true, def, 3_600;
@@ -734,7 +735,7 @@ make_config! {
email_expiration_time: u64, true, def, 600;
/// Maximum attempts |> Maximum attempts before an email token is reset and a new email will need to be sent
email_attempts_limit: u64, true, def, 3;
/// Automatically enforce at login |> Setup email 2FA provider regardless of any organization policy
/// Setup email 2FA at signup |> Setup email 2FA provider on registration regardless of any organization policy
email_2fa_enforce_on_verified_invite: bool, true, def, false;
/// Auto-enable 2FA (Know the risks!) |> Automatically setup email 2FA as fallback provider when needed
email_2fa_auto_fallback: bool, true, def, false;
@@ -842,6 +843,10 @@ fn validate_config(cfg: &ConfigItems) -> Result<(), Error> {
"inline-menu-positioning-improvements",
"ssh-key-vault-item",
"ssh-agent",
"anon-addy-self-host-alias",
"simple-login-self-host-alias",
"mutual-tls",
"export-attachments",
];
let configured_flags = parse_experimental_client_feature_flags(&cfg.experimental_client_feature_flags);
let invalid_flags: Vec<_> = configured_flags.keys().filter(|flag| !KNOWN_FLAGS.contains(&flag.as_str())).collect();
@@ -1383,6 +1388,7 @@ where
reg!("email/protected_action", ".html");
reg!("email/pw_hint_none", ".html");
reg!("email/pw_hint_some", ".html");
reg!("email/register_verify_email", ".html");
reg!("email/send_2fa_removed_from_org", ".html");
reg!("email/send_emergency_access_invite", ".html");
reg!("email/send_org_invite", ".html");
+2 -3
@@ -110,7 +110,6 @@ pub fn generate_api_key() -> String {
// Constant time compare
//
pub fn ct_eq<T: AsRef<[u8]>, U: AsRef<[u8]>>(a: T, b: U) -> bool {
use ring::constant_time::verify_slices_are_equal;
verify_slices_are_equal(a.as_ref(), b.as_ref()).is_ok()
use subtle::ConstantTimeEq;
a.as_ref().ct_eq(b.as_ref()).into()
}
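Usage of the subtle API stays a drop-in replacement for the removed ring helper; a self-contained sketch:

use subtle::ConstantTimeEq;

// Compares without short-circuiting on the first mismatch; ct_eq returns a
// Choice, which converts into bool.
fn tokens_match(a: &[u8], b: &[u8]) -> bool {
    a.ct_eq(b).into()
}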

Some files were not shown because too many files have changed in this diff.