Compare commits


34 Commits

Author SHA1 Message Date
Mathijs van Veluw 4438da39f9 Fix healthcheck when using .env file (#4299)
It seems Debian-based images see the `.env` file in the `pwd` path, but
sourcing it via `. .env` breaks. It does work if you provide the full
path `/.env`, so the default is now `/.env`.

Alpine does not have an issue with either way.
2024-01-31 22:31:47 +01:00
Stefan Melmuk 0b2383ab56 fix push device registration (#4297)
don't try to register a push device when the device is new;
it will be registered when the push token is saved

fixes #4296
2024-01-31 22:31:22 +01:00
gzfrozen ad1d65bdf8 Update env template file (#4276)
* update env template to fit the config.rs

* Categorize env template settings

* Fix a wrong setting

* Fix wrong icon redirect code

* Fix ICON_DOWNLOAD_TIMEOUT default value

Co-authored-by: Daniel <daniel.barabasa@gmail.com>

* Move related settings together.
Merge Yubikey, Duo, Email 2FA sections into one.
Other minor fixes.

* Minor fix of some settings position

* Add some comment

* Minor fix.

---------

Co-authored-by: Daniel <daniel.barabasa@gmail.com>
2024-01-30 19:15:37 +01:00
Stefan Melmuk 3b283c289e register missing push devices at login (#3792)
save the push token of a new device even if push notifications are not
enabled, and provide a way to register the push device at login

unregister the device if there already is a push token saved, unless the
new token has already been registered.

also, the `unregister_push_device` function used the wrong argument,
cf. https://github.com/bitwarden/server/blob/08d380900b540f8d1a734c7abccaf80e59a91ced/src/Core/Services/Implementations/RelayPushRegistrationService.cs#L43
2024-01-30 19:14:25 +01:00
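A minimal sketch (with illustrative names, not the actual Vaultwarden types) of the registration flow described in #3792 and #4297 above: a new device only stores its push token, registration with the push relay happens once that token is saved, and an existing registration is dropped first unless the incoming token is identical.

```rust
struct Device {
    push_uuid: Option<String>,
    push_token: Option<String>,
}

fn save_push_token(device: &mut Device, new_token: String) {
    if device.push_uuid.is_some() {
        if device.push_token.as_deref() == Some(new_token.as_str()) {
            // Already registered with this exact token, nothing to do.
            return;
        }
        // A different token was registered before: forget that registration
        // (a real implementation would also call the relay to unregister).
        device.push_uuid = None;
    }
    device.push_token = Some(new_token);
    // register_push_device(...) would run after the token has been persisted.
}
```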
Stefan Melmuk 4b9384cb2b err on invalid feature flag (#4263)
* err on invalid feature flag

* print all invalid flags and improve error message
2024-01-28 23:36:27 +01:00
Mathijs van Veluw 0f39d96518 Fix attachment upload size check (#4282)
The min/max were reversed with the `add` and `sub` functions,
which caused files to always be out of bounds in the check.

Fixes #4281
2024-01-28 23:32:09 +01:00
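For illustration, a hedged sketch of the kind of bounds check described in #4282 above: the upper bound must come from `add` and the lower bound from `sub`; with the two swapped, the minimum ends up above the maximum and every file fails the check. Names are illustrative, not the actual Vaultwarden code.

```rust
// Accept the actual size only if it lies within `leeway` bytes of the claimed size.
fn size_within_leeway(claimed_size: i64, actual_size: i64, leeway: i64) -> bool {
    let min = claimed_size.saturating_sub(leeway); // lower bound: sub
    let max = claimed_size.saturating_add(leeway); // upper bound: add
    (min..=max).contains(&actual_size)
}
```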
Daniel García edf7484a70 Improve file limit handling (#4242)
* Improve file limit handling

* Oops

* Update PostgreSQL migration

* Review comments

---------

Co-authored-by: BlackDex <black.dex@gmail.com>
2024-01-27 02:43:26 +01:00
Jacques B 8b66e34415 Return 404 when user public_key is empty (#4271) 2024-01-26 20:34:36 +01:00
Mathijs van Veluw 1d00e34bbb Update crates, web-vault and GHA (#4275)
- Update GitHub Actions
- Updated crates
- Updated web-vault to v2024.1.2
2024-01-26 20:19:53 +01:00
Stefan Melmuk 1b801406d6 prevent side effects if groups are disabled (#4265) 2024-01-25 22:02:07 +01:00
Helmut K. C. Tessarek 5e46a43306 fix: use black text for update badge (better contrast) (#4245) 2024-01-25 21:58:05 +01:00
Mathijs van Veluw 5c77431c2d Fix bulk collection deletion (#4257)
The bulk collection delete request no longer includes the extra org_id in the
posted data, so we now only use the org_id from the path.

Fixes #4253
2024-01-25 21:57:35 +01:00
dependabot[bot] 2775c6ce8a Bump h2 from 0.3.23 to 0.3.24 (#4260)
Bumps [h2](https://github.com/hyperium/h2) from 0.3.23 to 0.3.24.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.24/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.23...v0.3.24)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-01-25 21:56:33 +01:00
Mathijs van Veluw 890e668071 Update crates and fix icon issue (#4237)
- Fix icon download issue by removing the deflate feature
- Updated all the crates
- Updated Handlebars code

Fixes #4224
2024-01-12 20:44:37 +01:00
Stefan Melmuk 596c167312 improve emergency access when not enabled (#4227)
* improve emergency access when not enabled

* display note that emergency access is disabled
2024-01-10 19:02:36 +01:00
Daniel García ae3a153bdb Update README.md 2024-01-01 19:44:52 +01:00
Stefan Melmuk 2c36993792 enforce 2FA policy on removal of second factor and login (#3803)
* enforce 2fa policy on removal of second factor

users should be revoked when their second factors are removed.

we want to revoke users so they don't have to be invited again and so
organization admins and owners are aware that they no longer have
access.

we make an exception for non-confirmed users to speed up the invitation
process, as they would have to be restored before they can accept their
invitation or be confirmed.

if email is enabled, invited users have to add a second factor before
they can accept the invitation to an organization with the 2fa policy;
if it is not enabled, that check is done when confirming the user.

* use &str instead of String in log_event()

* enforce the 2fa policy on login

if a user doesn't have a second factor, check whether they are in an
organization that has the 2fa policy enabled and revoke their access
2024-01-01 19:41:40 +01:00
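A hypothetical sketch of the enforcement described in #3803 above; the model and field names are illustrative, not the actual Vaultwarden code.

```rust
enum MembershipStatus {
    Invited,
    Accepted,
    Confirmed,
    Revoked,
}

struct Membership {
    status: MembershipStatus,
    org_enforces_2fa: bool,
}

// When the user has no second factor left, revoke their accepted/confirmed
// memberships in every org that enforces the 2FA policy, but leave invited
// users alone so they can still add a second factor and accept later.
fn enforce_2fa_policy(user_has_second_factor: bool, memberships: &mut [Membership]) {
    if user_has_second_factor {
        return;
    }
    for m in memberships.iter_mut() {
        let active = matches!(m.status, MembershipStatus::Accepted | MembershipStatus::Confirmed);
        if m.org_enforces_2fa && active {
            m.status = MembershipStatus::Revoked;
        }
    }
}
```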
THONY d672ad3f76 US or EU Data Region Selection (#3752)
* add selection of data region for push

* fix cargo check + rewrite config + add check url

* fix clippy error

* add comment in .env.template, adapt config.rs

* Update .env.template

Co-authored-by: William Desportes <williamdes@wdes.fr>

* Update .env.template

Co-authored-by: William Desportes <williamdes@wdes.fr>

* Revert "Update .env.template"

This reverts commit 5bed974ba7b9f481792d2228834585f053d47dc3.

* Revert "Update .env.template"

This reverts commit 0760eff95dfaf2a9cf97bb25f6cf7660bdf55173.

* fix /connect/token to push identity

* fix /connect/token to push identity

* Fixed formatting when solving merge conflicts

---------

Co-authored-by: William Desportes <williamdes@wdes.fr>
Co-authored-by: Daniel García <dani-garcia@users.noreply.github.com>
2024-01-01 16:01:57 +01:00
Matlink a641b48884 Fix #3413: push to users accessing the collections using groups (#3757)
* Fix #3413: push to users accessing the collections using groups

* Notify groups only when enabled
2024-01-01 15:46:03 +01:00
Philipp Kolberg 98b2178c7d Allow customizing the featureStates (#4168)
* Allow customizing the featureStates

Use a comma-separated list of features to enable via the FEATURE_FLAGS env variable

* Move feature flag parsing to util

* Fix formatting

* Update supported feature flags

* Rename feature_flags to experimental_client_feature_flags

Additionally, use a caret (^) instead of an exclamation mark (!) to disable features

* Fix formatting issue.

* Add documentation to env template

* Remove functionality to disable feature flags

* Fix JSON key for feature states

* Convert error to warning when feature flag is unrecognized

* Simplify parsing of feature flags

* Fix default value of feature flags in env template

* Fix formatting
2024-01-01 15:44:02 +01:00
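A hypothetical sketch of the FEATURE_FLAGS parsing described in #4168 above; the known-flag list is illustrative (`autofill-v2` is mentioned elsewhere in this log, the other names are made up). The later #4263 change turns unknown flags into a hard error listing every invalid name at once, as sketched here.

```rust
use std::collections::HashMap;

// Illustrative list of accepted flag names, not the real one.
const KNOWN_FLAGS: &[&str] = &["autofill-v2", "example-flag-a", "example-flag-b"];

fn parse_feature_flags(raw: &str) -> Result<HashMap<String, bool>, String> {
    // Split the comma-separated FEATURE_FLAGS value and drop empty entries.
    let requested: Vec<&str> = raw.split(',').map(str::trim).filter(|f| !f.is_empty()).collect();
    // Collect every unknown name so they can all be reported in one message.
    let invalid: Vec<&&str> = requested.iter().filter(|f| !KNOWN_FLAGS.contains(*f)).collect();
    if !invalid.is_empty() {
        return Err(format!("Unrecognized feature flags: {invalid:?}"));
    }
    Ok(requested.into_iter().map(|f| (f.to_string(), true)).collect())
}
```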
Mathijs van Veluw 76a3f0f531 Fix Single Org Policy check (#4207)
There was an error in the single org policy check to determine how many
users there are in an org. The `or` check was at the wrong location in
the DSL.

This is now fixed.

Fixes #4205
2024-01-01 15:42:57 +01:00
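Not the actual Diesel DSL from #4207, just a plain-Rust illustration of why the placement of the `or` matters when counting the members of an org: grouped too late, the condition also matches rows from other organizations.

```rust
struct Row {
    org_uuid: String,
    invited: bool,
    confirmed: bool,
}

fn count_members(rows: &[Row], org: &str) -> (usize, usize) {
    // `or` applied too late: `(a && b) || c` also counts invited rows from any org.
    let wrong = rows.iter().filter(|r| r.org_uuid == org && r.confirmed || r.invited).count();
    // `or` grouped correctly: only rows belonging to `org` are counted.
    let right = rows.iter().filter(|r| r.org_uuid == org && (r.confirmed || r.invited)).count();
    (wrong, right)
}
```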
Mathijs van Veluw c5665e7b77 Update Rust and Crates (#4211)
- Updated Rust to v1.75.0
- Updated all the crates
- Fixed warning generated by latest version of Rust
2024-01-01 15:41:54 +01:00
Mathijs van Veluw cbdcf8ef9f Update web-vault to v2023.12.0 (#4201) 2023-12-24 15:50:58 +01:00
Chris 3337594d60 Add additional build target which optimizes for size (#4096)
OpenWRT is a project which builds and distributes firmware for
embedded devices like routers, access points, and so on. These
devices are usually very limited in terms of storage. Therefore,
optimizing binaries for size at the cost of execution speed is
usually desired.

This PR adds an additional build-target, namely "release-micro",
which implements several parameters which optimize in favor of
binary size.

The following parameters were chosen:
- opt-level "z": Optimize for size with disabled loop vectorization
- strip "symbols": Strip debuginfo and symbols from binary
- lto "fat": Enable link-time optimizations across all crates
- codegen-units 1: Disable parallelization of code generation to
  allow for additional optimizations
- panic "abort": Abort on Panic() instead of unwinding

All these build parameters significantly reduce the binary size
from >40MB to <15MB - the actual amount depends on the target
architecture.

We would like to upstream this new build target to keep our build
environment simple. Other projects which deploy vaultwarden on
size-constrained environments may benefit from this change too.

Signed-off-by: Christian Lachner <gladiac@gmail.com>
2023-12-18 21:46:53 +01:00
Mathijs van Veluw 2daa8be1f1 Update crates (#4173)
Update all crates instead of only the zerocopy from dependabot.
Closes #4170
2023-12-18 21:45:54 +01:00
Mathijs van Veluw eccb3ab947 Decrease JWT Refresh/Auth token (#4163)
Large JWTs could cause issues because the header or body of the
HTTP request could get too large when you are a member of a lot of organizations.

This PR removes these specific keys since they are not used on either the
client side or the server side.

Because Bitwarden does add these to their JWTs, I would suggest keeping
the code we had, but commented out, as a reference.

Removing it and searching for it when needed would be a waste of time.

Fixes #4156
2023-12-13 17:49:35 +01:00
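A hypothetical sketch of the slimmed-down login claims described in #4163 above; field names are illustrative. The per-organization membership lists are no longer serialized into the JWT and are kept only as commented-out reference fields.

```rust
use serde::Serialize;

#[derive(Serialize)]
struct LoginJwtClaims {
    sub: String,   // user uuid
    exp: i64,      // expiry timestamp
    email: String,
    // orgowner: Vec<String>,   // removed: unused by both client and server,
    // orgadmin: Vec<String>,   // and serializing them for users in many
    // orguser: Vec<String>,    // organizations made the token (and thus the
    // orgmanager: Vec<String>, // HTTP headers) too large
}
```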
Mathijs van Veluw 3246251f29 Fix the version string (#4153)
For some reason still unknown, the `.git` directory was not copied
into the container. I think buildkit (buildx) did this by default before and
stopped doing so in newer versions.

This PR fixes this by also touching `build.rs` besides `src/main.rs`.

This PR also updates Rust to v1.74.1 and some crates, including the
latest version of Alpine 3.19.

Fixes #4150
2023-12-09 23:04:33 +01:00
Mathijs van Veluw 8ab200224e Several small fixes for open issues (#4143)
* Fix BWDC when re-run with cleared cache

Using the BWDC with a cleared cache caused invited users to be converted
to accepted users.

The problem was a wrong check for the `restore` function.

Fixes #4114

* Remove useless variable

During some refactoring this seems to have been overlooked.
The variable gets filled but isn't used at all afterwards.

Fixes #4105

* Check some `.git` paths to force a rebuild

When a checked-out repo switches to a specific tag, and that tag does
not change anything in the files, the build process may not see any
changes even though the version string needs to be different.

This commit ensures that if some specific paths are changed within the
.git directory, cargo will be triggered to rebuild.

Fixes #4087

* Do not delete dir on file delete

Previously, during a `delete_file` call we also tried to delete the
parent directory and ignored all errors, such as the directory not
being empty.

Since this function is called `delete_file` and does not mention
anything in regard to a directory, I have removed that code; it will
now only delete the file and leave the rest as-is.

If this is somehow still needed or wanted, which I do not think it is,
we should create a new function.

Fixes #4081

* Fix healthcheck when using an ENV file

If someone is using a `.env` file or has configured the `ENV_FILE` variable
to use that as its configuration, this was missed by the healthcheck.

So, `DOMAIN` and `ROCKET_TLS` were not seen, and not used, in these cases.

This commit fixes this by checking for the file and, if it exists,
loading those variables first.

Fixes #4112

* Add missing route

While there was a function and a derive, this endpoint wasn't part of
the routes. Since Bitwarden does have this endpoint, I'll add the route
instead of deleting it.

Fixes #4076
Fixes #4144

* Update crates to update the openssl crate

Because of a bug in the openssl-sys crate we had pinned it to an
older version. That issue has been fixed and a release went out 2 days ago.

This commit updates the openssl crates along with others.
This should also fix the issues with building Vaultwarden using newer
versions of LibreSSL.

Fixes #4051
2023-12-09 01:21:14 +01:00
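A minimal sketch of the behaviour described under "Do not delete dir on file delete" (#4081) above: remove only the file itself and leave the parent directory alone.

```rust
use std::{fs, io, path::Path};

fn delete_file(path: &Path) -> io::Result<()> {
    // Previously something along the lines of fs::remove_dir(parent), with the
    // error ignored, followed this call; that behaviour was dropped.
    fs::remove_file(path)
}
```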
Mathijs van Veluw 34e00e1478 Update Rust, Crates, Profile and Actions (#4126)
- Updated Rust to v1.74.0
- Updated all crates (where possible)
- Changed release profile to use
  * fat lto
  * 1 codegen-unit
  This should optimize a bit for speed and a lot for size (~15MB smaller)
- Updated GitHub Actions to use caching for the bake process
- Added a schedule to clean the cache every week to prevent stale Debian/Alpine base images
- During the release action, the Alpine/static binaries are added as artifacts.
  Later we could maybe also add them to the releases automatically.
- Added CODEOWNERS to prevent unchecked changes to GitHub Actions workflows
2023-12-04 20:26:11 +01:00
Mathijs van Veluw 0fdda3bc2f Prevent generating an error during ws close (#4127)
When a WebSocket connection was closing, it was sending a message after
it had already been closed. This generated an error in the logs.
While this error didn't harm any of the functionality of Vaultwarden, it
isn't nice to see it of course.

This PR fixes this by catching the close message and breaking the loop at
that point. This prevents the `_` catch-all from echoing the close
message back to the client, which was causing the error message.

Fixes #4090
2023-12-04 20:20:13 +01:00
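A rough sketch of the loop change described in #4127 above, using a stand-in message type rather than the real tungstenite `Message`: the close frame gets its own arm that breaks out of the loop, so the catch-all arm no longer replies on a connection that is already closing.

```rust
enum WsMessage {
    Text(String),
    Close,
    Other,
}

fn run_loop(messages: Vec<WsMessage>) {
    for msg in messages {
        match msg {
            WsMessage::Text(t) => println!("handle {t}"),
            // Stop here instead of sending anything back on a closing connection.
            WsMessage::Close => break,
            WsMessage::Other => {
                // Previously the `_` catch-all also handled Close frames and
                // replied, which produced the logged error.
            }
        }
    }
}
```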
Mathijs van Veluw 48836501bf Update crates (#4074)
* Remove another header for websocket connections

* Fix small bake issue

* Update crates

Updated crates and adjusted code where needed.
One major update is Rocket rc4; there is no need anymore (again) for crates.io patching.

The only item still pending is openssl/openssl-sys, for which we need to
wait and see whether https://github.com/sfackler/rust-openssl/pull/2094 will be
merged. If it is, we can remove the pinned versions of the openssl crate.
2023-11-15 10:41:14 +01:00
Mathijs van Veluw f863ffb89a Add Protected Actions Check (#4067)
Since the `Login with device` feature, some actions done via the
web-vault need to be verified via an OTP instead of providing the MasterPassword.

This only happens if a user used `Login with device` on a device
which uses either Biometrics login or a PIN. These actions prevent the
authorizing device from sending the MasterPasswordHash. When this happens, the
web-vault requests an OTP to be filled in, and this OTP is sent to the
user's email address, which is the same as the email address used to log in.

The only way to bypass this is by logging in with your password; in
those cases a password is requested instead of an OTP.

In case SMTP is not enabled, it will show an error message telling the
user to log in using their password.

Fixes #4042
2023-11-12 22:15:44 +01:00
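A hypothetical sketch of the password-or-OTP verification described in #4067 above; the type and helpers are illustrative, not the actual Vaultwarden `PasswordOrOtpData`.

```rust
struct PasswordOrOtp {
    master_password_hash: Option<String>,
    otp: Option<String>,
}

fn validate(
    data: &PasswordOrOtp,
    password_ok: impl Fn(&str) -> bool,
    otp_ok: impl Fn(&str) -> bool,
    smtp_enabled: bool,
) -> Result<(), &'static str> {
    match (&data.master_password_hash, &data.otp) {
        // Master password path, always available.
        (Some(hash), _) if password_ok(hash) => Ok(()),
        (Some(_), _) => Err("Invalid password"),
        // OTP path: only usable when SMTP is enabled, since the OTP is emailed.
        (None, Some(token)) if smtp_enabled && otp_ok(token) => Ok(()),
        (None, Some(_)) if !smtp_enabled => Err("SMTP is not enabled, please log in using your master password"),
        _ => Err("Invalid OTP"),
    }
}
```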
Mathijs van Veluw 03c6ed2e07 Disable autofill-v2 (#4056)
Disabled autofill-v2 as it seems to cause strange issues as reported
here: https://github.com/dani-garcia/vaultwarden/discussions/4052

Also added the Vaultwarden server version back again but at a different
location.

Fixes #4052
2023-11-09 00:16:27 +01:00
Mathijs van Veluw efc6eb0073 Fix missing alpine tag during buildx bake (#4043)
The bake recipe was missing the single `:alpine` tag for the alpine
builds when we were releasing a `stable/latest` version of Vaultwarden.

This PR fixes this by checking for those conditions and adding the
`:alpine` tag too.

We will keep the `:latest-alpine` also, which I find even nicer than just
`:alpine`.

Fixes #4035
2023-11-07 10:50:58 +01:00
62 changed files with 2474 additions and 1416 deletions
+302 -225
File diff suppressed because it is too large
+3
@@ -0,0 +1,3 @@
/.github @dani-garcia @BlackDex
/.github/CODEOWNERS @dani-garcia @BlackDex
/.github/workflows/** @dani-garcia @BlackDex
+4 -4
@@ -46,7 +46,7 @@ jobs:
steps:
# Checkout the repo
- name: "Checkout"
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.0
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 #v4.1.1
# End Checkout the repo
@@ -74,7 +74,7 @@ jobs:
# Only install the clippy and rustfmt components on the default rust-toolchain
- name: "Install rust-toolchain version"
uses: dtolnay/rust-toolchain@439cf607258077187679211f12aa6f19af4a0af7 # master @ 2023-09-19 - 05:31 PM GMT+2
uses: dtolnay/rust-toolchain@be73d7920c329f220ce78e0234b8f96b7ae60248 # master @ 2023-12-07 - 10:22 PM GMT+1
if: ${{ matrix.channel == 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -84,7 +84,7 @@ jobs:
# Install the any other channel to be used for which we do not execute clippy and rustfmt
- name: "Install MSRV version"
uses: dtolnay/rust-toolchain@439cf607258077187679211f12aa6f19af4a0af7 # master @ 2023-09-19 - 05:31 PM GMT+2
uses: dtolnay/rust-toolchain@be73d7920c329f220ce78e0234b8f96b7ae60248 # master @ 2023-12-07 - 10:22 PM GMT+1
if: ${{ matrix.channel != 'rust-toolchain' }}
with:
toolchain: "${{steps.toolchain.outputs.RUST_TOOLCHAIN}}"
@@ -106,7 +106,7 @@ jobs:
# End Show environment
# Enable Rust Caching
- uses: Swatinem/rust-cache@a95ba195448af2da9b00fb742d14ffaaf3c21f43 # v2.7.0
- uses: Swatinem/rust-cache@23bce251a8cd2ffc3c1075eaa2367cf899916d84 # v2.7.3
with:
# Use a custom prefix-key to force a fresh start. This is sometimes needed with bigger changes.
# Like changing the build host from Ubuntu 20.04 to 22.04 for example.
+1 -1
@@ -13,7 +13,7 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.0
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
# End Checkout the repo
# Download hadolint - https://github.com/hadolint/hadolint/releases
+108 -10
@@ -14,7 +14,6 @@ on:
branches: # Only on paths above
- main
- release-build-revision
tags: # Always, regardless of paths above
- '*'
@@ -31,7 +30,7 @@ jobs:
steps:
- name: Skip Duplicates Actions
id: skip_check
uses: fkirc/skip-duplicate-actions@12aca0a884f6137d619d6a8a09fcc3406ced5281 # v5.3.0
uses: fkirc/skip-duplicate-actions@f75f66ce1886f00957d99748a42c724f4330bdcf # v5.3.1
with:
cancel_others: 'true'
# Only run this when not creating a tag
@@ -42,12 +41,12 @@ jobs:
timeout-minutes: 120
needs: skip_check
if: ${{ needs.skip_check.outputs.should_skip != 'true' && github.repository == 'dani-garcia/vaultwarden' }}
# TODO: Start a local docker registry to be used to extract the final Alpine static build images
# services:
# registry:
# image: registry:2
# ports:
# - 5000:5000
# Start a local docker registry to extract the final Alpine static build binaries
services:
registry:
image: registry:2
ports:
- 5000:5000
env:
SOURCE_COMMIT: ${{ github.sha }}
SOURCE_REPOSITORY_URL: "https://github.com/${{ github.repository }}"
@@ -69,7 +68,7 @@ jobs:
steps:
# Checkout the repo
- name: Checkout
uses: actions/checkout@8ade135a41bc03ea155e62e844d188df1ea18608 # v4.1.0
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 # v4.1.1
with:
fetch-depth: 0
@@ -140,6 +139,12 @@ jobs:
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.GHCR_REPO }}" | tee -a "${GITHUB_ENV}"
- name: Add registry for ghcr.io
if: ${{ env.HAVE_GHCR_LOGIN == 'true' }}
shell: bash
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.GHCR_REPO }}" | tee -a "${GITHUB_ENV}"
# Login to Quay.io
- name: Login to Quay.io
uses: docker/login-action@343f7c4344506bcbf9b4de18042ae17996df046d # v3.0.0
@@ -155,8 +160,28 @@ jobs:
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}${{ vars.QUAY_REPO }}" | tee -a "${GITHUB_ENV}"
- name: Configure build cache from/to
shell: bash
run: |
#
# Check if there is a GitHub Container Registry Login and use it for caching
if [[ -n "${HAVE_GHCR_LOGIN}" ]]; then
echo "BAKE_CACHE_FROM=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }}" | tee -a "${GITHUB_ENV}"
echo "BAKE_CACHE_TO=type=registry,ref=${{ vars.GHCR_REPO }}-buildcache:${{ matrix.base_image }},mode=max" | tee -a "${GITHUB_ENV}"
else
echo "BAKE_CACHE_FROM="
echo "BAKE_CACHE_TO="
fi
#
- name: Add localhost registry
if: ${{ matrix.base_image == 'alpine' }}
shell: bash
run: |
echo "CONTAINER_REGISTRIES=${CONTAINER_REGISTRIES:+${CONTAINER_REGISTRIES},}localhost:5000/vaultwarden/server" | tee -a "${GITHUB_ENV}"
- name: Bake ${{ matrix.base_image }} containers
uses: docker/bake-action@511fde2517761e303af548ec9e0ea74a8a100112 # v4.0.0
uses: docker/bake-action@849707117b03d39aba7924c50a10376a69e88d7d # v4.1.0
env:
BASE_TAGS: "${{ env.BASE_TAGS }}"
SOURCE_COMMIT: "${{ env.SOURCE_COMMIT }}"
@@ -168,3 +193,76 @@ jobs:
push: true
files: docker/docker-bake.hcl
targets: "${{ matrix.base_image }}-multi"
set: |
*.cache-from=${{ env.BAKE_CACHE_FROM }}
*.cache-to=${{ env.BAKE_CACHE_TO }}
# Extract the Alpine binaries from the containers
- name: Extract binaries
if: ${{ matrix.base_image == 'alpine' }}
shell: bash
run: |
# Check which main tag we are going to build determined by github.ref_type
if [[ "${{ github.ref_type }}" == "tag" ]]; then
EXTRACT_TAG="latest"
elif [[ "${{ github.ref_type }}" == "branch" ]]; then
EXTRACT_TAG="testing"
fi
# After each extraction the image is removed.
# This is needed because using different platforms doesn't trigger a new pull/download
# Extract amd64 binary
docker create --name amd64 --platform=linux/amd64 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp amd64:/vaultwarden vaultwarden-amd64
docker rm --force amd64
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
# Extract arm64 binary
docker create --name arm64 --platform=linux/arm64 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp arm64:/vaultwarden vaultwarden-arm64
docker rm --force arm64
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
# Extract armv7 binary
docker create --name armv7 --platform=linux/arm/v7 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp armv7:/vaultwarden vaultwarden-armv7
docker rm --force armv7
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
# Extract armv6 binary
docker create --name armv6 --platform=linux/arm/v6 "vaultwarden/server:${EXTRACT_TAG}-alpine"
docker cp armv6:/vaultwarden vaultwarden-armv6
docker rm --force armv6
docker rmi --force "vaultwarden/server:${EXTRACT_TAG}-alpine"
# Upload artifacts to Github Actions
- name: "Upload amd64 artifact"
uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
if: ${{ matrix.base_image == 'alpine' }}
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-amd64
path: vaultwarden-amd64
- name: "Upload arm64 artifact"
uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
if: ${{ matrix.base_image == 'alpine' }}
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-arm64
path: vaultwarden-arm64
- name: "Upload armv7 artifact"
uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
if: ${{ matrix.base_image == 'alpine' }}
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv7
path: vaultwarden-armv7
- name: "Upload armv6 artifact"
uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
if: ${{ matrix.base_image == 'alpine' }}
with:
name: vaultwarden-${{ env.SOURCE_VERSION }}-linux-armv6
path: vaultwarden-armv6
# End Upload artifacts to Github Actions
@@ -0,0 +1,25 @@
on:
workflow_dispatch:
inputs:
manual_trigger:
description: "Manual trigger buildcache cleanup"
required: false
default: ""
schedule:
- cron: '0 1 * * FRI'
name: Cleanup
jobs:
releasecache-cleanup:
name: Releasecache Cleanup
runs-on: ubuntu-22.04
timeout-minutes: 30
steps:
- name: Delete vaultwarden-buildcache containers
uses: actions/delete-package-versions@0d39a63126868f5eefaa47169615edd3c0f61e20 # v4.1.1
with:
package-name: 'vaultwarden-buildcache'
package-type: 'container'
min-versions-to-keep: 0
delete-only-untagged-versions: 'false'
+2 -3
@@ -4,7 +4,6 @@ on:
push:
branches:
- main
- release-build-revision
tags:
- '*'
pull_request:
@@ -29,7 +28,7 @@ jobs:
uses: actions/checkout@b4ffde65f46336ab88eb53be808477a3936bae11 #v4.1.1
- name: Run Trivy vulnerability scanner
uses: aquasecurity/trivy-action@f78e9ecf42a1271402d4f484518b9313235990e1 # v0.13.1
uses: aquasecurity/trivy-action@d43c1f16c00cfd3978dde6c07f4bbcf9eb6993ca # v0.16.1
with:
scan-type: repo
ignore-unfixed: true
@@ -38,6 +37,6 @@ jobs:
severity: CRITICAL,HIGH
- name: Upload Trivy scan results to GitHub Security tab
uses: github/codeql-action/upload-sarif@bad341350a2f5616f9e048e51360cedc49181ce8 # v2.22.4
uses: github/codeql-action/upload-sarif@b7bf0a3ed3ecfa44160715d7c442788f65f0f923 # v3.23.2
with:
sarif_file: 'trivy-results.sarif'
Generated
+637 -503
File diff suppressed because it is too large
+46 -43
@@ -3,7 +3,7 @@ name = "vaultwarden"
version = "1.0.0"
authors = ["Daniel García <dani-garcia@users.noreply.github.com>"]
edition = "2021"
rust-version = "1.71.1"
rust-version = "1.73.0"
resolver = "2"
repository = "https://github.com/dani-garcia/vaultwarden"
@@ -48,63 +48,63 @@ tracing = { version = "0.1.40", features = ["log"] } # Needed to have lettre and
dotenvy = { version = "0.15.7", default-features = false }
# Lazy initialization
once_cell = "1.18.0"
once_cell = "1.19.0"
# Numerical libraries
num-traits = "0.2.17"
num-derive = "0.4.1"
bigdecimal = "0.4.2"
# Web framework
rocket = { version = "0.5.0-rc.3", features = ["tls", "json"], default-features = false }
# rocket_ws = { version ="0.1.0-rc.3" }
rocket_ws = { git = 'https://github.com/SergioBenitez/Rocket', rev = "ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa" } # v0.5 branch
rocket = { version = "0.5.0", features = ["tls", "json"], default-features = false }
rocket_ws = { version ="0.1.0" }
# WebSockets libraries
tokio-tungstenite = "0.19.0"
tokio-tungstenite = "0.20.1"
rmpv = "1.0.1" # MessagePack library
# Concurrent HashMap used for WebSocket messaging and favicons
dashmap = "5.5.3"
# Async futures
futures = "0.3.28"
tokio = { version = "1.33.0", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
futures = "0.3.30"
tokio = { version = "1.35.1", features = ["rt-multi-thread", "fs", "io-util", "parking_lot", "time", "signal"] }
# A generic serialization/deserialization framework
serde = { version = "1.0.189", features = ["derive"] }
serde_json = "1.0.107"
serde = { version = "1.0.195", features = ["derive"] }
serde_json = "1.0.111"
# A safe, extensible ORM and Query builder
diesel = { version = "2.1.3", features = ["chrono", "r2d2"] }
diesel = { version = "2.1.4", features = ["chrono", "r2d2", "numeric"] }
diesel_migrations = "2.1.0"
diesel_logger = { version = "0.3.0", optional = true }
# Bundled/Static SQLite
libsqlite3-sys = { version = "0.26.0", features = ["bundled"], optional = true }
libsqlite3-sys = { version = "0.27.0", features = ["bundled"], optional = true }
# Crypto-related libraries
rand = { version = "0.8.5", features = ["small_rng"] }
ring = "0.17.5"
ring = "0.17.7"
# UUID generation
uuid = { version = "1.5.0", features = ["v4"] }
uuid = { version = "1.7.0", features = ["v4"] }
# Date and time libraries
chrono = { version = "0.4.31", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.8.3"
time = "0.3.30"
chrono = { version = "0.4.33", features = ["clock", "serde"], default-features = false }
chrono-tz = "0.8.5"
time = "0.3.31"
# Job scheduler
job_scheduler_ng = "2.0.4"
# Data encoding library Hex/Base32/Base64
data-encoding = "2.4.0"
data-encoding = "2.5.0"
# JWT library
jsonwebtoken = "9.0.0"
jsonwebtoken = "9.2.0"
# TOTP library
totp-lite = "2.0.0"
totp-lite = "2.0.1"
# Yubico Library
yubico = { version = "0.11.0", features = ["online-tokio"], default-features = false }
@@ -113,37 +113,34 @@ yubico = { version = "0.11.0", features = ["online-tokio"], default-features = f
webauthn-rs = "0.3.2"
# Handling of URL's for WebAuthn and favicons
url = "2.4.1"
url = "2.5.0"
# Email libraries
lettre = { version = "0.11.0", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.3.0" # URL encoding library used for URL's in the emails
lettre = { version = "0.11.3", features = ["smtp-transport", "sendmail-transport", "builder", "serde", "tokio1-native-tls", "hostname", "tracing", "tokio1"], default-features = false }
percent-encoding = "2.3.1" # URL encoding library used for URL's in the emails
email_address = "0.2.4"
# HTML Template library
handlebars = { version = "4.4.0", features = ["dir_source"] }
handlebars = { version = "5.1.1", features = ["dir_source"] }
# HTTP client (Used for favicons, version check, DUO and HIBP API)
reqwest = { version = "0.11.22", features = ["stream", "json", "deflate", "gzip", "brotli", "socks", "cookies", "trust-dns", "native-tls-alpn"] }
reqwest = { version = "0.11.23", features = ["stream", "json", "gzip", "brotli", "socks", "cookies", "trust-dns", "native-tls-alpn"] }
# Favicon extraction libraries
html5gum = "0.5.7"
regex = { version = "1.10.2", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.0"
regex = { version = "1.10.3", features = ["std", "perf", "unicode-perl"], default-features = false }
data-url = "0.3.1"
bytes = "1.5.0"
# Cache function results (Used for version check and favicon fetching)
cached = { version = "0.46.0", features = ["async"] }
cached = { version = "0.48.1", features = ["async"] }
# Used for custom short lived cookie jar during favicon extraction
cookie = "0.16.2"
cookie_store = "0.19.1"
# Used by U2F, JWT and PostgreSQL
openssl = "0.10.57"
# Set openssl-sys fixed to v0.9.92 to prevent building issues with musl, arm and 32bit pointer width
# It will force add a dynamically linked library which prevents the build from being static
openssl-sys = "=0.9.92"
openssl = "0.10.63"
# CLI argument parsing
pico-args = "0.5.0"
@@ -153,30 +150,27 @@ paste = "1.0.14"
governor = "0.6.0"
# Check client versions for specific features.
semver = "1.0.20"
semver = "1.0.21"
# Allow overriding the default memory allocator
# Mainly used for the musl builds, since the default musl malloc is very slow
mimalloc = { version = "0.1.39", features = ["secure"], default-features = false, optional = true }
which = "5.0.0"
which = "6.0.0"
# Argon2 library with support for the PHC format
argon2 = "0.5.2"
argon2 = "0.5.3"
# Reading a password from the cli for generating the Argon2id ADMIN_TOKEN
rpassword = "7.2.0"
[patch.crates-io]
rocket = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa' } # v0.5 branch
# rocket_ws = { git = 'https://github.com/SergioBenitez/Rocket', rev = 'ce441b5f46fdf5cd99cb32b8b8638835e4c2a5fa' } # v0.5 branch
rpassword = "7.3.1"
# Strip debuginfo from the release builds
# Also enable thin LTO for some optimizations
# The symbols are the provide better panic traces
# Also enable fat LTO and use 1 codegen unit for optimizations
[profile.release]
strip = "debuginfo"
lto = "thin"
lto = "fat"
codegen-units = 1
# A little bit of a speedup
@@ -187,3 +181,12 @@ split-debuginfo = "unpacked"
# This is a huge speed improvement during testing
[profile.dev.package.argon2]
opt-level = 3
# Optimize for size
[profile.release-micro]
inherits = "release"
opt-level = "z"
strip = "symbols"
lto = "fat"
codegen-units = 1
panic = "abort"
+7
@@ -92,4 +92,11 @@ Thanks for your contribution to the project!
</a>
</td>
</tr>
<tr>
<td align="center">
<a href="https://github.com/IQ333777" style="width: 75px">
<sub><b>IQ333777</b></sub>
</a>
</td>
</tr>
</table>
+7
@@ -17,6 +17,13 @@ fn main() {
"You need to enable one DB backend. To build with previous defaults do: cargo build --features sqlite"
);
// Rerun when these paths are changed.
// Someone could have checked-out a tag or specific commit, but no other files changed.
println!("cargo:rerun-if-changed=.git");
println!("cargo:rerun-if-changed=.git/HEAD");
println!("cargo:rerun-if-changed=.git/index");
println!("cargo:rerun-if-changed=.git/refs/tags");
#[cfg(all(not(debug_assertions), feature = "query_logger"))]
compile_error!("Query Logging is only allowed during development, it is not intended for production usage!");
+4 -4
@@ -1,12 +1,12 @@
---
vault_version: "v2023.10.0"
vault_image_digest: "sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935"
vault_version: "v2024.1.2"
vault_image_digest: "sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b"
# Cross Compile Docker Helper Scripts v1.3.0
# We use the linux/amd64 platform shell scripts since there is no difference between the different platform scripts
xx_image_digest: "sha256:c9609ace652bbe51dd4ce90e0af9d48a4590f1214246da5bc70e46f6dd586edc"
rust_version: 1.73.0 # Rust version to be used
rust_version: 1.75.0 # Rust version to be used
debian_version: bookworm # Debian release name to be used
alpine_version: 3.18 # Alpine version to be used
alpine_version: 3.19 # Alpine version to be used
# For which platforms/architectures will we try to build images
platforms: ["linux/amd64", "linux/arm64", "linux/arm/v7", "linux/arm/v6"]
# Determine the build images per OS/Arch
+13 -12
@@ -18,23 +18,23 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2023.10.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.10.0
# [docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935]
# $ docker pull docker.io/vaultwarden/web-vault:v2024.1.2
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.1.2
# [docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935
# [docker.io/vaultwarden/web-vault:v2023.10.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b
# [docker.io/vaultwarden/web-vault:v2024.1.2]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935 as vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b as vault
########################## ALPINE BUILD IMAGES ##########################
## NOTE: The Alpine Base Images do not support other platforms then linux/amd64
## And for Alpine we define all build images here, they will only be loaded when actually used
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.73.0 as build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.73.0 as build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.73.0 as build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.73.0 as build_armv6
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:x86_64-musl-stable-1.75.0 as build_amd64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:aarch64-musl-stable-1.75.0 as build_arm64
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:armv7-musleabihf-stable-1.75.0 as build_armv7
FROM --platform=linux/amd64 ghcr.io/blackdex/rust-musl:arm-musleabi-stable-1.75.0 as build_armv6
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
@@ -100,7 +100,8 @@ COPY . .
# Builds again, this time it will be the actual source files being build
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
touch src/main.rs && \
# Also do this for build.rs to ensure the version is rechecked
touch build.rs src/main.rs && \
# Create a symlink to the binary target folder to easy copy the binary in the final stage
cargo build --features ${DB} --profile "${CARGO_PROFILE}" --target="${CARGO_TARGET}" && \
if [[ "${CARGO_PROFILE}" == "dev" ]] ; then \
@@ -126,7 +127,7 @@ RUN source /env-cargo && \
# To uninstall: docker run --privileged --rm tonistiigi/binfmt --uninstall 'qemu-*'
#
# We need to add `--platform` here, because of a podman bug: https://github.com/containers/buildah/issues/4742
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.18
FROM --platform=$TARGETPLATFORM docker.io/library/alpine:3.19
ENV ROCKET_PROFILE="release" \
ROCKET_ADDRESS=0.0.0.0 \
+11 -9
@@ -18,15 +18,15 @@
# - From https://hub.docker.com/r/vaultwarden/web-vault/tags,
# click the tag name to view the digest of the image it currently points to.
# - From the command line:
# $ docker pull docker.io/vaultwarden/web-vault:v2023.10.0
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2023.10.0
# [docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935]
# $ docker pull docker.io/vaultwarden/web-vault:v2024.1.2
# $ docker image inspect --format "{{.RepoDigests}}" docker.io/vaultwarden/web-vault:v2024.1.2
# [docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b]
#
# - Conversely, to get the tag name from the digest:
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935
# [docker.io/vaultwarden/web-vault:v2023.10.0]
# $ docker image inspect --format "{{.RepoTags}}" docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b
# [docker.io/vaultwarden/web-vault:v2024.1.2]
#
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:419e4976921f98f1124f296ed02e68bf7f8ff29b3f1fba59e7e715228a065935 as vault
FROM --platform=linux/amd64 docker.io/vaultwarden/web-vault@sha256:ac07a71cbcd199e3c9a0639c04234ba2f1ba16cfa2a45b08a7ae27eb82f8e13b as vault
########################## Cross Compile Docker Helper Scripts ##########################
## We use the linux/amd64 no matter which Build Platform, since these are all bash scripts
@@ -35,7 +35,7 @@ FROM --platform=linux/amd64 docker.io/tonistiigi/xx@sha256:c9609ace652bbe51dd4ce
########################## BUILD IMAGE ##########################
# hadolint ignore=DL3006
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.73.0-slim-bookworm as build
FROM --platform=$BUILDPLATFORM docker.io/library/rust:1.75.0-slim-bookworm as build
COPY --from=xx / /
ARG TARGETARCH
ARG TARGETVARIANT
@@ -73,7 +73,8 @@ RUN xx-apt-get install -y \
libmariadb3 \
libpq-dev \
libpq5 \
libssl-dev && \
libssl-dev \
zlib1g-dev && \
# Force install arch dependend mariadb dev packages
# Installing them the normal way breaks several other packages (again)
apt-get download "libmariadb-dev-compat:$(xx-info debian-arch)" "libmariadb-dev:$(xx-info debian-arch)" && \
@@ -130,7 +131,8 @@ COPY . .
# Builds again, this time it will be the actual source files being build
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
touch src/main.rs && \
# Also do this for build.rs to ensure the version is rechecked
touch build.rs src/main.rs && \
# Create a symlink to the binary target folder to easy copy the binary in the final stage
cargo build --features ${DB} --profile "${CARGO_PROFILE}" --target="${CARGO_TARGET}" && \
if [[ "${CARGO_PROFILE}" == "dev" ]] ; then \
+4 -2
@@ -91,7 +91,8 @@ RUN xx-apt-get install -y \
libmariadb3 \
libpq-dev \
libpq5 \
libssl-dev && \
libssl-dev \
zlib1g-dev && \
# Force install arch dependend mariadb dev packages
# Installing them the normal way breaks several other packages (again)
apt-get download "libmariadb-dev-compat:$(xx-info debian-arch)" "libmariadb-dev:$(xx-info debian-arch)" && \
@@ -161,7 +162,8 @@ COPY . .
# Builds again, this time it will be the actual source files being build
RUN source /env-cargo && \
# Make sure that we actually build the project by updating the src/main.rs timestamp
touch src/main.rs && \
# Also do this for build.rs to ensure the version is rechecked
touch build.rs src/main.rs && \
# Create a symlink to the binary target folder to easy copy the binary in the final stage
cargo build --features ${DB} --profile "${CARGO_PROFILE}" --target="${CARGO_TARGET}" && \
if [[ "${CARGO_PROFILE}" == "dev" ]] ; then \
+9 -3
@@ -88,7 +88,7 @@ target "debian" {
inherits = ["_default_attributes"]
dockerfile = "docker/Dockerfile.debian"
tags = generate_tags("", platform_tag())
output = [join(",", flatten([["type=docker"], image_index_annotations()]))]
output = ["type=docker"]
}
// Multi Platform target, will build one tagged manifest with all supported architectures
@@ -138,7 +138,7 @@ target "alpine" {
inherits = ["_default_attributes"]
dockerfile = "docker/Dockerfile.alpine"
tags = generate_tags("-alpine", platform_tag())
output = [join(",", flatten([["type=docker"], image_index_annotations()]))]
output = ["type=docker"]
}
// Multi Platform target, will build one tagged manifest with all supported architectures
@@ -216,7 +216,13 @@ function "generate_tags" {
result = flatten([
for registry in get_container_registries() :
[for base_tag in get_base_tags() :
concat(["${registry}:${base_tag}${suffix}${platform}"])]
concat(
# If the base_tag contains latest, and the suffix contains `-alpine` add a `:alpine` tag too
base_tag == "latest" ? suffix == "-alpine" ? ["${registry}:alpine${platform}"] : [] : [],
# The default tagging strategy
["${registry}:${base_tag}${suffix}${platform}"]
)
]
])
}
+10 -2
@@ -1,12 +1,20 @@
#!/bin/sh
#!/usr/bin/env sh
# Use the value of the corresponding env var (if present),
# or a default value otherwise.
: "${DATA_FOLDER:="data"}"
: "${DATA_FOLDER:="/data"}"
: "${ROCKET_PORT:="80"}"
: "${ENV_FILE:="/.env"}"
CONFIG_FILE="${DATA_FOLDER}"/config.json
# Check if the $ENV_FILE file exist and is readable
# If that is the case, load it into the environment before running any check
if [ -r "${ENV_FILE}" ]; then
# shellcheck disable=SC1090
. "${ENV_FILE}"
fi
# Given a config key, return the corresponding config value from the
# config file. If the key doesn't exist, return an empty string.
get_config_val() {
@@ -0,0 +1 @@
ALTER TABLE attachments MODIFY file_size BIGINT NOT NULL;
@@ -0,0 +1,3 @@
ALTER TABLE attachments
ALTER COLUMN file_size TYPE BIGINT,
ALTER COLUMN file_size SET NOT NULL;
@@ -0,0 +1 @@
-- Integer size in SQLite is already i64, so we don't need to do anything
+1 -1
@@ -1,4 +1,4 @@
[toolchain]
channel = "1.73.0"
channel = "1.75.0"
components = [ "rustfmt", "clippy" ]
profile = "minimal"
+18 -14
@@ -13,7 +13,10 @@ use rocket::{
};
use crate::{
api::{core::log_event, unregister_push_device, ApiResult, EmptyResult, JsonResult, Notify, NumberOrString},
api::{
core::{log_event, two_factor},
unregister_push_device, ApiResult, EmptyResult, JsonResult, Notify,
},
auth::{decode_admin, encode_jwt, generate_admin_claims, ClientIp},
config::ConfigBuilder,
db::{backup_database, get_sql_server_version, models::*, DbConn, DbConnType},
@@ -21,6 +24,7 @@ use crate::{
mail,
util::{
docker_base_image, format_naive_datetime_local, get_display_size, get_reqwest_client, is_running_in_docker,
NumberOrString,
},
CONFIG, VERSION,
};
@@ -184,12 +188,11 @@ fn post_admin_login(data: Form<LoginForm>, cookies: &CookieJar<'_>, ip: ClientIp
let claims = generate_admin_claims();
let jwt = encode_jwt(&claims);
let cookie = Cookie::build(COOKIE_NAME, jwt)
let cookie = Cookie::build((COOKIE_NAME, jwt))
.path(admin_path())
.max_age(rocket::time::Duration::minutes(CONFIG.admin_session_lifetime()))
.same_site(SameSite::Strict)
.http_only(true)
.finish();
.http_only(true);
cookies.add(cookie);
if let Some(redirect) = redirect {
@@ -313,7 +316,7 @@ async fn test_smtp(data: Json<InviteData>, _token: AdminToken) -> EmptyResult {
#[get("/logout")]
fn logout(cookies: &CookieJar<'_>) -> Redirect {
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
cookies.remove(Cookie::build(COOKIE_NAME).path(admin_path()));
Redirect::to(admin_path())
}
@@ -343,7 +346,7 @@ async fn users_overview(_token: AdminToken, mut conn: DbConn) -> ApiResult<Html<
let mut usr = u.to_json(&mut conn).await;
usr["cipher_count"] = json!(Cipher::count_owned_by_user(&u.uuid, &mut conn).await);
usr["attachment_count"] = json!(Attachment::count_by_user(&u.uuid, &mut conn).await);
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &mut conn).await as i32));
usr["attachment_size"] = json!(get_display_size(Attachment::size_by_user(&u.uuid, &mut conn).await));
usr["user_enabled"] = json!(u.enabled);
usr["created_at"] = json!(format_naive_datetime_local(&u.created_at, DT_FMT));
usr["last_active"] = match u.last_active(&mut conn).await {
@@ -391,7 +394,7 @@ async fn delete_user(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyRe
EventType::OrganizationUserRemoved as i32,
&user_org.uuid,
&user_org.org_uuid,
String::from(ACTING_ADMIN_USER),
ACTING_ADMIN_USER,
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
@@ -410,7 +413,7 @@ async fn deauth_user(uuid: &str, _token: AdminToken, mut conn: DbConn, nt: Notif
if CONFIG.push_enabled() {
for device in Device::find_push_devices_by_user(&user.uuid, &mut conn).await {
match unregister_push_device(device.uuid).await {
match unregister_push_device(device.push_uuid).await {
Ok(r) => r,
Err(e) => error!("Unable to unregister devices from Bitwarden server: {}", e),
};
@@ -446,9 +449,10 @@ async fn enable_user(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyR
}
#[post("/users/<uuid>/remove-2fa")]
async fn remove_2fa(uuid: &str, _token: AdminToken, mut conn: DbConn) -> EmptyResult {
async fn remove_2fa(uuid: &str, token: AdminToken, mut conn: DbConn) -> EmptyResult {
let mut user = get_user_or_404(uuid, &mut conn).await?;
TwoFactor::delete_all_by_user(&user.uuid, &mut conn).await?;
two_factor::enforce_2fa_policy(&user, ACTING_ADMIN_USER, 14, &token.ip.ip, &mut conn).await?;
user.totp_recover = None;
user.save(&mut conn).await
}
@@ -518,7 +522,7 @@ async fn update_user_org_type(data: Json<UserOrgTypeData>, token: AdminToken, mu
EventType::OrganizationUserUpdated as i32,
&user_to_edit.uuid,
&data.org_uuid,
String::from(ACTING_ADMIN_USER),
ACTING_ADMIN_USER,
14, // Use UnknownBrowser type
&token.ip.ip,
&mut conn,
@@ -546,7 +550,7 @@ async fn organizations_overview(_token: AdminToken, mut conn: DbConn) -> ApiResu
org["group_count"] = json!(Group::count_by_org(&o.uuid, &mut conn).await);
org["event_count"] = json!(Event::count_by_org(&o.uuid, &mut conn).await);
org["attachment_count"] = json!(Attachment::count_by_org(&o.uuid, &mut conn).await);
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &mut conn).await as i32));
org["attachment_size"] = json!(get_display_size(Attachment::size_by_org(&o.uuid, &mut conn).await));
organizations_json.push(org);
}
@@ -786,16 +790,16 @@ impl<'r> FromRequest<'r> for AdminToken {
if requested_page.is_empty() {
return Outcome::Forward(Status::Unauthorized);
} else {
return Outcome::Failure((Status::Unauthorized, "Unauthorized"));
return Outcome::Error((Status::Unauthorized, "Unauthorized"));
}
}
};
if decode_admin(access_token).is_err() {
// Remove admin cookie
cookies.remove(Cookie::build(COOKIE_NAME, "").path(admin_path()).finish());
cookies.remove(Cookie::build(COOKIE_NAME).path(admin_path()));
error!("Invalid or expired admin JWT. IP: {}.", &ip.ip);
return Outcome::Failure((Status::Unauthorized, "Session expired"));
return Outcome::Error((Status::Unauthorized, "Session expired"));
}
Outcome::Success(Self {
+41 -45
@@ -6,12 +6,14 @@ use serde_json::Value;
use crate::{
api::{
core::log_user_event, register_push_device, unregister_push_device, AnonymousNotify, EmptyResult, JsonResult,
JsonUpcase, Notify, NumberOrString, PasswordData, UpdateType,
JsonUpcase, Notify, PasswordOrOtpData, UpdateType,
},
auth::{decode_delete, decode_invite, decode_verify_email, ClientHeaders, Headers},
crypto,
db::{models::*, DbConn},
mail, CONFIG,
mail,
util::NumberOrString,
CONFIG,
};
use rocket::{
@@ -279,8 +281,9 @@ async fn put_avatar(data: JsonUpcase<AvatarData>, headers: Headers, mut conn: Db
#[get("/users/<uuid>/public-key")]
async fn get_public_keys(uuid: &str, _headers: Headers, mut conn: DbConn) -> JsonResult {
let user = match User::find_by_uuid(uuid, &mut conn).await {
Some(user) => user,
None => err!("User doesn't exist"),
Some(user) if user.public_key.is_some() => user,
Some(_) => err_code!("User has no public_key", Status::NotFound.code),
None => err_code!("User doesn't exist", Status::NotFound.code),
};
Ok(Json(json!({
@@ -503,17 +506,15 @@ async fn post_rotatekey(data: JsonUpcase<KeyData>, headers: Headers, mut conn: D
#[post("/accounts/security-stamp", data = "<data>")]
async fn post_sstamp(
data: JsonUpcase<PasswordData>,
data: JsonUpcase<PasswordOrOtpData>,
headers: Headers,
mut conn: DbConn,
nt: Notify<'_>,
) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
data.validate(&user, true, &mut conn).await?;
Device::delete_all_by_user(&user.uuid, &mut conn).await?;
user.reset_security_stamp();
@@ -736,18 +737,16 @@ async fn post_delete_recover_token(data: JsonUpcase<DeleteRecoverTokenData>, mut
}
#[post("/accounts/delete", data = "<data>")]
async fn post_delete_account(data: JsonUpcase<PasswordData>, headers: Headers, conn: DbConn) -> EmptyResult {
async fn post_delete_account(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> EmptyResult {
delete_account(data, headers, conn).await
}
#[delete("/accounts", data = "<data>")]
async fn delete_account(data: JsonUpcase<PasswordData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: PasswordData = data.into_inner().data;
async fn delete_account(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
let data: PasswordOrOtpData = data.into_inner().data;
let user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
data.validate(&user, true, &mut conn).await?;
user.delete(&mut conn).await
}
@@ -854,20 +853,13 @@ fn verify_password(data: JsonUpcase<SecretVerificationRequest>, headers: Headers
Ok(())
}
async fn _api_key(
data: JsonUpcase<SecretVerificationRequest>,
rotate: bool,
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
async fn _api_key(data: JsonUpcase<PasswordOrOtpData>, rotate: bool, headers: Headers, mut conn: DbConn) -> JsonResult {
use crate::util::format_date;
let data: SecretVerificationRequest = data.into_inner().data;
let data: PasswordOrOtpData = data.into_inner().data;
let mut user = headers.user;
if !user.check_valid_password(&data.MasterPasswordHash) {
err!("Invalid password")
}
data.validate(&user, true, &mut conn).await?;
if rotate || user.api_key.is_none() {
user.api_key = Some(crypto::generate_api_key());
@@ -882,12 +874,12 @@ async fn _api_key(
}
#[post("/accounts/api-key", data = "<data>")]
async fn api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
async fn api_key(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, false, headers, conn).await
}
#[post("/accounts/rotate-api-key", data = "<data>")]
async fn rotate_api_key(data: JsonUpcase<SecretVerificationRequest>, headers: Headers, conn: DbConn) -> JsonResult {
async fn rotate_api_key(data: JsonUpcase<PasswordOrOtpData>, headers: Headers, conn: DbConn) -> JsonResult {
_api_key(data, true, headers, conn).await
}
@@ -921,26 +913,23 @@ impl<'r> FromRequest<'r> for KnownDevice {
let email_bytes = match data_encoding::BASE64URL_NOPAD.decode(email_b64.as_bytes()) {
Ok(bytes) => bytes,
Err(_) => {
return Outcome::Failure((
Status::BadRequest,
"X-Request-Email value failed to decode as base64url",
));
return Outcome::Error((Status::BadRequest, "X-Request-Email value failed to decode as base64url"));
}
};
match String::from_utf8(email_bytes) {
Ok(email) => email,
Err(_) => {
return Outcome::Failure((Status::BadRequest, "X-Request-Email value failed to decode as UTF-8"));
return Outcome::Error((Status::BadRequest, "X-Request-Email value failed to decode as UTF-8"));
}
}
} else {
return Outcome::Failure((Status::BadRequest, "X-Request-Email value is required"));
return Outcome::Error((Status::BadRequest, "X-Request-Email value is required"));
};
let uuid = if let Some(uuid) = req.headers().get_one("X-Device-Identifier") {
uuid.to_string()
} else {
return Outcome::Failure((Status::BadRequest, "X-Device-Identifier value is required"));
return Outcome::Error((Status::BadRequest, "X-Device-Identifier value is required"));
};
Outcome::Success(KnownDevice {
@@ -963,26 +952,33 @@ async fn post_device_token(uuid: &str, data: JsonUpcase<PushToken>, headers: Hea
#[put("/devices/identifier/<uuid>/token", data = "<data>")]
async fn put_device_token(uuid: &str, data: JsonUpcase<PushToken>, headers: Headers, mut conn: DbConn) -> EmptyResult {
if !CONFIG.push_enabled() {
return Ok(());
}
let data = data.into_inner().data;
let token = data.PushToken;
let mut device = match Device::find_by_uuid_and_user(&headers.device.uuid, &headers.user.uuid, &mut conn).await {
Some(device) => device,
None => err!(format!("Error: device {uuid} should be present before a token can be assigned")),
};
device.push_token = Some(token);
if device.push_uuid.is_none() {
device.push_uuid = Some(uuid::Uuid::new_v4().to_string());
// if the device already has been registered
if device.is_registered() {
// check if the new token is the same as the registered token
if device.push_token.is_some() && device.push_token.unwrap() == token.clone() {
debug!("Device {} is already registered and token is the same", uuid);
return Ok(());
} else {
// Try to unregister already registered device
let _ = unregister_push_device(device.push_uuid).await;
}
// clear the push_uuid
device.push_uuid = None;
}
device.push_token = Some(token);
if let Err(e) = device.save(&mut conn).await {
err!(format!("An error occurred while trying to save the device push token: {e}"));
}
if let Err(e) = register_push_device(headers.user.uuid, device).await {
err!(format!("An error occurred while proceeding registration of a device: {e}"));
}
register_push_device(&mut device, &mut conn).await?;
Ok(())
}
@@ -999,7 +995,7 @@ async fn put_clear_device_token(uuid: &str, mut conn: DbConn) -> EmptyResult {
if let Some(device) = Device::find_by_uuid(uuid, &mut conn).await {
Device::clear_push_token_by_uuid(uuid, &mut conn).await?;
unregister_push_device(device.uuid).await?;
unregister_push_device(device.push_uuid).await?;
}
Ok(())
+87 -56
File diff suppressed because it is too large
+47 -27
@@ -5,11 +5,13 @@ use serde_json::Value;
use crate::{
api::{
core::{CipherSyncData, CipherSyncType},
EmptyResult, JsonResult, JsonUpcase, NumberOrString,
EmptyResult, JsonResult, JsonUpcase,
},
auth::{decode_emergency_access_invite, Headers},
db::{models::*, DbConn, DbPool},
mail, CONFIG,
mail,
util::NumberOrString,
CONFIG,
};
pub fn routes() -> Vec<Route> {
@@ -18,6 +20,7 @@ pub fn routes() -> Vec<Route> {
get_grantees,
get_emergency_access,
put_emergency_access,
post_emergency_access,
delete_emergency_access,
post_delete_emergency_access,
send_invite,
@@ -37,42 +40,59 @@ pub fn routes() -> Vec<Route> {
// region get
#[get("/emergency-access/trusted")]
async fn get_contacts(headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
async fn get_contacts(headers: Headers, mut conn: DbConn) -> Json<Value> {
if !CONFIG.emergency_access_allowed() {
return Json(json!({
"Data": [{
"Id": "",
"Status": 2,
"Type": 0,
"WaitTimeDays": 0,
"GranteeId": "",
"Email": "",
"Name": "NOTE: Emergency Access is disabled!",
"Object": "emergencyAccessGranteeDetails",
}],
"Object": "list",
"ContinuationToken": null
}));
}
let emergency_access_list = EmergencyAccess::find_all_by_grantor_uuid(&headers.user.uuid, &mut conn).await;
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantee_details(&mut conn).await);
}
Ok(Json(json!({
Json(json!({
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
})))
}))
}
#[get("/emergency-access/granted")]
async fn get_grantees(headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
let emergency_access_list = EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &mut conn).await;
async fn get_grantees(headers: Headers, mut conn: DbConn) -> Json<Value> {
let emergency_access_list = if CONFIG.emergency_access_allowed() {
EmergencyAccess::find_all_by_grantee_uuid(&headers.user.uuid, &mut conn).await
} else {
Vec::new()
};
let mut emergency_access_list_json = Vec::with_capacity(emergency_access_list.len());
for ea in emergency_access_list {
emergency_access_list_json.push(ea.to_json_grantor_details(&mut conn).await);
}
Ok(Json(json!({
Json(json!({
"Data": emergency_access_list_json,
"Object": "list",
"ContinuationToken": null
})))
}))
}
#[get("/emergency-access/<emer_id>")]
async fn get_emergency_access(emer_id: &str, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emergency_access) => Ok(Json(emergency_access.to_json_grantee_details(&mut conn).await)),
@@ -103,7 +123,7 @@ async fn post_emergency_access(
data: JsonUpcase<EmergencyAccessUpdateData>,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let data: EmergencyAccessUpdateData = data.into_inner().data;
@@ -133,7 +153,7 @@ async fn post_emergency_access(
#[delete("/emergency-access/<emer_id>")]
async fn delete_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let grantor_user = headers.user;
@@ -169,7 +189,7 @@ struct EmergencyAccessInviteData {
#[post("/emergency-access/invite", data = "<data>")]
async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let data: EmergencyAccessInviteData = data.into_inner().data;
let email = data.Email.to_lowercase();
@@ -252,7 +272,7 @@ async fn send_invite(data: JsonUpcase<EmergencyAccessInviteData>, headers: Heade
#[post("/emergency-access/<emer_id>/reinvite")]
async fn resend_invite(emer_id: &str, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
@@ -312,7 +332,7 @@ struct AcceptData {
#[post("/emergency-access/<emer_id>/accept", data = "<data>")]
async fn accept_invite(emer_id: &str, data: JsonUpcase<AcceptData>, headers: Headers, mut conn: DbConn) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let data: AcceptData = data.into_inner().data;
let token = &data.Token;
@@ -395,7 +415,7 @@ async fn confirm_emergency_access(
headers: Headers,
mut conn: DbConn,
) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let confirming_user = headers.user;
let data: ConfirmData = data.into_inner().data;
@@ -444,7 +464,7 @@ async fn confirm_emergency_access(
#[post("/emergency-access/<emer_id>/initiate")]
async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let initiating_user = headers.user;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
@@ -484,7 +504,7 @@ async fn initiate_emergency_access(emer_id: &str, headers: Headers, mut conn: Db
#[post("/emergency-access/<emer_id>/approve")]
async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
@@ -522,7 +542,7 @@ async fn approve_emergency_access(emer_id: &str, headers: Headers, mut conn: DbC
#[post("/emergency-access/<emer_id>/reject")]
async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let mut emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
@@ -565,7 +585,7 @@ async fn reject_emergency_access(emer_id: &str, headers: Headers, mut conn: DbCo
#[post("/emergency-access/<emer_id>/view")]
async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
Some(emer) => emer,
@@ -602,7 +622,7 @@ async fn view_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn
#[post("/emergency-access/<emer_id>/takeover")]
async fn takeover_emergency_access(emer_id: &str, headers: Headers, mut conn: DbConn) -> JsonResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let requesting_user = headers.user;
let emergency_access = match EmergencyAccess::find_by_uuid(emer_id, &mut conn).await {
@@ -645,7 +665,7 @@ async fn password_emergency_access(
headers: Headers,
mut conn: DbConn,
) -> EmptyResult {
check_emergency_access_allowed()?;
check_emergency_access_enabled()?;
let data: EmergencyAccessPasswordData = data.into_inner().data;
let new_master_password_hash = &data.NewMasterPasswordHash;
@@ -722,9 +742,9 @@ fn is_valid_request(
&& emergency_access.atype == requested_access_type as i32
}
fn check_emergency_access_allowed() -> EmptyResult {
fn check_emergency_access_enabled() -> EmptyResult {
if !CONFIG.emergency_access_allowed() {
err!("Emergency access is not allowed.")
err!("Emergency access is not enabled.")
}
Ok(())
}
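The diff above applies two patterns when emergency access is disabled: read-only endpoints keep returning HTTP 200 with a stub payload so clients do not break, while mutating endpoints fail early via check_emergency_access_enabled(). Below is a minimal standalone sketch of both patterns, with the Rocket handlers and the CONFIG lookup replaced by plain functions and a constant; only the payload and error message are taken from the diff.

// Illustrative sketch, not the actual Vaultwarden handlers.
use serde_json::{json, Value};

const EMERGENCY_ACCESS_ALLOWED: bool = false; // stand-in for CONFIG.emergency_access_allowed()

// Read-only endpoint: respond with a stub entry instead of an error.
fn get_contacts_stub() -> Value {
    if !EMERGENCY_ACCESS_ALLOWED {
        return json!({
            "Data": [{
                "Id": "",
                "Status": 2,
                "Type": 0,
                "WaitTimeDays": 0,
                "GranteeId": "",
                "Email": "",
                "Name": "NOTE: Emergency Access is disabled!",
                "Object": "emergencyAccessGranteeDetails",
            }],
            "Object": "list",
            "ContinuationToken": null
        });
    }
    json!({ "Data": [], "Object": "list", "ContinuationToken": null })
}

// Mutating endpoints: keep a hard guard and fail early when the feature is off.
fn check_emergency_access_enabled() -> Result<(), String> {
    if !EMERGENCY_ACCESS_ALLOWED {
        return Err("Emergency access is not enabled.".into());
    }
    Ok(())
}

fn main() {
    println!("{}", get_contacts_stub());
    println!("{:?}", check_emergency_access_enabled());
}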
+2 -2
@@ -263,7 +263,7 @@ pub async fn log_event(
event_type: i32,
source_uuid: &str,
org_uuid: &str,
act_user_uuid: String,
act_user_uuid: &str,
device_type: i32,
ip: &IpAddr,
conn: &mut DbConn,
@@ -271,7 +271,7 @@ pub async fn log_event(
if !CONFIG.org_events_enabled() {
return;
}
_log_event(event_type, source_uuid, org_uuid, &act_user_uuid, device_type, None, ip, conn).await;
_log_event(event_type, source_uuid, org_uuid, act_user_uuid, device_type, None, ip, conn).await;
}
#[allow(clippy::too_many_arguments)]
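The signature change above swaps an owned String for a borrowed &str, so callers can pass a reference to the user uuid without cloning. A trivial illustration of the calling convention (names are placeholders, not the real log_event signature):

fn log_event(act_user_uuid: &str) {
    // The real function forwards to _log_event; here we just print.
    println!("event by {act_user_uuid}");
}

fn main() {
    let user_uuid = String::from("3fa85f64-5717-4562-b3fc-2c963f66afa6");
    // Before: the caller had to hand over (or clone) the String; now a borrow suffices.
    log_event(&user_uuid);
}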
+6 -12
@@ -13,7 +13,6 @@ pub use ciphers::{purge_trashed_ciphers, CipherData, CipherSyncData, CipherSyncT
pub use emergency_access::{emergency_notification_reminder_job, emergency_request_timeout_job};
pub use events::{event_cleanup_job, log_event, log_user_event};
pub use sends::purge_sends;
pub use two_factor::send_incomplete_2fa_notifications;
pub fn routes() -> Vec<Route> {
let mut eq_domains_routes = routes![get_eq_domains, post_eq_domains, put_eq_domains];
@@ -47,15 +46,14 @@ pub fn events_routes() -> Vec<Route> {
//
// Move this somewhere else
//
use rocket::{serde::json::Json, Catcher, Route};
use serde_json::Value;
use rocket::{serde::json::Json, serde::json::Value, Catcher, Route};
use crate::{
api::{JsonResult, JsonUpcase, Notify, UpdateType},
auth::Headers,
db::DbConn,
error::Error,
util::get_reqwest_client,
util::{get_reqwest_client, parse_experimental_client_feature_flags},
};
#[derive(Serialize, Deserialize, Debug)]
@@ -193,6 +191,7 @@ fn version() -> Json<&'static str> {
#[get("/config")]
fn config() -> Json<Value> {
let domain = crate::CONFIG.domain();
let feature_states = parse_experimental_client_feature_flags(&crate::CONFIG.experimental_client_feature_flags());
Json(json!({
// Note: The clients use this version to handle backwards compatibility concerns
// This means they expect a version that closely matches the Bitwarden server version
@@ -203,7 +202,8 @@ fn config() -> Json<Value> {
"gitHash": option_env!("GIT_REV"),
"server": {
"name": "Vaultwarden",
"url": "https://github.com/dani-garcia/vaultwarden"
"url": "https://github.com/dani-garcia/vaultwarden",
"version": crate::VERSION
},
"environment": {
"vault": domain,
@@ -212,13 +212,7 @@ fn config() -> Json<Value> {
"notifications": format!("{domain}/notifications"),
"sso": "",
},
"featureStates": {
// Any feature flags that we want the clients to use
// Can check the enabled ones at:
// https://vault.bitwarden.com/api/config
"autofill-v2": true,
"fido2-vault-credentials": true
},
"featureStates": feature_states,
"object": "config",
}))
}
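The hard-coded featureStates map is replaced by the output of parse_experimental_client_feature_flags. As a rough illustration only (this is an assumption, not the actual Vaultwarden implementation), such a parser could turn a comma-separated config value into a name -> bool map that serializes into featureStates:

// Hypothetical sketch of a feature-flag parser; the real util function may differ.
use std::collections::HashMap;

fn parse_feature_flags(value: &str) -> HashMap<String, bool> {
    value
        .split(',')
        .map(str::trim)
        .filter(|flag| !flag.is_empty())
        .map(|flag| (flag.to_string(), true))
        .collect()
}

fn main() {
    let flags = parse_feature_flags("autofill-v2,fido2-vault-credentials");
    // Serialized into the "featureStates" field of the /api/config response.
    println!("{flags:?}");
}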

Some files were not shown because too many files have changed in this diff.