Remove trailing whitespace

Trailing whitespace is generally considered harmful and untidy in both
code and text-based documentation.  Remove the instances of it in our
codebase.
brian m. carlson 2018-10-03 20:34:13 +00:00
parent b8f9b11a2a
commit b0e5bac13f
GPG Key ID: 2D0C9BC12F82B3A1
13 changed files with 92 additions and 92 deletions

@@ -4,11 +4,11 @@
1. Run the dockers
./docker/run_dockers.bsh
2. **Enjoy** all your new package files in
./repos/
### Slightly longer version ###
1. Generate GPG keys for everything (See GPG Signing)
@@ -16,7 +16,7 @@
4. Generate git-lfs/repo packages and sign all packages
./docker/run_dockers.bsh
5. Host the `/repo` on the `REPO_HOSTNAME` server
6. Test the repos and git-lfs in a client environment
@@ -24,23 +24,23 @@
## Using the Dockers ##
All docker commands need to either be run as root **or** as a user with docker
permissions. Adding your user name to the docker group (or setting up boot2docker
environment) is probably the easiest.
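For example, on most Linux setups that means adding yourself to the `docker` group and logging back in (a sketch; the group name and commands vary by distro):
```
sudo usermod -aG docker "$USER"
# log out and back in (or run `newgrp docker`) for the change to take effect
```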
For Mac and Windows users, the git-lfs repo needs to be in your Users directory
or else boot2docker magic won't work. Alternatively, you could add additional
mount points like
[this](http://stackoverflow.com/questions/26639968/boot2docker-startup-script-to-mount-local-shared-folder-with-host)
### Running Dockers ###
In order to run the dockers, the docker has to be run with a
lot of arguments to get the mount points right, etc... A convenient script is
supplied to make this all easy. Simply run
./docker/run_dockers.bsh
All the images are pulled automatically, and then run.
To only run certain docker images, supply them as arguments, e.g.
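For instance, assuming the image naming pattern described below (the exact dockerfile names here are illustrative):
```
./docker/run_dockers.bsh ./docker/git-lfs_centos_7.dockerfile ./docker/git-lfs_debian_8.dockerfile
```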
@@ -54,12 +54,12 @@ And only those images will be run.
### Development in Dockers ###
Sometimes you don't want to just build git-lfs and destroy the container, you
want to get in there, run a lot of commands, debug, develop, etc... To do this,
the best command to run is bash, and then you have an interactive shell to use
./docker/run_dockers.bsh {image name(s)} -- bash
After listing the image(s) you want to run, add a double dash (--) and then any
command (and arguments) you want executed in the docker. Remember, the command
you are executing has to be in the docker image.
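For example, a hypothetical interactive session in the CentOS 7 building image (the image name is illustrative):
```
./docker/run_dockers.bsh ./docker/git-lfs_centos_7.dockerfile -- bash
```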
@@ -71,33 +71,33 @@ There are currently three types of docker images:
git-lfs and save the package/repository in the `/repo` directory. This image
also signs all rpms/debs if gpg signing is set up
2. Environment building images: `{OS_NAME}_{OS_VERSION}_env` -
These build or install the environment (dependencies) for building git-lfs. These
are mostly important for CentOS because without these, many dependencies have
to be built by a developer. These containers should create packages for these
dependencies and place them in `/repo`
3. Testing images: `{OS_NAME}_{OS_VERSION}_test` - These images
should install the repo and download the git-lfs packages and dependencies to test
that everything is working, including the GPG signatures. Unlike the first two types,
testing images are not guaranteed to work without GPG signatures. They should
also run the test and integration scripts after installing git-lfs to verify
everything is working in a **non-developer** setup. (With the exception that go
is needed to build the tests...)
The default behavior for `./docker/run_dockers.bsh`
is to run all of the _building images_. These
containers will use the currently checked-out version of git-lfs and copy it
into the docker, and run `git clean -xdf` to remove any non-tracked files,
(but non-committed changes are kept). git-lfs is built, and a packages/repo is
created for each container.
These are all a developer would need to test the different OSes and create the
git-lfs rpm or deb packages in the `/repo` directory.
In order to distribute git-lfs **and** build dependencies, the dependencies
that were built to create the docker images need to be saved too. Most of these
are downloaded by yum/apt-get and do not need to be saved, but a few are not.
In order to save the necessary dependencies, call `./docker/run_dockers.bsh` on
`{OS_NAME}_{OS_VERSION}_env` and the rpms
will be extracted from the images and saved in the `./repo` directory.
(This _can_ be done in one command)
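For example, building the CentOS 7 environment packages and then the git-lfs packages in one go (dockerfile names are illustrative):
```
./docker/run_dockers.bsh ./docker/git-lfs_centos_7_env.dockerfile ./docker/git-lfs_centos_7.dockerfile
```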
@@ -121,8 +121,8 @@ of the `run_dockers.bsh` script.
the lfs dockers. If set to 0, it will not check to see if a new pull is needed,
and you will always run off of your currently cached docker images.
`AUTO_REMOVE` - Default 1. Docker containers are automatically deleted on
exit. If set to 0, the docker containers will not be automatically deleted upon
exit. This can be useful for a post mortem analysis (using other docker commands
not covered here). Just make sure you clean up the docker containers manually.
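For example, a sketch of a post mortem session (the image name is illustrative):
```
AUTO_REMOVE=0 ./docker/run_dockers.bsh ./docker/git-lfs_centos_7.dockerfile
docker ps -a                      # find the stopped container
docker logs {container name/id}   # inspect what happened
docker rm {container name/id}     # clean up manually when done
```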
@@ -146,8 +146,8 @@ The two major packages included are:
`git-lfs-repo-release....*` - A package to install the repo.
When building, all **untracked** files are removed during RPM generation (except
that any stray directories containing a .git folder will not be cleared. This
shouldn't be the case, unless you are temporarily storing another git repo in
the git repo. This is a safety mechanism in git, so just keep in mind if you
are producing packages.)
@@ -157,9 +157,9 @@ The git-lfs-repo-release must contain the URL where the repo is to be hosted.
The current default value is `git-lfs.github.com` but this can be overridden
using the `REPO_HOSTNAME` env var, e.g.
export REPO_HOSTNAME=www.notgithub.uk.co
./docker/run_dockers.bsh
Now all the `git-lfs-repo-release....*` files will point to that URL instead
_Hint_: `REPO_HOSTNAME` can also be `www.notgithub.uk.co:2213/not_root_dir`
@@ -170,7 +170,7 @@ To test that all the OSes can download the packages, install, and run the tests
again, run
./test_dockers.bsh
(which is basically just `./docker/run_dockers.bsh ./docker/git-lfs-test_*`)
Remember to set `REPO_HOSTNAME` if you changed it for `./docker/build_docker.bsh`
@@ -189,10 +189,10 @@ or
## GPG signing ##
For private repo testing, GPG signing can be skipped. apt-get and yum can
install .deb/.rpm directly without gpg keys and everything will work (with
certain flags). This section is for distribution in a repo. Most if not all of
this functionality is automatically disabled when there is no signing key
(`./docker/git-lfs_*.key`).
In order to sign packages, you need to generate and place GPG keys in the right
@@ -210,11 +210,11 @@ place. The general procedure for this is
8. O for Okay
9. Enter a secure password, make sure you will not forget it
10. Generate Entropy!
gpg --export-secret-key '<key ID>!' > filename.key
e.g. `gpg --export-secret-key '547CF247!' > ./docker/git-lfs_centos_7.key`
*NOTE*: the **!** is important in this command
Keep in mind, .key files must NEVER be accidentally committed to the repo.
@@ -229,37 +229,37 @@ that docker and save to the `/key` directory
To prevent MANY passphrase entries at random times, a gpg-agent docker is used to
cache your signing key. This is done automatically for you, whenever you call
`./docker/run_dockers.bsh` on a building image (`git-lfs_*.dockerfile`). It can
be manually preloaded by calling `./docker/gpg-agent_preload.bsh`. It will ask
you for your passphrase, once for each unique key out of all the dockers. So if
you use the same key for every docker, it will only prompt once. If you have 5
different keys, you'll have five prompts, with only the key ID to tell you which
is which.
The gpg agent TTL is set to 1 year. If this is not acceptable for you, set the
`GPG_MAX_CACHE` and `GPG_DEFAULT_CACHE` environment variables (in seconds) before
starting the gpg-agent daemon.
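For example, to cache passphrases for one day instead (values are in seconds):
```
export GPG_DEFAULT_CACHE=86400
export GPG_MAX_CACHE=86400
./docker/gpg-agent_start.bsh
```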
`./docker/gpg-agent_start.bsh` starts the gpg-agent daemon. It is called
automatically by `./docker/gpg-agent_preload.bsh`
`./docker/gpg-agent_stop.bsh` stops the gpg-agent daemon. It is called
automatically by `./docker/gpg-agent_preload.bsh`
`./docker/gpg-agent_preload.bsh` is called automatically by
`./docker/run_dockers.bsh` when running any of the signing dockers.
`./docker/gpg-agent_preload.bsh -r` - Stops and restarts the gpg agent daemon.
This is useful for reloading keys when you update them in your host.
### GPG capabilities by Distro ###
Every distro has its own GPG signing capability. This is why every signing
docker (`git-lfs_*.dockerfile`) can have an associated key (`git-lfs_*.key`)
Debian **will** work with 4096 bit RSA signing subkeys like [1] suggests, but will
also work with 4096 bit RSA signing keys.
CentOS will **not** work with subkeys[3]. CentOS 6 and 7 will work with 4096 bit
RSA signing keys.
You can make a 4096 RSA key for Debian and CentOS 6/7 (4 for step 1 above, and
@@ -275,11 +275,11 @@ You can make a 4096 RSA key for Debian and CentOS 6/7 (4 for step 1 above, and
## Adding additional OSes ##
To add another operating system, it needs to be added to the lfs_dockers
repo and uploaded to docker hub. Then all that is left is to add it to the
IMAGES list in `run_dockers.bsh` and `test_dockers.bsh`
Follow the already existing pattern `{OS NAME}_{OS VERSION #}` where
**{OS NAME}** and **{OS VERSION #}** should not contain underscores (\_).
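As a sketch, the addition might look like this (the exact form of the IMAGES list in the scripts may differ):
```
# in run_dockers.bsh and test_dockers.bsh (illustrative)
IMAGES="centos_6 centos_7 debian_7 debian_8 mynewos_1"
```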
## Docker Cheat sheet ##
@@ -289,15 +289,15 @@ Install https://docs.docker.com/installation/
* list running dockers
docker ps
* list stopped dockers too
docker ps -a
* Remove all stopped dockers
docker rm $(docker ps --filter=status=exited -q)
* List docker images
docker images
@@ -305,7 +305,7 @@ Install https://docs.docker.com/installation/
* Remove unused docker images
docker rmi $(docker images -a --filter=dangling=true -q)
* Run another command (like bash) in a running docker
docker exec -i {docker name} {command}
@@ -329,19 +329,19 @@ ignoring many Ctrl+C's
name/id and then used `docker stop` (signal 15) or `docker kill`
(signal 9) to stop the docker. You can also use 'docker exec' to start another
bash or kill command inside that container
2. How do I re-enter a docker after it failed/succeeded?
Dockers are immediately deleted upon exit. The best way to work in a docker
is to run bash (See Development in Dockers). This will let you run the
main build command and then continue.
3. That answer's not good enough. How do I resume a docker?
Well, first you have to set the environment variable `AUTO_REMOVE=0`
before running the image you want to resume. This will keep the docker
around after stopping. (Be careful! They multiply like rabbits.) Then
docker commit {container name/id} {new_name}
Then you can `docker run` that new image.
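Putting that together, a hypothetical resume session (image and names are illustrative):
```
AUTO_REMOVE=0 ./docker/run_dockers.bsh ./docker/git-lfs_centos_7.dockerfile
docker commit {container name/id} resumed_image
docker run -it resumed_image bash
```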

@@ -13,6 +13,6 @@ CONTAINER_NAME=git-lfs-gpg
if [ "$(docker inspect -f {{.State.Running}} ${CONTAINER_NAME})" != "true" ]; then
OTHER_OPTIONS=("-e" "GPG_DEFAULT_CACHE=${GPG_DEFAULT_CACHE:-31536000}")
OTHER_OPTIONS+=("-e" "GPG_MAX_CACHE=${GPG_MAX_CACHE:-31536000}")
${SUDO} docker run -d -t "${OTHER_OPTIONS[@]}" --name ${CONTAINER_NAME} ${IMAGE_NAME}
fi

@@ -26,7 +26,7 @@ simplest use case: single branch locking. The API is designed to be extensible
as we experiment with more advanced locking scenarios, as defined in the
[original proposal](/docs/proposals/locking.md).
The [Batch API's `ref` property docs](./batch.md#ref-property) describe how the `ref` property can be used to support auth schemes that include the server ref. Locking API implementations should also only use it for authentication, until advanced locking scenarios have been developed.
## Create Lock

@@ -20,10 +20,10 @@ wheezy. On wheezy it requires `wheezy-backports` versions of `dh-golang`,
## Building an rpm
An rpm package can be built by running ```./rpm/build_rpms.bsh```. All
dependencies will be downloaded, compiled, and installed for you, provided
you have sudo/root permissions. The resulting ./rpm/RPMS/x86_64/git-lfs*.rpm
can be installed using ```yum install``` or distributed.
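For example (the exact rpm filename depends on the version built):
```
./rpm/build_rpms.bsh
sudo yum install ./rpm/RPMS/x86_64/git-lfs*.rpm
```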
- CentOS 7 - build_rpms.bsh will take care of everything. You only need the
git-lfs rpm

@@ -14,7 +14,7 @@ options below.
## OPTIONS
* `--manual` `-m`
Print instructions for manually updating your hooks to include git-lfs
functionality. Use this option if `git lfs update` fails because of existing
hooks and you want to retain their functionality.

@@ -115,7 +115,7 @@ support:
This is useful because it provides a reminder that the user should be
locking the file before they start to edit it, to avoid the case of an unexpected
merge later on.
I've done some tests with chmod and discovered:
@@ -151,7 +151,7 @@ I've done some tests with chmod and discovered:
* Calls `post-checkout` with pre/post SHA and branch=1 (even though it's a plain SHA)
* Checkout named files (e.g. discard changes)
* Calls `post-checkout` with identical pre/post SHA (HEAD) and branch=0
* Reset all files (discard all changes, i.e. git reset --hard HEAD)
* Doesn't call `post-checkout` - could restore write bit, but must have been
set anyway for file to be edited, so not a problem?
* Reset a branch to a previous commit
@@ -161,7 +161,7 @@ I've done some tests with chmod and discovered:
* Rebase a branch with lockable files (non-conflicting)
* Merge conflicts - fix then commit
* Rebase conflicts - fix then continue
*
## Implementation details (Initial simple API-only pass)

@@ -18,12 +18,12 @@ are optional extras so there are no breaking changes to the API.
The current HTTP GET/PUT system will remain the default. When a version of the
git-lfs client supports alternative transfer mechanisms, it notifies the server
in the API request using the `accept-transfers` field.
If the server also supports one of the mechanisms the client advertised, it may
select one and alter the upload / download URLs to point at resources
compatible with this transfer mechanism. It must also indicate the chosen
transfer mechanism in the response using the `transfer` field.
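As an illustration of that negotiation, a batch exchange might look like this (only the `accept-transfers` and `transfer` field names come from this proposal; the endpoint and JSON shape shown are assumptions):
```
# hypothetical client request (abridged)
curl -X POST "$LFS_URL/objects/batch" \
  -H 'Accept: application/vnd.git-lfs+json' \
  -d '{"operation": "download", "accept-transfers": ["tus", "basic"], "objects": [...]}'
# a server that supports tus.io could reply with "transfer": "tus"
# and upload/download URLs pointing at tus-compatible resources
```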
The URLs provided in this case may not be HTTP; they may be custom protocols.
It is up to each individual transfer mechanism to define how URLs are used.
@@ -32,20 +32,20 @@ It is up to each individual transfer mechanism to define how URLs are used.
### Phase 1: refactoring & abstraction
1. Introduce a new concept of 'transfer adapter'.
2. Adapters can provide either upload or download support, or both. This is
necessary because some mechanisms are unidirectional, e.g. HTTP Content-Range
is download only, tus.io is upload only.
3. Refactor our current HTTP GET/PUT mechanism to be the default implementation
for both upload & download
4. The LFS core will pass oids to transfer to this adapter in bulk, and receive
events back from the adapter for transfer progress, and file completion.
5. Each adapter is responsible for its own parallelism, but should respect the
`lfs.concurrenttransfers` setting. For example the default (current) approach
will parallelise on files (oids), but others may parallelise in other ways
e.g. downloading multiple parts of the same file at once
6. Each adapter should store its own temporary files. On file completion it must
notify the core, which in the case of a download is then responsible for
moving a completed file into permanent storage.
7. Update the core to have a registry of available transfer mechanisms which it
passes to the API, and can recognise a chosen one in the response. Default
@@ -71,8 +71,8 @@ Because Go is statically linked it's not possible to extend client functionality
at runtime through loading libraries, so instead I propose allowing an external
process to be invoked, and communicated with via a defined stream protocol. This
protocol will be logically identical to the internal adapters; the core passing
oids and receiving back progress and completion notifications; just that the
implementation will be in an external process and the messages will be
serialised over streams.
Only one process will be launched and will remain for the entire period of all
@@ -83,7 +83,7 @@ multiple transfers at once.
1. Build a generic 'external' adapter which can invoke a named process and
communicate with it using the standard stream protocol (probably just over
stdout / stdin)
2. Establish a configuration for external adapters; minimum is an identifier
(client and server must agree on what that is) and a path to invoke (see the
configuration sketch after this list)
3. Implement a small test process in Go which simply wraps the default HTTP
mechanism in an external process, to prove the approach (not in release)
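A hypothetical configuration sketch for step 2 (the key names are assumptions, not a settled design):
```
# identifier the client and server agree on, plus the process to invoke
git config lfs.customtransfer.myagent.path /usr/local/bin/my-lfs-agent
```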

@@ -81,7 +81,7 @@ BUILD_LOCAL=1 ./build_rpms.bsh
### Troubleshooting ###
**Q**) I ran build_rpms.bsh as root and now there are root owned files in the
rpm dir
**A**) That happens. Either run build_rpms.bsh as a user with sudo permissions

@@ -5,7 +5,7 @@ Summary: Packges for git-lfs for Enterprise Linux repository configuratio
Group: System Environment/Base
License: MIT
%if 0%{?fedora}
URL: https://git-lfs.github.com/fedora/%{fedora}/
%elseif 0%{?rhel}
URL: https://git-lfs.github.com/centos/%{rhel}/

@@ -12,7 +12,7 @@ BuildRequires: patch, libyaml-devel, glibc-headers, autoconf, gcc-c++, glibc-dev
Provides: gem = %{version}-%{release}
%description
A dynamic, open source programming language with a focus on simplicity and productivity. It has an elegant syntax that is natural to read and easy to write.
%prep
%setup -q

@@ -1,32 +1,32 @@
# Git LFS Server API compliance test utility
This package exists to provide automated testing of server API implementations,
to ensure that they conform to the behaviour expected by the client. You can
run this utility against any server that implements the Git LFS API.
## Automatic or data-driven testing
This utility is primarily intended to test the API implementation, but in order
to correctly test the responses, the tests have to know what objects exist on
the server already and which don't.
In 'automatic' mode, the tests require that both the API and the content server
it links to via upload and download links are available & free to use.
The content server must be empty at the start of the tests, and the tests will
upload some data as part of the tests. Therefore obviously this cannot be a
production system.
Alternatively, in 'data-driven' mode, the tests must be provided with a list of
object IDs that already exist on the server (minimum 10), and a list of other
object IDs that are known not to exist. The test will use these IDs to
construct its data sets, will only call the API (not the content server), and
thus will not update any data - meaning you can in theory run this against a
production system.
## Calling the test tool
```
git-lfs-test-server-api [--url=<apiurl> | --clone=<cloneurl>]
[<oid-exists-file> <oid-missing-file>]
[--save=<fileprefix>]
```
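For example, a data-driven run against a hypothetical server (the URL and file names are illustrative):
```
git-lfs-test-server-api --url=https://lfs.example.com/api oids-exist.txt oids-missing.txt
```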

@@ -44,7 +44,7 @@ begin_test "post-commit"
# files should remain writeable since locked
assert_file_writeable pcfile1.dat
assert_file_writeable pcfile2.dat
)
end_test

@@ -610,7 +610,7 @@ begin_test "track: escaped pattern in .gitattributes"
[ "Tracking \"$filename\"" = "$(git lfs track "$filename")" ]
[ "\"$filename\" already supported" = "$(git lfs track "$filename")" ]
#changing flags should track the file again
[ "Tracking \"$filename\"" = "$(git lfs track -l "$filename")" ]