fix merge conflict

risk danger olson 2016-07-08 09:44:23 -06:00
commit 3718379a97
867 changed files with 31653 additions and 8971 deletions

3
.gitignore vendored

@@ -1,5 +1,6 @@
bin/
benchmark/
out/
# only allow man/*.\d.ronn files
man/*
@@ -24,3 +25,5 @@ docker/*.key
src
commands/mancontent_gen.go
lfstest-*

@@ -15,7 +15,6 @@ matrix:
fast_finish: true
allow_failures:
- os: osx
go: 1.6
include:
- env: git-latest
os: linux

@@ -1,5 +1,30 @@
# Git LFS Changelog
## 1.2.1 (2 June 2016)
### Features
* Add missing config details to `env` command #1217 (@sinbad)
* Allow smudge filter to return 0 on download failure #1213 (@sinbad)
* Add `git lfs update --manual` option & promote it on hook install fail #1182 (@sinbad)
* Pass `git lfs clone` flags through to `git clone` correctly, respect some options #1160 (@sinbad)
### Bugs
* Clean trailing `/` from include/exclude paths #1278 (@ttaylorr)
* Fix problems with user prompts in `git lfs clone` #1185 (@sinbad)
* Fix failure to return non-zero exit code when lfs install/update fails to install hooks #1178 (@sinbad)
* Fix missing man page #1149 (@javabrett)
* fix concurrent map read and map write #1179 (@technoweenie)
### Misc
* Allow additional fields on request & response schema #1276 (@sinbad)
* Fix installer error on win32. #1198 (@teo-tsirpanis)
* Applied same -ldflags -X name value -> name=value fix #1193 (@javabrett)
* add instructions to install from MacPorts #1186 (@skymoo)
* Add xenial repo #1170 (@graingert)
## 1.2.0 (14 April 2016)
### Features
@@ -18,7 +43,7 @@
* Fix silent failure to push LFS objects when ref matches a filename in the working copy #1096 (@epriestley)
* Fix problems with using LFS in symlinked folders #818 (@sinbad)
* Fix git lfs push silently misbehaving on ambiguous refs; fail like git push instead #1118 (@sinbad)
* Whitelist lfs.*.access config in local ~/.lfsconfig #1122 (@rjbell4)
* Whitelist `lfs.*.access` config in local ~/.lfsconfig #1122 (@rjbell4)
* Only write the encoded pointer information to Stdout #1105 (@sschuberth)
* Use hardcoded auth from remote or lfs config when accessing the storage api #1136 (@technoweenie, @jonmagic)
* SSH should be called more strictly with command as one argument #1134 (@sinbad)

@@ -94,8 +94,8 @@ tests:
## Updating 3rd party packages
0. Update `Nut.toml`.
0. Run `script/vendor` to update the code in the `.vendor/src` directory.
0. Update `glide.yaml`.
0. Run `script/vendor` to update the code in the `vendor` directory.
0. Commit the change. Git LFS vendors the full source code in the repository.
0. Submit a pull request.

52
Makefile Normal file

@@ -0,0 +1,52 @@
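# gccgo-based two-pass build: the first pass generates out/Makefile.gen
# (dependency rules emitted by out/genmakefile), then re-invokes make with
# MAKEFILE_GEN set so the generated rules are included on the second pass.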
GOC ?= gccgo
AR ?= ar
SRCDIR := $(dir $(lastword $(MAKEFILE_LIST)))
LIBDIR := out/github.com/github/git-lfs
GOFLAGS := -Iout
ifeq ($(MAKEFILE_GEN),)
MAKEFILE_GEN := out/Makefile.gen
all: $(MAKEFILE_GEN)
@$(MAKE) -f $(lastword $(MAKEFILE_LIST)) $(MAKEFLAGS) MAKEFILE_GEN=$(MAKEFILE_GEN) $@
$(MAKEFILE_GEN) : out/genmakefile $(SRCDIR)commands/mancontent_gen.go
@mkdir -p $(dir $@)
$< "$(SRCDIR)" github.com/github/git-lfs/ > $@
else
all : bin/git-lfs
include $(MAKEFILE_GEN)
$(LIBDIR)/git-lfs.o : $(SRC_main) $(DEPS_main)
@mkdir -p $(dir $@)
$(GOC) $(GOFLAGS) -c -o $@ $(SRC_main)
bin/git-lfs : $(LIBDIR)/git-lfs.o $(DEPS_main)
@mkdir -p $(dir $@)
$(GOC) $(GOFLAGS) -o $@ $^
%.a : %.o
$(AR) rc $@ $<
endif
$(SRCDIR)commands/mancontent_gen.go : out/mangen
cd $(SRCDIR)commands && $(CURDIR)/out/mangen
out/mangen : $(SRCDIR)docs/man/mangen.go
@mkdir -p $(dir $@)
$(GOC) -o $@ $<
out/genmakefile : $(SRCDIR)script/genmakefile/genmakefile.go
@mkdir -p $(dir $@)
$(GOC) -o $@ $<
clean :
rm -rf out bin
rm -f $(SRCDIR)commands/mancontent_gen.go

@@ -1,24 +0,0 @@
[application]
name = "git-lfs"
version = "1.2.0"
authors = [
"Rick Olson <technoweenie@gmail.com>",
"Scott Barron <rubyist@github.com>",
]
[dependencies]
"github.com/bgentry/go-netrc/netrc" = "9fd32a8b3d3d3f9d43c341bfe098430e07609480"
"github.com/cheggaaa/pb" = "bd14546a551971ae7f460e6d6e527c5b56cd38d7"
"github.com/kr/pretty" = "088c856450c08c03eb32f7a6c221e6eefaa10e6f"
"github.com/kr/pty" = "5cf931ef8f76dccd0910001d74a58a7fca84a83d"
"github.com/kr/text" = "6807e777504f54ad073ecef66747de158294b639"
"github.com/inconshreveable/mousetrap" = "76626ae9c91c4f2a10f34cad8ce83ea42c93bb75"
"github.com/olekukonko/ts" = "ecf753e7c962639ab5a1fb46f7da627d4c0a04b8"
"github.com/rubyist/tracerx" = "d7bcc0bc315bed2a841841bee5dbecc8d7d7582f"
"github.com/spf13/cobra" = "c55cdf33856a08e4822738728b41783292812889"
"github.com/spf13/pflag" = "580b9be06c33d8ba9dcc8757ea56b7642472c2f5"
"github.com/ThomsonReutersEikon/go-ntlm/ntlm" = "52b7efa603f1b809167b528b8bbaa467e36fdc02"
"github.com/technoweenie/assert" = "b25ea301d127043ffacf3b2545726e79b6632139"
"github.com/technoweenie/go-contentaddressable" = "38171def3cd15e3b76eb156219b3d48704643899"

@@ -22,8 +22,8 @@ preferences.
Note: Git LFS requires Git v1.8.2 or higher.
One installed, you need to setup the global Git hooks for Git LFS. This only
needs to be run once per machine.
Once installed, you need to setup the global Git hooks for Git LFS. This only
needs to be done once per machine.
```bash
$ git lfs install
@@ -102,12 +102,20 @@ page][impl]. You can also join [the project's chat room][chat].
[impl]: https://github.com/github/git-lfs/wiki/Implementations
### Using LFS from other Go code
At the moment git-lfs is focused only on the stability of its command line
interface and the [server APIs](docs/api/README.md). The contents of the
source packages are subject to change. We therefore currently discourage other
Go code from depending on the git-lfs packages directly; an API to be used by
external Go code may be provided in the future.
## Core Team
These are the humans that form the Git LFS core team, which runs the project.
In alphabetical order:
| [@andyneff](https://github.com/andyneff) | [@rubyist](https://github.com/rubyist) | [@sinbad](https://github.com/sinbad) | [@technoweenie](https://github.com/technoweenie) |
|---|---|---|---|---|
| [![](https://avatars1.githubusercontent.com/u/7596961?v=3&s=100)](https://github.com/andyneff) | [![](https://avatars1.githubusercontent.com/u/143?v=3&s=100)](https://github.com/rubyist) | [![](https://avatars1.githubusercontent.com/u/142735?v=3&s=100)](https://github.com/sinbad) | [![](https://avatars3.githubusercontent.com/u/21?v=3&s=100)](https://github.com/technoweenie) |
| [@andyneff](https://github.com/andyneff) | [@rubyist](https://github.com/rubyist) | [@sinbad](https://github.com/sinbad) | [@technoweenie](https://github.com/technoweenie) | [@ttaylorr](https://github.com/ttaylorr) |
|---|---|---|---|---|---|
| [![](https://avatars1.githubusercontent.com/u/7596961?v=3&s=100)](https://github.com/andyneff) | [![](https://avatars1.githubusercontent.com/u/143?v=3&s=100)](https://github.com/rubyist) | [![](https://avatars1.githubusercontent.com/u/142735?v=3&s=100)](https://github.com/sinbad) | [![](https://avatars3.githubusercontent.com/u/21?v=3&s=100)](https://github.com/technoweenie) | [![](https://avatars3.githubusercontent.com/u/443245?v=3&s=100)](https://github.com/ttaylorr) |

@@ -7,7 +7,6 @@ Git LFS. If you have an idea for a new feature, open an issue for discussion.
* git index issues [#937](https://github.com/github/git-lfs/issues/937)
* `authenticated` property on urls [#960](https://github.com/github/git-lfs/issues/960)
* Use `expires_at` to quickly put objects in the queue to hit the API again to refresh tokens.
* Add ref information to upload request [#969](https://github.com/github/git-lfs/issues/969)
* Accept raw remote URLs as valid [#1085](https://github.com/github/git-lfs/issues/1085)
* use git proxy settings [#1125](https://github.com/github/git-lfs/issues/1125)
@@ -19,7 +18,6 @@ Git LFS. If you have an idea for a new feature, open an issue for discussion.
* Investigate `--shared` and `--dissociate` options for `git clone` (similar to `--references`)
* Investigate `GIT_SSH_COMMAND` [#1142](https://github.com/github/git-lfs/issues/1142)
* Teach `git lfs install` to use `git config --system` instead of `git config --global` by default [#1177](https://github.com/github/git-lfs/pull/1177)
* Don't allow `git lfs track` to operate on `.git*` or `.lfs*` files [#1099](https://github.com/github/git-lfs/issues/1099)
* Investigate `git -c lfs.url=... lfs clone` usage
* Test that manpages are built and included [#1149](https://github.com/github/git-lfs/pull/1149)
* Update CI to build from source outside of git repo [#1156](https://github.com/github/git-lfs/issues/1156#issuecomment-211574343)

202
api/api.go Normal file

@@ -0,0 +1,202 @@
// Package api provides the interface for querying LFS servers (metadata)
// NOTE: Subject to change, do not rely on this package from outside git-lfs source
package api
import (
"bytes"
"encoding/json"
"fmt"
"strconv"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/httputil"
"github.com/github/git-lfs/tools"
"github.com/rubyist/tracerx"
)
// BatchOrLegacy calls the Batch API and falls back on the Legacy API
// This is for simplicity; the legacy route is not optimal (it is serial)
// TODO LEGACY API: remove when legacy API removed
func BatchOrLegacy(objects []*ObjectResource, operation string, transferAdapters []string) (objs []*ObjectResource, transferAdapter string, e error) {
if !config.Config.BatchTransfer() {
objs, err := Legacy(objects, operation)
return objs, "", err
}
objs, adapterName, err := Batch(objects, operation, transferAdapters)
if err != nil {
if errutil.IsNotImplementedError(err) {
git.Config.SetLocal("", "lfs.batch", "false")
objs, err := Legacy(objects, operation)
return objs, "", err
}
return nil, "", err
}
return objs, adapterName, nil
}
func BatchOrLegacySingle(inobj *ObjectResource, operation string, transferAdapters []string) (obj *ObjectResource, transferAdapter string, e error) {
objs, adapterName, err := BatchOrLegacy([]*ObjectResource{inobj}, operation, transferAdapters)
if err != nil {
return nil, "", err
}
if len(objs) > 0 {
return objs[0], adapterName, nil
}
return nil, "", fmt.Errorf("Object not found")
}
// Batch calls the batch API and returns object results
func Batch(objects []*ObjectResource, operation string, transferAdapters []string) (objs []*ObjectResource, transferAdapter string, e error) {
if len(objects) == 0 {
return nil, "", nil
}
o := &batchRequest{Operation: operation, Objects: objects, TransferAdapterNames: transferAdapters}
by, err := json.Marshal(o)
if err != nil {
return nil, "", errutil.Error(err)
}
req, err := NewBatchRequest(operation)
if err != nil {
return nil, "", errutil.Error(err)
}
req.Header.Set("Content-Type", MediaType)
req.Header.Set("Content-Length", strconv.Itoa(len(by)))
req.ContentLength = int64(len(by))
req.Body = tools.NewReadSeekCloserWrapper(bytes.NewReader(by))
tracerx.Printf("api: batch %d files", len(objects))
res, bresp, err := DoBatchRequest(req)
if err != nil {
if res == nil {
return nil, "", errutil.NewRetriableError(err)
}
if res.StatusCode == 0 {
return nil, "", errutil.NewRetriableError(err)
}
if errutil.IsAuthError(err) {
httputil.SetAuthType(req, res)
return Batch(objects, operation, transferAdapters)
}
switch res.StatusCode {
case 404, 410:
tracerx.Printf("api: batch not implemented: %d", res.StatusCode)
return nil, "", errutil.NewNotImplementedError(nil)
}
tracerx.Printf("api error: %s", err)
return nil, "", errutil.Error(err)
}
httputil.LogTransfer("lfs.batch", res)
if res.StatusCode != 200 {
return nil, "", errutil.Error(fmt.Errorf("Invalid status for %s: %d", httputil.TraceHttpReq(req), res.StatusCode))
}
return bresp.Objects, bresp.TransferAdapterName, nil
}
// Legacy calls the legacy API serially and returns ObjectResources
// TODO LEGACY API: remove when legacy API removed
func Legacy(objects []*ObjectResource, operation string) ([]*ObjectResource, error) {
retobjs := make([]*ObjectResource, 0, len(objects))
dl := operation == "download"
var globalErr error
for _, o := range objects {
var ret *ObjectResource
var err error
if dl {
ret, err = DownloadCheck(o.Oid)
} else {
ret, err = UploadCheck(o.Oid, o.Size)
}
if err != nil {
// Store for the end, likely only one
globalErr = err
}
retobjs = append(retobjs, ret)
}
return retobjs, globalErr
}
// TODO LEGACY API: remove when legacy API removed
func DownloadCheck(oid string) (*ObjectResource, error) {
req, err := NewRequest("GET", oid)
if err != nil {
return nil, errutil.Error(err)
}
res, obj, err := DoLegacyRequest(req)
if err != nil {
return nil, err
}
httputil.LogTransfer("lfs.download", res)
_, err = obj.NewRequest("download", "GET")
if err != nil {
return nil, errutil.Error(err)
}
return obj, nil
}
// TODO LEGACY API: remove when legacy API removed
func UploadCheck(oid string, size int64) (*ObjectResource, error) {
reqObj := &ObjectResource{
Oid: oid,
Size: size,
}
by, err := json.Marshal(reqObj)
if err != nil {
return nil, errutil.Error(err)
}
req, err := NewRequest("POST", oid)
if err != nil {
return nil, errutil.Error(err)
}
req.Header.Set("Content-Type", MediaType)
req.Header.Set("Content-Length", strconv.Itoa(len(by)))
req.ContentLength = int64(len(by))
req.Body = tools.NewReadSeekCloserWrapper(bytes.NewReader(by))
tracerx.Printf("api: uploading (%s)", oid)
res, obj, err := DoLegacyRequest(req)
if err != nil {
if errutil.IsAuthError(err) {
httputil.SetAuthType(req, res)
return UploadCheck(oid, size)
}
return nil, errutil.NewRetriableError(err)
}
httputil.LogTransfer("lfs.upload", res)
if res.StatusCode == 200 {
return nil, nil
}
if obj.Oid == "" {
obj.Oid = oid
}
if obj.Size == 0 {
obj.Size = reqObj.Size
}
return obj, nil
}
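
As a usage illustration (not part of this commit), a caller could drive the batch-with-fallback flow as sketched below; the oids, sizes, and the "basic" adapter name are placeholders mirroring the tests later in this diff, and error handling is abbreviated:

```go
package example

import (
	"fmt"

	"github.com/github/git-lfs/api"
)

// downloadHrefs requests "download" actions for the given oids, relying on
// BatchOrLegacy to fall back to the serial legacy API when the server does
// not implement the batch endpoint.
func downloadHrefs(oids map[string]int64) error {
	objects := make([]*api.ObjectResource, 0, len(oids))
	for oid, size := range oids {
		objects = append(objects, &api.ObjectResource{Oid: oid, Size: size})
	}

	objs, adapter, err := api.BatchOrLegacy(objects, "download", []string{"basic"})
	if err != nil {
		return err
	}
	fmt.Println("negotiated transfer adapter:", adapter)

	for _, o := range objs {
		if o == nil { // Legacy may append nil entries on per-object errors
			continue
		}
		if rel, ok := o.Rel("download"); ok {
			fmt.Println(o.Oid, "->", rel.Href)
		}
	}
	return nil
}
```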

75
api/client.go Normal file

@@ -0,0 +1,75 @@
// NOTE: Subject to change, do not rely on this package from outside git-lfs source
package api
import "github.com/github/git-lfs/config"
type Operation string
const (
UploadOperation Operation = "upload"
DownloadOperation Operation = "download"
)
// Client exposes the LFS API to callers through a multitude of different
// services and transport mechanisms. Callers can make a *RequestSchema using
// any service that is attached to the Client, and then execute a request based
// on that schema using the `Do()` method.
//
// A prototypical example follows:
// ```
// schema, apiResponse := client.Locks.Lock(request)
// resp, err := client.Do(schema)
// if err != nil {
// handleErr(err)
// }
//
// fmt.Println(apiResponse.Lock)
// ```
type Client struct {
// Locks is the LockService used to interact with the Git LFS file-
// locking API.
Locks LockService
// lifecycle is the lifecycle used by all requests through this client.
lifecycle Lifecycle
}
// NewClient instantiates and returns a new instance of *Client, with the given
// lifecycle.
//
// If no lifecycle is given, an HttpLifecycle is used by default.
func NewClient(lifecycle Lifecycle) *Client {
if lifecycle == nil {
lifecycle = NewHttpLifecycle(config.Config)
}
return &Client{lifecycle: lifecycle}
}
// Do performs the request associated with the given *RequestSchema by
// delegating into the Lifecycle in use.
//
// If any error was encountered while either building, executing or cleaning up
// the request, then it will be returned immediately, and the request can be
// treated as invalid.
//
// If no error occurred, an api.Response will be returned, along with a `nil`
// error. At this point, the body of the response has been serialized into
// `schema.Into`, and the body has been closed.
func (c *Client) Do(schema *RequestSchema) (Response, error) {
req, err := c.lifecycle.Build(schema)
if err != nil {
return nil, err
}
resp, err := c.lifecycle.Execute(req, schema.Into)
if err != nil {
return nil, err
}
if err = c.lifecycle.Cleanup(resp); err != nil {
return nil, err
}
return resp, nil
}
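
To tie `NewClient` and `Do` together, here is a hedged end-to-end sketch using the `LockService` added later in this commit; the path and SHA are placeholders, and the remote endpoint is assumed to be configured elsewhere:

```go
package main

import (
	"fmt"

	"github.com/github/git-lfs/api"
)

func main() {
	// A nil lifecycle selects the default HttpLifecycle built from
	// config.Config (see NewClient above).
	client := api.NewClient(nil)

	// Locks.Lock returns the request schema plus the response value that
	// Do will deserialize the body into.
	schema, resp := client.Locks.Lock(&api.LockRequest{
		Path:               "path/to/file.dat", // placeholder
		LatestRemoteCommit: "deadbeef",         // placeholder SHA
		Committer:          api.CurrentCommitter(),
	})

	if _, err := client.Do(schema); err != nil {
		fmt.Println("request failed:", err)
		return
	}
	if resp.Err != "" {
		fmt.Println("server declined the lock:", resp.Err)
		return
	}
	fmt.Println("acquired lock", resp.Lock.Id)
}
```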

76
api/client_test.go Normal file

@@ -0,0 +1,76 @@
package api_test
import (
"errors"
"net/http"
"testing"
"github.com/github/git-lfs/api"
"github.com/stretchr/testify/assert"
)
func TestClientUsesLifecycleToExecuteSchemas(t *testing.T) {
schema := new(api.RequestSchema)
req := new(http.Request)
resp := new(api.HttpResponse)
lifecycle := new(MockLifecycle)
lifecycle.On("Build", schema).Return(req, nil).Once()
lifecycle.On("Execute", req, schema.Into).Return(resp, nil).Once()
lifecycle.On("Cleanup", resp).Return(nil).Once()
client := api.NewClient(lifecycle)
r1, err := client.Do(schema)
assert.Equal(t, resp, r1)
assert.Nil(t, err)
lifecycle.AssertExpectations(t)
}
func TestClientHaltsIfSchemaCannotBeBuilt(t *testing.T) {
schema := new(api.RequestSchema)
lifecycle := new(MockLifecycle)
lifecycle.On("Build", schema).Return(nil, errors.New("uh-oh!")).Once()
client := api.NewClient(lifecycle)
resp, err := client.Do(schema)
lifecycle.AssertExpectations(t)
assert.Nil(t, resp)
assert.Equal(t, "uh-oh!", err.Error())
}
func TestClientHaltsIfSchemaCannotBeExecuted(t *testing.T) {
schema := new(api.RequestSchema)
req := new(http.Request)
lifecycle := new(MockLifecycle)
lifecycle.On("Build", schema).Return(req, nil).Once()
lifecycle.On("Execute", req, schema.Into).Return(nil, errors.New("uh-oh!")).Once()
client := api.NewClient(lifecycle)
resp, err := client.Do(schema)
lifecycle.AssertExpectations(t)
assert.Nil(t, resp)
assert.Equal(t, "uh-oh!", err.Error())
}
func TestClientReturnsCleanupErrors(t *testing.T) {
schema := new(api.RequestSchema)
req := new(http.Request)
resp := new(api.HttpResponse)
lifecycle := new(MockLifecycle)
lifecycle.On("Build", schema).Return(req, nil).Once()
lifecycle.On("Execute", req, schema.Into).Return(resp, nil).Once()
lifecycle.On("Cleanup", resp).Return(errors.New("uh-oh!")).Once()
client := api.NewClient(lifecycle)
r1, err := client.Do(schema)
lifecycle.AssertExpectations(t)
assert.Nil(t, r1)
assert.Equal(t, "uh-oh!", err.Error())
}

377
api/download_test.go Normal file

@@ -0,0 +1,377 @@
package api_test
import (
"encoding/base64"
"encoding/json"
"fmt"
"io/ioutil"
"net/http"
"net/http/httptest"
"net/url"
"os"
"strconv"
"strings"
"testing"
"github.com/github/git-lfs/api"
"github.com/github/git-lfs/auth"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/httputil"
)
func TestSuccessfulDownload(t *testing.T) {
SetupTestCredentialsFunc()
defer func() {
RestoreCredentialsFunc()
}()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
defer server.Close()
tmp := tempdir(t)
defer os.RemoveAll(tmp)
mux.HandleFunc("/media/objects/oid", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
t.Logf("request header: %v", r.Header)
if r.Method != "GET" {
w.WriteHeader(405)
return
}
if r.Header.Get("Accept") != api.MediaType {
t.Error("Invalid Accept")
}
if r.Header.Get("Authorization") != expectedAuth(t, server) {
t.Error("Invalid Authorization")
}
obj := &api.ObjectResource{
Oid: "oid",
Size: 4,
Actions: map[string]*api.LinkRelation{
"download": &api.LinkRelation{
Href: server.URL + "/download",
Header: map[string]string{"A": "1"},
},
},
}
by, err := json.Marshal(obj)
if err != nil {
t.Fatal(err)
}
head := w.Header()
head.Set("Content-Type", api.MediaType)
head.Set("Content-Length", strconv.Itoa(len(by)))
w.WriteHeader(200)
w.Write(by)
})
defer config.Config.ResetConfig()
config.Config.SetConfig("lfs.batch", "false")
config.Config.SetConfig("lfs.url", server.URL+"/media")
obj, _, err := api.BatchOrLegacySingle(&api.ObjectResource{Oid: "oid"}, "download", []string{"basic"})
if err != nil {
if isDockerConnectionError(err) {
return
}
t.Fatalf("unexpected error: %s", err)
}
if obj.Size != 4 {
t.Errorf("unexpected size: %d", obj.Size)
}
}
// nearly identical to TestSuccessfulDownload
// called multiple times to return different 3xx status codes
func TestSuccessfulDownloadWithRedirects(t *testing.T) {
SetupTestCredentialsFunc()
defer func() {
RestoreCredentialsFunc()
}()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
defer server.Close()
tmp := tempdir(t)
defer os.RemoveAll(tmp)
// all of these should work for GET requests
redirectCodes := []int{301, 302, 303, 307}
redirectIndex := 0
mux.HandleFunc("/redirect/objects/oid", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
t.Logf("request header: %v", r.Header)
if r.Method != "GET" {
w.WriteHeader(405)
return
}
w.Header().Set("Location", server.URL+"/redirect2/objects/oid")
w.WriteHeader(redirectCodes[redirectIndex])
t.Logf("redirect with %d", redirectCodes[redirectIndex])
})
mux.HandleFunc("/redirect2/objects/oid", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
t.Logf("request header: %v", r.Header)
if r.Method != "GET" {
w.WriteHeader(405)
return
}
w.Header().Set("Location", server.URL+"/media/objects/oid")
w.WriteHeader(redirectCodes[redirectIndex])
t.Logf("redirect again with %d", redirectCodes[redirectIndex])
redirectIndex += 1
})
mux.HandleFunc("/media/objects/oid", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
t.Logf("request header: %v", r.Header)
if r.Method != "GET" {
w.WriteHeader(405)
return
}
if r.Header.Get("Accept") != api.MediaType {
t.Error("Invalid Accept")
}
if r.Header.Get("Authorization") != expectedAuth(t, server) {
t.Error("Invalid Authorization")
}
obj := &api.ObjectResource{
Oid: "oid",
Size: 4,
Actions: map[string]*api.LinkRelation{
"download": &api.LinkRelation{
Href: server.URL + "/download",
Header: map[string]string{"A": "1"},
},
},
}
by, err := json.Marshal(obj)
if err != nil {
t.Fatal(err)
}
head := w.Header()
head.Set("Content-Type", api.MediaType)
head.Set("Content-Length", strconv.Itoa(len(by)))
w.WriteHeader(200)
w.Write(by)
})
defer config.Config.ResetConfig()
config.Config.SetConfig("lfs.batch", "false")
config.Config.SetConfig("lfs.url", server.URL+"/redirect")
for _, redirect := range redirectCodes {
obj, _, err := api.BatchOrLegacySingle(&api.ObjectResource{Oid: "oid"}, "download", []string{"basic"})
if err != nil {
if isDockerConnectionError(err) {
return
}
t.Fatalf("unexpected error for %d status: %s", redirect, err)
}
if obj.Size != 4 {
t.Errorf("unexpected size for %d status: %d", redirect, obj.Size)
}
}
}
// nearly identical to TestSuccessfulDownload
// the api request returns a custom Authorization header
func TestSuccessfulDownloadWithAuthorization(t *testing.T) {
SetupTestCredentialsFunc()
defer func() {
RestoreCredentialsFunc()
}()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
defer server.Close()
tmp := tempdir(t)
defer os.RemoveAll(tmp)
mux.HandleFunc("/media/objects/oid", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
t.Logf("request header: %v", r.Header)
if r.Method != "GET" {
w.WriteHeader(405)
return
}
if r.Header.Get("Accept") != api.MediaType {
t.Error("Invalid Accept")
}
if r.Header.Get("Authorization") != expectedAuth(t, server) {
t.Error("Invalid Authorization")
}
obj := &api.ObjectResource{
Oid: "oid",
Size: 4,
Actions: map[string]*api.LinkRelation{
"download": &api.LinkRelation{
Href: server.URL + "/download",
Header: map[string]string{
"A": "1",
"Authorization": "custom",
},
},
},
}
by, err := json.Marshal(obj)
if err != nil {
t.Fatal(err)
}
head := w.Header()
head.Set("Content-Type", "application/json; charset=utf-8")
head.Set("Content-Length", strconv.Itoa(len(by)))
w.WriteHeader(200)
w.Write(by)
})
defer config.Config.ResetConfig()
config.Config.SetConfig("lfs.batch", "false")
config.Config.SetConfig("lfs.url", server.URL+"/media")
obj, _, err := api.BatchOrLegacySingle(&api.ObjectResource{Oid: "oid"}, "download", []string{"basic"})
if err != nil {
if isDockerConnectionError(err) {
return
}
t.Fatalf("unexpected error: %s", err)
}
if obj.Size != 4 {
t.Errorf("unexpected size: %d", obj.Size)
}
}
func TestDownloadAPIError(t *testing.T) {
SetupTestCredentialsFunc()
defer func() {
RestoreCredentialsFunc()
}()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
defer server.Close()
tmp := tempdir(t)
defer os.RemoveAll(tmp)
mux.HandleFunc("/media/objects/oid", func(w http.ResponseWriter, r *http.Request) {
w.WriteHeader(404)
})
defer config.Config.ResetConfig()
config.Config.SetConfig("lfs.batch", "false")
config.Config.SetConfig("lfs.url", server.URL+"/media")
_, _, err := api.BatchOrLegacySingle(&api.ObjectResource{Oid: "oid"}, "download", []string{"basic"})
if err == nil {
t.Fatal("no error?")
}
if errutil.IsFatalError(err) {
t.Fatal("should not panic")
}
if isDockerConnectionError(err) {
return
}
if err.Error() != fmt.Sprintf(httputil.GetDefaultError(404), server.URL+"/media/objects/oid") {
t.Fatalf("Unexpected error: %s", err.Error())
}
}
// guards against connection errors that only seem to happen on debian docker
// images.
func isDockerConnectionError(err error) bool {
if err == nil {
return false
}
if os.Getenv("TRAVIS") == "true" {
return false
}
e := err.Error()
return strings.Contains(e, "connection reset by peer") ||
strings.Contains(e, "connection refused")
}
func tempdir(t *testing.T) string {
dir, err := ioutil.TempDir("", "git-lfs-test")
if err != nil {
t.Fatalf("Error getting temp dir: %s", err)
}
return dir
}
func expectedAuth(t *testing.T, server *httptest.Server) string {
u, err := url.Parse(server.URL)
if err != nil {
t.Fatal(err)
}
token := fmt.Sprintf("%s:%s", u.Host, "monkey")
return "Basic " + strings.TrimSpace(base64.StdEncoding.EncodeToString([]byte(token)))
}
var (
TestCredentialsFunc auth.CredentialFunc
origCredentialsFunc auth.CredentialFunc
)
func init() {
TestCredentialsFunc = func(input auth.Creds, subCommand string) (auth.Creds, error) {
output := make(auth.Creds)
for key, value := range input {
output[key] = value
}
if _, ok := output["username"]; !ok {
output["username"] = input["host"]
}
output["password"] = "monkey"
return output, nil
}
}
// Override the credentials func for testing
func SetupTestCredentialsFunc() {
origCredentialsFunc = auth.SetCredentialsFunc(TestCredentialsFunc)
}
// Put the original credentials func back
func RestoreCredentialsFunc() {
auth.SetCredentialsFunc(origCredentialsFunc)
}

184
api/http_lifecycle.go Normal file

@@ -0,0 +1,184 @@
// NOTE: Subject to change, do not rely on this package from outside git-lfs source
package api
import (
"bytes"
"encoding/json"
"errors"
"io"
"io/ioutil"
"net/http"
"net/url"
"github.com/github/git-lfs/auth"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/httputil"
)
var (
// ErrNoOperationGiven is an error which is returned when no operation
// is provided in a RequestSchema object.
ErrNoOperationGiven = errors.New("lfs/api: no operation provided in schema")
)
// EndpointSource is an interface which encapsulates the behavior of returning
// `config.Endpoint`s based on a particular operation.
type EndpointSource interface {
// Endpoint returns the `config.Endpoint` associated with a given
// operation.
Endpoint(operation string) config.Endpoint
}
// HttpLifecycle serves as the default implementation of the Lifecycle interface
// for HTTP requests. Internally, it leverages the *http.Client type to execute
// HTTP requests against a root *url.URL, as given in `NewHttpLifecycle`.
type HttpLifecycle struct {
endpoints EndpointSource
}
var _ Lifecycle = new(HttpLifecycle)
// NewHttpLifecycle initializes a new instance of the *HttpLifecycle type with
// the given EndpointSource, which is used to resolve the root URL for each
// operation.
func NewHttpLifecycle(endpoints EndpointSource) *HttpLifecycle {
return &HttpLifecycle{
endpoints: endpoints,
}
}
// Build implements the Lifecycle.Build function.
//
// HttpLifecycle in particular builds an absolute path by parsing and then
// relativizing the `schema.Path` with respect to the root URL of the
// endpoint for the schema's operation. If there was an error in determining
// this URL, then that error will be returned.
//
// After this is complete, a body is attached to the request if the
// schema contained one. If a body was present and an error occurred while
// serializing it into JSON, then that error will be returned and the
// *http.Request will not be generated.
//
// In all cases, credentials are attached to the HTTP request as described in
// the `auth` package (see github.com/github/git-lfs/auth#GetCreds).
//
// Finally, all of these components are combined together and the resulting
// request is returned.
func (l *HttpLifecycle) Build(schema *RequestSchema) (*http.Request, error) {
path, err := l.absolutePath(schema.Operation, schema.Path)
if err != nil {
return nil, err
}
body, err := l.body(schema)
if err != nil {
return nil, err
}
req, err := http.NewRequest(schema.Method, path.String(), body)
if err != nil {
return nil, err
}
if _, err = auth.GetCreds(req); err != nil {
return nil, err
}
req.URL.RawQuery = l.queryParameters(schema).Encode()
return req, nil
}
// Execute implements the Lifecycle.Execute function.
//
// Internally, the *http.Client is used to execute the underlying *http.Request.
// If the client returned an error corresponding to a failure to make the
// request, then that error will be returned immediately, and the response is
// guaranteed not to be serialized.
//
// Once the response has been gathered from the server, it is unmarshaled into
// the given `into interface{}`, which is identical to the one provided in the
// original RequestSchema. If an error occurred while decoding, then that error
// is returned.
//
// Otherwise, the api.Response is returned, along with no error, signaling that
// the request completed successfully.
func (l *HttpLifecycle) Execute(req *http.Request, into interface{}) (Response, error) {
resp, err := httputil.DoHttpRequestWithRedirects(req, []*http.Request{}, true)
if err != nil {
return nil, err
}
// TODO(taylor): check status >=500, handle content type, return error,
// halt immediately.
if into != nil {
decoder := json.NewDecoder(resp.Body)
if err = decoder.Decode(into); err != nil {
return nil, err
}
}
return WrapHttpResponse(resp), nil
}
// Cleanup implements the Lifecycle.Cleanup function by closing the Body
// attached to the response.
func (l *HttpLifecycle) Cleanup(resp Response) error {
return resp.Body().Close()
}
// absolutePath returns the absolute path made by combining a given relative
// path with the root URL of the endpoint corresponding to the given operation.
//
// If there was an error in parsing the relative path, then that error will be
// returned.
func (l *HttpLifecycle) absolutePath(operation Operation, path string) (*url.URL, error) {
if len(operation) == 0 {
return nil, ErrNoOperationGiven
}
root, err := url.Parse(l.endpoints.Endpoint(string(operation)).Url)
if err != nil {
return nil, err
}
rel, err := url.Parse(path)
if err != nil {
return nil, err
}
return root.ResolveReference(rel), nil
}
// body returns an io.Reader which reads out a JSON-encoded copy of the payload
// attached to a given *RequestSchema, if it is present. If no body is present
// in the request, then nil is returned instead.
//
// If an error was encountered while attempting to marshal the body, then that
// will be returned instead, along with a nil io.Reader.
func (l *HttpLifecycle) body(schema *RequestSchema) (io.ReadCloser, error) {
if schema.Body == nil {
return nil, nil
}
body, err := json.Marshal(schema.Body)
if err != nil {
return nil, err
}
return ioutil.NopCloser(bytes.NewReader(body)), nil
}
// queryParameters returns a url.Values containing all of the provided query
// parameters as given in the *RequestSchema. If no query parameters were given,
// then an empty url.Values is returned instead.
func (l *HttpLifecycle) queryParameters(schema *RequestSchema) url.Values {
vals := url.Values{}
if schema.Query != nil {
for k, v := range schema.Query {
vals.Add(k, v)
}
}
return vals
}

148
api/http_lifecycle_test.go Normal file

@@ -0,0 +1,148 @@
package api_test
import (
"io/ioutil"
"net/http"
"net/http/httptest"
"testing"
"github.com/github/git-lfs/api"
"github.com/github/git-lfs/config"
"github.com/stretchr/testify/assert"
)
type NopEndpointSource struct {
Root string
}
func (e *NopEndpointSource) Endpoint(op string) config.Endpoint {
return config.Endpoint{Url: e.Root}
}
var (
source = &NopEndpointSource{"https://example.com"}
)
func TestHttpLifecycleMakesRequestsAgainstAbsolutePath(t *testing.T) {
SetupTestCredentialsFunc()
defer RestoreCredentialsFunc()
l := api.NewHttpLifecycle(source)
req, err := l.Build(&api.RequestSchema{
Path: "/foo",
Operation: api.DownloadOperation,
})
assert.Nil(t, err)
assert.Equal(t, "https://example.com/foo", req.URL.String())
}
func TestHttpLifecycleAttachesQueryParameters(t *testing.T) {
SetupTestCredentialsFunc()
defer RestoreCredentialsFunc()
l := api.NewHttpLifecycle(source)
req, err := l.Build(&api.RequestSchema{
Path: "/foo",
Operation: api.DownloadOperation,
Query: map[string]string{
"a": "b",
},
})
assert.Nil(t, err)
assert.Equal(t, "https://example.com/foo?a=b", req.URL.String())
}
func TestHttpLifecycleAttachesBodyWhenPresent(t *testing.T) {
SetupTestCredentialsFunc()
defer RestoreCredentialsFunc()
l := api.NewHttpLifecycle(source)
req, err := l.Build(&api.RequestSchema{
Operation: api.DownloadOperation,
Body: struct {
Foo string `json:"foo"`
}{"bar"},
})
assert.Nil(t, err)
body, err := ioutil.ReadAll(req.Body)
assert.Nil(t, err)
assert.Equal(t, "{\"foo\":\"bar\"}", string(body))
}
func TestHttpLifecycleDoesNotAttachBodyWhenEmpty(t *testing.T) {
SetupTestCredentialsFunc()
defer RestoreCredentialsFunc()
l := api.NewHttpLifecycle(source)
req, err := l.Build(&api.RequestSchema{
Operation: api.DownloadOperation,
})
assert.Nil(t, err)
assert.Nil(t, req.Body)
}
func TestHttpLifecycleErrsWithoutOperation(t *testing.T) {
SetupTestCredentialsFunc()
defer RestoreCredentialsFunc()
l := api.NewHttpLifecycle(source)
req, err := l.Build(&api.RequestSchema{
Path: "/foo",
})
assert.Equal(t, api.ErrNoOperationGiven, err)
assert.Nil(t, req)
}
func TestHttpLifecycleExecutesRequestWithoutBody(t *testing.T) {
SetupTestCredentialsFunc()
defer RestoreCredentialsFunc()
var called bool
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
called = true
assert.Equal(t, "/path", r.URL.RequestURI())
}))
defer server.Close()
req, _ := http.NewRequest("GET", server.URL+"/path", nil)
l := api.NewHttpLifecycle(source)
_, err := l.Execute(req, nil)
assert.True(t, called)
assert.Nil(t, err)
}
func TestHttpLifecycleExecutesRequestWithBody(t *testing.T) {
SetupTestCredentialsFunc()
defer RestoreCredentialsFunc()
type Response struct {
Foo string `json:"foo"`
}
var called bool
server := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
called = true
w.Write([]byte("{\"foo\":\"bar\"}"))
}))
defer server.Close()
req, _ := http.NewRequest("GET", server.URL+"/path", nil)
l := api.NewHttpLifecycle(source)
resp := new(Response)
_, err := l.Execute(req, resp)
assert.True(t, called)
assert.Nil(t, err)
assert.Equal(t, "bar", resp.Foo)
}

53
api/http_response.go Normal file

@@ -0,0 +1,53 @@
// NOTE: Subject to change, do not rely on this package from outside git-lfs source
package api
import (
"io"
"net/http"
)
// HttpResponse is an implementation of the Response interface capable of
// handling HTTP responses. At its core, it works by wrapping an *http.Response.
type HttpResponse struct {
// r is the underlying *http.Response that is being wrapped.
r *http.Response
}
// WrapHttpResponse returns a wrapped *HttpResponse implementing the Response
// type by using the given *http.Response.
func WrapHttpResponse(r *http.Response) *HttpResponse {
return &HttpResponse{
r: r,
}
}
var _ Response = new(HttpResponse)
// Status implements the Response.Status function, and returns the status given
// by the underlying *http.Response.
func (h *HttpResponse) Status() string {
return h.r.Status
}
// StatusCode implements the Response.StatusCode function, and returns the
// status code given by the underlying *http.Response.
func (h *HttpResponse) StatusCode() int {
return h.r.StatusCode
}
// Proto implements the Response.Proto function, and returns the proto given by
// the underlying *http.Response.
func (h *HttpResponse) Proto() string {
return h.r.Proto
}
// Body implements the Response.Body function, and returns the body as given by
// the underlying *http.Response.
func (h *HttpResponse) Body() io.ReadCloser {
return h.r.Body
}
// Header returns the underlying *http.Response's header.
func (h *HttpResponse) Header() http.Header {
return h.r.Header
}

27
api/http_response_test.go Normal file

@@ -0,0 +1,27 @@
package api_test
import (
"bytes"
"io/ioutil"
"net/http"
"testing"
"github.com/github/git-lfs/api"
"github.com/stretchr/testify/assert"
)
func TestWrappedHttpResponsesMatchInternal(t *testing.T) {
resp := &http.Response{
Status: "200 OK",
StatusCode: 200,
Proto: "HTTP/1.1",
Body: ioutil.NopCloser(new(bytes.Buffer)),
}
wrapped := api.WrapHttpResponse(resp)
assert.Equal(t, resp.Status, wrapped.Status())
assert.Equal(t, resp.StatusCode, wrapped.StatusCode())
assert.Equal(t, resp.Proto, wrapped.Proto())
assert.Equal(t, resp.Body, wrapped.Body())
assert.Equal(t, resp.Header, wrapped.Header())
}

32
api/lifecycle.go Normal file

@@ -0,0 +1,32 @@
// NOTE: Subject to change, do not rely on this package from outside git-lfs source
package api
import "net/http"
// TODO: extract interface for *http.Request; update methods. This will be in a
// later iteration of the API client.
// A Lifecycle represents and encapsulates the behavior of an API request from
// inception to cleanup.
//
// At a high level, it turns an *api.RequestSchema into an
// api.Response (and optionally an error). Lifecycle does so by providing
// several more fine-grained methods that are used by the client to manage the
// lifecycle of a request in a platform-agnostic fashion.
type Lifecycle interface {
// Build creates a sendable request by using the given RequestSchema as
// a model.
Build(req *RequestSchema) (*http.Request, error)
// Execute transforms the generated request into a wrapped response (and
// optionally an error, if the request failed), and serializes the
// response into the `into interface{}`, if one was provided.
Execute(req *http.Request, into interface{}) (Response, error)
// Cleanup is called after the request has been completed and its
// response has been processed. It is meant to perform any post-request
// actions necessary, like closing or resetting the connection. If an
// error was encountered in doing this operation, it should be returned
// from this method, otherwise nil.
Cleanup(resp Response) error
}

39
api/lifecycle_test.go Normal file

@@ -0,0 +1,39 @@
package api_test
import (
"net/http"
"github.com/github/git-lfs/api"
"github.com/stretchr/testify/mock"
)
type MockLifecycle struct {
mock.Mock
}
var _ api.Lifecycle = new(MockLifecycle)
func (l *MockLifecycle) Build(req *api.RequestSchema) (*http.Request, error) {
args := l.Called(req)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0).(*http.Request), args.Error(1)
}
func (l *MockLifecycle) Execute(req *http.Request, into interface{}) (api.Response, error) {
args := l.Called(req, into)
if args.Get(0) == nil {
return nil, args.Error(1)
}
return args.Get(0).(api.Response), args.Error(1)
}
func (l *MockLifecycle) Cleanup(resp api.Response) error {
args := l.Called(resp)
return args.Error(0)
}

256
api/lock_api.go Normal file

@@ -0,0 +1,256 @@
package api
import (
"fmt"
"strconv"
"time"
"github.com/github/git-lfs/config"
)
// LockService is an API service which encapsulates the Git LFS Locking API.
type LockService struct{}
// Lock generates a *RequestSchema that is used to perform the "attempt lock"
// API method.
//
// If a lock is already present, or if the server was unable to generate the
// lock, the Err field of the LockResponse type will be populated with a more
// detailed error describing the situation.
//
// If the caller does not have the minimum commit necessary to obtain the lock
// on that file, then the CommitNeeded field will be populated in the
// LockResponse, signaling that more commits are needed.
//
// In the successful case, a new Lock will be returned and granted to the
// caller.
func (s *LockService) Lock(req *LockRequest) (*RequestSchema, *LockResponse) {
var resp LockResponse
return &RequestSchema{
Method: "POST",
Path: "/locks",
Operation: UploadOperation,
Body: req,
Into: &resp,
}, &resp
}
// Search generates a *RequestSchema that is used to perform the "search for
// locks" API method.
//
// Searches can be scoped to match specific parameters by using the Filters
// field in the given LockSearchRequest. If no matching Locks were found, then
// the Locks field of the response will be empty.
//
// If the client expects that the server will return many locks, then the client
// can choose to paginate that response. Pagination is performed by limiting the
// amount of results per page, and the server will inform the client of the ID
// of the last returned lock. Since the server is guaranteed to return results
// in reverse chronological order, the client simply sends the last ID it
// processed along with the next request, and the server will continue where it
// left off.
//
// If the server was unable to process the lock search request, then the Error
// field will be populated in the response.
//
// In the successful case, one or more locks will be returned as a part of the
// response.
func (s *LockService) Search(req *LockSearchRequest) (*RequestSchema, *LockList) {
var resp LockList
query := make(map[string]string)
for _, filter := range req.Filters {
query[filter.Property] = filter.Value
}
if req.Cursor != "" {
query["cursor"] = req.Cursor
}
if req.Limit != 0 {
query["limit"] = strconv.Itoa(req.Limit)
}
return &RequestSchema{
Method: "GET",
Path: "/locks",
Operation: UploadOperation,
Query: query,
Into: &resp,
}, &resp
}
// Unlock generates a *RequestSchema that is used to perform the "unlock" API
// method, against a particular lock potentially with --force.
//
// This method's corresponding response type will either contain a reference to
// the lock that was unlocked, or an error that was experienced by the server in
// unlocking it.
func (s *LockService) Unlock(id string, force bool) (*RequestSchema, *UnlockResponse) {
var resp UnlockResponse
return &RequestSchema{
Method: "POST",
Path: fmt.Sprintf("/locks/%s/unlock", id),
Operation: UploadOperation,
Body: &UnlockRequest{id, force},
Into: &resp,
}, &resp
}
// Lock represents a single lock held against a particular path.
//
// Locks returned from the API may or may not be currently active, according to
// the Expired flag.
type Lock struct {
// Id is the unique identifier corresponding to this particular Lock. It
// must be consistent with the local copy, and the server's copy.
Id string `json:"id"`
// Path is an absolute path to the file that is locked as a part of this
// lock.
Path string `json:"path"`
// Committer is the author who initiated this lock.
Committer Committer `json:"committer"`
// CommitSHA is the commit that this Lock was created against. It is
// strictly equal to the SHA of the minimum commit negotiated in order
// to create this lock.
CommitSHA string `json:"commit_sha"`
// LockedAt is a required parameter that represents the instant in time
// that this lock was created. For most server implementations, this
// should be set to the instant at which the lock was initially
// received.
LockedAt time.Time `json:"locked_at"`
// UnlockedAt is an optional parameter that represents the instant in
// time that the lock stopped being active. If the lock is still active,
// the server can either a) not send this field, or b) send the
// zero-value of time.Time.
UnlockedAt time.Time `json:"unlocked_at,omitempty"`
}
// Active returns whether or not the given lock is still active against the file
// that it is protecting.
func (l *Lock) Active() bool {
return l.UnlockedAt.IsZero()
}
// Committer represents a "First Last <email@domain.com>" pair.
type Committer struct {
// Name is the name of the individual who would like to obtain the
// lock, for instance: "Rick Olson".
Name string `json:"name"`
// Email is the email associated with the individual who would
// like to obtain the lock, for instance: "rick@github.com".
Email string `json:"email"`
}
// CurrentCommitter returns a Committer instance populated with the same
// credentials as would be used to author a commit. In particular, the
// "user.name" and "user.email" configuration values are used from the
// config.Config singleton.
func CurrentCommitter() Committer {
name, _ := config.Config.GitConfig("user.name")
email, _ := config.Config.GitConfig("user.email")
return Committer{name, email}
}
// LockRequest encapsulates the payload sent across the API when a client would
// like to obtain a lock against a particular path on a given remote.
type LockRequest struct {
// Path is the path that the client would like to obtain a lock against.
Path string `json:"path"`
// LatestRemoteCommit is the SHA of the last known commit from the
// remote that we are trying to create the lock against, as found in
// `.git/refs/remotes/origin/<name>`.
LatestRemoteCommit string `json:"latest_remote_commit"`
// Committer is the individual that wishes to obtain the lock.
Committer Committer `json:"committer"`
}
// LockResponse encapsulates the information sent over the API in response to
// a `LockRequest`.
type LockResponse struct {
// Lock is the Lock that was optionally created in response to the
// payload that was sent (see above). If the lock already exists, then
// the existing lock is sent in this field instead, and the author of
// that lock remains the same, meaning that the client failed to obtain
// that lock. An HTTP status of "409 - Conflict" is used here.
//
// If the lock was unable to be created, this field will hold the
// zero-value of Lock and the Err field will provide a more detailed set
// of information.
//
// If an error was experienced in creating this lock, then the
// zero-value of Lock should be sent here instead.
Lock *Lock `json:"lock"`
// CommitNeeded holds the minimum commit SHA that the client must have to
// obtain the lock.
CommitNeeded string `json:"commit_needed,omitempty"`
// Err is the optional error that was encountered while trying to create
// the above lock.
Err string `json:"error,omitempty"`
}
// UnlockRequest encapsulates the data sent in an API request to remove a lock.
type UnlockRequest struct {
// Id is the Id of the lock that the user wishes to unlock.
Id string `json:"id"`
// Force determines whether or not the lock should be "forcibly"
// unlocked; that is to say whether or not a given individual should be
// able to break a different individual's lock.
Force bool `json:"force"`
}
// UnlockResponse is the result sent back from the API when asked to remove a
// lock.
type UnlockResponse struct {
// Lock is the lock corresponding to the asked-about lock in the
// `UnlockRequest` (see above). If no matching lock was found, this
// field will take the zero-value of Lock, and Err will be non-nil.
Lock *Lock `json:"lock"`
// Err is an optional field which holds any error that was experienced
// while removing the lock.
Err string `json:"error,omitempty"`
}
// Filter represents a single qualifier to apply against a set of locks.
type Filter struct {
// Property is the property to search against.
// Value is the value that the property must take.
Property, Value string
}
// LockSearchRequest encapsulates the request sent to the server when the client
// would like a list of locks that match the given criteria.
type LockSearchRequest struct {
// Filters is the set of filters to query against. If the client wishes
// to obtain a list of all locks, an empty array should be passed here.
Filters []Filter
// Cursor is an optional field used to tell the server which lock was
// seen last, if scanning through multiple pages of results.
//
// Servers must return a list of locks sorted in reverse chronological
// order, so the Cursor provides a consistent method of viewing all
// locks, even if more were created between two requests.
Cursor string
// Limit is the maximum number of locks to return in a single page.
Limit int
}
// LockList encapsulates a set of Locks.
type LockList struct {
// Locks is the set of locks returned back, typically matching the query
// parameters sent in the LockSearchRequest call. If no locks were matched
// from a given query, then `Locks` will be represented as an empty
// array.
Locks []Lock `json:"locks"`
// NextCursor returns the Id of the Lock the client should update its
// cursor to, if there are multiple pages of results for a particular
// `LockSearchRequest`.
NextCursor string `json:"next_cursor,omitempty"`
// Err populates any error that was encountered during the search. If no
// error was encountered and the operation was successful, this field
// will be the empty string.
Err string `json:"error,omitempty"`
}
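
As a hedged sketch of the cursor contract described above, a caller can page through every lock until the server stops returning `NextCursor` (client construction as in api/client.go; the page size is an arbitrary choice):

```go
package example

import (
	"fmt"

	"github.com/github/git-lfs/api"
)

// printAllLocks walks each page of search results, following NextCursor
// until the server signals that no further pages remain.
func printAllLocks(client *api.Client) error {
	var cursor string
	for {
		schema, page := client.Locks.Search(&api.LockSearchRequest{
			Cursor: cursor,
			Limit:  100, // arbitrary page size
		})
		if _, err := client.Do(schema); err != nil {
			return err
		}
		if page.Err != "" {
			return fmt.Errorf("lock search: %s", page.Err)
		}
		for _, l := range page.Locks {
			fmt.Printf("%s (locked by %s)\n", l.Path, l.Committer.Name)
		}
		if page.NextCursor == "" {
			return nil
		}
		cursor = page.NextCursor
	}
}
```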

220
api/lock_api_test.go Normal file

@@ -0,0 +1,220 @@
package api_test
import (
"testing"
"time"
"github.com/github/git-lfs/api"
"github.com/github/git-lfs/api/schema"
)
var LockService api.LockService
func TestSuccessfullyObtainingALock(t *testing.T) {
got, body := LockService.Lock(new(api.LockRequest))
AssertRequestSchema(t, &api.RequestSchema{
Method: "POST",
Path: "/locks",
Operation: api.UploadOperation,
Body: new(api.LockRequest),
Into: body,
}, got)
}
func TestLockSearchWithFilters(t *testing.T) {
got, body := LockService.Search(&api.LockSearchRequest{
Filters: []api.Filter{
{"branch", "master"},
{"path", "/path/to/file"},
},
})
AssertRequestSchema(t, &api.RequestSchema{
Method: "GET",
Query: map[string]string{
"branch": "master",
"path": "/path/to/file",
},
Path: "/locks",
Operation: api.UploadOperation,
Into: body,
}, got)
}
func TestLockSearchWithNextCursor(t *testing.T) {
got, body := LockService.Search(&api.LockSearchRequest{
Cursor: "some-lock-id",
})
AssertRequestSchema(t, &api.RequestSchema{
Method: "GET",
Query: map[string]string{
"cursor": "some-lock-id",
},
Path: "/locks",
Operation: api.UploadOperation,
Into: body,
}, got)
}
func TestLockSearchWithLimit(t *testing.T) {
got, body := LockService.Search(&api.LockSearchRequest{
Limit: 20,
})
AssertRequestSchema(t, &api.RequestSchema{
Method: "GET",
Query: map[string]string{
"limit": "20",
},
Path: "/locks",
Operation: api.UploadOperation,
Into: body,
}, got)
}
func TestUnlockingALock(t *testing.T) {
got, body := LockService.Unlock("some-lock-id", true)
AssertRequestSchema(t, &api.RequestSchema{
Method: "POST",
Path: "/locks/some-lock-id/unlock",
Operation: api.UploadOperation,
Body: &api.UnlockRequest{
Id: "some-lock-id",
Force: true,
},
Into: body,
}, got)
}
func TestLockRequest(t *testing.T) {
schema.Validate(t, schema.LockRequestSchema, &api.LockRequest{
Path: "/path/to/lock",
LatestRemoteCommit: "deadbeef",
Committer: api.Committer{
Name: "Jane Doe",
Email: "jane@example.com",
},
})
}
func TestLockResponseWithLockedLock(t *testing.T) {
schema.Validate(t, schema.LockResponseSchema, &api.LockResponse{
Lock: &api.Lock{
Id: "some-lock-id",
Path: "/lock/path",
Committer: api.Committer{
Name: "Jane Doe",
Email: "jane@example.com",
},
LockedAt: time.Now(),
},
})
}
func TestLockResponseWithUnlockedLock(t *testing.T) {
schema.Validate(t, schema.LockResponseSchema, &api.LockResponse{
Lock: &api.Lock{
Id: "some-lock-id",
Path: "/lock/path",
Committer: api.Committer{
Name: "Jane Doe",
Email: "jane@example.com",
},
LockedAt: time.Now(),
UnlockedAt: time.Now(),
},
})
}
func TestLockResponseWithError(t *testing.T) {
schema.Validate(t, schema.LockResponseSchema, &api.LockResponse{
Err: "some error",
})
}
func TestLockResponseWithCommitNeeded(t *testing.T) {
schema.Validate(t, schema.LockResponseSchema, &api.LockResponse{
CommitNeeded: "deadbeef",
})
}
func TestLockResponseInvalidWithCommitAndError(t *testing.T) {
schema.Refute(t, schema.LockResponseSchema, &api.LockResponse{
Err: "some error",
CommitNeeded: "deadbeef",
})
}
func TestUnlockRequest(t *testing.T) {
schema.Validate(t, schema.UnlockRequestSchema, &api.UnlockRequest{
Id: "some-lock-id",
Force: false,
})
}
func TestUnlockResponseWithLock(t *testing.T) {
schema.Validate(t, schema.UnlockResponseSchema, &api.UnlockResponse{
Lock: &api.Lock{
Id: "some-lock-id",
},
})
}
func TestUnlockResponseWithError(t *testing.T) {
schema.Validate(t, schema.UnlockResponseSchema, &api.UnlockResponse{
Err: "some-error",
})
}
func TestUnlockResponseDoesNotAllowLockAndError(t *testing.T) {
schema.Refute(t, schema.UnlockResponseSchema, &api.UnlockResponse{
Lock: &api.Lock{
Id: "some-lock-id",
},
Err: "some-error",
})
}
func TestLockListWithLocks(t *testing.T) {
schema.Validate(t, schema.LockListSchema, &api.LockList{
Locks: []api.Lock{
api.Lock{Id: "foo"},
api.Lock{Id: "bar"},
},
})
}
func TestLockListWithNoResults(t *testing.T) {
schema.Validate(t, schema.LockListSchema, &api.LockList{
Locks: []api.Lock{},
})
}
func TestLockListWithNextCursor(t *testing.T) {
schema.Validate(t, schema.LockListSchema, &api.LockList{
Locks: []api.Lock{
api.Lock{Id: "foo"},
api.Lock{Id: "bar"},
},
NextCursor: "baz",
})
}
func TestLockListWithError(t *testing.T) {
schema.Validate(t, schema.LockListSchema, &api.LockList{
Err: "some error",
})
}
func TestLockListWithErrorAndLocks(t *testing.T) {
schema.Refute(t, schema.LockListSchema, &api.LockList{
Locks: []api.Lock{
api.Lock{Id: "foo"},
api.Lock{Id: "bar"},
},
Err: "this isn't possible!",
})
}

80
api/object.go Normal file

@@ -0,0 +1,80 @@
package api
import (
"errors"
"fmt"
"net/http"
"time"
"github.com/github/git-lfs/httputil"
)
type ObjectError struct {
Code int `json:"code"`
Message string `json:"message"`
}
func (e *ObjectError) Error() string {
return fmt.Sprintf("[%d] %s", e.Code, e.Message)
}
type ObjectResource struct {
Oid string `json:"oid,omitempty"`
Size int64 `json:"size"`
Actions map[string]*LinkRelation `json:"actions,omitempty"`
Links map[string]*LinkRelation `json:"_links,omitempty"`
Error *ObjectError `json:"error,omitempty"`
}
// TODO LEGACY API: remove when legacy API removed
func (o *ObjectResource) NewRequest(relation, method string) (*http.Request, error) {
rel, ok := o.Rel(relation)
if !ok {
if relation == "download" {
return nil, errors.New("Object not found on the server.")
}
return nil, fmt.Errorf("No %q action for this object.", relation)
}
req, err := httputil.NewHttpRequest(method, rel.Href, rel.Header)
if err != nil {
return nil, err
}
return req, nil
}
func (o *ObjectResource) Rel(name string) (*LinkRelation, bool) {
var rel *LinkRelation
var ok bool
if o.Actions != nil {
rel, ok = o.Actions[name]
} else {
rel, ok = o.Links[name]
}
return rel, ok
}
// IsExpired returns true if any of the actions in this object resource have an
// ExpiresAt field that is before the given instant "now".
//
// If the object contains no actions, or none of the actions it does contain
// have non-zero ExpiresAt fields, the object is not expired.
func (o *ObjectResource) IsExpired(now time.Time) bool {
for _, a := range o.Actions {
if !a.ExpiresAt.IsZero() && a.ExpiresAt.Before(now) {
return true
}
}
return false
}
type LinkRelation struct {
Href string `json:"href"`
Header map[string]string `json:"header,omitempty"`
ExpiresAt time.Time `json:"expires_at,omitempty"`
}
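
For illustration (not part of this commit), a caller holding a cached ObjectResource could consult IsExpired before reusing an action's href; the one-minute skew allowance is an invented safety margin:

```go
package example

import (
	"time"

	"github.com/github/git-lfs/api"
)

// usableAction returns the named action only while none of the object's
// actions have expired; otherwise the caller should re-query the API for
// fresh hrefs. The one-minute allowance for clock skew is illustrative.
func usableAction(o *api.ObjectResource, name string) (*api.LinkRelation, bool) {
	if o.IsExpired(time.Now().Add(time.Minute)) {
		return nil, false
	}
	return o.Rel(name)
}
```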

49
api/object_test.go Normal file

@@ -0,0 +1,49 @@
package api_test
import (
"testing"
"time"
"github.com/github/git-lfs/api"
"github.com/stretchr/testify/assert"
)
func TestObjectsWithNoActionsAreNotExpired(t *testing.T) {
o := &api.ObjectResource{
Oid: "some-oid",
Actions: map[string]*api.LinkRelation{},
}
assert.False(t, o.IsExpired(time.Now()))
}
func TestObjectsWithZeroValueTimesAreNotExpired(t *testing.T) {
o := &api.ObjectResource{
Oid: "some-oid",
Actions: map[string]*api.LinkRelation{
"upload": &api.LinkRelation{
Href: "http://your-lfs-server.com",
ExpiresAt: time.Time{},
},
},
}
assert.False(t, o.IsExpired(time.Now()))
}
func TestObjectsWithExpirationDatesAreExpired(t *testing.T) {
now := time.Now()
expires := time.Now().Add(-time.Hour)
o := &api.ObjectResource{
Oid: "some-oid",
Actions: map[string]*api.LinkRelation{
"upload": &api.LinkRelation{
Href: "http://your-lfs-server.com",
ExpiresAt: expires,
},
},
}
assert.True(t, o.IsExpired(now))
}

21
api/request_schema.go Normal file

@@ -0,0 +1,21 @@
// NOTE: Subject to change, do not rely on this package from outside git-lfs source
package api
// RequestSchema provides a schema from which to generate sendable requests.
type RequestSchema struct {
// Method is the method that should be used when making a particular API
// call.
Method string
// Path is the relative path that this API call should be made against.
Path string
// Operation is the operation used to determine which endpoint to make
// the request against (see github.com/github/git-lfs/config).
Operation Operation
// Query is the query parameters used in the request URI.
Query map[string]string
// Body is the body of the request.
Body interface{}
// Into is an optional field used to represent the data structure into
// which a response should be serialized.
Into interface{}
}
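Since a RequestSchema is plain data, a service describes a call simply by filling in these fields. A sketch under the assumption of a cursor-paginated lock-search endpoint (the path and query key are illustrative, not real git-lfs routes):

package api

// lockSearchSchema sketches how a service might describe a hypothetical
// lock-search call. op selects the endpoint; into receives the decoded body.
func lockSearchSchema(op Operation, cursor string, into interface{}) *RequestSchema {
	return &RequestSchema{
		Method:    "GET",
		Path:      "/locks", // illustrative path
		Operation: op,
		Query:     map[string]string{"cursor": cursor}, // illustrative query key
		Into:      into,
	}
}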

@ -0,0 +1,23 @@
package api_test
import (
"testing"
"github.com/github/git-lfs/api"
"github.com/stretchr/testify/assert"
)
// AssertRequestSchema encapsulates a single assertion of equality against two
// generated RequestSchema instances.
//
// This assertion is meant only to test that the request schema generated by an
// API service matches what we expect it to be. It does not make use of the
// *api.Client or any particular lifecycle, nor does it spin up a test server.
// All of that behavior is tested at a higher stratum in the client/lifecycle
// tests.
//
// - t is the *testing.T used to perform the assertion.
// - expected is the *api.RequestSchema that we expected to be generated.
// - got is the *api.RequestSchema that was generated by a service.
func AssertRequestSchema(t *testing.T, expected, got *api.RequestSchema) {
assert.Equal(t, expected, got)
}

24
api/response.go Normal file

@ -0,0 +1,24 @@
// NOTE: Subject to change, do not rely on this package from outside git-lfs source
package api
import "io"
// Response is an interface that represents a response returned as a result of
// executing an API call. It is designed to represent itself across multiple
// response types, be they HTTP, SSH, or something else.
//
// The Response interface is meant to be small enough such that it can be
// sufficiently general, but is easily accessible if a caller needs more
// information specific to a particular protocol.
type Response interface {
// Status is a human-readable string representing the status the
// response was returned with.
Status() string
// StatusCode is the numeric code associated with a particular status.
StatusCode() int
// Proto is the protocol with which the response was delivered.
Proto() string
// Body returns an io.ReadCloser containing the contents of the response's
// body.
Body() io.ReadCloser
}
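Any transport can satisfy this interface. As a hedged sketch (not the adapter git-lfs actually ships), an HTTP-backed implementation could simply delegate to *http.Response:

package api

import (
	"io"
	"net/http"
)

// httpResponse is an illustrative adapter from *http.Response to Response.
type httpResponse struct {
	r *http.Response
}

func (h *httpResponse) Status() string      { return h.r.Status }
func (h *httpResponse) StatusCode() int     { return h.r.StatusCode }
func (h *httpResponse) Proto() string       { return h.r.Proto }
func (h *httpResponse) Body() io.ReadCloser { return h.r.Body }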

@ -0,0 +1,65 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"oneOf": [
{
"properties": {
"locks": {
"type": "array",
"items": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"path": {
"type": "string"
},
"committer": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"email": {
"type": "string"
}
},
"required": ["name", "email"]
},
"commit_sha": {
"type": "string"
},
"locked_at": {
"type": "string"
},
"unlocked_at": {
"type": "string"
}
},
"required": ["id", "path", "commit_sha", "locked_at"],
"additionalItems": false
}
},
"next_cursor": {
"type": "string"
}
},
"additionalProperties": false,
"required": ["locks"]
},
{
"properties": {
"locks": {
"type": "null"
},
"error": {
"type": "string"
}
},
"additionalProperties": false,
"required": ["error"]
}
]
}

@ -0,0 +1,25 @@
{
"type": "object",
"properties": {
"path": {
"type": "string"
},
"latest_remote_commit": {
"type": "string"
},
"committer": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"email": {
"type": "string"
}
},
"required": ["name", "email"]
}
},
"required": ["path", "latest_remote_commit", "committer"]
}

@ -0,0 +1,61 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"oneOf": [
{
"properties": {
"error": {
"type": "string"
}
},
"required": ["error"]
},
{
"properties": {
"commit_needed": {
"type": "string"
}
},
"required": ["commit_needed"]
},
{
"properties": {
"lock": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"path": {
"type": "string"
},
"committer": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"email": {
"type": "string"
}
},
"required": ["name", "email"]
},
"commit_sha": {
"type": "string"
},
"locked_at": {
"type": "string"
},
"unlocked_at": {
"type": "string"
}
},
"required": ["id", "path", "commit_sha", "locked_at"]
}
},
"required": ["lock"]
}
]
}

@ -0,0 +1,82 @@
package schema
import (
"encoding/json"
"fmt"
"os"
"path/filepath"
"testing"
"github.com/xeipuuv/gojsonschema"
)
// SchemaValidator uses the gojsonschema library to validate the JSON encoding
// of Go objects against a pre-defined JSON schema.
type SchemaValidator struct {
// Schema is the JSON schema to validate against.
//
// Subject is the instance of the Go type that will be validated.
Schema, Subject gojsonschema.JSONLoader
}
func NewSchemaValidator(t *testing.T, schemaName string, got interface{}) *SchemaValidator {
dir, err := os.Getwd()
if err != nil {
t.Fatal(err)
}
schema := gojsonschema.NewReferenceLoader(fmt.Sprintf(
"file:///%s",
filepath.Join(dir, "schema/", schemaName),
))
marshalled, err := json.Marshal(got)
if err != nil {
t.Fatal(err)
}
subject := gojsonschema.NewStringLoader(string(marshalled))
return &SchemaValidator{
Schema: schema,
Subject: subject,
}
}
// Validate validates a Go object against a JSON schema in a testing environment.
// If the validation fails, then the test will fail after logging all of the
// validation errors experienced by the validator.
func Validate(t *testing.T, schemaName string, got interface{}) {
NewSchemaValidator(t, schemaName, got).Assert(t)
}
// Refute ensures that a particular Go object does not validate against the
// given JSON schema.
//
// If validation against the schema is successful, then the test will fail after
// logging.
func Refute(t *testing.T, schemaName string, got interface{}) {
NewSchemaValidator(t, schemaName, got).Refute(t)
}
// Assert performs the validation assertion against the given *testing.T.
func (v *SchemaValidator) Assert(t *testing.T) {
if result, err := gojsonschema.Validate(v.Schema, v.Subject); err != nil {
t.Fatal(err)
} else if !result.Valid() {
for _, err := range result.Errors() {
t.Logf("Validation error: %s", err.Description())
}
t.Fail()
}
}
// Refute refutes that the given subject will validate against a particular
// schema.
func (v *SchemaValidator) Refute(t *testing.T) {
if result, err := gojsonschema.Validate(v.Schema, v.Subject); err != nil {
t.Fatal(err)
} else if result.Valid() {
t.Fatal("api/schema: expected validation to fail, succeeded")
}
}

24
api/schema/schemas.go Normal file

@ -0,0 +1,24 @@
// Package schema provides utilities for testing API types against predefined
// JSON schemas.
//
// The core philosophy for this package is as follows: when a new API is
// accepted, JSON Schema files should be added to document the types that are
// exchanged over this new API. Those files are placed in the `/api/schema`
// directory, and are used by the schema.Validate function to test that
// particular instances of these types as represented in Go match the predefined
// schema that was proposed as a part of the API.
//
// For ease of use, this file defines several constants, one for each schema
// file's name, to easily pass around during tests.
//
// As briefly described above, to validate that a Go type matches the schema for
// a particular API call, one should use the schema.Validate() function.
package schema
const (
LockListSchema = "lock_list_schema.json"
LockRequestSchema = "lock_request_schema.json"
LockResponseSchema = "lock_response_schema.json"
UnlockRequestSchema = "unlock_request_schema.json"
UnlockResponseSchema = "unlock_response_schema.json"
)
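In a test these constants are passed straight to Validate or Refute. A sketch, assuming it runs from the api package directory (the validator resolves schema files relative to the working directory); field values are illustrative, and the negative case uses a bare map so that required keys are genuinely absent:

package api_test

import (
	"testing"

	"github.com/github/git-lfs/api"
	"github.com/github/git-lfs/api/schema"
)

func TestLockRequestMatchesSchema(t *testing.T) {
	// A fully populated request should satisfy lock_request_schema.json.
	schema.Validate(t, schema.LockRequestSchema, &api.LockRequest{
		Path:               "path/to/file",
		LatestRemoteCommit: "deadbeef",
		Committer:          api.CurrentCommitter(),
	})

	// A document missing latest_remote_commit and committer should not.
	schema.Refute(t, schema.LockRequestSchema, map[string]string{
		"path": "path/to/file",
	})
}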

@ -0,0 +1,15 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"properties": {
"id": {
"type": "string"
},
"force": {
"type": "boolean"
}
},
"required": ["id", "force"],
"additionalItems": false
}

@ -0,0 +1,53 @@
{
"$schema": "http://json-schema.org/draft-04/schema#",
"type": "object",
"oneOf": [
{
"properties": {
"lock": {
"type": "object",
"properties": {
"id": {
"type": "string"
},
"path": {
"type": "string"
},
"committer": {
"type": "object",
"properties": {
"name": {
"type": "string"
},
"email": {
"type": "string"
}
},
"required": ["name", "email"]
},
"commit_sha": {
"type": "string"
},
"locked_at": {
"type": "string"
},
"unlocked_at": {
"type": "string"
}
},
"required": ["id", "path", "commit_sha", "locked_at"]
}
},
"required": ["lock"]
},
{
"properties": {
"error": {
"type": "string"
}
},
"required": ["error"]
}
]
}

580
api/upload_test.go Normal file

@ -0,0 +1,580 @@
package api_test // prevent import cycles
import (
"bytes"
"encoding/json"
"fmt"
"io"
"io/ioutil"
"net/http"
"net/http/httptest"
"os"
"path/filepath"
"strconv"
"testing"
"github.com/github/git-lfs/api"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/httputil"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/test"
)
func TestExistingUpload(t *testing.T) {
SetupTestCredentialsFunc()
repo := test.NewRepo(t)
repo.Pushd()
defer func() {
repo.Popd()
repo.Cleanup()
RestoreCredentialsFunc()
}()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
tmp := tempdir(t)
defer server.Close()
defer os.RemoveAll(tmp)
postCalled := false
mux.HandleFunc("/media/objects", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
if r.Method != "POST" {
w.WriteHeader(405)
return
}
if r.Header.Get("Accept") != api.MediaType {
t.Errorf("Invalid Accept")
}
if r.Header.Get("Content-Type") != api.MediaType {
t.Errorf("Invalid Content-Type")
}
buf := &bytes.Buffer{}
tee := io.TeeReader(r.Body, buf)
reqObj := &api.ObjectResource{}
err := json.NewDecoder(tee).Decode(reqObj)
t.Logf("request header: %v", r.Header)
t.Logf("request body: %s", buf.String())
if err != nil {
t.Fatal(err)
}
if reqObj.Oid != "988881adc9fc3655077dc2d4d757d480b5ea0e11" {
t.Errorf("invalid oid from request: %s", reqObj.Oid)
}
if reqObj.Size != 4 {
t.Errorf("invalid size from request: %d", reqObj.Size)
}
obj := &api.ObjectResource{
Oid: reqObj.Oid,
Size: reqObj.Size,
Actions: map[string]*api.LinkRelation{
"upload": &api.LinkRelation{
Href: server.URL + "/upload",
Header: map[string]string{"A": "1"},
},
"verify": &api.LinkRelation{
Href: server.URL + "/verify",
Header: map[string]string{"B": "2"},
},
},
}
by, err := json.Marshal(obj)
if err != nil {
t.Fatal(err)
}
postCalled = true
head := w.Header()
head.Set("Content-Type", api.MediaType)
head.Set("Content-Length", strconv.Itoa(len(by)))
w.WriteHeader(200)
w.Write(by)
})
defer config.Config.ResetConfig()
config.Config.SetConfig("lfs.url", server.URL+"/media")
oidPath, _ := lfs.LocalMediaPath("988881adc9fc3655077dc2d4d757d480b5ea0e11")
if err := ioutil.WriteFile(oidPath, []byte("test"), 0744); err != nil {
t.Fatal(err)
}
oid := filepath.Base(oidPath)
stat, _ := os.Stat(oidPath)
o, _, err := api.BatchOrLegacySingle(&api.ObjectResource{Oid: oid, Size: stat.Size()}, "upload", []string{"basic"})
if err != nil {
if isDockerConnectionError(err) {
return
}
t.Fatal(err)
}
if o != nil {
t.Errorf("Got an object back")
}
if !postCalled {
t.Errorf("POST not called")
}
}
func TestUploadWithRedirect(t *testing.T) {
SetupTestCredentialsFunc()
repo := test.NewRepo(t)
repo.Pushd()
defer func() {
repo.Popd()
repo.Cleanup()
RestoreCredentialsFunc()
}()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
tmp := tempdir(t)
defer server.Close()
defer os.RemoveAll(tmp)
mux.HandleFunc("/redirect/objects", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
if r.Method != "POST" {
w.WriteHeader(405)
return
}
w.Header().Set("Location", server.URL+"/redirect2/objects")
w.WriteHeader(307)
})
mux.HandleFunc("/redirect2/objects", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
if r.Method != "POST" {
w.WriteHeader(405)
return
}
w.Header().Set("Location", server.URL+"/media/objects")
w.WriteHeader(307)
})
mux.HandleFunc("/media/objects", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
if r.Method != "POST" {
w.WriteHeader(405)
return
}
if r.Header.Get("Accept") != api.MediaType {
t.Errorf("Invalid Accept")
}
if r.Header.Get("Content-Type") != api.MediaType {
t.Errorf("Invalid Content-Type")
}
buf := &bytes.Buffer{}
tee := io.TeeReader(r.Body, buf)
reqObj := &api.ObjectResource{}
err := json.NewDecoder(tee).Decode(reqObj)
t.Logf("request header: %v", r.Header)
t.Logf("request body: %s", buf.String())
if err != nil {
t.Fatal(err)
}
if reqObj.Oid != "988881adc9fc3655077dc2d4d757d480b5ea0e11" {
t.Errorf("invalid oid from request: %s", reqObj.Oid)
}
if reqObj.Size != 4 {
t.Errorf("invalid size from request: %d", reqObj.Size)
}
obj := &api.ObjectResource{
Actions: map[string]*api.LinkRelation{
"upload": &api.LinkRelation{
Href: server.URL + "/upload",
Header: map[string]string{"A": "1"},
},
"verify": &api.LinkRelation{
Href: server.URL + "/verify",
Header: map[string]string{"B": "2"},
},
},
}
by, err := json.Marshal(obj)
if err != nil {
t.Fatal(err)
}
head := w.Header()
head.Set("Content-Type", api.MediaType)
head.Set("Content-Length", strconv.Itoa(len(by)))
w.WriteHeader(200)
w.Write(by)
})
defer config.Config.ResetConfig()
config.Config.SetConfig("lfs.url", server.URL+"/redirect")
oidPath, _ := lfs.LocalMediaPath("988881adc9fc3655077dc2d4d757d480b5ea0e11")
if err := ioutil.WriteFile(oidPath, []byte("test"), 0744); err != nil {
t.Fatal(err)
}
oid := filepath.Base(oidPath)
stat, _ := os.Stat(oidPath)
o, _, err := api.BatchOrLegacySingle(&api.ObjectResource{Oid: oid, Size: stat.Size()}, "upload", []string{"basic"})
if err != nil {
if isDockerConnectionError(err) {
return
}
t.Fatal(err)
}
if o != nil {
t.Fatal("Received an object")
}
}
func TestSuccessfulUploadWithVerify(t *testing.T) {
SetupTestCredentialsFunc()
repo := test.NewRepo(t)
repo.Pushd()
defer func() {
repo.Popd()
repo.Cleanup()
RestoreCredentialsFunc()
}()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
tmp := tempdir(t)
defer server.Close()
defer os.RemoveAll(tmp)
postCalled := false
verifyCalled := false
mux.HandleFunc("/media/objects", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
if r.Method != "POST" {
w.WriteHeader(405)
return
}
if r.Header.Get("Accept") != api.MediaType {
t.Errorf("Invalid Accept")
}
if r.Header.Get("Content-Type") != api.MediaType {
t.Errorf("Invalid Content-Type")
}
buf := &bytes.Buffer{}
tee := io.TeeReader(r.Body, buf)
reqObj := &api.ObjectResource{}
err := json.NewDecoder(tee).Decode(reqObj)
t.Logf("request header: %v", r.Header)
t.Logf("request body: %s", buf.String())
if err != nil {
t.Fatal(err)
}
if reqObj.Oid != "988881adc9fc3655077dc2d4d757d480b5ea0e11" {
t.Errorf("invalid oid from request: %s", reqObj.Oid)
}
if reqObj.Size != 4 {
t.Errorf("invalid size from request: %d", reqObj.Size)
}
obj := &api.ObjectResource{
Oid: reqObj.Oid,
Size: reqObj.Size,
Actions: map[string]*api.LinkRelation{
"upload": &api.LinkRelation{
Href: server.URL + "/upload",
Header: map[string]string{"A": "1"},
},
"verify": &api.LinkRelation{
Href: server.URL + "/verify",
Header: map[string]string{"B": "2"},
},
},
}
by, err := json.Marshal(obj)
if err != nil {
t.Fatal(err)
}
postCalled = true
head := w.Header()
head.Set("Content-Type", api.MediaType)
head.Set("Content-Length", strconv.Itoa(len(by)))
w.WriteHeader(202)
w.Write(by)
})
mux.HandleFunc("/verify", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
if r.Method != "POST" {
w.WriteHeader(405)
return
}
if r.Header.Get("B") != "2" {
t.Error("Invalid B")
}
if r.Header.Get("Content-Type") != api.MediaType {
t.Error("Invalid Content-Type")
}
buf := &bytes.Buffer{}
tee := io.TeeReader(r.Body, buf)
reqObj := &api.ObjectResource{}
err := json.NewDecoder(tee).Decode(reqObj)
t.Logf("request header: %v", r.Header)
t.Logf("request body: %s", buf.String())
if err != nil {
t.Fatal(err)
}
if reqObj.Oid != "988881adc9fc3655077dc2d4d757d480b5ea0e11" {
t.Errorf("invalid oid from request: %s", reqObj.Oid)
}
if reqObj.Size != 4 {
t.Errorf("invalid size from request: %d", reqObj.Size)
}
verifyCalled = true
w.WriteHeader(200)
})
defer config.Config.ResetConfig()
config.Config.SetConfig("lfs.url", server.URL+"/media")
oidPath, _ := lfs.LocalMediaPath("988881adc9fc3655077dc2d4d757d480b5ea0e11")
if err := ioutil.WriteFile(oidPath, []byte("test"), 0744); err != nil {
t.Fatal(err)
}
oid := filepath.Base(oidPath)
stat, _ := os.Stat(oidPath)
o, _, err := api.BatchOrLegacySingle(&api.ObjectResource{Oid: oid, Size: stat.Size()}, "upload", []string{"basic"})
if err != nil {
if isDockerConnectionError(err) {
return
}
t.Fatal(err)
}
if err := api.VerifyUpload(o); err != nil {
t.Fatal(err)
}
if !postCalled {
t.Errorf("POST not called")
}
if !verifyCalled {
t.Errorf("verify not called")
}
}
func TestUploadApiError(t *testing.T) {
SetupTestCredentialsFunc()
repo := test.NewRepo(t)
repo.Pushd()
defer func() {
repo.Popd()
repo.Cleanup()
RestoreCredentialsFunc()
}()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
tmp := tempdir(t)
defer server.Close()
defer os.RemoveAll(tmp)
postCalled := false
mux.HandleFunc("/media/objects", func(w http.ResponseWriter, r *http.Request) {
postCalled = true
w.WriteHeader(404)
})
defer config.Config.ResetConfig()
config.Config.SetConfig("lfs.url", server.URL+"/media")
oidPath, _ := lfs.LocalMediaPath("988881adc9fc3655077dc2d4d757d480b5ea0e11")
if err := ioutil.WriteFile(oidPath, []byte("test"), 0744); err != nil {
t.Fatal(err)
}
oid := filepath.Base(oidPath)
stat, _ := os.Stat(oidPath)
_, _, err := api.BatchOrLegacySingle(&api.ObjectResource{Oid: oid, Size: stat.Size()}, "upload", []string{"basic"})
if err == nil {
t.Fatal("expected an API error, got nil")
}
if errutil.IsFatalError(err) {
t.Fatal("should not panic")
}
if isDockerConnectionError(err) {
return
}
if err.Error() != fmt.Sprintf(httputil.GetDefaultError(404), server.URL+"/media/objects") {
t.Fatalf("Unexpected error: %s", err.Error())
}
if !postCalled {
t.Errorf("POST not called")
}
}
func TestUploadVerifyError(t *testing.T) {
SetupTestCredentialsFunc()
repo := test.NewRepo(t)
repo.Pushd()
defer func() {
repo.Popd()
repo.Cleanup()
RestoreCredentialsFunc()
}()
mux := http.NewServeMux()
server := httptest.NewServer(mux)
tmp := tempdir(t)
defer server.Close()
defer os.RemoveAll(tmp)
postCalled := false
verifyCalled := false
mux.HandleFunc("/media/objects", func(w http.ResponseWriter, r *http.Request) {
t.Logf("Server: %s %s", r.Method, r.URL)
if r.Method != "POST" {
w.WriteHeader(405)
return
}
if r.Header.Get("Accept") != api.MediaType {
t.Errorf("Invalid Accept")
}
if r.Header.Get("Content-Type") != api.MediaType {
t.Errorf("Invalid Content-Type")
}
buf := &bytes.Buffer{}
tee := io.TeeReader(r.Body, buf)
reqObj := &api.ObjectResource{}
err := json.NewDecoder(tee).Decode(reqObj)
t.Logf("request header: %v", r.Header)
t.Logf("request body: %s", buf.String())
if err != nil {
t.Fatal(err)
}
if reqObj.Oid != "988881adc9fc3655077dc2d4d757d480b5ea0e11" {
t.Errorf("invalid oid from request: %s", reqObj.Oid)
}
if reqObj.Size != 4 {
t.Errorf("invalid size from request: %d", reqObj.Size)
}
obj := &api.ObjectResource{
Oid: reqObj.Oid,
Size: reqObj.Size,
Actions: map[string]*api.LinkRelation{
"upload": &api.LinkRelation{
Href: server.URL + "/upload",
Header: map[string]string{"A": "1"},
},
"verify": &api.LinkRelation{
Href: server.URL + "/verify",
Header: map[string]string{"B": "2"},
},
},
}
by, err := json.Marshal(obj)
if err != nil {
t.Fatal(err)
}
postCalled = true
head := w.Header()
head.Set("Content-Type", api.MediaType)
head.Set("Content-Length", strconv.Itoa(len(by)))
w.WriteHeader(202)
w.Write(by)
})
mux.HandleFunc("/verify", func(w http.ResponseWriter, r *http.Request) {
verifyCalled = true
w.WriteHeader(404)
})
defer config.Config.ResetConfig()
config.Config.SetConfig("lfs.url", server.URL+"/media")
oidPath, _ := lfs.LocalMediaPath("988881adc9fc3655077dc2d4d757d480b5ea0e11")
if err := ioutil.WriteFile(oidPath, []byte("test"), 0744); err != nil {
t.Fatal(err)
}
oid := filepath.Base(oidPath)
stat, _ := os.Stat(oidPath)
o, _, err := api.BatchOrLegacySingle(&api.ObjectResource{Oid: oid, Size: stat.Size()}, "upload", []string{"basic"})
if err != nil {
if isDockerConnectionError(err) {
return
}
t.Fatal(err)
}
err = api.VerifyUpload(o)
if err == nil {
t.Fatal("verify should fail")
}
if errutil.IsFatalError(err) {
t.Fatal("should not panic")
}
if err.Error() != fmt.Sprintf(httputil.GetDefaultError(404), server.URL+"/verify") {
t.Fatalf("Unexpected error: %s", err.Error())
}
if !postCalled {
t.Errorf("POST not called")
}
if !verifyCalled {
t.Errorf("verify not called")
}
}

165
api/v1.go Normal file

@ -0,0 +1,165 @@
package api
import (
"net/http"
"net/url"
"path"
"github.com/github/git-lfs/auth"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/httputil"
"github.com/rubyist/tracerx"
)
const (
MediaType = "application/vnd.git-lfs+json; charset=utf-8"
)
// DoLegacyRequest runs the request to the LFS legacy API.
func DoLegacyRequest(req *http.Request) (*http.Response, *ObjectResource, error) {
via := make([]*http.Request, 0, 4)
res, err := httputil.DoHttpRequestWithRedirects(req, via, true)
if err != nil {
return res, nil, err
}
obj := &ObjectResource{}
err = httputil.DecodeResponse(res, obj)
if err != nil {
httputil.SetErrorResponseContext(err, res)
return nil, nil, err
}
return res, obj, nil
}
type batchRequest struct {
TransferAdapterNames []string `json:"transfers"`
Operation string `json:"operation"`
Objects []*ObjectResource `json:"objects"`
}
type batchResponse struct {
TransferAdapterName string `json:"transfer"`
Objects []*ObjectResource `json:"objects"`
}
// DoBatchRequest runs the request to the LFS batch API. If the API returns a
// 401, the repo will be marked as having private access and the request will be
// re-run. When the repo is marked as having private access, credentials will
// be retrieved.
func DoBatchRequest(req *http.Request) (*http.Response, *batchResponse, error) {
res, err := DoRequest(req, config.Config.PrivateAccess(auth.GetOperationForRequest(req)))
if err != nil {
if res != nil && res.StatusCode == 401 {
return res, nil, errutil.NewAuthError(err)
}
return res, nil, err
}
resp := &batchResponse{}
err = httputil.DecodeResponse(res, resp)
if err != nil {
httputil.SetErrorResponseContext(err, res)
}
return res, resp, err
}
// DoRequest runs a request to the LFS API, without parsing the response
// body. If the API returns a 401, the repo will be marked as having private
// access and the request will be re-run. When the repo is marked as having
// private access, credentials will be retrieved.
func DoRequest(req *http.Request, useCreds bool) (*http.Response, error) {
via := make([]*http.Request, 0, 4)
return httputil.DoHttpRequestWithRedirects(req, via, useCreds)
}
func NewRequest(method, oid string) (*http.Request, error) {
objectOid := oid
operation := "download"
if method == "POST" {
if oid != "batch" {
objectOid = ""
operation = "upload"
}
}
endpoint := config.Config.Endpoint(operation)
res, err := auth.SshAuthenticate(endpoint, operation, oid)
if err != nil {
tracerx.Printf("ssh: attempted with %s. Error: %s",
endpoint.SshUserAndHost, err.Error(),
)
return nil, err
}
if len(res.Href) > 0 {
endpoint.Url = res.Href
}
u, err := ObjectUrl(endpoint, objectOid)
if err != nil {
return nil, err
}
req, err := httputil.NewHttpRequest(method, u.String(), res.Header)
if err != nil {
return nil, err
}
req.Header.Set("Accept", MediaType)
return req, nil
}
func NewBatchRequest(operation string) (*http.Request, error) {
endpoint := config.Config.Endpoint(operation)
res, err := auth.SshAuthenticate(endpoint, operation, "")
if err != nil {
tracerx.Printf("ssh: %s attempted with %s. Error: %s",
operation, endpoint.SshUserAndHost, err.Error(),
)
return nil, err
}
if len(res.Href) > 0 {
endpoint.Url = res.Href
}
u, err := ObjectUrl(endpoint, "batch")
if err != nil {
return nil, err
}
req, err := httputil.NewHttpRequest("POST", u.String(), nil)
if err != nil {
return nil, err
}
req.Header.Set("Accept", MediaType)
if res.Header != nil {
for key, value := range res.Header {
req.Header.Set(key, value)
}
}
return req, nil
}
func ObjectUrl(endpoint config.Endpoint, oid string) (*url.URL, error) {
u, err := url.Parse(endpoint.Url)
if err != nil {
return nil, err
}
u.Path = path.Join(u.Path, "objects")
if len(oid) > 0 {
u.Path = path.Join(u.Path, oid)
}
return u, nil
}
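Tying these helpers together, a hedged sketch of a single legacy download request; it assumes lfs.url is already configured, error handling is minimal, and the oid is illustrative:

package main

import (
	"fmt"

	"github.com/github/git-lfs/api"
)

func main() {
	// Build a legacy API request for one object; NewRequest resolves the
	// endpoint, performs SSH auth if needed, and sets the Accept header.
	req, err := api.NewRequest("GET", "988881adc9fc3655077dc2d4d757d480b5ea0e11")
	if err != nil {
		panic(err)
	}

	// Run the request and decode the response into an ObjectResource.
	res, obj, err := api.DoLegacyRequest(req)
	if err != nil {
		panic(err)
	}
	fmt.Println(res.StatusCode, obj.Oid, obj.Size)
}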

46
api/verify.go Normal file

@ -0,0 +1,46 @@
package api
import (
"bytes"
"encoding/json"
"io"
"io/ioutil"
"strconv"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/httputil"
)
// VerifyUpload calls the "verify" API link relation on obj if it exists
func VerifyUpload(obj *ObjectResource) error {
// Do we need to do verify?
if _, ok := obj.Rel("verify"); !ok {
return nil
}
req, err := obj.NewRequest("verify", "POST")
if err != nil {
return errutil.Error(err)
}
by, err := json.Marshal(obj)
if err != nil {
return errutil.Error(err)
}
req.Header.Set("Content-Type", MediaType)
req.Header.Set("Content-Length", strconv.Itoa(len(by)))
req.ContentLength = int64(len(by))
req.Body = ioutil.NopCloser(bytes.NewReader(by))
res, err := DoRequest(req, true)
if err != nil {
return err
}
httputil.LogTransfer("lfs.data.verify", res)
io.Copy(ioutil.Discard, res.Body)
res.Body.Close()
return err
}
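As the upload tests above exercise end to end, verification is a follow-up call on the object the API handed back. A compact sketch of the happy path (the actual byte transfer between the two calls is elided):

package main

import "github.com/github/git-lfs/api"

// uploadAndVerify sketches the happy path: request an upload slot for an
// object, then hit the optional "verify" relation. Transferring the file
// contents to the "upload" href is elided here.
func uploadAndVerify(oid string, size int64) error {
	obj, _, err := api.BatchOrLegacySingle(
		&api.ObjectResource{Oid: oid, Size: size}, "upload", []string{"basic"},
	)
	if err != nil {
		return err
	}
	// VerifyUpload is a no-op when the server offered no "verify" relation.
	return api.VerifyUpload(obj)
}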

3
auth/auth.go Normal file

@ -0,0 +1,3 @@
// Package auth provides common authentication tools
// NOTE: Subject to change, do not rely on this package from outside git-lfs source
package auth

@ -1,37 +1,41 @@
package lfs
package auth
import (
"bytes"
"encoding/base64"
"errors"
"fmt"
"net"
"net/http"
"net/url"
"os"
"os/exec"
"strings"
"github.com/github/git-lfs/vendor/_nuts/github.com/rubyist/tracerx"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/errutil"
"github.com/rubyist/tracerx"
)
// getCreds gets the credentials for LFS API requests and sets the given
// GetCreds gets the credentials for an HTTP request and sets the given
// request's Authorization header with them using Basic Authentication.
// 1. Check the LFS URL for authentication. Ex: http://user:pass@example.com
// 1. Check the URL for authentication. Ex: http://user:pass@example.com
// 2. Check netrc for authentication.
// 3. Check the Git remote URL for authentication IF it's the same scheme and
// host of the LFS URL.
// host of the URL.
// 4. Ask 'git credential' to fill in the password from one of the above URLs.
//
// This prefers the Git remote URL for checking credentials so that users only
// have to enter their passwords once for Git and Git LFS. It uses the same
// URL path that Git does, in case 'useHttpPath' is enabled in the Git config.
func getCreds(req *http.Request) (Creds, error) {
func GetCreds(req *http.Request) (Creds, error) {
if skipCredsCheck(req) {
return nil, nil
}
credsUrl, err := getCredURLForAPI(req)
if err != nil {
return nil, Error(err)
return nil, errutil.Error(err)
}
if credsUrl == nil {
@ -46,8 +50,8 @@ func getCreds(req *http.Request) (Creds, error) {
}
func getCredURLForAPI(req *http.Request) (*url.URL, error) {
operation := getOperationForHttpRequest(req)
apiUrl, err := url.Parse(Config.Endpoint(operation).Url)
operation := GetOperationForRequest(req)
apiUrl, err := url.Parse(config.Config.Endpoint(operation).Url)
if err != nil {
return nil, err
}
@ -64,8 +68,8 @@ func getCredURLForAPI(req *http.Request) (*url.URL, error) {
}
credsUrl := apiUrl
if len(Config.CurrentRemote) > 0 {
if u := Config.GitRemoteUrl(Config.CurrentRemote, operation == "upload"); u != "" {
if len(config.Config.CurrentRemote) > 0 {
if u := config.Config.GitRemoteUrl(config.Config.CurrentRemote, operation == "upload"); u != "" {
gitRemoteUrl, err := url.Parse(u)
if err != nil {
return nil, err
@ -100,7 +104,7 @@ func setCredURLFromNetrc(req *http.Request) bool {
host = hostname
}
machine, err := Config.FindNetrcHost(host)
machine, err := config.Config.FindNetrcHost(host)
if err != nil {
tracerx.Printf("netrc: error finding match for %q: %s", hostname, err)
return false
@ -115,7 +119,7 @@ func setCredURLFromNetrc(req *http.Request) bool {
}
func skipCredsCheck(req *http.Request) bool {
if Config.NtlmAccess(getOperationForHttpRequest(req)) {
if config.Config.NtlmAccess(GetOperationForRequest(req)) {
return false
}
@ -155,7 +159,7 @@ func fillCredentials(req *http.Request, u *url.URL) (Creds, error) {
return creds, err
}
func saveCredentials(creds Creds, res *http.Response) {
func SaveCredentials(creds Creds, res *http.Response) {
if creds == nil {
return
}
@ -185,7 +189,8 @@ func (c Creds) Buffer() *bytes.Buffer {
return buf
}
type credentialFunc func(Creds, string) (Creds, error)
// Credentials function which will be called whenever credentials are requested
type CredentialFunc func(Creds, string) (Creds, error)
func execCredsCommand(input Creds, subCommand string) (Creds, error) {
output := new(bytes.Buffer)
@ -207,7 +212,7 @@ func execCredsCommand(input Creds, subCommand string) (Creds, error) {
}
if _, ok := err.(*exec.ExitError); ok {
if !Config.GetenvBool("GIT_TERMINAL_PROMPT", true) {
if !config.Config.GetenvBool("GIT_TERMINAL_PROMPT", true) {
return nil, fmt.Errorf("Change the GIT_TERMINAL_PROMPT env var to be prompted to enter your credentials for %s://%s.",
input["protocol"], input["host"])
}
@ -235,4 +240,52 @@ func execCredsCommand(input Creds, subCommand string) (Creds, error) {
return creds, nil
}
var execCreds credentialFunc = execCredsCommand
func setRequestAuthFromUrl(req *http.Request, u *url.URL) bool {
if !config.Config.NtlmAccess(GetOperationForRequest(req)) && u.User != nil {
if pass, ok := u.User.Password(); ok {
fmt.Fprintln(os.Stderr, "warning: current Git remote contains credentials")
setRequestAuth(req, u.User.Username(), pass)
return true
}
}
return false
}
func setRequestAuth(req *http.Request, user, pass string) {
if config.Config.NtlmAccess(GetOperationForRequest(req)) {
return
}
if len(user) == 0 && len(pass) == 0 {
return
}
token := fmt.Sprintf("%s:%s", user, pass)
auth := "Basic " + strings.TrimSpace(base64.StdEncoding.EncodeToString([]byte(token)))
req.Header.Set("Authorization", auth)
}
var execCreds CredentialFunc = execCredsCommand
// GetCredentialsFunc returns the current credentials function
func GetCredentialsFunc() CredentialFunc {
return execCreds
}
// SetCredentialsFunc overrides the default credentials function (which is to call git)
// Returns the previous credentials func
func SetCredentialsFunc(f CredentialFunc) CredentialFunc {
oldf := execCreds
execCreds = f
return oldf
}
// GetOperationForRequest determines the operation type for a http.Request
func GetOperationForRequest(req *http.Request) string {
operation := "download"
if req.Method == "POST" || req.Method == "PUT" {
operation = "upload"
}
return operation
}
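Exporting the credential hook is what lets callers bypass `git credential` entirely; the test helpers below use exactly this seam. A sketch of a static helper (the hard-coded values are, of course, illustrative):

package main

import "github.com/github/git-lfs/auth"

// installStaticCreds swaps in a credential function that answers every
// request with fixed values instead of shelling out to `git credential`.
// It returns a restore function, in the spirit of SetupTestCredentialsFunc.
func installStaticCreds() (restore func()) {
	orig := auth.SetCredentialsFunc(func(input auth.Creds, subCommand string) (auth.Creds, error) {
		output := make(auth.Creds)
		for k, v := range input {
			output[k] = v
		}
		output["username"] = "user"   // illustrative
		output["password"] = "monkey" // illustrative
		return output, nil
	})
	return func() { auth.SetCredentialsFunc(orig) }
}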

@ -1,4 +1,4 @@
package lfs
package auth
import (
"encoding/base64"
@ -8,11 +8,14 @@ import (
"strings"
"testing"
"github.com/github/git-lfs/vendor/_nuts/github.com/bgentry/go-netrc/netrc"
"github.com/bgentry/go-netrc/netrc"
"github.com/github/git-lfs/config"
)
func TestGetCredentialsForApi(t *testing.T) {
checkGetCredentials(t, getCreds, []*getCredentialCheck{
SetupTestCredentialsFunc()
checkGetCredentials(t, GetCreds, []*getCredentialCheck{
{
Desc: "simple",
Config: map[string]string{"lfs.url": "https://git-server.com"},
@ -110,6 +113,8 @@ func TestGetCredentialsForApi(t *testing.T) {
SkipAuth: true,
},
})
RestoreCredentialsFunc()
}
type fakeNetrc struct{}
@ -122,7 +127,9 @@ func (n *fakeNetrc) FindMachine(host string) *netrc.Machine {
}
func TestNetrcWithHostAndPort(t *testing.T) {
Config.parsedNetrc = &fakeNetrc{}
SetupTestCredentialsFunc()
config.Config.SetNetrc(&fakeNetrc{})
u, err := url.Parse("http://some-host:123/foo/bar")
if err != nil {
t.Fatal(err)
@ -141,10 +148,14 @@ func TestNetrcWithHostAndPort(t *testing.T) {
if auth != "Basic YWJjOmRlZg==" {
t.Fatalf("bad basic auth: %q", auth)
}
RestoreCredentialsFunc()
}
func TestNetrcWithHost(t *testing.T) {
Config.parsedNetrc = &fakeNetrc{}
SetupTestCredentialsFunc()
config.Config.SetNetrc(&fakeNetrc{})
u, err := url.Parse("http://some-host/foo/bar")
if err != nil {
t.Fatal(err)
@ -163,10 +174,14 @@ func TestNetrcWithHost(t *testing.T) {
if auth != "Basic YWJjOmRlZg==" {
t.Fatalf("bad basic auth: %q", auth)
}
RestoreCredentialsFunc()
}
func TestNetrcWithBadHost(t *testing.T) {
Config.parsedNetrc = &fakeNetrc{}
SetupTestCredentialsFunc()
config.Config.SetNetrc(&fakeNetrc{})
u, err := url.Parse("http://other-host/foo/bar")
if err != nil {
t.Fatal(err)
@ -185,16 +200,18 @@ func TestNetrcWithBadHost(t *testing.T) {
if auth != "" {
t.Fatalf("bad basic auth: %q", auth)
}
RestoreCredentialsFunc()
}
func checkGetCredentials(t *testing.T, getCredsFunc func(*http.Request) (Creds, error), checks []*getCredentialCheck) {
existingRemote := Config.CurrentRemote
existingRemote := config.Config.CurrentRemote
for _, check := range checks {
t.Logf("Checking %q", check.Desc)
Config.CurrentRemote = check.CurrentRemote
config.Config.CurrentRemote = check.CurrentRemote
for key, value := range check.Config {
Config.SetConfig(key, value)
config.Config.SetConfig(key, value)
}
req, err := http.NewRequest(check.Method, check.Href, nil)
@ -259,8 +276,8 @@ func checkGetCredentials(t *testing.T, getCredsFunc func(*http.Request) (Creds,
}
}
Config.ResetConfig()
Config.CurrentRemote = existingRemote
config.Config.ResetConfig()
config.Config.CurrentRemote = existingRemote
}
}
@ -285,8 +302,13 @@ func (c *getCredentialCheck) ExpectCreds() bool {
len(c.Password) > 0 || len(c.Path) > 0
}
var (
TestCredentialsFunc CredentialFunc
origCredentialsFunc CredentialFunc
)
func init() {
execCreds = func(input Creds, subCommand string) (Creds, error) {
TestCredentialsFunc = func(input Creds, subCommand string) (Creds, error) {
output := make(Creds)
for key, value := range input {
output[key] = value
@ -298,3 +320,13 @@ func init() {
return output, nil
}
}
// Override the credentials func for testing
func SetupTestCredentialsFunc() {
origCredentialsFunc = SetCredentialsFunc(TestCredentialsFunc)
}
// Put the original credentials func back
func RestoreCredentialsFunc() {
SetCredentialsFunc(origCredentialsFunc)
}

@ -1,4 +1,4 @@
package lfs
package auth
import (
"bytes"
@ -8,22 +8,23 @@ import (
"path/filepath"
"strings"
"github.com/github/git-lfs/vendor/_nuts/github.com/rubyist/tracerx"
"github.com/github/git-lfs/config"
"github.com/rubyist/tracerx"
)
type sshAuthResponse struct {
type SshAuthResponse struct {
Message string `json:"-"`
Href string `json:"href"`
Header map[string]string `json:"header"`
ExpiresAt string `json:"expires_at"`
}
func sshAuthenticate(endpoint Endpoint, operation, oid string) (sshAuthResponse, error) {
func SshAuthenticate(endpoint config.Endpoint, operation, oid string) (SshAuthResponse, error) {
// This is only used as a fallback where the Git URL is SSH but the server
// doesn't support a full SSH binary protocol; we derive an HTTPS endpoint for
// binaries instead, but still check authentication here via SSH.
res := sshAuthResponse{}
res := SshAuthResponse{}
if len(endpoint.SshUserAndHost) == 0 {
return res, nil
}
@ -60,7 +61,7 @@ func sshAuthenticate(endpoint Endpoint, operation, oid string) (sshAuthResponse,
// Return the executable name for ssh on this machine and the base args
// Base args includes port settings, user/host, everything pre the command to execute
func sshGetExeAndArgs(endpoint Endpoint) (exe string, baseargs []string) {
func sshGetExeAndArgs(endpoint config.Endpoint) (exe string, baseargs []string) {
if len(endpoint.SshUserAndHost) == 0 {
return "", nil
}
@ -68,7 +69,13 @@ func sshGetExeAndArgs(endpoint Endpoint) (exe string, baseargs []string) {
isPlink := false
isTortoise := false
ssh := Config.Getenv("GIT_SSH")
ssh := config.Config.Getenv("GIT_SSH")
cmdArgs := strings.Fields(config.Config.Getenv("GIT_SSH_COMMAND"))
if len(cmdArgs) > 0 {
ssh = cmdArgs[0]
cmdArgs = cmdArgs[1:]
}
if ssh == "" {
ssh = "ssh"
} else {
@ -81,7 +88,11 @@ func sshGetExeAndArgs(endpoint Endpoint) (exe string, baseargs []string) {
isTortoise = strings.EqualFold(basessh, "tortoiseplink")
}
args := make([]string, 0, 4)
args := make([]string, 0, 4+len(cmdArgs))
if len(cmdArgs) > 0 {
args = append(args, cmdArgs...)
}
if isTortoise {
// TortoisePlink requires the -batch argument to behave like ssh/plink
args = append(args, "-batch")

208
auth/ssh_test.go Normal file

@ -0,0 +1,208 @@
package auth
import (
"path/filepath"
"testing"
"github.com/github/git-lfs/config"
"github.com/stretchr/testify/assert"
)
func TestSSHGetExeAndArgsSsh(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
config.Config.Setenv("GIT_SSH_COMMAND", "")
oldGITSSH := config.Config.Getenv("GIT_SSH")
config.Config.Setenv("GIT_SSH", "")
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, "ssh", exe)
assert.Equal(t, []string{"user@foo.com"}, args)
config.Config.Setenv("GIT_SSH", oldGITSSH)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsSshCustomPort(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
endpoint.SshPort = "8888"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
config.Config.Setenv("GIT_SSH_COMMAND", "")
oldGITSSH := config.Config.Getenv("GIT_SSH")
config.Config.Setenv("GIT_SSH", "")
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, "ssh", exe)
assert.Equal(t, []string{"-p", "8888", "user@foo.com"}, args)
config.Config.Setenv("GIT_SSH", oldGITSSH)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsPlink(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
config.Config.Setenv("GIT_SSH_COMMAND", "")
oldGITSSH := config.Config.Getenv("GIT_SSH")
// this will run on non-Windows platforms too but no biggie
plink := filepath.Join("Users", "joebloggs", "bin", "plink.exe")
config.Config.Setenv("GIT_SSH", plink)
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, plink, exe)
assert.Equal(t, []string{"user@foo.com"}, args)
config.Config.Setenv("GIT_SSH", oldGITSSH)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsPlinkCustomPort(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
endpoint.SshPort = "8888"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
config.Config.Setenv("GIT_SSH_COMMAND", "")
oldGITSSH := config.Config.Getenv("GIT_SSH")
// this will run on non-Windows platforms too but no biggie
plink := filepath.Join("Users", "joebloggs", "bin", "plink")
config.Config.Setenv("GIT_SSH", plink)
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, plink, exe)
assert.Equal(t, []string{"-P", "8888", "user@foo.com"}, args)
config.Config.Setenv("GIT_SSH", oldGITSSH)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsTortoisePlink(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
config.Config.Setenv("GIT_SSH_COMMAND", "")
oldGITSSH := config.Config.Getenv("GIT_SSH")
// this will run on non-Windows platforms too but no biggie
plink := filepath.Join("Users", "joebloggs", "bin", "tortoiseplink.exe")
config.Config.Setenv("GIT_SSH", plink)
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, plink, exe)
assert.Equal(t, []string{"-batch", "user@foo.com"}, args)
config.Config.Setenv("GIT_SSH", oldGITSSH)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsTortoisePlinkCustomPort(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
endpoint.SshPort = "8888"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
config.Config.Setenv("GIT_SSH_COMMAND", "")
oldGITSSH := config.Config.Getenv("GIT_SSH")
// this will run on non-Windows platforms too but no biggie
plink := filepath.Join("Users", "joebloggs", "bin", "tortoiseplink")
config.Config.Setenv("GIT_SSH", plink)
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, plink, exe)
assert.Equal(t, []string{"-batch", "-P", "8888", "user@foo.com"}, args)
config.Config.Setenv("GIT_SSH", oldGITSSH)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsSshCommandPrecedence(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
config.Config.Setenv("GIT_SSH_COMMAND", "sshcmd")
oldGITSSH := config.Config.Getenv("GIT_SSH")
config.Config.Setenv("GIT_SSH", "bad")
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, "sshcmd", exe)
assert.Equal(t, []string{"user@foo.com"}, args)
config.Config.Setenv("GIT_SSH", oldGITSSH)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsSshCommandArgs(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
config.Config.Setenv("GIT_SSH_COMMAND", "sshcmd --args 1")
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, "sshcmd", exe)
assert.Equal(t, []string{"--args", "1", "user@foo.com"}, args)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsSshCommandCustomPort(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
endpoint.SshPort = "8888"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
config.Config.Setenv("GIT_SSH_COMMAND", "sshcmd")
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, "sshcmd", exe)
assert.Equal(t, []string{"-p", "8888", "user@foo.com"}, args)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsPlinkCommand(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
// this will run on non-Windows platforms too but no biggie
plink := filepath.Join("Users", "joebloggs", "bin", "plink.exe")
config.Config.Setenv("GIT_SSH_COMMAND", plink)
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, plink, exe)
assert.Equal(t, []string{"user@foo.com"}, args)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsPlinkCommandCustomPort(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
endpoint.SshPort = "8888"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
// this will run on non-Windows platforms too but no biggie
plink := filepath.Join("Users", "joebloggs", "bin", "plink")
config.Config.Setenv("GIT_SSH_COMMAND", plink)
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, plink, exe)
assert.Equal(t, []string{"-P", "8888", "user@foo.com"}, args)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsTortoisePlinkCommand(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
// this will run on non-Windows platforms too but no biggie
plink := filepath.Join("Users", "joebloggs", "bin", "tortoiseplink.exe")
config.Config.Setenv("GIT_SSH_COMMAND", plink)
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, plink, exe)
assert.Equal(t, []string{"-batch", "user@foo.com"}, args)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}
func TestSSHGetExeAndArgsTortoisePlinkCommandCustomPort(t *testing.T) {
endpoint := config.Config.Endpoint("download")
endpoint.SshUserAndHost = "user@foo.com"
endpoint.SshPort = "8888"
oldGITSSHCommand := config.Config.Getenv("GIT_SSH_COMMAND")
// this will run on non-Windows platforms too but no biggie
plink := filepath.Join("Users", "joebloggs", "bin", "tortoiseplink")
config.Config.Setenv("GIT_SSH_COMMAND", plink)
exe, args := sshGetExeAndArgs(endpoint)
assert.Equal(t, plink, exe)
assert.Equal(t, []string{"-batch", "-P", "8888", "user@foo.com"}, args)
config.Config.Setenv("GIT_SSH_COMMAND", oldGITSSHCommand)
}

@ -6,10 +6,13 @@ import (
"os/exec"
"sync"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/rubyist/tracerx"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/github/git-lfs/progress"
"github.com/rubyist/tracerx"
"github.com/spf13/cobra"
)
var (
@ -116,7 +119,7 @@ func checkoutWithIncludeExclude(include []string, exclude []string) {
for _, pointer := range pointers {
totalBytes += pointer.Size
}
progress := lfs.NewProgressMeter(len(pointers), totalBytes, false)
progress := progress.NewProgressMeter(len(pointers), totalBytes, false, config.Config.Getenv("GIT_LFS_PROGRESS"))
progress.Start()
totalBytes = 0
for _, pointer := range pointers {
@ -176,7 +179,7 @@ func checkoutWithChan(in <-chan *lfs.WrappedPointer) {
// Check the content - either missing or still this pointer (not exist is ok)
filepointer, err := lfs.DecodePointerFromFile(pointer.Name)
if err != nil && !os.IsNotExist(err) {
if lfs.IsNotAPointerError(err) {
if errutil.IsNotAPointerError(err) {
// File has non-pointer content, leave it alone
continue
}
@ -195,7 +198,7 @@ func checkoutWithChan(in <-chan *lfs.WrappedPointer) {
err = lfs.PointerSmudgeToFile(cwdfilepath, pointer.Pointer, false, nil)
if err != nil {
if lfs.IsDownloadDeclinedError(err) {
if errutil.IsDownloadDeclinedError(err) {
// acceptable error, data not local (fetch not run or include/exclude)
LoggedError(err, "Skipped checkout for %v, content not local. Use fetch to download.", pointer.Name)
} else {

@ -3,8 +3,10 @@ package commands
import (
"os"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/github/git-lfs/progress"
"github.com/spf13/cobra"
)
var (
@ -19,7 +21,7 @@ func cleanCommand(cmd *cobra.Command, args []string) {
lfs.InstallHooks(false)
var fileName string
var cb lfs.CopyCallback
var cb progress.CopyCallback
var file *os.File
var fileSize int64
if len(args) > 0 {
@ -48,8 +50,8 @@ func cleanCommand(cmd *cobra.Command, args []string) {
defer cleaned.Teardown()
}
if lfs.IsCleanPointerError(err) {
os.Stdout.Write(lfs.ErrorGetContext(err, "bytes").([]byte))
if errutil.IsCleanPointerError(err) {
os.Stdout.Write(errutil.ErrorGetContext(err, "bytes").([]byte))
return
}

@ -6,9 +6,11 @@ import (
"path/filepath"
"strings"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/github/git-lfs/localstorage"
"github.com/github/git-lfs/tools"
"github.com/spf13/cobra"
)
var (
@ -17,7 +19,9 @@ var (
Run: cloneCommand,
}
cloneFlags git.CloneFlags
cloneFlags git.CloneFlags
cloneIncludeArg string
cloneExcludeArg string
)
func cloneCommand(cmd *cobra.Command, args []string) {
@ -37,14 +41,14 @@ func cloneCommand(cmd *cobra.Command, args []string) {
// Either the last argument was a relative or local dir, or we have to
// derive it from the clone URL
clonedir, err := filepath.Abs(args[len(args)-1])
if err != nil || !lfs.DirExists(clonedir) {
if err != nil || !tools.DirExists(clonedir) {
// Derive from clone URL instead
base := path.Base(args[len(args)-1])
if strings.HasSuffix(base, ".git") {
base = base[:len(base)-4]
}
clonedir, _ = filepath.Abs(base)
if !lfs.DirExists(clonedir) {
if !tools.DirExists(clonedir) {
Exit("Unable to find clone dir at %q", clonedir)
}
}
@ -58,22 +62,23 @@ func cloneCommand(cmd *cobra.Command, args []string) {
defer os.Chdir(cwd)
// Also need to derive dirs now
lfs.ResolveDirs()
localstorage.ResolveDirs()
requireInRepo()
// Now just call pull with default args
// Support --origin option to clone
if len(cloneFlags.Origin) > 0 {
lfs.Config.CurrentRemote = cloneFlags.Origin
config.Config.CurrentRemote = cloneFlags.Origin
} else {
lfs.Config.CurrentRemote = "origin"
config.Config.CurrentRemote = "origin"
}
include, exclude := determineIncludeExcludePaths(config.Config, cloneIncludeArg, cloneExcludeArg)
if cloneFlags.NoCheckout || cloneFlags.Bare {
// If --no-checkout or --bare then we shouldn't check out, just fetch instead
fetchRef("HEAD", nil, nil)
fetchRef("HEAD", include, exclude)
} else {
pull(nil, nil)
pull(include, exclude)
}
}
@ -104,5 +109,9 @@ func init() {
cloneCmd.Flags().BoolVarP(&cloneFlags.Verbose, "verbose", "", false, "See 'git clone --help'")
cloneCmd.Flags().BoolVarP(&cloneFlags.Ipv4, "ipv4", "", false, "See 'git clone --help'")
cloneCmd.Flags().BoolVarP(&cloneFlags.Ipv6, "ipv6", "", false, "See 'git clone --help'")
cloneCmd.Flags().StringVarP(&cloneIncludeArg, "include", "I", "", "Include a list of paths")
cloneCmd.Flags().StringVarP(&cloneExcludeArg, "exclude", "X", "", "Exclude a list of paths")
RootCmd.AddCommand(cloneCmd)
}

@ -1,9 +1,10 @@
package commands
import (
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (
@ -14,29 +15,29 @@ var (
)
func envCommand(cmd *cobra.Command, args []string) {
lfs.ShowConfigWarnings = true
config := lfs.Config
endpoint := config.Endpoint("download")
config.ShowConfigWarnings = true
cfg := config.Config
endpoint := cfg.Endpoint("download")
gitV, err := git.Config.Version()
if err != nil {
gitV = "Error getting git version: " + err.Error()
}
Print(lfs.UserAgent)
Print(config.VersionDesc)
Print(gitV)
Print("")
if len(endpoint.Url) > 0 {
Print("Endpoint=%s (auth=%s)", endpoint.Url, config.EndpointAccess(endpoint))
Print("Endpoint=%s (auth=%s)", endpoint.Url, cfg.EndpointAccess(endpoint))
if len(endpoint.SshUserAndHost) > 0 {
Print(" SSH=%s:%s", endpoint.SshUserAndHost, endpoint.SshPath)
}
}
for _, remote := range config.Remotes() {
remoteEndpoint := config.RemoteEndpoint(remote, "download")
Print("Endpoint (%s)=%s (auth=%s)", remote, remoteEndpoint.Url, config.EndpointAccess(remoteEndpoint))
for _, remote := range cfg.Remotes() {
remoteEndpoint := cfg.RemoteEndpoint(remote, "download")
Print("Endpoint (%s)=%s (auth=%s)", remote, remoteEndpoint.Url, cfg.EndpointAccess(remoteEndpoint))
if len(remoteEndpoint.SshUserAndHost) > 0 {
Print(" SSH=%s:%s", remoteEndpoint.SshUserAndHost, remoteEndpoint.SshPath)
}
@ -47,7 +48,7 @@ func envCommand(cmd *cobra.Command, args []string) {
}
for _, key := range []string{"filter.lfs.smudge", "filter.lfs.clean"} {
value, _ := lfs.Config.GitConfig(key)
value, _ := cfg.GitConfig(key)
Print("git config %s = %q", key, value)
}
}

@ -3,8 +3,8 @@ package commands
import (
"fmt"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/github/git-lfs/config"
"github.com/spf13/cobra"
)
var (
@ -31,17 +31,17 @@ func extListCommand(cmd *cobra.Command, args []string) {
return
}
config := lfs.Config
cfg := config.Config
for _, key := range args {
ext := config.Extensions()[key]
ext := cfg.Extensions()[key]
printExt(ext)
}
}
func printAllExts() {
config := lfs.Config
cfg := config.Config
extensions, err := lfs.SortExtensions(config.Extensions())
extensions, err := cfg.SortedExtensions()
if err != nil {
fmt.Println(err)
return
@ -51,7 +51,7 @@ func printAllExts() {
}
}
func printExt(ext lfs.Extension) {
func printExt(ext config.Extension) {
Print("Extension: %s", ext.Name)
Print(" clean = %s", ext.Clean)
Print(" smudge = %s", ext.Smudge)

@ -4,10 +4,12 @@ import (
"fmt"
"time"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/rubyist/tracerx"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/github/git-lfs/progress"
"github.com/rubyist/tracerx"
"github.com/spf13/cobra"
)
var (
@ -32,14 +34,14 @@ func fetchCommand(cmd *cobra.Command, args []string) {
if err := git.ValidateRemote(args[0]); err != nil {
Exit("Invalid remote name %q", args[0])
}
lfs.Config.CurrentRemote = args[0]
config.Config.CurrentRemote = args[0]
} else {
// Actively find the default remote, don't just assume origin
defaultRemote, err := git.DefaultRemote()
if err != nil {
Exit("No default remote")
}
lfs.Config.CurrentRemote = defaultRemote
config.Config.CurrentRemote = defaultRemote
}
if len(args) > 1 {
@ -48,7 +50,7 @@ func fetchCommand(cmd *cobra.Command, args []string) {
Panic(err, "Invalid ref argument: %v", args[1:])
}
refs = resolvedrefs
} else {
} else if !fetchAllArg {
ref, err := git.CurrentRef()
if err != nil {
Panic(err, "Could not fetch")
@ -64,13 +66,13 @@ func fetchCommand(cmd *cobra.Command, args []string) {
if fetchIncludeArg != "" || fetchExcludeArg != "" {
Exit("Cannot combine --all with --include or --exclude")
}
if len(lfs.Config.FetchIncludePaths()) > 0 || len(lfs.Config.FetchExcludePaths()) > 0 {
if len(config.Config.FetchIncludePaths()) > 0 || len(config.Config.FetchExcludePaths()) > 0 {
Print("Ignoring global include / exclude paths to fulfil --all")
}
success = fetchAll()
} else { // !all
includePaths, excludePaths := determineIncludeExcludePaths(fetchIncludeArg, fetchExcludeArg)
includePaths, excludePaths := determineIncludeExcludePaths(config.Config, fetchIncludeArg, fetchExcludeArg)
// Fetch refs sequentially per arg order; duplicates in later refs will be ignored
for _, ref := range refs {
@ -79,14 +81,14 @@ func fetchCommand(cmd *cobra.Command, args []string) {
success = success && s
}
if fetchRecentArg || lfs.Config.FetchPruneConfig().FetchRecentAlways {
if fetchRecentArg || config.Config.FetchPruneConfig().FetchRecentAlways {
s := fetchRecent(refs, includePaths, excludePaths)
success = success && s
}
}
if fetchPruneArg {
verify := lfs.Config.FetchPruneConfig().PruneVerifyRemoteAlways
verify := config.Config.FetchPruneConfig().PruneVerifyRemoteAlways
// no dry-run or verbose options in fetch, assume false
prune(verify, false, false)
}
@ -146,7 +148,7 @@ func fetchPreviousVersions(ref string, since time.Time, include, exclude []strin
// Fetch recent objects based on config
func fetchRecent(alreadyFetchedRefs []*git.Ref, include, exclude []string) bool {
fetchconf := lfs.Config.FetchPruneConfig()
fetchconf := config.Config.FetchPruneConfig()
if fetchconf.FetchRecentRefsDays == 0 && fetchconf.FetchRecentCommitsDays == 0 {
return true
@ -162,7 +164,7 @@ func fetchRecent(alreadyFetchedRefs []*git.Ref, include, exclude []string) bool
if fetchconf.FetchRecentRefsDays > 0 {
Print("Fetching recent branches within %v days", fetchconf.FetchRecentRefsDays)
refsSince := time.Now().AddDate(0, 0, -fetchconf.FetchRecentRefsDays)
refs, err := git.RecentBranches(refsSince, fetchconf.FetchRecentRefsIncludeRemotes, lfs.Config.CurrentRemote)
refs, err := git.RecentBranches(refsSince, fetchconf.FetchRecentRefsIncludeRemotes, config.Config.CurrentRemote)
if err != nil {
Panic(err, "Could not scan for recent refs")
}
@ -214,7 +216,7 @@ func scanAll() []*lfs.WrappedPointer {
// This could be a long process so use the chan version & report progress
Print("Scanning for all objects ever referenced...")
spinner := lfs.NewSpinner()
spinner := progress.NewSpinner()
var numObjs int64
pointerchan, err := lfs.ScanRefsToChan("", "", opts)
if err != nil {
@ -287,6 +289,8 @@ func fetchAndReportToChan(pointers []*lfs.WrappedPointer, include, exclude []str
tracerx.Printf("fetch %v [%v]", p.Name, p.Oid)
q.Add(lfs.NewDownloadable(p))
} else {
// Ensure progress matches
q.Skip(p.Size)
if !passFilter {
tracerx.Printf("Skipping %v [%v], include/exclude filters applied", p.Name, p.Oid)
} else {

@ -7,9 +7,10 @@ import (
"os"
"path/filepath"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (
@ -55,7 +56,7 @@ func doFsck() (bool, error) {
ok := true
for oid, name := range pointerIndex {
path := filepath.Join(lfs.LocalMediaDir, oid[0:2], oid[2:4], oid)
path := lfs.LocalMediaPathReadOnly(oid)
Debug("Examining %v (%v)", name, path)
@ -84,7 +85,7 @@ func doFsck() (bool, error) {
continue
}
badDir := filepath.Join(lfs.LocalGitStorageDir, "lfs", "bad")
badDir := filepath.Join(config.LocalGitStorageDir, "lfs", "bad")
if err := os.MkdirAll(badDir, 0755); err != nil {
return false, err
}

@ -4,7 +4,7 @@ import (
"fmt"
"os"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
// TODO: Remove for Git LFS v2.0 https://github.com/github/git-lfs/issues/839

@ -2,7 +2,7 @@ package commands
import (
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (

113
commands/command_lock.go Normal file

@ -0,0 +1,113 @@
package commands
import (
"fmt"
"os"
"path/filepath"
"strings"
"github.com/github/git-lfs/api"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/spf13/cobra"
)
var (
lockRemote string
lockRemoteHelp = "specify which remote to use when interacting with locks"
// TODO(taylor): consider making this (and the above flag) a property of
// some parent-command, or another similarly less ugly way of handling
// this
setLockRemoteFor = func(c *config.Configuration) {
c.CurrentRemote = lockRemote
}
lockCmd = &cobra.Command{
Use: "lock",
Run: lockCommand,
}
)
func lockCommand(cmd *cobra.Command, args []string) {
setLockRemoteFor(config.Config)
if len(args) == 0 {
Print("Usage: git lfs lock <path>")
return
}
latest, err := git.CurrentRemoteRef()
if err != nil {
Error(err.Error())
Exit("Unable to determine lastest remote ref for branch.")
}
path, err := lockPath(args[0])
if err != nil {
Exit(err.Error())
}
s, resp := API.Locks.Lock(&api.LockRequest{
Path: path,
Committer: api.CurrentCommitter(),
LatestRemoteCommit: latest.Sha,
})
if _, err := API.Do(s); err != nil {
Error(err.Error())
Exit("Error communicating with LFS API.")
}
if len(resp.Err) > 0 {
Error(resp.Err)
Exit("Server unable to create lock.")
}
Print("\n'%s' was locked (%s)", args[0], resp.Lock.Id)
}
// lockPath relativizes the given filepath such that it is relative to the root
// path of the repository it is contained within, taking into account the
// working directory of the caller.
//
// If the root directory, working directory, or file cannot be
// determined/opened, an error will be returned. If the file in question is
// actually a directory, an error will be returned. Otherwise, the cleaned path
// will be returned.
//
// For example:
// - Working directory: /code/foo/bar/
// - Repository root: /code/foo/
// - File to lock: ./baz
// - Resolved path: bar/baz
func lockPath(file string) (string, error) {
repo, err := git.RootDir()
if err != nil {
return "", err
}
wd, err := os.Getwd()
if err != nil {
return "", err
}
abs := filepath.Join(wd, file)
path := strings.TrimPrefix(abs, repo)
if stat, err := os.Stat(abs); err != nil {
return "", err
} else {
if stat.IsDir() {
return "", fmt.Errorf("lfs: cannot lock directory: %s", file)
}
return path[1:], nil
}
}
func init() {
lockCmd.Flags().StringVarP(&lockRemote, "remote", "r", config.Config.CurrentRemote, lockRemoteHelp)
RootCmd.AddCommand(lockCmd)
}
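
To make the relativization concrete, here is a self-contained sketch of the same logic outside the command (POSIX separators assumed; `relativize` is an illustrative helper, not part of git-lfs):

```go
package main

import (
	"fmt"
	"path/filepath"
	"strings"
)

// relativize mirrors the core of lockPath: join the file onto the working
// directory, strip the repository root, then drop the leading separator.
func relativize(repo, wd, file string) string {
	abs := filepath.Join(wd, file)
	return strings.TrimPrefix(abs, repo)[1:]
}

func main() {
	// Matches the example in the lockPath doc comment.
	fmt.Println(relativize("/code/foo", "/code/foo/bar", "./baz")) // bar/baz
}
```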

102
commands/command_locks.go Normal file

@ -0,0 +1,102 @@
package commands
import (
"github.com/github/git-lfs/api"
"github.com/github/git-lfs/config"
"github.com/spf13/cobra"
)
var (
locksCmdFlags = new(locksFlags)
locksCmd = &cobra.Command{
Use: "locks",
Run: locksCommand,
}
)
func locksCommand(cmd *cobra.Command, args []string) {
setLockRemoteFor(config.Config)
filters, err := locksCmdFlags.Filters()
if err != nil {
Error(err.Error())
}
var locks []api.Lock
query := &api.LockSearchRequest{Filters: filters}
for {
s, resp := API.Locks.Search(query)
if _, err := API.Do(s); err != nil {
Error(err.Error())
Exit("Error communicating with LFS API.")
}
if resp.Err != "" {
Error(resp.Err)
}
locks = append(locks, resp.Locks...)
if locksCmdFlags.Limit > 0 && len(locks) > locksCmdFlags.Limit {
locks = locks[:locksCmdFlags.Limit]
break
}
if resp.NextCursor != "" {
query.Cursor = resp.NextCursor
} else {
break
}
}
Print("\n%d lock(s) matched query:", len(locks))
for _, lock := range locks {
Print("%s\t%s <%s>", lock.Path, lock.Committer.Name, lock.Committer.Email)
}
}
func init() {
locksCmd.Flags().StringVarP(&lockRemote, "remote", "r", config.Config.CurrentRemote, lockRemoteHelp)
locksCmd.Flags().StringVarP(&locksCmdFlags.Path, "path", "p", "", "filter locks results matching a particular path")
locksCmd.Flags().StringVarP(&locksCmdFlags.Id, "id", "i", "", "filter locks results matching a particular ID")
locksCmd.Flags().IntVarP(&locksCmdFlags.Limit, "limit", "l", 0, "optional limit for number of results to return")
RootCmd.AddCommand(locksCmd)
}
// locksFlags wraps up and holds all of the flags that can be given to the
// `git lfs locks` command.
type locksFlags struct {
// Path is an optional filter parameter to filter against the lock's
// path
Path string
// Id is an optional filter parameter used to filter against the lock's
// ID.
Id string
// Limit is an optional request parameter sent to the server used to
// limit the number of results returned.
Limit int
}
// Filters produces a slice of api.Filter instances based on the internal state
// of this locksFlags instance. The return value of this method is intended
// to be used with the api.LockSearchRequest type.
func (l *locksFlags) Filters() ([]api.Filter, error) {
filters := make([]api.Filter, 0)
if l.Path != "" {
path, err := lockPath(l.Path)
if err != nil {
return nil, err
}
filters = append(filters, api.Filter{"path", path})
}
if l.Id != "" {
filters = append(filters, api.Filter{"id", l.Id})
}
return filters, nil
}
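
The pagination in `locksCommand` above follows a standard cursor loop. A minimal, self-contained sketch of that pattern (the `search` stub stands in for `API.Locks.Search`; all names here are illustrative):

```go
package main

import "fmt"

type page struct {
	items      []string
	nextCursor string
}

// search is a stand-in for the real API call; it returns two pages.
func search(cursor string) page {
	if cursor == "" {
		return page{items: []string{"a.bin", "b.bin"}, nextCursor: "2"}
	}
	return page{items: []string{"c.bin"}}
}

func main() {
	var all []string
	cursor := ""
	for {
		p := search(cursor)
		all = append(all, p.items...)
		if p.nextCursor == "" {
			break
		}
		cursor = p.nextCursor
	}
	fmt.Println(all) // [a.bin b.bin c.bin]
}
```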

@ -6,8 +6,9 @@ import (
"os"
"path/filepath"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/errutil"
"github.com/spf13/cobra"
)
var (
@ -60,7 +61,7 @@ func logsShowCommand(cmd *cobra.Command, args []string) {
}
name := args[0]
by, err := ioutil.ReadFile(filepath.Join(lfs.LocalLogDir, name))
by, err := ioutil.ReadFile(filepath.Join(config.LocalLogDir, name))
if err != nil {
Exit("Error reading log: %s", name)
}
@ -70,23 +71,23 @@ func logsShowCommand(cmd *cobra.Command, args []string) {
}
func logsClearCommand(cmd *cobra.Command, args []string) {
err := os.RemoveAll(lfs.LocalLogDir)
err := os.RemoveAll(config.LocalLogDir)
if err != nil {
Panic(err, "Error clearing %s", lfs.LocalLogDir)
Panic(err, "Error clearing %s", config.LocalLogDir)
}
Print("Cleared %s", lfs.LocalLogDir)
Print("Cleared %s", config.LocalLogDir)
}
func logsBoomtownCommand(cmd *cobra.Command, args []string) {
Debug("Debug message")
err := lfs.Errorf(errors.New("Inner error message!"), "Error!")
err := errutil.Errorf(errors.New("Inner error message!"), "Error!")
Panic(err, "Welcome to Boomtown")
Debug("Never seen")
}
func sortedLogs() []string {
fileinfos, err := ioutil.ReadDir(lfs.LocalLogDir)
fileinfos, err := ioutil.ReadDir(config.LocalLogDir)
if err != nil {
return []string{}
}

@ -5,7 +5,7 @@ import (
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (

@ -11,7 +11,7 @@ import (
"os/exec"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (

@ -5,9 +5,10 @@ import (
"os"
"strings"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (
@ -52,12 +53,12 @@ func prePushCommand(cmd *cobra.Command, args []string) {
Exit("Invalid remote name %q", args[0])
}
lfs.Config.CurrentRemote = args[0]
config.Config.CurrentRemote = args[0]
ctx := newUploadContext(prePushDryRun)
scanOpt := lfs.NewScanRefsOptions()
scanOpt.ScanMode = lfs.ScanLeftToRemoteMode
scanOpt.RemoteName = lfs.Config.CurrentRemote
scanOpt.RemoteName = config.Config.CurrentRemote
// We can be passed multiple lines of refs
scanner := bufio.NewScanner(os.Stdin)

@ -7,12 +7,13 @@ import (
"sync"
"time"
"github.com/github/git-lfs/vendor/_nuts/github.com/rubyist/tracerx"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/localstorage"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/github/git-lfs/progress"
"github.com/rubyist/tracerx"
"github.com/spf13/cobra"
)
var (
@ -35,7 +36,7 @@ func pruneCommand(cmd *cobra.Command, args []string) {
}
verify := !pruneDoNotVerifyArg &&
(lfs.Config.FetchPruneConfig().PruneVerifyRemoteAlways || pruneVerifyArg)
(config.Config.FetchPruneConfig().PruneVerifyRemoteAlways || pruneVerifyArg)
prune(verify, pruneDryRunArg, pruneVerboseArg)
@ -121,9 +122,9 @@ func prune(verifyRemote, dryRun, verbose bool) {
var verifyc chan string
if verifyRemote {
lfs.Config.CurrentRemote = lfs.Config.FetchPruneConfig().PruneRemoteName
config.Config.CurrentRemote = config.Config.FetchPruneConfig().PruneRemoteName
// build queue now, no estimates or progress output
verifyQueue = lfs.NewDownloadCheckQueue(0, 0, true)
verifyQueue = lfs.NewDownloadCheckQueue(0, 0)
verifiedObjects = lfs.NewStringSetWithCapacity(len(localObjects) / 2)
// this channel is filled with oids for which Check() succeeded & Transfer() was called
@ -142,7 +143,7 @@ func prune(verifyRemote, dryRun, verbose bool) {
if verifyRemote {
tracerx.Printf("VERIFYING: %v", file.Oid)
pointer := lfs.NewPointer(file.Oid, file.Size, nil)
verifyQueue.Add(lfs.NewDownloadCheckable(&lfs.WrappedPointer{Pointer: pointer}))
verifyQueue.Add(lfs.NewDownloadable(&lfs.WrappedPointer{Pointer: pointer}))
}
}
}
@ -222,7 +223,7 @@ func pruneCheckErrors(taskErrors []error) {
func pruneTaskDisplayProgress(progressChan PruneProgressChan, waitg *sync.WaitGroup) {
defer waitg.Done()
spinner := lfs.NewSpinner()
spinner := progress.NewSpinner()
localCount := 0
retainCount := 0
verifyCount := 0
@ -267,7 +268,7 @@ func pruneTaskCollectErrors(outtaskErrors *[]error, errorChan chan error, errorw
}
func pruneDeleteFiles(prunableObjects []string) {
spinner := lfs.NewSpinner()
spinner := progress.NewSpinner()
var problems bytes.Buffer
// In case we fail to delete some
var deletedFiles int
@ -363,7 +364,7 @@ func pruneTaskGetRetainedCurrentAndRecentRefs(retainChan chan string, errorChan
go pruneTaskGetRetainedAtRef(ref.Sha, retainChan, errorChan, waitg)
// Now recent
fetchconf := lfs.Config.FetchPruneConfig()
fetchconf := config.Config.FetchPruneConfig()
if fetchconf.FetchRecentRefsDays > 0 {
pruneRefDays := fetchconf.FetchRecentRefsDays + fetchconf.PruneOffsetDays
tracerx.Printf("PRUNE: Retaining non-HEAD refs within %d (%d+%d) days", pruneRefDays, fetchconf.FetchRecentRefsDays, fetchconf.PruneOffsetDays)
@ -404,7 +405,7 @@ func pruneTaskGetRetainedCurrentAndRecentRefs(retainChan chan string, errorChan
func pruneTaskGetRetainedUnpushed(retainChan chan string, errorChan chan error, waitg *sync.WaitGroup) {
defer waitg.Done()
remoteName := lfs.Config.FetchPruneConfig().PruneRemoteName
remoteName := config.Config.FetchPruneConfig().PruneRemoteName
refchan, err := lfs.ScanUnpushedToChan(remoteName)
if err != nil {
@ -427,7 +428,7 @@ func pruneTaskGetRetainedWorktree(retainChan chan string, errorChan chan error,
// Retain other worktree HEADs too
// Working copy, branch & maybe commit is different but repo is shared
allWorktreeRefs, err := git.GetAllWorkTreeHEADs(lfs.LocalGitStorageDir)
allWorktreeRefs, err := git.GetAllWorkTreeHEADs(config.LocalGitStorageDir)
if err != nil {
errorChan <- err
return

@ -3,9 +3,9 @@ package commands
import (
"fmt"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (
@ -25,17 +25,17 @@ func pullCommand(cmd *cobra.Command, args []string) {
if err := git.ValidateRemote(args[0]); err != nil {
Panic(err, fmt.Sprintf("Invalid remote name '%v'", args[0]))
}
lfs.Config.CurrentRemote = args[0]
config.Config.CurrentRemote = args[0]
} else {
// Actively find the default remote, don't just assume origin
defaultRemote, err := git.DefaultRemote()
if err != nil {
Panic(err, "No default remote")
}
lfs.Config.CurrentRemote = defaultRemote
config.Config.CurrentRemote = defaultRemote
}
pull(determineIncludeExcludePaths(pullIncludeArg, pullExcludeArg))
pull(determineIncludeExcludePaths(config.Config, pullIncludeArg, pullExcludeArg))
}

@ -4,10 +4,11 @@ import (
"io/ioutil"
"os"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/rubyist/tracerx"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/rubyist/tracerx"
"github.com/spf13/cobra"
)
var (
@ -28,7 +29,7 @@ func uploadsBetweenRefs(ctx *uploadContext, left string, right string) {
scanOpt := lfs.NewScanRefsOptions()
scanOpt.ScanMode = lfs.ScanRefsMode
scanOpt.RemoteName = lfs.Config.CurrentRemote
scanOpt.RemoteName = config.Config.CurrentRemote
pointers, err := lfs.ScanRefs(left, right, scanOpt)
if err != nil {
@ -39,11 +40,11 @@ func uploadsBetweenRefs(ctx *uploadContext, left string, right string) {
}
func uploadsBetweenRefAndRemote(ctx *uploadContext, refnames []string) {
tracerx.Printf("Upload refs %v to remote %v", refnames, lfs.Config.CurrentRemote)
tracerx.Printf("Upload refs %v to remote %v", refnames, config.Config.CurrentRemote)
scanOpt := lfs.NewScanRefsOptions()
scanOpt.ScanMode = lfs.ScanLeftToRemoteMode
scanOpt.RemoteName = lfs.Config.CurrentRemote
scanOpt.RemoteName = config.Config.CurrentRemote
if pushAll {
scanOpt.ScanMode = lfs.ScanRefsMode
@ -122,7 +123,7 @@ func pushCommand(cmd *cobra.Command, args []string) {
Exit("Invalid remote name %q", args[0])
}
lfs.Config.CurrentRemote = args[0]
config.Config.CurrentRemote = args[0]
ctx := newUploadContext(pushDryRun)
if useStdin {

@ -6,8 +6,10 @@ import (
"os"
"path/filepath"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (
@ -60,7 +62,7 @@ func smudgeCommand(cmd *cobra.Command, args []string) {
Error(err.Error())
}
cfg := lfs.Config
cfg := config.Config
download := lfs.FilenamePassesIncludeExcludeFilter(filename, cfg.FetchIncludePaths(), cfg.FetchExcludePaths())
if smudgeSkip || cfg.GetenvBool("GIT_LFS_SKIP_SMUDGE", false) {
@ -75,7 +77,7 @@ func smudgeCommand(cmd *cobra.Command, args []string) {
if err != nil {
ptr.Encode(os.Stdout)
// Download declined error is ok to skip if we weren't requesting download
if !(lfs.IsDownloadDeclinedError(err) && !download) {
if !(errutil.IsDownloadDeclinedError(err) && !download) {
LoggedError(err, "Error downloading object: %s (%s)", filename, ptr.Oid)
if !cfg.SkipDownloadErrors() {
os.Exit(2)
@ -89,8 +91,8 @@ func smudgeFilename(args []string, err error) string {
return args[0]
}
if lfs.IsSmudgeError(err) {
return filepath.Base(lfs.ErrorGetContext(err, "FileName").(string))
if errutil.IsSmudgeError(err) {
return filepath.Base(errutil.ErrorGetContext(err, "FileName").(string))
}
return "<unknown file>"

@ -5,7 +5,7 @@ import (
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (

@ -9,26 +9,34 @@ import (
"strings"
"time"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (
prefixBlocklist = []string{
".git", ".lfs",
}
trackCmd = &cobra.Command{
Use: "track",
Run: trackCommand,
}
trackVerboseLoggingFlag bool
trackDryRunFlag bool
)
func trackCommand(cmd *cobra.Command, args []string) {
if lfs.LocalGitDir == "" {
if config.LocalGitDir == "" {
Print("Not a git repository.")
os.Exit(128)
}
if lfs.LocalWorkingDir == "" {
if config.LocalWorkingDir == "" {
Print("This operation must be run in a work tree.")
os.Exit(128)
}
@ -59,9 +67,9 @@ func trackCommand(cmd *cobra.Command, args []string) {
}
wd, _ := os.Getwd()
relpath, err := filepath.Rel(lfs.LocalWorkingDir, wd)
relpath, err := filepath.Rel(config.LocalWorkingDir, wd)
if err != nil {
Exit("Current directory %q outside of git working directory %q.", wd, lfs.LocalWorkingDir)
Exit("Current directory %q outside of git working directory %q.", wd, config.LocalWorkingDir)
}
ArgsLoop:
@ -73,32 +81,63 @@ ArgsLoop:
}
}
encodedArg := strings.Replace(pattern, " ", "[[:space:]]", -1)
_, err := attributesFile.WriteString(fmt.Sprintf("%s filter=lfs diff=lfs merge=lfs -text\n", encodedArg))
if err != nil {
Print("Error adding path %s", pattern)
continue
}
Print("Tracking %s", pattern)
// Make sure any existing git tracked files have their timestamp updated
// so they will now show as modified
// note this is relative to current dir which is how we write .gitattributes
// deliberately not done in parallel as a chan because we'll be marking modified
//
// NOTE: `git ls-files` does not do well with leading slashes.
// Since all `git-lfs track` calls are relative to the root of
// the repository, the leading slash is simply removed for its
// implicit counterpart.
if trackVerboseLoggingFlag {
Print("Searching for files matching pattern: %s", pattern)
}
gittracked, err := git.GetTrackedFiles(pattern)
if err != nil {
LoggedError(err, "Error getting git tracked files")
continue
}
if trackVerboseLoggingFlag {
Print("Found %d files previously added to Git matching pattern: %s", len(gittracked), pattern)
}
now := time.Now()
var matchedBlocklist bool
for _, f := range gittracked {
err := os.Chtimes(f, now, now)
if forbidden := blocklistItem(f); forbidden != "" {
Print("Pattern %s matches forbidden file %s. If you would like to track %s, modify .gitattributes manually.", pattern, f, f)
matchedBlocklist = true
}
}
if matchedBlocklist {
continue
}
if !trackDryRunFlag {
encodedArg := strings.Replace(pattern, " ", "[[:space:]]", -1)
_, err := attributesFile.WriteString(fmt.Sprintf("%s filter=lfs diff=lfs merge=lfs -text\n", encodedArg))
if err != nil {
LoggedError(err, "Error marking %q modified", f)
Print("Error adding path %s", pattern)
continue
}
}
Print("Tracking %s", pattern)
for _, f := range gittracked {
if trackVerboseLoggingFlag || trackDryRunFlag {
Print("Git LFS: touching %s", f)
}
if !trackDryRunFlag {
err := os.Chtimes(f, now, now)
if err != nil {
LoggedError(err, "Error marking %q modified", f)
continue
}
}
}
}
}
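
For reference, a pattern containing spaces written through the encoding above produces a `.gitattributes` line like the following (file name illustrative):

```
my[[:space:]]file.bin filter=lfs diff=lfs merge=lfs -text
```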
@ -122,7 +161,7 @@ func findPaths() []mediaPath {
line := scanner.Text()
if strings.Contains(line, "filter=lfs") {
fields := strings.Fields(line)
relfile, _ := filepath.Rel(lfs.LocalWorkingDir, path)
relfile, _ := filepath.Rel(config.LocalWorkingDir, path)
pattern := fields[0]
if reldir := filepath.Dir(relfile); len(reldir) > 0 {
pattern = filepath.Join(reldir, pattern)
@ -139,12 +178,12 @@ func findPaths() []mediaPath {
func findAttributeFiles() []string {
paths := make([]string, 0)
repoAttributes := filepath.Join(lfs.LocalGitDir, "info", "attributes")
repoAttributes := filepath.Join(config.LocalGitDir, "info", "attributes")
if info, err := os.Stat(repoAttributes); err == nil && !info.IsDir() {
paths = append(paths, repoAttributes)
}
filepath.Walk(lfs.LocalWorkingDir, func(path string, info os.FileInfo, err error) error {
filepath.Walk(config.LocalWorkingDir, func(path string, info os.FileInfo, err error) error {
if err != nil {
return err
}
@ -180,6 +219,23 @@ func needsTrailingLinebreak(filename string) bool {
return !strings.HasSuffix(string(buf[0:bytesRead]), "\n")
}
// blocklistItem returns the name of the blocklist item preventing the given
// file-name from being tracked, or an empty string, if there is none.
func blocklistItem(name string) string {
base := filepath.Base(name)
for _, p := range prefixBlocklist {
if strings.HasPrefix(base, p) {
return p
}
}
return ""
}
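
Note that the match is by prefix of the base name, so ordinary dotfiles whose names start with `.git` or `.lfs` are caught too. Illustrative calls (a hypothetical in-package example, not part of this commit):

```go
func ExampleBlocklistItem() {
	fmt.Println(blocklistItem(".gitignore"))       // .git
	fmt.Println(blocklistItem("sub/.lfsconfig"))   // .lfs
	fmt.Println(blocklistItem("assets/model.bin")) // (empty string)
}
```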
func init() {
trackCmd.Flags().BoolVarP(&trackVerboseLoggingFlag, "verbose", "v", false, "log which files are being tracked and modified")
trackCmd.Flags().BoolVarP(&trackDryRunFlag, "dry-run", "d", false, "preview results of running `git lfs track`")
RootCmd.AddCommand(trackCmd)
}

@ -4,7 +4,7 @@ import (
"fmt"
"os"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
// TODO: Remove for Git LFS v2.0 https://github.com/github/git-lfs/issues/839

@ -2,7 +2,7 @@ package commands
import (
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (

107
commands/command_unlock.go Normal file

@ -0,0 +1,107 @@
package commands
import (
"errors"
"github.com/github/git-lfs/api"
"github.com/github/git-lfs/config"
"github.com/spf13/cobra"
)
var (
// errNoMatchingLocks is an error returned when no matching locks were
// able to be resolved
errNoMatchingLocks = errors.New("lfs: no matching locks found")
// errLockAmbiguous is an error returned when multiple matching locks
// were found
errLockAmbiguous = errors.New("lfs: multiple locks found; ambiguous")
unlockCmdFlags unlockFlags
unlockCmd = &cobra.Command{
Use: "unlock",
Run: unlockCommand,
}
)
// unlockFlags holds the flags given to the `git lfs unlock` command
type unlockFlags struct {
// Id is the Id of the lock that is being unlocked.
Id string
// Force specifies whether or not the `lfs unlock` command was invoked
// with "--force", signifying the user's intent to break another
// individual's lock(s).
Force bool
}
func unlockCommand(cmd *cobra.Command, args []string) {
setLockRemoteFor(config.Config)
var id string
if len(args) != 0 {
path, err := lockPath(args[0])
if err != nil {
Error(err.Error())
}
if id, err = lockIdFromPath(path); err != nil {
Error(err.Error())
}
} else if unlockCmdFlags.Id != "" {
id = unlockCmdFlags.Id
} else {
Error("Usage: git lfs unlock (--id my-lock-id | <path>)")
}
s, resp := API.Locks.Unlock(id, unlockCmdFlags.Force)
if _, err := API.Do(s); err != nil {
Error(err.Error())
Exit("Error communicating with LFS API.")
}
if len(resp.Err) > 0 {
Error(resp.Err)
Exit("Server unable to unlock lock.")
}
Print("'%s' was unlocked (%s)", args[0], resp.Lock.Id)
}
// lockIdFromPath makes a call to the LFS API and resolves the ID for the lock
// located at the given path.
//
// If the API call failed, an error will be returned. If multiple locks matched
// the given path (should not happen during real-world usage), an error will be
// returned. If no locks matched the given path, an error will be returned.
//
// If the API call is successful, and only one lock matches the given filepath,
// then its ID will be returned, along with a value of "nil" for the error.
func lockIdFromPath(path string) (string, error) {
s, resp := API.Locks.Search(&api.LockSearchRequest{
Filters: []api.Filter{
{"path", path},
},
})
if _, err := API.Do(s); err != nil {
return "", err
}
switch len(resp.Locks) {
case 0:
return "", errNoMatchingLocks
case 1:
return resp.Locks[0].Id, nil
default:
return "", errLockAmbiguous
}
}
func init() {
unlockCmd.Flags().StringVarP(&lockRemote, "remote", "r", config.Config.CurrentRemote, lockRemoteHelp)
unlockCmd.Flags().StringVarP(&unlockCmdFlags.Id, "id", "i", "", "unlock a lock by its ID")
unlockCmd.Flags().BoolVarP(&unlockCmdFlags.Force, "force", "f", false, "forcibly break another user's lock(s)")
RootCmd.AddCommand(unlockCmd)
}

@ -6,8 +6,9 @@ import (
"os"
"strings"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (
@ -20,11 +21,11 @@ var (
// untrackCommand takes a list of paths as an argument, and removes each path from the
// default attributes file (.gitattributes), if it exists.
func untrackCommand(cmd *cobra.Command, args []string) {
if lfs.LocalGitDir == "" {
if config.LocalGitDir == "" {
Print("Not a git repository.")
os.Exit(128)
}
if lfs.LocalWorkingDir == "" {
if config.LocalWorkingDir == "" {
Print("This operation must be run in a work tree.")
os.Exit(128)
}

@ -3,9 +3,10 @@ package commands
import (
"regexp"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/spf13/cobra"
)
var (
@ -24,7 +25,7 @@ func updateCommand(cmd *cobra.Command, args []string) {
requireInRepo()
lfsAccessRE := regexp.MustCompile(`\Alfs\.(.*)\.access\z`)
for key, value := range lfs.Config.AllGitConfig() {
for key, value := range config.Config.AllGitConfig() {
matches := lfsAccessRE.FindStringSubmatch(key)
if len(matches) < 2 {
continue

@ -1,8 +1,8 @@
package commands
import (
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/github/git-lfs/httputil"
"github.com/spf13/cobra"
)
var (
@ -15,7 +15,7 @@ var (
)
func versionCommand(cmd *cobra.Command, args []string) {
Print(lfs.UserAgent)
Print(httputil.UserAgent)
if lovesComics {
Print("Nothing may see Gah Lak Tus and survive!")

@ -11,15 +11,23 @@ import (
"strings"
"time"
"github.com/github/git-lfs/api"
"github.com/github/git-lfs/config"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/lfs"
"github.com/github/git-lfs/vendor/_nuts/github.com/spf13/cobra"
"github.com/github/git-lfs/tools"
"github.com/spf13/cobra"
)
// Populate man pages
//go:generate go run ../docs/man/mangen.go
var (
// API is a package-local instance of the API client for use within
// various command implementations.
API = api.NewClient(nil)
Debugging = false
ErrorBuffer = &bytes.Buffer{}
ErrorWriter = io.MultiWriter(os.Stderr, ErrorBuffer)
@ -58,10 +66,10 @@ func Exit(format string, args ...interface{}) {
}
func ExitWithError(err error) {
if Debugging || lfs.IsFatalError(err) {
if Debugging || errutil.IsFatalError(err) {
Panic(err, err.Error())
} else {
if inner := lfs.GetInnerError(err); inner != nil {
if inner := errutil.GetInnerError(err); inner != nil {
Error(inner.Error())
}
Exit(err.Error())
@ -112,9 +120,17 @@ func PipeCommand(name string, args ...string) error {
}
func requireStdin(msg string) {
stat, _ := os.Stdin.Stat()
if (stat.Mode() & os.ModeCharDevice) != 0 {
Error("Cannot read from STDIN. %s", msg)
var out string
stat, err := os.Stdin.Stat()
if err != nil {
out = fmt.Sprintf("Cannot read from STDIN. %s (%s)", msg, err)
} else if (stat.Mode() & os.ModeCharDevice) != 0 {
out = fmt.Sprintf("Cannot read from STDIN. %s", msg)
}
if len(out) > 0 {
Error(out)
os.Exit(1)
}
}
@ -139,11 +155,11 @@ func logPanic(loggedError error) string {
now := time.Now()
name := now.Format("20060102T150405.999999999")
full := filepath.Join(lfs.LocalLogDir, name+".log")
full := filepath.Join(config.LocalLogDir, name+".log")
if err := os.MkdirAll(lfs.LocalLogDir, 0755); err != nil {
if err := os.MkdirAll(config.LocalLogDir, 0755); err != nil {
full = ""
fmt.Fprintf(fmtWriter, "Unable to log panic to %s: %s\n\n", lfs.LocalLogDir, err.Error())
fmt.Fprintf(fmtWriter, "Unable to log panic to %s: %s\n\n", config.LocalLogDir, err.Error())
} else if file, err := os.Create(full); err != nil {
filename := full
full = ""
@ -168,7 +184,7 @@ func logPanicToWriter(w io.Writer, loggedError error) {
gitV = "Error getting git version: " + err.Error()
}
fmt.Fprintln(w, lfs.UserAgent)
fmt.Fprintln(w, config.VersionDesc)
fmt.Fprintln(w, gitV)
// log the command that was run
@ -192,7 +208,7 @@ func logPanicToWriter(w io.Writer, loggedError error) {
}
w.Write(err.Stack())
} else {
w.Write(lfs.Stack())
w.Write(errutil.Stack())
}
fmt.Fprintln(w, "\nENV:")
@ -208,28 +224,9 @@ type ErrorWithStack interface {
Stack() []byte
}
// determineIncludeExcludePaths is a common function to take the string arguments
// for include/exclude and derive slices either from these options or from the
// common global config
func determineIncludeExcludePaths(includeArg, excludeArg string) (include, exclude []string) {
var includePaths, excludePaths []string
if len(includeArg) > 0 {
for _, inc := range strings.Split(includeArg, ",") {
inc = strings.TrimSpace(inc)
includePaths = append(includePaths, inc)
}
} else {
includePaths = lfs.Config.FetchIncludePaths()
}
if len(excludeArg) > 0 {
for _, ex := range strings.Split(excludeArg, ",") {
ex = strings.TrimSpace(ex)
excludePaths = append(excludePaths, ex)
}
} else {
excludePaths = lfs.Config.FetchExcludePaths()
}
return includePaths, excludePaths
func determineIncludeExcludePaths(config *config.Configuration, includeArg, excludeArg string) (include, exclude []string) {
return tools.CleanPathsDefault(includeArg, ",", config.FetchIncludePaths()),
tools.CleanPathsDefault(excludeArg, ",", config.FetchExcludePaths())
}
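
`tools.CleanPathsDefault` itself is not shown in this diff; judging from the tests below and the changelog entry about cleaning trailing `/` from include/exclude paths, its behavior is presumably along these lines (a sketch, not the actual implementation):

```go
// cleanPathsDefault splits a separated list of paths, trims whitespace and
// any trailing "/", and falls back to def when the list is empty.
func cleanPathsDefault(paths, sep string, def []string) []string {
	if len(paths) == 0 {
		return def
	}
	var cleaned []string
	for _, p := range strings.Split(paths, sep) {
		cleaned = append(cleaned, strings.TrimSuffix(strings.TrimSpace(p), "/"))
	}
	return cleaned
}
```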
func printHelp(commandName string) {

29
commands/commands_test.go Normal file

@ -0,0 +1,29 @@
package commands
import (
"testing"
"github.com/github/git-lfs/config"
"github.com/stretchr/testify/assert"
)
var (
cfg = config.NewFromValues(map[string]string{
"lfs.fetchinclude": "/default/include",
"lfs.fetchexclude": "/default/exclude",
})
)
func TestDetermineIncludeExcludePathsReturnsCleanedPaths(t *testing.T) {
i, e := determineIncludeExcludePaths(cfg, "/some/include", "/some/exclude")
assert.Equal(t, []string{"/some/include"}, i)
assert.Equal(t, []string{"/some/exclude"}, e)
}
func TestDetermineIncludeExcludePathsReturnsDefaultsWhenAbsent(t *testing.T) {
i, e := determineIncludeExcludePaths(cfg, "", "")
assert.Equal(t, []string{"/default/include"}, i)
assert.Equal(t, []string{"/default/exclude"}, e)
}

@ -3,6 +3,7 @@ package commands
import (
"os"
"github.com/github/git-lfs/errutil"
"github.com/github/git-lfs/lfs"
)
@ -87,13 +88,13 @@ func (c *uploadContext) checkMissing(missing []*lfs.WrappedPointer, missingSize
return
}
checkQueue := lfs.NewDownloadCheckQueue(numMissing, missingSize, true)
checkQueue := lfs.NewDownloadCheckQueue(numMissing, missingSize)
// this channel is filled with oids for which Check() succeeded & Transfer() was called
transferc := checkQueue.Watch()
for _, p := range missing {
checkQueue.Add(lfs.NewDownloadCheckable(p))
checkQueue.Add(lfs.NewDownloadable(p))
}
done := make(chan int)
@ -127,8 +128,8 @@ func upload(c *uploadContext, unfiltered []*lfs.WrappedPointer) {
for _, p := range pointers {
u, err := lfs.NewUploadable(p.Oid, p.Name)
if err != nil {
if lfs.IsCleanPointerError(err) {
Exit(uploadMissingErr, p.Oid, p.Name, lfs.ErrorGetContext(err, "pointer").(*lfs.Pointer).Oid)
if errutil.IsCleanPointerError(err) {
Exit(uploadMissingErr, p.Oid, p.Name, errutil.ErrorGetContext(err, "pointer").(*lfs.Pointer).Oid)
} else {
ExitWithError(err)
}
@ -141,10 +142,10 @@ func upload(c *uploadContext, unfiltered []*lfs.WrappedPointer) {
q.Wait()
for _, err := range q.Errors() {
if Debugging || lfs.IsFatalError(err) {
if Debugging || errutil.IsFatalError(err) {
LoggedError(err, err.Error())
} else {
if inner := lfs.GetInnerError(err); inner != nil {
if inner := errutil.GetInnerError(err); inner != nil {
Error(inner.Error())
}
Error(err.Error())

@ -1,18 +1,21 @@
package lfs
// Package config collects together all configuration settings
// NOTE: Subject to change, do not rely on this package from outside git-lfs source
package config
import (
"bytes"
"fmt"
"net/http"
"os"
"path/filepath"
"strconv"
"strings"
"sync"
"github.com/ThomsonReutersEikon/go-ntlm/ntlm"
"github.com/bgentry/go-netrc/netrc"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/vendor/_nuts/github.com/ThomsonReutersEikon/go-ntlm/ntlm"
"github.com/github/git-lfs/vendor/_nuts/github.com/bgentry/go-netrc/netrc"
"github.com/github/git-lfs/vendor/_nuts/github.com/rubyist/tracerx"
"github.com/github/git-lfs/tools"
"github.com/rubyist/tracerx"
)
var (
@ -44,16 +47,13 @@ type FetchPruneConfig struct {
}
type Configuration struct {
CurrentRemote string
httpClients map[string]*HttpClient
httpClientsMutex sync.Mutex
redirectingHttpClient *http.Client
ntlmSession ntlm.ClientSession
envVars map[string]string
envVarsMutex sync.Mutex
isTracingHttp bool
isDebuggingHttp bool
isLoggingStats bool
CurrentRemote string
NtlmSession ntlm.ClientSession
envVars map[string]string
envVarsMutex sync.Mutex
IsTracingHttp bool
IsDebuggingHttp bool
IsLoggingStats bool
loading sync.Mutex // guards initialization of gitConfig and remotes
gitConfig map[string]string
@ -72,12 +72,35 @@ func NewConfig() *Configuration {
CurrentRemote: defaultRemote,
envVars: make(map[string]string),
}
c.isTracingHttp = c.GetenvBool("GIT_CURL_VERBOSE", false)
c.isDebuggingHttp = c.GetenvBool("LFS_DEBUG_HTTP", false)
c.isLoggingStats = c.GetenvBool("GIT_LOG_STATS", false)
c.IsTracingHttp = c.GetenvBool("GIT_CURL_VERBOSE", false)
c.IsDebuggingHttp = c.GetenvBool("LFS_DEBUG_HTTP", false)
c.IsLoggingStats = c.GetenvBool("GIT_LOG_STATS", false)
return c
}
// NewFromValues returns a new *config.Configuration instance as if it had
// been read from the .gitconfig specified by "gitconfig" parameter.
//
// NOTE: this method should only be called during testing.
func NewFromValues(gitconfig map[string]string) *Configuration {
config := &Configuration{
gitConfig: make(map[string]string, 0),
}
buf := bytes.NewBuffer([]byte{})
for k, v := range gitconfig {
fmt.Fprintf(buf, "%s=%s\n", k, v)
}
config.readGitConfig(
string(buf.Bytes()),
map[string]bool{},
false,
)
return config
}
func (c *Configuration) Getenv(key string) string {
c.envVarsMutex.Lock()
defer c.envVarsMutex.Unlock()
@ -104,6 +127,27 @@ func (c *Configuration) Setenv(key, value string) error {
return os.Setenv(key, value)
}
func (c *Configuration) GetAllEnv() map[string]string {
c.envVarsMutex.Lock()
defer c.envVarsMutex.Unlock()
ret := make(map[string]string)
for k, v := range c.envVars {
ret[k] = v
}
return ret
}
func (c *Configuration) SetAllEnv(env map[string]string) {
c.envVarsMutex.Lock()
defer c.envVarsMutex.Unlock()
c.envVars = make(map[string]string)
for k, v := range env {
c.envVars[k] = v
}
}
// GetenvBool parses a boolean environment variable and returns the result as a bool.
// If the environment variable is unset, empty, or if the parsing fails,
// the value of def (default) is returned instead.
@ -183,6 +227,22 @@ func (c *Configuration) ConcurrentTransfers() int {
return uploads
}
// BasicTransfersOnly returns whether to only allow "basic" HTTP transfers
// Default is false, including if the lfs.basictransfersonly is invalid
func (c *Configuration) BasicTransfersOnly() bool {
value, ok := c.GitConfig("lfs.basictransfersonly")
if !ok || len(value) == 0 {
return false
}
basicOnly, err := parseConfigBool(value)
if err != nil {
return false
}
return basicOnly
}
func (c *Configuration) BatchTransfer() bool {
value, ok := c.GitConfig("lfs.batch")
if !ok || len(value) == 0 {
@ -233,6 +293,11 @@ func (c *Configuration) FindNetrcHost(host string) (*netrc.Machine, error) {
return c.parsedNetrc.FindMachine(host), nil
}
// Manually override the netrc config
func (c *Configuration) SetNetrc(n netrcfinder) {
c.parsedNetrc = n
}
func (c *Configuration) EndpointAccess(e Endpoint) string {
key := fmt.Sprintf("lfs.%s.access", e.Url)
if v, ok := c.GitConfig(key); ok && len(v) > 0 {
@ -271,6 +336,7 @@ func (c *Configuration) FetchIncludePaths() []string {
c.loadGitConfig()
return c.fetchIncludePaths
}
func (c *Configuration) FetchExcludePaths() []string {
c.loadGitConfig()
return c.fetchExcludePaths
@ -318,6 +384,11 @@ func (c *Configuration) Extensions() map[string]Extension {
return c.extensions
}
// SortedExtensions gets the list of extensions ordered by Priority
func (c *Configuration) SortedExtensions() ([]Extension, error) {
return SortExtensions(c.Extensions())
}
// GitConfigInt parses a git config value and returns it as an integer.
func (c *Configuration) GitConfigInt(key string, def int) int {
s, _ := c.GitConfig(key)
@ -558,15 +629,12 @@ func (c *Configuration) readGitConfig(output string, uniqRemotes map[string]bool
c.gitConfig[key] = value
if len(keyParts) == 2 && keyParts[0] == "lfs" && keyParts[1] == "fetchinclude" {
for _, inc := range strings.Split(value, ",") {
inc = strings.TrimSpace(inc)
c.fetchIncludePaths = append(c.fetchIncludePaths, inc)
}
} else if len(keyParts) == 2 && keyParts[0] == "lfs" && keyParts[1] == "fetchexclude" {
for _, ex := range strings.Split(value, ",") {
ex = strings.TrimSpace(ex)
c.fetchExcludePaths = append(c.fetchExcludePaths, ex)
if len(keyParts) == 2 && keyParts[0] == "lfs" {
switch keyParts[1] {
case "fetchinclude":
c.fetchIncludePaths = tools.CleanPaths(value, ",")
case "fetchexclude":
c.fetchExcludePaths = tools.CleanPaths(value, ",")
}
}
}
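
The git config entries this switch consumes look like the following (paths illustrative); both values are now run through `tools.CleanPaths` instead of the hand-rolled split-and-trim loops removed above:

```
[lfs]
	fetchinclude = textures/, models/high-res/
	fetchexclude = *.psd
```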
@ -587,3 +655,39 @@ var safeKeys = []string{
"lfs.gitprotocol",
"lfs.url",
}
// only used for tests
func (c *Configuration) SetConfig(key, value string) {
if c.loadGitConfig() {
c.loading.Lock()
c.origConfig = make(map[string]string)
for k, v := range c.gitConfig {
c.origConfig[k] = v
}
c.loading.Unlock()
}
c.gitConfig[key] = value
}
func (c *Configuration) ClearConfig() {
if c.loadGitConfig() {
c.loading.Lock()
c.origConfig = make(map[string]string)
for k, v := range c.gitConfig {
c.origConfig[k] = v
}
c.loading.Unlock()
}
c.gitConfig = make(map[string]string)
}
func (c *Configuration) ResetConfig() {
c.loading.Lock()
c.gitConfig = make(map[string]string)
for k, v := range c.origConfig {
c.gitConfig[k] = v
}
c.loading.Unlock()
}

@ -1,10 +1,10 @@
package lfs
package config
import (
"os"
"path/filepath"
"github.com/github/git-lfs/vendor/_nuts/github.com/bgentry/go-netrc/netrc"
"github.com/bgentry/go-netrc/netrc"
)
type netrcfinder interface {

@ -1,5 +1,5 @@
// +build !windows
package lfs
package config
var netrcBasename = ".netrc"

@ -1,9 +1,9 @@
package lfs
package config
import (
"testing"
"github.com/github/git-lfs/vendor/_nuts/github.com/technoweenie/assert"
"github.com/stretchr/testify/assert"
)
func TestEndpointDefaultsToOrigin(t *testing.T) {
@ -336,6 +336,35 @@ func TestConcurrentTransfersNegativeValue(t *testing.T) {
assert.Equal(t, 3, n)
}
func TestBasicTransfersOnlySetValue(t *testing.T) {
config := &Configuration{
gitConfig: map[string]string{
"lfs.basictransfersonly": "true",
},
}
b := config.BasicTransfersOnly()
assert.Equal(t, true, b)
}
func TestBasicTransfersOnlyDefault(t *testing.T) {
config := &Configuration{}
b := config.BasicTransfersOnly()
assert.Equal(t, false, b)
}
func TestBasicTransfersOnlyInvalidValue(t *testing.T) {
config := &Configuration{
gitConfig: map[string]string{
"lfs.basictransfersonly": "wat",
},
}
b := config.BasicTransfersOnly()
assert.Equal(t, false, b)
}
func TestBatch(t *testing.T) {
tests := map[string]bool{
"": true,
@ -363,7 +392,7 @@ func TestBatchAbsentIsTrue(t *testing.T) {
config := &Configuration{}
v := config.BatchTransfer()
assert.Equal(t, true, v)
assert.True(t, v)
}
func TestAccessConfig(t *testing.T) {
@ -438,8 +467,8 @@ func TestAccessAbsentConfig(t *testing.T) {
config := &Configuration{}
assert.Equal(t, "none", config.Access("download"))
assert.Equal(t, "none", config.Access("upload"))
assert.Equal(t, false, config.PrivateAccess("download"))
assert.Equal(t, false, config.PrivateAccess("upload"))
assert.False(t, config.PrivateAccess("download"))
assert.False(t, config.PrivateAccess("upload"))
}
func TestLoadValidExtension(t *testing.T) {
@ -481,10 +510,10 @@ func TestFetchPruneConfigDefault(t *testing.T) {
assert.Equal(t, 7, fp.FetchRecentRefsDays)
assert.Equal(t, 0, fp.FetchRecentCommitsDays)
assert.Equal(t, 3, fp.PruneOffsetDays)
assert.Equal(t, true, fp.FetchRecentRefsIncludeRemotes)
assert.True(t, fp.FetchRecentRefsIncludeRemotes)
assert.Equal(t, 3, fp.PruneOffsetDays)
assert.Equal(t, "origin", fp.PruneRemoteName)
assert.Equal(t, false, fp.PruneVerifyRemoteAlways)
assert.False(t, fp.PruneVerifyRemoteAlways)
}
func TestFetchPruneConfigCustom(t *testing.T) {
@ -502,31 +531,18 @@ func TestFetchPruneConfigCustom(t *testing.T) {
assert.Equal(t, 12, fp.FetchRecentRefsDays)
assert.Equal(t, 9, fp.FetchRecentCommitsDays)
assert.Equal(t, false, fp.FetchRecentRefsIncludeRemotes)
assert.False(t, fp.FetchRecentRefsIncludeRemotes)
assert.Equal(t, 30, fp.PruneOffsetDays)
assert.Equal(t, "upstream", fp.PruneRemoteName)
assert.Equal(t, true, fp.PruneVerifyRemoteAlways)
assert.True(t, fp.PruneVerifyRemoteAlways)
}
// only used for tests
func (c *Configuration) SetConfig(key, value string) {
if c.loadGitConfig() {
c.loading.Lock()
c.origConfig = make(map[string]string)
for k, v := range c.gitConfig {
c.origConfig[k] = v
}
c.loading.Unlock()
}
func TestFetchIncludeExcludesAreCleaned(t *testing.T) {
config := NewFromValues(map[string]string{
"lfs.fetchinclude": "/path/to/clean/",
"lfs.fetchexclude": "/other/path/to/clean/",
})
c.gitConfig[key] = value
}
func (c *Configuration) ResetConfig() {
c.loading.Lock()
c.gitConfig = make(map[string]string)
for k, v := range c.origConfig {
c.gitConfig[k] = v
}
c.loading.Unlock()
assert.Equal(t, []string{"/path/to/clean"}, config.FetchIncludePaths())
assert.Equal(t, []string{"/other/path/to/clean"}, config.FetchExcludePaths())
}

@ -1,4 +1,4 @@
// +build windows
package lfs
package config
var netrcBasename = "_netrc"

@ -1,4 +1,4 @@
package lfs
package config
import (
"fmt"
@ -143,16 +143,3 @@ func endpointFromGitUrl(u *url.URL, c *Configuration) Endpoint {
u.Scheme = c.GitProtocol()
return Endpoint{Url: u.String()}
}
func ObjectUrl(endpoint Endpoint, oid string) (*url.URL, error) {
u, err := url.Parse(endpoint.Url)
if err != nil {
return nil, err
}
u.Path = path.Join(u.Path, "objects")
if len(oid) > 0 {
u.Path = path.Join(u.Path, oid)
}
return u, nil
}

39
config/extension.go Normal file

@ -0,0 +1,39 @@
package config
import (
"fmt"
"sort"
)
// An Extension describes how to manipulate files during smudge and clean.
// Extensions are parsed from the Git config.
type Extension struct {
Name string
Clean string
Smudge string
Priority int
}
// SortExtensions sorts a map of extensions in ascending order by Priority
func SortExtensions(m map[string]Extension) ([]Extension, error) {
pMap := make(map[int]Extension)
priorities := make([]int, 0, len(m))
for n, ext := range m {
p := ext.Priority
if _, exist := pMap[p]; exist {
err := fmt.Errorf("duplicate priority %d on %s", p, n)
return nil, err
}
pMap[p] = ext
priorities = append(priorities, p)
}
sort.Ints(priorities)
result := make([]Extension, len(priorities))
for i, p := range priorities {
result[i] = pMap[p]
}
return result, nil
}
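
A usage sketch in the style of the existing config tests (names and priorities invented; the usual testify imports assumed):

```go
func TestSortExtensionsExample(t *testing.T) {
	m := map[string]Extension{
		"foo": {Name: "foo", Priority: 2},
		"bar": {Name: "bar", Priority: 0},
	}
	sorted, err := SortExtensions(m)
	assert.Nil(t, err)
	assert.Equal(t, "bar", sorted[0].Name)
	assert.Equal(t, "foo", sorted[1].Name)
}
```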

@ -1,9 +1,9 @@
package lfs
package config
import (
"testing"
"github.com/github/git-lfs/vendor/_nuts/github.com/technoweenie/assert"
"github.com/stretchr/testify/assert"
)
func TestSortExtensions(t *testing.T) {
@ -32,7 +32,7 @@ func TestSortExtensions(t *testing.T) {
sorted, err := SortExtensions(m)
assert.Equal(t, err, nil)
assert.Nil(t, err)
for i, ext := range sorted {
name := names[i]
@ -61,6 +61,6 @@ func TestSortExtensionsDuplicatePriority(t *testing.T) {
sorted, err := SortExtensions(m)
assert.NotEqual(t, err, nil)
assert.Equal(t, len(sorted), 0)
assert.NotNil(t, err)
assert.Empty(t, sorted)
}

102
config/filesystem.go Normal file

@ -0,0 +1,102 @@
package config
import (
"fmt"
"io/ioutil"
"os"
"path/filepath"
"strings"
"github.com/github/git-lfs/git"
"github.com/github/git-lfs/tools"
"github.com/rubyist/tracerx"
)
var (
LocalWorkingDir string
LocalGitDir string // parent of index / config / hooks etc
LocalGitStorageDir string // parent of objects/lfs (may be same as LocalGitDir but may not)
LocalReferenceDir string // alternative local media dir (relative to clone reference repo)
LocalLogDir string
)
// Determines the LocalWorkingDir, LocalGitDir etc
func ResolveGitBasicDirs() {
var err error
LocalGitDir, LocalWorkingDir, err = git.GitAndRootDirs()
if err == nil {
// Make sure we've fully evaluated symlinks; failure to do so consistently
// can cause discrepancies
LocalGitDir = tools.ResolveSymlinks(LocalGitDir)
LocalWorkingDir = tools.ResolveSymlinks(LocalWorkingDir)
LocalGitStorageDir = resolveGitStorageDir(LocalGitDir)
LocalReferenceDir = resolveReferenceDir(LocalGitStorageDir)
} else {
errMsg := err.Error()
tracerx.Printf("Error running 'git rev-parse': %s", errMsg)
if !strings.Contains(errMsg, "Not a git repository") {
fmt.Fprintf(os.Stderr, "Error: %s\n", errMsg)
}
}
}
func resolveReferenceDir(gitStorageDir string) string {
cloneReferencePath := filepath.Join(gitStorageDir, "objects", "info", "alternates")
if tools.FileExists(cloneReferencePath) {
buffer, err := ioutil.ReadFile(cloneReferencePath)
if err == nil {
path := strings.TrimSpace(string(buffer[:]))
referenceLfsStoragePath := filepath.Join(filepath.Dir(path), "lfs", "objects")
if tools.DirExists(referenceLfsStoragePath) {
return referenceLfsStoragePath
}
}
}
return ""
}
// From a git dir, get the location that objects are to be stored (we will store lfs alongside)
// Sometimes there is an additional level of redirect on the .git folder by way of a commondir file
// before you find object storage, e.g. 'git worktree' uses this. It redirects to gitdir either by GIT_DIR
// (during setup) or .git/git-dir: (during use), but this only contains the index etc.; the
// objects are found in another git dir via 'commondir'.
func resolveGitStorageDir(gitDir string) string {
commondirpath := filepath.Join(gitDir, "commondir")
if tools.FileExists(commondirpath) && !tools.DirExists(filepath.Join(gitDir, "objects")) {
// no git-dir: prefix in commondir
storage, err := processGitRedirectFile(commondirpath, "")
if err == nil {
return storage
}
}
return gitDir
}
func processGitRedirectFile(file, prefix string) (string, error) {
data, err := ioutil.ReadFile(file)
if err != nil {
return "", err
}
contents := string(data)
var dir string
if len(prefix) > 0 {
if !strings.HasPrefix(contents, prefix) {
// Prefix required & not found
return "", nil
}
dir = strings.TrimSpace(contents[len(prefix):])
} else {
dir = strings.TrimSpace(contents)
}
if !filepath.IsAbs(dir) {
// The .git file contains a relative path.
// Create an absolute path based on the directory the .git file is located in.
dir = filepath.Join(filepath.Dir(file), dir)
}
return dir, nil
}
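
A hypothetical in-package test sketching `processGitRedirectFile` with a `gitdir:`-prefixed redirect file (temp paths invented; assumes the usual io/ioutil, os, path/filepath and testify imports):

```go
func TestProcessGitRedirectFileExample(t *testing.T) {
	dir, _ := ioutil.TempDir("", "redirect")
	defer os.RemoveAll(dir)

	file := filepath.Join(dir, "dotgit")
	ioutil.WriteFile(file, []byte("gitdir: ../repo/.git\n"), 0644)

	resolved, err := processGitRedirectFile(file, "gitdir: ")
	assert.Nil(t, err)
	// Relative paths are resolved against the directory holding the file.
	assert.Equal(t, filepath.Join(dir, "..", "repo", ".git"), resolved)
}
```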

28
config/version.go Normal file

@ -0,0 +1,28 @@
package config
import (
"fmt"
"runtime"
"strings"
)
var (
GitCommit string
Version = "1.2.0"
VersionDesc string
)
func init() {
gitCommit := ""
if len(GitCommit) > 0 {
gitCommit = "; git " + GitCommit
}
VersionDesc = fmt.Sprintf("git-lfs/%s (GitHub; %s %s; go %s%s)",
Version,
runtime.GOOS,
runtime.GOARCH,
strings.Replace(runtime.Version(), "go", "", 1),
gitCommit,
)
}
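
For illustration, a linux/amd64 build of this version with Go 1.6.2 and no embedded commit would yield a `VersionDesc` of roughly:

```
git-lfs/1.2.0 (GitHub; linux amd64; go 1.6.2)
```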

6
debian/changelog vendored

@ -1,3 +1,9 @@
git-lfs (1.2.1) stable; urgency=low
* New upstream version
-- Stephen Gelman <gelman@getbraintree.com> Thu, 2 Jun 2016 14:29:00 +0000
git-lfs (1.2.0) stable; urgency=low
* New upstream version

9
debian/rules vendored

@ -1,7 +1,7 @@
#!/usr/bin/make -f
export DH_OPTIONS
export GO15VENDOREXPERIMENT=0
export GO15VENDOREXPERIMENT=1
#dh_golang doesn't do this for you
ifeq ($(DEB_HOST_ARCH), i386)
@ -12,8 +12,11 @@ endif
BUILD_DIR := obj-$(DEB_HOST_GNU_TYPE)
export DH_GOPKG := github.com/github/git-lfs
export DH_GOLANG_EXCLUDES := test
# DH_GOLANG_EXCLUDES typically incorporates vendor exclusions from script/test
export DH_GOLANG_EXCLUDES := test github.com/olekukonko/ts/* github.com/xeipuuv/gojsonschema/* github.com/technoweenie/go-contentaddressable/* github.com/spf13/cobra/*
export PATH := $(CURDIR)/$(BUILD_DIR)/bin:$(PATH)
# By default, dh_golang only copies *.go and other source - this upsets a bunch of vendor test routines
export DH_GOLANG_INSTALL_ALL := 1
%:
dh $@ --buildsystem=golang --with=golang
@ -25,7 +28,7 @@ override_dh_clean:
override_dh_auto_build:
dh_auto_build
#dh_golang doesn't do anything here in deb 8, and it's needed in both
#dh_golang doesn't do anything here in deb 8, and it's needed in both
if [ "$(DEB_HOST_GNU_TYPE)" != "$(DEB_BUILD_GNU_TYPE)" ]; then\
cp -rf $(BUILD_DIR)/bin/*/* $(BUILD_DIR)/bin/; \
cp -rf $(BUILD_DIR)/pkg/*/* $(BUILD_DIR)/pkg/; \

@ -47,7 +47,7 @@ To only run certain docker images, supply them as arguments, e.g.
./docker/run_docker.bsh debian_7
./docker/run_docker.bsh centos_7 debian_8
./docker/run_docker.bsh centos_{5,6,7}
./docker/run_docker.bsh centos_{6,7}
And only those images will be run.
@ -104,7 +104,7 @@ will be extracted from the images and saved in the `./repo` directory.
./docker/run_dockers.bsh centos_6_env centos_6
This isn't all that important anymore, unless you want ruby2 and the gems used to
make the man pages for Centos 5/6 where ruby2 is not natively available. Calling
make the man pages for CentOS 6, where ruby2 is not natively available. Calling
the environment building images only needs to be done once, they should remain in
the `./repo` directory afterwards.
@ -262,19 +262,9 @@ also work with 4096 bit RSA signing keys.
CentOS will **not** work with subkeys[3]. CentOS 6 and 7 will work with 4096 bit
RSA signing keys
CentOS 5 will **not** work with v4 signatures. The rpms will be so unrecognizable
that it can't even be installed with --nogpgcheck. It should work with RSA on v3.
However, it does not. It builds v3 correctly, but for some reason the GPG check
fails for RSA. CentOS 5 will **not** work with 2048 bit DSA keys... I suspect
2048 is too big for it to fathom. CentOS 5 **will** work with 1024 bit DSA keys.
You can make a 4096 RSA key for Debian and CentOS 6/7 (4 for step 1 above, and
4096 for step 2) and a 1024 DSA key for CentOS 5 (3 for step 1 above, and 1024
for step 2. And be sure to make the key in a CentOS 5 docker.). And only have two
keys... Or optionally a 4096 RSA subkey for Debain
[1]. Or a key for each distro. Dealers choice. You should have least two since
1024 bit isn't that great and you are only using it for CentOS 5 because nothing
else works.
4096 for step 2). And only have two keys... Or optionally a 4096 RSA subkey for Debian
[1]. Or a key for each distro. Dealer's choice.
[1] https://www.digitalocean.com/community/tutorials/how-to-use-reprepro-for-a-secure-package-repository-on-ubuntu-14-04

@ -2,7 +2,7 @@
# Usage:
# ./run_dockers.bsh - Run all the docker images
# ./run_dockers.bsh centos_5 centos_7 - Run only CentOS 5 & 7 image
# ./run_dockers.bsh centos_6 centos_7 - Run only CentOS 6 & 7 image
# ./run_dockers.bsh centos_6 -- bash #Runs bash in the CentOS 6 docker
#
# Special Environment Variables
@ -54,7 +54,7 @@ while [[ $# > 0 ]]; do
done
if [[ ${#IMAGES[@]} == 0 ]]; then
IMAGES=(centos_5 centos_6 centos_7 debian_7 debian_8)
IMAGES=(centos_6 centos_7 debian_7 debian_8)
fi
mkdir -p "${PACKAGE_DIR}"
@ -104,4 +104,4 @@ for IMAGE_NAME in "${IMAGES[@]}"; do
done
echo "Docker run completed successfully!"
echo "Docker run completed successfully!"

@ -4,4 +4,4 @@
CUR_DIR=$(dirname "${BASH_SOURCE[0]}")
${CUR_DIR}/run_dockers.bsh centos_5_test centos_6_test centos_7_test debian_7_test debian_8_test "${@}"
${CUR_DIR}/run_dockers.bsh centos_6_test centos_7_test debian_7_test debian_8_test "${@}"

@ -7,12 +7,12 @@ flow might look like:
push`) objects.
2. The client contacts the Git LFS API to get information about transferring
the objects.
3. The client then transfers the objects through the storage API.
3. The client then transfers the objects through the transfer API.
## HTTP API
The Git LFS HTTP API is responsible for authenticating the user requests, and
returning the proper info for the Git LFS client to use the storage API. By
returning the proper info for the Git LFS client to use the transfer API. By
default, API endpoint is based on the current Git remote. For example:
```
@ -26,11 +26,17 @@ Git LFS endpoint: https://git-server.com/user/repo.git/info/lfs
The [specification](/docs/spec.md) describes how clients can configure the Git LFS
API endpoint manually.
The [original v1 API][v1] is used for Git LFS v0.5.x. An experimental [v1
batch API][batch] is in the works for v0.6.x.
The [legacy v1 API][legacy] was used for Git LFS v0.5.x. From 0.6.x the
[batch API][batch] should always be used where available.
[legacy]: ./v1/http-v1-legacy.md
[batch]: ./v1/http-v1-batch.md
From v1.3 there are [optional extensions to the batch API][batch v1.3] for more
flexible transfers.
[batch v1.3]: ./v1.3/http-v1.3-batch.md
[v1]: ./http-v1-original.md
[batch]: ./http-v1-batch.md
### Authentication
@ -81,43 +87,19 @@ HTTPS is strongly encouraged for all production Git LFS servers.
If your Git LFS server authenticates with NTLM then you must provide your credentials to `git-credential`
in the form `username:DOMAIN\user password:password`.
### Hypermedia
## Transfer API
The Git LFS API uses hypermedia hints to instruct the client what to do next.
These links are included in a `_links` property. Possible relations for objects
include:
* `self` - This points to the object's canonical API URL.
* `download` - Follow this link with a GET and the optional header values to
download the object content.
* `upload` - Upload the object content to this link with a PUT.
* `verify` - Optional link for the client to POST after an upload. If
included, the client assumes this step is required after uploading an object.
See the "Verification" section below for more.
Link relations specify the `href`, and optionally a collection of header values
to set for the request. These are optional, and depend on the backing object
store that the Git LFS API is using.
The Git LFS client will automatically send the same credentials to the followed
link relation as Basic Authentication IF:
* The url scheme, host, and port all match the Git LFS API endpoint's.
* The link relation does not specify an Authorization header.
If the host name is different, the Git LFS API needs to send enough information
through the href query or header values to authenticate the request.
The Git LFS client expects a 200 or 201 response from these hypermedia requests.
Any other response code is treated as an error.
## Storage API
The Storage API is a generic API for directly uploading and downloading objects.
The transfer API is a generic API for directly uploading and downloading objects.
Git LFS servers can offload object storage to cloud services like S3, or
implement it natively in the Git LFS server. The only requirement is that
hypermedia objects from the Git LFS API return the correct headers so clients
can access the storage API properly.
can access the transfer API properly.
As of v1.3 there can be multiple ways files can be uploaded or downloaded, see
the [v1.3 API doc](v1.3/http-v1.3-batch.md) for details. The following section
describes the basic transfer method which is the default.
### The basic transfer API
The client downloads objects through individual GET requests. The URL and any
special headers are provided by a "download" hypermedia link:
@ -125,7 +107,7 @@ special headers are provided by a "download" hypermedia link:
```
# the hypermedia object from the Git LFS API
# {
# "_links": {
# "actions": {
# "download": {
# "href": "https://storage-server.com/OID",
# "header": {
@ -152,7 +134,7 @@ are provided by an "upload" hypermedia link:
```
# the hypermedia object from the Git LFS API
# {
# "_links": {
# "actions": {
# "upload": {
# "href": "https://storage-server.com/OID",
# "header": {

@ -0,0 +1,33 @@
{
"$schema": "http://json-schema.org/draft-04/schema",
"title": "Git LFS HTTPS Batch API v1.3 Request",
"type": "object",
"properties": {
"transfers": {
"type": "array",
"items": {
"type": "string"
}
},
"operation": {
"type": "string"
},
"objects": {
"type": "array",
"items": {
"type": "object",
"properties": {
"oid": {
"type": "string"
},
"size": {
"type": "number"
}
},
"required": ["oid", "size"],
"additionalProperties": false
}
}
},
"required": ["objects", "operation"]
}

@ -0,0 +1,79 @@
{
"$schema": "http://json-schema.org/draft-04/schema",
"title": "Git LFS HTTPS Batch API v1.3 Response",
"type": "object",
"definitions": {
"action": {
"type": "object",
"properties": {
"href": {
"type": "string"
},
"header": {
"type": "object",
"additionalProperties": true
},
"expires_at": {
"type": "string"
}
},
"required": ["href"],
"additionalProperties": false
}
},
"properties": {
"transfer": {
"type": "string"
},
"objects": {
"type": "array",
"items": {
"type": "object",
"properties": {
"oid": {
"type": "string"
},
"size": {
"type": "number"
},
"actions": {
"type": "object",
"properties": {
"download": { "$ref": "#/definitions/action" },
"upload": { "$ref": "#/definitions/action" },
"verify": { "$ref": "#/definitions/action" }
},
"additionalProperties": false
},
"error": {
"type": "object",
"properties": {
"code": {
"type": "number"
},
"message": {
"type": "string"
}
},
"required": ["code", "message"],
"additionalProperties": false
}
},
"required": ["oid", "size"],
"additionalProperties": false
}
},
"message": {
"type": "string"
},
"request_id": {
"type": "string"
},
"documentation_url": {
"type": "string"
}
},
"required": ["objects"]
}

@ -0,0 +1,98 @@
# Git LFS v1.3 Batch API
The Git LFS Batch API extends the [batch v1 API](../v1/http-v1-batch.md), adding
optional fields to the request and response to negotiate transfer methods.
Only the differences from the v1 API will be listed here, everything else is
unchanged.
## POST /objects/batch
### Request changes
The v1.3 request adds an optional top-level field, `transfers`,
which is an array of strings naming the transfer methods this client supports.
The transfer methods are in decreasing order of preference.
The default transfer method, named "basic", simply uploads and downloads using
plain HTTP `PUT` and `GET`; it is always supported and is implied.
Example request:
```
> POST https://git-lfs-server.com/objects/batch HTTP/1.1
> Accept: application/vnd.git-lfs+json
> Content-Type: application/vnd.git-lfs+json
> Authorization: Basic ... (if authentication is needed)
>
> {
> "operation": "upload",
> "transfers": [ "tus.io", "basic" ],
> "objects": [
> {
> "oid": "1111111",
> "size": 123
> }
> ]
> }
>
```
In the example above `"basic"` is included for illustration but is actually
unnecessary since it is always the fallback. The client is indicating that it is
able to upload using the resumable `"tus.io"` method, should the server support
that. The server may include a chosen method in the response, which must be
one of those listed, or `"basic"`.
### Response changes
If the server understands the new optional `transfers` field in the request, it
should determine which of the named transfer methods it also supports, and
include the chosen one in the response in the new `transfer` field. If only
`"basic"` is supported, the field is optional since that is the default.
If the server supports more than one of the named transfer methods, it should
pick the first one it supports, since the client will list them in order of
preference.
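As an illustration, the server-side selection can be a simple first-match scan; this sketch is ours, not part of any implementation:

```go
// pickTransfer returns the first method from the client's
// preference-ordered `transfers` list that the server supports, falling
// back to "basic", which is always implied.
func pickTransfer(clientPrefs []string, supported map[string]bool) string {
	for _, t := range clientPrefs {
		if supported[t] {
			return t
		}
	}
	return "basic"
}
```

For the request above, `pickTransfer([]string{"tus.io", "basic"}, map[string]bool{"tus.io": true})` returns `"tus.io"`.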
Example response to the previous request if the server also supports `tus.io`:
```
< HTTP/1.1 200 Ok
< Content-Type: application/vnd.git-lfs+json
<
< {
< "transfer": "tus.io",
< "objects": [
< {
< "oid": "1111111",
< "size": 123,
< "actions": {
< "upload": {
< "href": "https://some-tus-io-upload.com",
< "header": {
< "Key": "value"
< }
< },
< "verify": {
< "href": "https://some-callback.com",
< "header": {
< "Key": "value"
< }
< }
< }
< }
< ]
< }
```
Apart from naming the chosen transfer method in `transfer`, the server should
also return upload / download links in the `href` field which are compatible
with the method chosen. If the server supports more than one method (and it's
advisable that the server implement at least `"basic"` in all cases in addition
to more sophisticated methods, to support older clients), the `href` is likely
to be different for each.
## Updated schemas
* [Batch request](./http-v1.3-batch-request-schema.json)
* [Batch response](./http-v1.3-batch-response-schema.json)

@ -23,6 +23,5 @@
}
}
},
"required": ["objects", "operation"],
"additionalProperties": false
"required": ["objects", "operation"]
}

@ -72,6 +72,5 @@
"type": "string"
},
},
"required": ["objects"],
"additionalProperties": false
"required": ["objects"]
}

@ -1,12 +1,12 @@
# Git LFS v1 Batch API
The Git LFS Batch API works like the [original v1 API][v1], but uses a single
The Git LFS Batch API works like the [legacy v1 API][v1], but uses a single
endpoint that accepts multiple OIDs. All requests should have the following:
Accept: application/vnd.git-lfs+json
Content-Type: application/vnd.git-lfs+json
[v1]: ./http-v1-original.md
[v1]: ./http-v1-legacy.md
This is a newer API introduced in Git LFS v0.5.2, and made the default in
Git LFS v0.6.0. The client automatically detects if the server does not
@ -21,7 +21,7 @@ manually through the Git config:
## Authentication
The Batch API authenticates the same as the original v1 API with one exception:
The Batch API authenticates the same as the legacy v1 API with one exception:
The client will attempt to make requests without any authentication. This
slight change allows anonymous access to public Git LFS objects. The client
stores the result of this in the `lfs.<url>.access` config setting, where <url>
@ -285,3 +285,65 @@ track usage.
Some server errors may trigger the client to retry requests, such as 500, 502,
503, and 504.
## Extended upload & download protocols
By default it is assumed that all transfers (uploads & downloads) will be
performed via a singular HTTP resource and that the URLs provided in the
response are implemented as such. In this case each object is uploaded or
downloaded in its entirety through that one URL.
However, in order to support more advanced transfer features such as resuming,
chunking or delegation to other services, the client can indicate in the request
its ability to handle other transfer mechanisms.
Here's a possible example:
```json
{
"operation": "upload",
"accept-transfers": "tus,resumable.js",
"objects": [
{
"oid": "1111111",
"size": 123
}
]
}
```
The `accept-transfers` field is a comma-separated list of identifiers which the
client is able to support, in order of preference. In this hypothetical example
the client is indicating it is able to support resumable uploads using either
the tus.io protocol, or the resumable.js protocol. It is implicit that basic
HTTP resources are always supported regardless of the presence or content of
this item.
If the server is able to support one of the extended transfer mechanisms, it can
provide resources specific to that mechanism in the response, with an indicator
of which one it picked:
```json
{
"transfer": "tus",
"objects": [
{
"oid": "1111111",
"size": 123,
"actions": {
"upload": {
"href": "https://my.tus.server.com/files/1111111"
}
}
}
]
}
```
Here the server has chosen [tus.io](http://tus.io); the underlying transport is
still HTTP, so the `href` is still a web URL, but the exact sequence of calls
and the headers sent & received differ from a single-resource upload. Other
transfers may use other protocols.
__Note__: these API features are provided for future extension and the examples
shown may not represent actual features present in the current client.

@ -1,6 +1,6 @@
# Git LFS v1 Original API
# Git LFS v1 Legacy API
This describes the original API for Git LFS v0.5.x. It's already deprecated by
This describes the legacy API for Git LFS v0.5.x. It's already deprecated by
the [batch API][batch]. All requests should have:
Accept: application/vnd.git-lfs+json

@ -19,6 +19,23 @@ downloads performed by 'git lfs pull'.
All options supported by 'git clone'
* `-I` <paths> `--include=`<paths>:
See [INCLUDE AND EXCLUDE]
* `-X` <paths> `--exclude=`<paths>:
See [INCLUDE AND EXCLUDE]
## INCLUDE AND EXCLUDE
You can configure Git LFS to only fetch objects to satisfy references in certain
paths of the repo, and/or to exclude certain paths of the repo, to reduce the
time you spend downloading things you do not use.
In `.lfsconfig`, set `lfs.fetchinclude` and `lfs.fetchexclude` to
comma-separated lists of paths to include/exclude in the fetch (wildcard
matching as per gitignore).
Only paths which are matched by fetchinclude and not matched by fetchexclude
will have objects fetched for them.
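For example, a `.lfsconfig` that fetches only texture and model files (the paths here are purely illustrative) might contain:

```
[lfs]
	fetchinclude = textures,models/*
	fetchexclude = media/high-res/*
```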
## SEE ALSO
git-clone(1), git-lfs-pull(1).

@ -26,6 +26,12 @@ lfs option can be scoped inside the configuration for a remote.
The number of concurrent uploads/downloads. Default 3.
* `lfs.basictransfersonly`
If set to true, only basic HTTP upload/download transfers will be used,
ignoring any more advanced transfers that the client/server may support.
This is primarily to work around bugs or incompatibilities.
* `lfs.batch`
Whether to use the batch API instead of requesting objects individually.
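Both settings are plain Git config values; for example (the values shown are only to illustrate):

```
git config lfs.basictransfersonly true
git config lfs.batch false
```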

@ -5,14 +5,22 @@ git-lfs-logs(1) - Show errors from the git-lfs command
`git lfs logs`<br>
`git lfs logs` <file><br>
`git lfs logs` --clear<br>
`git lfs logs` --boomtown<br>
`git lfs logs clear`<br>
`git lfs logs boomtown`<br>
## DESCRIPTION
Display errors from the git-lfs command. Any time it crashes, the details are
saved to ".git/lfs/logs".
## COMMANDS
* `clear`:
Clears all of the existing logged errors.
* `boomtown`:
Triggers a dummy exception.
## OPTIONS
Without any options, `git lfs logs` simply shows the list of error logs.
@ -20,12 +28,6 @@ Without any options, `git lfs logs` simply shows the list of error logs.
* <file>:
Shows the specified error log. Use "last" to show the most recent error.
* `--clear`:
Clears all of the existing logged errors.
* `--boomtown`:
Triggers a dummy exception.
## SEE ALSO
Part of the git-lfs(1) suite.

@ -3,7 +3,7 @@ git-lfs-track(1) - View or add Git LFS paths to Git attributes
## SYNOPSIS
`git lfs track` [<path>...]
`git lfs track` [options] [<path>...]
## DESCRIPTION
@ -11,6 +11,22 @@ Start tracking the given path(s) through Git LFS. The <path> argument
can be a pattern or a file path. If no paths are provided, simply list
the currently-tracked paths.
## OPTIONS
* `--verbose` `-v`:
If enabled, have `git lfs track` log files which it will touch. Disabled by
default.
* `--dry-run` `-d`:
If enabled, have `git lfs track` log all actions it would normally take
(adding entries to .gitattributes, touching files on disk, etc) without
performing any mutative operations to the disk.
`git lfs track --dry-run [files]` also implies `--verbose`, and will log in
greater detail what it is doing.
Disabled by default.
## EXAMPLES
* List the paths that Git LFS is currently tracking:

315
docs/proposals/locking.md Normal file

@ -0,0 +1,315 @@
# Locking feature proposal
We need the ability to lock files to discourage (we can never prevent) parallel
editing of binary files which will result in an unmergeable situation. This is
not a common theme in git (for obvious reasons, it conflicts with its
distributed, parallel nature), but is a requirement of any binary management
system, since files are very often completely unmergeable, and no-one likes
having to throw their work away & do it again.
## What not to do: single branch model
The simplest way to organise locking is to require that binary files are only
ever edited on a single branch, so that editing a file can follow a simple
sequence:
1. File starts out read-only locally
2. User locks the file, user is required to have the latest version locally from
the 'main' branch
3. User edits file & commits 1 or more times
4. User pushes these commits to the main branch
5. File is unlocked (and made read only locally again)
## A more usable approach: multi-branch model
In practice teams need to work on more than one branch, and sometimes that work
will have corresponding binary edits.
It's important to remember that the core requirement is to prevent *unintended
parallel edits of an unmergeable file*.
One way to address this would be to say that locking a file locks it across all
branches, and that lock is only released when the branch containing the edit is
merged back into a 'primary' branch. The problem is that although that allows
branching and also prevents merge conflicts, it forces merging of feature
branches before a further edit can be made by someone else.
An alternative is that locking a file locks it across all branches, but when the
lock is released, further locks on that file can only be taken on a descendant
of the latest edit that has been made, whichever branch it is on. That means
a change to the rules of the lock sequence, namely:
1. File starts out read-only locally
2. User tries to lock a file. This is only allowed if:
* The file is not already locked by anyone else, AND
* One of the following are true:
* The user has, or agrees to check out, a descendant of the latest commit
that was made for that file, whatever branch that was on, OR
* The user stays on their current commit but resets the locked file to the
state of the latest commit (making it modified locally, and
also cherry-picking changes for that file in practice).
3. User edits file & commits 1 or more times, on any branch they like
4. User pushes the commits
5. File is unlocked if:
* the latest commit to that file has been pushed (on any branch), and
* the file is not locally edited
This means that long-running branches can be maintained but that editing of a
binary file must always incorporate the latest binary edits. This means that if
this system is always respected, there is only ever one linear stream of
development for this binary file, even though that 'thread' may wind its way
across many different branches in the process.
This does mean that no-one's changes are accidentally lost, but it does mean
that we are either making new branches dependent on others, OR we're
cherry-picking changes to individual files across branches. This does change
the traditional git workflow, but importantly it achieves the core requirement
of never *accidentally* losing anyone's changes. How changes are threaded
across branches is always under the user's control.
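To make the rule in step 2 concrete, here is a minimal sketch of the lock-granting predicate, using the `Lock` type defined later in this proposal; `isDescendant` is a hypothetical helper (e.g. backed by `git merge-base --is-ancestor`), and the "reset the file to the latest commit" escape hatch is omitted:

```go
// canLock sketches step 2 above: a lock may only be granted if no-one else
// holds one, and the requester is on (or agrees to check out) a descendant
// of the last commit that edited the file.
func canLock(existing *Lock, requesterCommit, latestEditCommit string) bool {
	if existing != nil && existing.Active() {
		return false // already locked by someone else
	}
	return isDescendant(requesterCommit, latestEditCommit)
}
```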
## Breaking the rules
We must allow the user to break the rules if they know what they are doing.
Locking is there to prevent unintended binary merge conflicts, but sometimes you
might want to intentionally create one, with the full knowledge that you're
going to have to manually merge the result (or more likely, pick one side and
discard the other) later down the line. There are 2 cases of rule breaking to
support:
1. **Break someone else's lock**
People lock files and forget they've locked them, then go on holiday, or
worse, leave the company. You can't be left unable to edit that file, so you
must be able to forcibly break someone else's lock. Ideally this should
result in some kind of notification to the original locker (might need to be a
special value-add on BB/Stash). This effectively removes the other person's
lock and is likely to cause them problems if they had edited and try to push
next time.
2. **Allow a parallel lock**
Actually similar to breaking someone else's lock, except it lets you take
another lock on a file in parallel, leaving their lock in place too, and
knowing that you're going to have to resolve the merge problem later. You
could handle this just by manually making files read/write, then using 'force
push' to override hooks that prevent pushing when not locked. However by
explicitly registering a parallel lock (possible form: 'git lfs lock
--force') this could be recorded and communicated to anyone else with a lock,
letting them know about possible merge issues down the line.
## Detailed feature points
|No | Feature | Notes
|---|---------|------------------
|1 |Lock server must be available at same API URL|
|2 |Identify unmergeable files as subset of lfs files|`git lfs track -b` ?
|3 |Make unmergeable files read-only on checkout|Perform in smudge filter
|4 |Lock a file<ul><li>Check with server which must atomically check/set</li><li>Check person requesting the lock is checked out on a commit which is a descendant of the last edit of that file (locally or on server, although last lock shouldn't have been released until push anyway), or allow --force to break rule</li><li>Record lock on server</li><li>Make file read/write locally if success</li></ul>|`git lfs lock <file>`?
|5 |Release a lock<ul><li>Check if locally modified, if so must discard</li><li>Check if user has more recent commit of this file than server, if so must push first</li><li>Release lock on server atomically</li><li>Make local file read-only</li></ul>|`git lfs unlock <file>`?
|6 |Break a lock, ie override someone else's lock and take it yourself.<ul><li>Release lock on server atomically</li><li>Proceed as per 'Lock a file'</li><li>Notify original lock holder HOW?</li></ul>|`git lfs lock -break <file>`?
|7 |Release lock on reset (maybe). Configurable option / prompt? May be resetting just to start editing again|
|8 |Release lock on push (maybe, if unmodified). See above|
|9 |Cater for read-only binary files when merging locally<ul><li>Because files are read-only this might prevent merge from working when actually it's valid.</li><li>Always fine to merge the latest version of a binary file to anywhere else</li><li>Fine to merge the non-latest version if user is aware that this may cause merge problems (see Breaking the rules)</li><li>Therefore this feature is about dealing with the read-only flag and issuing a warning if not the latest</li></ul>|
|10 |List current locks<ul><li>That the current user has</li><li>That anyone has</li><li>Potentially scoped to folder</li></ul>|`git lfs lock --list [paths...]`
|11 |Reject a push containing a binary file currently locked by someone else|pre-receive hook on server, allow --force to override (i.e. existing parameter to git push)
## Implementation details
### Types
To make it easier to implement locking on the lfs-test-server, as well as other
servers in the future, it makes sense to create a `lock` package that can be
depended upon from any server. This will go along with Steve's refactor which
touches the `lfs` package quite a bit.
Below are enumerated some of the types that will presumably land in this
sub-package.
```go
// Lock represents a single lock held against a particular path.
//
// Locks returned from the API may or may not be currently active, according to
// the Expired flag.
type Lock struct {
// Id is the unique identifier corresponding to this particular Lock. It
// must be consistent with the local copy, and the server's copy.
Id string `json:"id"`
// Path is an absolute path to the file that is locked as a part of this
// lock.
Path string `json:"path"`
// Committer is the author who initiated this lock.
Committer struct {
Name string `json:"name"`
Email string `json:"email"`
} `json:"creator"`
// CommitSHA is the commit that this Lock was created against. It is
// strictly equal to the SHA of the minimum commit negotiated in order
// to create this lock.
CommitSHA string `json:"commit_sha"`
// LockedAt is a required parameter that represents the instant in time
// that this lock was created. For most server implementations, this
// should be set to the instant at which the lock was initially
// received.
LockedAt time.Time `json:"locked_at"`
// UnlockedAt is an optional parameter that represents the instant in
// time that the lock stopped being active. If the lock is still active,
// the server can either a) not send this field, or b) send the
// zero-value of time.Time.
UnlockedAt time.Time `json:"unlocked_at,omitempty"`
}
// Active returns whether or not the given lock is still active against the file
// that it is protecting.
func (l *Lock) Active() bool {
return l.UnlockedAt.IsZero()
}
```
### Proposed Commands
#### `git lfs lock <path>`
The `lock` command will be used in accordance with the multi-branch flow as
proposed above, to request that a lock be granted on the specific path passed
as an argument to the command.
```go
// LockRequest encapsulates the payload sent across the API when a client would
// like to obtain a lock against a particular path on a given remote.
type LockRequest struct {
// Path is the path that the client would like to obtain a lock against.
Path string `json:"path"`
// LatestRemoteCommit is the SHA of the last known commit from the
// remote that we are trying to create the lock against, as found in
// `.git/refs/remotes/origin/<name>`.
LatestRemoteCommit string `json:"latest_remote_commit"`
// Committer is the individual that wishes to obtain the lock.
Committer struct {
// Name is the name of the individual who would like to obtain the
// lock, for instance: "Rick Olson".
Name string `json:"name"`
// Email is the email associated with the individual who would
// like to obtain the lock, for instance: "rick@github.com".
Email string `json:"email"`
} `json:"committer"`
}
```
```go
// LockResponse encapsulates the information sent over the API in response to
// a `LockRequest`.
type LockResponse struct {
// Lock is the Lock that was optionally created in response to the
// payload that was sent (see above). If the lock already exists, then
// the existing lock is sent in this field instead, and the author of
// that lock remains the same, meaning that the client failed to obtain
// that lock. An HTTP status of "409 - Conflict" is used here.
//
// If the lock was unable to be created, this field will hold the
// zero-value of Lock and the Err field will provide a more detailed set
// of information.
//
// If an error was experienced in creating this lock, then the
// zero-value of Lock should be sent here instead.
Lock Lock `json:"lock"`
// CommitNeeded holds the minimum commit SHA that the client must have to
// obtain the lock.
CommitNeeded string `json:"commit_needed"`
// Err is the optional error that was encountered while trying to create
// the above lock.
Err error `json:"error,omitempty"`
}
```
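A minimal client-side sketch of this exchange follows; the endpoint shape and helper name are assumptions, and note that the `Err error` fields above would need a concrete type (e.g. a string) to round-trip through JSON:

```go
import (
	"bytes"
	"encoding/json"
	"net/http"
)

// requestLock posts a LockRequest to the (assumed) /locks endpoint and
// decodes the LockResponse. A "409 Conflict" still carries a LockResponse
// describing the existing lock.
func requestLock(apiBase string, req LockRequest) (*LockResponse, error) {
	payload, err := json.Marshal(req)
	if err != nil {
		return nil, err
	}
	httpReq, err := http.NewRequest("POST", apiBase+"/locks", bytes.NewReader(payload))
	if err != nil {
		return nil, err
	}
	httpReq.Header.Set("Accept", "application/vnd.git-lfs+json")
	httpReq.Header.Set("Content-Type", "application/vnd.git-lfs+json")

	resp, err := http.DefaultClient.Do(httpReq)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()

	var lockRes LockResponse
	if err := json.NewDecoder(resp.Body).Decode(&lockRes); err != nil {
		return nil, err
	}
	return &lockRes, nil
}
```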
#### `git lfs unlock <path>`
The `unlock` command is responsible for releasing the lock against a particular
file. The command takes a `<path>` argument which the LFS client will have to
internally resolve into an Id to unlock.
The API associated with this command can also be used on the server to remove
existing locks after a push.
```go
// An UnlockRequest is sent by the client over the API when they wish to remove
// a lock associated with the given Id.
type UnlockRequest struct {
// Id is the identifier of the lock that the client wishes to remove.
Id string `json:"id"`
}
```
```go
// UnlockResult is the result sent back from the API when asked to remove a
// lock.
type UnlockResult struct {
// Lock is the lock corresponding to the asked-about lock in the
// `UnlockRequest` (see above). If no matching lock was found, this
// field will take the zero-value of Lock, and Err will be non-nil.
Lock Lock `json:"lock"`
// Err is an optional field which holds any error that was experienced
// while removing the lock.
Err error `json:"error,omitempty"`
}
```
Clients can determine whether or not their lock was removed by calling the
`Active()` method on the returned Lock, if `UnlockResult.Err` is nil.
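For example, assuming an `unlock` helper that performs the request above:

```go
res, err := unlock(UnlockRequest{Id: "some-uuid"})
if err == nil && res.Err == nil && !res.Lock.Active() {
	// the lock was removed successfully
}
```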
#### `git lfs locks (-r <remote>|-b <branch>|-p <path>)|(-i <id>)`
For many operations, the LFS client will need to have knowledge of existing
locks on the server. Additionally, the client should not have to self-sort/index
this (potentially) large set. To remove this need, both the `locks` command and
corresponding API method take several filters.
Clients should turn the flag-values that were passed during the command
invocation into `Filter`s as described below, and batched up into the `Filters`
field in the `LockListRequest`.
```go
// Property enumerates the fields of a server-side Lock that can be
// filtered on.
type Property string
const (
Branch Property = "branch"
Id Property = "id"
// (etc) ...
)
// LockListRequest encapsulates the request sent to the server when the client
// would like a list of locks that match the given criteria.
type LockListRequest struct {
// Filters is the set of filters to query against. If the client wishes
// to obtain a list of all locks, an empty array should be passed here.
Filters []struct {
// Prop is the property to search against.
Prop Property `json:"prop"`
// Value is the value that the property must take.
Value string `json:"value"`
} `json:"filters"`
// Cursor is an optional field used to tell the server which lock was
// seen last, if scanning through multiple pages of results.
//
// Servers must return a list of locks sorted in reverse chronological
// order, so the Cursor provides a consistent method of viewing all
// locks, even if more were created between two requests.
Cursor string `json:"cursor,omitempty"`
// Limit is the maximum number of locks to return in a single page.
Limit int `json:"limit"`
}
```
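For illustration, here is a sketch of that conversion; the named `Filter` type simply stands in for the anonymous struct above, and the `"path"` property is assumed to exist alongside `Branch` and `Id`:

```go
// Filter mirrors the anonymous filter struct in LockListRequest.
type Filter struct {
	Prop  Property `json:"prop"`
	Value string   `json:"value"`
}

// filtersFromFlags converts `git lfs locks` flag values into filters.
// Empty values are skipped, so passing no flags yields an empty set,
// i.e. "list all locks".
func filtersFromFlags(branch, path, id string) []Filter {
	candidates := []Filter{
		{Prop: Branch, Value: branch},
		{Prop: "path", Value: path},
		{Prop: Id, Value: id},
	}
	var filters []Filter
	for _, f := range candidates {
		if f.Value != "" {
			filters = append(filters, f)
		}
	}
	return filters
}
```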
```go
// LockList encapsulates a set of Locks.
type LockList struct {
// Locks is the set of locks returned back, typically matching the query
// parameters sent in the LockListRequest call. If no locks were matched
// from a given query, then `Locks` will be represented as an empty
// array.
Locks []Lock `json:"locks"`
// NextCursor returns the Id of the Lock the client should update its
// cursor to, if there are multiple pages of results for a particular
// `LockListRequest`.
NextCursor string `json:"next_cursor,omitempty"`
// Err populates any error that was encountered during the search. If no
// error was encountered and the operation was successful, then a value
// of nil will be passed here.
Err error `json:"error,omitempty"`
}
```

@ -0,0 +1,180 @@
# Locking API proposal
## POST /locks
| Method | Accept | Content-Type | Authorization |
|---------|--------------------------------|--------------------------------|---------------|
| `POST` | `application/vnd.git-lfs+json` | `application/vnd.git-lfs+json` | Basic |
### Request
```
> POST https://git-lfs-server.com/locks
> Accept: application/vnd.git-lfs+json
> Authorization: Basic
> Content-Type: application/vnd.git-lfs+json
>
> {
> path: "/path/to/file",
> remote: "origin",
> latest_remote_commit: "d3adbeef",
> committer: {
> name: "Jane Doe",
> email: "jane@example.com"
> }
> }
```
### Response
* **Successful response**
```
< HTTP/1.1 201 Created
< Content-Type: application/vnd.git-lfs+json
<
< {
< lock: {
< id: "some-uuid",
< path: "/path/to/file",
< committer: {
< name: "Jane Doe",
< email: "jane@example.com"
< },
< commit_sha: "d3adbeef",
< locked_at: "2016-05-17T15:49:06+00:00"
< }
< }
```
* **Bad request: minimum commit not met**
```
< HTTP/1.1 400 Bad request
< Content-Type: application/vnd.git-lfs+json
<
< {
< "commit_needed": "other_sha"
< }
```
* **Bad request: lock already present**
```
< HTTP/1.1 409 Conflict
< Content-Type: application/vnd.git-lfs+json
<
< {
< lock: {
< /* the previously created lock */
< },
< error: "already created lock"
< }
```
* **Bad response: server error**
```
< HTTP/1.1 500 Internal server error
< Content-Type: application/vnd.git-lfs+json
<
< {
< error: "unable to create lock"
< }
```
## POST /locks/:id/unlock
| Method | Accept | Content-Type | Authorization |
|---------|--------------------------------|--------------|---------------|
| `POST` | `application/vnd.git-lfs+json` | None | Basic |
### Request
```
> POST https://git-lfs-server.com/locks/:id/unlock
> Accept: application/vnd.git-lfs+json
> Authorization: Basic
```
### Response
* **Success: unlocked**
```
< HTTP/1.1 200 Ok
< Content-Type: application/vnd.git-lfs+json
<
< {
< lock: {
< id: "some-uuid",
< path: "/path/to/file",
< committer: {
< name: "Jane Doe",
< email: "jane@example.com"
< },
< commit_sha: "d3adbeef",
< locked_at: "2016-05-17T15:49:06+00:00",
< unlocked_at: "2016-05-17T15:49:06+00:00"
< }
< }
```
* **Bad response: server error**
```
< HTTP/1.1 500 Internal error
< Content-Type: application/vnd.git-lfs+json
<
< {
< error: "github/git-lfs: internal server error"
< }
```
## GET /locks
| Method | Accept | Content-Type | Authorization |
|--------|-------------------------------|--------------|---------------|
| `GET` | `application/vnd.git-lfs+json` | None | Basic |
### Request
```
> GET https://git-lfs-server.com/locks?filters...&cursor=&limit=
> Accept: application/vnd.git-lfs+json
> Authorization: Basic
```
### Response
* **Success: locks found**
Note: no matching locks yields a payload of `locks: []`, and a status of 200.
```
< HTTP/1.1 200 Ok
< Content-Type: application/vnd.git-lfs+json
<
< {
< locks: [
< {
< id: "some-uuid",
< path: "/path/to/file",
< committer: {
< name: "Jane Doe",
< email: "jane@example.com"
< },
< commit_sha: "1ec245f",
< locked_at: "2016-05-17T15:49:06+00:00"
< }
< ],
< next_cursor: "optional-next-id",
< error: "optional error"
< }
```
* **Bad response: some locks may have matched, but the server encountered an error**
```
< HTTP/1.1 500 Internal error
< Content-Type: application/vnd.git-lfs+json
<
< {
< locks: [],
< error: "github/git-lfs: internal server error"
< }
```

@ -0,0 +1,91 @@
# Transfer adapters for resumable upload / download
## Concept
To allow the uploading and downloading of LFS content to be implemented in more
ways than the current simple HTTP GET/PUT approach. Features that could be
supported by opening this up to other protocols might include:
- Resumable transfers
- Block-level de-duplication
- Delegation to 3rd party services like Dropbox / Google Drive / OneDrive
- Non-HTTP services
## API extensions
See the [API documentation](../http-v1-batch.md) for specifics. All changes
are optional extras so there are no breaking changes to the API.
The current HTTP GET/PUT system will remain the default. When a version of the
git-lfs client supports alternative transfer mechanisms, it notifies the server
in the API request using the `accept-transfers` field.
If the server also supports one of the mechanisms the client advertised, it may
select one and alter the upload / download URLs to point at resources
compatible with this transfer mechanism. It must also indicate the chosen
transfer mechanism in the response using the `transfer` field.
The URLs provided in this case may not be HTTP; they may use custom protocols.
It is up to each individual transfer mechanism to define how URLs are used.
## Client extensions
### Phase 1: refactoring & abstraction
1. Introduce a new concept of 'transfer adapter'.
2. Adapters can provide either upload or download support, or both. This is
necessary because some mechanisms are unidirectional, e.g. HTTP `Range`
requests support download only, tus.io upload only.
3. Refactor our current HTTP GET/PUT mechanism to be the default implementation
for both upload & download
4. The LFS core will pass oids to transfer to this adapter in bulk, and receive
events back from the adapter for transfer progress, and file completion.
5. Each adapter is responsible for its own parallelism, but should respect the
`lfs.concurrenttransfers` setting. For example the default (current) approach
will parallelise on files (oids), but others may parallelise in other ways
e.g. downloading multiple parts of the same file at once
6. Each adapter should store its own temporary files. On file completion it must
notify the core, which in the case of a download is then responsible for
moving the completed file into permanent storage.
7. Update the core to have a registry of available transfer mechanisms which it
passes to the API, and can recognise a chosen one in the response. Default
to our refactored original.
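One possible shape for this abstraction, purely as a sketch (the real interface will fall out of the refactor, and all names here are illustrative):

```go
// TransferAdapter is the phase 1 abstraction: the core hands objects to an
// adapter in bulk and receives progress and completion events back.
type TransferAdapter interface {
	// Name is the identifier negotiated with the server, e.g. "basic".
	Name() string
	// Direction reports whether this adapter uploads, downloads, or both.
	Direction() Direction
	// Transfer processes the given objects, handling its own parallelism
	// while respecting lfs.concurrenttransfers, and invokes cb as bytes
	// move and as each object completes.
	Transfer(objects []ObjectSpec, cb ProgressCallback) error
}

// Direction describes which way an adapter can move data.
type Direction int

const (
	Upload Direction = iota
	Download
	Both
)

// ObjectSpec identifies a single object to transfer.
type ObjectSpec struct {
	Oid  string
	Size int64
}

// ProgressCallback reports transfer progress; tempPath is non-empty only
// once a download has completed, so the core can move the file into
// permanent storage.
type ProgressCallback func(oid string, bytesDone, bytesTotal int64, tempPath string)
```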
### Phase 2: basic resumable downloads
1. Add a client transfer adapter for [HTTP Range headers](https://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html#sec14.35)
2. Add a range request reference implementation to our integration test server
### Phase 3: basic resumable uploads
1. Add a client transfer adapter for [tus.io](http://tus.io) (upload only)
2. Add a tus.io reference implementation to our integration test server
### Phase 4: external transfer adapters
Ideally we should allow people to add other transfer implementations so that
we don't have to implement everything, or bloat the git-lfs binary with every
custom system possible.
Because Go is statically linked it's not possible to extend client functionality
at runtime through loading libraries, so instead I propose allowing an external
process to be invoked, and communicated with via a defined stream protocol. This
protocol will be logically identical to the internal adapters; the core passing
oids and receiving back progress and completion notifications; just that the
implementation will be in an external process and the messages will be
serialised over streams.
Only one process will be launched and will remain for the entire period of all
transfers. Like internal adapters, the external process will be responsible for
its own parallelism and temporary storage, so internally it can (and should)
perform multiple transfers at once.
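As a sketch of what the serialised messages might look like (JSON objects, one per line, over stdin/stdout is an assumption; no wire format has been agreed):

```go
// toAdapter is a message the core would write to the external process.
type toAdapter struct {
	Event string `json:"event"` // "upload" or "download"
	Oid   string `json:"oid"`
	Size  int64  `json:"size"`
	Href  string `json:"href"` // URL from the batch API response
}

// fromAdapter is a progress or completion message the core would read back.
type fromAdapter struct {
	Event     string `json:"event"` // "progress", "complete" or "error"
	Oid       string `json:"oid"`
	BytesDone int64  `json:"bytesDone,omitempty"`
	Path      string `json:"path,omitempty"` // temp file of a completed download
	Error     string `json:"error,omitempty"`
}
```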
1. Build a generic 'external' adapter which can invoke a named process and
communicate with it using the standard stream protocol (probably just over
stdout / stdin)
2. Establish a configuration for external adapters; minimum is an identifier
(client and server must agree on what that is) and a path to invoke
3. Implement a small test process in Go which simply wraps the default HTTP
mechanism in an external process, to prove the approach (not in release)

Some files were not shown because too many files have changed in this diff.