Go 1.5 is genuinely parallel, so we cannot guarantee that the transfer
channel from TransferQueue has been fully read before q.Wait() returns;
the two can run in parallel. Instead, use our own sync channel that is
closed once the transfer channel has been fully consumed.
This commit introduces two new types into the API: Hook, and Filter.
Both `Hook` and `Filter` are abstractions built on Git hooks and filters
respectively. Each type knows how to install and uninstall itself. These
processes are utilized by the setup method in the `lfs` package, and the
appropriate calls have been updated in the init and uninit commands.
These abstractions were introduced to make adding/removing required filters and
hooks easier for future projects, including the migration away from the smudge
filter.
Eventually it seems appropriate to move both new types into the `git` package,
as opposed to the `lfs` package. At the time of writing this commit, there is
some coupling against state defined in the `lfs` package (i.e., whether or not
we're currently in a git repo, the local working directory, etc.). At some point
it would be nice to remove that coupling and move these new types into the
`git` package.
Previously only local branches were included by default. However,
including remote branches is more useful in practice; otherwise, when
you needed to checkout or pull new commits from the remote, you were
forced to fetch on demand, since fetch --recent wouldn't pick them up
without including the remote branches. While you *can* create / update
local branches without checking out manually (e.g. git reset to manually
fast-forward) so that fetch --recent then picks up the changes from
local branches, it's too cumbersome. Including remote refs by default
makes more sense for most people, respecting the idea that you do this
as an optimistic fetch to save time at checkout/pull. However, the
operation is limited to the current remote only (which makes sense
anyway).
I'm not really happy with the way we use errors to control this flow.
I'd like to see it refactored so it doesn't rely on a specific error
for control. The download bool is there; we should rely on that.
Previously if there were modified files at any point in ref's history,
they would all be fetched at the ref instead of just a snapshot of the
current state.
This change drastically improves pre-push behaviour by not sending
LFS objects which are already on the remote. It works perfectly when
pushing new branches and tags.
Currently the pre-push command compares the "local sha1" of the ref
being pushed against the "remote sha1", and if the "remote sha1" is
available locally it tries to send only the LFS objects introduced by
the new commits.
Why this is broken:
- The remote branch might have moved forward (the local repo is not up
to date). In this case you have no chance to isolate new LFS objects
(the "remote sha1" does not exist locally) and git-lfs sends everything
from the local branch history.
- The remote branch does not exist (or a new tag is pushed). Same
consequences.
But what is important is that the local repository always has remote
references, from which the user created their local branch and started
making local changes. So all we have to do is identify the new LFS
objects which do not exist on the remote references, and this can
easily be achieved with the same almighty git rev-list command.
This change makes git-lfs usable with Gerrit, where changes are
uploaded using magic Gerrit branches which do not really exist, e.g.
git push origin master:refs/for/master
In this case "refs/for/master" does not exist and git feeds all zeroes
as the "remote sha1".
The first arg to fetch & pull is now a remote. In addition, the default
remote if you don't specify is now the tracking remote as in `git pull`
if it exists, and origin if that's not set. This makes it more consistent
with the underlying git workflow especially in triangular fork setups.
Renamed Checkable to DownloadCheckable because it only applies to
downloads; uploads check differently and use different struct fields.
Merged into download_queue.go since the code is now smaller and shared.
In batch mode you don't get an error from a missing Check(); you just
get no download link (you get an upload link instead). Therefore the
only reliable way to judge whether Check() worked is to check the
transfer chan.
Also add tests for batch mode to prove this works.
If we close down the update-index process without having given it any
args on stdin, then it applies to the whole working copy and has
unwanted side effects, e.g. calling clean/smudge filters.
When using make with append, both a capacity and a length must be
passed to the make call. If only the length is given, the slice starts
with len zero values and append adds new values after them. This is a
small number of values, so just skip make altogether.
This was always the intention, but the PointerSmudge functions would
automatically download if the local files were missing. Now they take a
boolean argument saying whether to download or not, and the skipped
smudge case is dealt with by writing out the original pointer data and
returning a known non-fatal error.