git-lfs/lfshttp/errors.go
Chris Darroch 0f82147ad5 lfshttp,tq,t: don't fail on retriable batch errors
A prior commit in this PR resolves a bug where a 429 response to an
upload or download request causes a Go panic in the client if the
response lacks a Retry-After header.

The same condition, when it occurs in the response to a batch API
request, does not trigger a Go panic; instead, we simply fail
without retrying the batch API request at all.  This stands
in contrast to how we now handle 429 responses for object uploads
and downloads when no Retry-After header is provided, because in
that case, we perform multiple retries, following the exponential
backoff logic introduced in PR #4097.

This difference stems in part from the fact that the download()
function of the basicDownloadAdapter structure and the DoTransfer()
function of the basicUploadAdapter structure both handle 429 responses
by first calling the NewRetriableLaterError() function of the "errors"
package to try to parse any Retry-After header, and if that returns
nil, then calling the NewRetriableError() function, so they always
return some form of retriable error after a 429 status code is received.

We therefore modify the handleResponse() method of the Client structure
in the "lfshttp" package to likewise always return a retriable error
of some kind after a 429 response.  If a Retry-After header is found
and is able to be parsed, then a retriableLaterError (from the "errors"
package) is returned; otherwise, a generic retriableError is returned.

This change is not sufficient on its own, however.  When the batch
API returns 429 responses without a Retry-After header, the transfer
queue now retries its requests following the exponential backoff
logic, as we expect.  If one of those eventually succeeds, though,
the batch is still processed as if it encountered an unrecoverable
failure, and the Git LFS client ultimately returns a non-zero exit code.

This occurs because the enqueueAndCollectRetriesFor()
method of the TransferQueue structure in the "tq" package sets the
flag which causes it to return an error for the batch both when an
object in the batch cannot be retried (because it has reached its
retry limit) and when an object in the batch can be retried but no
specific retry wait time was provided by a retriableLaterError.

The latter of these two cases is what is now triggered when the batch
API returns a 429 status code and no Retry-After header.  In commit
a3ecbcc7f6bf27aedbcdaf830bd186dba4a7328f of PR #4573 this code was
updated to improve how batch API 429 responses with Retry-After headers
are handled, building on the original code introduced in PR #3449
and some fixes in PR #3930.  That commit added a flag, named
hasNonScheduledErrors, which is set if any object in a batch which
experiences an error either cannot be retried, or can be retried but
has no specific wait time as provided by a Retry-After header.
If the flag is set, then the error encountered during the processing
of the batch is returned by the enqueueAndCollectRetriesFor() method.
Although the error is wrapped by the NewRetriableError() function,
because it is returned instead of a nil, it is collected into the
errors channel of the queue by the collectBatches() caller method,
which ultimately causes the client to report the error and return a
non-zero exit code.

By contrast, the handleTransferResult() method of the TransferQueue
structure treats retriable errors from individual object uploads and
downloads in the same way for both errors with a specified wait time
and those without.

To bring our handling of batch API requests into alignment with
this approach, we can simply avoid setting the flag variable when
a batch encounters an error and an object can be retried but without
a specified wait time.

We also rename the flag variable to hasNonRetriableObjects, which
better reflects its meaning, as it signals the fact that at least
one object in the batch cannot be retried.  As well, we update
some related comments to clarify the current actions and intent of
this section of code in the enqueueAndCollectRetriesFor() method.

We then add a test to the t/t-batch-retries-ratelimit.sh test suite
like the ones we added to the t/t-batch-storage-retries-ratelimit.sh
script in a previous commit in this PR.  The test relies on a new
sentinel value in the test repository name which we now recognize in
our lfstest-gitserver test server, and which causes the test server
to return a 429 response to batch API requests, but without a
Retry-After header.  This test fails unless both of the changes
we make in this commit are applied, ensuring we handle 429 batch
API responses that lack Retry-After headers.
2024-06-19 00:55:03 -07:00


package lfshttp

import (
	"fmt"
	"net/http"
	"strings"

	"github.com/git-lfs/git-lfs/v3/errors"
	"github.com/git-lfs/git-lfs/v3/tr"
)

type httpError interface {
	Error() string
	HTTPResponse() *http.Response
}

func IsHTTP(err error) (*http.Response, bool) {
	if httpErr, ok := err.(httpError); ok {
		return httpErr.HTTPResponse(), true
	}
	return nil, false
}

type ClientError struct {
	Message          string `json:"message"`
	DocumentationUrl string `json:"documentation_url,omitempty"`
	RequestId        string `json:"request_id,omitempty"`
	response         *http.Response
}

func (e *ClientError) HTTPResponse() *http.Response {
	return e.response
}

func (e *ClientError) Error() string {
	return e.Message
}

func (c *Client) handleResponse(res *http.Response) error {
	if res.StatusCode < 400 {
		return nil
	}

	cliErr := &ClientError{response: res}
	err := DecodeJSON(res, cliErr)
	if IsDecodeTypeError(err) {
		err = nil
	}

	if err == nil {
		if len(cliErr.Message) == 0 {
			err = defaultError(res)
		} else {
			err = cliErr
		}
	}

	if res.StatusCode == 401 {
		return errors.NewAuthError(err)
	}
	if res.StatusCode == 422 {
		return errors.NewUnprocessableEntityError(err)
	}
	if res.StatusCode == 429 {
		// The Retry-After header could be set, check to see if it exists.
		h := res.Header.Get("Retry-After")
		retLaterErr := errors.NewRetriableLaterError(err, h)
		if retLaterErr != nil {
			return retLaterErr
		}
		return errors.NewRetriableError(err)
	}
	if res.StatusCode > 499 && res.StatusCode != 501 && res.StatusCode != 507 && res.StatusCode != 509 {
		return errors.NewFatalError(err)
	}
	return err
}

type statusCodeError struct {
	response *http.Response
}

func NewStatusCodeError(res *http.Response) error {
	return &statusCodeError{response: res}
}

func (e *statusCodeError) Error() string {
	req := e.response.Request
	return tr.Tr.Get("Invalid HTTP status for %s %s: %d",
		req.Method,
		strings.SplitN(req.URL.String(), "?", 2)[0],
		e.response.StatusCode,
	)
}

func (e *statusCodeError) HTTPResponse() *http.Response {
	return e.response
}

func defaultError(res *http.Response) error {
	var msgFmt string

	defaultErrors := map[int]string{
		400: tr.Tr.Get("Client error: %%s"),
		401: tr.Tr.Get("Authorization error: %%s\nCheck that you have proper access to the repository"),
		403: tr.Tr.Get("Authorization error: %%s\nCheck that you have proper access to the repository"),
		404: tr.Tr.Get("Repository or object not found: %%s\nCheck that it exists and that you have proper access to it"),
		422: tr.Tr.Get("Unprocessable entity: %%s"),
		429: tr.Tr.Get("Rate limit exceeded: %%s"),
		500: tr.Tr.Get("Server error: %%s"),
		501: tr.Tr.Get("Not Implemented: %%s"),
		507: tr.Tr.Get("Insufficient server storage: %%s"),
		509: tr.Tr.Get("Bandwidth limit exceeded: %%s"),
	}

	if f, ok := defaultErrors[res.StatusCode]; ok {
		msgFmt = f
	} else if res.StatusCode < 500 {
		msgFmt = tr.Tr.Get("Client error %%s from HTTP %d", res.StatusCode)
	} else {
		msgFmt = tr.Tr.Get("Server error %%s from HTTP %d", res.StatusCode)
	}

	return errors.Errorf(fmt.Sprintf(msgFmt), res.Request.URL)
}