Compare commits

..

41 Commits

Author SHA1 Message Date
6543
ef2cb41dc3 Update Changelog (#15322)
* update

* next
2021-04-07 14:23:08 +01:00
6543
9201068ff9 Changelog v1.13.7 (#15319) 2021-04-07 11:12:44 +03:00
6543
bfd33088b4 add 'fonts' into 'KnownPublicEntries' (#15188) (#15317)
fix #15184

Signed-off-by: a1012112796 <1012112796@qq.com>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>

Co-authored-by: a1012112796 <1012112796@qq.com>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-04-07 08:20:42 +01:00
6543
711ca0c410 Update to bluemonday-1.0.6 (#15294) (#15298)
Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2021-04-06 01:35:50 +01:00
zeripath
013639b13f Add size to Save function (#15264) (#15271)
Backport #15264

This PR proposes an alternative solution to #15255 - just add the size to the
save function. Yes, it is less obviously clean, but it may be more correct.

Close #15255
Fix #15253

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-04 12:04:36 -04:00
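The attachment and LFS callers later in this diff show the shape of the change: Save now takes the expected size, and callers that do not know it pass -1. The sketch below is a minimal illustration of that signature with an assumed interface name and a local-disk implementation; it is not Gitea's actual storage package.

package storagesketch

import (
    "fmt"
    "io"
    "os"
    "path/filepath"
)

// Saver mirrors the changed signature: pass the expected size, or -1 to skip the check.
type Saver interface {
    Save(path string, r io.Reader, size int64) (int64, error)
}

type localStore struct{ root string }

// Save copies r below root and, when size >= 0, verifies that exactly size bytes arrived.
func (s localStore) Save(path string, r io.Reader, size int64) (int64, error) {
    full := filepath.Join(s.root, path)
    if err := os.MkdirAll(filepath.Dir(full), 0o755); err != nil {
        return 0, err
    }
    f, err := os.Create(full)
    if err != nil {
        return 0, err
    }
    defer f.Close()

    written, err := io.Copy(f, r)
    if err != nil {
        return written, err
    }
    if size >= 0 && written != size {
        // an incomplete or oversized upload is reported instead of silently kept
        return written, fmt.Errorf("size mismatch: wrote %d, expected %d", written, size)
    }
    return written, nil
}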
techknowlogick
558b0005ff update golang libraries (#15258) (#15260) 2021-04-03 06:27:14 +02:00
a1012112796
0d7afb02c0 Respond with 404 for diff/patch of a commit that does not exist (#15221) (#15238)
* respond with 404 for diff/patch of a commit that does not exist

fix #15217

Signed-off-by: a1012112796 <1012112796@qq.com>

* Update routers/repo/commit.go

Co-authored-by: silverwind <me@silverwind.io>

* use ctx.NotFound()

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: silverwind <me@silverwind.io>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: silverwind <me@silverwind.io>
Co-authored-by: 6543 <6543@obermui.de>
2021-04-02 04:30:14 +01:00
zeripath
1a26f6c7ab Speed up enry.IsVendor (#15213) (#15246)
Backport #15213

`enry.IsVendor` is kinda slow, as it simply iterates across all the regexps.
This PR adjusts the regexps, combining them to make this process a
little quicker.

Related #15143

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-02 00:50:12 +02:00
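A toy illustration of the combining idea follows; the patterns here are made up for the example and are not enry's actual matchers. One combined alternation answers in a single scan what a loop over separate regexps answers in as many scans as there are patterns.

package main

import (
    "fmt"
    "regexp"
)

// separate matchers checked one by one: the slow shape described above
var separate = []*regexp.Regexp{
    regexp.MustCompile(`^vendor/`),
    regexp.MustCompile(`(?:^|/)node_modules/`),
    regexp.MustCompile(`(?:^|/)dist/`),
}

// the same patterns folded into one alternation: a single MatchString call
var combined = regexp.MustCompile(`(?:^vendor/)|(?:(?:^|/)node_modules/)|(?:(?:^|/)dist/)`)

func isVendorSlow(path string) bool {
    for _, re := range separate {
        if re.MatchString(path) {
            return true
        }
    }
    return false
}

func main() {
    for _, p := range []string{"vendor/foo.go", "web/dist/app.js", "models/repo.go"} {
        // both approaches agree; the combined form just does it in one pass
        fmt.Println(p, isVendorSlow(p), combined.MatchString(p))
    }
}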
zeripath
1062931cf1 Prevent NPE in CommentMustAsDiff if no hunk header (#15199) (#15201)
Backport #15199

I do not understand how this can happen or why.

There is an apparent possibility for a comment.Patch to be missing a hunk header
- this should not happen, and I do not understand how it does, but it appears to
happen on 1.13 in at least some cases.

This PR will simply add a new section if the cursection is empty,
thus preventing the NPE.

Fix #15198

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-04-01 14:30:44 -04:00
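A generic sketch of that guard follows; the type and function names are assumptions for illustration only, not the actual diff structures in Gitea.

package diffsketch

// section stands in for a diff section; the real types live in Gitea's diff code.
type section struct{ lines []string }

// appendLine used to assume a hunk header had already opened a section and would
// dereference a nil current section when none existed; the guard opens one instead.
func appendLine(sections []*section, line string) []*section {
    if len(sections) == 0 {
        sections = append(sections, &section{}) // start a section even without a hunk header
    }
    cur := sections[len(sections)-1]
    cur.lines = append(cur.lines, line)
    return sections
}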
zeripath
8d4f8ebf31 Clusterfuzz found another way (#15160) (#15169)
Backport #15160

Clusterfuzz found another way so I found another way to stop it

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-27 01:53:51 +02:00
sotho
4f47bf5346 Fix wrong user returned in API (#15139) (#15150)
* Fix wrong user returned in API (#15139)

The API call GET /repos/{owner}/{repo}/pulls/{index}/reviews/{id}/comments
always returns the reviewer, but should return the poster.

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>

* rm regression

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>
2021-03-26 08:01:32 +02:00
6543
6dfa92bb1c Changelog v1.13.6 (#15129) 2021-03-23 15:44:50 -04:00
6543
151bedab52 Fix bug on avatar middleware (#15125)
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-03-23 18:45:06 +00:00
zeripath
6198403fbc Fix another clusterfuzz identified issue (#15096) (#15114)
Backport #15096

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-03-22 16:27:21 -04:00
a1012112796
a6290f603f fix #15104 (#15106)
Signed-off-by: a1012112796 <1012112796@qq.com>
2021-03-22 15:15:44 -04:00
2f09e5775f Fix markdown rendering in milestone content (#15056) (#15092)
- Add missing markdown class for rendered markdown.
- Increase font size of milestone name in list.

Fixes: https://github.com/go-gitea/gitea/issues/15046
2021-03-21 18:03:52 +01:00
zeripath
b0819efaea Place wrapper around comment as diff to catch panics (#15085) (#15086)
* Place wrapper around comment as diff to prevent panics

* propagate the panic up

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-21 16:16:07 +01:00
6543
d7a3bcdd70 Changelog v1.13.5 (#15084) 2021-03-21 15:05:21 +01:00
zeripath
7a85e228d8 Update to goldmark 1.3.3 (#15059) (#15061)
Backport #15059

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-20 10:31:28 +00:00
6543
a461d90415 Fix bug when upload on web (#15042) (#15055)
* Fix bug when upload on web

* move into own function

Co-authored-by: 6543 <6543@obermui.de>
Co-authored-by: zeripath <art27@cantab.net>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: zeripath <art27@cantab.net>
2021-03-20 09:37:53 +08:00
6543
70e4134130 Delete Labels & IssueLabels on Repo Delete too (#15039) (#15051)
* Doctor: find IssueLabels without existing label

* Repo Delete: delete labels & issue_labels too
2021-03-19 22:13:39 +01:00
zeripath
909f2be99d Fix postgres ID sequences broken by recreate-table (#15015) (#15029)
Backport #15015

Unfortunately there is a subtle problem with recreatetable on postgres which
leads to the sequences not being renamed and not being left at 0.

Fix #14725

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-19 04:23:58 +01:00
6543
645c0d8abd another clusterfuzz spotted issue (#15032) (#15034)
Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2021-03-19 00:21:33 +02:00
zeripath
8c461eb261 Fix several render issues (#14986) (#15013)
Backport #14986

* Fix an issue with panics related to attributes
* Wrap goldmark render in a recovery function
* Reduce memory use in render emoji
* Use a pipe for rendering goldmark - still needs more work and a limiter

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Lauris BH <lauris@nix.lv>
2021-03-17 10:58:58 +02:00
Norwin
fff66eb016 API: fix set milestone on PR creation (#14981) (#15001)
* API: fix set milestone on PR creation

PR creation via the API failed with 404 because we searched
for milestoneID 0, due to uninitialized variable usage.

* add tests

Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: 6543 <6543@obermui.de>
2021-03-15 11:01:04 -04:00
zeripath
c965ed6529 Make sure sibling images get a link too (#14979) (#14995)
Backport #14979

Due to a problem with the ast.Walker in our goldmark transformer,
an image with a sibling image will not be transformed to gain a parent
link. This PR fixes this.

Fix #12925

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-15 12:34:56 +08:00
zeripath
71a2adbf10 Fix Anchor jumping with escaped query components (#14969) (#14977)
Backport #14969

Fix #14968

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-13 09:54:53 +00:00
Norwin
3231b70043 check if original author is set (#14972)
Co-authored-by: 6543 <6543@obermui.de>
2021-03-13 11:05:56 +08:00
Norwin
e3c44923d7 fix release mail html template (#14976)
was missing an </a>
2021-03-12 20:39:05 +00:00
zeripath
3e7dccdf47 Fix excluding more than two labels on issues list (#14962) (#14973)
Backport #14962

Fix #14840

Signed-off-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: Norwin Roosen <git@nroo.de>
Co-authored-by: jaqra <48099350+jaqra@users.noreply.github.com>

Co-authored-by: Norwin Roosen <git@nroo.de>
Co-authored-by: jaqra <48099350+jaqra@users.noreply.github.com>
2021-03-12 18:12:14 +01:00
6543
33c2c49627 Prevent panic when editing forked repos by API (#14960) (#14963)
When editing forked repos using the API, the BaseRepository needs to be loaded
in order to check its visibility; otherwise there will be an NPE panic.

Fix #14956

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
2021-03-12 08:54:18 +08:00
fnetX (aka fralix)
05ac72cf33 Add "captcha" to list of reserved usernames (#14930)
Signed-off-by: Otto Richter <git@fralix.ovh>
2021-03-08 17:50:13 +01:00
zeripath
906ecfd173 Re-enable import local paths after reversion from #13610 (#14925) (#14927)
Backport #14925

PR #13610 unfortunately disabled importing repositories from local paths.
This PR restores this functionality.

Fix #14700

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-08 14:50:57 +01:00
6543
75496b9ff5 Changelog v1.13.4 (#14917)
* Changelog v1.13.4

* nit
2021-03-07 23:02:54 +08:00
zeripath
8dad47a94a Fix race in LFS ContentStore.Put(...) (#14895) (#14913)
Backport #14895

Continuing on from #14888

The previous implementation has a race whereby an incomplete upload or an
upload with a hash mismatch can end up in the ContentStore. This PR moves the
validation into the reader so that if there is a hash error or size
mismatch the reader will return an error instead of an io.EOF,
causing the store to abort the save.

Signed-off-by: Andrew Thornton <art27@cantab.net>
2021-03-07 00:53:37 +02:00
6543
8e792986bb Fix a couple of issues with feeds (#14897) (#14903)
Backport (#14897)

which fixes a couple of issues with feeds
2021-03-06 06:13:38 +01:00
6543
da80e90ac8 Fix race in local storage (#14888) (#14901)
LocalStorage should only put completed files in position

Signed-off-by: Andrew Thornton <art27@cantab.net>

Co-authored-by: zeripath <art27@cantab.net>
Co-authored-by: techknowlogick <techknowlogick@gitea.io>
2021-03-06 05:07:03 +01:00
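A minimal sketch of that pattern follows, assuming nothing about Gitea's actual LocalStorage code: write to a temporary file in the same directory and only rename it into its final position once the copy has completed, so readers never observe a partial file.

package storagesketch

import (
    "io"
    "io/ioutil"
    "os"
    "path/filepath"
)

// saveAtomically writes r to a temp file first so a concurrent reader can never
// observe a half-written file at finalPath.
func saveAtomically(finalPath string, r io.Reader) (int64, error) {
    if err := os.MkdirAll(filepath.Dir(finalPath), 0o755); err != nil {
        return 0, err
    }
    tmp, err := ioutil.TempFile(filepath.Dir(finalPath), "upload-*")
    if err != nil {
        return 0, err
    }
    defer os.Remove(tmp.Name()) // best-effort cleanup; harmless after a successful rename

    written, err := io.Copy(tmp, r)
    if err != nil {
        tmp.Close()
        return written, err // incomplete copy: the temp file is discarded, never published
    }
    if err := tmp.Close(); err != nil {
        return written, err
    }
    // only a fully written file is moved into position
    return written, os.Rename(tmp.Name(), finalPath)
}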
6543
74dc22358b When transferring a repository and the database transaction failed, roll back the renames (#14864) (#14902)
Fix #14821

Co-authored-by: Andrew Thornton <art27@cantab.net>
Co-authored-by: 6543 <6543@obermui.de>

Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Andrew Thornton <art27@cantab.net>
2021-03-06 11:12:11 +08:00
John Olheiser
7d3e174906 Signed-off-by: jolheiser <john.olheiser@gmail.com> (#14898) (#14899) 2021-03-05 23:54:01 +02:00
6543
8456700411 [Docs] Fix how lfs data path is set (#14855) (#14884)
* fix docs: lfs data path

* DEPRECATED | 已废弃 (deprecated)

Co-authored-by: techknowlogick <techknowlogick@gitea.io>
Co-authored-by: Lunny Xiao <xiaolunwen@gmail.com>
2021-03-04 22:10:15 +01:00
6543
8a6acbbc12 IsUserAllowedToUpdate should ignore if user is nil (#14886) 2021-03-04 21:28:28 +01:00
429 changed files with 38831 additions and 16639 deletions

View File

@ -4,6 +4,65 @@ This changelog goes through all the changes that have been made in each release
without substantial changes to our git log; to see the highlights of what has
been added to each release, please refer to the [blog](https://blog.gitea.io).
## [1.13.7](https://github.com/go-gitea/gitea/releases/tag/v1.13.7) - 2021-04-07
* SECURITY
* Update to bluemonday-1.0.6 (#15294) (#15298)
* Clusterfuzz found another way (#15160) (#15169)
* API
* Fix wrong user returned in API (#15139) (#15150)
* BUGFIXES
* Add 'fonts' into 'KnownPublicEntries' (#15188) (#15317)
* Speed up `enry.IsVendor` (#15213) (#15246)
* Respond with 404 for diff/patch of a commit that does not exist (#15221) (#15238)
* Prevent NPE in CommentMustAsDiff if no hunk header (#15199) (#15201)
* MISC
* Add size to Save function (#15264) (#15271)
## [1.13.6](https://github.com/go-gitea/gitea/releases/tag/v1.13.6) - 2021-03-23
* SECURITY
* Fix bug on avatar middleware (#15124) (#15125)
* Fix another clusterfuzz identified issue (#15096) (#15114)
* API
* Fix nil exception for get pull reviews API #15104 (#15106)
* BUGFIXES
* Fix markdown rendering in milestone content (#15056) (#15092)
## [1.13.5](https://github.com/go-gitea/gitea/releases/tag/v1.13.5) - 2021-03-21
* SECURITY
* Update to goldmark 1.3.3 (#15059) (#15061)
* API
* Fix set milestone on PR creation (#14981) (#15001)
* Prevent panic when editing forked repos by API (#14960) (#14963)
* BUGFIXES
* Fix bug when upload on web (#15042) (#15055)
* Delete Labels & IssueLabels on Repo Delete too (#15039) (#15051)
* another clusterfuzz spotted issue (#15032) (#15034)
* Fix postgres ID sequences broken by recreate-table (#15015) (#15029)
* Fix several render issues (#14986) (#15013)
* Make sure sibling images get a link too (#14979) (#14995)
* Fix Anchor jumping with escaped query components (#14969) (#14977)
* fix release mail html template (#14976)
* Fix excluding more than two labels on issues list (#14962) (#14973)
* don't mark each comment poster as OP (#14971) (#14972)
* Add "captcha" to list of reserved usernames (#14930)
* Re-enable import local paths after reversion from #13610 (#14925) (#14927)
## [1.13.4](https://github.com/go-gitea/gitea/releases/tag/v1.13.4) - 2021-03-07
* SECURITY
* Fix issue popups (#14898) (#14899)
* BUGFIXES
* Fix race in LFS ContentStore.Put(...) (#14895) (#14913)
* Fix a couple of issues with feeds (#14897) (#14903)
* When transferring a repository and the database transaction failed, roll back the renames (#14864) (#14902)
* Fix race in local storage (#14888) (#14901)
* Fix 500 on pull view page if user is not logged in (#14885) (#14886)
* DOCS
* Fix how lfs data path is set (#14855) (#14884)
## [1.13.3](https://github.com/go-gitea/gitea/releases/tag/v1.13.3) - 2021-03-04
* BREAKING
@ -194,7 +253,7 @@ been added to each release, please refer to the [blog](https://blog.gitea.io).
* Fix scrolling to resolved comment anchors (#13343) (#13371)
* Storage configuration support `[storage]` (#13314) (#13379)
* When creating line diffs do not split within an html entity (#13357) (#13375) (#13425) (#13427)
* Fix reactions on code comments (#13390) (#13401)
* Add missing full names when DEFAULT_SHOW_FULL_NAME is enabled (#13424)
* Replies to outdated code comments should also be outdated (#13217) (#13433)
* Fix panic bug in handling multiple references in commit (#13486) (#13487)

View File

@ -606,6 +606,22 @@ func runDoctorCheckDBConsistency(ctx *cli.Context) ([]string, error) {
}
}
// find IssueLabels without existing label
count, err = models.CountOrphanedIssueLabels()
if err != nil {
return nil, err
}
if count > 0 {
if ctx.Bool("fix") {
if err = models.DeleteOrphanedIssueLabels(); err != nil {
return nil, err
}
results = append(results, fmt.Sprintf("%d issue_labels without existing label deleted", count))
} else {
results = append(results, fmt.Sprintf("%d issue_labels without existing label", count))
}
}
//find issues without existing repository
count, err = models.CountOrphanedIssues()
if err != nil {
@ -670,6 +686,23 @@ func runDoctorCheckDBConsistency(ctx *cli.Context) ([]string, error) {
}
}
if setting.Database.UsePostgreSQL {
count, err = models.CountBadSequences()
if err != nil {
return nil, err
}
if count > 0 {
if ctx.Bool("fix") {
err := models.FixBadSequences()
if err != nil {
return nil, err
}
results = append(results, fmt.Sprintf("%d sequences updated", count))
} else {
results = append(results, fmt.Sprintf("%d sequences with incorrect values", count))
}
}
}
//ToDo: function to recalc all counters
return results, nil

View File

@ -276,7 +276,7 @@ Values containing `#` or `;` must be quoted using `` ` `` or `"""`.
- `LANDING_PAGE`: **home**: Landing page for unauthenticated users \[home, explore, organizations, login\].
- `LFS_START_SERVER`: **false**: Enables git-lfs support.
- `LFS_CONTENT_PATH`: **%(APP_DATA_PATH)/lfs**: Default LFS content path. (if it is on local storage.)
- `LFS_CONTENT_PATH`: **%(APP_DATA_PATH)/lfs**: DEPRECATED: Default LFS content path. (if it is on local storage.)
- `LFS_JWT_SECRET`: **\<empty\>**: LFS authentication secret, change this to a unique string.
- `LFS_HTTP_AUTH_EXPIRY`: **20m**: LFS authentication validity period in time.Duration, pushes taking longer than this may fail.
- `LFS_MAX_FILE_SIZE`: **0**: Maximum allowed LFS file size in bytes (Set to 0 for no limit).
@ -828,7 +828,7 @@ is `data/lfs` and the default of `MINIO_BASE_PATH` is `lfs/`.
- `STORAGE_TYPE`: **local**: Storage type for lfs, `local` for local disk or `minio` for s3 compatible object storage service or other name defined with `[storage.xxx]`
- `SERVE_DIRECT`: **false**: Allows the storage driver to redirect to authenticated URLs to serve files directly. Currently, only Minio/S3 is supported via signed URLs, local does nothing.
- `CONTENT_PATH`: **./data/lfs**: Where to store LFS files, only available when `STORAGE_TYPE` is `local`.
- `PATH`: **./data/lfs**: Where to store LFS files, only available when `STORAGE_TYPE` is `local`. If not set, it falls back to the deprecated LFS_CONTENT_PATH value in the [server] section.
- `MINIO_ENDPOINT`: **localhost:9000**: Minio endpoint to connect to, only available when `STORAGE_TYPE` is `minio`
- `MINIO_ACCESS_KEY_ID`: Minio accessKeyID to connect, only available when `STORAGE_TYPE` is `minio`
- `MINIO_SECRET_ACCESS_KEY`: Minio secretAccessKey to connect, only available when `STORAGE_TYPE` is `minio`

View File

@ -73,6 +73,7 @@ menu:
- `LFS_START_SERVER`: Whether to enable git-lfs support. Can be `true` or `false`; default is `false`.
- `LFS_JWT_SECRET`: LFS authentication secret; change this to your own value.
- `LFS_CONTENT_PATH`: **Deprecated**: where files uploaded via the lfs command are stored; default is `data/lfs`.
## Database (`database`)
@ -323,7 +324,7 @@ LFS storage configuration. If `STORAGE_TYPE` is empty, this configuration will fall back to `[stora
- `STORAGE_TYPE`: **local**: Storage type for LFS; `local` stores to local disk, `minio` stores to an s3-compatible object storage service.
- `SERVE_DIRECT`: **false**: Allow direct redirection to the storage system. Currently only Minio/S3 is supported.
- `CONTENT_PATH`: Where files uploaded via the lfs command are stored; default is `data/lfs`.
- `PATH`: Where files uploaded via the lfs command are stored; default is `data/lfs`.
- `MINIO_ENDPOINT`: **localhost:9000**: Minio endpoint, only valid when `LFS_STORAGE_TYPE` is `minio`.
- `MINIO_ACCESS_KEY_ID`: Minio accessKeyID, only valid when `LFS_STORAGE_TYPE` is `minio`.
- `MINIO_SECRET_ACCESS_KEY`: Minio secretAccessKey, only valid when `LFS_STORAGE_TYPE` is `minio`.

10
go.mod
View File

@ -70,7 +70,7 @@ require (
github.com/mgechev/dots v0.0.0-20190921121421-c36f7dcfbb81
github.com/mgechev/revive v1.0.3-0.20200921231451-246eac737dc7
github.com/mholt/archiver/v3 v3.3.0
github.com/microcosm-cc/bluemonday v1.0.3-0.20191119130333-0a75d7616912
github.com/microcosm-cc/bluemonday v1.0.6
github.com/minio/minio-go/v7 v7.0.4
github.com/mitchellh/go-homedir v1.1.0
github.com/msteinert/pam v0.0.0-20151204160544-02ccfbfaf0cc
@ -99,15 +99,15 @@ require (
github.com/urfave/cli v1.20.0
github.com/xanzy/go-gitlab v0.37.0
github.com/yohcop/openid-go v1.0.0
github.com/yuin/goldmark v1.2.1
github.com/yuin/goldmark v1.3.3
github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691
github.com/yuin/goldmark-meta v0.0.0-20191126180153-f0638e958b60
go.jolheiser.com/hcaptcha v0.0.4
go.jolheiser.com/pwn v0.0.3
golang.org/x/crypto v0.0.0-20201217014255-9d1352758620
golang.org/x/net v0.0.0-20200904194848-62affa334b73
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4
golang.org/x/oauth2 v0.0.0-20200107190931-bf48bf16ab8d
golang.org/x/sys v0.0.0-20200918174421-af09f7315aff
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44
golang.org/x/text v0.3.3
golang.org/x/time v0.0.0-20200630173020-3af7569d3a1e // indirect
golang.org/x/tools v0.0.0-20200921210052-fa0125251cc4
@ -124,5 +124,3 @@ require (
)
replace github.com/hashicorp/go-version => github.com/6543/go-version v1.2.4
replace github.com/microcosm-cc/bluemonday => github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8

22
go.sum
View File

@ -140,8 +140,6 @@ github.com/bradfitz/gomemcache v0.0.0-20190329173943-551aad21a668 h1:U/lr3Dgy4WK
github.com/bradfitz/gomemcache v0.0.0-20190329173943-551aad21a668/go.mod h1:H0wQNHz2YrLsuXOZozoeDmnHXkNCRmMW0gwFWDfEZDA=
github.com/census-instrumentation/opencensus-proto v0.2.1/go.mod h1:f6KPmirojxKA12rnyqOA5BBL4O983OfeGPqjHWSTneU=
github.com/cespare/xxhash v1.1.0/go.mod h1:XrSqR1VqqWfGrhpAt58auRo0WTKS1nRRg3ghfAqPWnc=
github.com/chris-ramon/douceur v0.2.0 h1:IDMEdxlEUUBYBKE4z/mJnFyVXox+MjuEVDJNN27glkU=
github.com/chris-ramon/douceur v0.2.0/go.mod h1:wDW5xjJdeoMm1mRt4sD4c/LbF/mWdEpRXQKjTR8nIBE=
github.com/client9/misspell v0.3.4/go.mod h1:qj6jICC3Q7zFZvVWo7KLAzC3yx5G7kyvSDkc90ppPyw=
github.com/cockroachdb/apd v1.1.0 h1:3LFP3629v+1aKXU5Q37mxmRxX/pIu1nijXydLShEq5I=
github.com/cockroachdb/apd v1.1.0/go.mod h1:8Sl8LxpKi29FqWXR16WEFZRNSz3SoPzUzeMeY4+DwBQ=
@ -598,8 +596,6 @@ github.com/lib/pq v1.3.0/go.mod h1:5WUZQaWbwv1U+lTReE5YruASi9Al49XbQIvNi/34Woo=
github.com/lib/pq v1.7.0/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lib/pq v1.8.1-0.20200908161135-083382b7e6fc h1:ERSU1OvZ6MdWhHieo2oT7xwR/HCksqKdgK6iYPU5pHI=
github.com/lib/pq v1.8.1-0.20200908161135-083382b7e6fc/go.mod h1:AlVN5x4E4T544tWzH6hKfbfQvm3HdbOxrmggDNAPY9o=
github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8 h1:1omo92DLtxQu6VwVPSZAmduHaK5zssed6cvkHyl1XOg=
github.com/lunny/bluemonday v1.0.5-0.20201227154428-ca34796141e8/go.mod h1:8iwZnFn2CDDNZ0r6UXhF4xawGvzaqzCRa1n3/lO3W2w=
github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96 h1:uNwtsDp7ci48vBTTxDuwcoTXz4lwtDTe7TjCQ0noaWY=
github.com/lunny/dingtalk_webhook v0.0.0-20171025031554-e3534c89ef96/go.mod h1:mmIfjCSQlGYXmJ95jFN84AkQFnVABtKuJL8IrzwvUKQ=
github.com/lunny/log v0.0.0-20160921050905-7887c61bf0de h1:nyxwRdWHAVxpFcDThedEgQ07DbcRc5xgNObtbTp76fk=
@ -651,6 +647,8 @@ github.com/mgechev/revive v1.0.3-0.20200921231451-246eac737dc7 h1:ydVkpU/M4/c45y
github.com/mgechev/revive v1.0.3-0.20200921231451-246eac737dc7/go.mod h1:no/hfevHbndpXR5CaJahkYCfM/FFpmM/dSOwFGU7Z1o=
github.com/mholt/archiver/v3 v3.3.0 h1:vWjhY8SQp5yzM9P6OJ/eZEkmi3UAbRrxCq48MxjAzig=
github.com/mholt/archiver/v3 v3.3.0/go.mod h1:YnQtqsp+94Rwd0D/rk5cnLrxusUBUXg+08Ebtr1Mqao=
github.com/microcosm-cc/bluemonday v1.0.6 h1:ZOvqHKtnx0fUpnbQm3m3zKFWE+DRC+XB1onh8JoEObE=
github.com/microcosm-cc/bluemonday v1.0.6/go.mod h1:HOT/6NaBlR0f9XlxD3zolN6Z3N8Lp4pvhp+jLS5ihnI=
github.com/miekg/dns v1.0.14/go.mod h1:W1PPwlIAgtquWBMBEV9nkV9Cazfe8ScdGz/Lj7v3Nrg=
github.com/minio/md5-simd v1.1.0 h1:QPfiOqlZH+Cj9teu0t9b1nTBfPbyTl16Of5MeuShdK4=
github.com/minio/md5-simd v1.1.0/go.mod h1:XpBqgZULrMYD3R+M28PcmP0CkI7PEMzB3U77ZrKZ0Gw=
@ -885,8 +883,9 @@ github.com/yuin/goldmark v1.1.7/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9dec
github.com/yuin/goldmark v1.1.22/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.25/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.1.32/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.2.1 h1:ruQGxdhGHe7FWOJPT0mKs5+pD2Xs1Bm/kdGlHO04FmM=
github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
github.com/yuin/goldmark v1.3.3 h1:37BdQwPx8VOSic8eDSWee6QL9mRpZRm9VJp/QugNrW0=
github.com/yuin/goldmark v1.3.3/go.mod h1:mwnBkeHKe2W/ZEtQ+71ViKU8L12m81fl3OWwC1Zlc8k=
github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691 h1:VWSxtAiQNh3zgHJpdpkpVYjTPqRE3P6UZCOPa1nRDio=
github.com/yuin/goldmark-highlighting v0.0.0-20200307114337-60d527fdb691/go.mod h1:YLF3kDffRfUH/bTxOxHhV6lxwIB3Vfj91rEwNMS9MXo=
github.com/yuin/goldmark-meta v0.0.0-20191126180153-f0638e958b60 h1:gZucqLjL1eDzVWrXj4uiWeMbAopJlBR2mKQAsTGdPwo=
@ -995,8 +994,9 @@ golang.org/x/net v0.0.0-20200602114024-627f9648deb9/go.mod h1:qpuaurCH72eLCgpAm/
golang.org/x/net v0.0.0-20200625001655-4c5254603344/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200707034311-ab3426394381/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200822124328-c89045814202/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20200904194848-62affa334b73 h1:MXfv8rhZWmFeqX3GNZRsd6vOLoaCHjYEX3qkRo3YBUA=
golang.org/x/net v0.0.0-20200904194848-62affa334b73/go.mod h1:/O7V0waA8r7cgGh81Ro3o1hOxt32SMVPicZroKQ2sZA=
golang.org/x/net v0.0.0-20210331212208-0fccb6fa2b5c/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4 h1:4nGaVu0QrbjT/AK2PRLuQfQuh6DJve+pELhqTdAj3x0=
golang.org/x/net v0.0.0-20210405180319-a5a99cb37ef4/go.mod h1:p54w0d4576C0XHj96bSt6lcn1PtDYWL6XObtHCRCNQM=
golang.org/x/oauth2 v0.0.0-20180620175406-ef147856a6dd/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20180821212333-d2e6202438be/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
golang.org/x/oauth2 v0.0.0-20181106182150-f42d05182288/go.mod h1:N/0e6XlmueqKjAGxoOufVs8QHGRruUQn6yWY3a++T0U=
@ -1052,10 +1052,12 @@ golang.org/x/sys v0.0.0-20200302150141-5c8b2ff67527/go.mod h1:h1NjWce9XRLGQEsW7w
golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200413165638-669c56c373c4/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200625212154-ddb9806d33ae/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20200918174421-af09f7315aff h1:1CPUrky56AcgSpxz/KfgzQWzfG09u5YOL8MvPYBlrL8=
golang.org/x/sys v0.0.0-20200918174421-af09f7315aff/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221 h1:/ZHdbVpdR/jk3g30/d4yUL0JU9kksj8+F/bnQUVLGDM=
golang.org/x/sys v0.0.0-20201119102817-f84b799fce68/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44 h1:Bli41pIlzTzf3KEY06n+xnzK/BESIg2ze4Pgfh/aI8c=
golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
golang.org/x/term v0.0.0-20201117132131-f5c789dd3221/go.mod h1:Nr5EML6q2oocZ2LXRh80K7BxOlk5/8JxuGnuhpl+muw=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1 h1:v+OssWQX+hTHEmOBgwxdZxK4zHq3yOs8F9J7mk0PY8E=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.1-0.20180807135948-17ff2d5776d2/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk=

View File

@ -74,8 +74,79 @@ func TestAPICreatePullSuccess(t *testing.T) {
Base: "master",
Title: "create a failure pr",
})
session.MakeRequest(t, req, 201)
session.MakeRequest(t, req, http.StatusUnprocessableEntity) // second request should fail
}
func TestAPICreatePullWithFieldsSuccess(t *testing.T) {
defer prepareTestEnv(t)()
// repo10 has code and pulls units.
repo10 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 10}).(*models.Repository)
owner10 := models.AssertExistsAndLoadBean(t, &models.User{ID: repo10.OwnerID}).(*models.User)
// repo11 only has the code unit but should still create pulls
repo11 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 11}).(*models.Repository)
owner11 := models.AssertExistsAndLoadBean(t, &models.User{ID: repo11.OwnerID}).(*models.User)
session := loginUser(t, owner11.Name)
token := getTokenForLoggedInUser(t, session)
opts := &api.CreatePullRequestOption{
Head: fmt.Sprintf("%s:master", owner11.Name),
Base: "master",
Title: "create a failure pr",
Body: "foobaaar",
Milestone: 5,
Assignees: []string{owner10.Name},
Labels: []int64{5},
}
req := NewRequestWithJSON(t, http.MethodPost, fmt.Sprintf("/api/v1/repos/%s/%s/pulls?token=%s", owner10.Name, repo10.Name, token), opts)
res := session.MakeRequest(t, req, 201)
pull := new(api.PullRequest)
DecodeJSON(t, res, pull)
assert.NotNil(t, pull.Milestone)
assert.EqualValues(t, opts.Milestone, pull.Milestone.ID)
if assert.Len(t, pull.Assignees, 1) {
assert.EqualValues(t, opts.Assignees[0], owner10.Name)
}
assert.NotNil(t, pull.Labels)
assert.EqualValues(t, opts.Labels[0], pull.Labels[0].ID)
}
func TestAPICreatePullWithFieldsFailure(t *testing.T) {
defer prepareTestEnv(t)()
// repo10 has code and pulls units.
repo10 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 10}).(*models.Repository)
owner10 := models.AssertExistsAndLoadBean(t, &models.User{ID: repo10.OwnerID}).(*models.User)
// repo11 only has the code unit but should still create pulls
repo11 := models.AssertExistsAndLoadBean(t, &models.Repository{ID: 11}).(*models.Repository)
owner11 := models.AssertExistsAndLoadBean(t, &models.User{ID: repo11.OwnerID}).(*models.User)
session := loginUser(t, owner11.Name)
token := getTokenForLoggedInUser(t, session)
opts := &api.CreatePullRequestOption{
Head: fmt.Sprintf("%s:master", owner11.Name),
Base: "master",
}
req := NewRequestWithJSON(t, http.MethodPost, fmt.Sprintf("/api/v1/repos/%s/%s/pulls?token=%s", owner10.Name, repo10.Name, token), opts)
session.MakeRequest(t, req, http.StatusUnprocessableEntity)
opts.Title = "is required"
opts.Milestone = 666
session.MakeRequest(t, req, http.StatusUnprocessableEntity)
opts.Milestone = 5
opts.Assignees = []string{"qweruqweroiuyqweoiruywqer"}
session.MakeRequest(t, req, http.StatusUnprocessableEntity)
opts.Assignees = []string{owner10.LoginName}
opts.Labels = []int64{55555}
session.MakeRequest(t, req, http.StatusUnprocessableEntity)
opts.Labels = []int64{5}
}
func TestAPIEditPull(t *testing.T) {

View File

@ -122,7 +122,7 @@ func TestGetAttachment(t *testing.T) {
t.Run(tc.name, func(t *testing.T) {
//Write empty file to be available for response
if tc.createFile {
_, err := storage.Attachments.Save(models.AttachmentRelativePath(tc.uuid), strings.NewReader("hello world"))
_, err := storage.Attachments.Save(models.AttachmentRelativePath(tc.uuid), strings.NewReader("hello world"), -1)
assert.NoError(t, err)
}
//Actual test

View File

@ -99,7 +99,7 @@ func (a *Attachment) LinkedRepository() (*Repository, UnitType, error) {
func NewAttachment(attach *Attachment, buf []byte, file io.Reader) (_ *Attachment, err error) {
attach.UUID = gouuid.New().String()
size, err := storage.Attachments.Save(attach.RelativePath(), io.MultiReader(bytes.NewReader(buf), file))
size, err := storage.Attachments.Save(attach.RelativePath(), io.MultiReader(bytes.NewReader(buf), file), -1)
if err != nil {
return nil, fmt.Errorf("Create: %v", err)
}

View File

@ -5,10 +5,13 @@
package models
import (
"fmt"
"reflect"
"regexp"
"strings"
"testing"
"code.gitea.io/gitea/modules/setting"
"github.com/stretchr/testify/assert"
"xorm.io/builder"
)
@ -221,6 +224,24 @@ func DeleteOrphanedLabels() error {
return nil
}
// CountOrphanedIssueLabels returns the count of IssueLabels which have no label behind them anymore
func CountOrphanedIssueLabels() (int64, error) {
return x.Table("issue_label").
Join("LEFT", "label", "issue_label.label_id = label.id").
Where(builder.IsNull{"label.id"}).Count()
}
// DeleteOrphanedIssueLabels deletes IssueLabels which have no label behind them anymore
func DeleteOrphanedIssueLabels() error {
_, err := x.In("id", builder.Select("issue_label.id").From("issue_label").
Join("LEFT", "label", "issue_label.label_id = label.id").
Where(builder.IsNull{"label.id"})).
Delete(IssueLabel{})
return err
}
// CountOrphanedIssues count issues without a repo
func CountOrphanedIssues() (int64, error) {
return x.Table("issue").
@ -295,3 +316,61 @@ func FixNullArchivedRepository() (int64, error) {
IsArchived: false,
})
}
// CountBadSequences looks for broken sequences from recreate-table mistakes
func CountBadSequences() (int64, error) {
if !setting.Database.UsePostgreSQL {
return 0, nil
}
sess := x.NewSession()
defer sess.Close()
var sequences []string
schema := sess.Engine().Dialect().URI().Schema
sess.Engine().SetSchema("")
if err := sess.Table("information_schema.sequences").Cols("sequence_name").Where("sequence_name LIKE 'tmp_recreate__%_id_seq%' AND sequence_catalog = ?", setting.Database.Name).Find(&sequences); err != nil {
return 0, err
}
sess.Engine().SetSchema(schema)
return int64(len(sequences)), nil
}
// FixBadSequences fixes for broken sequences from recreate-table mistakes
func FixBadSequences() error {
if !setting.Database.UsePostgreSQL {
return nil
}
sess := x.NewSession()
defer sess.Close()
if err := sess.Begin(); err != nil {
return err
}
var sequences []string
schema := sess.Engine().Dialect().URI().Schema
sess.Engine().SetSchema("")
if err := sess.Table("information_schema.sequences").Cols("sequence_name").Where("sequence_name LIKE 'tmp_recreate__%_id_seq%' AND sequence_catalog = ?", setting.Database.Name).Find(&sequences); err != nil {
return err
}
sess.Engine().SetSchema(schema)
sequenceRegexp := regexp.MustCompile(`tmp_recreate__(\w+)_id_seq.*`)
for _, sequence := range sequences {
tableName := sequenceRegexp.FindStringSubmatch(sequence)[1]
newSequenceName := tableName + "_id_seq"
if _, err := sess.Exec(fmt.Sprintf("ALTER SEQUENCE `%s` RENAME TO `%s`", sequence, newSequenceName)); err != nil {
return err
}
if _, err := sess.Exec(fmt.Sprintf("SELECT setval('%s', COALESCE((SELECT MAX(id)+1 FROM `%s`), 1), false)", newSequenceName, tableName)); err != nil {
return err
}
}
return sess.Commit()
}

View File

@ -33,3 +33,11 @@
num_issues: 1
num_closed_issues: 0
-
id: 5
repo_id: 10
org_id: 0
name: pull-test-label
color: '#000000'
num_issues: 0
num_closed_issues: 0

View File

@ -29,3 +29,11 @@
content: content random
is_closed: false
num_issues: 0
-
id: 5
repo_id: 10
name: milestone of repo 10
content: for testing with PRs
is_closed: false
num_issues: 0

View File

@ -146,6 +146,7 @@
num_closed_issues: 0
num_pulls: 1
num_closed_pulls: 0
num_milestones: 1
is_mirror: false
num_forks: 1
status: 0

View File

@ -764,3 +764,15 @@ func DeleteIssueLabel(issue *Issue, label *Label, doer *User) (err error) {
return sess.Commit()
}
func deleteLabelsByRepoID(sess Engine, repoID int64) error {
deleteCond := builder.Select("id").From("label").Where(builder.Eq{"label.repo_id": repoID})
if _, err := sess.In("label_id", deleteCond).
Delete(&IssueLabel{}); err != nil {
return err
}
_, err := sess.Delete(&Label{RepoID: repoID})
return err
}

View File

@ -516,6 +516,31 @@ func recreateTable(sess *xorm.Session, bean interface{}) error {
return err
}
case setting.Database.UsePostgreSQL:
var originalSequences []string
type sequenceData struct {
LastValue int `xorm:"'last_value'"`
IsCalled bool `xorm:"'is_called'"`
}
sequenceMap := map[string]sequenceData{}
schema := sess.Engine().Dialect().URI().Schema
sess.Engine().SetSchema("")
if err := sess.Table("information_schema.sequences").Cols("sequence_name").Where("sequence_name LIKE ? || '_%' AND sequence_catalog = ?", tableName, setting.Database.Name).Find(&originalSequences); err != nil {
log.Error("Unable to rename %s to %s. Error: %v", tempTableName, tableName, err)
return err
}
sess.Engine().SetSchema(schema)
for _, sequence := range originalSequences {
sequenceData := sequenceData{}
if _, err := sess.Table(sequence).Cols("last_value", "is_called").Get(&sequenceData); err != nil {
log.Error("Unable to get last_value and is_called from %s. Error: %v", sequence, err)
return err
}
sequenceMap[sequence] = sequenceData
}
// CASCADE causes postgres to drop all the constraints on the old table
if _, err := sess.Exec(fmt.Sprintf("DROP TABLE `%s` CASCADE", tableName)); err != nil {
log.Error("Unable to drop old table %s. Error: %v", tableName, err)
@ -529,7 +554,6 @@ func recreateTable(sess *xorm.Session, bean interface{}) error {
}
var indices []string
schema := sess.Engine().Dialect().URI().Schema
sess.Engine().SetSchema("")
if err := sess.Table("pg_indexes").Cols("indexname").Where("tablename = ? ", tableName).Find(&indices); err != nil {
log.Error("Unable to rename %s to %s. Error: %v", tempTableName, tableName, err)
@ -545,6 +569,43 @@ func recreateTable(sess *xorm.Session, bean interface{}) error {
}
}
var sequences []string
sess.Engine().SetSchema("")
if err := sess.Table("information_schema.sequences").Cols("sequence_name").Where("sequence_name LIKE 'tmp_recreate__' || ? || '_%' AND sequence_catalog = ?", tableName, setting.Database.Name).Find(&sequences); err != nil {
log.Error("Unable to rename %s to %s. Error: %v", tempTableName, tableName, err)
return err
}
sess.Engine().SetSchema(schema)
for _, sequence := range sequences {
newSequenceName := strings.Replace(sequence, "tmp_recreate__", "", 1)
if _, err := sess.Exec(fmt.Sprintf("ALTER SEQUENCE `%s` RENAME TO `%s`", sequence, newSequenceName)); err != nil {
log.Error("Unable to rename %s sequence to %s. Error: %v", sequence, newSequenceName, err)
return err
}
val, ok := sequenceMap[newSequenceName]
if newSequenceName == tableName+"_id_seq" {
if ok && val.LastValue != 0 {
if _, err := sess.Exec(fmt.Sprintf("SELECT setval('%s', %d, %t)", newSequenceName, val.LastValue, val.IsCalled)); err != nil {
log.Error("Unable to reset %s to %d. Error: %v", newSequenceName, val, err)
return err
}
} else {
// We're going to try to guess this
if _, err := sess.Exec(fmt.Sprintf("SELECT setval('%s', COALESCE((SELECT MAX(id)+1 FROM `%s`), 1), false)", newSequenceName, tableName)); err != nil {
log.Error("Unable to reset %s. Error: %v", newSequenceName, err)
return err
}
}
} else if ok {
if _, err := sess.Exec(fmt.Sprintf("SELECT setval('%s', %d, %t)", newSequenceName, val.LastValue, val.IsCalled)); err != nil {
log.Error("Unable to reset %s to %d. Error: %v", newSequenceName, val, err)
return err
}
}
}
case setting.Database.UseMSSQL:
// MSSQL will drop all the constraints on the old table
if _, err := sess.Exec(fmt.Sprintf("DROP TABLE `%s`", tableName)); err != nil {

View File

@ -1290,11 +1290,44 @@ func IncrementRepoForkNum(ctx DBContext, repoID int64) error {
}
// TransferOwnership transfers all corresponding settings from the old user to the new one.
func TransferOwnership(doer *User, newOwnerName string, repo *Repository) error {
func TransferOwnership(doer *User, newOwnerName string, repo *Repository) (err error) {
repoRenamed := false
wikiRenamed := false
oldOwnerName := doer.Name
defer func() {
if !repoRenamed && !wikiRenamed {
return
}
recoverErr := recover()
if err == nil && recoverErr == nil {
return
}
if repoRenamed {
if err := os.Rename(RepoPath(newOwnerName, repo.Name), RepoPath(oldOwnerName, repo.Name)); err != nil {
log.Critical("Unable to move repository %s/%s directory from %s back to correct place %s: %v", oldOwnerName, repo.Name, RepoPath(newOwnerName, repo.Name), RepoPath(oldOwnerName, repo.Name), err)
}
}
if wikiRenamed {
if err := os.Rename(WikiPath(newOwnerName, repo.Name), WikiPath(oldOwnerName, repo.Name)); err != nil {
log.Critical("Unable to move wiki for repository %s/%s directory from %s back to correct place %s: %v", oldOwnerName, repo.Name, WikiPath(newOwnerName, repo.Name), WikiPath(oldOwnerName, repo.Name), err)
}
}
if recoverErr != nil {
log.Error("Panic within TransferOwnership: %v\n%s", recoverErr, log.Stack(2))
panic(recoverErr)
}
}()
newOwner, err := GetUserByName(newOwnerName)
if err != nil {
return fmt.Errorf("get new owner '%s': %v", newOwnerName, err)
}
newOwnerName = newOwner.Name // ensure capitalisation matches
// Check if new owner has repository with same name.
has, err := IsRepositoryExist(newOwner, repo.Name)
@ -1311,6 +1344,7 @@ func TransferOwnership(doer *User, newOwnerName string, repo *Repository) error
}
oldOwner := repo.Owner
oldOwnerName = oldOwner.Name
// Note: we have to set value here to make sure recalculate accesses is based on
// new owner.
@ -1370,9 +1404,9 @@ func TransferOwnership(doer *User, newOwnerName string, repo *Repository) error
}
// Update repository count.
if _, err = sess.Exec("UPDATE `user` SET num_repos=num_repos+1 WHERE id=?", newOwner.ID); err != nil {
if _, err := sess.Exec("UPDATE `user` SET num_repos=num_repos+1 WHERE id=?", newOwner.ID); err != nil {
return fmt.Errorf("increase new owner repository count: %v", err)
} else if _, err = sess.Exec("UPDATE `user` SET num_repos=num_repos-1 WHERE id=?", oldOwner.ID); err != nil {
} else if _, err := sess.Exec("UPDATE `user` SET num_repos=num_repos-1 WHERE id=?", oldOwner.ID); err != nil {
return fmt.Errorf("decrease old owner repository count: %v", err)
}
@ -1382,7 +1416,7 @@ func TransferOwnership(doer *User, newOwnerName string, repo *Repository) error
// Remove watch for organization.
if oldOwner.IsOrganization() {
if err = watchRepo(sess, oldOwner.ID, repo.ID, false); err != nil {
if err := watchRepo(sess, oldOwner.ID, repo.ID, false); err != nil {
return fmt.Errorf("watchRepo [false]: %v", err)
}
}
@ -1394,16 +1428,18 @@ func TransferOwnership(doer *User, newOwnerName string, repo *Repository) error
return fmt.Errorf("Failed to create dir %s: %v", dir, err)
}
if err = os.Rename(RepoPath(oldOwner.Name, repo.Name), RepoPath(newOwner.Name, repo.Name)); err != nil {
if err := os.Rename(RepoPath(oldOwner.Name, repo.Name), RepoPath(newOwner.Name, repo.Name)); err != nil {
return fmt.Errorf("rename repository directory: %v", err)
}
repoRenamed = true
// Rename remote wiki repository to new path and delete local copy.
wikiPath := WikiPath(oldOwner.Name, repo.Name)
if com.IsExist(wikiPath) {
if err = os.Rename(wikiPath, WikiPath(newOwner.Name, repo.Name)); err != nil {
if err := os.Rename(wikiPath, WikiPath(newOwner.Name, repo.Name)); err != nil {
return fmt.Errorf("rename repository wiki: %v", err)
}
wikiRenamed = true
}
// If there was previously a redirect at this location, remove it.
@ -1693,6 +1729,10 @@ func DeleteRepository(doer *User, uid, repoID int64) error {
return fmt.Errorf("deleteBeans: %v", err)
}
if err := deleteLabelsByRepoID(sess, repoID); err != nil {
return err
}
// Delete Issues and related objects
var attachmentPaths []string
if attachmentPaths, err = deleteIssuesByRepoID(sess, repoID); err != nil {

View File

@ -727,6 +727,7 @@ var (
"assets",
"attachments",
"avatars",
"captcha",
"commits",
"debug",
"error",

70
modules/analyze/vendor.go Normal file
View File

@ -0,0 +1,70 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package analyze
import (
"regexp"
"sort"
"strings"
"github.com/go-enry/go-enry/v2/data"
)
var isVendorRegExp *regexp.Regexp
func init() {
matchers := data.VendorMatchers
caretStrings := make([]string, 0, 10)
caretShareStrings := make([]string, 0, 10)
matcherStrings := make([]string, 0, len(matchers))
for _, matcher := range matchers {
str := matcher.String()
if str[0] == '^' {
caretStrings = append(caretStrings, str[1:])
} else if str[0:5] == "(^|/)" {
caretShareStrings = append(caretShareStrings, str[5:])
} else {
matcherStrings = append(matcherStrings, str)
}
}
sort.Strings(caretShareStrings)
sort.Strings(caretStrings)
sort.Strings(matcherStrings)
sb := &strings.Builder{}
sb.WriteString("(?:^(?:")
sb.WriteString(caretStrings[0])
for _, matcher := range caretStrings[1:] {
sb.WriteString(")|(?:")
sb.WriteString(matcher)
}
sb.WriteString("))")
sb.WriteString("|")
sb.WriteString("(?:(?:^|/)(?:")
sb.WriteString(caretShareStrings[0])
for _, matcher := range caretShareStrings[1:] {
sb.WriteString(")|(?:")
sb.WriteString(matcher)
}
sb.WriteString("))")
sb.WriteString("|")
sb.WriteString("(?:")
sb.WriteString(matcherStrings[0])
for _, matcher := range matcherStrings[1:] {
sb.WriteString(")|(?:")
sb.WriteString(matcher)
}
sb.WriteString(")")
combined := sb.String()
isVendorRegExp = regexp.MustCompile(combined)
}
// IsVendor returns whether or not path is a vendor path.
func IsVendor(path string) bool {
return isVendorRegExp.MatchString(path)
}

View File

@ -0,0 +1,42 @@
// Copyright 2021 The Gitea Authors. All rights reserved.
// Use of this source code is governed by a MIT-style
// license that can be found in the LICENSE file.
package analyze
import "testing"
func TestIsVendor(t *testing.T) {
tests := []struct {
path string
want bool
}{
{"cache/", true},
{"random/cache/", true},
{"cache", false},
{"dependencies/", true},
{"Dependencies/", true},
{"dependency/", false},
{"dist/", true},
{"dist", false},
{"random/dist/", true},
{"random/dist", false},
{"deps/", true},
{"configure", true},
{"a/configure", true},
{"config.guess", true},
{"config.guess/", false},
{".vscode/", true},
{"doc/_build/", true},
{"a/docs/_build/", true},
{"a/dasdocs/_build-vsdoc.js", true},
{"a/dasdocs/_build-vsdoc.j", false},
}
for _, tt := range tests {
t.Run(tt.path, func(t *testing.T) {
if got := IsVendor(tt.path); got != tt.want {
t.Errorf("IsVendor() = %v, want %v", got, tt.want)
}
})
}
}

View File

@ -83,18 +83,17 @@ func ToPullReviewCommentList(review *models.Review, doer *models.User) ([]*api.P
apiComments := make([]*api.PullReviewComment, 0, len(review.CodeComments))
auth := false
if doer != nil {
auth = doer.IsAdmin || doer.ID == review.ReviewerID
}
for _, lines := range review.CodeComments {
for _, comments := range lines {
for _, comment := range comments {
auth := false
if doer != nil {
auth = doer.IsAdmin || doer.ID == comment.Poster.ID
}
apiComment := &api.PullReviewComment{
ID: comment.ID,
Body: comment.Content,
Reviewer: ToUser(review.Reviewer, doer != nil, auth),
Reviewer: ToUser(comment.Poster, doer != nil, auth),
ReviewID: review.ID,
Created: comment.CreatedUnix.AsTime(),
Updated: comment.UpdatedUnix.AsTime(),

View File

@ -13,6 +13,10 @@ import (
// ToUser convert models.User to api.User
// signed shall only be set if requester is logged in. authed shall only be set if user is site admin or user himself
func ToUser(user *models.User, signed, authed bool) *api.User {
if user == nil {
return nil
}
result := &api.User{
ID: user.ID,
UserName: user.Name,

View File

@ -30,6 +30,9 @@ var (
// aliasMap provides a map of the alias to its emoji data.
aliasMap map[string]int
// emptyReplacer is the string replacer for emoji codes.
emptyReplacer *strings.Replacer
// codeReplacer is the string replacer for emoji codes.
codeReplacer *strings.Replacer
@ -49,6 +52,7 @@ func loadMap() {
// process emoji codes and aliases
codePairs := make([]string, 0)
emptyPairs := make([]string, 0)
aliasPairs := make([]string, 0)
// sort from largest to small so we match combined emoji first
@ -64,6 +68,7 @@ func loadMap() {
// setup codes
codeMap[e.Emoji] = i
codePairs = append(codePairs, e.Emoji, ":"+e.Aliases[0]+":")
emptyPairs = append(emptyPairs, e.Emoji, e.Emoji)
// setup aliases
for _, a := range e.Aliases {
@ -77,6 +82,7 @@ func loadMap() {
}
// create replacers
emptyReplacer = strings.NewReplacer(emptyPairs...)
codeReplacer = strings.NewReplacer(codePairs...)
aliasReplacer = strings.NewReplacer(aliasPairs...)
})
@ -127,38 +133,53 @@ func ReplaceAliases(s string) string {
return aliasReplacer.Replace(s)
}
type rememberSecondWriteWriter struct {
pos int
idx int
end int
writecount int
}
func (n *rememberSecondWriteWriter) Write(p []byte) (int, error) {
n.writecount++
if n.writecount == 2 {
n.idx = n.pos
n.end = n.pos + len(p)
}
n.pos += len(p)
return len(p), nil
}
func (n *rememberSecondWriteWriter) WriteString(s string) (int, error) {
n.writecount++
if n.writecount == 2 {
n.idx = n.pos
n.end = n.pos + len(s)
}
n.pos += len(s)
return len(s), nil
}
// FindEmojiSubmatchIndex returns index pair of longest emoji in a string
func FindEmojiSubmatchIndex(s string) []int {
loadMap()
found := make(map[int]int)
keys := make([]int, 0)
secondWriteWriter := rememberSecondWriteWriter{}
//see if there are any emoji in string before looking for position of specific ones
//no performance difference when there is a match but 10x faster when there are not
if s == ReplaceCodes(s) {
// A faster and clean implementation would copy the trie tree formation in strings.NewReplacer but
// we can be lazy here.
//
// The implementation of strings.Replacer.WriteString is such that the first index of the emoji
// submatch is simply the second thing that is written to WriteString in the writer.
//
// Therefore we can simply take the index of the second write as our first emoji
//
// FIXME: just copy the trie implementation from strings.NewReplacer
_, _ = emptyReplacer.WriteString(&secondWriteWriter, s)
// if we wrote less than twice then we never "replaced"
if secondWriteWriter.writecount < 2 {
return nil
}
// get index of first emoji occurrence while also checking for longest combination
for j := range GemojiData {
i := strings.Index(s, GemojiData[j].Emoji)
if i != -1 {
if _, ok := found[i]; !ok {
if len(keys) == 0 || i < keys[0] {
found[i] = j
keys = []int{i}
}
if i == 0 {
break
}
}
}
}
if len(keys) > 0 {
index := keys[0]
return []int{index, index + len(GemojiData[found[index]].Emoji)}
}
return nil
return []int{secondWriteWriter.idx, secondWriteWriter.end}
}

View File

@ -8,6 +8,8 @@ package emoji
import (
"reflect"
"testing"
"github.com/stretchr/testify/assert"
)
func TestDumpInfo(t *testing.T) {
@ -65,3 +67,34 @@ func TestReplacers(t *testing.T) {
}
}
}
func TestFindEmojiSubmatchIndex(t *testing.T) {
type testcase struct {
teststring string
expected []int
}
testcases := []testcase{
{
"\U0001f44d",
[]int{0, len("\U0001f44d")},
},
{
"\U0001f44d +1 \U0001f44d \U0001f37a",
[]int{0, 4},
},
{
" \U0001f44d",
[]int{1, 1 + len("\U0001f44d")},
},
{
string([]byte{'\u0001'}) + "\U0001f44d",
[]int{1, 1 + len("\U0001f44d")},
},
}
for _, kase := range testcases {
actual := FindEmojiSubmatchIndex(kase.teststring)
assert.Equal(t, kase.expected, actual)
}
}

View File

@ -47,7 +47,7 @@ func GetRawDiffForFile(repoPath, startCommit, endCommit string, diffType RawDiff
func GetRepoRawDiffForFile(repo *Repository, startCommit, endCommit string, diffType RawDiffType, file string, writer io.Writer) error {
commit, err := repo.GetCommit(endCommit)
if err != nil {
return fmt.Errorf("GetCommit: %v", err)
return err
}
fileArgs := make([]string, 0)
if len(file) > 0 {

View File

@ -44,7 +44,7 @@ func (repo *Repository) GetLanguageStats(commitID string) (map[string]int64, err
sizes := make(map[string]int64)
err = tree.Files().ForEach(func(f *object.File) error {
if f.Size == 0 || enry.IsVendor(f.Name) || enry.IsDotFile(f.Name) ||
if f.Size == 0 || analyze.IsVendor(f.Name) || enry.IsDotFile(f.Name) ||
enry.IsDocumentation(f.Name) || enry.IsConfiguration(f.Name) {
return nil
}

View File

@ -175,7 +175,7 @@ func NewBleveIndexer(indexDir string) (*BleveIndexer, bool, error) {
func (b *BleveIndexer) addUpdate(commitSha string, update fileUpdate, repo *models.Repository, batch rupture.FlushingBatch) error {
// Ignore vendored files in code search
if setting.Indexer.ExcludeVendored && enry.IsVendor(update.Filename) {
if setting.Indexer.ExcludeVendored && analyze.IsVendor(update.Filename) {
return nil
}

View File

@ -170,7 +170,7 @@ func (b *ElasticSearchIndexer) init() (bool, error) {
func (b *ElasticSearchIndexer) addUpdate(sha string, update fileUpdate, repo *models.Repository) ([]elastic.BulkableRequest, error) {
// Ignore vendored files in code search
if setting.Indexer.ExcludeVendored && enry.IsVendor(update.Filename) {
if setting.Indexer.ExcludeVendored && analyze.IsVendor(update.Filename) {
return nil, nil
}

View File

@ -9,6 +9,7 @@ import (
"encoding/hex"
"errors"
"fmt"
"hash"
"io"
"os"
@ -66,15 +67,20 @@ func (s *ContentStore) Get(meta *models.LFSMetaObject, fromByte int64) (io.ReadC
// Put takes a Meta object and an io.Reader and writes the content to the store.
func (s *ContentStore) Put(meta *models.LFSMetaObject, r io.Reader) error {
hash := sha256.New()
rd := io.TeeReader(r, hash)
p := meta.RelativePath()
written, err := s.Save(p, rd)
// Wrap the provided reader with an inline hashing and size checker
wrappedRd := newHashingReader(meta.Size, meta.Oid, r)
// now pass the wrapped reader to Save - if there is a size mismatch or hash mismatch then
// the errors returned by the newHashingReader should percolate up to here
written, err := s.Save(p, wrappedRd, meta.Size)
if err != nil {
log.Error("Whilst putting LFS OID[%s]: Failed to copy to tmpPath: %s Error: %v", meta.Oid, p, err)
return err
}
// This shouldn't happen but it is sensible to test
if written != meta.Size {
if err := s.Delete(p); err != nil {
log.Error("Cleaning the LFS OID[%s] failed: %v", meta.Oid, err)
@ -82,14 +88,6 @@ func (s *ContentStore) Put(meta *models.LFSMetaObject, r io.Reader) error {
return errSizeMismatch
}
shaStr := hex.EncodeToString(hash.Sum(nil))
if shaStr != meta.Oid {
if err := s.Delete(p); err != nil {
log.Error("Cleaning the LFS OID[%s] failed: %v", meta.Oid, err)
}
return errHashMismatch
}
return nil
}
@ -118,3 +116,45 @@ func (s *ContentStore) Verify(meta *models.LFSMetaObject) (bool, error) {
return true, nil
}
type hashingReader struct {
internal io.Reader
currentSize int64
expectedSize int64
hash hash.Hash
expectedHash string
}
func (r *hashingReader) Read(b []byte) (int, error) {
n, err := r.internal.Read(b)
if n > 0 {
r.currentSize += int64(n)
wn, werr := r.hash.Write(b[:n])
if wn != n || werr != nil {
return n, werr
}
}
if err != nil && err == io.EOF {
if r.currentSize != r.expectedSize {
return n, errSizeMismatch
}
shaStr := hex.EncodeToString(r.hash.Sum(nil))
if shaStr != r.expectedHash {
return n, errHashMismatch
}
}
return n, err
}
func newHashingReader(expectedSize int64, expectedHash string, reader io.Reader) *hashingReader {
return &hashingReader{
internal: reader,
expectedSize: expectedSize,
expectedHash: expectedHash,
hash: sha256.New(),
}
}
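A usage sketch follows, assuming the definitions above and the package's existing size/hash mismatch errors; the helper name and the io/ioutil import are illustrative only. Draining the wrapped reader is enough to surface a short or corrupted upload as an error instead of a silent io.EOF:

// drainAndValidate shows the intended use of the wrapper: the copy returns
// errSizeMismatch or errHashMismatch on bad data, so Put never stores the object.
func drainAndValidate(expectedSize int64, expectedOid string, upload io.Reader) error {
    r := newHashingReader(expectedSize, expectedOid, upload)
    _, err := io.Copy(ioutil.Discard, r)
    return err
}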

View File

@ -298,19 +298,27 @@ func RenderEmoji(
return ctx.postProcess(rawHTML)
}
var tagCleaner = regexp.MustCompile(`<((?:/?\w+/\w+)|(?:/[\w ]+/)|(/?[hH][tT][mM][lL]\b)|(/?[hH][eE][aA][dD]\b))`)
var nulCleaner = strings.NewReplacer("\000", "")
func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) {
if ctx.procs == nil {
ctx.procs = defaultProcessors
}
// give a generous extra 50 bytes
res := make([]byte, 0, len(rawHTML)+50)
res = append(res, "<html><body>"...)
res = append(res, rawHTML...)
res = append(res, "</body></html>"...)
res := bytes.NewBuffer(make([]byte, 0, len(rawHTML)+50))
// prepend "<html><body>"
_, _ = res.WriteString("<html><body>")
// Strip out nuls - they're always invalid
_, _ = res.Write(tagCleaner.ReplaceAll([]byte(nulCleaner.Replace(string(rawHTML))), []byte("&lt;$1")))
// close the tags
_, _ = res.WriteString("</body></html>")
// parse the HTML
nodes, err := html.ParseFragment(bytes.NewReader(res), nil)
nodes, err := html.ParseFragment(res, nil)
if err != nil {
return nil, &postProcessError{"invalid HTML", err}
}
@ -347,17 +355,17 @@ func (ctx *postProcessCtx) postProcess(rawHTML []byte) ([]byte, error) {
// Create buffer in which the data will be placed again. We know that the
// length will be at least that of res; to spare a few alloc+copy, we
// reuse res, resetting its length to 0.
buf := bytes.NewBuffer(res[:0])
res.Reset()
// Render everything to buf.
for _, node := range nodes {
err = html.Render(buf, node)
err = html.Render(res, node)
if err != nil {
return nil, &postProcessError{"error rendering processed HTML", err}
}
}
// Everything done successfully, return parsed data.
return buf.Bytes(), nil
return res.Bytes(), nil
}
func (ctx *postProcessCtx) visitNode(node *html.Node, visitText bool) {

View File

@ -10,6 +10,7 @@ import (
"regexp"
"strings"
"code.gitea.io/gitea/modules/log"
"code.gitea.io/gitea/modules/markup"
"code.gitea.io/gitea/modules/markup/common"
"code.gitea.io/gitea/modules/setting"
@ -76,6 +77,12 @@ func (g *ASTTransformer) Transform(node *ast.Document, reader text.Reader, pc pa
header.ID = util.BytesToReadOnlyString(id.([]byte))
}
toc = append(toc, header)
} else {
for _, attr := range v.Attributes() {
if _, ok := attr.Value.([]byte); !ok {
v.SetAttribute(attr.Name, []byte(fmt.Sprintf("%v", attr.Value)))
}
}
}
case *ast.Image:
// Images need two things:
@ -101,11 +108,41 @@ func (g *ASTTransformer) Transform(node *ast.Document, reader text.Reader, pc pa
parent := n.Parent()
// Create a link around image only if parent is not already a link
if _, ok := parent.(*ast.Link); !ok && parent != nil {
next := n.NextSibling()
// Create a link wrapper
wrap := ast.NewLink()
wrap.Destination = link
wrap.Title = v.Title
// Duplicate the current image node
image := ast.NewImage(ast.NewLink())
image.Destination = link
image.Title = v.Title
for _, attr := range v.Attributes() {
image.SetAttribute(attr.Name, attr.Value)
}
for child := v.FirstChild(); child != nil; {
next := child.NextSibling()
image.AppendChild(image, child)
child = next
}
// Append our duplicate image to the wrapper link
wrap.AppendChild(wrap, image)
// Wire in the next sibling
wrap.SetNextSibling(next)
// Replace the current node with the wrapper link
parent.ReplaceChild(parent, n, wrap)
wrap.AppendChild(wrap, n)
// But most importantly ensure the next sibling is still on the old image too
v.SetNextSibling(next)
} else {
log.Debug("ast.Image: %s has parent: %v", link, parent)
}
case *ast.Link:
// Links need their href to munged to be a real value

Some files were not shown because too many files have changed in this diff.