Compare commits

...

18 Commits

Author SHA1 Message Date
Jason Song
517d11c671 Reduce log noise (#108)
We cannot guarantee that all noisy logs can be removed at once.

Comment them out instead of removing them, to make it easier to merge upstream.

What has been removed in this PR are the very, very long and almost unreadable logs, such as:

<img width="839" alt="image" src="/attachments/b59e1dcc-4edd-4f81-b939-83dcc45f2ed2">

Reviewed-on: https://gitea.com/gitea/act/pulls/108
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-04-10 06:55:46 +00:00
Jason Song
e1b1e81124 Revert "Pass 'sleep' as container command rather than entrypoint (#86)" (#107)
This reverts #86.

Some images use a custom entrypoint for a specific purpose, so `[entrypoint] [cmd]` like `helm /bin/sleep 1` will fail.

That is what broke https://gitea.com/gitea/helm-chart/actions/runs/755, since the image there is `alpine/helm`:

```yaml
  check-and-test:
    runs-on: ubuntu-latest
    container: alpine/helm:3.14.3
```
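
A minimal Go sketch of the mechanism, using the Docker SDK's `container.Config` (the image tag comes from the job above; the sleep duration is illustrative): Docker runs `<entrypoint> <cmd>`, so keeping the image's entrypoint and passing the keep-alive sleep as the command turns into `helm /bin/sleep …` for `alpine/helm`, while overriding the entrypoint sleeps regardless of what the image defines.

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/container"
)

func main() {
	// PR #86 behaviour: keep the image's entrypoint and pass sleep as the command.
	// For alpine/helm (entrypoint "helm") Docker executes `helm /bin/sleep 3600`,
	// which is not a valid helm invocation and fails.
	withCmd := &container.Config{
		Image: "alpine/helm:3.14.3",
		Cmd:   []string{"/bin/sleep", "3600"},
	}

	// Behaviour restored by this revert: override the entrypoint itself, so the
	// container simply sleeps no matter what entrypoint the image defines.
	withEntrypoint := &container.Config{
		Image:      "alpine/helm:3.14.3",
		Entrypoint: []string{"/bin/sleep", "3600"},
	}

	fmt.Println(withCmd.Cmd, withEntrypoint.Entrypoint)
}
```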

Reviewed-on: https://gitea.com/gitea/act/pulls/107
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-04-10 06:53:28 +00:00
Zettat123
64876e3696 Interpolate job name with matrix (#106)
Fix https://github.com/go-gitea/gitea/issues/28207

Reviewed-on: https://gitea.com/gitea/act/pulls/106
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-04-07 03:34:53 +00:00
Jason Song
3fa1dba92b Merge tag 'nektos/v0.2.61' 2024-04-01 14:23:16 +08:00
GitHub Actions
361b7e9f1a chore: bump VERSION to 0.2.61 2024-04-01 02:16:09 +00:00
Zettat123
9725f60394 Support reusing workflows with absolute URLs (#104)
Resolve https://gitea.com/gitea/act_runner/issues/507
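
As a rough illustration of the feature (the regular expression is the one this PR adds in `newRemoteReusableWorkflowFromAbsoluteURL`, shown in the `reusable_workflow.go` diff below; the example URL is hypothetical), an absolute `uses:` reference is split into URL, owner, repo, git platform, filename and ref:

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Same pattern as in the diff below.
	r := regexp.MustCompile(`^(https?://.*)/([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)

	// Hypothetical absolute reusable-workflow reference.
	uses := "https://gitea.example.com/acme/shared-ci/.gitea/workflows/build.yml@main"

	m := r.FindStringSubmatch(uses)
	if len(m) != 7 {
		fmt.Println("not an absolute reusable-workflow reference")
		return
	}
	fmt.Printf("url=%s org=%s repo=%s platform=%s file=%s ref=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
	// url=https://gitea.example.com org=acme repo=shared-ci platform=gitea file=build.yml ref=main
}
```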

Reviewed-on: https://gitea.com/gitea/act/pulls/104
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-03-29 06:15:28 +00:00
ChristopherHX
f825e42ce2 fix: cache adjust restore order of exact key matches (#2267)
* wip: adjust restore order

* fixup

* add tests

* cleanup

* fix typo

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2024-03-29 02:07:20 +00:00
Jason Collins
d9a19c8b02 Trivial: reduce log spam. (#2256)
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
2024-03-28 23:28:48 +00:00
James Kang
3949d74af5 chore: remove repetitive words (#2259)
Signed-off-by: majorteach <csgcgl@126.com>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
2024-03-28 23:14:53 +00:00
Jason Song
b9382a2c4e Support overwriting caches (#2265)
* feat: support overwrite caches

* test: fix case

* test: fix get_with_multiple_keys

* chore: use atomic.Bool

* test: improve get_with_multiple_keys

* chore: use ping to improve path

* fix: wrong CompareAndSwap

* test: TestHandler_gcCache

* chore: lint code

* chore: lint code
2024-03-28 16:42:02 +00:00
Jason Song
f56dd65ff6 test: use ping to improve network test (#2266) 2024-03-28 11:56:26 +00:00
Thomas E Lackey
a79d81989f Pass 'sleep' as container command rather than entrypoint (#86)
The current code overrides the container's entrypoint with `sleep`.  Unfortunately, that prevents initialization scripts, such as those that set up Docker-in-Docker, from running.

The change simply moves the `sleep` from the entrypoint directive to the command directive.

For most containers of this sort, the entrypoint script performs initialization, and then ends with `$@` to execute whatever command is passed.

If the container has no entrypoint, the command is executed directly.  As a result, this should be a transparent change for most use cases, while allowing the container's entrypoint to be used when present.
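
A hedged sketch of what the change looks like on the act side, assuming the job container is described via `NewContainerInput` (its `Cmd` field appears in the container diffs below; the `Entrypoint` field name, the image, and the sleep duration are assumptions for illustration only):

```go
package main

import "github.com/nektos/act/pkg/container"

func main() {
	// With this PR the keep-alive sleep is passed as the command, so an
	// init-style entrypoint can run first and finally exec "$@".
	input := &container.NewContainerInput{
		Image: "example/dind-init:latest", // hypothetical image with an init entrypoint
		Cmd:   []string{"/bin/sleep", "3600"},
		// Previously the same value was set as the entrypoint instead:
		// Entrypoint: []string{"/bin/sleep", "3600"},
	}
	_ = input
}
```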

Reviewed-on: https://gitea.com/gitea/act/pulls/86
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-03-27 10:17:48 +00:00
dependabot[bot]
069720abff build(deps): bump github.com/docker/docker (#2252)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 24.0.7+incompatible to 24.0.9+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v24.0.7...v24.0.9)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 17:37:01 +00:00
dependabot[bot]
8c83d57212 build(deps): bump golang.org/x/term from 0.17.0 to 0.18.0 (#2244)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.17.0 to 0.18.0.
- [Commits](https://github.com/golang/term/compare/v0.17.0...v0.18.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-11 02:28:21 +00:00
ChristopherHX
119ceb81d9 fix: rootless permission bits (new actions cache) (#2242)
* fix: rootless permission bits (new actions cache)

* add test

* fix lint / more tests
2024-03-08 01:25:03 +00:00
huajin tong
352ad41ad2 fix function name in comment (#2240)
Signed-off-by: thirdkeyword <fliterdashen@gmail.com>
2024-03-06 14:20:06 +00:00
ChristopherHX
75e4ad93f4 fix: docker buildx cache restore not working (#2236)
* To take effect, the artifacts v4 PR with adjusted claims is needed
2024-03-05 06:04:54 +00:00
dependabot[bot]
934b13a7a1 build(deps): bump github.com/stretchr/testify from 1.8.4 to 1.9.0 (#2235)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.8.4 to 1.9.0.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.8.4...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-04 03:08:44 +00:00
23 changed files with 645 additions and 174 deletions

View File

@@ -26,7 +26,7 @@
## Images based on [`actions/virtual-environments`][gh/actions/virtual-environments]
**Note: `nektos/act-environments-ubuntu` have been last updated in February, 2020. It's recommended to update the image manually after `docker pull` if you decide to to use it.**
**Note: `nektos/act-environments-ubuntu` have been last updated in February, 2020. It's recommended to update the image manually after `docker pull` if you decide to use it.**
| Image | Size | GitHub Repository |
| --------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | ------------------------------------------------------- |

View File

@@ -1 +1 @@
0.2.60
0.2.61

10
go.mod
View File

@@ -10,7 +10,7 @@ require (
github.com/creack/pty v1.1.21
github.com/docker/cli v24.0.7+incompatible
github.com/docker/distribution v2.8.3+incompatible
github.com/docker/docker v24.0.7+incompatible // 24.0 branch
github.com/docker/docker v24.0.9+incompatible // 24.0 branch
github.com/docker/go-connections v0.4.0
github.com/go-git/go-billy/v5 v5.5.0
github.com/go-git/go-git/v5 v5.11.0
@@ -30,10 +30,10 @@ require (
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.8.0
github.com/spf13/pflag v1.0.5
github.com/stretchr/testify v1.8.4
github.com/stretchr/testify v1.9.0
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a
go.etcd.io/bbolt v1.3.9
golang.org/x/term v0.17.0
golang.org/x/term v0.18.0
gopkg.in/yaml.v3 v3.0.1
gotest.tools/v3 v3.5.1
)
@@ -74,7 +74,7 @@ require (
github.com/robfig/cron/v3 v3.0.1 // indirect
github.com/sergi/go-diff v1.2.0 // indirect
github.com/skeema/knownhosts v1.2.1 // indirect
github.com/stretchr/objx v0.5.0 // indirect
github.com/stretchr/objx v0.5.2 // indirect
github.com/xanzy/ssh-agent v0.3.3 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
@@ -83,7 +83,7 @@ require (
golang.org/x/mod v0.12.0 // indirect
golang.org/x/net v0.19.0 // indirect
golang.org/x/sync v0.6.0 // indirect
golang.org/x/sys v0.17.0 // indirect
golang.org/x/sys v0.18.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/tools v0.13.0 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect

23
go.sum
View File

@@ -42,8 +42,8 @@ github.com/docker/cli v24.0.7+incompatible h1:wa/nIwYFW7BVTGa7SWPVyyXU9lgORqUb1x
github.com/docker/cli v24.0.7+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
github.com/docker/docker v24.0.7+incompatible h1:Wo6l37AuwP3JaMnZa226lzVXGA3F9Ig1seQen0cKYlM=
github.com/docker/docker v24.0.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker v24.0.9+incompatible h1:HPGzNmwfLZWdxHqK9/II92pyi1EpYKsAqcl4G0Of9v0=
github.com/docker/docker v24.0.9+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.7.0 h1:xtCHsjxogADNZcdv1pKUHXryefjlVRqWqIhk/uXJp0A=
github.com/docker/docker-credential-helpers v0.7.0/go.mod h1:rETQfLdHNT3foU5kuNkFR1R1V12OJRRO5lzt2D1b5X0=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
@@ -165,18 +165,15 @@ github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a h1:oIi7H/bwFUYKYhzKbHc+3MvHRWqhQwXVB4LweLMiVy0=
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a/go.mod h1:iSvujNDmpZ6eQX+bg/0X3lF7LEmZ8N77g2a/J/+Zt2U=
github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM=
@@ -250,15 +247,15 @@ golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y=
golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4=
golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
golang.org/x/term v0.17.0 h1:mkTF7LCd6WGJNL3K1Ad7kwxNfYAW6a8a8QqtMblp/4U=
golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8=
golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=

View File

@@ -35,7 +35,7 @@ type Handler struct {
server *http.Server
logger logrus.FieldLogger
gcing int32 // TODO: use atomic.Bool when we can use Go 1.19
gcing atomic.Bool
gcAt time.Time
outboundIP string
@@ -170,7 +170,7 @@ func (h *Handler) find(w http.ResponseWriter, r *http.Request, _ httprouter.Para
}
defer db.Close()
cache, err := h.findCache(db, keys, version)
cache, err := findCache(db, keys, version)
if err != nil {
h.responseJSON(w, r, 500, err)
return
@@ -206,32 +206,17 @@ func (h *Handler) reserve(w http.ResponseWriter, r *http.Request, _ httprouter.P
api.Key = strings.ToLower(api.Key)
cache := api.ToCache()
cache.FillKeyVersionHash()
db, err := h.openDB()
if err != nil {
h.responseJSON(w, r, 500, err)
return
}
defer db.Close()
if err := db.FindOne(cache, bolthold.Where("KeyVersionHash").Eq(cache.KeyVersionHash)); err != nil {
if !errors.Is(err, bolthold.ErrNotFound) {
h.responseJSON(w, r, 500, err)
return
}
} else {
h.responseJSON(w, r, 400, fmt.Errorf("already exist"))
return
}
now := time.Now().Unix()
cache.CreatedAt = now
cache.UsedAt = now
if err := db.Insert(bolthold.NextSequence(), cache); err != nil {
h.responseJSON(w, r, 500, err)
return
}
// write back id to db
if err := db.Update(cache.ID, cache); err != nil {
if err := insertCache(db, cache); err != nil {
h.responseJSON(w, r, 500, err)
return
}
@@ -364,56 +349,51 @@ func (h *Handler) middleware(handler httprouter.Handle) httprouter.Handle {
}
// if not found, return (nil, nil) instead of an error.
func (h *Handler) findCache(db *bolthold.Store, keys []string, version string) (*Cache, error) {
if len(keys) == 0 {
return nil, nil
}
key := keys[0] // the first key is for exact match.
cache := &Cache{
Key: key,
Version: version,
}
cache.FillKeyVersionHash()
if err := db.FindOne(cache, bolthold.Where("KeyVersionHash").Eq(cache.KeyVersionHash)); err != nil {
if !errors.Is(err, bolthold.ErrNotFound) {
return nil, err
func findCache(db *bolthold.Store, keys []string, version string) (*Cache, error) {
cache := &Cache{}
for _, prefix := range keys {
// if a key in the list matches exactly, don't return partial matches
if err := db.FindOne(cache,
bolthold.Where("Key").Eq(prefix).
And("Version").Eq(version).
And("Complete").Eq(true).
SortBy("CreatedAt").Reverse()); err == nil || !errors.Is(err, bolthold.ErrNotFound) {
if err != nil {
return nil, fmt.Errorf("find cache: %w", err)
}
return cache, nil
}
} else if cache.Complete {
return cache, nil
}
stop := fmt.Errorf("stop")
for _, prefix := range keys[1:] {
found := false
prefixPattern := fmt.Sprintf("^%s", regexp.QuoteMeta(prefix))
re, err := regexp.Compile(prefixPattern)
if err != nil {
continue
}
if err := db.ForEach(bolthold.Where("Key").RegExp(re).And("Version").Eq(version).SortBy("CreatedAt").Reverse(), func(v *Cache) error {
if !strings.HasPrefix(v.Key, prefix) {
return stop
}
if v.Complete {
cache = v
found = true
return stop
}
return nil
}); err != nil {
if !errors.Is(err, stop) {
return nil, err
if err := db.FindOne(cache,
bolthold.Where("Key").RegExp(re).
And("Version").Eq(version).
And("Complete").Eq(true).
SortBy("CreatedAt").Reverse()); err != nil {
if errors.Is(err, bolthold.ErrNotFound) {
continue
}
return nil, fmt.Errorf("find cache: %w", err)
}
if found {
return cache, nil
}
return cache, nil
}
return nil, nil
}
func insertCache(db *bolthold.Store, cache *Cache) error {
if err := db.Insert(bolthold.NextSequence(), cache); err != nil {
return fmt.Errorf("insert cache: %w", err)
}
// write back id to db
if err := db.Update(cache.ID, cache); err != nil {
return fmt.Errorf("write back id to db: %w", err)
}
return nil
}
func (h *Handler) useCache(id int64) {
db, err := h.openDB()
if err != nil {
@@ -428,14 +408,21 @@ func (h *Handler) useCache(id int64) {
_ = db.Update(cache.ID, cache)
}
const (
keepUsed = 30 * 24 * time.Hour
keepUnused = 7 * 24 * time.Hour
keepTemp = 5 * time.Minute
keepOld = 5 * time.Minute
)
func (h *Handler) gcCache() {
if atomic.LoadInt32(&h.gcing) != 0 {
if h.gcing.Load() {
return
}
if !atomic.CompareAndSwapInt32(&h.gcing, 0, 1) {
if !h.gcing.CompareAndSwap(false, true) {
return
}
defer atomic.StoreInt32(&h.gcing, 0)
defer h.gcing.Store(false)
if time.Since(h.gcAt) < time.Hour {
h.logger.Debugf("skip gc: %v", h.gcAt.String())
@@ -444,37 +431,18 @@ func (h *Handler) gcCache() {
h.gcAt = time.Now()
h.logger.Debugf("gc: %v", h.gcAt.String())
const (
keepUsed = 30 * 24 * time.Hour
keepUnused = 7 * 24 * time.Hour
keepTemp = 5 * time.Minute
)
db, err := h.openDB()
if err != nil {
return
}
defer db.Close()
// Remove the caches which are not completed for a while, they are most likely to be broken.
var caches []*Cache
if err := db.Find(&caches, bolthold.Where("UsedAt").Lt(time.Now().Add(-keepTemp).Unix())); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
if cache.Complete {
continue
}
h.storage.Remove(cache.ID)
if err := db.Delete(cache.ID, cache); err != nil {
h.logger.Warnf("delete cache: %v", err)
continue
}
h.logger.Infof("deleted cache: %+v", cache)
}
}
caches = caches[:0]
if err := db.Find(&caches, bolthold.Where("UsedAt").Lt(time.Now().Add(-keepUnused).Unix())); err != nil {
if err := db.Find(&caches, bolthold.
Where("UsedAt").Lt(time.Now().Add(-keepTemp).Unix()).
And("Complete").Eq(false),
); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
@@ -487,8 +455,11 @@ func (h *Handler) gcCache() {
}
}
// Remove the old caches which have not been used recently.
caches = caches[:0]
if err := db.Find(&caches, bolthold.Where("CreatedAt").Lt(time.Now().Add(-keepUsed).Unix())); err != nil {
if err := db.Find(&caches, bolthold.
Where("UsedAt").Lt(time.Now().Add(-keepUnused).Unix()),
); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
@@ -500,6 +471,55 @@ func (h *Handler) gcCache() {
h.logger.Infof("deleted cache: %+v", cache)
}
}
// Remove the old caches which are too old.
caches = caches[:0]
if err := db.Find(&caches, bolthold.
Where("CreatedAt").Lt(time.Now().Add(-keepUsed).Unix()),
); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
h.storage.Remove(cache.ID)
if err := db.Delete(cache.ID, cache); err != nil {
h.logger.Warnf("delete cache: %v", err)
continue
}
h.logger.Infof("deleted cache: %+v", cache)
}
}
// Remove the old caches with the same key and version, keep the latest one.
// Also keep the olds which have been used recently for a while in case of the cache is still in use.
if results, err := db.FindAggregate(
&Cache{},
bolthold.Where("Complete").Eq(true),
"Key", "Version",
); err != nil {
h.logger.Warnf("find aggregate caches: %v", err)
} else {
for _, result := range results {
if result.Count() <= 1 {
continue
}
result.Sort("CreatedAt")
caches = caches[:0]
result.Reduction(&caches)
for _, cache := range caches[:len(caches)-1] {
if time.Since(time.Unix(cache.UsedAt, 0)) < keepOld {
// Keep it since it has been used recently, even if it's old.
// Or it could break downloading in process.
continue
}
h.storage.Remove(cache.ID)
if err := db.Delete(cache.ID, cache); err != nil {
h.logger.Warnf("delete cache: %v", err)
continue
}
h.logger.Infof("deleted cache: %+v", cache)
}
}
}
}
func (h *Handler) responseJSON(w http.ResponseWriter, r *http.Request, code int, v ...any) {

View File

@@ -10,9 +10,11 @@ import (
"path/filepath"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/timshannon/bolthold"
"go.etcd.io/bbolt"
)
@@ -78,6 +80,9 @@ func TestHandler(t *testing.T) {
t.Run("duplicate reserve", func(t *testing.T) {
key := strings.ToLower(t.Name())
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
var first, second struct {
CacheID uint64 `json:"cacheId"`
}
{
body, err := json.Marshal(&Request{
Key: key,
@@ -89,10 +94,8 @@ func TestHandler(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
got := struct {
CacheID uint64 `json:"cacheId"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
require.NoError(t, json.NewDecoder(resp.Body).Decode(&first))
assert.NotZero(t, first.CacheID)
}
{
body, err := json.Marshal(&Request{
@@ -103,8 +106,13 @@ func TestHandler(t *testing.T) {
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
assert.Equal(t, 200, resp.StatusCode)
require.NoError(t, json.NewDecoder(resp.Body).Decode(&second))
assert.NotZero(t, second.CacheID)
}
assert.NotEqual(t, first.CacheID, second.CacheID)
})
t.Run("upload with bad id", func(t *testing.T) {
@@ -341,9 +349,9 @@ func TestHandler(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
keys := [3]string{
key + "_a",
key + "_a_b",
key + "_a_b_c",
key + "_a_b",
key + "_a",
}
contents := [3][]byte{
make([]byte, 100),
@@ -354,6 +362,7 @@ func TestHandler(t *testing.T) {
_, err := rand.Read(contents[i])
require.NoError(t, err)
uploadCacheNormally(t, base, keys[i], version, contents[i])
time.Sleep(time.Second) // ensure CreatedAt of caches are different
}
reqKeys := strings.Join([]string{
@@ -361,29 +370,33 @@ func TestHandler(t *testing.T) {
key + "_a_b",
key + "_a",
}, ",")
var archiveLocation string
{
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
got := struct {
Result string `json:"result"`
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, "hit", got.Result)
assert.Equal(t, keys[1], got.CacheKey)
archiveLocation = got.ArchiveLocation
}
{
resp, err := http.Get(archiveLocation) //nolint:gosec
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
got, err := io.ReadAll(resp.Body)
require.NoError(t, err)
assert.Equal(t, contents[1], got)
}
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
/*
Expect `key_a_b` because:
- `key_a_b_x" doesn't match any caches.
- `key_a_b" matches `key_a_b` and `key_a_b_c`, but `key_a_b` is newer.
*/
except := 1
got := struct {
Result string `json:"result"`
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, "hit", got.Result)
assert.Equal(t, keys[except], got.CacheKey)
contentResp, err := http.Get(got.ArchiveLocation)
require.NoError(t, err)
require.Equal(t, 200, contentResp.StatusCode)
content, err := io.ReadAll(contentResp.Body)
require.NoError(t, err)
assert.Equal(t, contents[except], content)
})
t.Run("case insensitive", func(t *testing.T) {
@@ -409,6 +422,110 @@ func TestHandler(t *testing.T) {
assert.Equal(t, key+"_abc", got.CacheKey)
}
})
t.Run("exact keys are preferred (key 0)", func(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
keys := [3]string{
key + "_a",
key + "_a_b_c",
key + "_a_b",
}
contents := [3][]byte{
make([]byte, 100),
make([]byte, 200),
make([]byte, 300),
}
for i := range contents {
_, err := rand.Read(contents[i])
require.NoError(t, err)
uploadCacheNormally(t, base, keys[i], version, contents[i])
time.Sleep(time.Second) // ensure CreatedAt of caches are different
}
reqKeys := strings.Join([]string{
key + "_a",
key + "_a_b",
}, ",")
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
/*
Expect `key_a` because:
- `key_a` matches `key_a`, `key_a_b` and `key_a_b_c`, but `key_a` is an exact match.
- `key_a_b` matches `key_a_b` and `key_a_b_c`, but previous key had a match
*/
expect := 0
got := struct {
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, keys[expect], got.CacheKey)
contentResp, err := http.Get(got.ArchiveLocation)
require.NoError(t, err)
require.Equal(t, 200, contentResp.StatusCode)
content, err := io.ReadAll(contentResp.Body)
require.NoError(t, err)
assert.Equal(t, contents[expect], content)
})
t.Run("exact keys are preferred (key 1)", func(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
keys := [3]string{
key + "_a",
key + "_a_b_c",
key + "_a_b",
}
contents := [3][]byte{
make([]byte, 100),
make([]byte, 200),
make([]byte, 300),
}
for i := range contents {
_, err := rand.Read(contents[i])
require.NoError(t, err)
uploadCacheNormally(t, base, keys[i], version, contents[i])
time.Sleep(time.Second) // ensure CreatedAt of caches are different
}
reqKeys := strings.Join([]string{
"------------------------------------------------------",
key + "_a",
key + "_a_b",
}, ",")
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
/*
Expect `key_a` because:
- `------------------------------------------------------` doesn't match any caches.
- `key_a` matches `key_a`, `key_a_b` and `key_a_b_c`, but `key_a` is an exact match.
- `key_a_b` matches `key_a_b` and `key_a_b_c`, but previous key had a match
*/
expect := 0
got := struct {
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, keys[expect], got.CacheKey)
contentResp, err := http.Get(got.ArchiveLocation)
require.NoError(t, err)
require.Equal(t, 200, contentResp.StatusCode)
content, err := io.ReadAll(contentResp.Body)
require.NoError(t, err)
assert.Equal(t, contents[expect], content)
})
}
func uploadCacheNormally(t *testing.T, base, key, version string, content []byte) {
@@ -469,3 +586,112 @@ func uploadCacheNormally(t *testing.T, base, key, version string, content []byte
assert.Equal(t, content, got)
}
}
func TestHandler_gcCache(t *testing.T) {
dir := filepath.Join(t.TempDir(), "artifactcache")
handler, err := StartHandler(dir, "", 0, nil)
require.NoError(t, err)
defer func() {
require.NoError(t, handler.Close())
}()
now := time.Now()
cases := []struct {
Cache *Cache
Kept bool
}{
{
// should be kept, since it's used recently and not too old.
Cache: &Cache{
Key: "test_key_1",
Version: "test_version",
Complete: true,
UsedAt: now.Unix(),
CreatedAt: now.Add(-time.Hour).Unix(),
},
Kept: true,
},
{
// should be removed, since it's not complete and not used for a while.
Cache: &Cache{
Key: "test_key_2",
Version: "test_version",
Complete: false,
UsedAt: now.Add(-(keepTemp + time.Second)).Unix(),
CreatedAt: now.Add(-(keepTemp + time.Hour)).Unix(),
},
Kept: false,
},
{
// should be removed, since it's not used for a while.
Cache: &Cache{
Key: "test_key_3",
Version: "test_version",
Complete: true,
UsedAt: now.Add(-(keepUnused + time.Second)).Unix(),
CreatedAt: now.Add(-(keepUnused + time.Hour)).Unix(),
},
Kept: false,
},
{
// should be removed, since it's used but too old.
Cache: &Cache{
Key: "test_key_3",
Version: "test_version",
Complete: true,
UsedAt: now.Unix(),
CreatedAt: now.Add(-(keepUsed + time.Second)).Unix(),
},
Kept: false,
},
{
// should be kept, since it has a newer edition but be used recently.
Cache: &Cache{
Key: "test_key_1",
Version: "test_version",
Complete: true,
UsedAt: now.Add(-(keepOld - time.Minute)).Unix(),
CreatedAt: now.Add(-(time.Hour + time.Second)).Unix(),
},
Kept: true,
},
{
// should be removed, since it has a newer edition and not be used recently.
Cache: &Cache{
Key: "test_key_1",
Version: "test_version",
Complete: true,
UsedAt: now.Add(-(keepOld + time.Second)).Unix(),
CreatedAt: now.Add(-(time.Hour + time.Second)).Unix(),
},
Kept: false,
},
}
db, err := handler.openDB()
require.NoError(t, err)
for _, c := range cases {
require.NoError(t, insertCache(db, c.Cache))
}
require.NoError(t, db.Close())
handler.gcAt = time.Time{} // ensure gcCache will not skip
handler.gcCache()
db, err = handler.openDB()
require.NoError(t, err)
for i, v := range cases {
t.Run(fmt.Sprintf("%d_%s", i, v.Cache.Key), func(t *testing.T) {
cache := &Cache{}
err = db.Get(v.Cache.ID, cache)
if v.Kept {
assert.NoError(t, err)
} else {
assert.ErrorIs(t, err, bolthold.ErrNotFound)
}
})
}
require.NoError(t, db.Close())
}

View File

@@ -1,10 +1,5 @@
package artifactcache
import (
"crypto/sha256"
"fmt"
)
type Request struct {
Key string `json:"key" `
Version string `json:"version"`
@@ -29,16 +24,11 @@ func (c *Request) ToCache() *Cache {
}
type Cache struct {
ID uint64 `json:"id" boltholdKey:"ID"`
Key string `json:"key" boltholdIndex:"Key"`
Version string `json:"version" boltholdIndex:"Version"`
KeyVersionHash string `json:"keyVersionHash" boltholdUnique:"KeyVersionHash"`
Size int64 `json:"cacheSize"`
Complete bool `json:"complete"`
UsedAt int64 `json:"usedAt" boltholdIndex:"UsedAt"`
CreatedAt int64 `json:"createdAt" boltholdIndex:"CreatedAt"`
}
func (c *Cache) FillKeyVersionHash() {
c.KeyVersionHash = fmt.Sprintf("%x", sha256.Sum256([]byte(fmt.Sprintf("%s:%s", c.Key, c.Version))))
ID uint64 `json:"id" boltholdKey:"ID"`
Key string `json:"key" boltholdIndex:"Key"`
Version string `json:"version" boltholdIndex:"Version"`
Size int64 `json:"cacheSize"`
Complete bool `json:"complete" boltholdIndex:"Complete"`
UsedAt int64 `json:"usedAt" boltholdIndex:"UsedAt"`
CreatedAt int64 `json:"createdAt" boltholdIndex:"CreatedAt"`
}

View File

@@ -97,7 +97,7 @@ func NewParallelExecutor(parallel int, executors ...Executor) Executor {
errs := make(chan error, len(executors))
if 1 > parallel {
log.Infof("Parallel tasks (%d) below minimum, setting to 1", parallel)
log.Debugf("Parallel tasks (%d) below minimum, setting to 1", parallel)
parallel = 1
}

View File

@@ -6,6 +6,7 @@ import (
"context"
"github.com/docker/docker/api/types"
"github.com/nektos/act/pkg/common"
)
@@ -22,7 +23,8 @@ func NewDockerNetworkCreateExecutor(name string) common.Executor {
if err != nil {
return err
}
common.Logger(ctx).Debugf("%v", networks)
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
common.Logger(ctx).Debugf("Network %v exists", name)
@@ -56,7 +58,8 @@ func NewDockerNetworkRemoveExecutor(name string) common.Executor {
if err != nil {
return err
}
common.Logger(ctx).Debugf("%v", networks)
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
result, err := cli.NetworkInspect(ctx, network.ID, types.NetworkInspectOptions{})

View File

@@ -445,7 +445,8 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
ExposedPorts: input.ExposedPorts,
Tty: isTerminal,
}
logger.Debugf("Common container.Config ==> %+v", config)
// For Gitea, reduce log noise
// logger.Debugf("Common container.Config ==> %+v", config)
if len(input.Cmd) != 0 {
config.Cmd = input.Cmd
@@ -489,7 +490,8 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
PortBindings: input.PortBindings,
AutoRemove: input.AutoRemove,
}
logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)
// For Gitea, reduce log noise
// logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)
config, hostConfig, err := cr.mergeContainerConfigs(ctx, config, hostConfig)
if err != nil {
@@ -500,7 +502,8 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
config, hostConfig = cr.sanitizeConfig(ctx, config, hostConfig)
var networkingConfig *network.NetworkingConfig
logger.Debugf("input.NetworkAliases ==> %v", input.NetworkAliases)
// For Gitea, reduce log noise
// logger.Debugf("input.NetworkAliases ==> %v", input.NetworkAliases)
n := hostConfig.NetworkMode
// IsUserDefined and IsHost are broken on windows
if n.IsUserDefined() && n != "host" && len(input.NetworkAliases) > 0 {
@@ -730,7 +733,7 @@ func (cr *containerReference) CopyTarStream(ctx context.Context, destPath string
tw := tar.NewWriter(buf)
_ = tw.WriteHeader(&tar.Header{
Name: destPath,
Mode: 777,
Mode: 0o777,
Typeflag: tar.TypeDir,
})
tw.Close()

View File

@@ -2,7 +2,9 @@ package container
import (
"bufio"
"bytes"
"context"
"fmt"
"io"
"net"
"strings"
@@ -79,6 +81,11 @@ func (m *mockDockerClient) ContainerExecInspect(ctx context.Context, execID stri
return args.Get(0).(types.ContainerExecInspect), args.Error(1)
}
func (m *mockDockerClient) CopyToContainer(ctx context.Context, id string, path string, content io.Reader, options types.CopyToContainerOptions) error {
args := m.Called(ctx, id, path, content, options)
return args.Error(0)
}
type endlessReader struct {
io.Reader
}
@@ -169,6 +176,78 @@ func TestDockerExecFailure(t *testing.T) {
client.AssertExpectations(t)
}
func TestDockerCopyTarStream(t *testing.T) {
ctx := context.Background()
conn := &mockConn{}
client := &mockDockerClient{}
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
client.On("CopyToContainer", ctx, "123", "/var/run/act", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
cr := &containerReference{
id: "123",
cli: client,
input: &NewContainerInput{
Image: "image",
},
}
_ = cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
conn.AssertExpectations(t)
client.AssertExpectations(t)
}
func TestDockerCopyTarStreamErrorInCopyFiles(t *testing.T) {
ctx := context.Background()
conn := &mockConn{}
merr := fmt.Errorf("Failure")
client := &mockDockerClient{}
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
cr := &containerReference{
id: "123",
cli: client,
input: &NewContainerInput{
Image: "image",
},
}
err := cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
assert.ErrorIs(t, err, merr)
conn.AssertExpectations(t)
client.AssertExpectations(t)
}
func TestDockerCopyTarStreamErrorInMkdir(t *testing.T) {
ctx := context.Background()
conn := &mockConn{}
merr := fmt.Errorf("Failure")
client := &mockDockerClient{}
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
client.On("CopyToContainer", ctx, "123", "/var/run/act", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
cr := &containerReference{
id: "123",
cli: client,
input: &NewContainerInput{
Image: "image",
},
}
err := cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
assert.ErrorIs(t, err, merr)
conn.AssertExpectations(t)
client.AssertExpectations(t)
}
// Type assert containerReference implements ExecutionsEnvironment
var _ ExecutionsEnvironment = &containerReference{}

View File

@@ -51,9 +51,9 @@ func Parse(content []byte, options ...ParseOption) ([]*SingleWorkflow, error) {
if job.Name == "" {
job.Name = id
}
job.Name = nameWithMatrix(job.Name, matrix)
job.Strategy.RawMatrix = encodeMatrix(matrix)
evaluator := NewExpressionEvaluator(NewInterpeter(id, origin.GetJob(id), matrix, pc.gitContext, results, pc.vars))
job.Name = nameWithMatrix(job.Name, matrix, evaluator)
runsOn := origin.GetJob(id).RunsOn()
for i, v := range runsOn {
runsOn[i] = evaluator.Interpolate(v)
@@ -134,12 +134,16 @@ func encodeRunsOn(runsOn []string) yaml.Node {
return node
}
func nameWithMatrix(name string, m map[string]interface{}) string {
func nameWithMatrix(name string, m map[string]interface{}, evaluator *ExpressionEvaluator) string {
if len(m) == 0 {
return name
}
return name + " " + matrixName(m)
if !strings.Contains(name, "${{") || !strings.Contains(name, "}}") {
return name + " " + matrixName(m)
}
return evaluator.Interpolate(name)
}
func matrixName(m map[string]interface{}) string {

View File

@@ -47,6 +47,11 @@ func TestParse(t *testing.T) {
options: nil,
wantErr: false,
},
{
name: "job_name_with_matrix",
options: nil,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {

View File

@@ -0,0 +1,14 @@
name: test
jobs:
job1:
strategy:
matrix:
os: [ubuntu-22.04, ubuntu-20.04]
version: [1.17, 1.18, 1.19]
runs-on: ${{ matrix.os }}
name: test_version_${{ matrix.version }}_on_${{ matrix.os }}
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version

View File

@@ -0,0 +1,101 @@
name: test
jobs:
job1:
name: test_version_1.17_on_ubuntu-20.04
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.17
---
name: test
jobs:
job1:
name: test_version_1.18_on_ubuntu-20.04
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.18
---
name: test
jobs:
job1:
name: test_version_1.19_on_ubuntu-20.04
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.19
---
name: test
jobs:
job1:
name: test_version_1.17_on_ubuntu-22.04
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.17
---
name: test
jobs:
job1:
name: test_version_1.18_on_ubuntu-22.04
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.18
---
name: test
jobs:
job1:
name: test_version_1.19_on_ubuntu-22.04
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.19

View File

@@ -367,7 +367,7 @@ func environment(yml yaml.Node) map[string]string {
return env
}
// Environments returns string-based key=value map for a job
// Environment returns string-based key=value map for a job
func (j *Job) Environment() map[string]string {
return environment(j.Env)
}
@@ -606,7 +606,7 @@ func (s *Step) String() string {
return s.ID
}
// Environments returns string-based key=value map for a step
// Environment returns string-based key=value map for a step
func (s *Step) Environment() map[string]string {
return environment(s.Env)
}

View File

@@ -112,7 +112,8 @@ func readActionImpl(ctx context.Context, step *model.Step, actionDir string, act
defer closer.Close()
action, err := model.ReadAction(reader)
logger.Debugf("Read action %v from '%s'", action, "Unknown")
// For Gitea, reduce log noise
// logger.Debugf("Read action %v from '%s'", action, "Unknown")
return action, err
}
@@ -162,7 +163,8 @@ func runActionImpl(step actionStep, actionDir string, remoteAction *remoteAction
}
action := step.getActionModel()
logger.Debugf("About to run action %v", action)
// For Gitea, reduce log noise
// logger.Debugf("About to run action %v", action)
err := setupActionEnv(ctx, step, remoteAction)
if err != nil {

View File

@@ -27,7 +27,7 @@ func evaluateCompositeInputAndEnv(ctx context.Context, parent *RunContext, step
envKey := regexp.MustCompile("[^A-Z0-9-]").ReplaceAllString(strings.ToUpper(inputID), "_")
envKey = fmt.Sprintf("INPUT_%s", strings.ToUpper(envKey))
// lookup if key is defined in the step but the the already
// lookup if key is defined in the step but the already
// evaluated value from the environment
_, defined := step.getStepModel().With[inputID]
if value, ok := stepEnv[envKey]; defined && ok {

View File

@@ -106,7 +106,7 @@ func (rc *RunContext) NewExpressionEvaluatorWithEnv(ctx context.Context, env map
//go:embed hashfiles/index.js
var hashfiles string
// NewExpressionEvaluator creates a new evaluator
// NewStepExpressionEvaluator creates a new evaluator
func (rc *RunContext) NewStepExpressionEvaluator(ctx context.Context, step step) ExpressionEvaluator {
// todo: cleanup EvaluationEnvironment creation
job := rc.Run.Job()

View File

@@ -52,7 +52,7 @@ func Masks(ctx context.Context) *[]string {
return &[]string{}
}
// WithLogger adds a value to the context for the logger
// WithMasks adds a value to the context for the logger
func WithMasks(ctx context.Context, masks *[]string) context.Context {
return context.WithValue(ctx, masksContextKeyVal, masks)
}

View File

@@ -54,9 +54,17 @@ func newLocalReusableWorkflowExecutor(rc *RunContext) common.Executor {
func newRemoteReusableWorkflowExecutor(rc *RunContext) common.Executor {
uses := rc.Run.Job().Uses
remoteReusableWorkflow := newRemoteReusableWorkflowWithPlat(rc.Config.GitHubInstance, uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
var remoteReusableWorkflow *remoteReusableWorkflow
if strings.HasPrefix(uses, "http://") || strings.HasPrefix(uses, "https://") {
remoteReusableWorkflow = newRemoteReusableWorkflowFromAbsoluteURL(uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format http(s)://{domain}/{owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
}
} else {
remoteReusableWorkflow = newRemoteReusableWorkflowWithPlat(rc.Config.GitHubInstance, uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
}
}
// uses with safe filename makes the target directory look something like this {owner}-{repo}-.github-workflows-{filename}@{ref}
@@ -226,6 +234,24 @@ func newRemoteReusableWorkflowWithPlat(url, uses string) *remoteReusableWorkflow
}
}
// For Gitea
// newRemoteReusableWorkflowWithPlat create a `remoteReusableWorkflow` from an absolute url
func newRemoteReusableWorkflowFromAbsoluteURL(uses string) *remoteReusableWorkflow {
r := regexp.MustCompile(`^(https?://.*)/([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)
matches := r.FindStringSubmatch(uses)
if len(matches) != 7 {
return nil
}
return &remoteReusableWorkflow{
URL: matches[1],
Org: matches[2],
Repo: matches[3],
GitPlatform: matches[4],
Filename: matches[5],
Ref: matches[6],
}
}
// deprecated: use newRemoteReusableWorkflowWithPlat
func newRemoteReusableWorkflow(uses string) *remoteReusableWorkflow {
// GitHub docs:

View File

@@ -221,7 +221,8 @@ func setupEnv(ctx context.Context, step step) error {
}
}
common.Logger(ctx).Debugf("setupEnv => %v", *step.getEnv())
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("setupEnv => %v", *step.getEnv())
return nil
}

View File

@@ -7,8 +7,8 @@ jobs:
- name: Install tools
run: |
apt update
apt install -y bind9-host
apt install -y iputils-ping
- name: Run hostname test
run: |
hostname -f
host $(hostname -f)
ping -c 4 $(hostname -f)