Compare commits


29 Commits

Author SHA1 Message Date
ChristopherHX
39509e9ad0 Support Simplified Concurrency (#139)
- update the RawConcurrency struct to parse and serialize the string-based concurrency notation
- update EvaluateConcurrency to handle the new RawConcurrency format (see the sketch below)

Reviewed-on: https://gitea.com/gitea/act/pulls/139
Reviewed-by: Zettat123 <zettat123@noreply.gitea.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
Co-committed-by: ChristopherHX <christopher.homberger@web.de>
2025-07-29 09:14:47 +00:00
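A minimal sketch of the two notations this commit reconciles, mirroring the `RawConcurrency` type and its scalar-first `UnmarshalYAML` fallback added in `pkg/model` (see the workflow.go hunk near the end of this diff; `gopkg.in/yaml.v3` assumed):

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Mirrors the RawConcurrency type from this PR: a scalar such as
// `concurrency: ${{ github.workflow }}` is kept as a raw expression,
// while the mapping form fills Group and CancelInProgress.
type RawConcurrency struct {
	Group            string `yaml:"group,omitempty"`
	CancelInProgress string `yaml:"cancel-in-progress,omitempty"`
	RawExpression    string `yaml:"-"`
}

type objectConcurrency RawConcurrency // alias without methods, avoids recursion

func (r *RawConcurrency) UnmarshalYAML(n *yaml.Node) error {
	if n.Kind == yaml.ScalarNode { // string-based notation
		return n.Decode(&r.RawExpression)
	}
	return n.Decode((*objectConcurrency)(r)) // group/cancel-in-progress mapping
}

func main() {
	for _, src := range []string{
		`concurrency: ${{ github.workflow }}-${{ github.ref }}`,
		"concurrency:\n  group: build-${{ github.ref }}\n  cancel-in-progress: true",
	} {
		var w struct {
			Concurrency RawConcurrency `yaml:"concurrency"`
		}
		if err := yaml.Unmarshal([]byte(src), &w); err != nil {
			panic(err)
		}
		fmt.Printf("%+v\n", w.Concurrency)
	}
}
```

Note that `CancelInProgress` stays a string so that an expression like `cancel-in-progress: ${{ ... }}` survives until evaluation; `EvaluateConcurrency` later compares the evaluated value against "true".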
badhezi
9924aea786 Evaluate run-name field for workflows (#137)
To support https://github.com/go-gitea/gitea/pull/34301

Reviewed-on: https://gitea.com/gitea/act/pulls/137
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: badhezi <zlilaharon@gmail.com>
Co-committed-by: badhezi <zlilaharon@gmail.com>
2025-05-12 17:17:50 +00:00
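The mechanics are in the jobparser diff below: the workflow-level `run-name` is interpolated once, before the workflow is split into single-job workflows. A toy illustration of the effect (the replacement map stands in for act's real ExpressionEvaluator and GitHub context):

```go
package main

import (
	"fmt"
	"strings"
)

func main() {
	// run-name as written in the workflow file:
	runName := "Deploy by ${{ github.actor }} on ${{ github.ref_name }}"

	// Toy context; the real values come from pc.gitContext in the parser.
	ctx := map[string]string{
		"github.actor":    "badhezi",
		"github.ref_name": "main",
	}
	for k, v := range ctx {
		runName = strings.ReplaceAll(runName, "${{ "+k+" }}", v)
	}
	fmt.Println(runName) // Deploy by badhezi on main
}
```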
Jack Jackson
65c232c4a5 Parse permissions (#133)
Resurrecting [this PR](https://gitea.com/gitea/act/pulls/73) as the original author has [lost motivation](https://github.com/go-gitea/gitea/pull/25664#issuecomment-2737099259) (though, to be clear - all credit belongs to them, all mistakes are mine and mine alone!)

Co-authored-by: Søren L. Hansen <sorenisanerd@gmail.com>
Reviewed-on: https://gitea.com/gitea/act/pulls/133
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Jack Jackson <scubbojj@gmail.com>
Co-committed-by: Jack Jackson <scubbojj@gmail.com>
2025-03-24 18:17:06 +00:00
Guillaume S.
5da4954b65 fix handle missing yaml.ScalarNode (#129)
This bug was reported in https://github.com/go-gitea/gitea/issues/33657.
The rewrite of the case below was missed in commit 6cdf1e5788:
```go
case string:
    acts[act] = []string{b}
```

Reviewed-on: https://gitea.com/gitea/act/pulls/129
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Guillaume S. <me@gsvd.dev>
Co-committed-by: Guillaume S. <me@gsvd.dev>
2025-02-26 06:19:02 +00:00
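A self-contained sketch of the restored case (using `gopkg.in/yaml.v3`, the same library as the parser): a scalar under an event key, such as `branches: main`, must be treated as a one-element list, exactly like `branches: [main]`.

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	src := []byte("on:\n  push:\n    branches: main")

	var doc struct {
		On map[string]map[string]yaml.Node `yaml:"on"`
	}
	if err := yaml.Unmarshal(src, &doc); err != nil {
		panic(err)
	}

	acts := map[string][]string{}
	for act, v := range doc.On["push"] {
		switch v.Kind {
		case yaml.ScalarNode: // the branch missing after 6cdf1e5788
			var s string
			if err := v.Decode(&s); err != nil {
				panic(err)
			}
			acts[act] = []string{s}
		case yaml.SequenceNode: // branches: [main] keeps working
			var ss []string
			if err := v.Decode(&ss); err != nil {
				panic(err)
			}
			acts[act] = ss
		}
	}
	fmt.Println(acts) // map[branches:[main]]
}
```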
Zettat123
ec091ad269 Support concurrency (#124)
To support `concurrency` syntax for Gitea Actions

Gitea PR: https://github.com/go-gitea/gitea/pull/32751

Reviewed-on: https://gitea.com/gitea/act/pulls/124
Reviewed-by: Lunny Xiao <lunny@noreply.gitea.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2025-02-11 02:51:48 +00:00
Zettat123
1656206765 Improve the support for reusable workflows (#122)
Fix [#32439](https://github.com/go-gitea/gitea/issues/32439)

- Support reusable workflows with conditional jobs
- Support nesting reusable workflows

Reviewed-on: https://gitea.com/gitea/act/pulls/122
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Reviewed-by: Jason Song <wolfogre@noreply.gitea.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-11-23 14:14:17 +00:00
Lunny Xiao
6cdf1e5788 Fix ParseRawOn sequence problem (#119)
Fix https://gitea.com/gitea/act/actions/runs/277/jobs/0

Reviewed-on: https://gitea.com/gitea/act/pulls/119
2024-10-05 19:29:55 +00:00
Lunny Xiao
ab381649da Add parsing for workflow dispatch (#118)
Reviewed-on: https://gitea.com/gitea/act/pulls/118
2024-10-03 02:56:58 +00:00
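The diff below adds a `WorkflowDispatchInput` struct and a parser for the `inputs` mapping. A sketch of the decoding, reusing that struct's fields (the `Name` field is filled from the mapping key, as in the real parser):

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// WorkflowDispatchInput mirrors the struct added in pkg/jobparser.
type WorkflowDispatchInput struct {
	Name        string   `yaml:"name"`
	Description string   `yaml:"description"`
	Required    bool     `yaml:"required"`
	Default     string   `yaml:"default"`
	Type        string   `yaml:"type"`
	Options     []string `yaml:"options"`
}

func main() {
	src := []byte(`
logLevel:
  description: 'Log level'
  required: true
  default: 'warning'
  type: choice
  options: [info, warning, debug]
`)
	raw := map[string]WorkflowDispatchInput{}
	if err := yaml.Unmarshal(src, &raw); err != nil {
		panic(err)
	}
	var inputs []WorkflowDispatchInput
	for name, in := range raw {
		in.Name = name // the key carries the input's name
		inputs = append(inputs, in)
	}
	fmt.Printf("%+v\n", inputs)
}
```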
Jason Song
38e7e9e939 Use hashed uses string as cache dir name (#117)
Reviewed-on: https://gitea.com/gitea/act/pulls/117
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-09-24 06:53:41 +00:00
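The naming scheme is the `UsesHash` helper added to `pkg/model` (visible near the end of this diff): the hex-encoded SHA-256 of the `uses` string. A standalone sketch:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Same computation as Step.UsesHash in this PR.
func usesHash(uses string) string {
	return fmt.Sprintf("%x", sha256.Sum256([]byte(uses)))
}

func main() {
	// Matches the expected value asserted in the new TestStep_UsesHash:
	fmt.Println(usesHash("https://gitea.com/testa/testb@v3"))
	// ae437878e9f285bd7518c58664f9fabbb12d05feddd7169c01702a2a14322aa8
}
```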
Zettat123
2ab806053c Check all job results when calling reusable workflows (#116)
Fix [#31900](https://github.com/go-gitea/gitea/issues/31900)

Reviewed-on: https://gitea.com/gitea/act/pulls/116
Reviewed-by: Jason Song <wolfogre@noreply.gitea.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-09-24 06:52:45 +00:00
Zettat123
6a090f67e5 Support some GITEA_ environment variables (#112)
Fix https://gitea.com/gitea/act_runner/issues/575

Reviewed-on: https://gitea.com/gitea/act/pulls/112
Reviewed-by: Jason Song <i@wolfogre.com>
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-07-29 04:17:45 +00:00
Jason Song
517d11c671 Reduce log noise (#108)
We cannot guarantee that all noisy logs are removed at once.

Comment them instead of removing them to make it easier to merge upstream.

What this PR removes are the very long, almost unreadable logs, such as the one in the screenshot below.

[screenshot attachment: example of an extremely long, unreadable debug log]

Reviewed-on: https://gitea.com/gitea/act/pulls/108
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-04-10 06:55:46 +00:00
Jason Song
e1b1e81124 Revert "Pass 'sleep' as container command rather than entrypoint (#86)" (#107)
This reverts #86.

Some images use a custom entrypoint for specific purposes, so running `[entrypoint] [cmd]`, e.g. `helm /bin/sleep 1`, will fail.

It causes https://gitea.com/gitea/helm-chart/actions/runs/755 since the image is `alpine/helm`.

```yaml
  check-and-test:
    runs-on: ubuntu-latest
    container: alpine/helm:3.14.3
```

Reviewed-on: https://gitea.com/gitea/act/pulls/107
Reviewed-by: Lunny Xiao <xiaolunwen@gmail.com>
2024-04-10 06:53:28 +00:00
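In terms of the Docker SDK's `container.Config`, the trade-off looks roughly like this (image and values illustrative, not the exact ones act passes):

```go
package main

import (
	"fmt"

	"github.com/docker/docker/api/types/container"
)

func main() {
	// After #86, sleep was the command, so Docker ran
	// `<image entrypoint> <cmd>`; for alpine/helm that is
	// `helm /bin/sleep 1`, which fails.
	after86 := container.Config{
		Image: "alpine/helm:3.14.3",
		Cmd:   []string{"/bin/sleep", "1"},
	}

	// The revert restores the entrypoint override, which bypasses the
	// image's own entrypoint and reliably keeps the container alive.
	reverted := container.Config{
		Image:      "alpine/helm:3.14.3",
		Entrypoint: []string{"/bin/sleep", "1"},
	}
	fmt.Println(after86.Cmd, reverted.Entrypoint)
}
```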
Zettat123
64876e3696 Interpolate job name with matrix (#106)
Fix https://github.com/go-gitea/gitea/issues/28207

Reviewed-on: https://gitea.com/gitea/act/pulls/106
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-04-07 03:34:53 +00:00
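A runnable sketch of the new `nameWithMatrix` logic (`matrixName` and the evaluator are simplified stand-ins for act's helpers): names containing `${{ ... }}` are now interpolated instead of having the matrix values appended.

```go
package main

import (
	"fmt"
	"sort"
	"strings"
)

// Stand-in for act's matrixName helper: renders the matrix values.
func matrixName(m map[string]interface{}) string {
	keys := make([]string, 0, len(m))
	for k := range m {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	vals := make([]string, 0, len(m))
	for _, k := range keys {
		vals = append(vals, fmt.Sprint(m[k]))
	}
	return "(" + strings.Join(vals, ", ") + ")"
}

// Mirrors the change in pkg/jobparser; interpolate stands in for act's
// ExpressionEvaluator.Interpolate.
func nameWithMatrix(name string, m map[string]interface{}, interpolate func(string) string) string {
	if len(m) == 0 {
		return name
	}
	if !strings.Contains(name, "${{") || !strings.Contains(name, "}}") {
		return name + " " + matrixName(m) // old behavior, still the default
	}
	return interpolate(name) // new: evaluate ${{ matrix.* }} in the name
}

func main() {
	matrix := map[string]interface{}{"os": "ubuntu-20.04", "version": "1.17"}
	interpolate := func(s string) string { // toy evaluator for matrix.* only
		for k, v := range matrix {
			s = strings.ReplaceAll(s, "${{ matrix."+k+" }}", fmt.Sprint(v))
		}
		return s
	}
	fmt.Println(nameWithMatrix("build", matrix, interpolate))
	// build (ubuntu-20.04, 1.17)
	fmt.Println(nameWithMatrix("test_version_${{ matrix.version }}_on_${{ matrix.os }}", matrix, interpolate))
	// test_version_1.17_on_ubuntu-20.04
}
```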
Jason Song
3fa1dba92b Merge tag 'nektos/v0.2.61' 2024-04-01 14:23:16 +08:00
GitHub Actions
361b7e9f1a chore: bump VERSION to 0.2.61 2024-04-01 02:16:09 +00:00
Zettat123
9725f60394 Support reusing workflows with absolute URLs (#104)
Resolve https://gitea.com/gitea/act_runner/issues/507

Reviewed-on: https://gitea.com/gitea/act/pulls/104
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Zettat123 <zettat123@gmail.com>
Co-committed-by: Zettat123 <zettat123@gmail.com>
2024-03-29 06:15:28 +00:00
ChristopherHX
f825e42ce2 fix: cache adjust restore order of exact key matches (#2267)
* wip: adjust restore order

* fixup

* add tests

* cleanup

* fix typo

---------

Co-authored-by: mergify[bot] <37929162+mergify[bot]@users.noreply.github.com>
2024-03-29 02:07:20 +00:00
Jason Collins
d9a19c8b02 Trivial: reduce log spam. (#2256)
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
2024-03-28 23:28:48 +00:00
James Kang
3949d74af5 chore: remove repetitive words (#2259)
Signed-off-by: majorteach <csgcgl@126.com>
Co-authored-by: ChristopherHX <christopher.homberger@web.de>
2024-03-28 23:14:53 +00:00
Jason Song
b9382a2c4e Support overwriting caches (#2265)
* feat: support overwrite caches

* test: fix case

* test: fix get_with_multiple_keys

* chore: use atomic.Bool

* test: improve get_with_multiple_keys

* chore: use ping to improve path

* fix: wrong CompareAndSwap

* test: TestHandler_gcCache

* chore: lint code

* chore: lint code
2024-03-28 16:42:02 +00:00
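One of the smaller pieces, extracted and simplified from the handler diff below: the gc guard rewritten around `atomic.Bool`, where only the goroutine that wins the `CompareAndSwap` runs a collection.

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

type handler struct{ gcing atomic.Bool }

func (h *handler) gcCache() {
	if h.gcing.Load() { // cheap fast path
		return
	}
	if !h.gcing.CompareAndSwap(false, true) {
		return // another goroutine won the race after our Load
	}
	defer h.gcing.Store(false)
	fmt.Println("collecting garbage")
}

func main() {
	h := &handler{}
	var wg sync.WaitGroup
	for i := 0; i < 4; i++ {
		wg.Add(1)
		go func() { defer wg.Done(); h.gcCache() }()
	}
	wg.Wait() // at most one collection runs at any moment
}
```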
Jason Song
f56dd65ff6 test: use ping to improve network test (#2266) 2024-03-28 11:56:26 +00:00
Thomas E Lackey
a79d81989f Pass 'sleep' as container command rather than entrypoint (#86)
The current code overrides the container's entrypoint with `sleep`. Unfortunately, that prevents initialization scripts, such as those that set up Docker-in-Docker, from running.

The change simply moves the `sleep` to the command, rather than entrypoint, directive.

For most containers of this sort, the entrypoint script performs initialization, and then ends with `$@` to execute whatever command is passed.

If the container has no entrypoint, the command is executed directly.  As a result, this should be a transparent change for most use cases, while allowing the container's entrypoint to be used when present.

Reviewed-on: https://gitea.com/gitea/act/pulls/86
Reviewed-by: Jason Song <i@wolfogre.com>
Co-authored-by: Thomas E Lackey <telackey@bozemanpass.com>
Co-committed-by: Thomas E Lackey <telackey@bozemanpass.com>
2024-03-27 10:17:48 +00:00
dependabot[bot]
069720abff build(deps): bump github.com/docker/docker (#2252)
Bumps [github.com/docker/docker](https://github.com/docker/docker) from 24.0.7+incompatible to 24.0.9+incompatible.
- [Release notes](https://github.com/docker/docker/releases)
- [Commits](https://github.com/docker/docker/compare/v24.0.7...v24.0.9)

---
updated-dependencies:
- dependency-name: github.com/docker/docker
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-20 17:37:01 +00:00
dependabot[bot]
8c83d57212 build(deps): bump golang.org/x/term from 0.17.0 to 0.18.0 (#2244)
Bumps [golang.org/x/term](https://github.com/golang/term) from 0.17.0 to 0.18.0.
- [Commits](https://github.com/golang/term/compare/v0.17.0...v0.18.0)

---
updated-dependencies:
- dependency-name: golang.org/x/term
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-11 02:28:21 +00:00
ChristopherHX
119ceb81d9 fix: rootless permission bits (new actions cache) (#2242)
* fix: rootless permission bits (new actions cache)

* add test

* fix lint / more tests
2024-03-08 01:25:03 +00:00
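A `Mode: 777` to `Mode: 0o777` correction of exactly this kind appears in the container diff below; a quick demonstration of why the decimal literal was wrong:

```go
package main

import (
	"archive/tar"
	"fmt"
	"io/fs"
)

func main() {
	// tar.Header.Mode holds Unix permission bits, which are octal.
	// The decimal literal 777 is 0o1411, not 0o777:
	fmt.Printf("%o vs %o\n", 777, 0o777)  // 1411 vs 777
	fmt.Println(fs.FileMode(777).Perm())  // -r----x--x (what 777 really grants)
	fmt.Println(fs.FileMode(0o777).Perm()) // -rwxrwxrwx (what was intended)

	_ = tar.Header{Mode: 0o777} // the corrected literal
}
```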
huajin tong
352ad41ad2 fix function name in comment (#2240)
Signed-off-by: thirdkeyword <fliterdashen@gmail.com>
2024-03-06 14:20:06 +00:00
ChristopherHX
75e4ad93f4 fix: docker buildx cache restore not working (#2236)
* To take effect, the artifacts v4 PR with adjusted claims is needed
2024-03-05 06:04:54 +00:00
dependabot[bot]
934b13a7a1 build(deps): bump github.com/stretchr/testify from 1.8.4 to 1.9.0 (#2235)
Bumps [github.com/stretchr/testify](https://github.com/stretchr/testify) from 1.8.4 to 1.9.0.
- [Release notes](https://github.com/stretchr/testify/releases)
- [Commits](https://github.com/stretchr/testify/compare/v1.8.4...v1.9.0)

---
updated-dependencies:
- dependency-name: github.com/stretchr/testify
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-04 03:08:44 +00:00
31 changed files with 1149 additions and 257 deletions


@@ -26,7 +26,7 @@
## Images based on [`actions/virtual-environments`][gh/actions/virtual-environments]
-**Note: `nektos/act-environments-ubuntu` have been last updated in February, 2020. It's recommended to update the image manually after `docker pull` if you decide to to use it.**
+**Note: `nektos/act-environments-ubuntu` have been last updated in February, 2020. It's recommended to update the image manually after `docker pull` if you decide to use it.**
| Image | Size | GitHub Repository |
| --------------------------------------------------------------------------------- | -------------------------------------------------------------------------- | ------------------------------------------------------- |


@@ -1 +1 @@
-0.2.60
+0.2.61

go.mod

@@ -10,7 +10,7 @@ require (
github.com/creack/pty v1.1.21
github.com/docker/cli v24.0.7+incompatible
github.com/docker/distribution v2.8.3+incompatible
-github.com/docker/docker v24.0.7+incompatible // 24.0 branch
+github.com/docker/docker v24.0.9+incompatible // 24.0 branch
github.com/docker/go-connections v0.4.0
github.com/go-git/go-billy/v5 v5.5.0
github.com/go-git/go-git/v5 v5.11.0
@@ -30,10 +30,10 @@ require (
github.com/sirupsen/logrus v1.9.3
github.com/spf13/cobra v1.8.0
github.com/spf13/pflag v1.0.5
-github.com/stretchr/testify v1.8.4
+github.com/stretchr/testify v1.9.0
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a
go.etcd.io/bbolt v1.3.9
-golang.org/x/term v0.17.0
+golang.org/x/term v0.18.0
gopkg.in/yaml.v3 v3.0.1
gotest.tools/v3 v3.5.1
)
@@ -74,7 +74,7 @@ require (
github.com/robfig/cron/v3 v3.0.1 // indirect
github.com/sergi/go-diff v1.2.0 // indirect
github.com/skeema/knownhosts v1.2.1 // indirect
-github.com/stretchr/objx v0.5.0 // indirect
+github.com/stretchr/objx v0.5.2 // indirect
github.com/xanzy/ssh-agent v0.3.3 // indirect
github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
@@ -83,7 +83,7 @@ require (
golang.org/x/mod v0.12.0 // indirect
golang.org/x/net v0.19.0 // indirect
golang.org/x/sync v0.6.0 // indirect
-golang.org/x/sys v0.17.0 // indirect
+golang.org/x/sys v0.18.0 // indirect
golang.org/x/text v0.14.0 // indirect
golang.org/x/tools v0.13.0 // indirect
gopkg.in/warnings.v0 v0.1.2 // indirect

go.sum

@@ -42,8 +42,8 @@ github.com/docker/cli v24.0.7+incompatible h1:wa/nIwYFW7BVTGa7SWPVyyXU9lgORqUb1x
github.com/docker/cli v24.0.7+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
-github.com/docker/docker v24.0.7+incompatible h1:Wo6l37AuwP3JaMnZa226lzVXGA3F9Ig1seQen0cKYlM=
-github.com/docker/docker v24.0.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v24.0.9+incompatible h1:HPGzNmwfLZWdxHqK9/II92pyi1EpYKsAqcl4G0Of9v0=
+github.com/docker/docker v24.0.9+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
github.com/docker/docker-credential-helpers v0.7.0 h1:xtCHsjxogADNZcdv1pKUHXryefjlVRqWqIhk/uXJp0A=
github.com/docker/docker-credential-helpers v0.7.0/go.mod h1:rETQfLdHNT3foU5kuNkFR1R1V12OJRRO5lzt2D1b5X0=
github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
@@ -165,18 +165,15 @@ github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
-github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
-github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
+github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
+github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
-github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
-github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
+github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
+github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a h1:oIi7H/bwFUYKYhzKbHc+3MvHRWqhQwXVB4LweLMiVy0=
github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a/go.mod h1:iSvujNDmpZ6eQX+bg/0X3lF7LEmZ8N77g2a/J/+Zt2U=
github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM=
@@ -250,15 +247,15 @@ golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.17.0 h1:25cE3gD+tdBA7lp7QfhuV+rJiE9YXTcS3VG1SqssI/Y=
-golang.org/x/sys v0.17.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4=
+golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
-golang.org/x/term v0.17.0 h1:mkTF7LCd6WGJNL3K1Ad7kwxNfYAW6a8a8QqtMblp/4U=
-golang.org/x/term v0.17.0/go.mod h1:lLRBjIVuehSbZlaOtGMbcMncT+aqLLLmKrsjNrUguwk=
+golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8=
+golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=


@@ -35,7 +35,7 @@ type Handler struct {
server *http.Server
logger logrus.FieldLogger
gcing int32 // TODO: use atomic.Bool when we can use Go 1.19
gcing atomic.Bool
gcAt time.Time
outboundIP string
@@ -170,7 +170,7 @@ func (h *Handler) find(w http.ResponseWriter, r *http.Request, _ httprouter.Para
}
defer db.Close()
cache, err := h.findCache(db, keys, version)
cache, err := findCache(db, keys, version)
if err != nil {
h.responseJSON(w, r, 500, err)
return
@@ -206,32 +206,17 @@ func (h *Handler) reserve(w http.ResponseWriter, r *http.Request, _ httprouter.P
api.Key = strings.ToLower(api.Key)
cache := api.ToCache()
cache.FillKeyVersionHash()
db, err := h.openDB()
if err != nil {
h.responseJSON(w, r, 500, err)
return
}
defer db.Close()
if err := db.FindOne(cache, bolthold.Where("KeyVersionHash").Eq(cache.KeyVersionHash)); err != nil {
if !errors.Is(err, bolthold.ErrNotFound) {
h.responseJSON(w, r, 500, err)
return
}
} else {
h.responseJSON(w, r, 400, fmt.Errorf("already exist"))
return
}
now := time.Now().Unix()
cache.CreatedAt = now
cache.UsedAt = now
if err := db.Insert(bolthold.NextSequence(), cache); err != nil {
h.responseJSON(w, r, 500, err)
return
}
// write back id to db
if err := db.Update(cache.ID, cache); err != nil {
if err := insertCache(db, cache); err != nil {
h.responseJSON(w, r, 500, err)
return
}
@@ -364,56 +349,51 @@ func (h *Handler) middleware(handler httprouter.Handle) httprouter.Handle {
}
// if not found, return (nil, nil) instead of an error.
func (h *Handler) findCache(db *bolthold.Store, keys []string, version string) (*Cache, error) {
if len(keys) == 0 {
return nil, nil
}
key := keys[0] // the first key is for exact match.
cache := &Cache{
Key: key,
Version: version,
}
cache.FillKeyVersionHash()
if err := db.FindOne(cache, bolthold.Where("KeyVersionHash").Eq(cache.KeyVersionHash)); err != nil {
if !errors.Is(err, bolthold.ErrNotFound) {
return nil, err
func findCache(db *bolthold.Store, keys []string, version string) (*Cache, error) {
cache := &Cache{}
for _, prefix := range keys {
// if a key in the list matches exactly, don't return partial matches
if err := db.FindOne(cache,
bolthold.Where("Key").Eq(prefix).
And("Version").Eq(version).
And("Complete").Eq(true).
SortBy("CreatedAt").Reverse()); err == nil || !errors.Is(err, bolthold.ErrNotFound) {
if err != nil {
return nil, fmt.Errorf("find cache: %w", err)
}
return cache, nil
}
} else if cache.Complete {
return cache, nil
}
stop := fmt.Errorf("stop")
for _, prefix := range keys[1:] {
found := false
prefixPattern := fmt.Sprintf("^%s", regexp.QuoteMeta(prefix))
re, err := regexp.Compile(prefixPattern)
if err != nil {
continue
}
if err := db.ForEach(bolthold.Where("Key").RegExp(re).And("Version").Eq(version).SortBy("CreatedAt").Reverse(), func(v *Cache) error {
if !strings.HasPrefix(v.Key, prefix) {
return stop
}
if v.Complete {
cache = v
found = true
return stop
}
return nil
}); err != nil {
if !errors.Is(err, stop) {
return nil, err
if err := db.FindOne(cache,
bolthold.Where("Key").RegExp(re).
And("Version").Eq(version).
And("Complete").Eq(true).
SortBy("CreatedAt").Reverse()); err != nil {
if errors.Is(err, bolthold.ErrNotFound) {
continue
}
return nil, fmt.Errorf("find cache: %w", err)
}
if found {
return cache, nil
}
return cache, nil
}
return nil, nil
}
func insertCache(db *bolthold.Store, cache *Cache) error {
if err := db.Insert(bolthold.NextSequence(), cache); err != nil {
return fmt.Errorf("insert cache: %w", err)
}
// write back id to db
if err := db.Update(cache.ID, cache); err != nil {
return fmt.Errorf("write back id to db: %w", err)
}
return nil
}
func (h *Handler) useCache(id int64) {
db, err := h.openDB()
if err != nil {
@@ -428,14 +408,21 @@ func (h *Handler) useCache(id int64) {
_ = db.Update(cache.ID, cache)
}
const (
keepUsed = 30 * 24 * time.Hour
keepUnused = 7 * 24 * time.Hour
keepTemp = 5 * time.Minute
keepOld = 5 * time.Minute
)
func (h *Handler) gcCache() {
if atomic.LoadInt32(&h.gcing) != 0 {
if h.gcing.Load() {
return
}
if !atomic.CompareAndSwapInt32(&h.gcing, 0, 1) {
if !h.gcing.CompareAndSwap(false, true) {
return
}
defer atomic.StoreInt32(&h.gcing, 0)
defer h.gcing.Store(false)
if time.Since(h.gcAt) < time.Hour {
h.logger.Debugf("skip gc: %v", h.gcAt.String())
@@ -444,37 +431,18 @@ func (h *Handler) gcCache() {
h.gcAt = time.Now()
h.logger.Debugf("gc: %v", h.gcAt.String())
const (
keepUsed = 30 * 24 * time.Hour
keepUnused = 7 * 24 * time.Hour
keepTemp = 5 * time.Minute
)
db, err := h.openDB()
if err != nil {
return
}
defer db.Close()
// Remove the caches which are not completed for a while, they are most likely to be broken.
var caches []*Cache
if err := db.Find(&caches, bolthold.Where("UsedAt").Lt(time.Now().Add(-keepTemp).Unix())); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
if cache.Complete {
continue
}
h.storage.Remove(cache.ID)
if err := db.Delete(cache.ID, cache); err != nil {
h.logger.Warnf("delete cache: %v", err)
continue
}
h.logger.Infof("deleted cache: %+v", cache)
}
}
caches = caches[:0]
if err := db.Find(&caches, bolthold.Where("UsedAt").Lt(time.Now().Add(-keepUnused).Unix())); err != nil {
if err := db.Find(&caches, bolthold.
Where("UsedAt").Lt(time.Now().Add(-keepTemp).Unix()).
And("Complete").Eq(false),
); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
@@ -487,8 +455,11 @@ func (h *Handler) gcCache() {
}
}
// Remove the old caches which have not been used recently.
caches = caches[:0]
if err := db.Find(&caches, bolthold.Where("CreatedAt").Lt(time.Now().Add(-keepUsed).Unix())); err != nil {
if err := db.Find(&caches, bolthold.
Where("UsedAt").Lt(time.Now().Add(-keepUnused).Unix()),
); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
@@ -500,6 +471,55 @@ func (h *Handler) gcCache() {
h.logger.Infof("deleted cache: %+v", cache)
}
}
// Remove the old caches which are too old.
caches = caches[:0]
if err := db.Find(&caches, bolthold.
Where("CreatedAt").Lt(time.Now().Add(-keepUsed).Unix()),
); err != nil {
h.logger.Warnf("find caches: %v", err)
} else {
for _, cache := range caches {
h.storage.Remove(cache.ID)
if err := db.Delete(cache.ID, cache); err != nil {
h.logger.Warnf("delete cache: %v", err)
continue
}
h.logger.Infof("deleted cache: %+v", cache)
}
}
// Remove the old caches with the same key and version, keep the latest one.
// Also keep old ones that have been used recently, in case the cache is still in use.
if results, err := db.FindAggregate(
&Cache{},
bolthold.Where("Complete").Eq(true),
"Key", "Version",
); err != nil {
h.logger.Warnf("find aggregate caches: %v", err)
} else {
for _, result := range results {
if result.Count() <= 1 {
continue
}
result.Sort("CreatedAt")
caches = caches[:0]
result.Reduction(&caches)
for _, cache := range caches[:len(caches)-1] {
if time.Since(time.Unix(cache.UsedAt, 0)) < keepOld {
// Keep it since it has been used recently, even if it's old.
// Or it could break downloading in process.
continue
}
h.storage.Remove(cache.ID)
if err := db.Delete(cache.ID, cache); err != nil {
h.logger.Warnf("delete cache: %v", err)
continue
}
h.logger.Infof("deleted cache: %+v", cache)
}
}
}
}
func (h *Handler) responseJSON(w http.ResponseWriter, r *http.Request, code int, v ...any) {


@@ -10,9 +10,11 @@ import (
"path/filepath"
"strings"
"testing"
"time"
"github.com/stretchr/testify/assert"
"github.com/stretchr/testify/require"
"github.com/timshannon/bolthold"
"go.etcd.io/bbolt"
)
@@ -78,6 +80,9 @@ func TestHandler(t *testing.T) {
t.Run("duplicate reserve", func(t *testing.T) {
key := strings.ToLower(t.Name())
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
var first, second struct {
CacheID uint64 `json:"cacheId"`
}
{
body, err := json.Marshal(&Request{
Key: key,
@@ -89,10 +94,8 @@ func TestHandler(t *testing.T) {
require.NoError(t, err)
assert.Equal(t, 200, resp.StatusCode)
got := struct {
CacheID uint64 `json:"cacheId"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
require.NoError(t, json.NewDecoder(resp.Body).Decode(&first))
assert.NotZero(t, first.CacheID)
}
{
body, err := json.Marshal(&Request{
@@ -103,8 +106,13 @@ func TestHandler(t *testing.T) {
require.NoError(t, err)
resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
require.NoError(t, err)
assert.Equal(t, 400, resp.StatusCode)
assert.Equal(t, 200, resp.StatusCode)
require.NoError(t, json.NewDecoder(resp.Body).Decode(&second))
assert.NotZero(t, second.CacheID)
}
assert.NotEqual(t, first.CacheID, second.CacheID)
})
t.Run("upload with bad id", func(t *testing.T) {
@@ -341,9 +349,9 @@ func TestHandler(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
keys := [3]string{
key + "_a",
key + "_a_b",
key + "_a_b_c",
key + "_a_b",
key + "_a",
}
contents := [3][]byte{
make([]byte, 100),
@@ -354,6 +362,7 @@ func TestHandler(t *testing.T) {
_, err := rand.Read(contents[i])
require.NoError(t, err)
uploadCacheNormally(t, base, keys[i], version, contents[i])
time.Sleep(time.Second) // ensure CreatedAt of caches are different
}
reqKeys := strings.Join([]string{
@@ -361,29 +370,33 @@ func TestHandler(t *testing.T) {
key + "_a_b",
key + "_a",
}, ",")
var archiveLocation string
{
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
got := struct {
Result string `json:"result"`
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, "hit", got.Result)
assert.Equal(t, keys[1], got.CacheKey)
archiveLocation = got.ArchiveLocation
}
{
resp, err := http.Get(archiveLocation) //nolint:gosec
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
got, err := io.ReadAll(resp.Body)
require.NoError(t, err)
assert.Equal(t, contents[1], got)
}
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
/*
Expect `key_a_b` because:
- `key_a_b_x` doesn't match any caches.
- `key_a_b` matches `key_a_b` and `key_a_b_c`, but `key_a_b` is newer.
*/
except := 1
got := struct {
Result string `json:"result"`
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, "hit", got.Result)
assert.Equal(t, keys[except], got.CacheKey)
contentResp, err := http.Get(got.ArchiveLocation)
require.NoError(t, err)
require.Equal(t, 200, contentResp.StatusCode)
content, err := io.ReadAll(contentResp.Body)
require.NoError(t, err)
assert.Equal(t, contents[except], content)
})
t.Run("case insensitive", func(t *testing.T) {
@@ -409,6 +422,110 @@ func TestHandler(t *testing.T) {
assert.Equal(t, key+"_abc", got.CacheKey)
}
})
t.Run("exact keys are preferred (key 0)", func(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
keys := [3]string{
key + "_a",
key + "_a_b_c",
key + "_a_b",
}
contents := [3][]byte{
make([]byte, 100),
make([]byte, 200),
make([]byte, 300),
}
for i := range contents {
_, err := rand.Read(contents[i])
require.NoError(t, err)
uploadCacheNormally(t, base, keys[i], version, contents[i])
time.Sleep(time.Second) // ensure CreatedAt of caches are different
}
reqKeys := strings.Join([]string{
key + "_a",
key + "_a_b",
}, ",")
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
/*
Expect `key_a` because:
- `key_a` matches `key_a`, `key_a_b` and `key_a_b_c`, but `key_a` is an exact match.
- `key_a_b` matches `key_a_b` and `key_a_b_c`, but previous key had a match
*/
expect := 0
got := struct {
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, keys[expect], got.CacheKey)
contentResp, err := http.Get(got.ArchiveLocation)
require.NoError(t, err)
require.Equal(t, 200, contentResp.StatusCode)
content, err := io.ReadAll(contentResp.Body)
require.NoError(t, err)
assert.Equal(t, contents[expect], content)
})
t.Run("exact keys are preferred (key 1)", func(t *testing.T) {
version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
key := strings.ToLower(t.Name())
keys := [3]string{
key + "_a",
key + "_a_b_c",
key + "_a_b",
}
contents := [3][]byte{
make([]byte, 100),
make([]byte, 200),
make([]byte, 300),
}
for i := range contents {
_, err := rand.Read(contents[i])
require.NoError(t, err)
uploadCacheNormally(t, base, keys[i], version, contents[i])
time.Sleep(time.Second) // ensure CreatedAt of caches are different
}
reqKeys := strings.Join([]string{
"------------------------------------------------------",
key + "_a",
key + "_a_b",
}, ",")
resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
require.NoError(t, err)
require.Equal(t, 200, resp.StatusCode)
/*
Expect `key_a` because:
- `------------------------------------------------------` doesn't match any caches.
- `key_a` matches `key_a`, `key_a_b` and `key_a_b_c`, but `key_a` is an exact match.
- `key_a_b` matches `key_a_b` and `key_a_b_c`, but previous key had a match
*/
expect := 0
got := struct {
ArchiveLocation string `json:"archiveLocation"`
CacheKey string `json:"cacheKey"`
}{}
require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
assert.Equal(t, keys[expect], got.CacheKey)
contentResp, err := http.Get(got.ArchiveLocation)
require.NoError(t, err)
require.Equal(t, 200, contentResp.StatusCode)
content, err := io.ReadAll(contentResp.Body)
require.NoError(t, err)
assert.Equal(t, contents[expect], content)
})
}
func uploadCacheNormally(t *testing.T, base, key, version string, content []byte) {
@@ -469,3 +586,112 @@ func uploadCacheNormally(t *testing.T, base, key, version string, content []byte
assert.Equal(t, content, got)
}
}
func TestHandler_gcCache(t *testing.T) {
dir := filepath.Join(t.TempDir(), "artifactcache")
handler, err := StartHandler(dir, "", 0, nil)
require.NoError(t, err)
defer func() {
require.NoError(t, handler.Close())
}()
now := time.Now()
cases := []struct {
Cache *Cache
Kept bool
}{
{
// should be kept, since it's used recently and not too old.
Cache: &Cache{
Key: "test_key_1",
Version: "test_version",
Complete: true,
UsedAt: now.Unix(),
CreatedAt: now.Add(-time.Hour).Unix(),
},
Kept: true,
},
{
// should be removed, since it's not complete and not used for a while.
Cache: &Cache{
Key: "test_key_2",
Version: "test_version",
Complete: false,
UsedAt: now.Add(-(keepTemp + time.Second)).Unix(),
CreatedAt: now.Add(-(keepTemp + time.Hour)).Unix(),
},
Kept: false,
},
{
// should be removed, since it's not used for a while.
Cache: &Cache{
Key: "test_key_3",
Version: "test_version",
Complete: true,
UsedAt: now.Add(-(keepUnused + time.Second)).Unix(),
CreatedAt: now.Add(-(keepUnused + time.Hour)).Unix(),
},
Kept: false,
},
{
// should be removed, since it's used but too old.
Cache: &Cache{
Key: "test_key_3",
Version: "test_version",
Complete: true,
UsedAt: now.Unix(),
CreatedAt: now.Add(-(keepUsed + time.Second)).Unix(),
},
Kept: false,
},
{
// should be kept, since it has a newer edition but has been used recently.
Cache: &Cache{
Key: "test_key_1",
Version: "test_version",
Complete: true,
UsedAt: now.Add(-(keepOld - time.Minute)).Unix(),
CreatedAt: now.Add(-(time.Hour + time.Second)).Unix(),
},
Kept: true,
},
{
// should be removed, since it has a newer edition and has not been used recently.
Cache: &Cache{
Key: "test_key_1",
Version: "test_version",
Complete: true,
UsedAt: now.Add(-(keepOld + time.Second)).Unix(),
CreatedAt: now.Add(-(time.Hour + time.Second)).Unix(),
},
Kept: false,
},
}
db, err := handler.openDB()
require.NoError(t, err)
for _, c := range cases {
require.NoError(t, insertCache(db, c.Cache))
}
require.NoError(t, db.Close())
handler.gcAt = time.Time{} // ensure gcCache will not skip
handler.gcCache()
db, err = handler.openDB()
require.NoError(t, err)
for i, v := range cases {
t.Run(fmt.Sprintf("%d_%s", i, v.Cache.Key), func(t *testing.T) {
cache := &Cache{}
err = db.Get(v.Cache.ID, cache)
if v.Kept {
assert.NoError(t, err)
} else {
assert.ErrorIs(t, err, bolthold.ErrNotFound)
}
})
}
require.NoError(t, db.Close())
}


@@ -1,10 +1,5 @@
package artifactcache
import (
"crypto/sha256"
"fmt"
)
type Request struct {
Key string `json:"key" `
Version string `json:"version"`
@@ -29,16 +24,11 @@ func (c *Request) ToCache() *Cache {
}
type Cache struct {
ID uint64 `json:"id" boltholdKey:"ID"`
Key string `json:"key" boltholdIndex:"Key"`
Version string `json:"version" boltholdIndex:"Version"`
KeyVersionHash string `json:"keyVersionHash" boltholdUnique:"KeyVersionHash"`
Size int64 `json:"cacheSize"`
Complete bool `json:"complete"`
UsedAt int64 `json:"usedAt" boltholdIndex:"UsedAt"`
CreatedAt int64 `json:"createdAt" boltholdIndex:"CreatedAt"`
}
func (c *Cache) FillKeyVersionHash() {
c.KeyVersionHash = fmt.Sprintf("%x", sha256.Sum256([]byte(fmt.Sprintf("%s:%s", c.Key, c.Version))))
ID uint64 `json:"id" boltholdKey:"ID"`
Key string `json:"key" boltholdIndex:"Key"`
Version string `json:"version" boltholdIndex:"Version"`
Size int64 `json:"cacheSize"`
Complete bool `json:"complete" boltholdIndex:"Complete"`
UsedAt int64 `json:"usedAt" boltholdIndex:"UsedAt"`
CreatedAt int64 `json:"createdAt" boltholdIndex:"CreatedAt"`
}


@@ -97,7 +97,7 @@ func NewParallelExecutor(parallel int, executors ...Executor) Executor {
errs := make(chan error, len(executors))
if 1 > parallel {
log.Infof("Parallel tasks (%d) below minimum, setting to 1", parallel)
log.Debugf("Parallel tasks (%d) below minimum, setting to 1", parallel)
parallel = 1
}


@@ -6,6 +6,7 @@ import (
"context"
"github.com/docker/docker/api/types"
"github.com/nektos/act/pkg/common"
)
@@ -22,7 +23,8 @@ func NewDockerNetworkCreateExecutor(name string) common.Executor {
if err != nil {
return err
}
common.Logger(ctx).Debugf("%v", networks)
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
common.Logger(ctx).Debugf("Network %v exists", name)
@@ -56,7 +58,8 @@ func NewDockerNetworkRemoveExecutor(name string) common.Executor {
if err != nil {
return err
}
common.Logger(ctx).Debugf("%v", networks)
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("%v", networks)
for _, network := range networks {
if network.Name == name {
result, err := cli.NetworkInspect(ctx, network.ID, types.NetworkInspectOptions{})


@@ -445,7 +445,8 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
ExposedPorts: input.ExposedPorts,
Tty: isTerminal,
}
logger.Debugf("Common container.Config ==> %+v", config)
// For Gitea, reduce log noise
// logger.Debugf("Common container.Config ==> %+v", config)
if len(input.Cmd) != 0 {
config.Cmd = input.Cmd
@@ -489,7 +490,8 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
PortBindings: input.PortBindings,
AutoRemove: input.AutoRemove,
}
logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)
// For Gitea, reduce log noise
// logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)
config, hostConfig, err := cr.mergeContainerConfigs(ctx, config, hostConfig)
if err != nil {
@@ -500,7 +502,8 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
config, hostConfig = cr.sanitizeConfig(ctx, config, hostConfig)
var networkingConfig *network.NetworkingConfig
logger.Debugf("input.NetworkAliases ==> %v", input.NetworkAliases)
// For Gitea, reduce log noise
// logger.Debugf("input.NetworkAliases ==> %v", input.NetworkAliases)
n := hostConfig.NetworkMode
// IsUserDefined and IsHost are broken on windows
if n.IsUserDefined() && n != "host" && len(input.NetworkAliases) > 0 {
@@ -730,7 +733,7 @@ func (cr *containerReference) CopyTarStream(ctx context.Context, destPath string
tw := tar.NewWriter(buf)
_ = tw.WriteHeader(&tar.Header{
Name: destPath,
Mode: 777,
Mode: 0o777,
Typeflag: tar.TypeDir,
})
tw.Close()


@@ -2,7 +2,9 @@ package container
import (
"bufio"
"bytes"
"context"
"fmt"
"io"
"net"
"strings"
@@ -79,6 +81,11 @@ func (m *mockDockerClient) ContainerExecInspect(ctx context.Context, execID stri
return args.Get(0).(types.ContainerExecInspect), args.Error(1)
}
func (m *mockDockerClient) CopyToContainer(ctx context.Context, id string, path string, content io.Reader, options types.CopyToContainerOptions) error {
args := m.Called(ctx, id, path, content, options)
return args.Error(0)
}
type endlessReader struct {
io.Reader
}
@@ -169,6 +176,78 @@ func TestDockerExecFailure(t *testing.T) {
client.AssertExpectations(t)
}
func TestDockerCopyTarStream(t *testing.T) {
ctx := context.Background()
conn := &mockConn{}
client := &mockDockerClient{}
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
client.On("CopyToContainer", ctx, "123", "/var/run/act", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
cr := &containerReference{
id: "123",
cli: client,
input: &NewContainerInput{
Image: "image",
},
}
_ = cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
conn.AssertExpectations(t)
client.AssertExpectations(t)
}
func TestDockerCopyTarStreamErrorInCopyFiles(t *testing.T) {
ctx := context.Background()
conn := &mockConn{}
merr := fmt.Errorf("Failure")
client := &mockDockerClient{}
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
cr := &containerReference{
id: "123",
cli: client,
input: &NewContainerInput{
Image: "image",
},
}
err := cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
assert.ErrorIs(t, err, merr)
conn.AssertExpectations(t)
client.AssertExpectations(t)
}
func TestDockerCopyTarStreamErrorInMkdir(t *testing.T) {
ctx := context.Background()
conn := &mockConn{}
merr := fmt.Errorf("Failure")
client := &mockDockerClient{}
client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
client.On("CopyToContainer", ctx, "123", "/var/run/act", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
cr := &containerReference{
id: "123",
cli: client,
input: &NewContainerInput{
Image: "image",
},
}
err := cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
assert.ErrorIs(t, err, merr)
conn.AssertExpectations(t)
client.AssertExpectations(t)
}
// Type assert containerReference implements ExecutionsEnvironment
var _ ExecutionsEnvironment = &containerReference{}


@@ -16,6 +16,7 @@ func NewInterpeter(
gitCtx *model.GithubContext,
results map[string]*JobResult,
vars map[string]string,
inputs map[string]interface{},
) exprparser.Interpreter {
strategy := make(map[string]interface{})
if job.Strategy != nil {
@@ -62,7 +63,7 @@ func NewInterpeter(
Strategy: strategy,
Matrix: matrix,
Needs: using,
Inputs: nil, // not supported yet
Inputs: inputs,
Vars: vars,
}


@@ -8,6 +8,7 @@ import (
"gopkg.in/yaml.v3"
"github.com/nektos/act/pkg/exprparser"
"github.com/nektos/act/pkg/model"
)
@@ -40,6 +41,10 @@ func Parse(content []byte, options ...ParseOption) ([]*SingleWorkflow, error) {
if err != nil {
return nil, fmt.Errorf("invalid jobs: %w", err)
}
evaluator := NewExpressionEvaluator(exprparser.NewInterpeter(&exprparser.EvaluationEnvironment{Github: pc.gitContext, Vars: pc.vars}, exprparser.Config{}))
workflow.RunName = evaluator.Interpolate(workflow.RunName)
for i, id := range ids {
job := jobs[i]
matricxes, err := getMatrixes(origin.GetJob(id))
@@ -51,19 +56,21 @@ func Parse(content []byte, options ...ParseOption) ([]*SingleWorkflow, error) {
if job.Name == "" {
job.Name = id
}
job.Name = nameWithMatrix(job.Name, matrix)
job.Strategy.RawMatrix = encodeMatrix(matrix)
evaluator := NewExpressionEvaluator(NewInterpeter(id, origin.GetJob(id), matrix, pc.gitContext, results, pc.vars))
evaluator := NewExpressionEvaluator(NewInterpeter(id, origin.GetJob(id), matrix, pc.gitContext, results, pc.vars, nil))
job.Name = nameWithMatrix(job.Name, matrix, evaluator)
runsOn := origin.GetJob(id).RunsOn()
for i, v := range runsOn {
runsOn[i] = evaluator.Interpolate(v)
}
job.RawRunsOn = encodeRunsOn(runsOn)
swf := &SingleWorkflow{
Name: workflow.Name,
RawOn: workflow.RawOn,
Env: workflow.Env,
Defaults: workflow.Defaults,
Name: workflow.Name,
RawOn: workflow.RawOn,
Env: workflow.Env,
Defaults: workflow.Defaults,
RawPermissions: workflow.RawPermissions,
RunName: workflow.RunName,
}
if err := swf.SetJob(id, job); err != nil {
return nil, fmt.Errorf("SetJob: %w", err)
@@ -134,12 +141,16 @@ func encodeRunsOn(runsOn []string) yaml.Node {
return node
}
func nameWithMatrix(name string, m map[string]interface{}) string {
func nameWithMatrix(name string, m map[string]interface{}, evaluator *ExpressionEvaluator) string {
if len(m) == 0 {
return name
}
return name + " " + matrixName(m)
if !strings.Contains(name, "${{") || !strings.Contains(name, "}}") {
return name + " " + matrixName(m)
}
return evaluator.Interpolate(name)
}
func matrixName(m map[string]interface{}) string {


@@ -47,6 +47,11 @@ func TestParse(t *testing.T) {
options: nil,
wantErr: false,
},
{
name: "job_name_with_matrix",
options: nil,
wantErr: false,
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {


@@ -1,6 +1,7 @@
package jobparser
import (
"bytes"
"fmt"
"github.com/nektos/act/pkg/model"
@@ -9,11 +10,13 @@ import (
// SingleWorkflow is a workflow with single job and single matrix
type SingleWorkflow struct {
Name string `yaml:"name,omitempty"`
RawOn yaml.Node `yaml:"on,omitempty"`
Env map[string]string `yaml:"env,omitempty"`
RawJobs yaml.Node `yaml:"jobs,omitempty"`
Defaults Defaults `yaml:"defaults,omitempty"`
Name string `yaml:"name,omitempty"`
RawOn yaml.Node `yaml:"on,omitempty"`
Env map[string]string `yaml:"env,omitempty"`
RawJobs yaml.Node `yaml:"jobs,omitempty"`
Defaults Defaults `yaml:"defaults,omitempty"`
RawPermissions yaml.Node `yaml:"permissions,omitempty"`
RunName string `yaml:"run-name,omitempty"`
}
func (w *SingleWorkflow) Job() (string, *Job) {
@@ -82,6 +85,8 @@ type Job struct {
Uses string `yaml:"uses,omitempty"`
With map[string]interface{} `yaml:"with,omitempty"`
RawSecrets yaml.Node `yaml:"secrets,omitempty"`
RawConcurrency *model.RawConcurrency `yaml:"concurrency,omitempty"`
RawPermissions yaml.Node `yaml:"permissions,omitempty"`
}
func (j *Job) Clone() *Job {
@@ -104,6 +109,8 @@ func (j *Job) Clone() *Job {
Uses: j.Uses,
With: j.With,
RawSecrets: j.RawSecrets,
RawConcurrency: j.RawConcurrency,
RawPermissions: j.RawPermissions,
}
}
@@ -172,10 +179,20 @@ type RunDefaults struct {
WorkingDirectory string `yaml:"working-directory,omitempty"`
}
type WorkflowDispatchInput struct {
Name string `yaml:"name"`
Description string `yaml:"description"`
Required bool `yaml:"required"`
Default string `yaml:"default"`
Type string `yaml:"type"`
Options []string `yaml:"options"`
}
type Event struct {
Name string
acts map[string][]string
schedules []map[string]string
inputs []WorkflowDispatchInput
}
func (evt *Event) IsSchedule() bool {
@@ -190,6 +207,126 @@ func (evt *Event) Schedules() []map[string]string {
return evt.schedules
}
func (evt *Event) Inputs() []WorkflowDispatchInput {
return evt.inputs
}
func parseWorkflowDispatchInputs(inputs map[string]interface{}) ([]WorkflowDispatchInput, error) {
var results []WorkflowDispatchInput
for name, input := range inputs {
inputMap, ok := input.(map[string]interface{})
if !ok {
return nil, fmt.Errorf("invalid input: %v", input)
}
input := WorkflowDispatchInput{
Name: name,
}
if desc, ok := inputMap["description"].(string); ok {
input.Description = desc
}
if required, ok := inputMap["required"].(bool); ok {
input.Required = required
}
if defaultVal, ok := inputMap["default"].(string); ok {
input.Default = defaultVal
}
if inputType, ok := inputMap["type"].(string); ok {
input.Type = inputType
}
if options, ok := inputMap["options"].([]string); ok {
input.Options = options
} else if options, ok := inputMap["options"].([]interface{}); ok {
for _, option := range options {
if opt, ok := option.(string); ok {
input.Options = append(input.Options, opt)
}
}
}
results = append(results, input)
}
return results, nil
}
func ReadWorkflowRawConcurrency(content []byte) (*model.RawConcurrency, error) {
w := new(model.Workflow)
err := yaml.NewDecoder(bytes.NewReader(content)).Decode(w)
return w.RawConcurrency, err
}
func EvaluateConcurrency(rc *model.RawConcurrency, jobID string, job *Job, gitCtx map[string]any, results map[string]*JobResult, vars map[string]string, inputs map[string]any) (string, bool, error) {
actJob := &model.Job{}
if job != nil {
actJob.Strategy = &model.Strategy{
FailFastString: job.Strategy.FailFastString,
MaxParallelString: job.Strategy.MaxParallelString,
RawMatrix: job.Strategy.RawMatrix,
}
actJob.Strategy.FailFast = actJob.Strategy.GetFailFast()
actJob.Strategy.MaxParallel = actJob.Strategy.GetMaxParallel()
}
matrix := make(map[string]any)
matrixes, err := actJob.GetMatrixes()
if err != nil {
return "", false, err
}
if len(matrixes) > 0 {
matrix = matrixes[0]
}
evaluator := NewExpressionEvaluator(NewInterpeter(jobID, actJob, matrix, toGitContext(gitCtx), results, vars, inputs))
var node yaml.Node
if err := node.Encode(rc); err != nil {
return "", false, fmt.Errorf("failed to encode concurrency: %w", err)
}
if err := evaluator.EvaluateYamlNode(&node); err != nil {
return "", false, fmt.Errorf("failed to evaluate concurrency: %w", err)
}
var evaluated model.RawConcurrency
if err := node.Decode(&evaluated); err != nil {
return "", false, fmt.Errorf("failed to unmarshal evaluated concurrency: %w", err)
}
if evaluated.RawExpression != "" {
return evaluated.RawExpression, false, nil
}
return evaluated.Group, evaluated.CancelInProgress == "true", nil
}
func toGitContext(input map[string]any) *model.GithubContext {
gitContext := &model.GithubContext{
EventPath: asString(input["event_path"]),
Workflow: asString(input["workflow"]),
RunID: asString(input["run_id"]),
RunNumber: asString(input["run_number"]),
Actor: asString(input["actor"]),
Repository: asString(input["repository"]),
EventName: asString(input["event_name"]),
Sha: asString(input["sha"]),
Ref: asString(input["ref"]),
RefName: asString(input["ref_name"]),
RefType: asString(input["ref_type"]),
HeadRef: asString(input["head_ref"]),
BaseRef: asString(input["base_ref"]),
Token: asString(input["token"]),
Workspace: asString(input["workspace"]),
Action: asString(input["action"]),
ActionPath: asString(input["action_path"]),
ActionRef: asString(input["action_ref"]),
ActionRepository: asString(input["action_repository"]),
Job: asString(input["job"]),
RepositoryOwner: asString(input["repository_owner"]),
RetentionDays: asString(input["retention_days"]),
}
event, ok := input["event"].(map[string]any)
if ok {
gitContext.Event = event
}
return gitContext
}
func ParseRawOn(rawOn *yaml.Node) ([]*Event, error) {
switch rawOn.Kind {
case yaml.ScalarNode:
@@ -218,79 +355,126 @@ func ParseRawOn(rawOn *yaml.Node) ([]*Event, error) {
}
return res, nil
case yaml.MappingNode:
events, triggers, err := parseMappingNode[interface{}](rawOn)
events, triggers, err := parseMappingNode[yaml.Node](rawOn)
if err != nil {
return nil, err
}
res := make([]*Event, 0, len(events))
for i, k := range events {
v := triggers[i]
if v == nil {
switch v.Kind {
case yaml.ScalarNode:
res = append(res, &Event{
Name: k,
acts: map[string][]string{},
})
continue
}
switch t := v.(type) {
case string:
res = append(res, &Event{
Name: k,
acts: map[string][]string{},
})
case []string:
res = append(res, &Event{
Name: k,
acts: map[string][]string{},
})
case map[string]interface{}:
acts := make(map[string][]string, len(t))
for act, branches := range t {
switch b := branches.(type) {
case string:
acts[act] = []string{b}
case []string:
acts[act] = b
case []interface{}:
acts[act] = make([]string, len(b))
for i, v := range b {
var ok bool
if acts[act][i], ok = v.(string); !ok {
return nil, fmt.Errorf("unknown on type: %#v", branches)
}
}
default:
return nil, fmt.Errorf("unknown on type: %#v", branches)
}
}
res = append(res, &Event{
Name: k,
acts: acts,
})
case []interface{}:
if k != "schedule" {
return nil, fmt.Errorf("unknown on type: %#v", v)
case yaml.SequenceNode:
var t []interface{}
err := v.Decode(&t)
if err != nil {
return nil, err
}
schedules := make([]map[string]string, len(t))
for i, tt := range t {
vv, ok := tt.(map[string]interface{})
if !ok {
return nil, fmt.Errorf("unknown on type: %#v", v)
}
schedules[i] = make(map[string]string, len(vv))
for k, vvv := range vv {
var ok bool
if schedules[i][k], ok = vvv.(string); !ok {
return nil, fmt.Errorf("unknown on type: %#v", v)
if k == "schedule" {
for i, tt := range t {
vv, ok := tt.(map[string]interface{})
if !ok {
return nil, fmt.Errorf("unknown on type(schedule): %#v", v)
}
schedules[i] = make(map[string]string, len(vv))
for k, vvv := range vv {
var ok bool
if schedules[i][k], ok = vvv.(string); !ok {
return nil, fmt.Errorf("unknown on type(schedule): %#v", v)
}
}
}
}
if len(schedules) == 0 {
schedules = nil
}
res = append(res, &Event{
Name: k,
schedules: schedules,
})
case yaml.MappingNode:
acts := make(map[string][]string, len(v.Content)/2)
var inputs []WorkflowDispatchInput
expectedKey := true
var act string
for _, content := range v.Content {
if expectedKey {
if content.Kind != yaml.ScalarNode {
return nil, fmt.Errorf("key type not string: %#v", content)
}
act = ""
err := content.Decode(&act)
if err != nil {
return nil, err
}
} else {
switch content.Kind {
case yaml.SequenceNode:
var t []string
err := content.Decode(&t)
if err != nil {
return nil, err
}
acts[act] = t
case yaml.ScalarNode:
var t string
err := content.Decode(&t)
if err != nil {
return nil, err
}
acts[act] = []string{t}
case yaml.MappingNode:
if k != "workflow_dispatch" || act != "inputs" {
return nil, fmt.Errorf("map should only for workflow_dispatch but %s: %#v", act, content)
}
var key string
for i, vv := range content.Content {
if i%2 == 0 {
if vv.Kind != yaml.ScalarNode {
return nil, fmt.Errorf("key type not string: %#v", vv)
}
key = ""
if err := vv.Decode(&key); err != nil {
return nil, err
}
} else {
if vv.Kind != yaml.MappingNode {
return nil, fmt.Errorf("key type not map(%s): %#v", key, vv)
}
input := WorkflowDispatchInput{}
if err := vv.Decode(&input); err != nil {
return nil, err
}
input.Name = key
inputs = append(inputs, input)
}
}
default:
return nil, fmt.Errorf("unknown on type: %#v", content)
}
}
expectedKey = !expectedKey
}
if len(inputs) == 0 {
inputs = nil
}
if len(acts) == 0 {
acts = nil
}
res = append(res, &Event{
Name: k,
acts: acts,
inputs: inputs,
})
default:
return nil, fmt.Errorf("unknown on type: %#v", v)
return nil, fmt.Errorf("unknown on type: %v", v.Kind)
}
}
return res, nil
@@ -331,3 +515,12 @@ func parseMappingNode[T any](node *yaml.Node) ([]string, []T, error) {
return scalars, datas, nil
}
func asString(v interface{}) string {
if v == nil {
return ""
} else if s, ok := v.(string); ok {
return s
}
return ""
}


@@ -58,6 +58,19 @@ func TestParseRawOn(t *testing.T) {
},
},
},
{
input: "on:\n push:\n branches: main",
result: []*Event{
{
Name: "push",
acts: map[string][]string{
"branches": {
"main",
},
},
},
},
},
{
input: "on:\n branch_protection_rule:\n types: [created, deleted]",
result: []*Event{
@@ -186,6 +199,60 @@ func TestParseRawOn(t *testing.T) {
},
},
},
{
input: `on:
workflow_dispatch:
inputs:
logLevel:
description: 'Log level'
required: true
default: 'warning'
type: choice
options:
- info
- warning
- debug
tags:
description: 'Test scenario tags'
required: false
type: boolean
environment:
description: 'Environment to run tests against'
type: environment
required: true
push:
`,
result: []*Event{
{
Name: "workflow_dispatch",
inputs: []WorkflowDispatchInput{
{
Name: "logLevel",
Description: "Log level",
Required: true,
Default: "warning",
Type: "choice",
Options: []string{"info", "warning", "debug"},
},
{
Name: "tags",
Description: "Test scenario tags",
Required: false,
Type: "boolean",
},
{
Name: "environment",
Description: "Environment to run tests against",
Type: "environment",
Required: true,
},
},
},
{
Name: "push",
},
},
},
}
for _, kase := range kases {
t.Run(kase.input, func(t *testing.T) {
@@ -230,8 +297,7 @@ func TestParseMappingNode(t *testing.T) {
{
input: "on:\n push:\n branches:\n - master",
scalars: []string{"push"},
datas: []interface {
}{
datas: []interface{}{
map[string]interface{}{
"branches": []interface{}{"master"},
},


@@ -0,0 +1,14 @@
name: test
jobs:
job1:
strategy:
matrix:
os: [ubuntu-22.04, ubuntu-20.04]
version: [1.17, 1.18, 1.19]
runs-on: ${{ matrix.os }}
name: test_version_${{ matrix.version }}_on_${{ matrix.os }}
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version


@@ -0,0 +1,101 @@
name: test
jobs:
job1:
name: test_version_1.17_on_ubuntu-20.04
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.17
---
name: test
jobs:
job1:
name: test_version_1.18_on_ubuntu-20.04
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.18
---
name: test
jobs:
job1:
name: test_version_1.19_on_ubuntu-20.04
runs-on: ubuntu-20.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-20.04
version:
- 1.19
---
name: test
jobs:
job1:
name: test_version_1.17_on_ubuntu-22.04
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.17
---
name: test
jobs:
job1:
name: test_version_1.18_on_ubuntu-22.04
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.18
---
name: test
jobs:
job1:
name: test_version_1.19_on_ubuntu-22.04
runs-on: ubuntu-22.04
steps:
- uses: actions/setup-go@v3
with:
go-version: ${{ matrix.version }}
- run: uname -a && go version
strategy:
matrix:
os:
- ubuntu-22.04
version:
- 1.19


@@ -1,6 +1,7 @@
package model
import (
"crypto/sha256"
"fmt"
"io"
"reflect"
@@ -8,19 +9,22 @@ import (
"strconv"
"strings"
"github.com/nektos/act/pkg/common"
log "github.com/sirupsen/logrus"
"gopkg.in/yaml.v3"
"github.com/nektos/act/pkg/common"
)
// Workflow is the structure of the files in .github/workflows
type Workflow struct {
File string
Name string `yaml:"name"`
RawOn yaml.Node `yaml:"on"`
Env map[string]string `yaml:"env"`
Jobs map[string]*Job `yaml:"jobs"`
Defaults Defaults `yaml:"defaults"`
File string
Name string `yaml:"name"`
RawOn yaml.Node `yaml:"on"`
Env map[string]string `yaml:"env"`
Jobs map[string]*Job `yaml:"jobs"`
Defaults Defaults `yaml:"defaults"`
RawConcurrency *RawConcurrency `yaml:"concurrency"`
RawPermissions yaml.Node `yaml:"permissions"`
}
// On events for the workflow
@@ -197,6 +201,7 @@ type Job struct {
Uses string `yaml:"uses"`
With map[string]interface{} `yaml:"with"`
RawSecrets yaml.Node `yaml:"secrets"`
RawPermissions yaml.Node `yaml:"permissions"`
Result string
}
@@ -367,7 +372,7 @@ func environment(yml yaml.Node) map[string]string {
return env
}
// Environments returns string-based key=value map for a job
// Environment returns string-based key=value map for a job
func (j *Job) Environment() map[string]string {
return environment(j.Env)
}
@@ -606,7 +611,7 @@ func (s *Step) String() string {
return s.ID
}
// Environments returns string-based key=value map for a step
// Environment returns string-based key=value map for a step
func (s *Step) Environment() map[string]string {
return environment(s.Env)
}
@@ -716,6 +721,12 @@ func (s *Step) Type() StepType {
return StepTypeUsesActionRemote
}
// UsesHash returns a hash of the uses string.
// For Gitea.
func (s *Step) UsesHash() string {
return fmt.Sprintf("%x", sha256.Sum256([]byte(s.Uses)))
}
// ReadWorkflow returns a list of jobs for a given workflow file reader
func ReadWorkflow(in io.Reader) (*Workflow, error) {
w := new(Workflow)
@@ -761,3 +772,28 @@ func decodeNode(node yaml.Node, out interface{}) bool {
}
return true
}
// For Gitea
// RawConcurrency represents a workflow concurrency or a job concurrency with uninterpolated options
type RawConcurrency struct {
Group string `yaml:"group,omitempty"`
CancelInProgress string `yaml:"cancel-in-progress,omitempty"`
RawExpression string `yaml:"-,omitempty"`
}
type objectConcurrency RawConcurrency
func (r *RawConcurrency) UnmarshalYAML(n *yaml.Node) error {
if err := n.Decode(&r.RawExpression); err == nil {
return nil
}
return n.Decode((*objectConcurrency)(r))
}
func (r *RawConcurrency) MarshalYAML() (interface{}, error) {
if r.RawExpression != "" {
return r.RawExpression, nil
}
return (*objectConcurrency)(r), nil
}
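
The new string/object duality can be exercised on its own. Below is a minimal standalone sketch (the type and methods are copied from the hunk above; the `main` wrapper and sample documents are illustrative only, assuming `gopkg.in/yaml.v3`) showing that both the simplified `concurrency: ci-${{ github.ref }}` notation and the `group`/`cancel-in-progress` mapping decode through the same `UnmarshalYAML`:

```go
package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

// Copied from the pkg/model hunk above so the sketch compiles standalone.
type RawConcurrency struct {
	Group            string `yaml:"group,omitempty"`
	CancelInProgress string `yaml:"cancel-in-progress,omitempty"`
	RawExpression    string `yaml:"-,omitempty"`
}

type objectConcurrency RawConcurrency

func (r *RawConcurrency) UnmarshalYAML(n *yaml.Node) error {
	// A scalar node decodes straight into RawExpression...
	if err := n.Decode(&r.RawExpression); err == nil {
		return nil
	}
	// ...anything else falls back to plain struct decoding
	// (objectConcurrency avoids recursing into this method).
	return n.Decode((*objectConcurrency)(r))
}

func main() {
	var simplified, object RawConcurrency
	if err := yaml.Unmarshal([]byte(`ci-${{ github.ref }}`), &simplified); err != nil {
		panic(err)
	}
	if err := yaml.Unmarshal([]byte("group: ci\ncancel-in-progress: 'true'"), &object); err != nil {
		panic(err)
	}
	fmt.Println(simplified.RawExpression)              // ci-${{ github.ref }}
	fmt.Println(object.Group, object.CancelInProgress) // ci true
}
```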

View File

@@ -603,3 +603,37 @@ func TestReadWorkflow_WorkflowDispatchConfig(t *testing.T) {
Type: "choice",
}, workflowDispatch.Inputs["logLevel"])
}
func TestStep_UsesHash(t *testing.T) {
type fields struct {
Uses string
}
tests := []struct {
name string
fields fields
want string
}{
{
name: "regular",
fields: fields{
Uses: "https://gitea.com/testa/testb@v3",
},
want: "ae437878e9f285bd7518c58664f9fabbb12d05feddd7169c01702a2a14322aa8",
},
{
name: "empty",
fields: fields{
Uses: "",
},
want: "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
},
}
for _, tt := range tests {
t.Run(tt.name, func(t *testing.T) {
s := &Step{
Uses: tt.fields.Uses,
}
assert.Equalf(t, tt.want, s.UsesHash(), "UsesHash()")
})
}
}
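
The expected values in this test are plain hex-encoded SHA-256 digests of the `uses` string, so they can be reproduced outside the test; a quick standalone check using the same construction as `UsesHash` above:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

func main() {
	// Same construction as Step.UsesHash: hex-encoded SHA-256 of the uses string.
	for _, uses := range []string{"https://gitea.com/testa/testb@v3", ""} {
		fmt.Printf("%x\n", sha256.Sum256([]byte(uses)))
	}
	// Output matches the test expectations above:
	// ae437878e9f285bd7518c58664f9fabbb12d05feddd7169c01702a2a14322aa8
	// e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
}
```

The fixed-length digest is presumably also why later hunks swap `safeFilename(stepModel.Uses)` for `stepModel.UsesHash()` when building action cache directories: the hash stays filesystem-safe no matter what the `uses` string contains, including full URLs.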

View File

@@ -112,7 +112,8 @@ func readActionImpl(ctx context.Context, step *model.Step, actionDir string, act
defer closer.Close()
action, err := model.ReadAction(reader)
logger.Debugf("Read action %v from '%s'", action, "Unknown")
// For Gitea, reduce log noise
// logger.Debugf("Read action %v from '%s'", action, "Unknown")
return action, err
}
@@ -162,7 +163,8 @@ func runActionImpl(step actionStep, actionDir string, remoteAction *remoteAction
}
action := step.getActionModel()
logger.Debugf("About to run action %v", action)
// For Gitea, reduce log noise
// logger.Debugf("About to run action %v", action)
err := setupActionEnv(ctx, step, remoteAction)
if err != nil {
@@ -533,7 +535,7 @@ func runPreStep(step actionStep) common.Executor {
var actionPath string
if _, ok := step.(*stepActionRemote); ok {
actionPath = newRemoteAction(stepModel.Uses).Path
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(stepModel.Uses))
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), stepModel.UsesHash())
} else {
actionDir = filepath.Join(rc.Config.Workdir, stepModel.Uses)
actionPath = ""
@@ -577,7 +579,7 @@ func runPreStep(step actionStep) common.Executor {
var actionPath string
if _, ok := step.(*stepActionRemote); ok {
actionPath = newRemoteAction(stepModel.Uses).Path
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(stepModel.Uses))
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), stepModel.UsesHash())
} else {
actionDir = filepath.Join(rc.Config.Workdir, stepModel.Uses)
actionPath = ""
@@ -663,7 +665,7 @@ func runPostStep(step actionStep) common.Executor {
var actionPath string
if _, ok := step.(*stepActionRemote); ok {
actionPath = newRemoteAction(stepModel.Uses).Path
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(stepModel.Uses))
actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), stepModel.UsesHash())
} else {
actionDir = filepath.Join(rc.Config.Workdir, stepModel.Uses)
actionPath = ""

View File

@@ -27,7 +27,7 @@ func evaluateCompositeInputAndEnv(ctx context.Context, parent *RunContext, step
envKey := regexp.MustCompile("[^A-Z0-9-]").ReplaceAllString(strings.ToUpper(inputID), "_")
envKey = fmt.Sprintf("INPUT_%s", strings.ToUpper(envKey))
// lookup if key is defined in the step but the the already
// lookup if key is defined in the step but the already
// evaluated value from the environment
_, defined := step.getStepModel().With[inputID]
if value, ok := stepEnv[envKey]; defined && ok {

View File

@@ -106,7 +106,7 @@ func (rc *RunContext) NewExpressionEvaluatorWithEnv(ctx context.Context, env map
//go:embed hashfiles/index.js
var hashfiles string
// NewExpressionEvaluator creates a new evaluator
// NewStepExpressionEvaluator creates a new evaluator
func (rc *RunContext) NewStepExpressionEvaluator(ctx context.Context, step step) ExpressionEvaluator {
// todo: cleanup EvaluationEnvironment creation
job := rc.Run.Job()

View File

@@ -185,7 +185,8 @@ func setJobResult(ctx context.Context, info jobInfo, rc *RunContext, success boo
info.result(jobResult)
if rc.caller != nil {
// set reusable workflow job result
rc.caller.runContext.result(jobResult)
rc.caller.setReusedWorkflowJobResult(rc.JobName, jobResult) // For Gitea
return
}
jobResultMessage := "succeeded"

View File

@@ -52,7 +52,7 @@ func Masks(ctx context.Context) *[]string {
return &[]string{}
}
// WithLogger adds a value to the context for the logger
// WithMasks adds a value to the context for the logger
func WithMasks(ctx context.Context, masks *[]string) context.Context {
return context.WithValue(ctx, masksContextKeyVal, masks)
}

View File

@@ -54,9 +54,17 @@ func newLocalReusableWorkflowExecutor(rc *RunContext) common.Executor {
func newRemoteReusableWorkflowExecutor(rc *RunContext) common.Executor {
uses := rc.Run.Job().Uses
remoteReusableWorkflow := newRemoteReusableWorkflowWithPlat(rc.Config.GitHubInstance, uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
var remoteReusableWorkflow *remoteReusableWorkflow
if strings.HasPrefix(uses, "http://") || strings.HasPrefix(uses, "https://") {
remoteReusableWorkflow = newRemoteReusableWorkflowFromAbsoluteURL(uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format http(s)://{domain}/{owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
}
} else {
remoteReusableWorkflow = newRemoteReusableWorkflowWithPlat(rc.Config.GitHubInstance, uses)
if remoteReusableWorkflow == nil {
return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
}
}
// uses with safe filename makes the target directory look something like this {owner}-{repo}-.github-workflows-{filename}@{ref}
@@ -167,7 +175,11 @@ func newReusableWorkflowExecutor(rc *RunContext, directory string, workflow stri
return err
}
return runner.NewPlanExecutor(plan)(ctx)
// return runner.NewPlanExecutor(plan)(ctx)
return common.NewPipelineExecutor( // For Gitea
runner.NewPlanExecutor(plan),
setReusedWorkflowCallerResult(rc, runner),
)(ctx)
}
}
@@ -177,6 +189,8 @@ func NewReusableWorkflowRunner(rc *RunContext) (Runner, error) {
eventJSON: rc.EventJSON,
caller: &caller{
runContext: rc,
reusedWorkflowJobResults: map[string]string{}, // For Gitea
},
}
@@ -226,6 +240,24 @@ func newRemoteReusableWorkflowWithPlat(url, uses string) *remoteReusableWorkflow
}
}
// For Gitea
// newRemoteReusableWorkflowFromAbsoluteURL creates a `remoteReusableWorkflow` from an absolute URL
func newRemoteReusableWorkflowFromAbsoluteURL(uses string) *remoteReusableWorkflow {
r := regexp.MustCompile(`^(https?://.*)/([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)
matches := r.FindStringSubmatch(uses)
if len(matches) != 7 {
return nil
}
return &remoteReusableWorkflow{
URL: matches[1],
Org: matches[2],
Repo: matches[3],
GitPlatform: matches[4],
Filename: matches[5],
Ref: matches[6],
}
}
// deprecated: use newRemoteReusableWorkflowWithPlat
func newRemoteReusableWorkflow(uses string) *remoteReusableWorkflow {
// GitHub docs:
@@ -243,3 +275,47 @@ func newRemoteReusableWorkflow(uses string) *remoteReusableWorkflow {
URL: "https://github.com",
}
}
// For Gitea
func setReusedWorkflowCallerResult(rc *RunContext, runner Runner) common.Executor {
return func(ctx context.Context) error {
logger := common.Logger(ctx)
runnerImpl, ok := runner.(*runnerImpl)
if !ok {
logger.Warn("Failed to get caller from runner")
return nil
}
caller := runnerImpl.caller
allJobDone := true
hasFailure := false
for _, result := range caller.reusedWorkflowJobResults {
if result == "pending" {
allJobDone = false
break
}
if result == "failure" {
hasFailure = true
}
}
if allJobDone {
reusedWorkflowJobResult := "success"
reusedWorkflowJobResultMessage := "succeeded"
if hasFailure {
reusedWorkflowJobResult = "failure"
reusedWorkflowJobResultMessage = "failed"
}
if rc.caller != nil {
rc.caller.setReusedWorkflowJobResult(rc.JobName, reusedWorkflowJobResult)
} else {
rc.result(reusedWorkflowJobResult)
logger.WithField("jobResult", reusedWorkflowJobResult).Infof("\U0001F3C1 Job %s", reusedWorkflowJobResultMessage)
}
}
return nil
}
}
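
The accepted absolute-URL shape is easiest to see by running the new regexp directly; a minimal sketch (the pattern is copied from `newRemoteReusableWorkflowFromAbsoluteURL` above; the sample `uses` string is hypothetical):

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Pattern copied from newRemoteReusableWorkflowFromAbsoluteURL.
	r := regexp.MustCompile(`^(https?://.*)/([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)
	uses := "https://gitea.com/owner/repo/.gitea/workflows/ci.yml@main" // hypothetical example
	m := r.FindStringSubmatch(uses)
	if len(m) != 7 {
		fmt.Println("not an absolute reusable-workflow reference")
		return
	}
	fmt.Println("URL:", m[1])            // https://gitea.com
	fmt.Println("Org/Repo:", m[2], m[3]) // owner repo
	fmt.Println("GitPlatform:", m[4])    // gitea
	fmt.Println("Filename:", m[5])       // ci.yml
	fmt.Println("Ref:", m[6])            // main
}
```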

View File

@@ -92,7 +92,12 @@ func (rc *RunContext) GetEnv() map[string]string {
}
func (rc *RunContext) jobContainerName() string {
return createSimpleContainerName(rc.Config.ContainerNamePrefix, "WORKFLOW-"+rc.Run.Workflow.Name, "JOB-"+rc.Name)
nameParts := []string{rc.Config.ContainerNamePrefix, "WORKFLOW-" + rc.Run.Workflow.Name, "JOB-" + rc.Name}
if rc.caller != nil {
nameParts = append(nameParts, "CALLED-BY-"+rc.caller.runContext.JobName)
}
// return createSimpleContainerName(rc.Config.ContainerNamePrefix, "WORKFLOW-"+rc.Run.Workflow.Name, "JOB-"+rc.Name)
return createSimpleContainerName(nameParts...) // For Gitea
}
// Deprecated: use `networkNameForGitea`
@@ -653,6 +658,7 @@ func (rc *RunContext) Executor() (common.Executor, error) {
return func(ctx context.Context) error {
res, err := rc.isEnabled(ctx)
if err != nil {
rc.caller.setReusedWorkflowJobResult(rc.JobName, "failure") // For Gitea
return err
}
if res {
@@ -748,6 +754,10 @@ func (rc *RunContext) isEnabled(ctx context.Context) (bool, error) {
}
if !runJob {
if rc.caller != nil { // For Gitea
rc.caller.setReusedWorkflowJobResult(rc.JobName, "skipped")
return false, nil
}
l.WithField("jobResult", "skipped").Debugf("Skipping job '%s' due to '%s'", job.Name, job.If.Value)
return false, nil
}

View File

@@ -6,6 +6,7 @@ import (
"fmt"
"os"
"runtime"
"sync"
"time"
docker_container "github.com/docker/docker/api/types/container"
@@ -86,6 +87,9 @@ func (c Config) GetToken() string {
type caller struct {
runContext *RunContext
updateResultLock sync.Mutex // For Gitea
reusedWorkflowJobResults map[string]string // For Gitea
}
type runnerImpl struct {
@@ -206,6 +210,9 @@ func (runner *runnerImpl) NewPlanExecutor(plan *model.Plan) common.Executor {
if len(rc.String()) > maxJobNameLen {
maxJobNameLen = len(rc.String())
}
if rc.caller != nil { // For Gitea
rc.caller.setReusedWorkflowJobResult(rc.JobName, "pending")
}
stageExecutor = append(stageExecutor, func(ctx context.Context) error {
jobName := fmt.Sprintf("%-*s", maxJobNameLen, rc.String())
executor, err := rc.Executor()
@@ -277,3 +284,10 @@ func (runner *runnerImpl) newRunContext(ctx context.Context, run *model.Run, mat
return rc
}
// For Gitea
func (c *caller) setReusedWorkflowJobResult(jobName string, result string) {
c.updateResultLock.Lock()
defer c.updateResultLock.Unlock()
c.reusedWorkflowJobResults[jobName] = result
}
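
Combined with `setReusedWorkflowCallerResult` earlier, the intended lifecycle is: each called job is registered as "pending" when the plan is built, overwritten with "success", "failure", or "skipped" as it finishes, and the caller only derives an overall result once nothing is left pending. A standalone sketch of that aggregation (types and helper names here are illustrative, not act's real API):

```go
package main

import (
	"fmt"
	"sync"
)

type caller struct {
	mu      sync.Mutex // guards results, mirroring updateResultLock above
	results map[string]string
}

func (c *caller) set(job, result string) {
	c.mu.Lock()
	defer c.mu.Unlock()
	c.results[job] = result
}

// aggregate mirrors the loop in setReusedWorkflowCallerResult: any pending
// job means no verdict yet; otherwise one failure fails the whole call.
func (c *caller) aggregate() string {
	c.mu.Lock()
	defer c.mu.Unlock()
	hasFailure := false
	for _, r := range c.results {
		switch r {
		case "pending":
			return "pending"
		case "failure":
			hasFailure = true
		}
	}
	if hasFailure {
		return "failure"
	}
	return "success"
}

func main() {
	c := &caller{results: map[string]string{}}
	c.set("job1", "pending")
	c.set("job2", "pending")
	c.set("job1", "success")
	c.set("job2", "skipped")
	fmt.Println(c.aggregate()) // success
}
```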

View File

@@ -122,6 +122,15 @@ func runStepExecutor(step step, stage stepStage, executor common.Executor) commo
summaryFileCommand := path.Join("workflow", "SUMMARY.md")
(*step.getEnv())["GITHUB_STEP_SUMMARY"] = path.Join(actPath, summaryFileCommand)
{
// For Gitea
(*step.getEnv())["GITEA_OUTPUT"] = (*step.getEnv())["GITHUB_OUTPUT"]
(*step.getEnv())["GITEA_STATE"] = (*step.getEnv())["GITHUB_STATE"]
(*step.getEnv())["GITEA_PATH"] = (*step.getEnv())["GITHUB_PATH"]
(*step.getEnv())["GITEA_ENV"] = (*step.getEnv())["GITHUB_ENV"]
(*step.getEnv())["GITEA_STEP_SUMMARY"] = (*step.getEnv())["GITHUB_STEP_SUMMARY"]
}
_ = rc.JobContainer.Copy(actPath, &container.FileEntry{
Name: outputFileCommand,
Mode: 0o666,
@@ -221,7 +230,8 @@ func setupEnv(ctx context.Context, step step) error {
}
}
common.Logger(ctx).Debugf("setupEnv => %v", *step.getEnv())
// For Gitea, reduce log noise
// common.Logger(ctx).Debugf("setupEnv => %v", *step.getEnv())
return nil
}
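
The `GITEA_*` additions earlier in this file simply alias each command file to its `GITHUB_*` counterpart, so a step can write `echo "name=value" >> "$GITEA_OUTPUT"` and land in the same file act already reads for `$GITHUB_OUTPUT`. A trivial sketch of the copy (the sample path is hypothetical; act sets the real value under its runtime path):

```go
package main

import "fmt"

func main() {
	env := map[string]string{
		// Hypothetical value standing in for the real command file.
		"GITHUB_OUTPUT": "/var/run/act/workflow/outputcmd.txt",
	}
	// Same aliasing as the hunk above: plain copies, no separate files.
	for _, name := range []string{"OUTPUT", "STATE", "PATH", "ENV", "STEP_SUMMARY"} {
		env["GITEA_"+name] = env["GITHUB_"+name]
	}
	fmt.Println(env["GITEA_OUTPUT"] == env["GITHUB_OUTPUT"]) // true
}
```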

View File

@@ -108,7 +108,7 @@ func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
return err
}
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), safeFilename(sar.Step.Uses))
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), sar.Step.UsesHash())
gitClone := stepActionRemoteNewCloneExecutor(git.NewGitCloneExecutorInput{
URL: sar.remoteAction.CloneURL(sar.RunContext.Config.DefaultActionInstance),
Ref: sar.remoteAction.Ref,
@@ -177,7 +177,7 @@ func (sar *stepActionRemote) main() common.Executor {
return sar.RunContext.JobContainer.CopyDir(copyToPath, sar.RunContext.Config.Workdir+string(filepath.Separator)+".", sar.RunContext.Config.UseGitIgnore)(ctx)
}
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), safeFilename(sar.Step.Uses))
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), sar.Step.UsesHash())
return sar.runAction(sar, actionDir, sar.remoteAction)(ctx)
}),
@@ -236,7 +236,7 @@ func (sar *stepActionRemote) getActionModel() *model.Action {
func (sar *stepActionRemote) getCompositeRunContext(ctx context.Context) *RunContext {
if sar.compositeRunContext == nil {
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), safeFilename(sar.Step.Uses))
actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), sar.Step.UsesHash())
actionLocation := path.Join(actionDir, sar.remoteAction.Path)
_, containerActionDir := getContainerActionPaths(sar.getStepModel(), actionLocation, sar.RunContext)

View File

@@ -7,8 +7,8 @@ jobs:
- name: Install tools
run: |
apt update
apt install -y bind9-host
apt install -y iputils-ping
- name: Run hostname test
run: |
hostname -f
host $(hostname -f)
ping -c 4 $(hostname -f)