Compare commits: `nektos/v0.`...`v0.259.1`

86 commits
| Author | SHA1 | Date |
|---|---|---|
|  | 82a3640cf8 |  |
|  | e6fec7324e |  |
|  | 73f32886f2 |  |
|  | 15045b4fc0 |  |
|  | 67918333fa |  |
|  | c93462e19f |  |
|  | f3264cac20 |  |
|  | 4699c3b689 |  |
|  | 22d91e3ac3 |  |
|  | cdc6d4bc6a |  |
|  | 2069b04779 |  |
|  | 3813f40cba |  |
|  | eb19987893 |  |
|  | 545802b97b |  |
|  | 515c2c429d |  |
|  | a165e17878 |  |
|  | 56e103b4ba |  |
|  | 422cbdf446 |  |
|  | 8c56bd3aa5 |  |
|  | a94498b482 |  |
|  | fe76a035ad |  |
|  | 6ce5c93cc8 |  |
|  | 92b4d73376 |  |
|  | 183bb7af1b |  |
|  | a72822b3f8 |  |
|  | 9283cfc9b1 |  |
|  | 27846050ae |  |
|  | ed9b6643ca |  |
|  | a94a01bff2 |  |
|  | 229dbaf153 |  |
|  | a18648ee73 |  |
|  | 518d8c96f3 |  |
|  | 0c1f2edb99 |  |
|  | 721857e4a0 |  |
|  | 6b1010ad07 |  |
|  | e12252a43a |  |
|  | 8609522aa4 |  |
|  | 6a876c4f99 |  |
|  | de529139af |  |
|  | d3a56cdb69 |  |
|  | 9bdddf18e0 |  |
|  | ac1ba34518 |  |
|  | 5c4a96bcb7 |  |
|  | 62abf4fe11 |  |
|  | cfedc518ca |  |
|  | 5e76853b55 |  |
|  | 2eb4de02ee |  |
|  | 342ad6a51a |  |
|  | 568f053723 |  |
|  | 8f12a6c947 |  |
|  | 83fb85f702 |  |
|  | 3daf313205 |  |
|  | 7c5400d75b |  |
|  | 929ea6df75 |  |
|  | f6a8a0e643 |  |
|  | 556fd20aed |  |
|  | a8298365fe |  |
|  | 1dda0aec69 |  |
|  | 49e204166d |  |
|  | a36b003f7a |  |
|  | 0671d16694 |  |
|  | 881dbdb81b |  |
|  | 1252e551b8 |  |
|  | c614d8b96c |  |
|  | 84b6649b8b |  |
|  | dca7801682 |  |
|  | 4b99ed8916 |  |
|  | e46ede1b17 |  |
|  | 1ba076d321 |  |
|  | 0efa2d5e63 |  |
|  | 0a37a03f2e |  |
|  | 88cce47022 |  |
|  | 7920109e89 |  |
|  | 4cacc14d22 |  |
|  | c6b8548d35 |  |
|  | 64cae197a4 |  |
|  | 7fb84a54a8 |  |
|  | 70cc6c017b |  |
|  | d7e9ea75fc |  |
|  | b9c20dcaa4 |  |
|  | 97629ae8af |  |
|  | b9a9812ad9 |  |
|  | 113c3e98fb |  |
|  | 7815eec33b |  |
|  | c051090583 |  |
|  | 0fa1fe0310 |  |
`.gitea/workflows/test.yml` (new file, 44 lines)
@@ -0,0 +1,44 @@

```yaml
name: checks
on:
  - push
  - pull_request

env:
  GOPROXY: https://goproxy.io,direct
  GOPATH: /go_path
  GOCACHE: /go_cache

jobs:
  lint:
    name: check and test
    runs-on: ubuntu-latest
    steps:
      - name: cache go path
        id: cache-go-path
        uses: https://github.com/actions/cache@v3
        with:
          path: /go_path
          key: go_path-${{ github.repository }}-${{ github.ref_name }}
          restore-keys: |
            go_path-${{ github.repository }}-
            go_path-
      - name: cache go cache
        id: cache-go-cache
        uses: https://github.com/actions/cache@v3
        with:
          path: /go_cache
          key: go_cache-${{ github.repository }}-${{ github.ref_name }}
          restore-keys: |
            go_cache-${{ github.repository }}-
            go_cache-
      - uses: actions/setup-go@v3
        with:
          go-version: 1.20
      - uses: actions/checkout@v3
      - name: vet checks
        run: go vet -v ./...
      - name: build
        run: go build -v ./...
      - name: test
        run: go test -v ./pkg/jobparser
        # TODO test more packages
```
`.gitignore` (vendored, 1 line changed)
```diff
@@ -31,3 +31,4 @@ coverage.txt

 # megalinter
 report/
+act
```
```diff
@@ -26,7 +26,7 @@

 ## Images based on [`actions/virtual-environments`][gh/actions/virtual-environments]

-**Note: `nektos/act-environments-ubuntu` have been last updated in February, 2020. It's recommended to update the image manually after `docker pull` if you decide to use it.**
+**Note: `nektos/act-environments-ubuntu` have been last updated in February, 2020. It's recommended to update the image manually after `docker pull` if you decide to to use it.**

 | Image | Size | GitHub Repository |
 | ----- | ---- | ----------------- |
```
`LICENSE` (1 line changed)
```diff
@@ -1,5 +1,6 @@
 MIT License

+Copyright (c) 2022 The Gitea Authors
 Copyright (c) 2019

 Permission is hereby granted, free of charge, to any person obtaining a copy
```
`README.md` (479 lines changed)

@@ -1,4 +1,29 @@
## Forking rules

This is a custom fork of [nektos/act](https://github.com/nektos/act/), for the purpose of serving [act_runner](https://gitea.com/gitea/act_runner).

It can no longer be used as a command-line tool, only as a library.

It's a soft fork, which means that it will track the latest release of nektos/act.

Branches:

- `main`: the default branch, contains custom changes, based on the latest release (not the latest commit on the master branch) of nektos/act.
- `nektos/master`: mirror for the master branch of nektos/act.

Tags:

- `nektos/vX.Y.Z`: mirror for `vX.Y.Z` of [nektos/act](https://github.com/nektos/act/).
- `vX.YZ.*`: based on `nektos/vX.Y.Z`, contains custom changes (see the sketch after this list).
  - Examples:
    - `nektos/v0.2.23` -> `v0.223.*`
    - `nektos/v0.3.1` -> `v0.301.*`, not ~~`v0.31.*`~~
    - `nektos/v0.10.1` -> `v0.1001.*`, not ~~`v0.101.*`~~
    - `nektos/v0.3.100` -> not ~~`v0.3100.*`~~; this is unlikely to ever happen, and if it does, we can find a way to handle it.
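A minimal sh sketch of this mapping (the `fork_version` helper is hypothetical, for illustration only):

```sh
# Hypothetical helper: nektos/v0.MINOR.PATCH -> v0.<MINOR><PATCH zero-padded to 2 digits>.*
fork_version() {
  IFS=. read -r _ minor patch <<EOF
${1#v}
EOF
  printf 'v0.%s%02d.*\n' "$minor" "$patch"
}

fork_version v0.2.23   # v0.223.*
fork_version v0.3.1    # v0.301.*
fork_version v0.10.1   # v0.1001.*
```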
---

![act-logo](https://github.com/nektos/act/wiki/img/logo-150.png)

# Overview [![push](https://github.com/nektos/act/workflows/push/badge.svg?branch=master&event=push)](https://github.com/nektos/act/actions) [![Join the chat at https://gitter.im/nektos/act](https://badges.gitter.im/nektos/act.svg)](https://gitter.im/nektos/act?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge) [![Go Report Card](https://goreportcard.com/badge/github.com/nektos/act)](https://goreportcard.com/report/github.com/nektos/act) [![awesome-runners](https://img.shields.io/badge/listed%20on-awesome--runners-blue.svg)](https://github.com/jonico/awesome-runners)

@@ -15,12 +40,462 @@ When you run `act` it reads in your GitHub Actions from `.github/workflows/` and

Let's see it in action with a [sample repo](https://github.com/cplee/github-actions-demo)!

![Demo](https://github.com/nektos/act/wiki/quickstart/act-quickstart-2.gif)
![Screenshot](https://github.com/nektos/act/wiki/quickstart/act-quickstart-2.gif)

# Act User Guide

Please look at the [act user guide](https://nektosact.com) for more documentation.

# Installation

## Necessary prerequisites for running `act`

`act` depends on `docker` to run workflows.

If you are using macOS, please be sure to follow the steps outlined in [Docker Docs for how to install Docker Desktop for Mac](https://docs.docker.com/docker-for-mac/install/).

If you are using Windows, please follow steps for [installing Docker Desktop on Windows](https://docs.docker.com/docker-for-windows/install/).

If you are using Linux, you will need to [install Docker Engine](https://docs.docker.com/engine/install/).

`act` is currently not supported with `podman` or other container backends (it might work, but it's not guaranteed). Please see [#303](https://github.com/nektos/act/issues/303) for updates.

## Installation through package managers

### [Homebrew](https://brew.sh/) (Linux/macOS)

[![homebrew version](https://img.shields.io/homebrew/v/act)](https://github.com/Homebrew/homebrew-core/blob/master/Formula/act.rb)

```shell
brew install act
```

Alternatively, if you want to install a version based on the latest commit, run the command below (it requires a compiler to be installed, but Homebrew will suggest how to install one if you don't have it):

```shell
brew install act --HEAD
```

### [MacPorts](https://www.macports.org) (macOS)

[![MacPorts package](https://repology.org/badge/version-for-repo/macports/act-run-github-actions.svg)](https://repology.org/project/act-run-github-actions/versions)

```shell
sudo port install act
```

### [Chocolatey](https://chocolatey.org/) (Windows)

[![choco-shield](https://img.shields.io/chocolatey/v/act-cli)](https://community.chocolatey.org/packages/act-cli)

```shell
choco install act-cli
```

### [Scoop](https://scoop.sh/) (Windows)

[![scoop-shield](https://img.shields.io/scoop/v/act)](https://github.com/ScoopInstaller/Main/blob/master/bucket/act.json)

```shell
scoop install act
```

### [Winget](https://learn.microsoft.com/en-us/windows/package-manager/) (Windows)

[![Winget package](https://repology.org/badge/version-for-repo/winget/act-run-github-actions.svg)](https://repology.org/project/act-run-github-actions/versions)

```shell
winget install nektos.act
```

### [AUR](https://aur.archlinux.org/packages/act/) (Linux)

[![aur-shield](https://img.shields.io/aur/version/act)](https://aur.archlinux.org/packages/act/)

```shell
yay -Syu act
```

### [COPR](https://copr.fedorainfracloud.org/coprs/rubemlrm/act-cli/) (Linux)

```shell
dnf copr enable rubemlrm/act-cli
dnf install act-cli
```

### [Nix](https://nixos.org) (Linux/macOS)

[Nix recipe](https://github.com/NixOS/nixpkgs/blob/master/pkgs/development/tools/misc/act/default.nix)

Global install:

```sh
nix-env -iA nixpkgs.act
```

or through `nix-shell`:

```sh
nix-shell -p act
```

Using the latest [Nix command](https://nixos.wiki/wiki/Nix_command), you can run it directly:

```sh
nix run nixpkgs#act
```

## Installation as GitHub CLI extension

Act can be installed as a [GitHub CLI](https://cli.github.com/) extension:

```sh
gh extension install https://github.com/nektos/gh-act
```

## Other install options

### Bash script

Run this command in your terminal:

```shell
curl -s https://raw.githubusercontent.com/nektos/act/master/install.sh | sudo bash
```

### Manual download

Download the [latest release](https://github.com/nektos/act/releases/latest) and add the directory containing the binary to your `PATH` (see the sketch below).
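For example, a minimal sketch assuming the binary was extracted to `~/bin` (adjust the directory to wherever you placed it):

```sh
# make the downloaded act binary reachable from your shell
export PATH="$HOME/bin:$PATH"
act --version
```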
# Example commands

```sh
# Command structure:
act [<event>] [options]
If no event name passed, will default to "on: push"
If actions handles only one event it will be used as default instead of "on: push"

# List all actions for all events:
act -l

# List the actions for a specific event:
act workflow_dispatch -l

# List the actions for a specific job:
act -j test -l

# Run the default (`push`) event:
act

# Run a specific event:
act pull_request

# Run a specific job:
act -j test

# Collect artifacts to the /tmp/artifacts folder:
act --artifact-server-path /tmp/artifacts

# Run a job in a specific workflow (useful if you have duplicate job names)
act -j lint -W .github/workflows/checks.yml

# Run in dry-run mode:
act -n

# Enable verbose-logging (can be used with any of the above commands)
act -v
```

## First `act` run

When running `act` for the first time, it will ask you to choose the image to be used as the default.
It will save that information to `~/.actrc`; please refer to [Configuration](#configuration) for more information about `.actrc` and to [Runners](#runners) for information about the Docker images that are used and available.

## `GITHUB_TOKEN`

GitHub [automatically provides](https://docs.github.com/en/actions/security-guides/automatic-token-authentication#about-the-github_token-secret) a `GITHUB_TOKEN` secret when running workflows inside GitHub.

If your workflow depends on this token, you need to create a [personal access token](https://docs.github.com/en/authentication/keeping-your-account-and-data-secure/creating-a-personal-access-token) and pass it to `act` as a secret:

```bash
act -s GITHUB_TOKEN=[insert token or leave blank and omit equals for secure input]
```

If [GitHub CLI](https://cli.github.com/) is installed, the [`gh auth token`](https://cli.github.com/manual/gh_auth_token) command can be used to automatically pass the token to act:

```bash
act -s GITHUB_TOKEN="$(gh auth token)"
```

**WARNING**: `GITHUB_TOKEN` will be logged in shell history unless it is inserted through secure input or (depending on your shell config) the command is prefixed with a whitespace.

# Known Issues

## Services

Services are not currently supported, but are being worked on. See [#173](https://github.com/nektos/act/issues/173).

## `MODULE_NOT_FOUND`

A `MODULE_NOT_FOUND` error during the `docker cp` command ([#228](https://github.com/nektos/act/issues/228)) can happen if you are relying on local changes that have not been pushed. This can get triggered if the action is using a path, like:

```yaml
- name: test action locally
  uses: ./
```

In this case, you _must_ use `actions/checkout@v2` with a path that _has the same name as your repository_. If your repository is called _my-action_, then your checkout step would look like:

```yaml
steps:
  - name: Checkout
    uses: actions/checkout@v2
    with:
      path: "my-action"
```

If the `path:` value doesn't match the name of the repository, a `MODULE_NOT_FOUND` error will be thrown.

## `docker context` support

The current `docker context` isn't respected ([#583](https://github.com/nektos/act/issues/583)).

You can work around this by setting `DOCKER_HOST` before running `act`, e.g.:

```bash
export DOCKER_HOST=$(docker context inspect --format '{{.Endpoints.docker.Host}}')
```

# Runners

GitHub Actions offers managed [virtual environments](https://help.github.com/en/actions/reference/virtual-environments-for-github-hosted-runners) for running workflows. In order for `act` to run your workflows locally, it must run a container for the runner defined in your workflow file. Here are the images that `act` uses for each runner type and size:

| GitHub Runner   | Micro Docker Image               | Medium Docker Image                               | Large Docker Image                                 |
| --------------- | -------------------------------- | ------------------------------------------------- | -------------------------------------------------- |
| `ubuntu-latest` | [`node:16-buster-slim`][micro]   | [`catthehacker/ubuntu:act-latest`][docker_images] | [`catthehacker/ubuntu:full-latest`][docker_images] |
| `ubuntu-22.04`  | [`node:16-bullseye-slim`][micro] | [`catthehacker/ubuntu:act-22.04`][docker_images]  | `unavailable`                                      |
| `ubuntu-20.04`  | [`node:16-buster-slim`][micro]   | [`catthehacker/ubuntu:act-20.04`][docker_images]  | [`catthehacker/ubuntu:full-20.04`][docker_images]  |
| `ubuntu-18.04`  | [`node:16-buster-slim`][micro]   | [`catthehacker/ubuntu:act-18.04`][docker_images]  | [`catthehacker/ubuntu:full-18.04`][docker_images]  |

[micro]: https://hub.docker.com/_/buildpack-deps
[docker_images]: https://github.com/catthehacker/docker_images

Windows and macOS based platforms are currently **unsupported and won't work** (see issue [#97](https://github.com/nektos/act/issues/97)).

## Please see [IMAGES.md](./IMAGES.md) for more information about the Docker images that can be used with `act`

## Default runners are intentionally incomplete

These default images do **not** contain **all** the tools that GitHub Actions offers by default in their runners.
Many things can work improperly or not at all while running those images.
Additionally, some software might still not work even if installed properly, since GitHub Actions runs in fully virtualized machines while `act` uses Docker containers (e.g. Docker does not support running `systemd`).
In case of any problems, [please create an issue](https://github.com/nektos/act/issues/new/choose) in the respective repository (issues with `act` in this repository, issues with `nektos/act-environments-ubuntu:18.04` in [`nektos/act-environments`](https://github.com/nektos/act-environments), and issues with any image from user `catthehacker` in [`catthehacker/docker_images`](https://github.com/catthehacker/docker_images)).

## Alternative runner images

If you need an environment that works just like the corresponding GitHub runner, then consider using an image provided by [nektos/act-environments](https://github.com/nektos/act-environments):

- [`nektos/act-environments-ubuntu:18.04`](https://hub.docker.com/r/nektos/act-environments-ubuntu/tags) - built from the Packer file GitHub uses in [actions/virtual-environments](https://github.com/actions/runner).

:warning: :elephant: `*** WARNING - this image is >18GB 😱***`

- [`catthehacker/ubuntu:full-*`](https://github.com/catthehacker/docker_images/pkgs/container/ubuntu) - built from the Packer template provided by GitHub; see [catthehacker/virtual-environments-fork](https://github.com/catthehacker/virtual-environments-fork) or [catthehacker/docker_images](https://github.com/catthehacker/docker_images) for more information

## Using local runner images

The `--pull` flag is set to true by default due to a breaking change in older default docker images; this pulls the docker image every time act is executed.

Set `--pull` to false if a local docker image is needed:

```sh
act --pull=false
```

## Use an alternative runner image

To use a different image for the runner, use the `-P` option:

```sh
act -P <platform>=<docker-image>
```

If your workflow uses `ubuntu-18.04`, the line below is an example of changing the Docker image used to run that workflow:

```sh
act -P ubuntu-18.04=nektos/act-environments-ubuntu:18.04
```

If you use multiple platforms in your workflow, you have to specify them to change which image is used.
For example, if your workflow uses `ubuntu-18.04`, `ubuntu-16.04` and `ubuntu-latest`, specify all platforms like below:

```sh
act -P ubuntu-18.04=nektos/act-environments-ubuntu:18.04 -P ubuntu-latest=ubuntu:latest -P ubuntu-16.04=node:16-buster-slim
```

# Secrets

To run `act` with secrets, you can enter them interactively, supply them as environment variables, or load them from a file. The following options are available for providing secrets (see the sketch after this list):

- `act -s MY_SECRET=somevalue` - use `somevalue` as the value for `MY_SECRET`.
- `act -s MY_SECRET` - check for an environment variable named `MY_SECRET` and use it if it exists. If the environment variable is not defined, prompt the user for a value.
- `act --secret-file my.secrets` - load secret values from the `my.secrets` file.
  - the secrets file format is the same as the `.env` format
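Since the secrets file shares the `.env` format, a minimal sketch (`my.secrets` and `MY_SECRET` are placeholder names):

```sh
# my.secrets uses the same KEY=value format as a .env file
printf 'MY_SECRET=somevalue\n' > my.secrets
act --secret-file my.secrets
```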
# Vars

To run `act` with repository variables that are accessible inside the workflow via `${{ vars.VARIABLE }}`, you can enter them interactively or load them from a file. The following options are available for providing GitHub repository variables (see the sketch after this list):

- `act --var VARIABLE=somevalue` - use `somevalue` as the value for `VARIABLE`.
- `act --var-file my.variables` - load variable values from the `my.variables` file.
  - the variables file format is the same as the `.env` format
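A minimal sketch along the same lines (`my.variables` and `VARIABLE` are placeholder names):

```sh
# my.variables uses the same KEY=value format as a .env file
printf 'VARIABLE=somevalue\n' > my.variables
act --var-file my.variables
```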
# Configuration

You can provide default configuration flags to `act` by either creating a `./.actrc` or a `~/.actrc` file. Any flags in the files will be applied before any flags provided directly on the command line. For example, a file like below will always use the `nektos/act-environments-ubuntu:18.04` image for the `ubuntu-latest` runner:

```sh
# sample .actrc file
-P ubuntu-latest=nektos/act-environments-ubuntu:18.04
```

Additionally, act supports loading environment variables from an `.env` file. The default is to look in the working directory for the file, but this can be overridden with:

```sh
act --env-file my.env
```

`.env`:

```env
MY_ENV_VAR=MY_ENV_VAR_VALUE
MY_2ND_ENV_VAR="my 2nd env var value"
```

# Skipping jobs

You cannot use the `env` context in job-level `if` conditions, but you can add a custom event property to the `github` context. You can also use this method in step-level `if` conditions.

```yml
on: push
jobs:
  deploy:
    if: ${{ !github.event.act }} # skip during local actions testing
    runs-on: ubuntu-latest
    steps:
      - run: exit 0
```

And use this `event.json` file with act; otherwise the job will run:

```json
{
  "act": true
}
```

Run act like this:

```sh
act -e event.json
```

_Hint: you can add / append `-e event.json` as a line into `./.actrc`_

# Skipping steps

Act adds a special environment variable `ACT` that can be used to skip a step that you
don't want to run locally. E.g. a step that posts a Slack message or bumps a version number.
**You cannot use this method in job-level `if` conditions; see [Skipping jobs](#skipping-jobs).**

```yml
- name: Some step
  if: ${{ !env.ACT }}
  run: |
    ...
```

# Events

Every [GitHub event](https://developer.github.com/v3/activity/events/types) is accompanied by a payload. You can provide these events in JSON format with the `--eventpath` option to simulate specific GitHub events kicking off an action. For example:

```json
{
  "pull_request": {
    "head": {
      "ref": "sample-head-ref"
    },
    "base": {
      "ref": "sample-base-ref"
    }
  }
}
```

```sh
act pull_request -e pull-request.json
```

Act will properly provide `github.head_ref` and `github.base_ref` to the action as expected.

# Pass Inputs to Manually Triggered Workflows

Example workflow file:

```yaml
on:
  workflow_dispatch:
    inputs:
      NAME:
        description: "A random input name for the workflow"
        type: string
      SOME_VALUE:
        description: "Some other input to pass"
        type: string

jobs:
  test:
    name: Test
    runs-on: ubuntu-latest

    steps:
      - name: Test with inputs
        run: |
          echo "Hello ${{ github.event.inputs.NAME }} and ${{ github.event.inputs.SOME_VALUE }}!"
```

## via input or input-file flag

The following options are available (see the sketch after this list):

- `act --input NAME=somevalue` - use `somevalue` as the value for the `NAME` input.
- `act --input-file my.input` - load input values from the `my.input` file.
  - the input file format is the same as the `.env` format
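A minimal sketch wiring this up to the example workflow above (`my.input` is a placeholder name; `NAME` and `SOME_VALUE` come from that workflow):

```sh
# my.input uses the same KEY=value format as a .env file
printf 'NAME=Manual Workflow\nSOME_VALUE=ABC\n' > my.input
act workflow_dispatch --input-file my.input
```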
## via JSON

Example JSON payload file, conveniently named `payload.json`:

```json
{
  "inputs": {
    "NAME": "Manual Workflow",
    "SOME_VALUE": "ABC"
  }
}
```

Command for triggering the workflow:

```sh
act workflow_dispatch -e payload.json
```

# GitHub Enterprise

Act supports using and authenticating against private GitHub Enterprise servers.
To use your custom GHE server, set the CLI flag `--github-instance` to your hostname (e.g. `github.company.com`).

Please note that if your GHE server requires authentication, we will use the secret provided via `GITHUB_TOKEN`.
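Putting the two together, a minimal sketch (the hostname is a placeholder; `-s GITHUB_TOKEN` falls back to the environment variable or a secure prompt, as described under [Secrets](#secrets)):

```sh
act --github-instance github.company.com -s GITHUB_TOKEN
```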
Please also see the [official documentation for GitHub Actions on GHE](https://docs.github.com/en/enterprise-server@3.0/admin/github-actions/about-using-actions-in-your-enterprise) for more information on how to use actions.

# Support

Need help? Ask on [Gitter](https://gitter.im/nektos/act)!
```diff
@@ -59,7 +59,6 @@ type Input struct {
 	logPrefixJobID    bool
 	networkName       string
 	useNewActionCache bool
-	localRepository   []string
 }

 func (i *Input) resolve(path string) string {
```
`cmd/root.go` (98 lines changed)
```diff
@@ -32,7 +32,7 @@ import (
 // Execute is the entry point to running the CLI
 func Execute(ctx context.Context, version string) {
 	input := new(Input)
-	rootCmd := &cobra.Command{
+	var rootCmd = &cobra.Command{
 		Use:   "act [event name to run] [flags]\n\nIf no event name passed, will default to \"on: push\"\nIf actions handles only one event it will be used as default instead of \"on: push\"",
 		Short: "Run GitHub actions locally by specifying the event name (e.g. `push`) or an action name directly.",
 		Args:  cobra.MaximumNArgs(1),
@@ -100,7 +100,6 @@ func Execute(ctx context.Context, version string) {
 	rootCmd.PersistentFlags().BoolVarP(&input.actionOfflineMode, "action-offline-mode", "", false, "If action contents exists, it will not be fetch and pull again. If turn on this,will turn off force pull")
 	rootCmd.PersistentFlags().StringVarP(&input.networkName, "network", "", "host", "Sets a docker network name. Defaults to host.")
 	rootCmd.PersistentFlags().BoolVarP(&input.useNewActionCache, "use-new-action-cache", "", false, "Enable using the new Action Cache for storing Actions locally")
-	rootCmd.PersistentFlags().StringArrayVarP(&input.localRepository, "local-repository", "", []string{}, "Replaces the specified repository and ref with a local folder (e.g. https://github.com/test/test@v0=/home/act/test or test/test@v0=/home/act/test, the latter matches any hosts or protocols)")
 	rootCmd.SetArgs(args())

 	if err := rootCmd.Execute(); err != nil {
@@ -126,6 +125,34 @@ func configLocations() []string {
 	return []string{specPath, homePath, invocationPath}
 }

+var commonSocketPaths = []string{
+	"/var/run/docker.sock",
+	"/run/podman/podman.sock",
+	"$HOME/.colima/docker.sock",
+	"$XDG_RUNTIME_DIR/docker.sock",
+	"$XDG_RUNTIME_DIR/podman/podman.sock",
+	`\\.\pipe\docker_engine`,
+	"$HOME/.docker/run/docker.sock",
+}
+
+// returns socket path or false if not found any
+func socketLocation() (string, bool) {
+	if dockerHost, exists := os.LookupEnv("DOCKER_HOST"); exists {
+		return dockerHost, true
+	}
+
+	for _, p := range commonSocketPaths {
+		if _, err := os.Lstat(os.ExpandEnv(p)); err == nil {
+			if strings.HasPrefix(p, `\\.\`) {
+				return "npipe://" + filepath.ToSlash(os.ExpandEnv(p)), true
+			}
+			return "unix://" + filepath.ToSlash(os.ExpandEnv(p)), true
+		}
+	}
+
+	return "", false
+}
+
 func args() []string {
 	actrc := configLocations()

@@ -158,7 +185,7 @@ func bugReport(ctx context.Context, version string) error {

 	report += sprintf("Docker host:", dockerHost)
 	report += fmt.Sprintln("Sockets found:")
-	for _, p := range container.CommonSocketLocations {
+	for _, p := range commonSocketPaths {
 		if _, err := os.Lstat(os.ExpandEnv(p)); err != nil {
 			continue
 		} else if _, err := os.Stat(os.ExpandEnv(p)); err != nil {
@@ -329,6 +356,18 @@ func parseMatrix(matrix []string) map[string]map[string]bool {
 	return matrixes
 }

+func isDockerHostURI(daemonPath string) bool {
+	if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 {
+		scheme := daemonPath[:protoIndex]
+		if strings.IndexFunc(scheme, func(r rune) bool {
+			return (r < 'a' || r > 'z') && (r < 'A' || r > 'Z')
+		}) == -1 {
+			return true
+		}
+	}
+	return false
+}
+
 //nolint:gocyclo
 func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []string) error {
 	return func(cmd *cobra.Command, args []string) error {
@@ -339,12 +378,27 @@ func newRunCommand(ctx context.Context, input *Input) func(*cobra.Command, []str
 		if ok, _ := cmd.Flags().GetBool("bug-report"); ok {
 			return bugReport(ctx, cmd.Version)
 		}
-		if ret, err := container.GetSocketAndHost(input.containerDaemonSocket); err != nil {
-			log.Warnf("Couldn't get a valid docker connection: %+v", err)
-		} else {
-			os.Setenv("DOCKER_HOST", ret.Host)
-			input.containerDaemonSocket = ret.Socket
-			log.Infof("Using docker host '%s', and daemon socket '%s'", ret.Host, ret.Socket)
+
+		// Prefer DOCKER_HOST, don't override it
+		socketPath, hasDockerHost := os.LookupEnv("DOCKER_HOST")
+		if !hasDockerHost {
+			// a - in containerDaemonSocket means don't mount, preserve this value
+			// otherwise if input.containerDaemonSocket is a filepath don't use it as socketPath
+			skipMount := input.containerDaemonSocket == "-" || !isDockerHostURI(input.containerDaemonSocket)
+			if input.containerDaemonSocket != "" && !skipMount {
+				socketPath = input.containerDaemonSocket
+			} else {
+				socket, found := socketLocation()
+				if !found {
+					log.Errorln("daemon Docker Engine socket not found and containerDaemonSocket option was not set")
+				} else {
+					socketPath = socket
+				}
+				if !skipMount {
+					input.containerDaemonSocket = socketPath
+				}
+			}
+			os.Setenv("DOCKER_HOST", socketPath)
 		}

 		if runtime.GOOS == "darwin" && runtime.GOARCH == "arm64" && input.containerArchitecture == "" {
@@ -562,29 +616,9 @@
 			Matrix:               matrixes,
 			ContainerNetworkMode: docker_container.NetworkMode(input.networkName),
 		}
-		if input.useNewActionCache || len(input.localRepository) > 0 {
-			if input.actionOfflineMode {
-				config.ActionCache = &runner.GoGitActionCacheOfflineMode{
-					Parent: runner.GoGitActionCache{
-						Path: config.ActionCacheDir,
-					},
-				}
-			} else {
-				config.ActionCache = &runner.GoGitActionCache{
-					Path: config.ActionCacheDir,
-				}
-			}
-			if len(input.localRepository) > 0 {
-				localRepositories := map[string]string{}
-				for _, l := range input.localRepository {
-					k, v, _ := strings.Cut(l, "=")
-					localRepositories[k] = v
-				}
-				config.ActionCache = &runner.LocalRepositoryCache{
-					Parent:            config.ActionCache,
-					LocalRepositories: localRepositories,
-					CacheDirCache:     map[string]string{},
-				}
-			}
+		if input.useNewActionCache {
+			config.ActionCache = &runner.GoGitActionCache{
+				Path: config.ActionCacheDir,
+			}
 		}
 		r, err := runner.New(config)
```
`go.mod` (28 lines changed)
```diff
@@ -4,13 +4,12 @@ go 1.20

 require (
 	github.com/AlecAivazis/survey/v2 v2.3.7
-	github.com/Masterminds/semver v1.5.0
 	github.com/adrg/xdg v0.4.0
 	github.com/andreaskoch/go-fswatch v1.0.0
 	github.com/creack/pty v1.1.21
 	github.com/docker/cli v24.0.7+incompatible
 	github.com/docker/distribution v2.8.3+incompatible
-	github.com/docker/docker v24.0.9+incompatible // 24.0 branch
+	github.com/docker/docker v24.0.7+incompatible // 24.0 branch
 	github.com/docker/go-connections v0.4.0
 	github.com/go-git/go-billy/v5 v5.5.0
 	github.com/go-git/go-git/v5 v5.11.0
@@ -21,22 +20,27 @@ require (
 	github.com/mattn/go-isatty v0.0.20
 	github.com/moby/buildkit v0.12.5
 	github.com/moby/patternmatcher v0.6.0
-	github.com/opencontainers/image-spec v1.1.0
+	github.com/opencontainers/image-spec v1.1.0-rc3
 	github.com/opencontainers/selinux v1.11.0
 	github.com/pkg/errors v0.9.1
-	github.com/rhysd/actionlint v1.6.27
+	github.com/rhysd/actionlint v1.6.26
 	github.com/sabhiram/go-gitignore v0.0.0-20210923224102-525f6e181f06
 	github.com/sirupsen/logrus v1.9.3
 	github.com/spf13/cobra v1.8.0
 	github.com/spf13/pflag v1.0.5
-	github.com/stretchr/testify v1.9.0
+	github.com/stretchr/testify v1.8.4
 	github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a
-	go.etcd.io/bbolt v1.3.9
-	golang.org/x/term v0.18.0
+	go.etcd.io/bbolt v1.3.8
+	golang.org/x/term v0.16.0
 	gopkg.in/yaml.v3 v3.0.1
 	gotest.tools/v3 v3.5.1
 )

+require (
+	github.com/Masterminds/semver v1.5.0
+	github.com/gobwas/glob v0.2.3
+)
+
 require (
 	dario.cat/mergo v1.0.0 // indirect
 	github.com/Microsoft/go-winio v0.6.1 // indirect
@@ -49,7 +53,7 @@ require (
 	github.com/docker/docker-credential-helpers v0.7.0 // indirect
 	github.com/docker/go-units v0.5.0 // indirect
 	github.com/emirpasic/gods v1.18.1 // indirect
-	github.com/fatih/color v1.16.0 // indirect
+	github.com/fatih/color v1.15.0 // indirect
 	github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 // indirect
 	github.com/gogo/protobuf v1.3.2 // indirect
 	github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da // indirect
@@ -69,11 +73,11 @@ require (
 	github.com/opencontainers/runc v1.1.12 // indirect
 	github.com/pjbgf/sha1cd v0.3.0 // indirect
 	github.com/pmezard/go-difflib v1.0.0 // indirect
-	github.com/rivo/uniseg v0.4.7 // indirect
+	github.com/rivo/uniseg v0.4.4 // indirect
 	github.com/robfig/cron/v3 v3.0.1 // indirect
 	github.com/sergi/go-diff v1.2.0 // indirect
 	github.com/skeema/knownhosts v1.2.1 // indirect
-	github.com/stretchr/objx v0.5.2 // indirect
+	github.com/stretchr/objx v0.5.0 // indirect
 	github.com/xanzy/ssh-agent v0.3.3 // indirect
 	github.com/xeipuuv/gojsonpointer v0.0.0-20190905194746-02993c407bfb // indirect
 	github.com/xeipuuv/gojsonreference v0.0.0-20180127040603-bd5ef7bd5415 // indirect
@@ -81,8 +85,8 @@ require (
 	golang.org/x/crypto v0.17.0 // indirect
 	golang.org/x/mod v0.12.0 // indirect
 	golang.org/x/net v0.19.0 // indirect
-	golang.org/x/sync v0.6.0 // indirect
-	golang.org/x/sys v0.18.0 // indirect
+	golang.org/x/sync v0.3.0 // indirect
+	golang.org/x/sys v0.16.0 // indirect
 	golang.org/x/text v0.14.0 // indirect
 	golang.org/x/tools v0.13.0 // indirect
 	gopkg.in/warnings.v0 v0.1.2 // indirect
```
`go.sum` (49 lines changed)
```diff
@@ -42,8 +42,8 @@ github.com/docker/cli v24.0.7+incompatible h1:wa/nIwYFW7BVTGa7SWPVyyXU9lgORqUb1x
 github.com/docker/cli v24.0.7+incompatible/go.mod h1:JLrzqnKDaYBop7H2jaqPtU4hHvMKP+vjCwu2uszcLI8=
 github.com/docker/distribution v2.8.3+incompatible h1:AtKxIZ36LoNK51+Z6RpzLpddBirtxJnzDrHLEKxTAYk=
 github.com/docker/distribution v2.8.3+incompatible/go.mod h1:J2gT2udsDAN96Uj4KfcMRqY0/ypR+oyYUYmja8H+y+w=
-github.com/docker/docker v24.0.9+incompatible h1:HPGzNmwfLZWdxHqK9/II92pyi1EpYKsAqcl4G0Of9v0=
-github.com/docker/docker v24.0.9+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
+github.com/docker/docker v24.0.7+incompatible h1:Wo6l37AuwP3JaMnZa226lzVXGA3F9Ig1seQen0cKYlM=
+github.com/docker/docker v24.0.7+incompatible/go.mod h1:eEKB0N0r5NX/I1kEveEz05bcu8tLC/8azJZsviup8Sk=
 github.com/docker/docker-credential-helpers v0.7.0 h1:xtCHsjxogADNZcdv1pKUHXryefjlVRqWqIhk/uXJp0A=
 github.com/docker/docker-credential-helpers v0.7.0/go.mod h1:rETQfLdHNT3foU5kuNkFR1R1V12OJRRO5lzt2D1b5X0=
 github.com/docker/go-connections v0.4.0 h1:El9xVISelRB7BuFusrZozjnkIM5YnzCViNKohAFqRJQ=
@@ -53,8 +53,8 @@ github.com/docker/go-units v0.5.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDD
 github.com/elazarl/goproxy v0.0.0-20230808193330-2592e75ae04a h1:mATvB/9r/3gvcejNsXKSkQ6lcIaNec2nyfOdlTBR2lU=
 github.com/emirpasic/gods v1.18.1 h1:FXtiHYKDGKCW2KzwZKx0iC0PQmdlorYgdFG9jPXJ1Bc=
 github.com/emirpasic/gods v1.18.1/go.mod h1:8tpGGwCnJ5H4r6BWwaV6OrWmMoPhUl5jm/FMNAnJvWQ=
-github.com/fatih/color v1.16.0 h1:zmkK9Ngbjj+K0yRhTVONQh1p/HknKYSlNT+vZCzyokM=
-github.com/fatih/color v1.16.0/go.mod h1:fL2Sau1YI5c0pdGEVCbKQbLXB6edEj1ZgiY4NijnWvE=
+github.com/fatih/color v1.15.0 h1:kOqh6YHBtK8aywxGerMG2Eq3H6Qgoqeo13Bk2Mv/nBs=
+github.com/fatih/color v1.15.0/go.mod h1:0h5ZqXfHYED7Bhv2ZJamyIOUej9KtShiJESRwBDUSsw=
 github.com/gliderlabs/ssh v0.3.5 h1:OcaySEmAQJgyYcArR+gGGTHCyE7nvhEMTlYY+Dp8CpY=
 github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376 h1:+zs/tPmkDkHx3U66DAb0lQFJrpS6731Oaa12ikc+DiI=
 github.com/go-git/gcfg v1.5.1-0.20230307220236-3a3c6141e376/go.mod h1:an3vInlBmSxCcxctByoQdvwPiA7DTK7jaaFDBTtu0ic=
@@ -63,6 +63,8 @@ github.com/go-git/go-billy/v5 v5.5.0/go.mod h1:hmexnoNsr2SJU1Ju67OaNz5ASJY3+sHgF
 github.com/go-git/go-git-fixtures/v4 v4.3.2-0.20231010084843-55a94097c399 h1:eMje31YglSBqCdIqdhKBW8lokaMrL3uTkpGYlE2OOT4=
 github.com/go-git/go-git/v5 v5.11.0 h1:XIZc1p+8YzypNr34itUfSvYJcv+eYdTnTvOZ2vD3cA4=
 github.com/go-git/go-git/v5 v5.11.0/go.mod h1:6GFcX2P3NM7FPBfpePbpLd21XxsgdAt+lKqXmCUiUCY=
+github.com/gobwas/glob v0.2.3 h1:A4xDbljILXROh+kObIiy5kIaPYD8e96x1tgBhUI5J+Y=
+github.com/gobwas/glob v0.2.3/go.mod h1:d3Ez4x06l9bZtSvzIay5+Yzi0fmZzPgnTbPcKjJAkT8=
 github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q=
 github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q=
 github.com/golang/groupcache v0.0.0-20210331224755-41bb18bfe9da h1:oI5xCqsCo564l8iNU+DwB5epxmsaqB+rhGL0m5jtYqE=
@@ -124,8 +126,8 @@ github.com/morikuni/aec v1.0.0 h1:nP9CBfwrvYnBRgY6qfDQkygYDmYwOilePFkwzv4dU8A=
 github.com/onsi/gomega v1.27.10 h1:naR28SdDFlqrG6kScpT8VWpu1xWY5nJRCF3XaYyBjhI=
 github.com/opencontainers/go-digest v1.0.0 h1:apOUWs51W5PlhuyGyz9FCeeBIOUDA/6nW8Oi/yOhh5U=
 github.com/opencontainers/go-digest v1.0.0/go.mod h1:0JzlMkj0TRzQZfJkVvzbP0HBR3IKzErnv2BNG4W4MAM=
-github.com/opencontainers/image-spec v1.1.0 h1:8SG7/vwALn54lVB/0yZ/MMwhFrPYtpEHQb2IpWsCzug=
-github.com/opencontainers/image-spec v1.1.0/go.mod h1:W4s4sFTMaBeK1BQLXbG4AdM2szdn85PY75RI83NrTrM=
+github.com/opencontainers/image-spec v1.1.0-rc3 h1:fzg1mXZFj8YdPeNkRXMg+zb88BFV0Ys52cJydRwBkb8=
+github.com/opencontainers/image-spec v1.1.0-rc3/go.mod h1:X4pATf0uXsnn3g5aiGIsVnJBR4mxhKzfwmvK/B2NTm8=
 github.com/opencontainers/runc v1.1.12 h1:BOIssBaW1La0/qbNZHXOOa71dZfZEQOzW7dqQf3phss=
 github.com/opencontainers/runc v1.1.12/go.mod h1:S+lQwSfncpBha7XTy/5lBwWgm5+y5Ma/O44Ekby9FK8=
 github.com/opencontainers/selinux v1.11.0 h1:+5Zbo97w3Lbmb3PeqQtpmTkMwsW5nRI3YaLpt7tQ7oU=
@@ -137,11 +139,11 @@ github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4=
 github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0=
 github.com/pmezard/go-difflib v1.0.0 h1:4DBwDE0NGyQoBHbLQYPwSUPoCMWR5BEzIk/f1lZbAQM=
 github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4=
-github.com/rhysd/actionlint v1.6.27 h1:xxwe8YmveBcC8lydW6GoHMGmB6H/MTqUU60F2p10wjw=
-github.com/rhysd/actionlint v1.6.27/go.mod h1:m2nFUjAnOrxCMXuOMz9evYBRCLUsMnKY2IJl/N5umbk=
+github.com/rhysd/actionlint v1.6.26 h1:zi7jPZf3Ks14gCXYAAL47uBziyFlX7+Xwilqhexct9g=
+github.com/rhysd/actionlint v1.6.26/go.mod h1:TIj1DlCgtYLOv5CH9wCK+WJTOr1qAdnFzkGi0IgSCO4=
 github.com/rivo/uniseg v0.2.0/go.mod h1:J6wj4VEh+S6ZtnVlnTBMWIodfgj8LQOQFoIToxlJtxc=
-github.com/rivo/uniseg v0.4.7 h1:WUdvkW8uEhrYfLC4ZzdpI2ztxP1I582+49Oc5Mq64VQ=
-github.com/rivo/uniseg v0.4.7/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
+github.com/rivo/uniseg v0.4.4 h1:8TfxU8dW6PdqD27gjM8MVNuicgxIjxpm4K7x4jp8sis=
+github.com/rivo/uniseg v0.4.4/go.mod h1:FN3SvrM+Zdj16jyLfmOkMNblXMcoc8DfTHruCPUcx88=
 github.com/robfig/cron/v3 v3.0.1 h1:WdRxkvbJztn8LMz/QEvLN5sBU+xKpSqwwUO1Pjr4qDs=
 github.com/robfig/cron/v3 v3.0.1/go.mod h1:eQICP3HwyT7UooqI/z+Ov+PtYAWygg1TEWWzGIFLtro=
 github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
@@ -163,15 +165,18 @@ github.com/spf13/pflag v1.0.5 h1:iy+VFUOCP1a+8yFto/drg2CJ5u0yRoB7fZw3DKv/JXA=
 github.com/spf13/pflag v1.0.5/go.mod h1:McXfInJRrz4CZXVZOBLb0bTZqETkiAhM9Iw0y3An2Bg=
 github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
 github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
-github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY=
-github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA=
+github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
+github.com/stretchr/objx v0.5.0 h1:1zr/of2m5FGMsad5YfcqgdqdWrIhu+EBEJRhR1U7z/c=
+github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo=
 github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
 github.com/stretchr/testify v1.3.0/go.mod h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI=
 github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4=
 github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
 github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
-github.com/stretchr/testify v1.9.0 h1:HtqpIVDClZ4nwg75+f6Lvsy/wHu+3BoSGCbBAcpTsTg=
-github.com/stretchr/testify v1.9.0/go.mod h1:r2ic/lqez/lEtzL7wO/rwa5dbSLXVDPFyf8C91i36aY=
+github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
+github.com/stretchr/testify v1.8.0/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
+github.com/stretchr/testify v1.8.4 h1:CcVxjf3Q8PM0mHUKJCdn+eZZtm5yQwehR5yeSVQQcUk=
+github.com/stretchr/testify v1.8.4/go.mod h1:sz/lmYIOXD/1dqDmKjjqLyZ2RngseejIcXlSw2iwfAo=
 github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a h1:oIi7H/bwFUYKYhzKbHc+3MvHRWqhQwXVB4LweLMiVy0=
 github.com/timshannon/bolthold v0.0.0-20210913165410-232392fc8a6a/go.mod h1:iSvujNDmpZ6eQX+bg/0X3lF7LEmZ8N77g2a/J/+Zt2U=
 github.com/xanzy/ssh-agent v0.3.3 h1:+/15pJfg/RsTxqYcX6fHqOXZwwMP+2VyYWJeWM2qQFM=
@@ -187,8 +192,8 @@ github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9de
 github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74=
 github.com/yuin/goldmark v1.4.13/go.mod h1:6yULJ656Px+3vBD8DxQVa3kxgyrAnzto9xy5taEt/CY=
 go.etcd.io/bbolt v1.3.6/go.mod h1:qXsaaIqmgQH0T+OPdb99Bf+PKfBBQVAdyD6TY9G8XM4=
-go.etcd.io/bbolt v1.3.9 h1:8x7aARPEXiXbHmtUwAIv7eV2fQFHrLLavdiJ3uzJXoI=
-go.etcd.io/bbolt v1.3.9/go.mod h1:zaO32+Ti0PK1ivdPtgMESzuzL2VPoIG1PCQNvOdo/dE=
+go.etcd.io/bbolt v1.3.8 h1:xs88BrvEv273UsB79e0hcVrlUWmS0a8upikMFhSyAtA=
+go.etcd.io/bbolt v1.3.8/go.mod h1:N9Mkw9X8x5fupy0IKsmuqVtoGDyxsaDlbk4Rd05IAQw=
 golang.org/x/crypto v0.0.0-20190308221718-c2843e01d9a2/go.mod h1:djNgcEr1/C05ACkg1iLfiJU5Ep61QUkGW8qpdssI0+w=
 golang.org/x/crypto v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI=
 golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto=
@@ -222,8 +227,8 @@ golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJ
 golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.0.0-20220722155255-886fb9371eb4/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
 golang.org/x/sync v0.1.0/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
-golang.org/x/sync v0.6.0 h1:5BMeUDZ7vkXGfEr1x9B4bRcTH4lpkTkpdh0T/J+qjbQ=
-golang.org/x/sync v0.6.0/go.mod h1:Czt+wKu1gCyEFDUtn0jG5QVvpJ6rzVqr5aXyt9drQfk=
+golang.org/x/sync v0.3.0 h1:ftCYgMx6zT/asHUrPw8BLLscYtGznsLAnjq5RH9P66E=
+golang.org/x/sync v0.3.0/go.mod h1:FU7BRWz2tNW+3quACPkgCx/L+uEAv1htQ0V83Z9Rj+Y=
 golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190222072716-a9d3bda3a223/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY=
 golang.org/x/sys v0.0.0-20190412213103-97732733099d/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs=
@@ -245,15 +250,15 @@ golang.org/x/sys v0.2.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.3.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.5.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
 golang.org/x/sys v0.6.0/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
-golang.org/x/sys v0.18.0 h1:DBdB3niSjOA/O0blCZBqDefyWNYveAYMNF1Wum0DYQ4=
-golang.org/x/sys v0.18.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
+golang.org/x/sys v0.16.0 h1:xWw16ngr6ZMtmxDyKyIgsE93KNKz5HKmMa3b8ALHidU=
+golang.org/x/sys v0.16.0/go.mod h1:/VUhepiaJMQUp4+oa/7Zr1D23ma6VTLIYjOOTFZPUcA=
 golang.org/x/term v0.0.0-20201126162022-7de9c90e9dd1/go.mod h1:bj7SfCRtBDWHUb9snDiAeCFNEtKQo2Wmx5Cou7ajbmo=
 golang.org/x/term v0.0.0-20210927222741-03fcf44c2211/go.mod h1:jbD1KX2456YbFQfuXm/mYQcufACuNUgVhRMnK/tPxf8=
 golang.org/x/term v0.2.0/go.mod h1:TVmDHMZPmdnySmBfhjOoOdhjzdE1h4u1VwSiw2l1Nuc=
 golang.org/x/term v0.5.0/go.mod h1:jMB1sMXY+tzblOD4FWmEbocvup2/aLOaQEp7JmGp78k=
 golang.org/x/term v0.6.0/go.mod h1:m6U89DPEgQRMq3DNkDClhWw02AUbt2daBVO4cn4Hv9U=
-golang.org/x/term v0.18.0 h1:FcHjZXDMxI8mM3nwhX9HlKop4C0YQvCVCdwYl2wOtE8=
-golang.org/x/term v0.18.0/go.mod h1:ILwASektA3OnRv7amZ1xhE/KTR+u50pbXfZ03+6Nx58=
+golang.org/x/term v0.16.0 h1:m+B6fahuftsE9qjo0VWp2FW0mB3MTJvR0BaMQrq0pmE=
+golang.org/x/term v0.16.0/go.mod h1:yn7UURbUtPyrVJPGPq404EukNFxcm/foM+bV/bfcDsY=
 golang.org/x/text v0.3.0/go.mod h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ=
 golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
 golang.org/x/text v0.3.6/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ=
```
```diff
@@ -35,7 +35,7 @@ type Handler struct {
 	server *http.Server
 	logger logrus.FieldLogger

-	gcing atomic.Bool
+	gcing int32 // TODO: use atomic.Bool when we can use Go 1.19
 	gcAt  time.Time

 	outboundIP string
@@ -170,7 +170,7 @@ func (h *Handler) find(w http.ResponseWriter, r *http.Request, _ httprouter.Para
 	}
 	defer db.Close()

-	cache, err := findCache(db, keys, version)
+	cache, err := h.findCache(db, keys, version)
 	if err != nil {
 		h.responseJSON(w, r, 500, err)
 		return
@@ -206,17 +206,32 @@ func (h *Handler) reserve(w http.ResponseWriter, r *http.Request, _ httprouter.P
 	api.Key = strings.ToLower(api.Key)

 	cache := api.ToCache()
+	cache.FillKeyVersionHash()
 	db, err := h.openDB()
 	if err != nil {
 		h.responseJSON(w, r, 500, err)
 		return
 	}
 	defer db.Close()
+	if err := db.FindOne(cache, bolthold.Where("KeyVersionHash").Eq(cache.KeyVersionHash)); err != nil {
+		if !errors.Is(err, bolthold.ErrNotFound) {
+			h.responseJSON(w, r, 500, err)
+			return
+		}
+	} else {
+		h.responseJSON(w, r, 400, fmt.Errorf("already exist"))
+		return
+	}

 	now := time.Now().Unix()
 	cache.CreatedAt = now
 	cache.UsedAt = now
-	if err := insertCache(db, cache); err != nil {
+	if err := db.Insert(bolthold.NextSequence(), cache); err != nil {
 		h.responseJSON(w, r, 500, err)
 		return
 	}
+	// write back id to db
+	if err := db.Update(cache.ID, cache); err != nil {
+		h.responseJSON(w, r, 500, err)
+		return
+	}
@@ -349,51 +364,56 @@ func (h *Handler) middleware(handler httprouter.Handle) httprouter.Handle {
 }

 // if not found, return (nil, nil) instead of an error.
-func findCache(db *bolthold.Store, keys []string, version string) (*Cache, error) {
-	cache := &Cache{}
-	for _, prefix := range keys {
-		// if a key in the list matches exactly, don't return partial matches
-		if err := db.FindOne(cache,
-			bolthold.Where("Key").Eq(prefix).
-				And("Version").Eq(version).
-				And("Complete").Eq(true).
-				SortBy("CreatedAt").Reverse()); err == nil || !errors.Is(err, bolthold.ErrNotFound) {
-			if err != nil {
-				return nil, fmt.Errorf("find cache: %w", err)
-			}
-			return cache, nil
+func (h *Handler) findCache(db *bolthold.Store, keys []string, version string) (*Cache, error) {
+	if len(keys) == 0 {
+		return nil, nil
+	}
+	key := keys[0] // the first key is for exact match.
+
+	cache := &Cache{
+		Key:     key,
+		Version: version,
+	}
+	cache.FillKeyVersionHash()
+
+	if err := db.FindOne(cache, bolthold.Where("KeyVersionHash").Eq(cache.KeyVersionHash)); err != nil {
+		if !errors.Is(err, bolthold.ErrNotFound) {
+			return nil, err
+		}
+	} else if cache.Complete {
+		return cache, nil
+	}
+	stop := fmt.Errorf("stop")
+
+	for _, prefix := range keys[1:] {
+		found := false
 		prefixPattern := fmt.Sprintf("^%s", regexp.QuoteMeta(prefix))
 		re, err := regexp.Compile(prefixPattern)
 		if err != nil {
 			continue
 		}
-		if err := db.FindOne(cache,
-			bolthold.Where("Key").RegExp(re).
-				And("Version").Eq(version).
-				And("Complete").Eq(true).
-				SortBy("CreatedAt").Reverse()); err != nil {
-			if errors.Is(err, bolthold.ErrNotFound) {
-				continue
-			}
-			return nil, fmt.Errorf("find cache: %w", err)
-		}
-		return cache, nil
+		if err := db.ForEach(bolthold.Where("Key").RegExp(re).And("Version").Eq(version).SortBy("CreatedAt").Reverse(), func(v *Cache) error {
+			if !strings.HasPrefix(v.Key, prefix) {
+				return stop
+			}
+			if v.Complete {
+				cache = v
+				found = true
+				return stop
+			}
+			return nil
+		}); err != nil {
+			if !errors.Is(err, stop) {
+				return nil, err
+			}
+		}
+		if found {
+			return cache, nil
+		}
 	}
 	return nil, nil
 }

-func insertCache(db *bolthold.Store, cache *Cache) error {
-	if err := db.Insert(bolthold.NextSequence(), cache); err != nil {
-		return fmt.Errorf("insert cache: %w", err)
-	}
-	// write back id to db
-	if err := db.Update(cache.ID, cache); err != nil {
-		return fmt.Errorf("write back id to db: %w", err)
-	}
-	return nil
-}
-
 func (h *Handler) useCache(id int64) {
 	db, err := h.openDB()
 	if err != nil {
@@ -408,21 +428,14 @@ func (h *Handler) useCache(id int64) {
 	_ = db.Update(cache.ID, cache)
 }

-const (
-	keepUsed   = 30 * 24 * time.Hour
-	keepUnused = 7 * 24 * time.Hour
-	keepTemp   = 5 * time.Minute
-	keepOld    = 5 * time.Minute
-)
-
 func (h *Handler) gcCache() {
-	if h.gcing.Load() {
+	if atomic.LoadInt32(&h.gcing) != 0 {
 		return
 	}
-	if !h.gcing.CompareAndSwap(false, true) {
+	if !atomic.CompareAndSwapInt32(&h.gcing, 0, 1) {
 		return
 	}
-	defer h.gcing.Store(false)
+	defer atomic.StoreInt32(&h.gcing, 0)

 	if time.Since(h.gcAt) < time.Hour {
 		h.logger.Debugf("skip gc: %v", h.gcAt.String())
@@ -431,21 +444,26 @@ func (h *Handler) gcCache() {
 	h.gcAt = time.Now()
 	h.logger.Debugf("gc: %v", h.gcAt.String())

+	const (
+		keepUsed   = 30 * 24 * time.Hour
+		keepUnused = 7 * 24 * time.Hour
+		keepTemp   = 5 * time.Minute
+	)
+
 	db, err := h.openDB()
 	if err != nil {
 		return
 	}
 	defer db.Close()

-	// Remove the caches which are not completed for a while, they are most likely to be broken.
 	var caches []*Cache
-	if err := db.Find(&caches, bolthold.
-		Where("UsedAt").Lt(time.Now().Add(-keepTemp).Unix()).
-		And("Complete").Eq(false),
-	); err != nil {
+	if err := db.Find(&caches, bolthold.Where("UsedAt").Lt(time.Now().Add(-keepTemp).Unix())); err != nil {
 		h.logger.Warnf("find caches: %v", err)
 	} else {
 		for _, cache := range caches {
+			if cache.Complete {
+				continue
+			}
 			h.storage.Remove(cache.ID)
 			if err := db.Delete(cache.ID, cache); err != nil {
 				h.logger.Warnf("delete cache: %v", err)
@@ -455,11 +473,8 @@ func (h *Handler) gcCache() {
 		}
 	}

-	// Remove the old caches which have not been used recently.
 	caches = caches[:0]
-	if err := db.Find(&caches, bolthold.
-		Where("UsedAt").Lt(time.Now().Add(-keepUnused).Unix()),
-	); err != nil {
+	if err := db.Find(&caches, bolthold.Where("UsedAt").Lt(time.Now().Add(-keepUnused).Unix())); err != nil {
 		h.logger.Warnf("find caches: %v", err)
 	} else {
 		for _, cache := range caches {
@@ -472,11 +487,8 @@ func (h *Handler) gcCache() {
 		}
 	}

-	// Remove the old caches which are too old.
 	caches = caches[:0]
-	if err := db.Find(&caches, bolthold.
-		Where("CreatedAt").Lt(time.Now().Add(-keepUsed).Unix()),
-	); err != nil {
+	if err := db.Find(&caches, bolthold.Where("CreatedAt").Lt(time.Now().Add(-keepUsed).Unix())); err != nil {
 		h.logger.Warnf("find caches: %v", err)
 	} else {
 		for _, cache := range caches {
@@ -488,38 +500,6 @@ func (h *Handler) gcCache() {
 			h.logger.Infof("deleted cache: %+v", cache)
 		}
 	}
-
-	// Remove the old caches with the same key and version, keep the latest one.
-	// Also keep the olds which have been used recently for a while in case of the cache is still in use.
-	if results, err := db.FindAggregate(
-		&Cache{},
-		bolthold.Where("Complete").Eq(true),
-		"Key", "Version",
-	); err != nil {
-		h.logger.Warnf("find aggregate caches: %v", err)
-	} else {
-		for _, result := range results {
-			if result.Count() <= 1 {
-				continue
-			}
-			result.Sort("CreatedAt")
-			caches = caches[:0]
-			result.Reduction(&caches)
-			for _, cache := range caches[:len(caches)-1] {
-				if time.Since(time.Unix(cache.UsedAt, 0)) < keepOld {
-					// Keep it since it has been used recently, even if it's old.
-					// Or it could break downloading in process.
-					continue
-				}
-				h.storage.Remove(cache.ID)
-				if err := db.Delete(cache.ID, cache); err != nil {
-					h.logger.Warnf("delete cache: %v", err)
-					continue
-				}
-				h.logger.Infof("deleted cache: %+v", cache)
-			}
-		}
-	}
 }

 func (h *Handler) responseJSON(w http.ResponseWriter, r *http.Request, code int, v ...any) {
```
@@ -10,11 +10,9 @@ import (
	"path/filepath"
	"strings"
	"testing"
	"time"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"github.com/timshannon/bolthold"
	"go.etcd.io/bbolt"
)

@@ -80,9 +78,6 @@ func TestHandler(t *testing.T) {
	t.Run("duplicate reserve", func(t *testing.T) {
		key := strings.ToLower(t.Name())
		version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
		var first, second struct {
			CacheID uint64 `json:"cacheId"`
		}
		{
			body, err := json.Marshal(&Request{
				Key: key,
@@ -94,8 +89,10 @@ func TestHandler(t *testing.T) {
			require.NoError(t, err)
			assert.Equal(t, 200, resp.StatusCode)

			require.NoError(t, json.NewDecoder(resp.Body).Decode(&first))
			assert.NotZero(t, first.CacheID)
			got := struct {
				CacheID uint64 `json:"cacheId"`
			}{}
			require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
		}
		{
			body, err := json.Marshal(&Request{
@@ -106,13 +103,8 @@ func TestHandler(t *testing.T) {
			require.NoError(t, err)
			resp, err := http.Post(fmt.Sprintf("%s/caches", base), "application/json", bytes.NewReader(body))
			require.NoError(t, err)
			assert.Equal(t, 200, resp.StatusCode)

			require.NoError(t, json.NewDecoder(resp.Body).Decode(&second))
			assert.NotZero(t, second.CacheID)
			assert.Equal(t, 400, resp.StatusCode)
		}

		assert.NotEqual(t, first.CacheID, second.CacheID)
	})

	t.Run("upload with bad id", func(t *testing.T) {
@@ -349,9 +341,9 @@ func TestHandler(t *testing.T) {
		version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
		key := strings.ToLower(t.Name())
		keys := [3]string{
			key + "_a_b_c",
			key + "_a_b",
			key + "_a",
			key + "_a_b",
			key + "_a_b_c",
		}
		contents := [3][]byte{
			make([]byte, 100),
@@ -362,7 +354,6 @@ func TestHandler(t *testing.T) {
			_, err := rand.Read(contents[i])
			require.NoError(t, err)
			uploadCacheNormally(t, base, keys[i], version, contents[i])
			time.Sleep(time.Second) // ensure CreatedAt of caches are different
		}

		reqKeys := strings.Join([]string{
@@ -370,33 +361,29 @@ func TestHandler(t *testing.T) {
			key + "_a_b",
			key + "_a",
		}, ",")

		resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
		require.NoError(t, err)
		require.Equal(t, 200, resp.StatusCode)

		/*
			Expect `key_a_b` because:
			- `key_a_b_x` doesn't match any caches.
			- `key_a_b` matches `key_a_b` and `key_a_b_c`, but `key_a_b` is newer.
		*/
		except := 1

		got := struct {
			Result          string `json:"result"`
			ArchiveLocation string `json:"archiveLocation"`
			CacheKey        string `json:"cacheKey"`
		}{}
		require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
		assert.Equal(t, "hit", got.Result)
		assert.Equal(t, keys[except], got.CacheKey)

		contentResp, err := http.Get(got.ArchiveLocation)
		require.NoError(t, err)
		require.Equal(t, 200, contentResp.StatusCode)
		content, err := io.ReadAll(contentResp.Body)
		require.NoError(t, err)
		assert.Equal(t, contents[except], content)
		var archiveLocation string
		{
			resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
			require.NoError(t, err)
			require.Equal(t, 200, resp.StatusCode)
			got := struct {
				Result          string `json:"result"`
				ArchiveLocation string `json:"archiveLocation"`
				CacheKey        string `json:"cacheKey"`
			}{}
			require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
			assert.Equal(t, "hit", got.Result)
			assert.Equal(t, keys[1], got.CacheKey)
			archiveLocation = got.ArchiveLocation
		}
		{
			resp, err := http.Get(archiveLocation) //nolint:gosec
			require.NoError(t, err)
			require.Equal(t, 200, resp.StatusCode)
			got, err := io.ReadAll(resp.Body)
			require.NoError(t, err)
			assert.Equal(t, contents[1], got)
		}
	})

	t.Run("case insensitive", func(t *testing.T) {
@@ -422,110 +409,6 @@ func TestHandler(t *testing.T) {
			assert.Equal(t, key+"_abc", got.CacheKey)
		}
	})

	t.Run("exact keys are preferred (key 0)", func(t *testing.T) {
		version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
		key := strings.ToLower(t.Name())
		keys := [3]string{
			key + "_a",
			key + "_a_b_c",
			key + "_a_b",
		}
		contents := [3][]byte{
			make([]byte, 100),
			make([]byte, 200),
			make([]byte, 300),
		}
		for i := range contents {
			_, err := rand.Read(contents[i])
			require.NoError(t, err)
			uploadCacheNormally(t, base, keys[i], version, contents[i])
			time.Sleep(time.Second) // ensure CreatedAt of caches are different
		}

		reqKeys := strings.Join([]string{
			key + "_a",
			key + "_a_b",
		}, ",")

		resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
		require.NoError(t, err)
		require.Equal(t, 200, resp.StatusCode)

		/*
			Expect `key_a` because:
			- `key_a` matches `key_a`, `key_a_b` and `key_a_b_c`, but `key_a` is an exact match.
			- `key_a_b` matches `key_a_b` and `key_a_b_c`, but previous key had a match
		*/
		expect := 0

		got := struct {
			ArchiveLocation string `json:"archiveLocation"`
			CacheKey        string `json:"cacheKey"`
		}{}
		require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
		assert.Equal(t, keys[expect], got.CacheKey)

		contentResp, err := http.Get(got.ArchiveLocation)
		require.NoError(t, err)
		require.Equal(t, 200, contentResp.StatusCode)
		content, err := io.ReadAll(contentResp.Body)
		require.NoError(t, err)
		assert.Equal(t, contents[expect], content)
	})

	t.Run("exact keys are preferred (key 1)", func(t *testing.T) {
		version := "c19da02a2bd7e77277f1ac29ab45c09b7d46a4ee758284e26bb3045ad11d9d20"
		key := strings.ToLower(t.Name())
		keys := [3]string{
			key + "_a",
			key + "_a_b_c",
			key + "_a_b",
		}
		contents := [3][]byte{
			make([]byte, 100),
			make([]byte, 200),
			make([]byte, 300),
		}
		for i := range contents {
			_, err := rand.Read(contents[i])
			require.NoError(t, err)
			uploadCacheNormally(t, base, keys[i], version, contents[i])
			time.Sleep(time.Second) // ensure CreatedAt of caches are different
		}

		reqKeys := strings.Join([]string{
			"------------------------------------------------------",
			key + "_a",
			key + "_a_b",
		}, ",")

		resp, err := http.Get(fmt.Sprintf("%s/cache?keys=%s&version=%s", base, reqKeys, version))
		require.NoError(t, err)
		require.Equal(t, 200, resp.StatusCode)

		/*
			Expect `key_a` because:
			- `------------------------------------------------------` doesn't match any caches.
			- `key_a` matches `key_a`, `key_a_b` and `key_a_b_c`, but `key_a` is an exact match.
			- `key_a_b` matches `key_a_b` and `key_a_b_c`, but previous key had a match
		*/
		expect := 0

		got := struct {
			ArchiveLocation string `json:"archiveLocation"`
			CacheKey        string `json:"cacheKey"`
		}{}
		require.NoError(t, json.NewDecoder(resp.Body).Decode(&got))
		assert.Equal(t, keys[expect], got.CacheKey)

		contentResp, err := http.Get(got.ArchiveLocation)
		require.NoError(t, err)
		require.Equal(t, 200, contentResp.StatusCode)
		content, err := io.ReadAll(contentResp.Body)
		require.NoError(t, err)
		assert.Equal(t, contents[expect], content)
	})
}

func uploadCacheNormally(t *testing.T, base, key, version string, content []byte) {
@@ -586,112 +469,3 @@ func uploadCacheNormally(t *testing.T, base, key, version string, content []byte
		assert.Equal(t, content, got)
	}
}

func TestHandler_gcCache(t *testing.T) {
	dir := filepath.Join(t.TempDir(), "artifactcache")
	handler, err := StartHandler(dir, "", 0, nil)
	require.NoError(t, err)

	defer func() {
		require.NoError(t, handler.Close())
	}()

	now := time.Now()

	cases := []struct {
		Cache *Cache
		Kept  bool
	}{
		{
			// should be kept, since it's used recently and not too old.
			Cache: &Cache{
				Key:       "test_key_1",
				Version:   "test_version",
				Complete:  true,
				UsedAt:    now.Unix(),
				CreatedAt: now.Add(-time.Hour).Unix(),
			},
			Kept: true,
		},
		{
			// should be removed, since it's not complete and not used for a while.
			Cache: &Cache{
				Key:       "test_key_2",
				Version:   "test_version",
				Complete:  false,
				UsedAt:    now.Add(-(keepTemp + time.Second)).Unix(),
				CreatedAt: now.Add(-(keepTemp + time.Hour)).Unix(),
			},
			Kept: false,
		},
		{
			// should be removed, since it's not used for a while.
			Cache: &Cache{
				Key:       "test_key_3",
				Version:   "test_version",
				Complete:  true,
				UsedAt:    now.Add(-(keepUnused + time.Second)).Unix(),
				CreatedAt: now.Add(-(keepUnused + time.Hour)).Unix(),
			},
			Kept: false,
		},
		{
			// should be removed, since it's used but too old.
			Cache: &Cache{
				Key:       "test_key_3",
				Version:   "test_version",
				Complete:  true,
				UsedAt:    now.Unix(),
				CreatedAt: now.Add(-(keepUsed + time.Second)).Unix(),
			},
			Kept: false,
		},
		{
			// should be kept: it has a newer edition, but it has been used recently.
			Cache: &Cache{
				Key:       "test_key_1",
				Version:   "test_version",
				Complete:  true,
				UsedAt:    now.Add(-(keepOld - time.Minute)).Unix(),
				CreatedAt: now.Add(-(time.Hour + time.Second)).Unix(),
			},
			Kept: true,
		},
		{
			// should be removed: it has a newer edition and has not been used recently.
			Cache: &Cache{
				Key:       "test_key_1",
				Version:   "test_version",
				Complete:  true,
				UsedAt:    now.Add(-(keepOld + time.Second)).Unix(),
				CreatedAt: now.Add(-(time.Hour + time.Second)).Unix(),
			},
			Kept: false,
		},
	}

	db, err := handler.openDB()
	require.NoError(t, err)
	for _, c := range cases {
		require.NoError(t, insertCache(db, c.Cache))
	}
	require.NoError(t, db.Close())

	handler.gcAt = time.Time{} // ensure gcCache will not skip
	handler.gcCache()

	db, err = handler.openDB()
	require.NoError(t, err)
	for i, v := range cases {
		t.Run(fmt.Sprintf("%d_%s", i, v.Cache.Key), func(t *testing.T) {
			cache := &Cache{}
			err = db.Get(v.Cache.ID, cache)
			if v.Kept {
				assert.NoError(t, err)
			} else {
				assert.ErrorIs(t, err, bolthold.ErrNotFound)
			}
		})
	}
	require.NoError(t, db.Close())
}

@@ -1,5 +1,10 @@
package artifactcache

import (
	"crypto/sha256"
	"fmt"
)

type Request struct {
	Key     string `json:"key" `
	Version string `json:"version"`
@@ -24,11 +29,16 @@ func (c *Request) ToCache() *Cache {
}

type Cache struct {
	ID        uint64 `json:"id" boltholdKey:"ID"`
	Key       string `json:"key" boltholdIndex:"Key"`
	Version   string `json:"version" boltholdIndex:"Version"`
	Size      int64  `json:"cacheSize"`
	Complete  bool   `json:"complete" boltholdIndex:"Complete"`
	UsedAt    int64  `json:"usedAt" boltholdIndex:"UsedAt"`
	CreatedAt int64  `json:"createdAt" boltholdIndex:"CreatedAt"`
	ID             uint64 `json:"id" boltholdKey:"ID"`
	Key            string `json:"key" boltholdIndex:"Key"`
	Version        string `json:"version" boltholdIndex:"Version"`
	KeyVersionHash string `json:"keyVersionHash" boltholdUnique:"KeyVersionHash"`
	Size           int64  `json:"cacheSize"`
	Complete       bool   `json:"complete"`
	UsedAt         int64  `json:"usedAt" boltholdIndex:"UsedAt"`
	CreatedAt      int64  `json:"createdAt" boltholdIndex:"CreatedAt"`
}

func (c *Cache) FillKeyVersionHash() {
	c.KeyVersionHash = fmt.Sprintf("%x", sha256.Sum256([]byte(fmt.Sprintf("%s:%s", c.Key, c.Version))))
}

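The new `KeyVersionHash` field carries a `boltholdUnique` tag, which is what makes a second reserve of the same key and version fail (compare the `duplicate reserve` test above). A small sketch mirroring `FillKeyVersionHash`; the standalone helper name is invented for illustration:

```go
package artifactcache

import (
	"crypto/sha256"
	"fmt"
)

// keyVersionHash (hypothetical helper) computes the same value as
// FillKeyVersionHash above: two reserves with identical key and version
// produce identical hashes, so the bolthold unique index rejects the
// second insert.
func keyVersionHash(key, version string) string {
	return fmt.Sprintf("%x", sha256.Sum256([]byte(fmt.Sprintf("%s:%s", key, version))))
}
```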
@@ -97,7 +97,7 @@ func NewParallelExecutor(parallel int, executors ...Executor) Executor {
	errs := make(chan error, len(executors))

	if 1 > parallel {
		log.Debugf("Parallel tasks (%d) below minimum, setting to 1", parallel)
		log.Infof("Parallel tasks (%d) below minimum, setting to 1", parallel)
		parallel = 1
	}


@@ -25,3 +25,24 @@ func Logger(ctx context.Context) logrus.FieldLogger {
func WithLogger(ctx context.Context, logger logrus.FieldLogger) context.Context {
	return context.WithValue(ctx, loggerContextKeyVal, logger)
}

type loggerHookKey string

const loggerHookKeyVal = loggerHookKey("logrus.Hook")

// LoggerHook returns the logger hook for the current context;
// the hook affects the job logger, not the global logger.
func LoggerHook(ctx context.Context) logrus.Hook {
	val := ctx.Value(loggerHookKeyVal)
	if val != nil {
		if hook, ok := val.(logrus.Hook); ok {
			return hook
		}
	}
	return nil
}

// WithLoggerHook adds a value to the context for the logger hook
func WithLoggerHook(ctx context.Context, hook logrus.Hook) context.Context {
	return context.WithValue(ctx, loggerHookKeyVal, hook)
}

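A hedged usage sketch for the new context helpers: any type satisfying `logrus.Hook` can be carried through the context with `WithLoggerHook` and recovered later with `LoggerHook`. The `collectHook` type and `exampleLoggerHook` function below are invented for illustration:

```go
package common

import (
	"context"

	"github.com/sirupsen/logrus"
)

// collectHook is a hypothetical logrus.Hook that records every entry it sees.
type collectHook struct {
	entries []*logrus.Entry
}

func (h *collectHook) Levels() []logrus.Level { return logrus.AllLevels }

func (h *collectHook) Fire(e *logrus.Entry) error {
	h.entries = append(h.entries, e)
	return nil
}

func exampleLoggerHook() {
	hook := &collectHook{}
	ctx := WithLoggerHook(context.Background(), hook)
	// Later, e.g. when building a job logger, the hook is recovered:
	if h := LoggerHook(ctx); h != nil {
		logger := logrus.New()
		logger.AddHook(h) // affects this job logger only, not the global one
	}
}
```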
@@ -30,6 +30,11 @@ type NewContainerInput struct {
	NetworkAliases []string
	ExposedPorts   nat.PortSet
	PortBindings   nat.PortMap

	// Gitea specific
	AutoRemove bool

	ValidVolumes []string
}

// FileEntry is a file to copy to a container
@@ -42,6 +47,7 @@ type FileEntry struct {
// Container for managing docker run containers
type Container interface {
	Create(capAdd []string, capDrop []string) common.Executor
	ConnectToNetwork(name string) common.Executor
	Copy(destPath string, files ...*FileEntry) common.Executor
	CopyTarStream(ctx context.Context, destPath string, tarStream io.Reader) error
	CopyDir(destPath string, srcPath string, useGitIgnore bool) common.Executor

@@ -17,16 +17,19 @@ import (
	"strings"

	"github.com/Masterminds/semver"
	"github.com/docker/cli/cli/compose/loader"
	"github.com/docker/cli/cli/connhelper"
	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/mount"
	"github.com/docker/docker/api/types/network"
	networktypes "github.com/docker/docker/api/types/network"
	"github.com/docker/docker/client"
	"github.com/docker/docker/pkg/stdcopy"
	"github.com/go-git/go-billy/v5/helper/polyfill"
	"github.com/go-git/go-billy/v5/osfs"
	"github.com/go-git/go-git/v5/plumbing/format/gitignore"
	"github.com/gobwas/glob"
	"github.com/imdario/mergo"
	"github.com/joho/godotenv"
	"github.com/kballard/go-shellquote"
@@ -45,6 +48,25 @@ func NewContainer(input *NewContainerInput) ExecutionsEnvironment {
	return cr
}

func (cr *containerReference) ConnectToNetwork(name string) common.Executor {
	return common.
		NewDebugExecutor("%sdocker network connect %s %s", logPrefix, name, cr.input.Name).
		Then(
			common.NewPipelineExecutor(
				cr.connect(),
				cr.connectToNetwork(name, cr.input.NetworkAliases),
			).IfNot(common.Dryrun),
		)
}

func (cr *containerReference) connectToNetwork(name string, aliases []string) common.Executor {
	return func(ctx context.Context) error {
		return cr.cli.NetworkConnect(ctx, name, cr.input.Name, &networktypes.EndpointSettings{
			Aliases: aliases,
		})
	}
}

// supportsContainerImagePlatform returns true if the underlying Docker server
// API version is 1.41 and beyond
func supportsContainerImagePlatform(ctx context.Context, cli client.APIClient) bool {
@@ -346,10 +368,20 @@ func (cr *containerReference) mergeContainerConfigs(ctx context.Context, config
		return nil, nil, fmt.Errorf("Cannot parse container options: '%s': '%w'", input.Options, err)
	}

	if len(copts.netMode.Value()) == 0 {
		if err = copts.netMode.Set(cr.input.NetworkMode); err != nil {
			return nil, nil, fmt.Errorf("Cannot parse networkmode=%s. This is an internal error and should not happen: '%w'", cr.input.NetworkMode, err)
		}
	// If a service container's network is set to `host`, the container will not be able to
	// connect to the specified network created for the job container and the service containers.
	// So comment out the following code.

	// if len(copts.netMode.Value()) == 0 {
	// 	if err = copts.netMode.Set(cr.input.NetworkMode); err != nil {
	// 		return nil, nil, fmt.Errorf("Cannot parse networkmode=%s. This is an internal error and should not happen: '%w'", cr.input.NetworkMode, err)
	// 	}
	// }

	// If the `privileged` config has been disabled, `copts.privileged` need to be forced to false,
	// even if the user specifies `--privileged` in the options string.
	if !hostConfig.Privileged {
		copts.privileged = false
	}

	containerConfig, err := parse(flags, copts, runtime.GOOS)
@@ -359,7 +391,7 @@ func (cr *containerReference) mergeContainerConfigs(ctx context.Context, config

	logger.Debugf("Custom container.Config from options ==> %+v", containerConfig.Config)

	err = mergo.Merge(config, containerConfig.Config, mergo.WithOverride)
	err = mergo.Merge(config, containerConfig.Config, mergo.WithOverride, mergo.WithAppendSlice)
	if err != nil {
		return nil, nil, fmt.Errorf("Cannot merge container.Config options: '%s': '%w'", input.Options, err)
	}
@@ -371,12 +403,17 @@ func (cr *containerReference) mergeContainerConfigs(ctx context.Context, config
	hostConfig.Mounts = append(hostConfig.Mounts, containerConfig.HostConfig.Mounts...)
	binds := hostConfig.Binds
	mounts := hostConfig.Mounts
	networkMode := hostConfig.NetworkMode
	err = mergo.Merge(hostConfig, containerConfig.HostConfig, mergo.WithOverride)
	if err != nil {
		return nil, nil, fmt.Errorf("Cannot merge container.HostConfig options: '%s': '%w'", input.Options, err)
	}
	hostConfig.Binds = binds
	hostConfig.Mounts = mounts
	if len(copts.netMode.Value()) > 0 {
		logger.Warn("--network and --net in the options will be ignored.")
	}
	hostConfig.NetworkMode = networkMode
	logger.Debugf("Merged container.HostConfig ==> %+v", hostConfig)

	return config, hostConfig, nil
@@ -440,6 +477,7 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
		Privileged:   input.Privileged,
		UsernsMode:   container.UsernsMode(input.UsernsMode),
		PortBindings: input.PortBindings,
		AutoRemove:   input.AutoRemove,
	}
	logger.Debugf("Common container.HostConfig ==> %+v", hostConfig)

@@ -448,6 +486,11 @@ func (cr *containerReference) create(capAdd []string, capDrop []string) common.E
		return err
	}

	// For Gitea
	config, hostConfig = cr.sanitizeConfig(ctx, config, hostConfig)

	// For Gitea
	// network-scoped alias is supported only for containers in user defined networks
	var networkingConfig *network.NetworkingConfig
	logger.Debugf("input.NetworkAliases ==> %v", input.NetworkAliases)
	n := hostConfig.NetworkMode
@@ -679,7 +722,7 @@ func (cr *containerReference) CopyTarStream(ctx context.Context, destPath string
	tw := tar.NewWriter(buf)
	_ = tw.WriteHeader(&tar.Header{
		Name:     destPath,
		Mode:     0o777,
		Mode:     777,
		Typeflag: tar.TypeDir,
	})
	tw.Close()
@@ -876,3 +919,63 @@ func (cr *containerReference) wait() common.Executor {
		return fmt.Errorf("exit with `FAILURE`: %v", statusCode)
	}
}

// For Gitea
// sanitizeConfig removes the invalid configurations from `config` and `hostConfig`
func (cr *containerReference) sanitizeConfig(ctx context.Context, config *container.Config, hostConfig *container.HostConfig) (*container.Config, *container.HostConfig) {
	logger := common.Logger(ctx)

	if len(cr.input.ValidVolumes) > 0 {
		globs := make([]glob.Glob, 0, len(cr.input.ValidVolumes))
		for _, v := range cr.input.ValidVolumes {
			if g, err := glob.Compile(v); err != nil {
				logger.Errorf("create glob from %s error: %v", v, err)
			} else {
				globs = append(globs, g)
			}
		}
		isValid := func(v string) bool {
			for _, g := range globs {
				if g.Match(v) {
					return true
				}
			}
			return false
		}
		// sanitize binds
		sanitizedBinds := make([]string, 0, len(hostConfig.Binds))
		for _, bind := range hostConfig.Binds {
			parsed, err := loader.ParseVolume(bind)
			if err != nil {
				logger.Warnf("parse volume [%s] error: %v", bind, err)
				continue
			}
			if parsed.Source == "" {
				// anonymous volume
				sanitizedBinds = append(sanitizedBinds, bind)
				continue
			}
			if isValid(parsed.Source) {
				sanitizedBinds = append(sanitizedBinds, bind)
			} else {
				logger.Warnf("[%s] is not a valid volume, will be ignored", parsed.Source)
			}
		}
		hostConfig.Binds = sanitizedBinds
		// sanitize mounts
		sanitizedMounts := make([]mount.Mount, 0, len(hostConfig.Mounts))
		for _, mt := range hostConfig.Mounts {
			if isValid(mt.Source) {
				sanitizedMounts = append(sanitizedMounts, mt)
			} else {
				logger.Warnf("[%s] is not a valid volume, will be ignored", mt.Source)
			}
		}
		hostConfig.Mounts = sanitizedMounts
	} else {
		hostConfig.Binds = []string{}
		hostConfig.Mounts = []mount.Mount{}
	}

	return config, hostConfig
}

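The allowlist logic above compiles each `ValidVolumes` entry with `gobwas/glob` and keeps a bind or mount only when its source matches some pattern. A condensed sketch of that check; the helper name and the sample patterns in the trailing comments are illustrative:

```go
package container

import "github.com/gobwas/glob"

// volumeAllowed (hypothetical helper) condenses the isValid closure from
// sanitizeConfig above: a source is kept if any ValidVolumes glob matches it.
func volumeAllowed(validVolumes []string, source string) bool {
	for _, v := range validVolumes {
		g, err := glob.Compile(v)
		if err != nil {
			continue // sanitizeConfig logs invalid patterns and skips them
		}
		if g.Match(source) {
			return true
		}
	}
	return false
}

// volumeAllowed([]string{"/etc/conf.d/*.json"}, "/etc/conf.d/base.json") == true
// volumeAllowed([]string{"**"}, "sql_data") == true (matches everything)
```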
@@ -2,17 +2,19 @@ package container

import (
	"bufio"
	"bytes"
	"context"
	"fmt"
	"io"
	"net"
	"strings"
	"testing"
	"time"

	"github.com/nektos/act/pkg/common"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
	"github.com/sirupsen/logrus/hooks/test"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/mock"
)
@@ -77,11 +79,6 @@ func (m *mockDockerClient) ContainerExecInspect(ctx context.Context, execID stri
	return args.Get(0).(types.ContainerExecInspect), args.Error(1)
}

func (m *mockDockerClient) CopyToContainer(ctx context.Context, id string, path string, content io.Reader, options types.CopyToContainerOptions) error {
	args := m.Called(ctx, id, path, content, options)
	return args.Error(0)
}

type endlessReader struct {
	io.Reader
}
@@ -172,77 +169,78 @@ func TestDockerExecFailure(t *testing.T) {
	client.AssertExpectations(t)
}

func TestDockerCopyTarStream(t *testing.T) {
	ctx := context.Background()

	conn := &mockConn{}

	client := &mockDockerClient{}
	client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
	client.On("CopyToContainer", ctx, "123", "/var/run/act", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
	cr := &containerReference{
		id:  "123",
		cli: client,
		input: &NewContainerInput{
			Image: "image",
		},
	}

	_ = cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})

	conn.AssertExpectations(t)
	client.AssertExpectations(t)
}

func TestDockerCopyTarStreamErrorInCopyFiles(t *testing.T) {
	ctx := context.Background()

	conn := &mockConn{}

	merr := fmt.Errorf("Failure")

	client := &mockDockerClient{}
	client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
	client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
	cr := &containerReference{
		id:  "123",
		cli: client,
		input: &NewContainerInput{
			Image: "image",
		},
	}

	err := cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
	assert.ErrorIs(t, err, merr)

	conn.AssertExpectations(t)
	client.AssertExpectations(t)
}

func TestDockerCopyTarStreamErrorInMkdir(t *testing.T) {
	ctx := context.Background()

	conn := &mockConn{}

	merr := fmt.Errorf("Failure")

	client := &mockDockerClient{}
	client.On("CopyToContainer", ctx, "123", "/", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(nil)
	client.On("CopyToContainer", ctx, "123", "/var/run/act", mock.Anything, mock.AnythingOfType("types.CopyToContainerOptions")).Return(merr)
	cr := &containerReference{
		id:  "123",
		cli: client,
		input: &NewContainerInput{
			Image: "image",
		},
	}

	err := cr.CopyTarStream(ctx, "/var/run/act", &bytes.Buffer{})
	assert.ErrorIs(t, err, merr)

	conn.AssertExpectations(t)
	client.AssertExpectations(t)
}

// Type assert containerReference implements ExecutionsEnvironment
var _ ExecutionsEnvironment = &containerReference{}

func TestCheckVolumes(t *testing.T) {
	testCases := []struct {
		desc          string
		validVolumes  []string
		binds         []string
		expectedBinds []string
	}{
		{
			desc:         "match all volumes",
			validVolumes: []string{"**"},
			binds: []string{
				"shared_volume:/shared_volume",
				"/home/test/data:/test_data",
				"/etc/conf.d/base.json:/config/base.json",
				"sql_data:/sql_data",
				"/secrets/keys:/keys",
			},
			expectedBinds: []string{
				"shared_volume:/shared_volume",
				"/home/test/data:/test_data",
				"/etc/conf.d/base.json:/config/base.json",
				"sql_data:/sql_data",
				"/secrets/keys:/keys",
			},
		},
		{
			desc:         "no volumes can be matched",
			validVolumes: []string{},
			binds: []string{
				"shared_volume:/shared_volume",
				"/home/test/data:/test_data",
				"/etc/conf.d/base.json:/config/base.json",
				"sql_data:/sql_data",
				"/secrets/keys:/keys",
			},
			expectedBinds: []string{},
		},
		{
			desc: "only allowed volumes can be matched",
			validVolumes: []string{
				"shared_volume",
				"/home/test/data",
				"/etc/conf.d/*.json",
			},
			binds: []string{
				"shared_volume:/shared_volume",
				"/home/test/data:/test_data",
				"/etc/conf.d/base.json:/config/base.json",
				"sql_data:/sql_data",
				"/secrets/keys:/keys",
			},
			expectedBinds: []string{
				"shared_volume:/shared_volume",
				"/home/test/data:/test_data",
				"/etc/conf.d/base.json:/config/base.json",
			},
		},
	}
	for _, tc := range testCases {
		t.Run(tc.desc, func(t *testing.T) {
			logger, _ := test.NewNullLogger()
			ctx := common.WithLogger(context.Background(), logger)
			cr := &containerReference{
				input: &NewContainerInput{
					ValidVolumes: tc.validVolumes,
				},
			}
			_, hostConf := cr.sanitizeConfig(ctx, &container.Config{}, &container.HostConfig{Binds: tc.binds})
			assert.Equal(t, tc.expectedBinds, hostConf.Binds)
		})
	}
}

@@ -1,134 +0,0 @@
package container

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"

	log "github.com/sirupsen/logrus"
)

var CommonSocketLocations = []string{
	"/var/run/docker.sock",
	"/run/podman/podman.sock",
	"$HOME/.colima/docker.sock",
	"$XDG_RUNTIME_DIR/docker.sock",
	"$XDG_RUNTIME_DIR/podman/podman.sock",
	`\\.\pipe\docker_engine`,
	"$HOME/.docker/run/docker.sock",
}

// returns the socket URI, or false if none was found
func socketLocation() (string, bool) {
	if dockerHost, exists := os.LookupEnv("DOCKER_HOST"); exists {
		return dockerHost, true
	}

	for _, p := range CommonSocketLocations {
		if _, err := os.Lstat(os.ExpandEnv(p)); err == nil {
			if strings.HasPrefix(p, `\\.\`) {
				return "npipe://" + filepath.ToSlash(os.ExpandEnv(p)), true
			}
			return "unix://" + filepath.ToSlash(os.ExpandEnv(p)), true
		}
	}

	return "", false
}

// This function, `isDockerHostURI`, takes a string argument `daemonPath`. It checks if the
// `daemonPath` is a valid Docker host URI. It does this by checking if the scheme of the URI (the
// part before "://") contains only alphabetic characters. If it does, the function returns true,
// indicating that the `daemonPath` is a Docker host URI. If it doesn't, or if the "://" delimiter
// is not found in the `daemonPath`, the function returns false.
func isDockerHostURI(daemonPath string) bool {
	if protoIndex := strings.Index(daemonPath, "://"); protoIndex != -1 {
		scheme := daemonPath[:protoIndex]
		if strings.IndexFunc(scheme, func(r rune) bool {
			return (r < 'a' || r > 'z') && (r < 'A' || r > 'Z')
		}) == -1 {
			return true
		}
	}
	return false
}

type SocketAndHost struct {
	Socket string
	Host   string
}

func GetSocketAndHost(containerSocket string) (SocketAndHost, error) {
	log.Debugf("Handling container host and socket")

	// Prefer DOCKER_HOST, don't override it
	dockerHost, hasDockerHost := socketLocation()
	socketHost := SocketAndHost{Socket: containerSocket, Host: dockerHost}

	// ** socketHost.Socket cases **
	// Case 1: User does _not_ want to mount a daemon socket (passes a dash)
	// Case 2: User passes a filepath to the socket; is that even valid?
	// Case 3: User passes a valid socket; do nothing
	// Case 4: User omitted the flag; set a sane default

	// ** DOCKER_HOST cases **
	// Case A: DOCKER_HOST is set; use it, i.e. do nothing
	// Case B: DOCKER_HOST is empty; use sane defaults

	// Set host for sanity's sake, when the socket isn't useful
	if !hasDockerHost && (socketHost.Socket == "-" || !isDockerHostURI(socketHost.Socket) || socketHost.Socket == "") {
		// Cases: 1B, 2B, 4B
		socket, found := socketLocation()
		socketHost.Host = socket
		hasDockerHost = found
	}

	// A - (dash) in socketHost.Socket means don't mount, preserve this value
	// otherwise if socketHost.Socket is a filepath don't use it as socket
	// Exit early if we're in an invalid state (e.g. when no DOCKER_HOST and user supplied "-", a dash or omitted)
	if !hasDockerHost && socketHost.Socket != "" && !isDockerHostURI(socketHost.Socket) {
		// Cases: 1B, 2B
		// Should we early-exit here, since there is no host nor socket to talk to?
		return SocketAndHost{}, fmt.Errorf("DOCKER_HOST was not set, couldn't be found in the usual locations, and the container daemon socket ('%s') is invalid", socketHost.Socket)
	}

	// Default to DOCKER_HOST if set
	if socketHost.Socket == "" && hasDockerHost {
		// Cases: 4A
		log.Debugf("Defaulting container socket to DOCKER_HOST")
		socketHost.Socket = socketHost.Host
	}
	// Set sane default socket location if user omitted it
	if socketHost.Socket == "" {
		// Cases: 4B
		socket, _ := socketLocation()
		// socket is empty if it isn't found, so assignment here is at worst a no-op
		log.Debugf("Defaulting container socket to default '%s'", socket)
		socketHost.Socket = socket
	}

	// Exit if both the DOCKER_HOST and socket are fulfilled
	if hasDockerHost {
		// Cases: 1A, 2A, 3A, 4A
		if !isDockerHostURI(socketHost.Socket) {
			// Cases: 1A, 2A
			log.Debugf("DOCKER_HOST is set, but socket is invalid '%s'", socketHost.Socket)
		}
		return socketHost, nil
	}

	// Set a sane DOCKER_HOST default if we can
	if isDockerHostURI(socketHost.Socket) {
		// Cases: 3B
		log.Debugf("Setting DOCKER_HOST to container socket '%s'", socketHost.Socket)
		socketHost.Host = socketHost.Socket
		// Both DOCKER_HOST and container socket are valid; short-circuit exit
		return socketHost, nil
	}

	// Here there is no DOCKER_HOST _and_ the supplied container socket is not a valid URI (either invalid or a file path)
	// Cases: 2B <- but is already handled at the top
	// I.e. this path should never be taken
	return SocketAndHost{}, fmt.Errorf("no DOCKER_HOST and an invalid container socket '%s'", socketHost.Socket)
}
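For readers of the removed `isDockerHostURI`, a few hand-traced inputs (example values, not taken from the removed tests below):

```go
// Hand-traced examples for the removed isDockerHostURI (illustrative values):
//
//	isDockerHostURI("unix:///var/run/docker.sock")    // true:  scheme "unix" is all letters
//	isDockerHostURI("npipe:////./pipe/docker_engine") // true:  scheme "npipe"
//	isDockerHostURI("/var/run/docker.sock")           // false: no "://" delimiter
//	isDockerHostURI("bad scheme://x")                 // false: space is not alphabetic
```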
@@ -1,150 +0,0 @@
package container

import (
	"os"
	"testing"

	log "github.com/sirupsen/logrus"
	assert "github.com/stretchr/testify/assert"
)

func init() {
	log.SetLevel(log.DebugLevel)
}

var originalCommonSocketLocations = CommonSocketLocations

func TestGetSocketAndHostWithSocket(t *testing.T) {
	// Arrange
	CommonSocketLocations = originalCommonSocketLocations
	dockerHost := "unix:///my/docker/host.sock"
	socketURI := "/path/to/my.socket"
	os.Setenv("DOCKER_HOST", dockerHost)

	// Act
	ret, err := GetSocketAndHost(socketURI)

	// Assert
	assert.Nil(t, err)
	assert.Equal(t, SocketAndHost{socketURI, dockerHost}, ret)
}

func TestGetSocketAndHostNoSocket(t *testing.T) {
	// Arrange
	dockerHost := "unix:///my/docker/host.sock"
	os.Setenv("DOCKER_HOST", dockerHost)

	// Act
	ret, err := GetSocketAndHost("")

	// Assert
	assert.Nil(t, err)
	assert.Equal(t, SocketAndHost{dockerHost, dockerHost}, ret)
}

func TestGetSocketAndHostOnlySocket(t *testing.T) {
	// Arrange
	socketURI := "/path/to/my.socket"
	os.Unsetenv("DOCKER_HOST")
	CommonSocketLocations = originalCommonSocketLocations
	defaultSocket, defaultSocketFound := socketLocation()

	// Act
	ret, err := GetSocketAndHost(socketURI)

	// Assert
	assert.NoError(t, err, "Expected no error from GetSocketAndHost")
	assert.Equal(t, true, defaultSocketFound, "Expected to find default socket")
	assert.Equal(t, socketURI, ret.Socket, "Expected socket to match common location")
	assert.Equal(t, defaultSocket, ret.Host, "Expected ret.Host to match default socket location")
}

func TestGetSocketAndHostDontMount(t *testing.T) {
	// Arrange
	CommonSocketLocations = originalCommonSocketLocations
	dockerHost := "unix:///my/docker/host.sock"
	os.Setenv("DOCKER_HOST", dockerHost)

	// Act
	ret, err := GetSocketAndHost("-")

	// Assert
	assert.Nil(t, err)
	assert.Equal(t, SocketAndHost{"-", dockerHost}, ret)
}

func TestGetSocketAndHostNoHostNoSocket(t *testing.T) {
	// Arrange
	CommonSocketLocations = originalCommonSocketLocations
	os.Unsetenv("DOCKER_HOST")
	defaultSocket, found := socketLocation()

	// Act
	ret, err := GetSocketAndHost("")

	// Assert
	assert.Equal(t, true, found, "Expected a default socket to be found")
	assert.Nil(t, err, "Expected no error from GetSocketAndHost")
	assert.Equal(t, SocketAndHost{defaultSocket, defaultSocket}, ret, "Expected to match default socket location")
}

// Catch
// > Your code breaks setting DOCKER_HOST if shouldMount is false.
// > This happens if neither DOCKER_HOST nor --container-daemon-socket has a value, but socketLocation() returns a URI
func TestGetSocketAndHostNoHostNoSocketDefaultLocation(t *testing.T) {
	// Arrange
	mySocketFile, tmpErr := os.CreateTemp("", "act-*.sock")
	mySocket := mySocketFile.Name()
	unixSocket := "unix://" + mySocket
	defer os.RemoveAll(mySocket)
	assert.NoError(t, tmpErr)
	os.Unsetenv("DOCKER_HOST")

	CommonSocketLocations = []string{mySocket}
	defaultSocket, found := socketLocation()

	// Act
	ret, err := GetSocketAndHost("")

	// Assert
	assert.Equal(t, unixSocket, defaultSocket, "Expected default socket to match common socket location")
	assert.Equal(t, true, found, "Expected default socket to be found")
	assert.Nil(t, err, "Expected no error from GetSocketAndHost")
	assert.Equal(t, SocketAndHost{unixSocket, unixSocket}, ret, "Expected to match default socket location")
}

func TestGetSocketAndHostNoHostInvalidSocket(t *testing.T) {
	// Arrange
	os.Unsetenv("DOCKER_HOST")
	mySocket := "/my/socket/path.sock"
	CommonSocketLocations = []string{"/unusual", "/socket", "/location"}
	defaultSocket, found := socketLocation()

	// Act
	ret, err := GetSocketAndHost(mySocket)

	// Assert
	assert.Equal(t, false, found, "Expected no default socket to be found")
	assert.Equal(t, "", defaultSocket, "Expected no default socket to be found")
	assert.Equal(t, SocketAndHost{}, ret, "Expected to match default socket location")
	assert.Error(t, err, "Expected an error in invalid state")
}

func TestGetSocketAndHostOnlySocketValidButUnusualLocation(t *testing.T) {
	// Arrange
	socketURI := "unix:///path/to/my.socket"
	CommonSocketLocations = []string{"/unusual", "/location"}
	os.Unsetenv("DOCKER_HOST")
	defaultSocket, found := socketLocation()

	// Act
	ret, err := GetSocketAndHost(socketURI)

	// Assert
	// Default socket locations
	assert.Equal(t, "", defaultSocket, "Expect default socket location to be empty")
	assert.Equal(t, false, found, "Expected no default socket to be found")
	// Sane default
	assert.Nil(t, err, "Expect no error from GetSocketAndHost")
	assert.Equal(t, socketURI, ret.Host, "Expect host to default to unusual socket")
}
@@ -41,6 +41,12 @@ func (e *HostEnvironment) Create(_ []string, _ []string) common.Executor {
	}
}

func (e *HostEnvironment) ConnectToNetwork(name string) common.Executor {
	return func(ctx context.Context) error {
		return nil
	}
}

func (e *HostEnvironment) Close() common.Executor {
	return func(ctx context.Context) error {
		return nil

@@ -155,6 +155,8 @@ func (impl *interperterImpl) evaluateVariable(variableNode *actionlint.VariableN
	switch strings.ToLower(variableNode.Name) {
	case "github":
		return impl.env.Github, nil
	case "gitea": // compatible with Gitea
		return impl.env.Github, nil
	case "env":
		return impl.env.Env, nil
	case "job":

185
pkg/jobparser/evaluator.go
Normal file
@@ -0,0 +1,185 @@
package jobparser

import (
	"fmt"
	"regexp"
	"strings"

	"github.com/nektos/act/pkg/exprparser"
	"gopkg.in/yaml.v3"
)

// ExpressionEvaluator is copied from runner.expressionEvaluator,
// to avoid unnecessary dependencies
type ExpressionEvaluator struct {
	interpreter exprparser.Interpreter
}

func NewExpressionEvaluator(interpreter exprparser.Interpreter) *ExpressionEvaluator {
	return &ExpressionEvaluator{interpreter: interpreter}
}

func (ee ExpressionEvaluator) evaluate(in string, defaultStatusCheck exprparser.DefaultStatusCheck) (interface{}, error) {
	evaluated, err := ee.interpreter.Evaluate(in, defaultStatusCheck)

	return evaluated, err
}

func (ee ExpressionEvaluator) evaluateScalarYamlNode(node *yaml.Node) error {
	var in string
	if err := node.Decode(&in); err != nil {
		return err
	}
	if !strings.Contains(in, "${{") || !strings.Contains(in, "}}") {
		return nil
	}
	expr, _ := rewriteSubExpression(in, false)
	res, err := ee.evaluate(expr, exprparser.DefaultStatusCheckNone)
	if err != nil {
		return err
	}
	return node.Encode(res)
}

func (ee ExpressionEvaluator) evaluateMappingYamlNode(node *yaml.Node) error {
	// GitHub has this undocumented feature to merge maps, called insert directive
	insertDirective := regexp.MustCompile(`\${{\s*insert\s*}}`)
	for i := 0; i < len(node.Content)/2; {
		k := node.Content[i*2]
		v := node.Content[i*2+1]
		if err := ee.EvaluateYamlNode(v); err != nil {
			return err
		}
		var sk string
		// Merge the nested map of the insert directive
		if k.Decode(&sk) == nil && insertDirective.MatchString(sk) {
			node.Content = append(append(node.Content[:i*2], v.Content...), node.Content[(i+1)*2:]...)
			i += len(v.Content) / 2
		} else {
			if err := ee.EvaluateYamlNode(k); err != nil {
				return err
			}
			i++
		}
	}
	return nil
}

func (ee ExpressionEvaluator) evaluateSequenceYamlNode(node *yaml.Node) error {
	for i := 0; i < len(node.Content); {
		v := node.Content[i]
		// Preserve nested sequences
		wasseq := v.Kind == yaml.SequenceNode
		if err := ee.EvaluateYamlNode(v); err != nil {
			return err
		}
		// GitHub has this undocumented feature to merge sequences / arrays
		// We have a nested sequence via evaluation, merge the arrays
		if v.Kind == yaml.SequenceNode && !wasseq {
			node.Content = append(append(node.Content[:i], v.Content...), node.Content[i+1:]...)
			i += len(v.Content)
		} else {
			i++
		}
	}
	return nil
}

func (ee ExpressionEvaluator) EvaluateYamlNode(node *yaml.Node) error {
	switch node.Kind {
	case yaml.ScalarNode:
		return ee.evaluateScalarYamlNode(node)
	case yaml.MappingNode:
		return ee.evaluateMappingYamlNode(node)
	case yaml.SequenceNode:
		return ee.evaluateSequenceYamlNode(node)
	default:
		return nil
	}
}

func (ee ExpressionEvaluator) Interpolate(in string) string {
	if !strings.Contains(in, "${{") || !strings.Contains(in, "}}") {
		return in
	}

	expr, _ := rewriteSubExpression(in, true)
	evaluated, err := ee.evaluate(expr, exprparser.DefaultStatusCheckNone)
	if err != nil {
		return ""
	}

	value, ok := evaluated.(string)
	if !ok {
		panic(fmt.Sprintf("Expression %s did not evaluate to a string", expr))
	}

	return value
}

func escapeFormatString(in string) string {
	return strings.ReplaceAll(strings.ReplaceAll(in, "{", "{{"), "}", "}}")
}

func rewriteSubExpression(in string, forceFormat bool) (string, error) {
	if !strings.Contains(in, "${{") || !strings.Contains(in, "}}") {
		return in, nil
	}

	strPattern := regexp.MustCompile("(?:''|[^'])*'")
	pos := 0
	exprStart := -1
	strStart := -1
	var results []string
	formatOut := ""
	for pos < len(in) {
		if strStart > -1 {
			matches := strPattern.FindStringIndex(in[pos:])
			if matches == nil {
				panic("unclosed string.")
			}

			strStart = -1
			pos += matches[1]
		} else if exprStart > -1 {
			exprEnd := strings.Index(in[pos:], "}}")
			strStart = strings.Index(in[pos:], "'")

			if exprEnd > -1 && strStart > -1 {
				if exprEnd < strStart {
					strStart = -1
				} else {
					exprEnd = -1
				}
			}

			if exprEnd > -1 {
				formatOut += fmt.Sprintf("{%d}", len(results))
				results = append(results, strings.TrimSpace(in[exprStart:pos+exprEnd]))
				pos += exprEnd + 2
				exprStart = -1
			} else if strStart > -1 {
				pos += strStart + 1
			} else {
				panic("unclosed expression.")
			}
		} else {
			exprStart = strings.Index(in[pos:], "${{")
			if exprStart != -1 {
				formatOut += escapeFormatString(in[pos : pos+exprStart])
				exprStart = pos + exprStart + 3
				pos = exprStart
			} else {
				formatOut += escapeFormatString(in[pos:])
				pos = len(in)
			}
		}
	}

	if len(results) == 1 && formatOut == "{0}" && !forceFormat {
		return in, nil
	}

	out := fmt.Sprintf("format('%s', %s)", strings.ReplaceAll(formatOut, "'", "''"), strings.Join(results, ", "))
	return out, nil
}
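To make `rewriteSubExpression` concrete, two hand-traced examples of its output (derived from the code above, not from the package's tests):

```go
// Traced by hand from rewriteSubExpression above:
//
//	rewriteSubExpression("Hello ${{ github.actor }} on ${{ matrix.os }}", true)
//	  -> "format('Hello {0} on {1}', github.actor, matrix.os)"
//
//	rewriteSubExpression("${{ matrix.os }}", false)
//	  -> "${{ matrix.os }}" // a lone expression needs no format() wrapper
//
// Interpolate then evaluates the rewritten expression and returns the string.
```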
83
pkg/jobparser/interpeter.go
Normal file
@@ -0,0 +1,83 @@
package jobparser

import (
	"github.com/nektos/act/pkg/exprparser"
	"github.com/nektos/act/pkg/model"
	"gopkg.in/yaml.v3"
)

// NewInterpeter returns an interpreter used in the server;
// it needs only the github, needs, strategy, matrix, and inputs contexts,
// see https://docs.github.com/en/actions/learn-github-actions/contexts#context-availability
func NewInterpeter(
	jobID string,
	job *model.Job,
	matrix map[string]interface{},
	gitCtx *model.GithubContext,
	results map[string]*JobResult,
	vars map[string]string,
) exprparser.Interpreter {
	strategy := make(map[string]interface{})
	if job.Strategy != nil {
		strategy["fail-fast"] = job.Strategy.FailFast
		strategy["max-parallel"] = job.Strategy.MaxParallel
	}

	run := &model.Run{
		Workflow: &model.Workflow{
			Jobs: map[string]*model.Job{},
		},
		JobID: jobID,
	}
	for id, result := range results {
		need := yaml.Node{}
		_ = need.Encode(result.Needs)
		run.Workflow.Jobs[id] = &model.Job{
			RawNeeds: need,
			Result:   result.Result,
			Outputs:  result.Outputs,
		}
	}

	jobs := run.Workflow.Jobs
	jobNeeds := run.Job().Needs()

	using := map[string]exprparser.Needs{}
	for _, need := range jobNeeds {
		if v, ok := jobs[need]; ok {
			using[need] = exprparser.Needs{
				Outputs: v.Outputs,
				Result:  v.Result,
			}
		}
	}

	ee := &exprparser.EvaluationEnvironment{
		Github:   gitCtx,
		Env:      nil, // no need
		Job:      nil, // no need
		Steps:    nil, // no need
		Runner:   nil, // no need
		Secrets:  nil, // no need
		Strategy: strategy,
		Matrix:   matrix,
		Needs:    using,
		Inputs:   nil, // not supported yet
		Vars:     vars,
	}

	config := exprparser.Config{
		Run:        run,
		WorkingDir: "", // WorkingDir is used for the function hashFiles, but it's not needed in the server
		Context:    "job",
	}

	return exprparser.NewInterpeter(ee, config)
}

// JobResult is the minimum set of job results required by the interpreter
type JobResult struct {
	Needs   []string
	Result  string
	Outputs map[string]string
}
157
pkg/jobparser/jobparser.go
Normal file
@@ -0,0 +1,157 @@
package jobparser

import (
	"bytes"
	"fmt"
	"sort"
	"strings"

	"gopkg.in/yaml.v3"

	"github.com/nektos/act/pkg/model"
)

func Parse(content []byte, options ...ParseOption) ([]*SingleWorkflow, error) {
	origin, err := model.ReadWorkflow(bytes.NewReader(content))
	if err != nil {
		return nil, fmt.Errorf("model.ReadWorkflow: %w", err)
	}

	workflow := &SingleWorkflow{}
	if err := yaml.Unmarshal(content, workflow); err != nil {
		return nil, fmt.Errorf("yaml.Unmarshal: %w", err)
	}

	pc := &parseContext{}
	for _, o := range options {
		o(pc)
	}
	results := map[string]*JobResult{}
	for id, job := range origin.Jobs {
		results[id] = &JobResult{
			Needs:   job.Needs(),
			Result:  pc.jobResults[id],
			Outputs: nil, // not supported yet
		}
	}

	var ret []*SingleWorkflow
	ids, jobs, err := workflow.jobs()
	if err != nil {
		return nil, fmt.Errorf("invalid jobs: %w", err)
	}
	for i, id := range ids {
		job := jobs[i]
		matricxes, err := getMatrixes(origin.GetJob(id))
		if err != nil {
			return nil, fmt.Errorf("getMatrixes: %w", err)
		}
		for _, matrix := range matricxes {
			job := job.Clone()
			if job.Name == "" {
				job.Name = id
			}
			job.Name = nameWithMatrix(job.Name, matrix)
			job.Strategy.RawMatrix = encodeMatrix(matrix)
			evaluator := NewExpressionEvaluator(NewInterpeter(id, origin.GetJob(id), matrix, pc.gitContext, results, pc.vars))
			runsOn := origin.GetJob(id).RunsOn()
			for i, v := range runsOn {
				runsOn[i] = evaluator.Interpolate(v)
			}
			job.RawRunsOn = encodeRunsOn(runsOn)
			swf := &SingleWorkflow{
				Name:     workflow.Name,
				RawOn:    workflow.RawOn,
				Env:      workflow.Env,
				Defaults: workflow.Defaults,
			}
			if err := swf.SetJob(id, job); err != nil {
				return nil, fmt.Errorf("SetJob: %w", err)
			}
			ret = append(ret, swf)
		}
	}
	return ret, nil
}

func WithJobResults(results map[string]string) ParseOption {
	return func(c *parseContext) {
		c.jobResults = results
	}
}

func WithGitContext(context *model.GithubContext) ParseOption {
	return func(c *parseContext) {
		c.gitContext = context
	}
}

func WithVars(vars map[string]string) ParseOption {
	return func(c *parseContext) {
		c.vars = vars
	}
}

type parseContext struct {
	jobResults map[string]string
	gitContext *model.GithubContext
	vars       map[string]string
}

type ParseOption func(c *parseContext)

func getMatrixes(job *model.Job) ([]map[string]interface{}, error) {
	ret, err := job.GetMatrixes()
	if err != nil {
		return nil, fmt.Errorf("GetMatrixes: %w", err)
	}
	sort.Slice(ret, func(i, j int) bool {
		return matrixName(ret[i]) < matrixName(ret[j])
	})
	return ret, nil
}

func encodeMatrix(matrix map[string]interface{}) yaml.Node {
	if len(matrix) == 0 {
		return yaml.Node{}
	}
	value := map[string][]interface{}{}
	for k, v := range matrix {
		value[k] = []interface{}{v}
	}
	node := yaml.Node{}
	_ = node.Encode(value)
	return node
}

func encodeRunsOn(runsOn []string) yaml.Node {
	node := yaml.Node{}
	if len(runsOn) == 1 {
		_ = node.Encode(runsOn[0])
	} else {
		_ = node.Encode(runsOn)
	}
	return node
}

func nameWithMatrix(name string, m map[string]interface{}) string {
	if len(m) == 0 {
		return name
	}

	return name + " " + matrixName(m)
}

func matrixName(m map[string]interface{}) string {
	ks := make([]string, 0, len(m))
	for k := range m {
		ks = append(ks, k)
	}
	sort.Strings(ks)
	vs := make([]string, 0, len(m))
	for _, v := range ks {
		vs = append(vs, fmt.Sprint(m[v]))
	}

	return fmt.Sprintf("(%s)", strings.Join(vs, ", "))
}
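A hedged usage sketch for `Parse`: each matrix variant of each job comes back as its own `SingleWorkflow`. The YAML below is an invented example, not one of the testdata files:

```go
package main

import (
	"fmt"

	"github.com/nektos/act/pkg/jobparser"
)

func main() {
	content := []byte(`
name: test
on: push
jobs:
  job1:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        os: [ubuntu-latest, windows-latest]
    steps:
      - run: echo hello
`)
	// Two matrix values -> two SingleWorkflow results.
	workflows, err := jobparser.Parse(content)
	if err != nil {
		panic(err)
	}
	for _, wf := range workflows {
		id, job := wf.Job()
		fmt.Println(id, job.Name) // e.g. "job1 job1 (ubuntu-latest)"
	}
}
```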
76
pkg/jobparser/jobparser_test.go
Normal file
@@ -0,0 +1,76 @@
package jobparser

import (
	"strings"
	"testing"

	"github.com/stretchr/testify/assert"

	"github.com/stretchr/testify/require"

	"gopkg.in/yaml.v3"
)

func TestParse(t *testing.T) {
	tests := []struct {
		name    string
		options []ParseOption
		wantErr bool
	}{
		{
			name:    "multiple_jobs",
			options: nil,
			wantErr: false,
		},
		{
			name:    "multiple_matrix",
			options: nil,
			wantErr: false,
		},
		{
			name:    "has_needs",
			options: nil,
			wantErr: false,
		},
		{
			name:    "has_with",
			options: nil,
			wantErr: false,
		},
		{
			name:    "has_secrets",
			options: nil,
			wantErr: false,
		},
		{
			name:    "empty_step",
			options: nil,
			wantErr: false,
		},
	}
	for _, tt := range tests {
		t.Run(tt.name, func(t *testing.T) {
			content := ReadTestdata(t, tt.name+".in.yaml")
			want := ReadTestdata(t, tt.name+".out.yaml")
			got, err := Parse(content, tt.options...)
			if tt.wantErr {
				require.Error(t, err)
			}
			require.NoError(t, err)

			builder := &strings.Builder{}
			for _, v := range got {
				if builder.Len() > 0 {
					builder.WriteString("---\n")
				}
				encoder := yaml.NewEncoder(builder)
				encoder.SetIndent(2)
				require.NoError(t, encoder.Encode(v))
				id, job := v.Job()
				assert.NotEmpty(t, id)
				assert.NotNil(t, job)
			}
			assert.Equal(t, string(want), builder.String())
		})
	}
}
333
pkg/jobparser/model.go
Normal file
@@ -0,0 +1,333 @@
package jobparser

import (
	"fmt"

	"github.com/nektos/act/pkg/model"
	"gopkg.in/yaml.v3"
)

// SingleWorkflow is a workflow with a single job and a single matrix
type SingleWorkflow struct {
	Name     string            `yaml:"name,omitempty"`
	RawOn    yaml.Node         `yaml:"on,omitempty"`
	Env      map[string]string `yaml:"env,omitempty"`
	RawJobs  yaml.Node         `yaml:"jobs,omitempty"`
	Defaults Defaults          `yaml:"defaults,omitempty"`
}

func (w *SingleWorkflow) Job() (string, *Job) {
	ids, jobs, _ := w.jobs()
	if len(ids) >= 1 {
		return ids[0], jobs[0]
	}
	return "", nil
}

func (w *SingleWorkflow) jobs() ([]string, []*Job, error) {
	ids, jobs, err := parseMappingNode[*Job](&w.RawJobs)
	if err != nil {
		return nil, nil, err
	}

	for _, job := range jobs {
		steps := make([]*Step, 0, len(job.Steps))
		for _, s := range job.Steps {
			if s != nil {
				steps = append(steps, s)
			}
		}
		job.Steps = steps
	}

	return ids, jobs, nil
}

func (w *SingleWorkflow) SetJob(id string, job *Job) error {
	m := map[string]*Job{
		id: job,
	}
	out, err := yaml.Marshal(m)
	if err != nil {
		return err
	}
	node := yaml.Node{}
	if err := yaml.Unmarshal(out, &node); err != nil {
		return err
	}
	if len(node.Content) != 1 || node.Content[0].Kind != yaml.MappingNode {
		return fmt.Errorf("can not set job: %q", out)
	}
	w.RawJobs = *node.Content[0]
	return nil
}

func (w *SingleWorkflow) Marshal() ([]byte, error) {
	return yaml.Marshal(w)
}

type Job struct {
	Name           string                    `yaml:"name,omitempty"`
	RawNeeds       yaml.Node                 `yaml:"needs,omitempty"`
	RawRunsOn      yaml.Node                 `yaml:"runs-on,omitempty"`
	Env            yaml.Node                 `yaml:"env,omitempty"`
	If             yaml.Node                 `yaml:"if,omitempty"`
	Steps          []*Step                   `yaml:"steps,omitempty"`
	TimeoutMinutes string                    `yaml:"timeout-minutes,omitempty"`
	Services       map[string]*ContainerSpec `yaml:"services,omitempty"`
	Strategy       Strategy                  `yaml:"strategy,omitempty"`
	RawContainer   yaml.Node                 `yaml:"container,omitempty"`
	Defaults       Defaults                  `yaml:"defaults,omitempty"`
	Outputs        map[string]string         `yaml:"outputs,omitempty"`
	Uses           string                    `yaml:"uses,omitempty"`
	With           map[string]interface{}    `yaml:"with,omitempty"`
	RawSecrets     yaml.Node                 `yaml:"secrets,omitempty"`
}

func (j *Job) Clone() *Job {
	if j == nil {
		return nil
	}
	return &Job{
		Name:           j.Name,
		RawNeeds:       j.RawNeeds,
		RawRunsOn:      j.RawRunsOn,
		Env:            j.Env,
		If:             j.If,
		Steps:          j.Steps,
		TimeoutMinutes: j.TimeoutMinutes,
		Services:       j.Services,
		Strategy:       j.Strategy,
		RawContainer:   j.RawContainer,
		Defaults:       j.Defaults,
		Outputs:        j.Outputs,
		Uses:           j.Uses,
		With:           j.With,
		RawSecrets:     j.RawSecrets,
	}
}

func (j *Job) Needs() []string {
	return (&model.Job{RawNeeds: j.RawNeeds}).Needs()
}

func (j *Job) EraseNeeds() *Job {
	j.RawNeeds = yaml.Node{}
	return j
}

func (j *Job) RunsOn() []string {
	return (&model.Job{RawRunsOn: j.RawRunsOn}).RunsOn()
}

type Step struct {
	ID               string            `yaml:"id,omitempty"`
	If               yaml.Node         `yaml:"if,omitempty"`
	Name             string            `yaml:"name,omitempty"`
	Uses             string            `yaml:"uses,omitempty"`
	Run              string            `yaml:"run,omitempty"`
	WorkingDirectory string            `yaml:"working-directory,omitempty"`
	Shell            string            `yaml:"shell,omitempty"`
	Env              yaml.Node         `yaml:"env,omitempty"`
	With             map[string]string `yaml:"with,omitempty"`
	ContinueOnError  bool              `yaml:"continue-on-error,omitempty"`
	TimeoutMinutes   string            `yaml:"timeout-minutes,omitempty"`
}

// String gets the name of the step
func (s *Step) String() string {
	if s == nil {
		return ""
	}
	return (&model.Step{
		ID:   s.ID,
		Name: s.Name,
		Uses: s.Uses,
		Run:  s.Run,
	}).String()
}

type ContainerSpec struct {
	Image       string            `yaml:"image,omitempty"`
	Env         map[string]string `yaml:"env,omitempty"`
	Ports       []string          `yaml:"ports,omitempty"`
	Volumes     []string          `yaml:"volumes,omitempty"`
	Options     string            `yaml:"options,omitempty"`
	Credentials map[string]string `yaml:"credentials,omitempty"`
	Cmd         []string          `yaml:"cmd,omitempty"`
}

type Strategy struct {
	FailFastString    string    `yaml:"fail-fast,omitempty"`
	MaxParallelString string    `yaml:"max-parallel,omitempty"`
	RawMatrix         yaml.Node `yaml:"matrix,omitempty"`
}

type Defaults struct {
	Run RunDefaults `yaml:"run,omitempty"`
}

type RunDefaults struct {
	Shell            string `yaml:"shell,omitempty"`
	WorkingDirectory string `yaml:"working-directory,omitempty"`
}

type Event struct {
	Name      string
	acts      map[string][]string
	schedules []map[string]string
}

func (evt *Event) IsSchedule() bool {
	return evt.schedules != nil
}

func (evt *Event) Acts() map[string][]string {
	return evt.acts
}

func (evt *Event) Schedules() []map[string]string {
	return evt.schedules
}

func ParseRawOn(rawOn *yaml.Node) ([]*Event, error) {
|
||||
switch rawOn.Kind {
|
||||
case yaml.ScalarNode:
|
||||
var val string
|
||||
err := rawOn.Decode(&val)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
return []*Event{
|
||||
{Name: val},
|
||||
}, nil
|
||||
case yaml.SequenceNode:
|
||||
var val []interface{}
|
||||
err := rawOn.Decode(&val)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
res := make([]*Event, 0, len(val))
|
||||
for _, v := range val {
|
||||
switch t := v.(type) {
|
||||
case string:
|
||||
res = append(res, &Event{Name: t})
|
||||
default:
|
||||
return nil, fmt.Errorf("invalid type %T", t)
|
||||
}
|
||||
}
|
||||
return res, nil
|
||||
case yaml.MappingNode:
|
||||
events, triggers, err := parseMappingNode[interface{}](rawOn)
|
||||
if err != nil {
|
||||
return nil, err
|
||||
}
|
||||
res := make([]*Event, 0, len(events))
|
||||
for i, k := range events {
|
||||
v := triggers[i]
|
||||
if v == nil {
|
||||
res = append(res, &Event{
|
||||
Name: k,
|
||||
acts: map[string][]string{},
|
||||
})
|
||||
continue
|
||||
}
|
||||
switch t := v.(type) {
|
||||
case string:
|
||||
res = append(res, &Event{
|
||||
Name: k,
|
||||
acts: map[string][]string{},
|
||||
})
|
||||
case []string:
|
||||
res = append(res, &Event{
|
||||
Name: k,
|
||||
acts: map[string][]string{},
|
||||
})
|
||||
case map[string]interface{}:
|
||||
acts := make(map[string][]string, len(t))
|
||||
for act, branches := range t {
|
||||
switch b := branches.(type) {
|
||||
case string:
|
||||
acts[act] = []string{b}
|
||||
case []string:
|
||||
acts[act] = b
|
||||
case []interface{}:
|
||||
acts[act] = make([]string, len(b))
|
||||
for i, v := range b {
|
||||
var ok bool
|
||||
if acts[act][i], ok = v.(string); !ok {
|
||||
return nil, fmt.Errorf("unknown on type: %#v", branches)
|
||||
}
|
||||
}
|
||||
default:
|
||||
return nil, fmt.Errorf("unknown on type: %#v", branches)
|
||||
}
|
||||
}
|
||||
res = append(res, &Event{
|
||||
Name: k,
|
||||
acts: acts,
|
||||
})
|
||||
case []interface{}:
|
||||
if k != "schedule" {
|
||||
return nil, fmt.Errorf("unknown on type: %#v", v)
|
||||
}
|
||||
schedules := make([]map[string]string, len(t))
|
||||
for i, tt := range t {
|
||||
vv, ok := tt.(map[string]interface{})
|
||||
if !ok {
|
||||
return nil, fmt.Errorf("unknown on type: %#v", v)
|
||||
}
|
||||
schedules[i] = make(map[string]string, len(vv))
|
||||
for k, vvv := range vv {
|
||||
var ok bool
|
||||
if schedules[i][k], ok = vvv.(string); !ok {
|
||||
return nil, fmt.Errorf("unknown on type: %#v", v)
|
||||
}
|
||||
}
|
||||
}
|
||||
res = append(res, &Event{
|
||||
Name: k,
|
||||
schedules: schedules,
|
||||
})
|
||||
default:
|
||||
return nil, fmt.Errorf("unknown on type: %#v", v)
|
||||
}
|
||||
}
|
||||
return res, nil
|
||||
default:
|
||||
return nil, fmt.Errorf("unknown on type: %v", rawOn.Kind)
|
||||
}
|
||||
}
|
||||
|
||||
// parseMappingNode parse a mapping node and preserve order.
|
||||
func parseMappingNode[T any](node *yaml.Node) ([]string, []T, error) {
|
||||
if node.Kind != yaml.MappingNode {
|
||||
return nil, nil, fmt.Errorf("input node is not a mapping node")
|
||||
}
|
||||
|
||||
var scalars []string
|
||||
var datas []T
|
||||
expectKey := true
|
||||
for _, item := range node.Content {
|
||||
if expectKey {
|
||||
if item.Kind != yaml.ScalarNode {
|
||||
return nil, nil, fmt.Errorf("not a valid scalar node: %v", item.Value)
|
||||
}
|
||||
scalars = append(scalars, item.Value)
|
||||
expectKey = false
|
||||
} else {
|
||||
var val T
|
||||
if err := item.Decode(&val); err != nil {
|
||||
return nil, nil, err
|
||||
}
|
||||
datas = append(datas, val)
|
||||
expectKey = true
|
||||
}
|
||||
}
|
||||
|
||||
if len(scalars) != len(datas) {
|
||||
return nil, nil, fmt.Errorf("invalid definition of on: %v", node.Value)
|
||||
}
|
||||
|
||||
return scalars, datas, nil
|
||||
}
|
||||
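For context, a minimal standalone sketch (not part of the diff) of the idea behind parseMappingNode: gopkg.in/yaml.v3 keeps the keys of a mapping in document order inside the node's Content slice, keys alternating with values, which is why the parser returns parallel slices instead of a Go map (which would lose the order).

package main

import (
	"fmt"

	"gopkg.in/yaml.v3"
)

func main() {
	src := []byte("jobs:\n  zzz:\n    runs-on: linux\n  aaa:\n    runs-on: linux\n")
	var doc struct {
		Jobs yaml.Node `yaml:"jobs"`
	}
	if err := yaml.Unmarshal(src, &doc); err != nil {
		panic(err)
	}
	// Content holds key, value, key, value, ... so the keys sit at even indexes.
	for i := 0; i < len(doc.Jobs.Content); i += 2 {
		fmt.Println(doc.Jobs.Content[i].Value) // prints "zzz", then "aaa"
	}
}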
306
pkg/jobparser/model_test.go
Normal file
@@ -0,0 +1,306 @@
package jobparser

import (
	"fmt"
	"strings"
	"testing"

	"github.com/nektos/act/pkg/model"

	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
	"gopkg.in/yaml.v3"
)

func TestParseRawOn(t *testing.T) {
	kases := []struct {
		input  string
		result []*Event
	}{
		{
			input: "on: issue_comment",
			result: []*Event{
				{
					Name: "issue_comment",
				},
			},
		},
		{
			input: "on:\n  push",
			result: []*Event{
				{
					Name: "push",
				},
			},
		},
		{
			input: "on:\n  - push\n  - pull_request",
			result: []*Event{
				{
					Name: "push",
				},
				{
					Name: "pull_request",
				},
			},
		},
		{
			input: "on:\n  push:\n    branches:\n      - master",
			result: []*Event{
				{
					Name: "push",
					acts: map[string][]string{
						"branches": {
							"master",
						},
					},
				},
			},
		},
		{
			input: "on:\n  branch_protection_rule:\n    types: [created, deleted]",
			result: []*Event{
				{
					Name: "branch_protection_rule",
					acts: map[string][]string{
						"types": {
							"created",
							"deleted",
						},
					},
				},
			},
		},
		{
			input: "on:\n  project:\n    types: [created, deleted]\n  milestone:\n    types: [opened, deleted]",
			result: []*Event{
				{
					Name: "project",
					acts: map[string][]string{
						"types": {
							"created",
							"deleted",
						},
					},
				},
				{
					Name: "milestone",
					acts: map[string][]string{
						"types": {
							"opened",
							"deleted",
						},
					},
				},
			},
		},
		{
			input: "on:\n  pull_request:\n    types:\n      - opened\n    branches:\n      - 'releases/**'",
			result: []*Event{
				{
					Name: "pull_request",
					acts: map[string][]string{
						"types": {
							"opened",
						},
						"branches": {
							"releases/**",
						},
					},
				},
			},
		},
		{
			input: "on:\n  push:\n    branches:\n      - main\n  pull_request:\n    types:\n      - opened\n    branches:\n      - '**'",
			result: []*Event{
				{
					Name: "push",
					acts: map[string][]string{
						"branches": {
							"main",
						},
					},
				},
				{
					Name: "pull_request",
					acts: map[string][]string{
						"types": {
							"opened",
						},
						"branches": {
							"**",
						},
					},
				},
			},
		},
		{
			input: "on:\n  push:\n    branches:\n      - 'main'\n      - 'releases/**'",
			result: []*Event{
				{
					Name: "push",
					acts: map[string][]string{
						"branches": {
							"main",
							"releases/**",
						},
					},
				},
			},
		},
		{
			input: "on:\n  push:\n    tags:\n      - v1.**",
			result: []*Event{
				{
					Name: "push",
					acts: map[string][]string{
						"tags": {
							"v1.**",
						},
					},
				},
			},
		},
		{
			input: "on: [pull_request, workflow_dispatch]",
			result: []*Event{
				{
					Name: "pull_request",
				},
				{
					Name: "workflow_dispatch",
				},
			},
		},
		{
			input: "on:\n  schedule:\n    - cron: '20 6 * * *'",
			result: []*Event{
				{
					Name: "schedule",
					schedules: []map[string]string{
						{
							"cron": "20 6 * * *",
						},
					},
				},
			},
		},
	}
	for _, kase := range kases {
		t.Run(kase.input, func(t *testing.T) {
			origin, err := model.ReadWorkflow(strings.NewReader(kase.input))
			assert.NoError(t, err)

			events, err := ParseRawOn(&origin.RawOn)
			assert.NoError(t, err)
			assert.EqualValues(t, kase.result, events, fmt.Sprintf("%#v", events))
		})
	}
}

func TestSingleWorkflow_SetJob(t *testing.T) {
	t.Run("erase needs", func(t *testing.T) {
		content := ReadTestdata(t, "erase_needs.in.yaml")
		want := ReadTestdata(t, "erase_needs.out.yaml")
		swf, err := Parse(content)
		require.NoError(t, err)
		builder := &strings.Builder{}
		for _, v := range swf {
			id, job := v.Job()
			require.NoError(t, v.SetJob(id, job.EraseNeeds()))

			if builder.Len() > 0 {
				builder.WriteString("---\n")
			}
			encoder := yaml.NewEncoder(builder)
			encoder.SetIndent(2)
			require.NoError(t, encoder.Encode(v))
		}
		assert.Equal(t, string(want), builder.String())
	})
}

func TestParseMappingNode(t *testing.T) {
	tests := []struct {
		input   string
		scalars []string
		datas   []interface{}
	}{
		{
			input:   "on:\n  push:\n    branches:\n      - master",
			scalars: []string{"push"},
			datas: []interface{}{
				map[string]interface{}{
					"branches": []interface{}{"master"},
				},
			},
		},
		{
			input:   "on:\n  branch_protection_rule:\n    types: [created, deleted]",
			scalars: []string{"branch_protection_rule"},
			datas: []interface{}{
				map[string]interface{}{
					"types": []interface{}{"created", "deleted"},
				},
			},
		},
		{
			input:   "on:\n  project:\n    types: [created, deleted]\n  milestone:\n    types: [opened, deleted]",
			scalars: []string{"project", "milestone"},
			datas: []interface{}{
				map[string]interface{}{
					"types": []interface{}{"created", "deleted"},
				},
				map[string]interface{}{
					"types": []interface{}{"opened", "deleted"},
				},
			},
		},
		{
			input:   "on:\n  pull_request:\n    types:\n      - opened\n    branches:\n      - 'releases/**'",
			scalars: []string{"pull_request"},
			datas: []interface{}{
				map[string]interface{}{
					"types":    []interface{}{"opened"},
					"branches": []interface{}{"releases/**"},
				},
			},
		},
		{
			input:   "on:\n  push:\n    branches:\n      - main\n  pull_request:\n    types:\n      - opened\n    branches:\n      - '**'",
			scalars: []string{"push", "pull_request"},
			datas: []interface{}{
				map[string]interface{}{
					"branches": []interface{}{"main"},
				},
				map[string]interface{}{
					"types":    []interface{}{"opened"},
					"branches": []interface{}{"**"},
				},
			},
		},
		{
			input:   "on:\n  schedule:\n    - cron: '20 6 * * *'",
			scalars: []string{"schedule"},
			datas: []interface{}{
				[]interface{}{map[string]interface{}{
					"cron": "20 6 * * *",
				}},
			},
		},
	}

	for _, test := range tests {
		t.Run(test.input, func(t *testing.T) {
			workflow, err := model.ReadWorkflow(strings.NewReader(test.input))
			assert.NoError(t, err)

			scalars, datas, err := parseMappingNode[interface{}](&workflow.RawOn)
			assert.NoError(t, err)
			assert.EqualValues(t, test.scalars, scalars, fmt.Sprintf("%#v", scalars))
			assert.EqualValues(t, test.datas, datas, fmt.Sprintf("%#v", datas))
		})
	}
}
8
pkg/jobparser/testdata/empty_step.in.yaml
vendored
Normal file
@@ -0,0 +1,8 @@
name: test
jobs:
  job1:
    name: job1
    runs-on: linux
    steps:
      - run: echo job-1
      -
7
pkg/jobparser/testdata/empty_step.out.yaml
vendored
Normal file
@@ -0,0 +1,7 @@
name: test
jobs:
  job1:
    name: job1
    runs-on: linux
    steps:
      - run: echo job-1
16
pkg/jobparser/testdata/erase_needs.in.yaml
vendored
Normal file
@@ -0,0 +1,16 @@
name: test
jobs:
  job1:
    runs-on: linux
    steps:
      - run: uname -a
  job2:
    runs-on: linux
    steps:
      - run: uname -a
    needs: job1
  job3:
    runs-on: linux
    steps:
      - run: uname -a
    needs: [job1, job2]
23
pkg/jobparser/testdata/erase_needs.out.yaml
vendored
Normal file
@@ -0,0 +1,23 @@
name: test
jobs:
  job1:
    name: job1
    runs-on: linux
    steps:
      - run: uname -a
---
name: test
jobs:
  job2:
    name: job2
    runs-on: linux
    steps:
      - run: uname -a
---
name: test
jobs:
  job3:
    name: job3
    runs-on: linux
    steps:
      - run: uname -a
16
pkg/jobparser/testdata/has_needs.in.yaml
vendored
Normal file
@@ -0,0 +1,16 @@
name: test
jobs:
  job1:
    runs-on: linux
    steps:
      - run: uname -a
  job2:
    runs-on: linux
    steps:
      - run: uname -a
    needs: job1
  job3:
    runs-on: linux
    steps:
      - run: uname -a
    needs: [job1, job2]
25
pkg/jobparser/testdata/has_needs.out.yaml
vendored
Normal file
@@ -0,0 +1,25 @@
name: test
jobs:
  job1:
    name: job1
    runs-on: linux
    steps:
      - run: uname -a
---
name: test
jobs:
  job2:
    name: job2
    needs: job1
    runs-on: linux
    steps:
      - run: uname -a
---
name: test
jobs:
  job3:
    name: job3
    needs: [job1, job2]
    runs-on: linux
    steps:
      - run: uname -a
14
pkg/jobparser/testdata/has_secrets.in.yaml
vendored
Normal file
@@ -0,0 +1,14 @@
name: test
jobs:
  job1:
    name: job1
    runs-on: linux
    uses: .gitea/workflows/build.yml
    secrets:
      secret: hideme

  job2:
    name: job2
    runs-on: linux
    uses: .gitea/workflows/build.yml
    secrets: inherit
16
pkg/jobparser/testdata/has_secrets.out.yaml
vendored
Normal file
@@ -0,0 +1,16 @@
name: test
jobs:
  job1:
    name: job1
    runs-on: linux
    uses: .gitea/workflows/build.yml
    secrets:
      secret: hideme
---
name: test
jobs:
  job2:
    name: job2
    runs-on: linux
    uses: .gitea/workflows/build.yml
    secrets: inherit
15
pkg/jobparser/testdata/has_with.in.yaml
vendored
Normal file
@@ -0,0 +1,15 @@
name: test
jobs:
  job1:
    name: job1
    runs-on: linux
    uses: .gitea/workflows/build.yml
    with:
      package: service

  job2:
    name: job2
    runs-on: linux
    uses: .gitea/workflows/build.yml
    with:
      package: module
17
pkg/jobparser/testdata/has_with.out.yaml
vendored
Normal file
@@ -0,0 +1,17 @@
name: test
jobs:
  job1:
    name: job1
    runs-on: linux
    uses: .gitea/workflows/build.yml
    with:
      package: service
---
name: test
jobs:
  job2:
    name: job2
    runs-on: linux
    uses: .gitea/workflows/build.yml
    with:
      package: module
22
pkg/jobparser/testdata/multiple_jobs.in.yaml
vendored
Normal file
@@ -0,0 +1,22 @@
name: test
jobs:
  zzz:
    runs-on: linux
    steps:
      - run: echo zzz
  job1:
    runs-on: linux
    steps:
      - run: uname -a && go version
  job2:
    runs-on: linux
    steps:
      - run: uname -a && go version
  job3:
    runs-on: linux
    steps:
      - run: uname -a && go version
  aaa:
    runs-on: linux
    steps:
      - run: uname -a && go version
39
pkg/jobparser/testdata/multiple_jobs.out.yaml
vendored
Normal file
@@ -0,0 +1,39 @@
name: test
jobs:
  zzz:
    name: zzz
    runs-on: linux
    steps:
      - run: echo zzz
---
name: test
jobs:
  job1:
    name: job1
    runs-on: linux
    steps:
      - run: uname -a && go version
---
name: test
jobs:
  job2:
    name: job2
    runs-on: linux
    steps:
      - run: uname -a && go version
---
name: test
jobs:
  job3:
    name: job3
    runs-on: linux
    steps:
      - run: uname -a && go version
---
name: test
jobs:
  aaa:
    name: aaa
    runs-on: linux
    steps:
      - run: uname -a && go version
13
pkg/jobparser/testdata/multiple_matrix.in.yaml
vendored
Normal file
@@ -0,0 +1,13 @@
name: test
jobs:
  job1:
    strategy:
      matrix:
        os: [ubuntu-22.04, ubuntu-20.04]
        version: [1.17, 1.18, 1.19]
    runs-on: ${{ matrix.os }}
    steps:
      - uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.version }}
      - run: uname -a && go version
101
pkg/jobparser/testdata/multiple_matrix.out.yaml
vendored
Normal file
@@ -0,0 +1,101 @@
name: test
jobs:
  job1:
    name: job1 (ubuntu-20.04, 1.17)
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.version }}
      - run: uname -a && go version
    strategy:
      matrix:
        os:
          - ubuntu-20.04
        version:
          - 1.17
---
name: test
jobs:
  job1:
    name: job1 (ubuntu-20.04, 1.18)
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.version }}
      - run: uname -a && go version
    strategy:
      matrix:
        os:
          - ubuntu-20.04
        version:
          - 1.18
---
name: test
jobs:
  job1:
    name: job1 (ubuntu-20.04, 1.19)
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.version }}
      - run: uname -a && go version
    strategy:
      matrix:
        os:
          - ubuntu-20.04
        version:
          - 1.19
---
name: test
jobs:
  job1:
    name: job1 (ubuntu-22.04, 1.17)
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.version }}
      - run: uname -a && go version
    strategy:
      matrix:
        os:
          - ubuntu-22.04
        version:
          - 1.17
---
name: test
jobs:
  job1:
    name: job1 (ubuntu-22.04, 1.18)
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.version }}
      - run: uname -a && go version
    strategy:
      matrix:
        os:
          - ubuntu-22.04
        version:
          - 1.18
---
name: test
jobs:
  job1:
    name: job1 (ubuntu-22.04, 1.19)
    runs-on: ubuntu-22.04
    steps:
      - uses: actions/setup-go@v3
        with:
          go-version: ${{ matrix.version }}
      - run: uname -a && go version
    strategy:
      matrix:
        os:
          - ubuntu-22.04
        version:
          - 1.19
18
pkg/jobparser/testdata_test.go
Normal file
@@ -0,0 +1,18 @@
package jobparser

import (
	"embed"
	"path/filepath"
	"testing"

	"github.com/stretchr/testify/require"
)

//go:embed testdata
var testdata embed.FS

func ReadTestdata(t *testing.T, name string) []byte {
	content, err := testdata.ReadFile(filepath.Join("testdata", name))
	require.NoError(t, err)
	return content
}
@@ -20,7 +20,7 @@ func (a *ActionRunsUsing) UnmarshalYAML(unmarshal func(interface{}) error) error
	// Force input to lowercase for case insensitive comparison
	format := ActionRunsUsing(strings.ToLower(using))
	switch format {
	case ActionRunsUsingNode20, ActionRunsUsingNode16, ActionRunsUsingNode12, ActionRunsUsingDocker, ActionRunsUsingComposite:
	case ActionRunsUsingNode20, ActionRunsUsingNode16, ActionRunsUsingNode12, ActionRunsUsingDocker, ActionRunsUsingComposite, ActionRunsUsingGo:
		*a = format
	default:
		return fmt.Errorf(fmt.Sprintf("The runs.using key in action.yml must be one of: %v, got %s", []string{
@@ -29,6 +29,7 @@ func (a *ActionRunsUsing) UnmarshalYAML(unmarshal func(interface{}) error) error
			ActionRunsUsingNode12,
			ActionRunsUsingNode16,
			ActionRunsUsingNode20,
			ActionRunsUsingGo,
		}, format))
	}
	return nil
@@ -45,6 +46,8 @@ const (
	ActionRunsUsingDocker = "docker"
	// ActionRunsUsingComposite for running composite
	ActionRunsUsingComposite = "composite"
	// ActionRunsUsingGo for running with go
	ActionRunsUsingGo = "go"
)

// ActionRuns are a field in Action
@@ -162,6 +162,13 @@ func NewWorkflowPlanner(path string, noWorkflowRecurse bool) (WorkflowPlanner, error) {
	return wp, nil
}

// CombineWorkflowPlanner combines workflows to a WorkflowPlanner
func CombineWorkflowPlanner(workflows ...*Workflow) WorkflowPlanner {
	return &workflowPlanner{
		workflows: workflows,
	}
}

func NewSingleWorkflowPlanner(name string, f io.Reader) (WorkflowPlanner, error) {
	wp := new(workflowPlanner)
@@ -66,6 +66,30 @@ func (w *Workflow) OnEvent(event string) interface{} {
	return nil
}

func (w *Workflow) OnSchedule() []string {
	schedules := w.OnEvent("schedule")
	if schedules == nil {
		return []string{}
	}

	switch val := schedules.(type) {
	case []interface{}:
		allSchedules := []string{}
		for _, v := range val {
			for k, cron := range v.(map[string]interface{}) {
				if k != "cron" {
					continue
				}
				allSchedules = append(allSchedules, cron.(string))
			}
		}
		return allSchedules
	default:
	}

	return []string{}
}

type WorkflowDispatchInput struct {
	Description string `yaml:"description"`
	Required    bool   `yaml:"required"`
@@ -343,7 +367,7 @@ func environment(yml yaml.Node) map[string]string {
	return env
}

// Environment returns string-based key=value map for a job
// Environments returns string-based key=value map for a job
func (j *Job) Environment() map[string]string {
	return environment(j.Env)
}
@@ -549,10 +573,14 @@ type ContainerSpec struct {
	Args  string
	Name  string
	Reuse bool

	// Gitea specific
	Cmd []string `yaml:"cmd"`
}

// Step is the structure of one step in a job
type Step struct {
	Number int       `yaml:"-"`
	ID     string    `yaml:"id"`
	If     yaml.Node `yaml:"if"`
	Name   string    `yaml:"name"`
@@ -578,7 +606,7 @@ func (s *Step) String() string {
	return s.ID
}

// Environment returns string-based key=value map for a step
// Environments returns string-based key=value map for a step
func (s *Step) Environment() map[string]string {
	return environment(s.Env)
}
@@ -599,7 +627,7 @@ func (s *Step) GetEnv() map[string]string {
func (s *Step) ShellCommand() string {
	shellCommand := ""

	//Reference: https://github.com/actions/runner/blob/8109c962f09d9acc473d92c595ff43afceddb347/src/Runner.Worker/Handlers/ScriptHandlerHelpers.cs#L9-L17
	// Reference: https://github.com/actions/runner/blob/8109c962f09d9acc473d92c595ff43afceddb347/src/Runner.Worker/Handlers/ScriptHandlerHelpers.cs#L9-L17
	switch s.Shell {
	case "", "bash":
		shellCommand = "bash --noprofile --norc -e -o pipefail {0}"
@@ -7,6 +7,88 @@ import (
	"github.com/stretchr/testify/assert"
)

func TestReadWorkflow_ScheduleEvent(t *testing.T) {
	yaml := `
name: local-action-docker-url
on:
  schedule:
    - cron: '30 5 * * 1,3'
    - cron: '30 5 * * 2,4'

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: ./actions/docker-url
`

	workflow, err := ReadWorkflow(strings.NewReader(yaml))
	assert.NoError(t, err, "read workflow should succeed")

	schedules := workflow.OnEvent("schedule")
	assert.Len(t, schedules, 2)

	newSchedules := workflow.OnSchedule()
	assert.Len(t, newSchedules, 2)

	assert.Equal(t, "30 5 * * 1,3", newSchedules[0])
	assert.Equal(t, "30 5 * * 2,4", newSchedules[1])

	yaml = `
name: local-action-docker-url
on:
  schedule:
    test: '30 5 * * 1,3'

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: ./actions/docker-url
`

	workflow, err = ReadWorkflow(strings.NewReader(yaml))
	assert.NoError(t, err, "read workflow should succeed")

	newSchedules = workflow.OnSchedule()
	assert.Len(t, newSchedules, 0)

	yaml = `
name: local-action-docker-url
on:
  schedule:

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: ./actions/docker-url
`

	workflow, err = ReadWorkflow(strings.NewReader(yaml))
	assert.NoError(t, err, "read workflow should succeed")

	newSchedules = workflow.OnSchedule()
	assert.Len(t, newSchedules, 0)

	yaml = `
name: local-action-docker-url
on: [push, tag]

jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: ./actions/docker-url
`

	workflow, err = ReadWorkflow(strings.NewReader(yaml))
	assert.NoError(t, err, "read workflow should succeed")

	newSchedules = workflow.OnSchedule()
	assert.Len(t, newSchedules, 0)
}

func TestReadWorkflow_StringEvent(t *testing.T) {
	yaml := `
name: local-action-docker-url
@@ -197,6 +197,21 @@ func runActionImpl(step actionStep, actionDir string, remoteAction *remoteAction) common.Executor {
		}

		return execAsComposite(step)(ctx)
	case model.ActionRunsUsingGo:
		if err := maybeCopyToActionDir(ctx, step, actionDir, actionPath, containerActionDir); err != nil {
			return err
		}

		rc.ApplyExtraPath(ctx, step.getEnv())

		execFileName := fmt.Sprintf("%s.out", action.Runs.Main)
		buildArgs := []string{"go", "build", "-o", execFileName, action.Runs.Main}
		execArgs := []string{filepath.Join(containerActionDir, execFileName)}

		return common.NewPipelineExecutor(
			rc.execJobContainer(buildArgs, *step.getEnv(), "", containerActionDir),
			rc.execJobContainer(execArgs, *step.getEnv(), "", ""),
		)(ctx)
	default:
		return fmt.Errorf(fmt.Sprintf("The runs.using key must be one of: %v, got %s", []string{
			model.ActionRunsUsingDocker,
@@ -204,6 +219,7 @@ func runActionImpl(step actionStep, actionDir string, remoteAction *remoteAction) common.Executor {
			model.ActionRunsUsingNode16,
			model.ActionRunsUsingNode20,
			model.ActionRunsUsingComposite,
			model.ActionRunsUsingGo,
		}, action.Runs.Using))
	}
}
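As a rough illustration (an assumption about the equivalent steps, not code taken from the diff), the "go" action flow above amounts to compiling the action's main file inside the job container and then executing the resulting binary with the step's environment:

package sketch

import (
	"fmt"
	"os/exec"
	"path/filepath"
)

// RunGoAction mirrors the two executors above as plain exec calls:
// first "go build -o <main>.out <main>" in the action directory,
// then running the built binary.
func RunGoAction(containerActionDir, mainFile string) error {
	execFileName := fmt.Sprintf("%s.out", mainFile)
	build := exec.Command("go", "build", "-o", execFileName, mainFile)
	build.Dir = containerActionDir // same working dir the runner passes to execJobContainer
	if err := build.Run(); err != nil {
		return err
	}
	return exec.Command(filepath.Join(containerActionDir, execFileName)).Run()
}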
@@ -399,23 +415,25 @@ func newStepContainer(ctx context.Context, step step, image string, cmd []string, entrypoint []string) container.Container {
		networkMode = "default"
	}
	stepContainer := container.NewContainer(&container.NewContainerInput{
		Cmd:         cmd,
		Entrypoint:  entrypoint,
		WorkingDir:  rc.JobContainer.ToContainerPath(rc.Config.Workdir),
		Image:       image,
		Username:    rc.Config.Secrets["DOCKER_USERNAME"],
		Password:    rc.Config.Secrets["DOCKER_PASSWORD"],
		Name:        createContainerName(rc.jobContainerName(), stepModel.ID),
		Env:         envList,
		Mounts:      mounts,
		NetworkMode: networkMode,
		Binds:       binds,
		Stdout:      logWriter,
		Stderr:      logWriter,
		Privileged:  rc.Config.Privileged,
		UsernsMode:  rc.Config.UsernsMode,
		Platform:    rc.Config.ContainerArchitecture,
		Options:     rc.Config.ContainerOptions,
		Cmd:          cmd,
		Entrypoint:   entrypoint,
		WorkingDir:   rc.JobContainer.ToContainerPath(rc.Config.Workdir),
		Image:        image,
		Username:     rc.Config.Secrets["DOCKER_USERNAME"],
		Password:     rc.Config.Secrets["DOCKER_PASSWORD"],
		Name:         createSimpleContainerName(rc.jobContainerName(), "STEP-"+stepModel.ID),
		Env:          envList,
		Mounts:       mounts,
		NetworkMode:  networkMode,
		Binds:        binds,
		Stdout:       logWriter,
		Stderr:       logWriter,
		Privileged:   rc.Config.Privileged,
		UsernsMode:   rc.Config.UsernsMode,
		Platform:     rc.Config.ContainerArchitecture,
		Options:      rc.Config.ContainerOptions,
		AutoRemove:   rc.Config.AutoRemove,
		ValidVolumes: rc.Config.ValidVolumes,
	})
	return stepContainer
}
@@ -491,7 +509,8 @@ func hasPreStep(step actionStep) common.Conditional {
		return action.Runs.Using == model.ActionRunsUsingComposite ||
			((action.Runs.Using == model.ActionRunsUsingNode12 ||
				action.Runs.Using == model.ActionRunsUsingNode16 ||
				action.Runs.Using == model.ActionRunsUsingNode20) &&
				action.Runs.Using == model.ActionRunsUsingNode20 ||
				action.Runs.Using == model.ActionRunsUsingGo) &&
				action.Runs.Pre != "")
	}
}
@@ -550,6 +569,43 @@ func runPreStep(step actionStep) common.Executor {
		}
		return fmt.Errorf("missing steps in composite action")

	case model.ActionRunsUsingGo:
		// defaults in pre steps were missing, however provided inputs are available
		populateEnvsFromInput(ctx, step.getEnv(), action, rc)
		// todo: refactor into step
		var actionDir string
		var actionPath string
		if _, ok := step.(*stepActionRemote); ok {
			actionPath = newRemoteAction(stepModel.Uses).Path
			actionDir = fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(stepModel.Uses))
		} else {
			actionDir = filepath.Join(rc.Config.Workdir, stepModel.Uses)
			actionPath = ""
		}

		actionLocation := ""
		if actionPath != "" {
			actionLocation = path.Join(actionDir, actionPath)
		} else {
			actionLocation = actionDir
		}

		_, containerActionDir := getContainerActionPaths(stepModel, actionLocation, rc)

		if err := maybeCopyToActionDir(ctx, step, actionDir, actionPath, containerActionDir); err != nil {
			return err
		}

		rc.ApplyExtraPath(ctx, step.getEnv())

		execFileName := fmt.Sprintf("%s.out", action.Runs.Pre)
		buildArgs := []string{"go", "build", "-o", execFileName, action.Runs.Pre}
		execArgs := []string{filepath.Join(containerActionDir, execFileName)}

		return common.NewPipelineExecutor(
			rc.execJobContainer(buildArgs, *step.getEnv(), "", containerActionDir),
			rc.execJobContainer(execArgs, *step.getEnv(), "", ""),
		)(ctx)
	default:
		return nil
	}
@@ -587,7 +643,8 @@ func hasPostStep(step actionStep) common.Conditional {
		return action.Runs.Using == model.ActionRunsUsingComposite ||
			((action.Runs.Using == model.ActionRunsUsingNode12 ||
				action.Runs.Using == model.ActionRunsUsingNode16 ||
				action.Runs.Using == model.ActionRunsUsingNode20) &&
				action.Runs.Using == model.ActionRunsUsingNode20 ||
				action.Runs.Using == model.ActionRunsUsingGo) &&
				action.Runs.Post != "")
	}
}
@@ -643,6 +700,19 @@ func runPostStep(step actionStep) common.Executor {
		}
		return fmt.Errorf("missing steps in composite action")

	case model.ActionRunsUsingGo:
		populateEnvsFromSavedState(step.getEnv(), step, rc)
		rc.ApplyExtraPath(ctx, step.getEnv())

		execFileName := fmt.Sprintf("%s.out", action.Runs.Post)
		buildArgs := []string{"go", "build", "-o", execFileName, action.Runs.Post}
		execArgs := []string{filepath.Join(containerActionDir, execFileName)}

		return common.NewPipelineExecutor(
			rc.execJobContainer(buildArgs, *step.getEnv(), "", containerActionDir),
			rc.execJobContainer(execArgs, *step.getEnv(), "", ""),
		)(ctx)

	default:
		return nil
	}
@@ -6,6 +6,7 @@ import (
	"crypto/rand"
	"encoding/hex"
	"errors"
	"fmt"
	"io"
	"io/fs"
	"path"
@@ -42,7 +43,17 @@ func (c GoGitActionCache) Fetch(ctx context.Context, cacheDir, url, ref, token string) (string, error) {
		return "", err
	}
	branchName := hex.EncodeToString(tmpBranch)

	var refSpec config.RefSpec
	spec := config.RefSpec(ref + ":" + branchName)
	tagOrSha := false
	if spec.IsExactSHA1() {
		refSpec = spec
	} else if strings.HasPrefix(ref, "refs/") {
		refSpec = config.RefSpec(ref + ":refs/heads/" + branchName)
	} else {
		tagOrSha = true
		refSpec = config.RefSpec("refs/*/" + ref + ":refs/heads/*/" + branchName)
	}
	var auth transport.AuthMethod
	if token != "" {
		auth = &http.BasicAuth{
@@ -60,17 +71,35 @@ func (c GoGitActionCache) Fetch(ctx context.Context, cacheDir, url, ref, token string) (string, error) {
		return "", err
	}
	defer func() {
		_ = gogitrepo.DeleteBranch(branchName)
		if refs, err := gogitrepo.References(); err == nil {
			_ = refs.ForEach(func(r *plumbing.Reference) error {
				if strings.Contains(r.Name().String(), branchName) {
					return gogitrepo.DeleteBranch(r.Name().String())
				}
				return nil
			})
		}
	}()
	if err := remote.FetchContext(ctx, &git.FetchOptions{
		RefSpecs: []config.RefSpec{
			config.RefSpec(ref + ":" + branchName),
			refSpec,
		},
		Auth:  auth,
		Force: true,
	}); err != nil {
		if tagOrSha && errors.Is(err, git.NoErrAlreadyUpToDate) {
			return "", fmt.Errorf("couldn't find remote ref \"%s\"", ref)
		}
		return "", err
	}
	if tagOrSha {
		for _, prefix := range []string{"refs/heads/tags/", "refs/heads/heads/"} {
			hash, err := gogitrepo.ResolveRevision(plumbing.Revision(prefix + branchName))
			if err == nil {
				return hash.String(), nil
			}
		}
	}
	hash, err := gogitrepo.ResolveRevision(plumbing.Revision(branchName))
	if err != nil {
		return "", err
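To make the three refspec branches above concrete, here is a small sketch (an assumption, mirroring the branching in Fetch with the same go-git config package, not code from the diff) of how a ref is classified before fetching:

package main

import (
	"fmt"
	"strings"

	"github.com/go-git/go-git/v5/config"
)

// classify mirrors the decision in Fetch: an exact SHA is fetched as-is,
// a fully qualified ref goes to a single temporary branch, and anything
// else (a bare tag or branch name) is fetched with a wildcard refspec.
func classify(ref, branchName string) config.RefSpec {
	spec := config.RefSpec(ref + ":" + branchName)
	switch {
	case spec.IsExactSHA1():
		return spec
	case strings.HasPrefix(ref, "refs/"):
		return config.RefSpec(ref + ":refs/heads/" + branchName)
	default:
		return config.RefSpec("refs/*/" + ref + ":refs/heads/*/" + branchName)
	}
}

func main() {
	fmt.Println(classify("main", "abc123"))            // refs/*/main:refs/heads/*/abc123
	fmt.Println(classify("refs/heads/main", "abc123")) // refs/heads/main:refs/heads/abc123
}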
@@ -1,41 +0,0 @@
package runner

import (
	"context"
	"io"
	"path"

	git "github.com/go-git/go-git/v5"
	"github.com/go-git/go-git/v5/plumbing"
)

type GoGitActionCacheOfflineMode struct {
	Parent GoGitActionCache
}

func (c GoGitActionCacheOfflineMode) Fetch(ctx context.Context, cacheDir, url, ref, token string) (string, error) {
	sha, fetchErr := c.Parent.Fetch(ctx, cacheDir, url, ref, token)
	gitPath := path.Join(c.Parent.Path, safeFilename(cacheDir)+".git")
	gogitrepo, err := git.PlainOpen(gitPath)
	if err != nil {
		return "", fetchErr
	}
	refName := plumbing.ReferenceName("refs/action-cache-offline/" + ref)
	r, err := gogitrepo.Reference(refName, true)
	if fetchErr == nil {
		if err != nil || sha != r.Hash().String() {
			if err == nil {
				refName = r.Name()
			}
			ref := plumbing.NewHashReference(refName, plumbing.NewHash(sha))
			_ = gogitrepo.Storer.SetReference(ref)
		}
	} else if err == nil {
		return r.Hash().String(), nil
	}
	return sha, fetchErr
}

func (c GoGitActionCacheOfflineMode) GetTarArchive(ctx context.Context, cacheDir, sha, includePrefix string) (io.ReadCloser, error) {
	return c.Parent.GetTarArchive(ctx, cacheDir, sha, includePrefix)
}
@@ -18,60 +18,20 @@ func TestActionCache(t *testing.T) {
		Path: os.TempDir(),
	}
	ctx := context.Background()
	cacheDir := "nektos/act-test-actions"
	repo := "https://github.com/nektos/act-test-actions"
	refs := []struct {
		Name     string
		CacheDir string
		Repo     string
		Ref      string
	}{
		{
			Name:     "Fetch Branch Name",
			CacheDir: cacheDir,
			Repo:     repo,
			Ref:      "main",
		},
		{
			Name:     "Fetch Branch Name Absolutely",
			CacheDir: cacheDir,
			Repo:     repo,
			Ref:      "refs/heads/main",
		},
		{
			Name:     "Fetch HEAD",
			CacheDir: cacheDir,
			Repo:     repo,
			Ref:      "HEAD",
		},
		{
			Name:     "Fetch Sha",
			CacheDir: cacheDir,
			Repo:     repo,
			Ref:      "de984ca37e4df4cb9fd9256435a3b82c4a2662b1",
		},
	}
	for _, c := range refs {
		t.Run(c.Name, func(t *testing.T) {
			sha, err := cache.Fetch(ctx, c.CacheDir, c.Repo, c.Ref, "")
			if !a.NoError(err) || !a.NotEmpty(sha) {
				return
			}
			atar, err := cache.GetTarArchive(ctx, c.CacheDir, sha, "js")
			if !a.NoError(err) || !a.NotEmpty(atar) {
				return
			}
			mytar := tar.NewReader(atar)
			th, err := mytar.Next()
			if !a.NoError(err) || !a.NotEqual(0, th.Size) {
				return
			}
			buf := &bytes.Buffer{}
			// G110: Potential DoS vulnerability via decompression bomb (gosec)
			_, err = io.Copy(buf, mytar)
			a.NoError(err)
			str := buf.String()
			a.NotEmpty(str)
		})
	}
	sha, err := cache.Fetch(ctx, "christopherhx/script", "https://github.com/christopherhx/script", "main", "")
	a.NoError(err)
	a.NotEmpty(sha)
	atar, err := cache.GetTarArchive(ctx, "christopherhx/script", sha, "node_modules")
	a.NoError(err)
	a.NotEmpty(atar)
	mytar := tar.NewReader(atar)
	th, err := mytar.Next()
	a.NoError(err)
	a.NotEqual(0, th.Size)
	buf := &bytes.Buffer{}
	// G110: Potential DoS vulnerability via decompression bomb (gosec)
	_, err = io.Copy(buf, mytar)
	a.NoError(err)
	str := buf.String()
	a.NotEmpty(str)
}
@@ -27,7 +27,7 @@ func evaluateCompositeInputAndEnv(ctx context.Context, parent *RunContext, step actionStep) map[string]string {
	envKey := regexp.MustCompile("[^A-Z0-9-]").ReplaceAllString(strings.ToUpper(inputID), "_")
	envKey = fmt.Sprintf("INPUT_%s", strings.ToUpper(envKey))

	// lookup if key is defined in the step but the already
	// lookup if key is defined in the step but the the already
	// evaluated value from the environment
	_, defined := step.getStepModel().With[inputID]
	if value, ok := stepEnv[envKey]; defined && ok {
@@ -140,6 +140,7 @@ func (rc *RunContext) compositeExecutor(action *model.Action) *compositeSteps {
	if step.ID == "" {
		step.ID = fmt.Sprintf("%d", i)
	}
	step.Number = i

	// create a copy of the step, since this composite action could
	// run multiple times and we might modify the instance
@@ -77,7 +77,8 @@ func (rc *RunContext) commandHandler(ctx context.Context) common.LineHandler {
		logger.Infof(" \U00002753 %s", line)
	}

	return false
	// return true to let gitea's logger handle these special outputs also
	return true
}
@@ -106,7 +106,7 @@ func (rc *RunContext) NewExpressionEvaluatorWithEnv(ctx context.Context, env map[string]string) ExpressionEvaluator {
//go:embed hashfiles/index.js
var hashfiles string

// NewStepExpressionEvaluator creates a new evaluator
// NewExpressionEvaluator creates a new evaluator
func (rc *RunContext) NewStepExpressionEvaluator(ctx context.Context, step step) ExpressionEvaluator {
	// todo: cleanup EvaluationEnvironment creation
	job := rc.Run.Job()
@@ -6,6 +6,7 @@ import (
	"time"

	"github.com/nektos/act/pkg/common"
	"github.com/nektos/act/pkg/container"
	"github.com/nektos/act/pkg/model"
)

@@ -63,6 +64,7 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executor {
	if stepModel.ID == "" {
		stepModel.ID = fmt.Sprintf("%d", i)
	}
	stepModel.Number = i

	step, err := sf.newStep(stepModel, rc)

@@ -70,7 +72,19 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executor {
		return common.NewErrorExecutor(err)
	}

	preSteps = append(preSteps, useStepLogger(rc, stepModel, stepStagePre, step.pre()))
	preExec := step.pre()
	preSteps = append(preSteps, useStepLogger(rc, stepModel, stepStagePre, func(ctx context.Context) error {
		logger := common.Logger(ctx)
		preErr := preExec(ctx)
		if preErr != nil {
			logger.Errorf("%v", preErr)
			common.SetJobError(ctx, preErr)
		} else if ctx.Err() != nil {
			logger.Errorf("%v", ctx.Err())
			common.SetJobError(ctx, ctx.Err())
		}
		return preErr
	}))

	stepExec := step.main()
	steps = append(steps, useStepLogger(rc, stepModel, stepStageMain, func(ctx context.Context) error {
@@ -108,6 +122,18 @@ func newJobExecutor(info jobInfo, sf stepFactory, rc *RunContext) common.Executor {
	if err = info.stopContainer()(ctx); err != nil {
		logger.Errorf("Error while stopping job container: %v", err)
	}

	if !rc.IsHostEnv(ctx) && rc.Config.ContainerNetworkMode == "" {
		// clean network in docker mode only
		// if the value of `ContainerNetworkMode` is an empty string,
		// it means that the network to which containers are connecting is created by `act_runner`,
		// so we should remove the network at last.
		networkName, _ := rc.networkName()
		logger.Infof("Cleaning up network for job %s, and network name is: %s", rc.JobName, networkName)
		if err := container.NewDockerNetworkRemoveExecutor(networkName)(ctx); err != nil {
			logger.Errorf("Error while cleaning network: %v", err)
		}
	}
}
setJobResult(ctx, info, rc, jobError == nil)
setJobOutputs(ctx, rc)
@@ -179,7 +205,7 @@ func setJobOutputs(ctx context.Context, rc *RunContext) {

func useStepLogger(rc *RunContext, stepModel *model.Step, stage stepStage, executor common.Executor) common.Executor {
	return func(ctx context.Context) error {
		ctx = withStepLogger(ctx, stepModel.ID, rc.ExprEval.Interpolate(ctx, stepModel.String()), stage.String())
		ctx = withStepLogger(ctx, stepModel.Number, stepModel.ID, rc.ExprEval.Interpolate(ctx, stepModel.String()), stage.String())

		rawLogger := common.Logger(ctx).WithField("raw_output", true)
		logWriter := common.NewLineWriter(rc.commandHandler(ctx), func(s string) bool {
@@ -1,91 +0,0 @@
package runner

import (
	"archive/tar"
	"bytes"
	"context"
	"fmt"
	"io"
	"io/fs"
	goURL "net/url"
	"os"
	"path/filepath"
	"strings"

	"github.com/nektos/act/pkg/filecollector"
)

type LocalRepositoryCache struct {
	Parent            ActionCache
	LocalRepositories map[string]string
	CacheDirCache     map[string]string
}

func (l *LocalRepositoryCache) Fetch(ctx context.Context, cacheDir, url, ref, token string) (string, error) {
	if dest, ok := l.LocalRepositories[fmt.Sprintf("%s@%s", url, ref)]; ok {
		l.CacheDirCache[fmt.Sprintf("%s@%s", cacheDir, ref)] = dest
		return ref, nil
	}
	if purl, err := goURL.Parse(url); err == nil {
		if dest, ok := l.LocalRepositories[fmt.Sprintf("%s@%s", strings.TrimPrefix(purl.Path, "/"), ref)]; ok {
			l.CacheDirCache[fmt.Sprintf("%s@%s", cacheDir, ref)] = dest
			return ref, nil
		}
	}
	return l.Parent.Fetch(ctx, cacheDir, url, ref, token)
}

func (l *LocalRepositoryCache) GetTarArchive(ctx context.Context, cacheDir, sha, includePrefix string) (io.ReadCloser, error) {
	// sha is mapped to ref in fetch if there is a local override
	if dest, ok := l.CacheDirCache[fmt.Sprintf("%s@%s", cacheDir, sha)]; ok {
		srcPath := filepath.Join(dest, includePrefix)
		buf := &bytes.Buffer{}
		tw := tar.NewWriter(buf)
		defer tw.Close()
		srcPath = filepath.Clean(srcPath)
		fi, err := os.Lstat(srcPath)
		if err != nil {
			return nil, err
		}
		tc := &filecollector.TarCollector{
			TarWriter: tw,
		}
		if fi.IsDir() {
			srcPrefix := srcPath
			if !strings.HasSuffix(srcPrefix, string(filepath.Separator)) {
				srcPrefix += string(filepath.Separator)
			}
			fc := &filecollector.FileCollector{
				Fs:        &filecollector.DefaultFs{},
				SrcPath:   srcPath,
				SrcPrefix: srcPrefix,
				Handler:   tc,
			}
			err = filepath.Walk(srcPath, fc.CollectFiles(ctx, []string{}))
			if err != nil {
				return nil, err
			}
		} else {
			var f io.ReadCloser
			var linkname string
			if fi.Mode()&fs.ModeSymlink != 0 {
				linkname, err = os.Readlink(srcPath)
				if err != nil {
					return nil, err
				}
			} else {
				f, err = os.Open(srcPath)
				if err != nil {
					return nil, err
				}
				defer f.Close()
			}
			err := tc.WriteFile(fi.Name(), fi, linkname, f)
			if err != nil {
				return nil, err
			}
		}
		return io.NopCloser(buf), nil
	}
	return l.Parent.GetTarArchive(ctx, cacheDir, sha, includePrefix)
}
@@ -52,7 +52,7 @@ func Masks(ctx context.Context) *[]string {
	return &[]string{}
}

// WithMasks adds a value to the context for the logger
// WithLogger adds a value to the context for the logger
func WithMasks(ctx context.Context, masks *[]string) context.Context {
	return context.WithValue(ctx, masksContextKeyVal, masks)
}
@@ -96,6 +96,17 @@ func WithJobLogger(ctx context.Context, jobID string, jobName string, config *Config, masks *[]string, matrix map[string]interface{}) context.Context {
	logger.SetFormatter(formatter)
}

{ // Adapt to Gitea
	if hook := common.LoggerHook(ctx); hook != nil {
		logger.AddHook(hook)
	}
	if config.JobLoggerLevel != nil {
		logger.SetLevel(*config.JobLoggerLevel)
	} else {
		logger.SetLevel(logrus.TraceLevel)
	}
}

logger.SetFormatter(&maskedFormatter{
	Formatter: logger.Formatter,
	masker:    valueMasker(config.InsecureSecrets, config.Secrets),
@@ -132,11 +143,12 @@ func WithCompositeStepLogger(ctx context.Context, stepID string) context.Context {
	}).WithContext(ctx))
}

func withStepLogger(ctx context.Context, stepID string, stepName string, stageName string) context.Context {
func withStepLogger(ctx context.Context, stepNumber int, stepID, stepName, stageName string) context.Context {
	rtn := common.Logger(ctx).WithFields(logrus.Fields{
		"step":   stepName,
		"stepID": []string{stepID},
		"stage":  stageName,
		"stepNumber": stepNumber,
		"step":       stepName,
		"stepID":     []string{stepID},
		"stage":      stageName,
	})
	return common.WithLogger(ctx, rtn)
}
@@ -9,6 +9,7 @@ import (
|
||||
"os"
|
||||
"path"
|
||||
"regexp"
|
||||
"strings"
|
||||
"sync"
|
||||
|
||||
"github.com/nektos/act/pkg/common"
|
||||
@@ -17,15 +18,45 @@ import (
|
||||
)
|
||||
|
||||
func newLocalReusableWorkflowExecutor(rc *RunContext) common.Executor {
|
||||
return newReusableWorkflowExecutor(rc, rc.Config.Workdir, rc.Run.Job().Uses)
|
||||
if !rc.Config.NoSkipCheckout {
|
||||
fullPath := rc.Run.Job().Uses
|
||||
|
||||
fileName := path.Base(fullPath)
|
||||
workflowDir := strings.TrimSuffix(fullPath, path.Join("/", fileName))
|
||||
workflowDir = strings.TrimPrefix(workflowDir, "./")
|
||||
|
||||
return common.NewPipelineExecutor(
|
||||
newReusableWorkflowExecutor(rc, workflowDir, fileName),
|
||||
)
|
||||
}
|
||||
|
||||
// ./.gitea/workflows/wf.yml -> .gitea/workflows/wf.yml
|
||||
trimmedUses := strings.TrimPrefix(rc.Run.Job().Uses, "./")
|
||||
// uses string format is {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}
|
||||
uses := fmt.Sprintf("%s/%s@%s", rc.Config.PresetGitHubContext.Repository, trimmedUses, rc.Config.PresetGitHubContext.Sha)
|
||||
|
||||
remoteReusableWorkflow := newRemoteReusableWorkflowWithPlat(rc.Config.GitHubInstance, uses)
|
+	if remoteReusableWorkflow == nil {
+		return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
+	}
+
+	workflowDir := fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(uses))
+
+	// If the repository is private, we need a token to clone it
+	token := rc.Config.GetToken()
+
+	return common.NewPipelineExecutor(
+		newMutexExecutor(cloneIfRequired(rc, *remoteReusableWorkflow, workflowDir, token)),
+		newReusableWorkflowExecutor(rc, workflowDir, remoteReusableWorkflow.FilePath()),
+	)
+}
 
 func newRemoteReusableWorkflowExecutor(rc *RunContext) common.Executor {
 	uses := rc.Run.Job().Uses
 
-	remoteReusableWorkflow := newRemoteReusableWorkflow(uses)
+	remoteReusableWorkflow := newRemoteReusableWorkflowWithPlat(rc.Config.GitHubInstance, uses)
 	if remoteReusableWorkflow == nil {
-		return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.github/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
+		return common.NewErrorExecutor(fmt.Errorf("expected format {owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}. Actual '%s' Input string was not in a correct format", uses))
 	}
 
 	// uses with safe filename makes the target directory look something like this {owner}-{repo}-.github-workflows-{filename}@{ref}
@@ -34,13 +65,16 @@ func newRemoteReusableWorkflowExecutor(rc *RunContext) common.Executor {
 	filename := fmt.Sprintf("%s/%s@%s", remoteReusableWorkflow.Org, remoteReusableWorkflow.Repo, remoteReusableWorkflow.Ref)
 	workflowDir := fmt.Sprintf("%s/%s", rc.ActionCacheDir(), safeFilename(filename))
 
 	// FIXME: if the reusable workflow is from a private repository, we need to provide a token to access the repository.
 	token := ""
 
 	if rc.Config.ActionCache != nil {
 		return newActionCacheReusableWorkflowExecutor(rc, filename, remoteReusableWorkflow)
 	}
 
 	return common.NewPipelineExecutor(
-		newMutexExecutor(cloneIfRequired(rc, *remoteReusableWorkflow, workflowDir)),
-		newReusableWorkflowExecutor(rc, workflowDir, fmt.Sprintf("./.github/workflows/%s", remoteReusableWorkflow.Filename)),
+		newMutexExecutor(cloneIfRequired(rc, *remoteReusableWorkflow, workflowDir, token)),
+		newReusableWorkflowExecutor(rc, workflowDir, remoteReusableWorkflow.FilePath()),
 	)
 }
@@ -92,7 +126,7 @@ func newMutexExecutor(executor common.Executor) common.Executor {
 	}
 }
 
-func cloneIfRequired(rc *RunContext, remoteReusableWorkflow remoteReusableWorkflow, targetDirectory string) common.Executor {
+func cloneIfRequired(rc *RunContext, remoteReusableWorkflow remoteReusableWorkflow, targetDirectory, token string) common.Executor {
 	return common.NewConditionalExecutor(
 		func(ctx context.Context) bool {
 			_, err := os.Stat(targetDirectory)
@@ -100,12 +134,15 @@ func cloneIfRequired(rc *RunContext, remoteReusableWorkflow remoteReusableWorkfl
 			return notExists
 		},
 		func(ctx context.Context) error {
-			remoteReusableWorkflow.URL = rc.getGithubContext(ctx).ServerURL
+			// Do not change the remoteReusableWorkflow.URL, because:
+			// 1. Gitea doesn't support specifying GithubContext.ServerURL by the GITHUB_SERVER_URL env
+			// 2. Gitea has already full URL with rc.Config.GitHubInstance when calling newRemoteReusableWorkflowWithPlat
+			// remoteReusableWorkflow.URL = rc.getGithubContext(ctx).ServerURL
 			return git.NewGitCloneExecutor(git.NewGitCloneExecutorInput{
 				URL:         remoteReusableWorkflow.CloneURL(),
 				Ref:         remoteReusableWorkflow.Ref,
 				Dir:         targetDirectory,
-				Token:       rc.Config.Token,
+				Token:       token,
 				OfflineMode: rc.Config.ActionOfflineMode,
 			})(ctx)
 		},
@@ -152,12 +189,44 @@ type remoteReusableWorkflow struct {
 	Repo     string
 	Filename string
 	Ref      string
+
+	GitPlatform string
 }
 
 func (r *remoteReusableWorkflow) CloneURL() string {
-	return fmt.Sprintf("%s/%s/%s", r.URL, r.Org, r.Repo)
+	// In Gitea, r.URL always has the protocol prefix, we don't need to add extra prefix in this case.
+	if strings.HasPrefix(r.URL, "http://") || strings.HasPrefix(r.URL, "https://") {
+		return fmt.Sprintf("%s/%s/%s", r.URL, r.Org, r.Repo)
+	}
+	return fmt.Sprintf("https://%s/%s/%s", r.URL, r.Org, r.Repo)
 }
 
+func (r *remoteReusableWorkflow) FilePath() string {
+	return fmt.Sprintf("./.%s/workflows/%s", r.GitPlatform, r.Filename)
+}
+
+// For Gitea
+// newRemoteReusableWorkflowWithPlat create a `remoteReusableWorkflow`
+// workflows from `.gitea/workflows` and `.github/workflows` are supported
+func newRemoteReusableWorkflowWithPlat(url, uses string) *remoteReusableWorkflow {
+	// GitHub docs:
+	// https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_iduses
+	r := regexp.MustCompile(`^([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)
+	matches := r.FindStringSubmatch(uses)
+	if len(matches) != 6 {
+		return nil
+	}
+	return &remoteReusableWorkflow{
+		Org:         matches[1],
+		Repo:        matches[2],
+		GitPlatform: matches[3],
+		Filename:    matches[4],
+		Ref:         matches[5],
+		URL:         url,
+	}
+}
+
+// deprecated: use newRemoteReusableWorkflowWithPlat
 func newRemoteReusableWorkflow(uses string) *remoteReusableWorkflow {
 	// GitHub docs:
 	// https://docs.github.com/en/actions/using-workflows/workflow-syntax-for-github-actions#jobsjob_iduses
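To make the `{owner}/{repo}/.{git_platform}/workflows/{filename}@{ref}` format above concrete, here is a minimal standalone sketch of the same regular expression that newRemoteReusableWorkflowWithPlat compiles; the input string is invented for illustration.

package main

import (
	"fmt"
	"regexp"
)

// The same pattern used by newRemoteReusableWorkflowWithPlat above.
var usesPattern = regexp.MustCompile(`^([^/]+)/([^/]+)/\.([^/]+)/workflows/([^@]+)@(.*)$`)

func main() {
	m := usesPattern.FindStringSubmatch("myorg/myrepo/.gitea/workflows/build.yml@v1")
	// m[1..5] = owner, repo, git platform, workflow file, ref
	fmt.Println(m[1], m[2], m[3], m[4], m[5])
	// Output: myorg myrepo gitea build.yml v1
}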
@@ -81,11 +81,16 @@ func (rc *RunContext) GetEnv() map[string]string {
 		}
 	}
 	rc.Env["ACT"] = "true"
+
+	if !rc.Config.NoSkipCheckout {
+		rc.Env["ACT_SKIP_CHECKOUT"] = "true"
+	}
+
 	return rc.Env
 }
 
 func (rc *RunContext) jobContainerName() string {
-	return createContainerName("act", rc.String())
+	return createSimpleContainerName(rc.Config.ContainerNamePrefix, "WORKFLOW-"+rc.Run.Workflow.Name, "JOB-"+rc.Name)
 }
 
 // networkName return the name of the network which will be created by `act` automatically for job,
@@ -167,6 +172,14 @@ func (rc *RunContext) GetBindsAndMounts() ([]string, map[string]string) {
 		mounts[name] = ext.ToContainerPath(rc.Config.Workdir)
 	}
 
+	// For Gitea
+	// add some default binds and mounts to ValidVolumes
+	rc.Config.ValidVolumes = append(rc.Config.ValidVolumes, "act-toolcache")
+	rc.Config.ValidVolumes = append(rc.Config.ValidVolumes, name)
+	rc.Config.ValidVolumes = append(rc.Config.ValidVolumes, name+"-env")
+	// TODO: add a new configuration to control whether the docker daemon can be mounted
+	rc.Config.ValidVolumes = append(rc.Config.ValidVolumes, getDockerDaemonSocketMountPath(rc.Config.ContainerDaemonSocket))
+
 	return binds, mounts
 }
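ValidVolumes acts as an allow-list: only sources appearing in it may be mounted into job or service containers. The enforcement itself lives in the container package, not in this diff; the sketch below is a hypothetical illustration of the idea, with invented names and data.

package main

import (
	"fmt"
	"strings"
)

// filterBinds keeps a bind like "source:/target" only if its source
// appears in validVolumes (hypothetical helper, not act's API).
func filterBinds(binds, validVolumes []string) []string {
	allowed := make(map[string]struct{}, len(validVolumes))
	for _, v := range validVolumes {
		allowed[v] = struct{}{}
	}
	kept := make([]string, 0, len(binds))
	for _, bind := range binds {
		source := strings.SplitN(bind, ":", 2)[0]
		if _, ok := allowed[source]; ok {
			kept = append(kept, bind)
		}
	}
	return kept
}

func main() {
	binds := []string{"act-toolcache:/opt/hostedtoolcache", "evil:/etc"}
	fmt.Println(filterBinds(binds, []string{"act-toolcache"}))
	// Output: [act-toolcache:/opt/hostedtoolcache]
}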
@@ -261,6 +274,9 @@ func (rc *RunContext) startJobContainer() common.Executor {
 
 		logger.Infof("\U0001f680  Start image=%s", image)
 		name := rc.jobContainerName()
+		// For gitea, to support --volumes-from <container_name_or_id> in options.
+		// We need to set the container name to the environment variable.
+		rc.Env["JOB_CONTAINER_NAME"] = name
 
 		envList := make([]string, 0)
 
@@ -274,9 +290,13 @@ func (rc *RunContext) startJobContainer() common.Executor {
 		binds, mounts := rc.GetBindsAndMounts()
 
 		// specify the network to which the container will connect when `docker create` stage. (like execute command line: docker create --network <networkName> <image>)
 		// if using service containers, will create a new network for the containers.
 		// and it will be removed after at last.
-		networkName, createAndDeleteNetwork := rc.networkName()
+		networkName := string(rc.Config.ContainerNetworkMode)
+		var createAndDeleteNetwork bool
+		if networkName == "" {
+			// if networkName is empty string, will create a new network for the containers.
+			// and it will be removed after at last.
+			networkName, createAndDeleteNetwork = rc.networkName()
+		}
 
 		// add service containers
 		for serviceID, spec := range rc.Run.Job().Services {
@@ -289,6 +309,11 @@ func (rc *RunContext) startJobContainer() common.Executor {
 			for k, v := range interpolatedEnvs {
 				envs = append(envs, fmt.Sprintf("%s=%s", k, v))
 			}
+			interpolatedCmd := make([]string, 0, len(spec.Cmd))
+			for _, v := range spec.Cmd {
+				interpolatedCmd = append(interpolatedCmd, rc.ExprEval.Interpolate(ctx, v))
+			}
 
 			username, password, err = rc.handleServiceCredentials(ctx, spec.Credentials)
 			if err != nil {
 				return fmt.Errorf("failed to handle service %s credentials: %w", serviceID, err)
@@ -316,6 +341,7 @@ func (rc *RunContext) startJobContainer() common.Executor {
 				Image:          rc.ExprEval.Interpolate(ctx, spec.Image),
 				Username:       username,
 				Password:       password,
+				Cmd:            interpolatedCmd,
 				Env:            envs,
 				Mounts:         serviceMounts,
 				Binds:          serviceBinds,
@@ -324,11 +350,13 @@ func (rc *RunContext) startJobContainer() common.Executor {
 				Privileged:     rc.Config.Privileged,
 				UsernsMode:     rc.Config.UsernsMode,
 				Platform:       rc.Config.ContainerArchitecture,
+				AutoRemove:     rc.Config.AutoRemove,
 				Options:        rc.ExprEval.Interpolate(ctx, spec.Options),
 				NetworkMode:    networkName,
 				NetworkAliases: []string{serviceID},
 				ExposedPorts:   exposedPorts,
 				PortBindings:   portBindings,
+				ValidVolumes:   rc.Config.ValidVolumes,
 			})
 			rc.ServiceContainers = append(rc.ServiceContainers, c)
 		}
@@ -391,6 +419,8 @@ func (rc *RunContext) startJobContainer() common.Executor {
 			UsernsMode:   rc.Config.UsernsMode,
 			Platform:     rc.Config.ContainerArchitecture,
 			Options:      rc.options(ctx),
+			AutoRemove:   rc.Config.AutoRemove,
+			ValidVolumes: rc.Config.ValidVolumes,
 		})
 		if rc.JobContainer == nil {
 			return errors.New("Failed to create job container")
@@ -400,7 +430,7 @@ func (rc *RunContext) startJobContainer() common.Executor {
 			rc.pullServicesImages(rc.Config.ForcePull),
 			rc.JobContainer.Pull(rc.Config.ForcePull),
 			rc.stopJobContainer(),
-			container.NewDockerNetworkCreateExecutor(networkName).IfBool(createAndDeleteNetwork),
+			container.NewDockerNetworkCreateExecutor(networkName).IfBool(!rc.IsHostEnv(ctx) && rc.Config.ContainerNetworkMode == ""), // if the value of `ContainerNetworkMode` is empty string, then will create a new network for containers.
			rc.startServiceContainers(networkName),
 			rc.JobContainer.Create(rc.Config.ContainerCapAdd, rc.Config.ContainerCapDrop),
 			rc.JobContainer.Start(false),
@@ -639,6 +669,17 @@ func (rc *RunContext) runsOnImage(ctx context.Context) string {
 		common.Logger(ctx).Errorf("'runs-on' key not defined in %s", rc.String())
 	}
 
+	runsOn := rc.Run.Job().RunsOn()
+	for i, v := range runsOn {
+		runsOn[i] = rc.ExprEval.Interpolate(ctx, v)
+	}
+
+	if pick := rc.Config.PlatformPicker; pick != nil {
+		if image := pick(runsOn); image != "" {
+			return image
+		}
+	}
+
 	for _, platformName := range rc.runsOnPlatformNames(ctx) {
 		image := rc.Config.Platforms[strings.ToLower(platformName)]
 		if image != "" {
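As the hunk above shows, a non-nil Config.PlatformPicker runs before the Platforms map lookup, so an embedder can map runs-on labels to images directly. A minimal sketch of such a callback follows; the "<label>:docker://<image>" label syntax is invented for this example, only the callback signature comes from the diff.

package main

import (
	"fmt"
	"strings"
)

// pickPlatform mimics what a Config.PlatformPicker callback might do.
func pickPlatform(labels []string) string {
	for _, label := range labels {
		if _, image, found := strings.Cut(label, ":docker://"); found {
			return image
		}
	}
	return "" // an empty result falls through to rc.Config.Platforms
}

func main() {
	fmt.Println(pickPlatform([]string{"ubuntu-latest:docker://node:16-bullseye"}))
	// Output: node:16-bullseye
}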
@@ -725,6 +766,7 @@ func mergeMaps(maps ...map[string]string) map[string]string {
 	return rtnMap
 }
 
+// deprecated: use createSimpleContainerName
 func createContainerName(parts ...string) string {
 	name := strings.Join(parts, "-")
 	pattern := regexp.MustCompile("[^a-zA-Z0-9]")
@@ -738,6 +780,22 @@ func createContainerName(parts ...string) string {
 	return fmt.Sprintf("%s-%x", trimmedName, hash)
 }
 
+func createSimpleContainerName(parts ...string) string {
+	pattern := regexp.MustCompile("[^a-zA-Z0-9-]")
+	name := make([]string, 0, len(parts))
+	for _, v := range parts {
+		v = pattern.ReplaceAllString(v, "-")
+		v = strings.Trim(v, "-")
+		for strings.Contains(v, "--") {
+			v = strings.ReplaceAll(v, "--", "-")
+		}
+		if v != "" {
+			name = append(name, v)
+		}
+	}
+	return strings.Join(name, "_")
+}
+
 func trimToLen(s string, l int) string {
 	if l < 0 {
 		l = 0
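A quick usage sketch of the function above (createSimpleContainerName is unexported, so this would live inside pkg/runner, say in a scratch test; the inputs are invented):

name := createSimpleContainerName("act", "WORKFLOW-Deploy app", "JOB-build (1)")
// every rune outside [a-zA-Z0-9-] becomes "-", runs of "-" collapse,
// empty parts are dropped, and the parts are joined with "_":
fmt.Println(name) // act_WORKFLOW-Deploy-app_JOB-build-1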
@@ -820,6 +878,36 @@ func (rc *RunContext) getGithubContext(ctx context.Context) *model.GithubContext
 		ghc.Actor = "nektos/act"
 	}
 
+	{ // Adapt to Gitea
+		if preset := rc.Config.PresetGitHubContext; preset != nil {
+			ghc.Event = preset.Event
+			ghc.RunID = preset.RunID
+			ghc.RunNumber = preset.RunNumber
+			ghc.Actor = preset.Actor
+			ghc.Repository = preset.Repository
+			ghc.EventName = preset.EventName
+			ghc.Sha = preset.Sha
+			ghc.Ref = preset.Ref
+			ghc.RefName = preset.RefName
+			ghc.RefType = preset.RefType
+			ghc.HeadRef = preset.HeadRef
+			ghc.BaseRef = preset.BaseRef
+			ghc.Token = preset.Token
+			ghc.RepositoryOwner = preset.RepositoryOwner
+			ghc.RetentionDays = preset.RetentionDays
+
+			instance := rc.Config.GitHubInstance
+			if !strings.HasPrefix(instance, "http://") &&
+				!strings.HasPrefix(instance, "https://") {
+				instance = "https://" + instance
+			}
+			ghc.ServerURL = instance
+			ghc.APIURL = instance + "/api/v1" // the version of Gitea is v1
+			ghc.GraphQLURL = ""               // Gitea doesn't support graphql
+			return ghc
+		}
+	}
+
 	if rc.EventJSON != "" {
 		err := json.Unmarshal([]byte(rc.EventJSON), &ghc.Event)
 		if err != nil {
@@ -849,6 +937,18 @@ func (rc *RunContext) getGithubContext(ctx context.Context) *model.GithubContext
 		ghc.APIURL = fmt.Sprintf("https://%s/api/v3", rc.Config.GitHubInstance)
 		ghc.GraphQLURL = fmt.Sprintf("https://%s/api/graphql", rc.Config.GitHubInstance)
 	}
 
+	{ // Adapt to Gitea
+		instance := rc.Config.GitHubInstance
+		if !strings.HasPrefix(instance, "http://") &&
+			!strings.HasPrefix(instance, "https://") {
+			instance = "https://" + instance
+		}
+		ghc.ServerURL = instance
+		ghc.APIURL = instance + "/api/v1" // the version of Gitea is v1
+		ghc.GraphQLURL = ""               // Gitea doesn't support graphql
+	}
+
 	// allow to be overridden by user
 	if rc.Config.Env["GITHUB_SERVER_URL"] != "" {
 		ghc.ServerURL = rc.Config.Env["GITHUB_SERVER_URL"]
@@ -936,6 +1036,17 @@ func (rc *RunContext) withGithubEnv(ctx context.Context, github *model.GithubCon
 	env["GITHUB_API_URL"] = github.APIURL
 	env["GITHUB_GRAPHQL_URL"] = github.GraphQLURL
 
+	{ // Adapt to Gitea
+		instance := rc.Config.GitHubInstance
+		if !strings.HasPrefix(instance, "http://") &&
+			!strings.HasPrefix(instance, "https://") {
+			instance = "https://" + instance
+		}
+		env["GITHUB_SERVER_URL"] = instance
+		env["GITHUB_API_URL"] = instance + "/api/v1" // the version of Gitea is v1
+		env["GITHUB_GRAPHQL_URL"] = ""               // Gitea doesn't support graphql
+	}
+
 	if rc.Config.ArtifactServerPath != "" {
 		setActionRuntimeVars(rc, env)
 	}
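The same instance-URL normalization is inlined in all three "Adapt to Gitea" blocks above; a small helper capturing it (hypothetical, the diff repeats the logic each time):

package main

import (
	"fmt"
	"strings"
)

// normalizeInstanceURL prefixes a bare host with https://, matching the
// inline checks in the hunks above.
func normalizeInstanceURL(instance string) string {
	if !strings.HasPrefix(instance, "http://") && !strings.HasPrefix(instance, "https://") {
		instance = "https://" + instance
	}
	return instance
}

func main() {
	fmt.Println(normalizeInstanceURL("gitea.example.com"))             // https://gitea.example.com
	fmt.Println(normalizeInstanceURL("http://gitea.internal"))         // http://gitea.internal
	fmt.Println(normalizeInstanceURL("gitea.example.com") + "/api/v1") // the Gitea API base
}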
@@ -682,3 +682,24 @@ func TestRunContextGetEnv(t *testing.T) {
 		})
 	}
 }
+
+func Test_createSimpleContainerName(t *testing.T) {
+	tests := []struct {
+		parts []string
+		want  string
+	}{
+		{
+			parts: []string{"a--a", "BB正", "c-C"},
+			want:  "a-a_BB_c-C",
+		},
+		{
+			parts: []string{"a-a", "", "-"},
+			want:  "a-a",
+		},
+	}
+	for _, tt := range tests {
+		t.Run(strings.Join(tt.parts, " "), func(t *testing.T) {
+			assert.Equalf(t, tt.want, createSimpleContainerName(tt.parts...), "createSimpleContainerName(%v)", tt.parts)
+		})
+	}
+}
@@ -6,11 +6,13 @@ import (
 	"fmt"
 	"os"
 	"runtime"
 	"time"
 
 	docker_container "github.com/docker/docker/api/types/container"
+	log "github.com/sirupsen/logrus"
 
 	"github.com/nektos/act/pkg/common"
 	"github.com/nektos/act/pkg/model"
-	log "github.com/sirupsen/logrus"
 )
 
 // Runner provides capabilities to run GitHub actions
@@ -61,6 +63,24 @@ type Config struct {
 	Matrix               map[string]map[string]bool   // Matrix config to run
 	ContainerNetworkMode docker_container.NetworkMode // the network mode of job containers (the value of --network)
 	ActionCache          ActionCache                  // Use a custom ActionCache Implementation
+
+	PresetGitHubContext   *model.GithubContext         // the preset github context, overrides some fields like DefaultBranch, Env, Secrets etc.
+	EventJSON             string                       // the content of JSON file to use for event.json in containers, overrides EventPath
+	ContainerNamePrefix   string                       // the prefix of container name
+	ContainerMaxLifetime  time.Duration                // the max lifetime of job containers
+	DefaultActionInstance string                       // the default actions web site
+	PlatformPicker        func(labels []string) string // platform picker, it will take precedence over Platforms if isn't nil
+	JobLoggerLevel        *log.Level                   // the level of job logger
+	ValidVolumes          []string                     // only volumes (and bind mounts) in this slice can be mounted on the job container or service containers
 }
 
+// GetToken: Adapt to Gitea
+func (c Config) GetToken() string {
+	token := c.Secrets["GITHUB_TOKEN"]
+	if c.Secrets["GITEA_TOKEN"] != "" {
+		token = c.Secrets["GITEA_TOKEN"]
+	}
+	return token
+}
+
 type caller struct {
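A sketch of how an embedder such as a Gitea runner might fill the new fields; the values are invented, while the field names, GetToken, and New come from the Config changes above (this is a fragment, not a complete program):

cfg := &runner.Config{
	GitHubInstance:        "gitea.example.com", // normalized to https:// by the run context
	DefaultActionInstance: "gitea.com",         // host used for plain {org}/{repo}@{ref} actions (see CloneURL below)
	ContainerNamePrefix:   "GITEA-ACTIONS-TASK-42",
	EventJSON:             `{"ref":"refs/heads/main"}`, // preferred over EventPath in configure() below
	Secrets: map[string]string{
		"GITEA_TOKEN": "<token>", // GetToken prefers GITEA_TOKEN over GITHUB_TOKEN
	},
	ValidVolumes: []string{"act-toolcache"},
}
r, err := runner.New(cfg)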
@@ -84,7 +104,9 @@ func New(runnerConfig *Config) (Runner, error) {
 
 func (runner *runnerImpl) configure() (Runner, error) {
 	runner.eventJSON = "{}"
-	if runner.config.EventPath != "" {
+	if runner.config.EventJSON != "" {
+		runner.eventJSON = runner.config.EventJSON
+	} else if runner.config.EventPath != "" {
 		log.Debugf("Reading event.json from %s", runner.config.EventPath)
 		eventJSONBytes, err := os.ReadFile(runner.config.EventPath)
 		if err != nil {
@@ -6,7 +6,6 @@ import (
 	"fmt"
 	"io"
 	"os"
-	"path"
 	"path/filepath"
 	"runtime"
 	"strings"
@@ -15,7 +14,6 @@ import (
 	"github.com/joho/godotenv"
 	log "github.com/sirupsen/logrus"
 	assert "github.com/stretchr/testify/assert"
-	"gopkg.in/yaml.v3"
 
 	"github.com/nektos/act/pkg/common"
 	"github.com/nektos/act/pkg/model"
@@ -189,7 +187,6 @@ func (j *TestJobFileInfo) runTest(ctx context.Context, t *testing.T, cfg *Config
 		GitHubInstance:        "github.com",
 		ContainerArchitecture: cfg.ContainerArchitecture,
 		Matrix:                cfg.Matrix,
-		ActionCache:           cfg.ActionCache,
 	}
 
 	runner, err := New(runnerConfig)
@@ -212,10 +209,6 @@ func (j *TestJobFileInfo) runTest(ctx context.Context, t *testing.T, cfg *Config
 	fmt.Println("::endgroup::")
 }
 
-type TestConfig struct {
-	LocalRepositories map[string]string `yaml:"local-repositories"`
-}
-
 func TestRunEvent(t *testing.T) {
 	if testing.Short() {
 		t.Skip("skipping integration test")
@@ -314,9 +307,6 @@ func TestRunEvent(t *testing.T) {
 		{workdir, "services", "push", "", platforms, secrets},
 		{workdir, "services-host-network", "push", "", platforms, secrets},
 		{workdir, "services-with-container", "push", "", platforms, secrets},
-
-		// local remote action overrides
-		{workdir, "local-remote-action-overrides", "push", "", platforms, secrets},
 	}
 
 	for _, table := range tables {
@@ -330,22 +320,6 @@ func TestRunEvent(t *testing.T) {
 				config.EventPath = eventFile
 			}
 
-			testConfigFile := filepath.Join(workdir, table.workflowPath, "config.yml")
-			if file, err := os.ReadFile(testConfigFile); err == nil {
-				testConfig := &TestConfig{}
-				if yaml.Unmarshal(file, testConfig) == nil {
-					if testConfig.LocalRepositories != nil {
-						config.ActionCache = &LocalRepositoryCache{
-							Parent: GoGitActionCache{
-								path.Clean(path.Join(workdir, "cache")),
-							},
-							LocalRepositories: testConfig.LocalRepositories,
-							CacheDirCache:     map[string]string{},
-						}
-					}
-				}
-			}
-
 			table.runTest(ctx, t, config)
 		})
 	}
@@ -580,6 +554,43 @@ func TestRunEventSecrets(t *testing.T) {
 	tjfi.runTest(context.Background(), t, &Config{Secrets: secrets, Env: env})
 }
 
+func TestRunWithService(t *testing.T) {
+	if testing.Short() {
+		t.Skip("skipping integration test")
+	}
+
+	log.SetLevel(log.DebugLevel)
+	ctx := context.Background()
+
+	platforms := map[string]string{
+		"ubuntu-latest": "node:12.20.1-buster-slim",
+	}
+
+	workflowPath := "services"
+	eventName := "push"
+
+	workdir, err := filepath.Abs("testdata")
+	assert.NoError(t, err, workflowPath)
+
+	runnerConfig := &Config{
+		Workdir:         workdir,
+		EventName:       eventName,
+		Platforms:       platforms,
+		ReuseContainers: false,
+	}
+	runner, err := New(runnerConfig)
+	assert.NoError(t, err, workflowPath)
+
+	planner, err := model.NewWorkflowPlanner(fmt.Sprintf("testdata/%s", workflowPath), true)
+	assert.NoError(t, err, workflowPath)
+
+	plan, err := planner.PlanEvent(eventName)
+	assert.NoError(t, err, workflowPath)
+
+	err = runner.NewPlanExecutor(plan)(ctx)
+	assert.NoError(t, err, workflowPath)
+}
+
 func TestRunActionInputs(t *testing.T) {
 	if testing.Short() {
 		t.Skip("skipping integration test")
@@ -33,9 +33,7 @@ type stepActionRemote struct {
 	resolvedSha string
 }
 
-var (
-	stepActionRemoteNewCloneExecutor = git.NewGitCloneExecutor
-)
+var stepActionRemoteNewCloneExecutor = git.NewGitCloneExecutor
 
 func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
	return func(ctx context.Context) error {
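Because stepActionRemoteNewCloneExecutor is a plain package-level variable, tests in pkg/runner can swap in a fake clone step. The sketch below is not part of the diff (the existing remote-action tests rely on the same seam); it would live in a test file inside pkg/runner:

func TestWithStubbedClone(t *testing.T) {
	orig := stepActionRemoteNewCloneExecutor
	defer func() { stepActionRemoteNewCloneExecutor = orig }()

	stepActionRemoteNewCloneExecutor = func(input git.NewGitCloneExecutorInput) common.Executor {
		return func(ctx context.Context) error {
			return nil // pretend the clone succeeded without touching the network
		}
	}
	// ... exercise prepareActionExecutor against the stub ...
}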
@@ -44,14 +42,18 @@ func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
 			return nil
 		}
 
+		// For gitea:
+		// Since actions can specify the download source via a url prefix.
+		// The prefix may contain some sensitive information that needs to be stored in secrets,
+		// so we need to interpolate the expression value for uses first.
+		sar.Step.Uses = sar.RunContext.NewExpressionEvaluator(ctx).Interpolate(ctx, sar.Step.Uses)
+
 		sar.remoteAction = newRemoteAction(sar.Step.Uses)
 		if sar.remoteAction == nil {
 			return fmt.Errorf("Expected format {org}/{repo}[/path]@ref. Actual '%s' Input string was not in a correct format", sar.Step.Uses)
 		}
 
 		github := sar.getGithubContext(ctx)
-		sar.remoteAction.URL = github.ServerURL
 
 		if sar.remoteAction.IsCheckout() && isLocalCheckout(github, sar.Step) && !sar.RunContext.Config.NoSkipCheckout {
 			common.Logger(ctx).Debugf("Skipping local actions/checkout because workdir was already copied")
 			return nil
@@ -108,10 +110,16 @@ func (sar *stepActionRemote) prepareActionExecutor() common.Executor {
 
 		actionDir := fmt.Sprintf("%s/%s", sar.RunContext.ActionCacheDir(), safeFilename(sar.Step.Uses))
 		gitClone := stepActionRemoteNewCloneExecutor(git.NewGitCloneExecutorInput{
-			URL:   sar.remoteAction.CloneURL(),
-			Ref:   sar.remoteAction.Ref,
-			Dir:   actionDir,
-			Token: github.Token,
+			URL: sar.remoteAction.CloneURL(sar.RunContext.Config.DefaultActionInstance),
+			Ref: sar.remoteAction.Ref,
+			Dir: actionDir,
+			Token: "", /*
+				Shouldn't provide token when cloning actions,
+				the token comes from the instance which triggered the task,
+				however, it might be not the same instance which provides actions.
+				For GitHub, they are the same, always github.com.
+				But for Gitea, tasks triggered by a.com can clone actions from b.com.
+			*/
 			OfflineMode: sar.RunContext.Config.ActionOfflineMode,
 		})
 		var ntErr common.Executor
@@ -258,8 +266,16 @@ type remoteAction struct {
 	Ref string
 }
 
-func (ra *remoteAction) CloneURL() string {
-	return fmt.Sprintf("%s/%s/%s", ra.URL, ra.Org, ra.Repo)
+func (ra *remoteAction) CloneURL(u string) string {
+	if ra.URL == "" {
+		if !strings.HasPrefix(u, "http://") && !strings.HasPrefix(u, "https://") {
+			u = "https://" + u
+		}
+	} else {
+		u = ra.URL
+	}
+
+	return fmt.Sprintf("%s/%s/%s", u, ra.Org, ra.Repo)
 }
 
 func (ra *remoteAction) IsCheckout() bool {
@@ -270,6 +286,26 @@ func (ra *remoteAction) IsCheckout() bool {
 }
 
 func newRemoteAction(action string) *remoteAction {
+	// support http(s)://host/owner/repo@v3
+	for _, schema := range []string{"https://", "http://"} {
+		if strings.HasPrefix(action, schema) {
+			splits := strings.SplitN(strings.TrimPrefix(action, schema), "/", 2)
+			if len(splits) != 2 {
+				return nil
+			}
+			ret := parseAction(splits[1])
+			if ret == nil {
+				return nil
+			}
+			ret.URL = schema + splits[0]
+			return ret
+		}
+	}
+
+	return parseAction(action)
+}
+
+func parseAction(action string) *remoteAction {
 	// GitHub's document[^] describes:
 	// > We strongly recommend that you include the version of
 	// > the action you are using by specifying a Git ref, SHA, or Docker tag number.
@@ -285,7 +321,7 @@ func newRemoteAction(action string) *remoteAction {
 		Repo: matches[2],
 		Path: matches[4],
 		Ref:  matches[6],
-		URL:  "https://github.com",
+		URL:  "",
 	}
 }
@@ -616,6 +616,100 @@ func TestStepActionRemotePost(t *testing.T) {
 	}
 }
 
+func Test_newRemoteAction(t *testing.T) {
+	tests := []struct {
+		action       string
+		want         *remoteAction
+		wantCloneURL string
+	}{
+		{
+			action: "actions/heroku@main",
+			want: &remoteAction{
+				URL:  "",
+				Org:  "actions",
+				Repo: "heroku",
+				Path: "",
+				Ref:  "main",
+			},
+			wantCloneURL: "https://github.com/actions/heroku",
+		},
+		{
+			action: "actions/aws/ec2@main",
+			want: &remoteAction{
+				URL:  "",
+				Org:  "actions",
+				Repo: "aws",
+				Path: "ec2",
+				Ref:  "main",
+			},
+			wantCloneURL: "https://github.com/actions/aws",
+		},
+		{
+			action: "./.github/actions/my-action", // it's valid for GitHub, but act don't support it
+			want:   nil,
+		},
+		{
+			action: "docker://alpine:3.8", // it's valid for GitHub, but act don't support it
+			want:   nil,
+		},
+		{
+			action: "https://gitea.com/actions/heroku@main", // it's invalid for GitHub, but gitea supports it
+			want: &remoteAction{
+				URL:  "https://gitea.com",
+				Org:  "actions",
+				Repo: "heroku",
+				Path: "",
+				Ref:  "main",
+			},
+			wantCloneURL: "https://gitea.com/actions/heroku",
+		},
+		{
+			action: "https://gitea.com/actions/aws/ec2@main", // it's invalid for GitHub, but gitea supports it
+			want: &remoteAction{
+				URL:  "https://gitea.com",
+				Org:  "actions",
+				Repo: "aws",
+				Path: "ec2",
+				Ref:  "main",
+			},
+			wantCloneURL: "https://gitea.com/actions/aws",
+		},
+		{
+			action: "http://gitea.com/actions/heroku@main", // it's invalid for GitHub, but gitea supports it
+			want: &remoteAction{
+				URL:  "http://gitea.com",
+				Org:  "actions",
+				Repo: "heroku",
+				Path: "",
+				Ref:  "main",
+			},
+			wantCloneURL: "http://gitea.com/actions/heroku",
+		},
+		{
+			action: "http://gitea.com/actions/aws/ec2@main", // it's invalid for GitHub, but gitea supports it
+			want: &remoteAction{
+				URL:  "http://gitea.com",
+				Org:  "actions",
+				Repo: "aws",
+				Path: "ec2",
+				Ref:  "main",
+			},
+			wantCloneURL: "http://gitea.com/actions/aws",
+		},
+	}
+	for _, tt := range tests {
+		t.Run(tt.action, func(t *testing.T) {
+			got := newRemoteAction(tt.action)
+			assert.Equalf(t, tt.want, got, "newRemoteAction(%v)", tt.action)
+			cloneURL := ""
+			if got != nil {
+				cloneURL = got.CloneURL("github.com")
+			}
+			assert.Equalf(t, tt.wantCloneURL, cloneURL, "newRemoteAction(%v).CloneURL()", tt.action)
		})
+	}
+}
+
 func Test_safeFilename(t *testing.T) {
 	tests := []struct {
 		s string
@@ -114,22 +114,24 @@ func (sd *stepDocker) newStepContainer(ctx context.Context, image string, cmd []
 
 	binds, mounts := rc.GetBindsAndMounts()
 	stepContainer := ContainerNewContainer(&container.NewContainerInput{
-		Cmd:         cmd,
-		Entrypoint:  entrypoint,
-		WorkingDir:  rc.JobContainer.ToContainerPath(rc.Config.Workdir),
-		Image:       image,
-		Username:    rc.Config.Secrets["DOCKER_USERNAME"],
-		Password:    rc.Config.Secrets["DOCKER_PASSWORD"],
-		Name:        createContainerName(rc.jobContainerName(), step.ID),
-		Env:         envList,
-		Mounts:      mounts,
-		NetworkMode: fmt.Sprintf("container:%s", rc.jobContainerName()),
-		Binds:       binds,
-		Stdout:      logWriter,
-		Stderr:      logWriter,
-		Privileged:  rc.Config.Privileged,
-		UsernsMode:  rc.Config.UsernsMode,
-		Platform:    rc.Config.ContainerArchitecture,
+		Cmd:          cmd,
+		Entrypoint:   entrypoint,
+		WorkingDir:   rc.JobContainer.ToContainerPath(rc.Config.Workdir),
+		Image:        image,
+		Username:     rc.Config.Secrets["DOCKER_USERNAME"],
+		Password:     rc.Config.Secrets["DOCKER_PASSWORD"],
+		Name:         createSimpleContainerName(rc.jobContainerName(), "STEP-"+step.ID),
+		Env:          envList,
+		Mounts:       mounts,
+		NetworkMode:  fmt.Sprintf("container:%s", rc.jobContainerName()),
+		Binds:        binds,
+		Stdout:       logWriter,
+		Stderr:       logWriter,
+		Privileged:   rc.Config.Privileged,
+		UsernsMode:   rc.Config.UsernsMode,
+		Platform:     rc.Config.ContainerArchitecture,
+		AutoRemove:   rc.Config.AutoRemove,
+		ValidVolumes: rc.Config.ValidVolumes,
 	})
 	return stepContainer
 }
@@ -1,3 +0,0 @@
-local-repositories:
-  https://github.com/nektos/test-override@a: testdata/actions/node20
-  nektos/test-override@b: testdata/actions/node16
@@ -1,9 +0,0 @@
-name: basic
-on: push
-
-jobs:
-  build:
-    runs-on: ubuntu-latest
-    steps:
-      - uses: nektos/test-override@a
-      - uses: nektos/test-override@b

4
pkg/runner/testdata/networking/push.yml
vendored
@@ -7,8 +7,8 @@ jobs:
     - name: Install tools
       run: |
         apt update
-        apt install -y iputils-ping
+        apt install -y bind9-host
     - name: Run hostname test
       run: |
         hostname -f
-        ping -c 4 $(hostname -f)
+        host $(hostname -f)