Merge branch 'upstream-master' into sync_upstream2

Mike Fedosin 2020-06-19 01:36:21 +02:00
commit 4fdff31cfb
18 changed files with 336 additions and 206 deletions

.prow.sh Executable file

@ -0,0 +1,11 @@
#! /bin/bash
# A Prow job can override these defaults, but this shouldn't be necessary.
# Only these tests make sense for csi-driver-nfs until we can integrate k/k
# e2es.
: ${CSI_PROW_TESTS:="unit"}
. release-tools/prow.sh
main


@ -0,0 +1,22 @@
# v2.0.0
## Breaking Changes
- Changing name of the driver from "csi-nfsplugin" to "nfs.csi.k8s.io" ([#26](https://github.com/kubernetes-csi/csi-driver-nfs/pull/26), [@wozniakjan](https://github.com/wozniakjan))
## New Features
- Add support for CSI spec 1.0.
- Remove external-attacher and update deployment specs to apps/v1.
([#24](https://github.com/kubernetes-csi/csi-driver-nfs/pull/24),
[@wozniakjan](https://github.com/wozniakjan))
## Bug Fixes
- Adds support for all access modes. ([#15](https://github.com/kubernetes-csi/csi-driver-nfs/pull/15), [@msau42](https://github.com/msau42))
## Other Notable Changes
- Update base image to centos8.
([#28](https://github.com/kubernetes-csi/csi-driver-nfs/pull/28), [@wozniakjan](https://github.com/wozniakjan))
- Switch to go mod and update dependencies. ([#22](https://github.com/kubernetes-csi/csi-driver-nfs/pull/22), [@wozniakjan](https://github.com/wozniakjan))


@ -1,8 +1,8 @@
FROM centos:7.4.1708
FROM centos:latest
# Copy nfsplugin from build _output directory
COPY bin/nfsplugin /nfsplugin
RUN yum -y install nfs-utils && yum -y install epel-release && yum -y install jq && yum clean all
RUN yum -y install nfs-utils epel-release jq && yum clean all
ENTRYPOINT ["/nfsplugin"]

README.md

@ -1,79 +1,71 @@
# CSI NFS driver
## Kubernetes
### Requirements
## Overview
The following feature gates and runtime config have to be enabled to deploy the driver
This is a repository for the [NFS](https://en.wikipedia.org/wiki/Network_File_System) [CSI](https://kubernetes-csi.github.io/docs/) driver.
Currently it implements the bare minimum of the [CSI spec](https://github.com/container-storage-interface/spec) and is in the alpha state
of development.
#### CSI Feature matrix
| **nfs.csi.k8s.io** | K8s version compatibility | CSI versions compatibility | Dynamic Provisioning | Resize | Snapshots | Raw Block | AccessModes | Status |
|--------------------|---------------------------|----------------------------|----------------------|--------|-----------|-----------|--------------------------|------------------------------------------------------------------------------|
|master | 1.14 + | v1.0 + | no | no | no | no | Read/Write Multiple Pods | Alpha |
|v2.0.0 | 1.14 + | v1.0 + | no | no | no | no | Read/Write Multiple Pods | Alpha |
|v1.0.0 | 1.9 - 1.15 | v1.0 | no | no | no | no | Read/Write Multiple Pods | [deprecated](https://github.com/kubernetes-csi/drivers/tree/master/pkg/nfs) |
## Requirements
The CSI NFS driver requires a Kubernetes cluster of version 1.14 or newer and
a preexisting NFS server, whether it is deployed on the cluster or provisioned
independently. The plugin itself provides only a communication layer between
resources in the cluster and the NFS server.
## Example
There are multiple ways to create a Kubernetes cluster; the NFS CSI plugin
should work independently of your cluster setup. A very simple way to get a
local environment for testing is, for example,
[kind](https://github.com/kubernetes-sigs/kind).
There are also multiple NFS servers you can use for testing the plugin; the
major protocol versions v2, v3, and v4 should be supported by the current
implementation.
The example assumes you have your cluster created (e.g. `kind create cluster`)
and a working NFS server (e.g. https://github.com/rootfs/nfs-ganesha-docker)
#### Deploy
Deploy the NFS plugin along with the `CSIDriver` info.
```
FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true
RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true"
kubectl create -f deploy/kubernetes
```
Mount propagation requires support for privileged containers, so make sure privileged containers are enabled in the cluster.
#### Example Nginx application
### Example local-up-cluster.sh
The [/examples/kubernetes/nginx.yaml](/examples/kubernetes/nginx.yaml) contains a `PersistentVolume`,
`PersistentVolumeClaim` and an nginx `Pod` mounting the NFS volume under `/var/www`.
```ALLOW_PRIVILEGED=true FEATURE_GATES=CSIPersistentVolume=true,MountPropagation=true RUNTIME_CONFIG="storage.k8s.io/v1alpha1=true" LOG_LEVEL=5 hack/local-up-cluster.sh```
You will need to update the NFS server IP and the share information under
`volumeAttributes` inside the `PersistentVolume` in the `nginx.yaml` file to
match your NFS server's public endpoint and configuration. You can also provide
additional `mountOptions`, such as the protocol version, in the
`PersistentVolume` `spec` as relevant for your NFS server.
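For reference, here is a minimal `PersistentVolume` sketch; the volume name, `share` path, and `mountOptions` value are placeholders to adapt, while the driver name and attribute keys follow the deployment files in this repo:
```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-nfsplugin        # placeholder name
spec:
  accessModes:
    - ReadWriteMany
  capacity:
    storage: 100Gi
  mountOptions:
    - nfsvers=4.1             # placeholder: pick a version your server supports
  csi:
    driver: nfs.csi.k8s.io
    volumeHandle: data-id
    volumeAttributes:
      server: 127.0.0.1       # your NFS server endpoint
      share: /export          # placeholder: your exported share
```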
### Deploy
```kubectl create -f deploy/kubernetes```
### Example Nginx application
Please update the NFS server and share information in the nginx.yaml file.
```kubectl create -f examples/kubernetes/nginx.yaml```
## Using CSC tool
### Build nfsplugin
```
$ make nfs
kubectl create -f examples/kubernetes/nginx.yaml
```
### Start NFS driver
```
$ sudo ./_output/nfsplugin --endpoint tcp://127.0.0.1:10000 --nodeid CSINode -v=5
```
## Test
Get the `csc` tool from https://github.com/rexray/gocsi/tree/master/csc
#### Get plugin info
```
$ csc identity plugin-info --endpoint tcp://127.0.0.1:10000
"NFS" "0.1.0"
```
#### NodePublish a volume
```
$ export NFS_SERVER="Your Server IP (Ex: 10.10.10.10)"
$ export NFS_SHARE="Your NFS share"
$ csc node publish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nfs --attrib server=$NFS_SERVER --attrib share=$NFS_SHARE nfstestvol
nfstestvol
```
#### NodeUnpublish a volume
```
$ csc node unpublish --endpoint tcp://127.0.0.1:10000 --target-path /mnt/nfs nfstestvol
nfstestvol
```
#### Get NodeID
```
$ csc node get-id --endpoint tcp://127.0.0.1:10000
CSINode
```
## Running Kubernetes End To End tests on an NFS Driver
First, stand up a local cluster `ALLOW_PRIVILEGED=1 hack/local-up-cluster.sh` (from your Kubernetes repo)
For Fedora/RHEL clusters, the following might be required:
```
sudo chown -R $USER:$USER /var/run/kubernetes/
sudo chown -R $USER:$USER /var/lib/kubelet
sudo chcon -R -t svirt_sandbox_file_t /var/lib/kubelet
```
If you are planning to test using your own private image, you could either install your NFS driver using your own set of YAML files, or edit the existing YAML files to use that private image.
When using the [existing set of YAML files](https://github.com/kubernetes-csi/csi-driver-nfs/tree/master/deploy/kubernetes), you would edit the [csi-attacher-nfsplugin.yaml](https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/kubernetes/csi-attacher-nfsplugin.yaml#L46) and [csi-nodeplugin-nfsplugin.yaml](https://github.com/kubernetes-csi/csi-driver-nfs/blob/master/deploy/kubernetes/csi-nodeplugin-nfsplugin.yaml#L45) files to include your private image instead of the default one. After editing these files, skip to step 3 of the following steps.
@ -81,7 +73,7 @@ When using the [existing set of YAML files](https://github.com/kubernetes-csi/cs
If you already have a driver installed, skip to step 4 of the following steps.
1) Build the nfs driver by running `make`
2) Create the NFS driver image, where the image tag is whatever is required by your YAML deployment files `docker build -t quay.io/k8scsi/nfsplugin:v1.0.0 .`
2) Create the NFS driver image, where the image tag is whatever is required by your YAML deployment files `docker build -t quay.io/k8scsi/nfsplugin:v2.0.0 .`
3) Install the Driver: `kubectl create -f deploy/kubernetes`
4) Build E2E test binary: `make build-tests`
5) Run E2E Tests using the following command: `./bin/tests --ginkgo.v --ginkgo.progress --kubeconfig=/var/run/kubernetes/admin.kubeconfig`


@ -20,6 +20,7 @@ import (
"flag"
"fmt"
"os"
"strconv"
"github.com/spf13/cobra"
@ -29,6 +30,7 @@ import (
var (
endpoint string
nodeID string
perm string
)
func init() {
@ -55,6 +57,8 @@ func main() {
cmd.PersistentFlags().StringVar(&endpoint, "endpoint", "", "CSI endpoint")
cmd.MarkPersistentFlagRequired("endpoint")
cmd.PersistentFlags().StringVar(&perm, "mount-permissions", "", "mounted folder permissions")
cmd.ParseFlags(os.Args[1:])
if err := cmd.Execute(); err != nil {
fmt.Fprintf(os.Stderr, "%s", err.Error())
@ -65,6 +69,18 @@ func main() {
}
func handle() {
d := nfs.NewNFSdriver(nodeID, endpoint)
// Converting string permission representation to *uint32
var parsedPerm *uint32
if perm != "" {
permu64, err := strconv.ParseUint(perm, 8, 32)
if err != nil {
fmt.Fprintf(os.Stderr, "Incorrect mount-permissions value: %q", perm)
os.Exit(1)
}
permu32 := uint32(permu64)
parsedPerm = &permu32
}
d := nfs.NewNFSdriver(nodeID, endpoint, parsedPerm)
d.Run()
}
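The new flag can then be passed at startup; an illustrative invocation (the octal value and binary path are assumptions based on the base-8 parsing above and the README):
```
$ sudo ./_output/nfsplugin --endpoint tcp://127.0.0.1:10000 --nodeid CSINode --mount-permissions 0777 -v=5
```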


@ -1,63 +0,0 @@
# This YAML file contains attacher & csi driver API objects that are necessary
# to run external CSI attacher for nfs
kind: Service
apiVersion: v1
metadata:
name: csi-attacher-nfsplugin
labels:
app: csi-attacher-nfsplugin
spec:
selector:
app: csi-attacher-nfsplugin
ports:
- name: dummy
port: 12345
---
kind: StatefulSet
apiVersion: apps/v1beta1
metadata:
name: csi-attacher-nfsplugin
spec:
serviceName: "csi-attacher"
replicas: 1
template:
metadata:
labels:
app: csi-attacher-nfsplugin
spec:
serviceAccount: csi-attacher
containers:
- name: csi-attacher
image: quay.io/k8scsi/csi-attacher:v1.0.1
args:
- "--v=5"
- "--csi-address=$(ADDRESS)"
env:
- name: ADDRESS
value: /csi/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /csi
- name: nfs
image: quay.io/k8scsi/nfsplugin:v1.0.0
args :
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"
env:
- name: NODE_ID
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: CSI_ENDPOINT
value: unix://plugin/csi.sock
imagePullPolicy: "IfNotPresent"
volumeMounts:
- name: socket-dir
mountPath: /plugin
volumes:
- name: socket-dir
emptyDir:


@ -1,37 +0,0 @@
# This YAML file contains RBAC API objects that are necessary to run external
# CSI attacher for nfs flex adapter
apiVersion: v1
kind: ServiceAccount
metadata:
name: csi-attacher
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: external-attacher-runner
rules:
- apiGroups: [""]
resources: ["persistentvolumes"]
verbs: ["get", "list", "watch", "update"]
- apiGroups: [""]
resources: ["nodes"]
verbs: ["get", "list", "watch"]
- apiGroups: ["storage.k8s.io"]
resources: ["volumeattachments"]
verbs: ["get", "list", "watch", "update"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
name: csi-attacher-role
subjects:
- kind: ServiceAccount
name: csi-attacher
namespace: default
roleRef:
kind: ClusterRole
name: external-attacher-runner
apiGroup: rbac.authorization.k8s.io


@ -0,0 +1,9 @@
apiVersion: storage.k8s.io/v1beta1
kind: CSIDriver
metadata:
name: nfs.csi.k8s.io
spec:
attachRequired: false
volumeLifecycleModes:
- Persistent
podInfoOnMount: true
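Once the manifests are applied, the registered object can be verified with, for example:
```
kubectl get csidriver nfs.csi.k8s.io
```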


@ -1,7 +1,7 @@
# This YAML file contains driver-registrar & csi driver nodeplugin API objects
# that are necessary to run CSI nodeplugin for nfs
kind: DaemonSet
apiVersion: apps/v1beta2
apiVersion: apps/v1
metadata:
name: csi-nodeplugin-nfsplugin
spec:
@ -42,7 +42,7 @@ spec:
capabilities:
add: ["SYS_ADMIN"]
allowPrivilegeEscalation: true
image: quay.io/k8scsi/nfsplugin:v1.0.0
image: quay.io/k8scsi/nfsplugin:v2.0.0
args :
- "--nodeid=$(NODE_ID)"
- "--endpoint=$(CSI_ENDPOINT)"


@ -10,7 +10,7 @@ spec:
capacity:
storage: 100Gi
csi:
driver: csi-nfsplugin
driver: nfs.csi.k8s.io
volumeHandle: data-id
volumeAttributes:
server: 127.0.0.1


@ -29,21 +29,23 @@ type nfsDriver struct {
endpoint string
perm *uint32
//ids *identityServer
ns *nodeServer
cap []*csi.VolumeCapability_AccessMode
cap map[csi.VolumeCapability_AccessMode_Mode]bool
cscap []*csi.ControllerServiceCapability
}
const (
driverName = "csi-nfsplugin"
driverName = "nfs.csi.k8s.io"
)
var (
version = "1.0.0-rc2"
version = "2.0.0"
)
func NewNFSdriver(nodeID, endpoint string) *nfsDriver {
func NewNFSdriver(nodeID, endpoint string, perm *uint32) *nfsDriver {
glog.Infof("Driver: %v version: %v", driverName, version)
n := &nfsDriver{
@ -51,9 +53,19 @@ func NewNFSdriver(nodeID, endpoint string) *nfsDriver {
version: version,
nodeID: nodeID,
endpoint: endpoint,
cap: map[csi.VolumeCapability_AccessMode_Mode]bool{},
perm: perm,
}
n.AddVolumeCapabilityAccessModes([]csi.VolumeCapability_AccessMode_Mode{csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER})
vcam := []csi.VolumeCapability_AccessMode_Mode{
csi.VolumeCapability_AccessMode_SINGLE_NODE_WRITER,
csi.VolumeCapability_AccessMode_SINGLE_NODE_READER_ONLY,
csi.VolumeCapability_AccessMode_MULTI_NODE_READER_ONLY,
csi.VolumeCapability_AccessMode_MULTI_NODE_SINGLE_WRITER,
csi.VolumeCapability_AccessMode_MULTI_NODE_MULTI_WRITER,
}
n.AddVolumeCapabilityAccessModes(vcam)
// NFS plugin does not support ControllerServiceCapability now.
// If support is added, it should be set to the appropriate
// ControllerServiceCapability RPC types.
@ -86,8 +98,8 @@ func (n *nfsDriver) AddVolumeCapabilityAccessModes(vc []csi.VolumeCapability_Acc
for _, c := range vc {
glog.Infof("Enabling volume access mode: %v", c.String())
vca = append(vca, &csi.VolumeCapability_AccessMode{Mode: c})
n.cap[c] = true
}
n.cap = vca
return vca
}
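A likely motivation for the map type is constant-time capability checks; a hypothetical helper, not part of this diff and assuming the file's existing `csi` import plus `fmt`, could look like:
```go
// Sketch only: reject any requested capability whose access mode is not
// enabled in the driver's map-based cap field.
func (n *nfsDriver) validateVolumeCapabilities(caps []*csi.VolumeCapability) error {
	for _, c := range caps {
		if mode := c.GetAccessMode().GetMode(); !n.cap[mode] {
			return fmt.Errorf("driver does not support access mode: %v", mode)
		}
	}
	return nil
}
```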


@ -73,6 +73,12 @@ func (ns *nodeServer) NodePublishVolume(ctx context.Context, req *csi.NodePublis
return nil, status.Error(codes.Internal, err.Error())
}
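// If --mount-permissions was set, apply that mode to the freshly mounted target path.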
if ns.Driver.perm != nil {
if err := os.Chmod(targetPath, os.FileMode(*ns.Driver.perm)); err != nil {
return nil, status.Error(codes.Internal, err.Error())
}
}
return &csi.NodePublishVolumeResponse{}, nil
}


@ -50,18 +50,22 @@ naming convention `<hostpath-deployment-version>-on-<kubernetes-version>`.
## Release Process
1. Identify all issues and ongoing PRs that should go into the release, and
drive them to resolution.
1. Download [K8s release notes
1. Download v2.8+ [K8s release notes
generator](https://github.com/kubernetes/release/tree/master/cmd/release-notes)
1. Generate release notes for the release. Replace arguments with the relevant
information.
```
GITHUB_TOKEN=<token> ./release-notes --start-sha=0ed6978fd199e3ca10326b82b4b8b8e916211c9b --end-sha=3cb3d2f18ed8cb40371c6d8886edcabd1f27e7b9 \
--github-org=kubernetes-csi --github-repo=external-attacher -branch=master -output out.md
```
* `--start-sha` should point to the last release from the same branch. For
example:
* `1.X-1.0` tag when releasing `1.X.0`
* `1.X.Y-1` tag when releasing `1.X.Y`
* For new minor releases on master:
```
GITHUB_TOKEN=<token> release-notes --discover=mergebase-to-latest
--github-org=kubernetes-csi --github-repo=external-provisioner
--required-author="" --output out.md
```
* For new patch releases on a release branch:
```
GITHUB_TOKEN=<token> release-notes --discover=patch-to-latest --branch=release-1.1
--github-org=kubernetes-csi --github-repo=external-provisioner
--required-author="" --output out.md
```
1. Compare the generated output to the new commits for the release to check if
any notable change missed a release note.
1. Reword release notes as needed. Make sure to check notes for breaking


@ -60,23 +60,30 @@ else
TESTARGS =
endif
ARCH := $(if $(GOARCH),$(GOARCH),$(shell go env GOARCH))
# Specific packages can be excluded from each of the tests below by setting the *_FILTER_CMD variables
# to something like "| grep -v 'github.com/kubernetes-csi/project/pkg/foobar'". See usage below.
build-%: check-go-version-go
mkdir -p bin
CGO_ENABLED=0 GOOS=linux go build $(GOFLAGS_VENDOR) -a -ldflags '-X main.version=$(REV) -extldflags "-static"' -o ./bin/$* ./cmd/$*
if [ "$$ARCH" = "amd64" ]; then \
CGO_ENABLED=0 GOOS=windows go build $(GOFLAGS_VENDOR) -a -ldflags '-X main.version=$(REV) -extldflags "-static"' -o ./bin/$*.exe ./cmd/$* ; \
CGO_ENABLED=0 GOOS=linux GOARCH=ppc64le go build $(GOFLAGS_VENDOR) -a -ldflags '-X main.version=$(REV) -extldflags "-static"' -o ./bin/$*-ppc64le ./cmd/$* ; \
fi
# BUILD_PLATFORMS contains a set of <os> <arch> <suffix> triplets,
# separated by semicolon. An empty variable or empty entry (= just a
# semicolon) builds for the default platform of the current Go
# toolchain.
BUILD_PLATFORMS =
container-%: build-%
# This builds each command (= the sub-directories of ./cmd) for the target platform(s)
# defined by BUILD_PLATFORMS.
$(CMDS:%=build-%): build-%: check-go-version-go
mkdir -p bin
echo '$(BUILD_PLATFORMS)' | tr ';' '\n' | while read -r os arch suffix; do \
if ! (set -x; CGO_ENABLED=0 GOOS="$$os" GOARCH="$$arch" go build $(GOFLAGS_VENDOR) -a -ldflags '-X main.version=$(REV) -extldflags "-static"' -o "./bin/$*$$suffix" ./cmd/$*); then \
echo "Building $* for GOOS=$$os GOARCH=$$arch failed, see error(s) above."; \
exit 1; \
fi; \
done
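# Example (illustrative): 'make build BUILD_PLATFORMS="linux amd64; windows amd64 .exe"'
# produces ./bin/<cmd> for linux/amd64 plus ./bin/<cmd>.exe for windows/amd64.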
$(CMDS:%=container-%): container-%: build-%
docker build -t $*:latest -f $(shell if [ -e ./cmd/$*/Dockerfile ]; then echo ./cmd/$*/Dockerfile; else echo Dockerfile; fi) --label revision=$(REV) .
push-%: container-%
$(CMDS:%=push-%): push-%: container-%
set -ex; \
push_image () { \
docker tag $*:latest $(IMAGE_NAME):$$tag; \
@ -98,6 +105,77 @@ build: $(CMDS:%=build-%)
container: $(CMDS:%=container-%)
push: $(CMDS:%=push-%)
# Additional parameters are needed when pushing to a local registry,
# see https://github.com/docker/buildx/issues/94.
# However, that then runs into https://github.com/docker/cli/issues/2396.
#
# What works for local testing is:
# make push-multiarch PULL_BASE_REF=master REGISTRY_NAME=<your account on dockerhub.io> BUILD_PLATFORMS="linux amd64; windows amd64 .exe; linux ppc64le -ppc64le; linux s390x -s390x"
DOCKER_BUILDX_CREATE_ARGS ?=
# This target builds a multiarch image for one command using Moby BuildKit builder toolkit.
# Docker Buildx is included in Docker 19.03.
#
# ./cmd/<command>/Dockerfile[.Windows] is used if found, otherwise Dockerfile[.Windows].
# It is currently optional: if no such file exists, Windows images are not included,
# even when Windows is listed in BUILD_PLATFORMS. That way, projects can test that
# Windows binaries can be built before adding a Dockerfile for it.
#
# BUILD_PLATFORMS determines which individual images are included in the multiarch image.
# PULL_BASE_REF must be set to 'master', 'release-x.y', or a tag name, and determines
# the tag for the resulting multiarch image.
$(CMDS:%=push-multiarch-%): push-multiarch-%: check-pull-base-ref build-%
set -ex; \
DOCKER_CLI_EXPERIMENTAL=enabled; \
export DOCKER_CLI_EXPERIMENTAL; \
docker buildx create $(DOCKER_BUILDX_CREATE_ARGS) --use --name multiarchimage-buildertest; \
trap "docker buildx rm multiarchimage-buildertest" EXIT; \
dockerfile_linux=$$(if [ -e ./cmd/$*/Dockerfile ]; then echo ./cmd/$*/Dockerfile; else echo Dockerfile; fi); \
dockerfile_windows=$$(if [ -e ./cmd/$*/Dockerfile.Windows ]; then echo ./cmd/$*/Dockerfile.Windows; else echo Dockerfile.Windows; fi); \
if [ '$(BUILD_PLATFORMS)' ]; then build_platforms='$(BUILD_PLATFORMS)'; else build_platforms="linux amd64"; fi; \
if ! [ -f "$$dockerfile_windows" ]; then \
build_platforms="$$(echo "$$build_platforms" | sed -e 's/windows *[^ ]* *.exe//g' -e 's/; *;/;/g')"; \
fi; \
pushMultiArch () { \
tag=$$1; \
echo "$$build_platforms" | tr ';' '\n' | while read -r os arch suffix; do \
docker buildx build --push \
--tag $(IMAGE_NAME):$$arch-$$os-$$tag \
--platform=$$os/$$arch \
--file $$(eval echo \$${dockerfile_$$os}) \
--build-arg binary=./bin/$*$$suffix \
--label revision=$(REV) \
.; \
done; \
images=$$(echo "$$build_platforms" | tr ';' '\n' | while read -r os arch suffix; do echo $(IMAGE_NAME):$$arch-$$os-$$tag; done); \
docker manifest create --amend $(IMAGE_NAME):$$tag $$images; \
docker manifest push -p $(IMAGE_NAME):$$tag; \
}; \
if [ $(PULL_BASE_REF) = "master" ]; then \
: "creating or overwriting canary image"; \
pushMultiArch canary; \
elif echo $(PULL_BASE_REF) | grep -q -e 'release-*' ; then \
: "creating or overwriting canary image for release branch"; \
release_canary_tag=$$(echo $(PULL_BASE_REF) | cut -f2 -d '-')-canary; \
pushMultiArch $$release_canary_tag; \
elif docker pull $(IMAGE_NAME):$(PULL_BASE_REF) 2>&1 | tee /dev/stderr | grep -q "manifest for $(IMAGE_NAME):$(PULL_BASE_REF) not found"; then \
: "creating release image"; \
pushMultiArch $(PULL_BASE_REF); \
else \
: "ERROR: release image $(IMAGE_NAME):$(PULL_BASE_REF) already exists: a new tag is required!"; \
exit 1; \
fi
.PHONY: check-pull-base-ref
check-pull-base-ref:
if ! [ "$(PULL_BASE_REF)" ]; then \
echo >&2 "ERROR: PULL_BASE_REF must be set to 'master', 'release-x.y', or a tag name."; \
exit 1; \
fi
.PHONY: push-multiarch
push-multiarch: $(CMDS:%=push-multiarch-%)
clean:
-rm -rf bin

release-tools/cloudbuild.sh Executable file

@ -0,0 +1,6 @@
#! /bin/bash
# shellcheck disable=SC1091
. release-tools/prow.sh
gcr_cloud_build


@ -0,0 +1,46 @@
# A configuration file for multi-arch image building with the Google cloud build service.
#
# Repos using this file must:
# - import csi-release-tools
# - add a symlink cloudbuild.yaml -> release-tools/cloudbuild.yaml
# - add a .cloudbuild.sh which can be a custom file or a symlink
# to release-tools/cloudbuild.sh
# - accept "binary" as build argument in their Dockerfile(s) (see
# https://github.com/pohly/node-driver-registrar/blob/3018101987b0bb6da2a2657de607174d6e3728f7/Dockerfile#L4-L6)
# because binaries will get built for different architectures and then
# get copied from the built host into the container image
#
# See https://github.com/kubernetes/test-infra/blob/master/config/jobs/image-pushing/README.md
# for more details on image pushing process in Kubernetes.
#
# To promote release images, see https://github.com/kubernetes/k8s.io/tree/master/k8s.gcr.io/images/k8s-staging-sig-storage.
# This must be specified in seconds. If omitted, defaults to 600s (10 mins).
timeout: 1200s
# This prevents errors if you don't use both _GIT_TAG and _PULL_BASE_REF,
# or any new substitutions added in the future.
options:
substitution_option: ALLOW_LOOSE
steps:
# The image must contain bash and curl. Ideally it should also contain
# the desired version of Go (currently defined in release-tools/travis.yml),
# but that just speeds up the build and is not required.
- name: 'gcr.io/k8s-testimages/gcb-docker-gcloud:v20200421-a2bf5f8'
entrypoint: ./.cloudbuild.sh
env:
- GIT_TAG=${_GIT_TAG}
- PULL_BASE_REF=${_PULL_BASE_REF}
- REGISTRY_NAME=gcr.io/${_STAGING_PROJECT}
- HOME=/root
substitutions:
# _GIT_TAG will be filled with a git-based tag for the image, of the form vYYYYMMDD-hash, and
# can be used as a substitution.
_GIT_TAG: '12345'
# _PULL_BASE_REF will contain the ref that was pushed to trigger this build -
# a branch like 'master' or 'release-0.2', or a tag like 'v0.2'.
_PULL_BASE_REF: 'master'
# The default gcr.io staging project for Kubernetes-CSI
# (=> https://console.cloud.google.com/gcr/images/k8s-staging-sig-storage/GLOBAL).
# Might be overridden in the Prow build job for a repo which wants
# images elsewhere.
_STAGING_PROJECT: 'k8s-staging-sig-storage'


@ -85,6 +85,8 @@ get_versioned_variable () {
echo "$value"
}
configvar CSI_PROW_BUILD_PLATFORMS "linux amd64; windows amd64 .exe; linux ppc64le -ppc64le; linux s390x -s390x; linux arm64 -arm64" "Go target platforms (= GOOS + GOARCH) and file suffix of the resulting binaries"
# If we have a vendor directory, then use it. We must be careful to only
# use this for "make" invocations inside the project's repo itself because
# setting it globally can break other go usages (like "go get <some command>"
@ -193,7 +195,7 @@ configvar CSI_PROW_WORK "$(mkdir -p "$GOPATH/pkg" && mktemp -d "$GOPATH/pkg/csip
# If the deployment script is called with CSI_PROW_TEST_DRIVER=<file name> as
# environment variable, then it must write a suitable test driver configuration
# into that file in addition to installing the driver.
configvar CSI_PROW_DRIVER_VERSION "v1.3.0-rc4" "CSI driver version"
configvar CSI_PROW_DRIVER_VERSION "v1.3.0" "CSI driver version"
configvar CSI_PROW_DRIVER_REPO https://github.com/kubernetes-csi/csi-driver-host-path "CSI driver repo"
configvar CSI_PROW_DEPLOYMENT "" "deployment"
@ -340,7 +342,7 @@ configvar CSI_PROW_E2E_ALPHA_GATES_LATEST '' "alpha feature gates for latest Kub
configvar CSI_PROW_E2E_ALPHA_GATES "$(get_versioned_variable CSI_PROW_E2E_ALPHA_GATES "${csi_prow_kubernetes_version_suffix}")" "alpha E2E feature gates"
# Which external-snapshotter tag to use for the snapshotter CRD and snapshot-controller deployment
configvar CSI_SNAPSHOTTER_VERSION 'v2.0.0' "external-snapshotter version tag"
configvar CSI_SNAPSHOTTER_VERSION 'v2.0.1' "external-snapshotter version tag"
# Some tests are known to be unusable in a KinD cluster. For example,
# stopping kubelet with "ssh <node IP> systemctl stop kubelet" simply
@ -1026,7 +1028,7 @@ main () {
images=
if ${CSI_PROW_BUILD_JOB}; then
# A successful build is required for testing.
run_with_go "${CSI_PROW_GO_VERSION_BUILD}" make all "GOFLAGS_VENDOR=${GOFLAGS_VENDOR}" || die "'make all' failed"
run_with_go "${CSI_PROW_GO_VERSION_BUILD}" make all "GOFLAGS_VENDOR=${GOFLAGS_VENDOR}" "BUILD_PLATFORMS=${CSI_PROW_BUILD_PLATFORMS}" || die "'make all' failed"
# We don't want test failures to prevent E2E testing below, because the failure
# might have been minor or unavoidable, for example when experimenting with
# changes in "release-tools" in a PR (that fails the "is release-tools unmodified"
@ -1062,18 +1064,24 @@ main () {
# always pulling the image
# (https://github.com/kubernetes-sigs/kind/issues/328).
docker tag "$i:latest" "$i:csiprow" || die "tagging the locally built container image for $i failed"
done
if [ -e deploy/kubernetes/rbac.yaml ]; then
# This is one of those components which has its own RBAC rules (like external-provisioner).
# We are testing a locally built image and also want to test with the current,
# potentially modified RBAC rules.
if [ "$(echo "$cmds" | wc -w)" != 1 ]; then
die "ambiguous deploy/kubernetes/rbac.yaml: need exactly one command, got: $cmds"
# For components with multiple cmds, the RBAC file should be in the following format:
# rbac-$cmd.yaml
# If this file cannot be found, we can default to the standard location:
# deploy/kubernetes/rbac.yaml
rbac_file_path=$(find . -type f -name "rbac-$i.yaml")
if [ "$rbac_file_path" == "" ]; then
rbac_file_path="$(pwd)/deploy/kubernetes/rbac.yaml"
fi
e=$(echo "$cmds" | tr '[:lower:]' '[:upper:]' | tr - _)
images="$images ${e}_RBAC=$(pwd)/deploy/kubernetes/rbac.yaml"
fi
if [ -e "$rbac_file_path" ]; then
# This is one of those components which has its own RBAC rules (like external-provisioner).
# We are testing a locally built image and also want to test with the current,
# potentially modified RBAC rules.
e=$(echo "$i" | tr '[:lower:]' '[:upper:]' | tr - _)
images="$images ${e}_RBAC=$rbac_file_path"
fi
done
fi
if tests_need_non_alpha_cluster; then
@ -1181,3 +1189,23 @@ main () {
return "$ret"
}
# This function can be called by a repo's top-level cloudbuild.sh:
# it handles environment set up in the GCR cloud build and then
# invokes "make push-multiarch" to do the actual image building.
gcr_cloud_build () {
# Register gcloud as a Docker credential helper.
# Required for "docker buildx build --push".
gcloud auth configure-docker
if find . -name Dockerfile | grep -v ^./vendor | xargs --no-run-if-empty cat | grep -q ^RUN; then
# Needed for "RUN" steps on non-linux/amd64 platforms.
# See https://github.com/multiarch/qemu-user-static#getting-started
(set -x; docker run --rm --privileged multiarch/qemu-user-static --reset -p yes)
fi
# Extract tag-n-hash value from GIT_TAG (form vYYYYMMDD-tag-n-hash) for REV value.
REV=v$(echo "$GIT_TAG" | cut -f3- -d 'v')
run_with_go "${CSI_PROW_GO_VERSION_BUILD}" make push-multiarch REV="${REV}" REGISTRY_NAME="${REGISTRY_NAME}" BUILD_PLATFORMS="${CSI_PROW_BUILD_PLATFORMS}"
}


@ -54,7 +54,7 @@ func initNFSDriver(name string, manifests ...string) testsuites.TestDriver {
func InitNFSDriver() testsuites.TestDriver {
return initNFSDriver("csi-nfsplugin",
return initNFSDriver("nfs.csi.k8s.io",
"csi-attacher-nfsplugin.yaml",
"csi-attacher-rbac.yaml",
"csi-nodeplugin-nfsplugin.yaml",