50 Commits

Author SHA1 Message Date
e861726d55 Update .gitreview for stable/2025.1
Change-Id: Iec68d668bcf1464315629a47451b173cd252e978
2025-03-13 13:34:46 +00:00
Ghanshyam Mann
a68e0b5447 Update gate jobs as per the 2025.1 cycle testing runtime
As per the 2025.1 testing runtime[1], we need to move testing
to Ubuntu Noble.

Tracking: https://etherpad.opendev.org/p/migrate-to-noble

[1] https://governance.openstack.org/tc/reference/runtimes/2025.1.html

Change-Id: Id5bdc606e256e5e70fccd8760ccc1e40d8e2d144
2024-11-30 12:49:12 -08:00
Yoshiro Watanabe
a0b2a6cbaf Change repository for k8s, cri-o
The legacy k8s repository was retired on March 26, 2024 [1].
The cri-o project followed the k8s lead and moved its builds to a
new repository [2].

This patch changes the location of the k8s and cri-o packages
installed for Ubuntu based deployments only. It also changes the
value of the apiVersion parameter in the kubeadm configuration,
because the new repository can also install k8s versions 1.27.x and
later, which no longer support the v1beta2 and earlier APIs.

The version of the package to be installed can be specified using
the K8S_VERSION and CRIO_VERSION variables.
Also, the default values of K8S_VERSION and CRIO_VERSION have been
changed, and it has been confirmed that the tacker project FT works
fine with the changed versions.

[1]https://kubernetes.io/blog/2023/08/31/legacy-package-repository-deprecation/
[2]https://kubernetes.io/blog/2023/10/10/cri-o-community-package-infrastructure/
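The new pkgs.k8s.io repositories are keyed by the major.minor part of the version, so the plugin derives the repo path from CRIO_VERSION with a shell parameter expansion; a minimal sketch (the version number here is illustrative, not the plugin's default):

```shell
# pkgs.k8s.io publishes per-minor-version streams, so strip the patch
# component from the full version with ${var%.*}.
CRIO_VERSION=1.28.2   # illustrative value
stream="https://pkgs.k8s.io/addons:/cri-o:/stable:/v${CRIO_VERSION%.*}"
echo "$stream"
# prints https://pkgs.k8s.io/addons:/cri-o:/stable:/v1.28
```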

Change-Id: I0ce9fd2bcb5d79ebad2cecafabf8c9f33b106647
2024-10-01 09:18:19 +00:00
Dr. Jens Harbott
3e2a0ffe4f zuul: drop devstack-gate reference
Devstack jobs no longer depend on the devstack-gate project, which has
been retired.

Change-Id: Id4721d419b22b6d6498d192e3f313629ad33ef69
2024-06-07 14:12:50 +02:00
Ashutosh Mishra
0e50d17b8d Added the correct CentOS 9 Stream repo for CRI-O installation.
The CentOS 9 Stream repo entry now includes the gpgkey URL along with
the baseurl.

Closes-Bug: #2041788
Change-Id: I601eb22df31b33f680996eea98dc8e49d0fbb612
2023-10-30 06:11:39 +00:00
Yasufumi Ogawa
0052374411 Failed to launch kubelet after rebooting
The crio service is disabled by default when installed with devstack,
so kubelet cannot launch after a host reboot because crio is not
running yet. To fix the issue, enable crio in systemctl while
installing kubeadm.

Change-Id: Ic042494d1cd588fb2b06f7e1d5544206b20b5ad6
Signed-off-by: Yasufumi Ogawa <yasufum.o@gmail.com>
2023-07-26 17:37:16 +00:00
psingla
f2fd4303cf Adding cri-o repository for centos system
The cri-o repository for CentOS needs to be added to
/etc/yum.repos.d to successfully install cri-o on CentOS systems.

Change-Id: I6b215cb0efb3c53e97a4a6605e94a262c0d04f25
2023-02-27 15:33:34 +01:00
Hongbin Lu
f8e786f0d5 Support installing specific version of docker
Change-Id: I12015c28f6f8ffc125097a14514a6a90a20cf35b
2023-02-24 15:11:33 +00:00
Roman Dobosz
bdc0b49ce3 Install apparmor tools also for Ubuntu Focal.
The k8s gate is still on Focal, so the patch that unblocked apparmor
on Jammy does not affect it. This is the corresponding fix for Focal.

Change-Id: I2a9bc69a59e7d6d21d61e79115d5a3c726c73ab0
2023-02-23 18:36:19 +01:00
Roman Dobosz
38835f2c54 Use flannel preferred configuration.
On the GitHub repository, the flannel team has stated[1] that for k8s
1.17+ the yaml file[2] with the flannel config should be used. This
patch switches to it, as the old version stopped working.

[1] https://github.com/flannel-io/flannel#deploying-flannel-manually
[2] https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml

Change-Id: Ib7af55304714d8e91f5e9c63cb1501fb515553d6
2023-02-22 12:38:38 +01:00
Roman Dobosz
c101497703 Bump k8s version.
Kubernetes 1.19 has been gone for over a year now. The current
minimal supported version is 1.23.x, which is also the last version
that supports docker-shim. This patch bumps the k8s version to
1.23.16 and crio to 1.23.

Change-Id: I822217e769cc5cd041032fb2302c3a9c130d11ff
2023-02-22 12:38:09 +01:00
Roman Dobosz
f3cbfa21ff Change default kubernetes registry to current one.
Last year, the kubernetes community moved from k8s.gcr.io to
registry.k8s.io. Images on k8s.gcr.io are no longer served,
therefore we need to migrate to the new registry.

Change-Id: I20305b380d26fdaa30632107b29debc519e13e54
2023-02-21 17:39:02 +01:00
Roman Dobosz
6c468e5293 Fix issue with lack of apparmor.
Failures have recently been observed with docker installations. The
newest version (23.x) fails to create containers when the apparmor
tools are not available while the feature is enabled in the kernel,
which is the case on the Ubuntu Jammy (22.04) stable release.

A couple[1] of bugs[2] have been reported upstream; as a workaround,
this patch installs apparmor.

[1] https://github.com/moby/moby/issues/44900
[2] https://github.com/moby/moby/issues/44970

Change-Id: Ie10de8a8b074daa19ba4a882528e78cd1ee74245
2023-02-21 17:37:51 +01:00
Roman Dobosz
aef3c9209b Fix the issue with default_sysctls for cri-o.
In earlier versions of cri-o (at least as seen in 1.18), the packages
shipped a default configuration in /etc/crio/crio.conf with all the
default values spelled out, so setting a value for a key meant there
was a real need to change the default. In versions up to 1.23 there
was sometimes no configuration file stored at all, but starting from
1.24 all the default config options are commented out, and only the
section names are left uncommented.

A similar situation was detected for the registry configuration,
where it is even more difficult, as recent versions use the TOML
format instead of ini.

With this patch all of these cases are covered.
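The multi-line list values involved here cannot be edited with iniset, so the patch replaces the whole bracketed value with a regex substitution; a simplified sketch of that approach (the function name is ours, not the plugin's):

```python
import re

def update_list_key(conf: str, key: str, val: str) -> str:
    """Replace the whole [...] list assigned to key, even across lines."""
    pat = re.compile(rf'{key}\s*=\s*\[[^\]]*\]', flags=re.S)
    return pat.sub(f'{key} = {val}', conf)

# An empty list spanning two lines gets rewritten in place.
conf = 'default_sysctls = [\n]\n'
print(update_list_key(
    conf, 'default_sysctls',
    '[ "net.ipv4.ping_group_range=0 2147483647", ]'))
```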

Change-Id: Ia1b3dee3979841e798cec11c02ba1412dccef6c2
2022-12-02 08:44:12 +01:00
Zuul
a6494044ff Merge "Fix docker group name" 2022-11-24 14:13:30 +00:00
Yasufumi Ogawa
a7295a5201 Fix prompting while adding apt repos
Fix the devstack installation so that apt-add-repository no longer
prompts interactively while installing crio.

Signed-off-by: Yasufumi Ogawa <yasufum.o@gmail.com>
Change-Id: I66d69d5df254af027baf1d359130d4423fe3c4a9
2022-11-24 06:47:10 +00:00
Martin André
b648421624 Fix docker group name
devstack-plugin-container wrongfully assumes that the stack user name
is also the name of the group under which the docker daemon is
installed. This can cause devstack to install docker in such a way
that the stack user does not have permission to access the docker
socket, as seen in [3].

[1] https://opendev.org/openstack/devstack-plugin-container
[2] https://github.com/openstack/devstack-plugin-container/blob/f09c5c9/devstack/lib/docker#L27
[3] https://github.com/gophercloud/gophercloud/pull/2380#issuecomment-1094295137
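The fix is to resolve the stack user's actual primary group instead of reusing the user name; a sketch of the `id` invocation the patch relies on (the user name here is illustrative):

```shell
# A user's primary group name can differ from the user name, so ask id
# for it explicitly instead of assuming they match.
STACK_USER=root   # illustrative; devstack uses its own stack user
STACK_GROUP="$(id --group --name "$STACK_USER")"
echo "$STACK_GROUP"
```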

Closes-Bug: 1970129
Change-Id: Id5f1fa24ebb09db10f0d56e4d6b111be66869b5a
2022-04-24 21:42:40 +02:00
Zuul
b323f5b71a Merge "Docker and kubernetes package installation on CentosStream" 2022-03-28 09:00:53 +00:00
yangjianfeng
f935202d39 Support config pause image for crio
In some places with restricted network environments, crio can't
pull images from k8s.gcr.io. This patch adds a variable
`CRIO_PAUSE_IMAGE` so that developers located in these places can
point crio at a pause container image repository they can access.

The `CRIO_PAUSE_COMMAND` variable configures crio's `pause_command`
(the pause container's bootstrap command), so that developers can
use a customized pause image.
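In local.conf this looks roughly like the following (the mirror image reference is a placeholder, not a project default):

```ini
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
# Hypothetical reachable mirror of the pause image.
CRIO_PAUSE_IMAGE="registry.example.com/pause:3.6"
CRIO_PAUSE_COMMAND="/pause"
```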

Change-Id: Ib0d4c42870d40ef583546758513a36b906c7663b
2022-03-22 12:39:51 +08:00
yangjianfeng
90b4089cda Support config image repository for kubeadm
In some places with restricted network environments, kubeadm
can't pull images from k8s.gcr.io. This patch adds a variable
`KUBEADMIN_IMAGE_REPOSITORY` so that developers located in these
places can point kubeadm at a container image repository they can
access.
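A local.conf sketch for this variable (the mirror URL is a placeholder):

```ini
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
# Hypothetical reachable mirror of the k8s images.
KUBEADMIN_IMAGE_REPOSITORY="registry.example.com/k8s"
```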

Change-Id: I14aed50077ef0760635e575770fd2274cb759c53
2022-03-20 11:54:26 +08:00
Ashutosh
f09c5c9342 Docker and kubernetes package installation on CentosStream
Change-Id: Icafab048c43c6591c6cdafb13f34ed1f40258f22
2022-03-04 04:36:29 +00:00
Roman Dobosz
4759935527 Allow ICMP between pods for CRI-O.
By default, CRI-O doesn't allow ICMP traffic between pods, or
between pods and the host. Having that ability is convenient for
testing and debugging purposes.

This patch adds the appropriate configuration to crio.conf, along
with a setting to disable it if needed.
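Per the variable added in devstack/lib/crio, ICMP is allowed by default and can be turned off in local.conf:

```ini
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
CRIO_ALLOW_ICMP=False
```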

Change-Id: I1133815d9cbce311313bff7a219a9b3939390660
2021-11-17 09:45:20 +01:00
Zuul
718e0e9521 Merge "Provide right path to the runc binary for Ubuntu and CRI-O installation." 2021-11-03 10:09:04 +00:00
Roman Dobosz
bd98565f99 Provide right path to the runc binary for Ubuntu and CRI-O installation.
There are also two new configuration options introduced:

- CNI_PLUGIN_DIR
- CNI_CONF_DIR

which, if defined, are used to configure the crio paths for plugins
and network configs.
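A local.conf sketch using these options (the paths shown mirror the plugin's CNI defaults elsewhere and are illustrative):

```ini
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
CNI_PLUGIN_DIR=/opt/cni/bin
CNI_CONF_DIR=/etc/cni/net.d
```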

Change-Id: Ica4277b06740f8dca3ff5be77432cf6ab2f3cdeb
2021-11-02 17:04:16 +01:00
Martin Kopec
09ff9080a1 Bump min tox version to 3.18.0
Let's bump the minimal tox version so that we can rename the
whitelist_externals option to allowlist_externals.

https://tox.wiki/en/latest/changelog.html#v3-18-0-2020-07-23
Change-Id: I0be6023da2c0b720728ce62a0eb91930c7a5cd28
2021-10-07 08:58:28 +00:00
Roman Dobosz
d4de1bb990 Change repos from projectatomic to kubic OBS project.
Since the projectatomic Ubuntu builds are deprecated, and the advice
was to consult the upstream documentation[1], Kubernetes with cri-o
now relies on the Kubic project, which (among others) provides
packages for Ubuntu 20.04. Let us switch to those.

[1] https://kubernetes.io/docs/setup/production-environment/container-runtimes/#cri-o

Change-Id: Ib06753d22f8859eefedc031094851b052f4105b6
2021-01-25 13:32:40 +01:00
Ghanshyam Mann
74bf39e6a6 Migrate devstack-plugin-container jobs to focal
As per the Victoria cycle testing runtime and community goal[1],
we need to migrate upstream CI/CD to Ubuntu Focal (20.04).

Tempest-based jobs will be migrated automatically once the devstack
base job starts running on Focal (Depends-On). This commit migrates
the devstack-plugin-container job to run on Focal.

Depends-On: https://review.opendev.org/#/c/734700

[1] https://governance.openstack.org/tc/goals/selected/victoria/migrate-ci-cd-jobs-to-ubuntu-focal.html

Change-Id: I1a3ac070027805691fc1007458ac02567f847ae9
2020-09-13 04:05:37 +00:00
Hongbin Lu
9620216b35 Tolerate non-existence of the cni config file
Change-Id: I761bf9344651ec196471ca57bf0b29184a69e161
2020-05-05 01:26:18 +00:00
Zuul
f5983f3c02 Merge "Configure kata runtime for containerd" 2020-05-01 00:14:28 +00:00
Ghanshyam Mann
26e3a3dcdd [ussuri][goal] Update contributor documentation
This patch updates/adds the contributor documentation to follow
the guidelines of the Ussuri cycle community goal[1].

[1] https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html
Story: #2007236
Task: #38554

Change-Id: I5edcc88c9b9adb535597c7850aa3cd05f32ed811
2020-04-25 23:09:13 +00:00
Hongbin Lu
129c4e89ee Add bashate job
Change-Id: I74d09678958ad5e5dec4cbacb450973a31fcf9ba
2020-04-25 22:58:54 +00:00
Hongbin Lu
dc944062c3 Configure kata runtime for containerd
Change-Id: I9d9d223effcaa94d0b1b25210a24aaa313353f05
2020-04-12 00:27:23 +00:00
Hongbin Lu
401029e617 Fix https://review.opendev.org/#/c/705361/
We need to configure the CNI plugin first, then configure and restart
containerd. Previously, the order was reversed, so the CNI config
was not picked up.

Change-Id: I1c0e753b19289c339e44f288cae02d7ee2957da6
2020-02-22 21:20:48 +00:00
Hongbin Lu
d80ff940e1 Support enabling CRI for containerd
Installing docker installs the CRI plugin for containerd.
This commit supports enabling the CRI-containerd plugin.
By default, this is disabled.
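Enabling it in local.conf (it defaults to False):

```ini
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
ENABLE_CONTAINERD_CRI=True
```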

Change-Id: Ica8d5f91ae77d1d6599bfadc4031552016ad8953
2020-02-10 03:31:11 +00:00
Zuul
ac7cd2f4a5 Merge "Add Kubernetes job" 2019-08-28 16:33:40 +00:00
Hongbin Lu
4ea3481486 Add support for kata container
Change-Id: I8de21dd0317734711ba3778c241a428f0325ea85
2019-07-08 05:28:19 +00:00
Hongbin Lu
735bde961d Add Kubernetes job
Change-Id: I2c75c81521ed8a53627119b231f526508154e34d
2019-06-17 02:22:33 +00:00
Hongbin Lu
d9b045050c Convert legacy job to zuul v3 native
The CI job configuration was auto-converted from legacy job in before.
This commit convert the job to zuul v3 native format.

Change-Id: I591ca197b6860db31e76fc7af3547ff4a92b2a55
2019-06-15 21:18:51 +00:00
Hongbin Lu
d7a72df7f4 Make the docker job voting
The job installs Docker and uses a basic scenario to verify that
Docker is working properly. The job has been quite stable so far.
This commit changes the job from non-voting to voting.

Change-Id: I7da8471fc9b3b362bf6502f379b60cfeb2a9ad92
2019-05-11 15:50:12 +00:00
Hongbin Lu
80f8d7f260 Support k8s installation
Add support for installing a kubernetes cluster via devstack.
It uses kubeadm to bootstrap the k8s cluster.

Change-Id: I7877ceda08bbdab807116a13d74ff884136dc501
2019-05-07 03:57:24 +00:00
Le Hou
f89832fe0f Use opendev repository
Change-Id: Ie2e20d0d185f58e9234d59264ee213e34e7714a1
2019-04-23 17:55:02 +08:00
OpenDev Sysadmins
42688a1d36 OpenDev Migration Patch
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:

http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
2019-04-19 19:42:34 +00:00
Ian Wienand
dd0b868162 Replace openstack.org git:// URLs with https://
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.

This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.

This update should result in no functional change.

For more information see the thread at

 http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html

Change-Id: I22b7533894aae3f217b183a6c8d89221c02dd7aa
2019-03-24 20:33:29 +00:00
Michał Dulko
f896c23116 Support cri-o in CentOS and Fedora
This commit adds support for installing cri-o as the container engine
on CentOS and Fedora. Tested on CentOS 7.6 and Fedora 28.

Change-Id: I0e10e06156e02397b5cd64efe802869d0e96b231
2019-02-05 19:57:59 +01:00
Feng Shengqin
b8ff250e97 Configure the docker daemon for IPv6
Change-Id: If190af800a8c28e2cf4ae320a770c40847cd18e6
2019-01-29 09:44:38 +08:00
Zuul
7e44a59c1e Merge "Skip linux-image-extra-$(uname -r) on 18.04" 2018-11-30 10:14:08 +00:00
Michał Dulko
63c7b8eddc Add support for CRI-O as container engine
This commit adds support for CRI-O. Support for Fedora/CentOS is in
progress.

Change-Id: Ib049d66058429e499f5d0932c4a749820bec73ff
2018-11-29 09:33:32 +01:00
Michał Dulko
17a865e064 Skip linux-image-extra-$(uname -r) on 18.04
It seems the aforementioned package is not available on Ubuntu 18.04
(Bionic). This commit excludes it from the Docker installation on
that release.

Change-Id: Ib1864497dd19caadf9077386ce278712e4f5de8f
2018-11-27 19:47:33 +01:00
Andreas Jaeger
6d65af2900 Import legacy job
Import legacy job from openstack-zuul-jobs.

Change-Id: I5c28ce42606dc96d7df179a46e55abe453f93fe8
2018-09-09 06:47:12 +02:00
Doug Hellmann
34f6e6fd42 import zuul job settings from project-config
This is a mechanically generated patch to complete step 1 of moving
the zuul job settings out of project-config and into each project
repository.

Because there will be a separate patch on each branch, the branch
specifiers for branch-specific jobs have been removed.

Because this patch is generated by a script, there may be some
cosmetic changes to the layout of the YAML file(s) as the contents are
normalized.

See the python3-first goal document for details:
https://governance.openstack.org/tc/goals/stein/python3-first.html

Change-Id: I31bc574b8f66f4fc483c3758e787886fd49d4843
Story: #2002586
Task: #24327
2018-09-08 22:51:18 -04:00
28 changed files with 1070 additions and 227 deletions

.gitignore (new file, 1 line)

@@ -0,0 +1 @@
.tox

.gitreview

@@ -2,4 +2,4 @@
host=review.opendev.org
port=29418
project=openstack/devstack-plugin-container.git
defaultbranch=stable/rocky
defaultbranch=stable/2025.1

.zuul.yaml

@@ -1,19 +1,61 @@
- job:
name: devstack-plugin-container-dsvm
parent: legacy-dsvm-base
parent: devstack
pre-run: playbooks/devstack-plugin-container-dsvm/pre.yaml
run: playbooks/devstack-plugin-container-dsvm/run.yaml
post-run: playbooks/devstack-plugin-container-dsvm/post.yaml
timeout: 4200
required-projects:
- openstack/devstack
- openstack/devstack-gate
- openstack/devstack-plugin-container
vars:
devstack_localrc:
USE_PYTHON3: true
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
- job:
name: devstack-plugin-container-k8s
parent: devstack-minimal
nodeset: openstack-two-node-noble
pre-run: playbooks/devstack-plugin-container-k8s/pre.yaml
run: playbooks/devstack-plugin-container-k8s/run.yaml
post-run: playbooks/devstack-plugin-container-k8s/post.yaml
timeout: 7200
required-projects:
- openstack/devstack
- openstack/devstack-plugin-container
vars:
devstack_services:
# Ignore any default set by devstack. Emit a "disable_all_services".
base: false
etcd3: true
container: true
k8s-master: true
devstack_localrc:
K8S_TOKEN: "9agf12.zsu5uh2m4pzt3qba"
USE_PYTHON3: true
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
group-vars:
subnode:
devstack_services:
# Ignore any default set by devstack. Emit a "disable_all_services".
base: false
container: true
k8s-node: true
devstack_localrc:
K8S_TOKEN: "9agf12.zsu5uh2m4pzt3qba"
USE_PYTHON3: true
- project:
check:
jobs:
- devstack-plugin-container-dsvm:
- openstack-tox-bashate
- devstack-plugin-container-dsvm
- devstack-plugin-container-k8s:
voting: false
gate:
jobs:
- noop
- openstack-tox-bashate
- devstack-plugin-container-dsvm

CONTRIBUTING.rst (new file, 19 lines)

@@ -0,0 +1,19 @@
The source repository for this project can be found at:
https://opendev.org/openstack/devstack-plugin-container
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on Launchpad:
https://bugs.launchpad.net/devstack
For more specific information about contributing to this repository, see the
Devstack contributor guide:
https://docs.openstack.org/devstack/latest/contributor/contributing.html

README.rst

@@ -2,8 +2,8 @@
Container Plugin
================
This plugin enables installation of container engine on Devstack. The default
container engine is Docker (currently this plugin supports only Docker!).
This plugin enables installation of container engine and Kubernetes on
Devstack. The default container engine is Docker.
====================
Enabling in Devstack
@@ -21,11 +21,59 @@ For more info on devstack installation follow the below link:
2. Add this repo as an external repository
------------------------------------------
This plugin supports installing Kubernetes or just a container engine.
To install only a container engine, use the following config:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
END
To install Kata Containers, use the following config:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
ENABLE_KATA_CONTAINERS=True
END
To install Kubernetes, use the following config on the master node:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
enable_service etcd3
enable_service container
enable_service k8s-master
# kubeadm token generate
K8S_TOKEN="9agf12.zsu5uh2m4pzt3qba"
...
END
And use the following config on the worker node:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
SERVICE_HOST=10.0.0.11 # change this to controller's IP address
enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
enable_service container
enable_service k8s-node
# kubeadm token generate
K8S_TOKEN="9agf12.zsu5uh2m4pzt3qba"
...
END
3. Run devstack

gate_hook.sh (deleted)

@@ -1,21 +0,0 @@
#!/bin/bash -x
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# This script is executed inside gate_hook function in devstack gate.
# Keep all devstack settings here instead of project-config for easy
# maintain if we want to change devstack config settings in future.
$BASE/new/devstack-gate/devstack-vm-gate.sh

post_test_hook.sh (deleted)

@@ -1,48 +0,0 @@
#!/bin/bash -x
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside post_test_hook function in devstack gate.
# Sleep some time until all services are starting
sleep 5
# Check if a function already exists
function function_exists {
declare -f -F $1 > /dev/null
}
if ! function_exists echo_summary; then
function echo_summary {
echo $@
}
fi
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set -o xtrace
echo_summary "Devstack-plugin-container's post_test_hook.sh was called..."
(set -o posix; set)
# Verify that Docker is installed correctly by running the hello-world image
sudo -H -u stack docker run hello-world
EXIT_CODE=$?
# Copy over docker systemd unit journals.
mkdir -p $WORKSPACE/logs
sudo journalctl -o short-precise --unit docker | sudo tee $WORKSPACE/logs/docker.txt > /dev/null
$XTRACE
exit $EXIT_CODE

devstack/lib/cni/plugins (new file, 94 lines)

@@ -0,0 +1,94 @@
#!/bin/bash
#
# lib/cni/plugins
# Common CNI plugins functions
# Dependencies:
# ``functions`` file
# ``STACK_USER`` has to be defined
# Save trace setting
_XTRACE_CONTAINER_CNI_PLUGINS=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
CNI_PLUGINS_BIN_DIR=/opt/cni/bin
# install all plugins by default
CNI_PLUGINS_INSTALL_PLUGINS=${CNI_PLUGINS_INSTALL_PLUGINS:-flannel,ptp,host-local,portmap,tuning,vlan,host-device,sample,dhcp,ipvlan,macvlan,loopback,bridge}
CNI_PLUGINS_CONF_SOURCE_DIR=${CNI_PLUGINS_CONF_SOURCE_DIR:-$DEST/devstack-plugin-container/etc/cni/net.d}
CNI_PLUGINS_CONF_DIR=${CNI_PLUGINS_CONF_DIR:-/etc/cni/net.d}
CNI_PLUGINS_VERSION=${CNI_PLUGINS_VERSION:-v0.7.1}
CNI_PLUGINS_SHA256_AMD64=${CNI_PLUGINS_SHA256_AMD64:-"6ecc5c7dbb8e4296b0d0d017e5440618e19605b9aa3b146a2c29af492f299dc7"}
CNI_PLUGINS_SHA256_ARM64=${CNI_PLUGINS_SHA256_ARM64:-"258080b94bfc54bd54fd0ea7494efc31806aa4b2836ba3f2d189e0fc16fab0ef"}
CNI_PLUGINS_SHA256_PPC64=${CNI_PLUGINS_SHA256_PPC64:-"a515c45a52e752249bb0e9feac1654c5d38974df6a36148778f6eeab9826f706"}
CNI_PLUGINS_SHA256_S390X=${CNI_PLUGINS_SHA256_S390X:-"24e31be69a012395f1026cd37d125f5f81001cfc36434d8f7a17b36bc5f1e6ad"}
# Make sure CNI plugins downloads the correct architecture
if is_arch "x86_64"; then
CNI_PLUGINS_ARCH="amd64"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_AMD64}
elif is_arch "aarch64"; then
CNI_PLUGINS_ARCH="arm64"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_ARM64}
elif is_arch "ppc64le"; then
CNI_PLUGINS_ARCH="ppc64le"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_PPC64}
elif is_arch "s390x"; then
CNI_PLUGINS_ARCH="s390x"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_S390X}
else
exit_distro_not_supported "invalid hardware type"
fi
CNI_PLUGINS_DOWNLOAD_URL=${CNI_PLUGINS_DOWNLOAD_URL:-https://github.com/containernetworking/plugins/releases/download}
CNI_PLUGINS_DOWNLOAD_FILE=cni-plugins-$CNI_PLUGINS_ARCH-$CNI_PLUGINS_VERSION.tgz
CNI_PLUGINS_DOWNLOAD_LOCATION=$CNI_PLUGINS_DOWNLOAD_URL/$CNI_PLUGINS_VERSION/$CNI_PLUGINS_DOWNLOAD_FILE
# Installs standard cni plugins.
function install_cni_plugins {
echo "Installing CNI standard plugins"
# Download and cache the cni plugins tgz for subsequent use
local cni_plugins_file
cni_plugins_file="$(get_extra_file $CNI_PLUGINS_DOWNLOAD_LOCATION)"
if [ ! -d "$FILES/cniplugins" ]; then
echo "${CNI_PLUGINS_SHA256} $cni_plugins_file" > $FILES/cniplugins.sha256sum
# remove the damaged file when checksum fails
sha256sum -c $FILES/cniplugins.sha256sum || (sudo rm -f $cni_plugins_file; exit 1)
mkdir $FILES/cniplugins
tar xzvf $cni_plugins_file -C $FILES/cniplugins
fi
for plugin in ${CNI_PLUGINS_INSTALL_PLUGINS//,/ }; do
if [ $(ls $FILES/cniplugins/$plugin 2> /dev/null) ]; then
echo "Install plugin: $plugin"
sudo install -o "$STACK_USER" -m 0555 -D "$FILES/cniplugins/$plugin" \
"$CNI_PLUGINS_BIN_DIR/$plugin"
else
echo "Skip installing plugin: $plugin"
fi
done
}
# Configure cni plugins.
function configure_cni_plugins {
echo "Configuring CNI plugins"
for plugin in ${CNI_PLUGINS_INSTALL_PLUGINS//,/ }; do
local source_config_file
source_config_file=$(ls ${CNI_PLUGINS_CONF_SOURCE_DIR}/*${plugin}.conf 2> /dev/null || true)
if [ $source_config_file ]; then
echo "Found config file for plugin: $plugin"
sudo install -o "$STACK_USER" -m 0664 -t "$CNI_PLUGINS_CONF_DIR" -D \
"${source_config_file}"
else
echo "Config file not found for plugin: $plugin"
fi
done
}
# Restore xtrace
$_XTRACE_CONTAINER_CNI_PLUGINS

devstack/lib/crio (new file, 219 lines)

@@ -0,0 +1,219 @@
#!/bin/bash
# Dependencies:
#
# - functions
# stack.sh
# ---------
# - check_crio
# - install_crio
# - configure_crio
# - stop_crio
# Save trace setting
_XTRACE_DOCKER=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
CRIO_ENGINE_SOCKET_FILE=${CRIO_ENGINE_SOCKET_FILE:-/var/run/crio/crio.sock}
CRIO_ALLOW_ICMP=$(trueorfalse True CRIO_ALLOW_ICMP)
# Functions
# ---------
function check_crio {
if is_ubuntu; then
dpkg -l | grep cri-o > /dev/null 2>&1
else
false
# TODO: CentOS/Fedora support.
fi
}
function install_crio {
if [[ -z "$os_PACKAGE" ]]; then
GetOSVersion
fi
local lsb_dist=${os_VENDOR,,}
if is_ubuntu; then
local stream="https://pkgs.k8s.io/addons:/cri-o:/stable:/v${CRIO_VERSION%.*}"
local key_path="/etc/apt/keyrings/cri-o-apt-keyring.gpg"
apt_get install apt-transport-https ca-certificates \
software-properties-common curl
curl -fsSL "${stream}/deb/Release.key" | sudo gpg --dearmor -o "${key_path}"
echo "deb [signed-by=${key_path}] ${stream}/deb/ /" | \
sudo tee /etc/apt/sources.list.d/cri-o.list
# Installing podman and buildah will get us compatible versions of
# cri-o. And we need podman to manage container images anyway.
REPOS_UPDATED=False apt_get_update
crio_pkg_version=$(sudo apt-cache show cri-o | grep "Version: $CRIO_VERSION-" | awk '{ print $2 }' | head -n 1)
apt_get install podman buildah cri-o="${crio_pkg_version}"
sudo systemctl enable crio
elif is_fedora; then
if [[ "$lsb_dist" = "centos" ]]; then
sudo yum-config-manager \
--add-repo \
https://cbs.centos.org/repos/virt7-container-common-candidate/x86_64/os/
sudo yum-config-manager \
--add-repo \
https://cbs.centos.org/repos/paas7-crio-311-candidate/x86_64/os/
fi
if [[ "${os_VENDOR}" == *'Stream' ]]; then
local stream="_Stream"
fi
# NOTE: Not all crio versions are supported on CentOS 8 Stream,
# because the crio rpm is not present for some minor versions
sudo yum-config-manager \
--add-repo \
"https://download.opensuse.org/repositories/"`
`"devel:/kubic:/libcontainers:/stable:/cri-o:/${CRIO_VERSION}/"`
`"CentOS_${os_RELEASE}${stream}/"`
`"devel:kubic:libcontainers:stable:cri-o:${CRIO_VERSION}.repo"
yum_install cri-o podman buildah
fi
}
function configure_crio {
# After an ./unstack it will be stopped. So it is ok if it returns exit-code == 1
sudo systemctl stop crio.service || true
export CRIO_CONF="/etc/crio/crio.conf"
# We're wrapping values in \"<val>\" because that's the format cri-o wants.
iniset -sudo ${CRIO_CONF} crio.api listen \"${CRIO_ENGINE_SOCKET_FILE}\"
iniset -sudo ${CRIO_CONF} crio.image pause_image \"${CRIO_PAUSE_IMAGE}\"
iniset -sudo ${CRIO_CONF} crio.image pause_command \"${CRIO_PAUSE_COMMAND}\"
if [[ "$ENABLE_DEBUG_LOG_LEVEL" == "True" ]]; then
# debug is way too verbose, info will be enough
iniset -sudo ${CRIO_CONF} crio.runtime log_level \"info\"
fi
if is_ubuntu; then
local crio_minor=${CRIO_VERSION#*.}
# At least for 18.04 we need to set up /etc/containers/registries.conf
# with some initial content. That's another bug with that PPA.
local registries_conf
registries_conf="/etc/containers/registries.conf"
if [[ ! -f ${registries_conf} && $crio_minor -lt 24 ]]; then
sudo mkdir -p `dirname ${registries_conf}`
cat << EOF | sudo tee ${registries_conf}
[registries.search]
registries = ['docker.io']
EOF
else
# If there is a config file, that means we are probably on a
# newer version of crio/container/podman, which means we cannot
# mix a populated [registries.search] registries list with the
# unqualified-search-registries setting that appears in the
# sysregistry v2 config syntax. And because it is TOML now, we
# cannot rely on iniset, but must change the file directly.
local rname='unqualified-search-registries'
local rval='["docker.io", "quay.io"]'
if [[ ! -f ${registries_conf} ]]; then
cat << EOF | sudo tee ${registries_conf}
unqualified-search-registries = ["docker.io", "quay.io"]
EOF
elif grep -wq "^${rname}" "${registries_conf}"; then
sudo sed -i -e \
"s/^${rname}.*$/${rname} = ${rval}/" "${registries_conf}"
else
sudo sed -i "1s/^/${rname} = ${rval}\n/" "${registries_conf}"
fi
fi
# CRI-O from the kubic repo places runc in a different location, not
# even in PATH, so as not to conflict with the runc package from the
# official repo. We need to point crio at it.
iniset -sudo ${CRIO_CONF} crio.runtime.runtimes.runc runtime_path \
\"/usr/lib/cri-o-runc/sbin/runc\"
if [ -n "${CNI_CONF_DIR}" ]; then
iniset -sudo ${CRIO_CONF} crio.network network_dir \
\"${CNI_CONF_DIR}\"
fi
if [ -n "${CNI_PLUGIN_DIR}" ]; then
iniset -sudo ${CRIO_CONF} crio.network plugin_dir \
\"${CNI_PLUGIN_DIR}\"
fi
# By default CRI-O doesn't allow ICMP between containers, although it
# is usually expected for testing purposes.
if [ "${CRIO_ALLOW_ICMP}" == "True" ]; then
if grep -wq '^default_sysctls' ${CRIO_CONF}; then
export CRIO_KEY="default_sysctls"
export CRIO_VAL='[ "net.ipv4.ping_group_range=0 2147483647", ]'
_update_config
else
iniset -sudo ${CRIO_CONF} crio.runtime default_sysctls \
'[ "net.ipv4.ping_group_range=0 2147483647", ]'
fi
fi
elif is_fedora; then
local lsb_dist=${os_VENDOR,,}
if [[ "$lsb_dist" = "centos" ]]; then
# CentOS packages are putting runc binary in different place...
iniset -sudo ${CRIO_CONF} crio.runtime runtime \"/usr/sbin/runc\"
# CentOS version seems to only work with cgroupfs...
iniset -sudo ${CRIO_CONF} crio.runtime cgroup_manager \"cgroupfs\"
fi
fi
sudo systemctl --no-block restart crio.service
}
function stop_crio {
sudo systemctl stop crio.service || true
}
function _update_config {
sudo -E python3 - <<EOF
"""
Update the list-valued key named by CRIO_KEY in the CRI-O configuration
with the CRIO_VAL value. The existing list may span multiple lines:
some_key = [ some,
             value
]
or be an empty list:
some_key = [
]
Note: CRIO_VAL must include the square brackets.
"""
import os
import re
crio_key = os.environ.get('CRIO_KEY')
crio_val = os.environ.get('CRIO_VAL')
crio_conf = os.environ.get('CRIO_CONF')
pat = re.compile(rf'{crio_key}\s*=\s*\[[^\]]*\]', flags=re.S | re.M)
with open(crio_conf) as fobj:
conf = fobj.read()
with open(crio_conf, 'w') as fobj:
search = pat.search(conf)
if search:
start, end = search.span()
conf = conf[:start] + f'{crio_key} = {crio_val}' + conf[end:]
fobj.write(conf)
EOF
}
# Restore xtrace
$_XTRACE_DOCKER
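The registry-update branch above can be exercised on its own. A minimal sketch, assuming a throwaway temp file instead of the real /etc/containers/registries.conf (and a size check instead of the existence check, since mktemp creates an empty file):

```shell
# Idempotent sketch of the unqualified-search-registries update: create,
# rewrite in place, never duplicate the key.
registries_conf=$(mktemp)
rname='unqualified-search-registries'
rval='["docker.io", "quay.io"]'
update_registries() {
    if [ ! -s "${registries_conf}" ]; then
        # empty file: write the key
        printf '%s = %s\n' "${rname}" "${rval}" > "${registries_conf}"
    elif grep -wq "^${rname}" "${registries_conf}"; then
        # key already present: rewrite the whole line
        sed -i -e "s/^${rname}.*$/${rname} = ${rval}/" "${registries_conf}"
    else
        # other content only: prepend the key
        sed -i "1s/^/${rname} = ${rval}\n/" "${registries_conf}"
    fi
}
update_registries
update_registries
echo "count=$(grep -c "^${rname}" "${registries_conf}")"   # count=1
```

Running the update twice leaves exactly one copy of the key, which is the property the grep/sed split is there to guarantee.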


@@ -24,19 +24,29 @@ set +o xtrace
DOCKER_ENGINE_SOCKET_FILE=${DOCKER_ENGINE_SOCKET_FILE:-/var/run/docker.sock}
DOCKER_ENGINE_PORT=${DOCKER_ENGINE_PORT:-2375}
DOCKER_CLUSTER_STORE=${DOCKER_CLUSTER_STORE:-}
DOCKER_GROUP=${DOCKER_GROUP:-$STACK_USER}
STACK_GROUP="$( id --group --name "$STACK_USER" )"
DOCKER_GROUP=${DOCKER_GROUP:-$STACK_GROUP}
DOCKER_CGROUP_DRIVER=${DOCKER_CGROUP_DRIVER:-}
# TODO(hongbin): deprecate and remove clear container
ENABLE_CLEAR_CONTAINER=$(trueorfalse False ENABLE_CLEAR_CONTAINER)
ENABLE_KATA_CONTAINERS=$(trueorfalse False ENABLE_KATA_CONTAINERS)
ENABLE_CONTAINERD_CRI=$(trueorfalse False ENABLE_CONTAINERD_CRI)
ENABLE_LIVE_RESTORE=$(trueorfalse False ENABLE_LIVE_RESTORE)
ENABLE_IPV6=$(trueorfalse False ENABLE_IPV6)
KATA_BRANCH=${KATA_BRANCH:-master}
KATA_RUNTIME=${KATA_RUNTIME:-kata-runtime}
CONTAINERD_CONF_DIR=/etc/containerd
CONTAINERD_CONF=$CONTAINERD_CONF_DIR/config.toml
# Functions
# ---------
function check_docker {
if is_ubuntu; then
dpkg -s docker-engine > /dev/null 2>&1 || dpkg -s docker-ce > /dev/null 2>&1
else
rpm -q docker-engine > /dev/null 2>&1 || rpm -q docker > /dev/null 2>&1 || rpm -q docker-ce > /dev/null 2>&1
fi
}
@@ -47,8 +57,12 @@ function install_docker {
local lsb_dist=${os_VENDOR,,}
local dist_version=${os_CODENAME}
local arch=$(dpkg --print-architecture)
if [[ "$lsb_dist" != "centosstream" ]]; then
local arch
arch=$(dpkg --print-architecture)
fi
if is_ubuntu; then
apt_get install apparmor
if [[ ${dist_version} == 'trusty' ]]; then
if uname -r | grep -q -- '-generic' && dpkg -l 'linux-image-*-generic' | grep -qE '^ii|^hi' 2>/dev/null; then
apt_get install linux-image-extra-$(uname -r) linux-image-extra-virtual
@@ -63,12 +77,27 @@ function install_docker {
${dist_version} \
stable"
REPOS_UPDATED=False apt_get_update
apt_get install docker-ce
if [ -n "${UBUNTU_DOCKER_VERSION}" ]; then
apt_get install docker-ce=$UBUNTU_DOCKER_VERSION
else
apt_get install docker-ce
fi
elif is_fedora; then
if [[ "$lsb_dist" = "centos" ]]; then
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
elif [[ "$lsb_dist" = "centosstream" ]]; then
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
sudo yum-config-manager \
--add-repo \
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 #noqa
sudo yum-config-manager \
--enable \
packages.cloud.google.com_yum_repos_kubernetes-el7-x86_64
sudo dnf -y install kubeadm --nogpgcheck
elif [[ "$lsb_dist" = "fedora" ]]; then
sudo dnf config-manager \
--add-repo \
@@ -76,9 +105,23 @@ function install_docker {
fi
yum_install docker-ce
fi
if [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
if [[ "$ENABLE_KATA_CONTAINERS" == "True" ]]; then
# Kata Containers can't run inside a VM, so check whether virtualization
# is enabled
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
if is_ubuntu; then
install_kata_container_ubuntu
elif is_fedora; then
install_kata_container_fedora
fi
else
(>&2 echo "WARNING: Kata Containers needs the svm or vmx CPU extension, which is not enabled. Skipping Kata Containers installation.")
fi
# TODO(hongbin): deprecate and remove clear container
elif [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
# Clear Container can't run inside a VM, so check whether virtualization
# is enabled
(>&2 echo "WARNING: Clear Container support is deprecated in Train release and will be removed in U release.")
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
if is_ubuntu; then
install_clear_container_ubuntu
@@ -89,9 +132,27 @@ function install_docker {
(>&2 echo "WARNING: Clear Container needs the svm or vmx CPU extension, which is not enabled. Skipping Clear Container installation.")
fi
fi
if [[ "$ENABLE_CONTAINERD_CRI" == "True" ]]; then
source $DEST/devstack-plugin-container/devstack/lib/cni/plugins
install_cni_plugins
source $DEST/devstack-plugin-container/devstack/lib/tools/crictl
install_crictl
fi
}
function configure_docker {
if [[ ${ENABLE_CONTAINERD_CRI} == "True" ]]; then
source $DEST/devstack-plugin-container/devstack/lib/cni/plugins
configure_cni_plugins
configure_containerd
source $DEST/devstack-plugin-container/devstack/lib/tools/crictl
configure_crictl
fi
# After an ./unstack it will be stopped. So it is ok if it returns exit-code == 1
sudo systemctl stop docker.service || true
@@ -100,7 +161,18 @@ function configure_docker {
cluster_store_opts+="\"cluster-store\": \"$DOCKER_CLUSTER_STORE\","
fi
local runtime_opts=""
if [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
if [[ "$ENABLE_KATA_CONTAINERS" == "True" ]]; then
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
runtime_opts+="\"runtimes\": {
\"$KATA_RUNTIME\": {
\"path\": \"/usr/bin/kata-runtime\"
}
},
\"default-runtime\": \"$KATA_RUNTIME\","
fi
# TODO(hongbin): deprecate and remove clear container
elif [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
(>&2 echo "WARNING: Clear Container support is deprecated in Train release and will be removed in U release.")
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
runtime_opts+="\"runtimes\": {
\"cor\": {
@@ -112,6 +184,7 @@ function configure_docker {
local docker_config_file=/etc/docker/daemon.json
local debug
local live_restore
local ipv6
if [[ "$ENABLE_DEBUG_LOG_LEVEL" == "True" ]]; then
debug=true
else
@@ -122,6 +195,11 @@ function configure_docker {
else
live_restore=false
fi
if [[ "$ENABLE_IPV6" == "True" ]]; then
ipv6=true
else
ipv6=false
fi
sudo mkdir -p $(dirname ${docker_config_file})
cat <<EOF | sudo tee $docker_config_file >/dev/null
{
@@ -129,6 +207,7 @@ function configure_docker {
$runtime_opts
"debug": ${debug},
"live-restore": ${live_restore},
"ipv6": ${ipv6},
"group": "$DOCKER_GROUP",
EOF
if [[ -n "$DOCKER_CGROUP_DRIVER" ]]; then
@@ -157,13 +236,45 @@ ExecStart=/usr/bin/dockerd --config-file=$docker_config_file
Environment="HTTP_PROXY=$http_proxy" "HTTPS_PROXY=$https_proxy" "NO_PROXY=$no_proxy"
EOF
sudo systemctl daemon-reload
sudo systemctl --no-block restart docker.service
sudo systemctl restart docker.service
}
function configure_containerd {
sudo mkdir -p $CONTAINERD_CONF_DIR
sudo chown -R $STACK_USER $CONTAINERD_CONF_DIR
stack_user_gid=$(getent group $STACK_USER | cut -d: -f3)
cat <<EOF | sudo tee $CONTAINERD_CONF >/dev/null
[grpc]
gid = $stack_user_gid
[debug]
level = "debug"
EOF
if [[ "$ENABLE_KATA_CONTAINERS" == "True" ]]; then
cat <<EOF | sudo tee -a $CONTAINERD_CONF >/dev/null
[plugins]
[plugins.cri]
[plugins.cri.containerd]
[plugins.cri.containerd.runtimes.${KATA_RUNTIME}]
runtime_type = "io.containerd.kata.v2"
EOF
fi
sudo systemctl --no-block restart containerd.service
}
function stop_docker {
sudo systemctl stop docker.service || true
}
function cleanup_docker {
uninstall_package docker-ce
rm -f $CONTAINERD_CONF
}
# TODO(hongbin): deprecate and remove clear container
function install_clear_container_ubuntu {
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/clearlinux:/preview:/clear-containers-2.1/xUbuntu_$(lsb_release -rs)/ /' >> /etc/apt/sources.list.d/cc-oci-runtime.list"
curl -fsSL http://download.opensuse.org/repositories/home:/clearlinux:/preview:/clear-containers-2.1/xUbuntu_$(lsb_release -rs)/Release.key | sudo apt-key add -
@@ -171,6 +282,7 @@ function install_clear_container_ubuntu {
apt_get install cc-oci-runtime
}
# TODO(hongbin): deprecate and remove clear container
function install_clear_container_fedora {
source /etc/os-release
local lsb_dist=${os_VENDOR,,}
@@ -182,5 +294,31 @@ function install_clear_container_fedora {
yum_install cc-oci-runtime linux-container
}
function install_kata_container_ubuntu {
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/xUbuntu_${os_RELEASE}/ /' \
> /etc/apt/sources.list.d/kata-containers.list"
curl -sL http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/xUbuntu_${os_RELEASE}/Release.key \
| sudo apt-key add -
REPOS_UPDATED=False apt_get_update
apt_get install kata-runtime kata-proxy kata-shim
}
function install_kata_container_fedora {
source /etc/os-release
if [[ -x $(command -v dnf 2>/dev/null) ]]; then
sudo dnf -y install dnf-plugins-core
sudo -E dnf config-manager --add-repo \
"http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/Fedora_${VERSION_ID}/home:katacontainers:releases:$(arch):${KATA_BRANCH}.repo"
elif [[ -x $(command -v yum 2>/dev/null) ]]; then
# all RH platforms (Fedora, CentOS, RHEL) have this pkg
sudo yum -y install yum-utils
sudo -E yum-config-manager --add-repo \
"http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/CentOS_${VERSION_ID}/home:katacontainers:releases:$(arch):${KATA_BRANCH}.repo"
else
die $LINENO "Unable to find or auto-install Kata Containers"
fi
yum_install kata-runtime kata-proxy kata-shim
}
# Restore xtrace
$_XTRACE_DOCKER
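The flag-to-JSON plumbing in configure_docker can be sketched in isolation; this assumes a temp file rather than the real /etc/docker/daemon.json and checks that the heredoc interpolation still yields valid JSON:

```shell
# Interpolate shell booleans into a heredoc the way configure_docker
# builds daemon.json, then prove the result parses as JSON.
debug=true
live_restore=false
ipv6=false
docker_config_file=$(mktemp)
cat <<EOF > "$docker_config_file"
{
    "debug": ${debug},
    "live-restore": ${live_restore},
    "ipv6": ${ipv6}
}
EOF
# json.load fails loudly if the interpolation produced invalid JSON
python3 -c "import json; cfg = json.load(open('$docker_config_file')); print(cfg['debug'])"
```

Because the shell variables hold the literal strings `true`/`false`, they land in the file as JSON booleans, not quoted strings.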

devstack/lib/k8s Normal file

@@ -0,0 +1,201 @@
#!/bin/bash
# Dependencies:
#
# - functions
# - ``STACK_USER`` must be defined
# stack.sh
# --------
# - install_k8s
# The following variables are assumed to be defined by certain functions:
#
# - ``http_proxy`` ``https_proxy`` ``no_proxy``
# Save trace setting
_XTRACE_DOCKER=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
K8S_TOKEN=${K8S_TOKEN:-""}
K8S_API_SERVER_IP=${K8S_API_SERVER_IP:-$SERVICE_HOST}
K8S_NODE_IP=${K8S_NODE_IP:-$HOST_IP}
K8S_API_SERVER_PORT=${K8S_API_SERVER_PORT:-6443}
K8S_POD_NETWORK_CIDR=${K8S_POD_NETWORK_CIDR:-10.244.0.0/16}
K8S_SERVICE_NETWORK_CIDR=${K8S_SERVICE_NETWORK_CIDR:-10.96.0.0/12}
K8S_VERSION=${K8S_VERSION:-"1.30.5"}
K8S_NETWORK_ADDON=${K8S_NETWORK_ADDON:-flannel}
# Functions
# ---------
function is_k8s_enabled {
[[ ,${ENABLED_SERVICES} =~ ,"k8s-" ]] && return 0
return 1
}
function install_kubeadm {
if is_ubuntu; then
local stream="https://pkgs.k8s.io/core:/stable:/v${K8S_VERSION%.*}"
local key_path="/etc/apt/keyrings/kubernetes-apt-keyring.gpg"
apt_get install apt-transport-https ca-certificates curl gpg
curl -fsSL "${stream}/deb/Release.key" | sudo gpg --dearmor -o "${key_path}"
echo "deb [signed-by=${key_path}] ${stream}/deb/ /" | \
sudo tee /etc/apt/sources.list.d/kubernetes.list
REPOS_UPDATED=False apt_get_update
kube_pkg_version=$(sudo apt-cache show kubeadm | grep "Version: $K8S_VERSION-" | awk '{ print $2 }' | head -n 1)
apt_get install kubelet="${kube_pkg_version}" kubeadm="${kube_pkg_version}" kubectl="${kube_pkg_version}"
sudo apt-mark hold kubelet kubeadm kubectl
# NOTE(hongbin): This works around an issue where kubelet picks the
# wrong IP address if the node has multiple network interfaces.
# See https://github.com/kubernetes/kubeadm/issues/203
echo "KUBELET_EXTRA_ARGS=--node-ip=$K8S_NODE_IP" | sudo tee -a /etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet
else
(>&2 echo "WARNING: kubeadm installation is not supported in this distribution.")
fi
}
function kubeadm_init {
local kubeadm_config_file
kubeadm_config_file=$(mktemp)
if [[ ${CONTAINER_ENGINE} == 'crio' ]]; then
CGROUP_DRIVER=$(iniget "/etc/crio/crio.conf" crio.runtime cgroup_manager)
CRI_SOCKET="unix:///var/run/crio/crio.sock"
else
# docker is used
CGROUP_DRIVER=$(docker info -f '{{.CgroupDriver}}')
CRI_SOCKET="/var/run/dockershim.sock"
fi
cat <<EOF | tee $kubeadm_config_file >/dev/null
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
imageRepository: "${KUBEADMIN_IMAGE_REPOSITORY}"
etcd:
external:
endpoints:
- "http://${SERVICE_HOST}:${ETCD_PORT}"
networking:
podSubnet: "${K8S_POD_NETWORK_CIDR}"
serviceSubnet: "${K8S_SERVICE_NETWORK_CIDR}"
---
apiVersion: kubeadm.k8s.io/v1beta3
bootstrapTokens:
- token: "${K8S_TOKEN}"
ttl: 0s
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: "${K8S_API_SERVER_IP}"
bindPort: ${K8S_API_SERVER_PORT}
nodeRegistration:
criSocket: "$CRI_SOCKET"
kubeletExtraArgs:
enable-server: "true"
taints:
[]
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
address: "0.0.0.0"
enableServer: true
cgroupDriver: $CGROUP_DRIVER
EOF
sudo kubeadm config images pull --image-repository=${KUBEADMIN_IMAGE_REPOSITORY}
sudo kubeadm init --config $kubeadm_config_file --ignore-preflight-errors Swap
local kube_config_file=$HOME/.kube/config
sudo mkdir -p $(dirname ${kube_config_file})
sudo cp /etc/kubernetes/admin.conf $kube_config_file
safe_chown $STACK_USER:$STACK_USER $kube_config_file
if [[ "$K8S_NETWORK_ADDON" == "flannel" ]]; then
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
fi
}
function kubeadm_join {
local kubeadm_config_file
kubeadm_config_file=$(mktemp)
if [[ ${CONTAINER_ENGINE} == 'crio' ]]; then
CGROUP_DRIVER=$(iniget "/etc/crio/crio.conf" crio.runtime cgroup_manager)
CRI_SOCKET="unix:///var/run/crio/crio.sock"
else
# docker is used
CGROUP_DRIVER=$(docker info -f '{{.CgroupDriver}}')
CRI_SOCKET="/var/run/dockershim.sock"
fi
cat <<EOF | tee $kubeadm_config_file >/dev/null
apiVersion: kubeadm.k8s.io/v1beta3
kind: JoinConfiguration
discovery:
bootstrapToken:
apiServerEndpoint: "${K8S_API_SERVER_IP}:${K8S_API_SERVER_PORT}"
token: "${K8S_TOKEN}"
unsafeSkipCAVerification: true
tlsBootstrapToken: "${K8S_TOKEN}"
nodeRegistration:
criSocket: "$CRI_SOCKET"
kubeletExtraArgs:
enable-server: "true"
taints:
[]
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
address: "0.0.0.0"
enableServer: true
cgroupDriver: $CGROUP_DRIVER
EOF
sudo kubeadm join --config $kubeadm_config_file --ignore-preflight-errors Swap
}
function start_collect_logs {
wait_for_kube_service 180 component=kube-controller-manager
wait_for_kube_service 60 component=kube-apiserver
wait_for_kube_service 30 component=kube-scheduler
wait_for_kube_service 30 k8s-app=kube-proxy
run_process kube-controller-manager "/usr/bin/kubectl logs -n kube-system -f -l component=kube-controller-manager"
run_process kube-apiserver "/usr/bin/kubectl logs -n kube-system -f -l component=kube-apiserver"
run_process kube-scheduler "/usr/bin/kubectl logs -n kube-system -f -l component=kube-scheduler"
run_process kube-proxy "/usr/bin/kubectl logs -n kube-system -f -l k8s-app=kube-proxy"
}
function wait_for_kube_service {
local timeout=$1
local selector=$2
local rval=0
time_start "wait_for_service"
timeout $timeout bash -x <<EOF || rval=$?
NAME=""
while [[ "\$NAME" == "" ]]; do
sleep 1
NAME=\$(kubectl wait --for=condition=Ready pod -n kube-system -l $selector -o name)
done
EOF
time_stop "wait_for_service"
# Figure out what's happening on platforms where this doesn't work
if [[ "$rval" != 0 ]]; then
echo "Didn't find kube service after $timeout seconds"
kubectl get pods -n kube-system -l $selector
fi
return $rval
}
function kubeadm_reset {
sudo kubeadm reset --force
}
# Restore xtrace
$_XTRACE_DOCKER
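install_kubeadm derives the pkgs.k8s.io repository stream from K8S_VERSION with a suffix-strip parameter expansion; a quick illustration:

```shell
# ${K8S_VERSION%.*} drops the shortest trailing ".something", i.e. the
# patch level, leaving the "major.minor" series the repo is keyed on.
K8S_VERSION="1.30.5"
stream="https://pkgs.k8s.io/core:/stable:/v${K8S_VERSION%.*}"
echo "$stream"   # https://pkgs.k8s.io/core:/stable:/v1.30
```

This is why bumping only the patch component of K8S_VERSION keeps pointing at the same apt repository.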

devstack/lib/tools/crictl Normal file

@@ -0,0 +1,76 @@
#!/bin/bash
#
# lib/tools/crictl
# CRI command line tools functions
# Dependencies:
# ``functions`` file
# ``STACK_USER`` has to be defined
# Save trace setting
_XTRACE_CONTAINER_TOOLS_CRICTL=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
CRICTL_BIN_DIR=/usr/local/bin
CRICTL_VERSION=${CRICTL_VERSION:-v1.17.0}
CRICTL_SHA256_AMD64=${CRICTL_SHA256_AMD64:-"7b72073797f638f099ed19550d52e9b9067672523fc51b746e65d7aa0bafa414"}
CRICTL_SHA256_ARM64=${CRICTL_SHA256_ARM64:-"d89afd89c2852509fafeaff6534d456272360fcee732a8d0cb89476377387e12"}
CRICTL_SHA256_PPC64=${CRICTL_SHA256_PPC64:-"a61c52b9ac5bffe94ae4c09763083c60f3eccd30eb351017b310f32d1cafb855"}
CRICTL_SHA256_S390X=${CRICTL_SHA256_S390X:-"0db445f0b74ecb51708b710480a462b728174155c5f2709a39d1cc2dc975e350"}
# Make sure we download the binary for the correct architecture
if is_arch "x86_64"; then
CRICTL_ARCH="amd64"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_AMD64}
elif is_arch "aarch64"; then
CRICTL_ARCH="arm64"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_ARM64}
elif is_arch "ppc64le"; then
CRICTL_ARCH="ppc64le"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_PPC64}
elif is_arch "s390x"; then
CRICTL_ARCH="s390x"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_S390X}
else
exit_distro_not_supported "invalid hardware type"
fi
CRICTL_DOWNLOAD_URL=${CRICTL_DOWNLOAD_URL:-https://github.com/kubernetes-sigs/cri-tools/releases/download}
CRICTL_DOWNLOAD_FILE=crictl-$CRICTL_VERSION-linux-$CRICTL_ARCH.tar.gz
CRICTL_DOWNLOAD_LOCATION=$CRICTL_DOWNLOAD_URL/$CRICTL_VERSION/$CRICTL_DOWNLOAD_FILE
# Installs crictl tools.
function install_crictl {
echo "Installing CRI command-line tools"
# Download and cache the crictl tar for subsequent use
local crictl_file
crictl_file="$(get_extra_file $CRICTL_DOWNLOAD_LOCATION)"
if [ ! -f "$FILES/crictl" ]; then
echo "${CRICTL_SHA256} $crictl_file" > $FILES/crictl.sha256sum
# remove the damaged file when checksum fails
sha256sum -c $FILES/crictl.sha256sum || (sudo rm -f $crictl_file; exit 1)
tar xzvf $crictl_file -C $FILES
sudo install -o "$STACK_USER" -m 0555 -D "$FILES/crictl" \
"$CRICTL_BIN_DIR/crictl"
fi
}
# Configure crictl tools.
function configure_crictl {
local crictl_config_file=/etc/crictl.yaml
cat <<EOF | sudo tee $crictl_config_file >/dev/null
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: true
EOF
}
# Restore xtrace
$_XTRACE_CONTAINER_TOOLS_CRICTL
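The download gate in install_crictl (pin a sha256, verify, delete on mismatch) can be sketched against a throwaway file; the file contents and the bad checksum here are hypothetical stand-ins for the real crictl tarball:

```shell
# Verify a file against a pinned sha256 the way install_crictl does,
# removing the file when the checksum does not match.
f=$(mktemp)
printf 'hello\n' > "$f"
good=$(sha256sum "$f" | awk '{print $1}')
echo "${good}  ${f}" | sha256sum -c - >/dev/null && echo "verified"
bad=$(printf '%064d' 0)   # deliberately wrong checksum
echo "${bad}  ${f}" | sha256sum -c - >/dev/null 2>&1 || { rm -f "$f"; echo "removed damaged file"; }
```

The `|| (rm -f ...; exit 1)` pattern in the lib does the same thing: a failed `sha256sum -c` both removes the damaged download and aborts the install.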


@@ -6,6 +6,8 @@ set -o xtrace
echo_summary "container's plugin.sh was called..."
source $DEST/devstack-plugin-container/devstack/lib/docker
source $DEST/devstack-plugin-container/devstack/lib/crio
source $DEST/devstack-plugin-container/devstack/lib/k8s
(set -o posix; set)
if is_service_enabled container; then
@@ -13,20 +15,52 @@ if is_service_enabled container; then
echo_summary "Installing container engine"
if [[ ${CONTAINER_ENGINE} == "docker" ]]; then
check_docker || install_docker
elif [[ ${CONTAINER_ENGINE} == "crio" ]]; then
check_crio || install_crio
fi
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring container engine"
if [[ ${CONTAINER_ENGINE} == "docker" ]]; then
configure_docker
elif [[ ${CONTAINER_ENGINE} == "crio" ]]; then
configure_crio
fi
fi
if [[ "$1" == "unstack" ]]; then
if [[ ${CONTAINER_ENGINE} == "docker" ]]; then
stop_docker
elif [[ ${CONTAINER_ENGINE} == "crio" ]]; then
stop_crio
fi
fi
if [[ "$1" == "clean" ]]; then
if [[ ${CONTAINER_ENGINE} == "docker" ]]; then
cleanup_docker
fi
fi
fi
if is_k8s_enabled; then
if [[ "$1" == "stack" && "$2" == "install" ]]; then
install_kubeadm
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
if is_service_enabled k8s-master; then
kubeadm_init
elif is_service_enabled k8s-node; then
kubeadm_join
fi
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
if is_service_enabled k8s-master; then
start_collect_logs
fi
fi
if [[ "$1" == "unstack" ]]; then
kubeadm_reset
fi
if [[ "$1" == "clean" ]]; then
# nothing needed here
:


@@ -1,8 +1,35 @@
# Devstack settings
# Supported options are "docker" and "crio".
CONTAINER_ENGINE=${CONTAINER_ENGINE:-docker}
# TODO(hongbin): deprecate and remove clear container
ENABLE_CLEAR_CONTAINER=${ENABLE_CLEAR_CONTAINER:-false}
ENABLE_KATA_CONTAINERS=${ENABLE_KATA_CONTAINERS:-false}
ENABLE_LIVE_RESTORE=${ENABLE_LIVE_RESTORE:-false}
ENABLE_IPV6=${ENABLE_IPV6:-false}
K8S_NETWORK_ADDON=${K8S_NETWORK_ADDON:-flannel}
ENABLE_CONTAINERD_CRI=${ENABLE_CONTAINERD_CRI:-false}
CRIO_VERSION=${CRIO_VERSION:-"1.30.5"}
CRIO_ALLOW_ICMP=${CRIO_ALLOW_ICMP:-true}
CNI_CONF_DIR=${CNI_CONF_DIR:-}
CNI_PLUGIN_DIR=${CNI_PLUGIN_DIR:-}
UBUNTU_DOCKER_VERSION=${UBUNTU_DOCKER_VERSION:-}
# Enable container services
enable_service container
# Enable k8s services
if [[ ,${ENABLED_SERVICES} =~ ,"k8s-master" ]]; then
enable_service kube-controller-manager
enable_service kube-apiserver
enable_service kube-scheduler
enable_service kube-proxy
fi
# Customize kubeadm container images repository
KUBEADMIN_IMAGE_REPOSITORY=${KUBEADMIN_IMAGE_REPOSITORY:-"registry.k8s.io"}
# Configure crio pause image
CRIO_PAUSE_IMAGE=${CRIO_PAUSE_IMAGE:-"registry.k8s.io/pause:3.6"}
CRIO_PAUSE_COMMAND=${CRIO_PAUSE_COMMAND:-"/pause"}
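The ENABLED_SERVICES checks above (and is_k8s_enabled in lib/k8s) rely on prepending a comma before matching, so the pattern only hits at a service-name boundary; a small demonstration (service names here are made up):

```shell
# ",k8s-" only matches a service whose *name* starts with k8s-, not a
# name that merely contains the substring k8s- somewhere inside it.
is_k8s_enabled() { [[ ,${ENABLED_SERVICES} =~ ,"k8s-" ]]; }
ENABLED_SERVICES="key,k8s-master,mysql"
is_k8s_enabled && echo "enabled"       # enabled
ENABLED_SERVICES="keystone,foo-k8s-bar,mysql"
is_k8s_enabled || echo "not enabled"   # not enabled
```

Without the leading comma, the second example would match by accident, since the raw string does contain "k8s-".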


@@ -0,0 +1,15 @@
{
"cniVersion": "0.2.0",
"name": "mynet",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "10.22.0.0/16",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}


@@ -0,0 +1,5 @@
{
"cniVersion": "0.2.0",
"name": "lo",
"type": "loopback"
}


@@ -1,80 +1,3 @@
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*nose_results.html
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testr_results.html.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.testrepository/tmp*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testrepository.subunit.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}/tox'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.tox/*/log/*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs
- hosts: all
roles:
- fetch_docker_log


@@ -0,0 +1,3 @@
- hosts: all
roles:
- run-devstack


@@ -1,69 +1,8 @@
- hosts: all
name: Autoconverted job legacy-devstack-plugin-container-dsvm from old job gate-devstack-plugin-container-dsvm-nv
name: Verify that Docker is installed correctly by running the hello-world image
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
cat > clonemap.yaml << EOF
clonemap:
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
EOF
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
export PYTHONUNBUFFERED=true
export DEVSTACK_GATE_TEMPEST=0
export BRANCH_OVERRIDE=default
if [ "$BRANCH_OVERRIDE" != "default" ] ; then
export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
fi
export PROJECTS="openstack/devstack-plugin-container $PROJECTS"
# Keep localrc to be able to set some vars in post_test_hook
export KEEP_LOCALRC=1
function gate_hook {
bash -xe $BASE/new/devstack-plugin-container/contrib/gate_hook.sh
}
export -f gate_hook
function post_test_hook {
bash -xe $BASE/new/devstack-plugin-container/contrib/post_test_hook.sh fullstack
}
export -f post_test_hook
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
sudo -H -u stack docker run hello-world


@@ -0,0 +1,4 @@
- hosts: all
roles:
- fetch_docker_log
- fetch_kubelet_log


@@ -0,0 +1,3 @@
- hosts: all
roles:
- orchestrate-devstack


@@ -0,0 +1,29 @@
- hosts: controller
name: Verify that k8s is installed correctly by running a pod
tasks:
- shell:
cmd: |
set -e
set -x
kubectl get nodes
kubectl get pods --namespace kube-system
tmpfile=$(mktemp)
cat <<EOT > $tmpfile
apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
EOT
kubectl create -f $tmpfile
kubectl wait --for=condition=Ready pod myapp-pod
become: true
become_user: stack


@@ -0,0 +1,11 @@
---
prelude: >
Support installing Kata Containers.
features:
- |
This release adds support for Kata Containers and configures it to work
with Docker.
deprecations:
- |
Support for Clear Container is deprecated in this release and will be
removed in the next release.


@@ -0,0 +1 @@
Collect docker log from test run


@@ -0,0 +1,22 @@
- name: Ensure log path exists
become: yes
file:
path: "{{ ansible_user_dir }}/logs"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: 0775
- name: Store docker log in {{ ansible_user_dir }}/logs
become: yes
shell:
cmd: |
sudo journalctl -o short-precise --unit docker | sudo tee {{ ansible_user_dir }}/logs/docker.log > /dev/null
- name: Set docker.log file permissions
become: yes
file:
path: '{{ ansible_user_dir }}/logs/docker.log'
owner: '{{ ansible_user }}'
group: '{{ ansible_user }}'
mode: 0644


@@ -0,0 +1 @@
Collect kubelet log from test run


@@ -0,0 +1,22 @@
- name: Ensure log path exists
become: yes
file:
path: "{{ ansible_user_dir }}/logs"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: 0775
- name: Store kubelet log in {{ ansible_user_dir }}/logs
become: yes
shell:
cmd: |
sudo journalctl -o short-precise --unit kubelet | sudo tee {{ ansible_user_dir }}/logs/kubelet.log > /dev/null
- name: Set kubelet.log file permissions
become: yes
file:
path: '{{ ansible_user_dir }}/logs/kubelet.log'
owner: '{{ ansible_user }}'
group: '{{ ansible_user }}'
mode: 0644

tox.ini Normal file

@@ -0,0 +1,35 @@
[tox]
minversion = 3.18.0
skipsdist = True
envlist = bashate
[testenv]
usedevelop = False
install_command = pip install {opts} {packages}
[testenv:bashate]
basepython = python3
# if you want to test out some changes you have made to bashate
# against devstack, just set BASHATE_INSTALL_PATH=/path/... to your
# modified bashate tree
deps =
{env:BASHATE_INSTALL_PATH:bashate==0.5.1}
allowlist_externals = bash
commands = bash -c "find {toxinidir} \
-not \( -type d -name .?\* -prune \) \
-not \( -type d -name doc -prune \) \
-not \( -type f -name localrc -prune \) \
-type f \
-not -name \*~ \
-not -name \*.md \
-not -name stack-screenrc \
-not -name \*.orig \
-not -name \*.rej \
\( \
-name \*.sh -or \
-name \*rc -or \
-name functions\* -or \
-wholename \*/inc/\* -or \
-wholename \*/lib/\* \
\) \
-print0 | xargs -0 bashate -v -iE006 -eE005,E042"