6 Commits

Author SHA1 Message Date
OpenDev Sysadmins
0ac7780b26 OpenDev Migration Patch
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:

http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
2019-04-19 19:42:34 +00:00
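The "simple pattern matching" mentioned above amounts to a hostname rewrite. A minimal sketch, assuming a plain `sed` substitution (the actual migration tooling is not shown on this page, and it also remapped repository namespaces):

```shell
# Hedged sketch of the hostname rewrite described in the commit message.
# Only the hostname substitution is illustrated here.
old_url='https://git.openstack.org/openstack/devstack-plugin-container'
new_url=$(printf '%s' "$old_url" | sed 's|git\.openstack\.org|opendev.org|')
echo "$new_url"
# -> https://opendev.org/openstack/devstack-plugin-container
```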
Michał Dulko
86bf2c6d9d Skip linux-image-extra-$(uname -r) on 18.04
It seems the aforementioned package is not available on Ubuntu 18.04
(Bionic). This commit excludes that release from the Docker installation.

Change-Id: Ib1864497dd19caadf9077386ce278712e4f5de8f
(cherry picked from commit 17a865e064)
2019-03-24 23:41:12 +00:00
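The guard this commit describes can be sketched as follows; the release list here is an assumption for illustration (in devstack the codename comes from `os_CODENAME`):

```shell
# Hedged sketch: only attempt linux-image-extra-$(uname -r) on releases
# where the package still exists; Ubuntu 18.04 (bionic) dropped it.
dist_version=bionic   # illustrative; devstack derives this from os_CODENAME
case "$dist_version" in
    trusty)
        echo "installing linux-image-extra-$(uname -r)"
        ;;
    *)
        echo "skipping linux-image-extra on $dist_version"
        ;;
esac
```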
Ian Wienand
bb969c2075 Replace openstack.org git:// URLs with https://
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.

This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.

This update should result in no functional change.

For more information see the thread at

 http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html

Change-Id: I3a89541d85ef5a646792879bf5a3e3dba77cc3a7
2019-03-24 20:33:29 +00:00
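The mechanical substitution amounts to rewriting the URL scheme; a minimal sketch:

```shell
# Hedged sketch of the mechanical scheme rewrite this commit performs
# across the repository.
echo 'git://git.openstack.org/openstack/devstack' \
    | sed 's|^git://|https://|'
# -> https://git.openstack.org/openstack/devstack
```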
Andreas Jaeger
55ea5c67f4 Import legacy job
Import legacy job from openstack-zuul-jobs.

Change-Id: I5c28ce42606dc96d7df179a46e55abe453f93fe8
(cherry picked from commit 6d65af2900)
2018-09-13 22:17:46 +02:00
Doug Hellmann
2299875327 import zuul job settings from project-config
This is a mechanically generated patch to complete step 1 of moving
the zuul job settings out of project-config and into each project
repository.

Because there will be a separate patch on each branch, the branch
specifiers for branch-specific jobs have been removed.

Because this patch is generated by a script, there may be some
cosmetic changes to the layout of the YAML file(s) as the contents are
normalized.

See the python3-first goal document for details:
https://governance.openstack.org/tc/goals/stein/python3-first.html

Change-Id: If31ebf3fd60fc756b259cf949f5e276b77cfc378
Story: #2002586
Task: #24327
2018-09-08 22:55:42 -04:00
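Dropping the branch specifiers can be sketched line-wise as below; the real change was script-generated and normalized the YAML with a parser, so this `grep` version is illustrative only:

```shell
# Hedged sketch: strip "branches:" specifiers from an imported job stanza,
# since each branch now carries its own copy of the settings.
printf '%s\n' \
    '- job:' \
    '    name: devstack-plugin-container-dsvm' \
    '    branches: stable/queens' \
    | grep -v '^ *branches:'
```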
e00831a025 Update .gitreview for stable/queens
Change-Id: I1b35f14b71b47a201bf45618d1c14b402e8192ee
2018-02-16 16:36:28 +00:00
28 changed files with 225 additions and 1000 deletions

.gitignore (1 deletion)

@@ -1 +0,0 @@
-.tox

.gitreview

@@ -2,4 +2,4 @@
host=review.opendev.org
port=29418
project=openstack/devstack-plugin-container.git
-defaultbranch=unmaintained/zed
+defaultbranch=stable/queens


@@ -1,61 +1,19 @@
- job:
name: devstack-plugin-container-dsvm
parent: devstack
pre-run: playbooks/devstack-plugin-container-dsvm/pre.yaml
parent: legacy-dsvm-base
run: playbooks/devstack-plugin-container-dsvm/run.yaml
post-run: playbooks/devstack-plugin-container-dsvm/post.yaml
timeout: 4200
required-projects:
- openstack/devstack
- openstack/devstack-gate
- openstack/devstack-plugin-container
vars:
devstack_localrc:
USE_PYTHON3: true
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
- job:
name: devstack-plugin-container-k8s
parent: devstack-minimal
nodeset: openstack-two-node-focal
pre-run: playbooks/devstack-plugin-container-k8s/pre.yaml
run: playbooks/devstack-plugin-container-k8s/run.yaml
post-run: playbooks/devstack-plugin-container-k8s/post.yaml
timeout: 7200
required-projects:
- openstack/devstack
- openstack/devstack-plugin-container
vars:
devstack_services:
# Ignore any default set by devstack. Emit a "disable_all_services".
base: false
etcd3: true
container: true
k8s-master: true
devstack_localrc:
K8S_TOKEN: "9agf12.zsu5uh2m4pzt3qba"
USE_PYTHON3: true
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
group-vars:
subnode:
devstack_services:
# Ignore any default set by devstack. Emit a "disable_all_services".
base: false
container: true
k8s-node: true
devstack_localrc:
K8S_TOKEN: "9agf12.zsu5uh2m4pzt3qba"
USE_PYTHON3: true
- project:
check:
jobs:
- openstack-tox-bashate
- devstack-plugin-container-dsvm
- devstack-plugin-container-k8s:
- devstack-plugin-container-dsvm:
voting: false
gate:
jobs:
- openstack-tox-bashate
- devstack-plugin-container-dsvm
- noop


@@ -1,19 +0,0 @@
The source repository for this project can be found at:
https://opendev.org/openstack/devstack-plugin-container
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on Launchpad:
https://bugs.launchpad.net/devstack
For more specific information about contributing to this repository, see the
Devstack contributor guide:
https://docs.openstack.org/devstack/latest/contributor/contributing.html


@@ -2,8 +2,8 @@
Container Plugin
================
This plugin enables installation of container engine and Kubernetes on
Devstack. The default container engine is Docker.
This plugin enables installation of container engine on Devstack. The default
container engine is Docker (currently this plugin supports only Docker!).
====================
Enabling in Devstack
@@ -21,59 +21,11 @@ For more info on devstack installation, follow the link below:
2. Add this repo as an external repository
------------------------------------------
This plugin can install either Kubernetes or just a container engine.
To install only the container engine, use the following config:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
END
To install Kata Containers, use the following config:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
ENABLE_KATA_CONTAINERS=True
END
To install Kubernetes, use the following config on the master node:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
enable_service etcd3
enable_service container
enable_service k8s-master
# kubeadm token generate
K8S_TOKEN="9agf12.zsu5uh2m4pzt3qba"
...
END
And use the following config on the worker node:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
SERVICE_HOST=10.0.0.11 # change this to controller's IP address
enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
enable_service container
enable_service k8s-node
# kubeadm token generate
K8S_TOKEN="9agf12.zsu5uh2m4pzt3qba"
...
END
3. Run devstack

contrib/gate_hook.sh (new file, 21 additions)

@@ -0,0 +1,21 @@
#!/bin/bash -x
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# This script is executed inside gate_hook function in devstack gate.
# Keep all devstack settings here instead of project-config, for easier
# maintenance if we want to change devstack config settings in the future.
$BASE/new/devstack-gate/devstack-vm-gate.sh

contrib/post_test_hook.sh (new file, 48 additions)

@@ -0,0 +1,48 @@
#!/bin/bash -x
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside post_test_hook function in devstack gate.
# Give the services a few seconds to finish starting
sleep 5
# Check if a function already exists
function function_exists {
declare -f -F $1 > /dev/null
}
if ! function_exists echo_summary; then
function echo_summary {
echo $@
}
fi
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set -o xtrace
echo_summary "Devstack-plugin-container's post_test_hook.sh was called..."
(set -o posix; set)
# Verify that Docker is installed correctly by running the hello-world image
sudo -H -u stack docker run hello-world
EXIT_CODE=$?
# Copy over docker systemd unit journals.
mkdir -p $WORKSPACE/logs
sudo journalctl -o short-precise --unit docker | sudo tee $WORKSPACE/logs/docker.txt > /dev/null
$XTRACE
exit $EXIT_CODE

devstack/lib/cni/plugins

@@ -1,94 +0,0 @@
#!/bin/bash
#
# lib/cni/plugins
# Common CNI plugins functions
# Dependencies:
# ``functions`` file
# ``STACK_USER`` has to be defined
# Save trace setting
_XTRACE_CONTAINER_CNI_PLUGINS=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
CNI_PLUGINS_BIN_DIR=/opt/cni/bin
# install all plugins by default
CNI_PLUGINS_INSTALL_PLUGINS=${CNI_PLUGINS_INSTALL_PLUGINS:-flannel,ptp,host-local,portmap,tuning,vlan,host-device,sample,dhcp,ipvlan,macvlan,loopback,bridge}
CNI_PLUGINS_CONF_SOURCE_DIR=${CNI_PLUGINS_CONF_SOURCE_DIR:-$DEST/devstack-plugin-container/etc/cni/net.d}
CNI_PLUGINS_CONF_DIR=${CNI_PLUGINS_CONF_DIR:-/etc/cni/net.d}
CNI_PLUGINS_VERSION=${CNI_PLUGINS_VERSION:-v0.7.1}
CNI_PLUGINS_SHA256_AMD64=${CNI_PLUGINS_SHA256_AMD64:-"6ecc5c7dbb8e4296b0d0d017e5440618e19605b9aa3b146a2c29af492f299dc7"}
CNI_PLUGINS_SHA256_ARM64=${CNI_PLUGINS_SHA256_ARM64:-"258080b94bfc54bd54fd0ea7494efc31806aa4b2836ba3f2d189e0fc16fab0ef"}
CNI_PLUGINS_SHA256_PPC64=${CNI_PLUGINS_SHA256_PPC64:-"a515c45a52e752249bb0e9feac1654c5d38974df6a36148778f6eeab9826f706"}
CNI_PLUGINS_SHA256_S390X=${CNI_PLUGINS_SHA256_S390X:-"24e31be69a012395f1026cd37d125f5f81001cfc36434d8f7a17b36bc5f1e6ad"}
# Make sure we download the CNI plugins build for the correct architecture
if is_arch "x86_64"; then
CNI_PLUGINS_ARCH="amd64"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_AMD64}
elif is_arch "aarch64"; then
CNI_PLUGINS_ARCH="arm64"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_ARM64}
elif is_arch "ppc64le"; then
CNI_PLUGINS_ARCH="ppc64le"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_PPC64}
elif is_arch "s390x"; then
CNI_PLUGINS_ARCH="s390x"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_S390X}
else
exit_distro_not_supported "invalid hardware type"
fi
CNI_PLUGINS_DOWNLOAD_URL=${CNI_PLUGINS_DOWNLOAD_URL:-https://github.com/containernetworking/plugins/releases/download}
CNI_PLUGINS_DOWNLOAD_FILE=cni-plugins-$CNI_PLUGINS_ARCH-$CNI_PLUGINS_VERSION.tgz
CNI_PLUGINS_DOWNLOAD_LOCATION=$CNI_PLUGINS_DOWNLOAD_URL/$CNI_PLUGINS_VERSION/$CNI_PLUGINS_DOWNLOAD_FILE
# Installs standard cni plugins.
function install_cni_plugins {
echo "Installing CNI standard plugins"
# Download and cache the cni plugins tgz for subsequent use
local cni_plugins_file
cni_plugins_file="$(get_extra_file $CNI_PLUGINS_DOWNLOAD_LOCATION)"
if [ ! -d "$FILES/cniplugins" ]; then
echo "${CNI_PLUGINS_SHA256} $cni_plugins_file" > $FILES/cniplugins.sha256sum
# remove the damaged file when checksum fails
sha256sum -c $FILES/cniplugins.sha256sum || (sudo rm -f $cni_plugins_file; exit 1)
mkdir $FILES/cniplugins
tar xzvf $cni_plugins_file -C $FILES/cniplugins
fi
for plugin in ${CNI_PLUGINS_INSTALL_PLUGINS//,/ }; do
if [ $(ls $FILES/cniplugins/$plugin 2> /dev/null) ]; then
echo "Install plugin: $plugin"
sudo install -o "$STACK_USER" -m 0555 -D "$FILES/cniplugins/$plugin" \
"$CNI_PLUGINS_BIN_DIR/$plugin"
else
echo "Skip installing plugin: $plugin"
fi
done
}
# Configure cni plugins.
function configure_cni_plugins {
echo "Configuring CNI plugins"
for plugin in ${CNI_PLUGINS_INSTALL_PLUGINS//,/ }; do
local source_config_file
source_config_file=$(ls ${CNI_PLUGINS_CONF_SOURCE_DIR}/*${plugin}.conf 2> /dev/null || true)
if [ $source_config_file ]; then
echo "Found config file for plugin: $plugin"
sudo install -o "$STACK_USER" -m 0664 -t "$CNI_PLUGINS_CONF_DIR" -D \
"${source_config_file}"
else
echo "Config file not found for plugin: $plugin"
fi
done
}
# Restore xtrace
$_XTRACE_CONTAINER_CNI_PLUGINS


@@ -1,186 +0,0 @@
#!/bin/bash
# Dependencies:
#
# - functions
# stack.sh
# ---------
# - check_crio
# - install_crio
# - configure_crio
# - stop_crio
# Save trace setting
_XTRACE_DOCKER=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
CRIO_ENGINE_SOCKET_FILE=${CRIO_ENGINE_SOCKET_FILE:-/var/run/crio/crio.sock}
CRIO_ALLOW_ICMP=$(trueorfalse True CRIO_ALLOW_ICMP)
# Functions
# ---------
function check_crio {
if is_ubuntu; then
dpkg -l | grep cri-o > /dev/null 2>&1
else
false
# TODO: CentOS/Fedora support.
fi
}
function install_crio {
if [[ -z "$os_PACKAGE" ]]; then
GetOSVersion
fi
local lsb_dist=${os_VENDOR,,}
local dist_version=${os_CODENAME}
local kubic_obs_project_key="2472d6d0d2f66af87aba8da34d64390375060aa4"
local os="x${os_VENDOR}_${os_RELEASE}"
if is_ubuntu; then
apt_get install apt-transport-https ca-certificates \
software-properties-common
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
--recv ${kubic_obs_project_key}
sudo apt-add-repository "deb https://download.opensuse.org/"`
`"repositories/devel:/kubic:/libcontainers:/stable/${os}/ /"
sudo apt-add-repository "deb http://download.opensuse.org/"`
`"repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/"`
`"${CRIO_VERSION}/${os}/ /"
# Installing podman and containerd will get us compatible versions of
# cri-o and runc. And we need podman to manage container images anyway.
apt_get install podman buildah cri-o-runc cri-o
elif is_fedora; then
if [[ "$lsb_dist" = "centos" ]]; then
sudo yum-config-manager \
--add-repo \
https://cbs.centos.org/repos/virt7-container-common-candidate/x86_64/os/
sudo yum-config-manager \
--add-repo \
https://cbs.centos.org/repos/paas7-crio-311-candidate/x86_64/os/
fi
yum_install cri-o podman buildah
fi
}
function configure_crio {
# After ./unstack the service is already stopped, so a non-zero exit code is OK here
sudo systemctl stop crio.service || true
export CRIO_CONF="/etc/crio/crio.conf"
# We're wrapping values in \"<val>\" because that's the format cri-o wants.
iniset -sudo ${CRIO_CONF} crio.api listen \"${CRIO_ENGINE_SOCKET_FILE}\"
iniset -sudo ${CRIO_CONF} crio.image pause_image \"${CRIO_PAUSE_IMAGE}\"
iniset -sudo ${CRIO_CONF} crio.image pause_command \"${CRIO_PAUSE_COMMAND}\"
if [[ "$ENABLE_DEBUG_LOG_LEVEL" == "True" ]]; then
# debug is way too verbose, info will be enough
iniset -sudo ${CRIO_CONF} crio.runtime log_level \"info\"
fi
if is_ubuntu; then
# At least for 18.04 we need to set up /etc/containers/registries.conf
# with some initial content. That's another bug with that PPA.
local registries_conf
registries_conf="/etc/containers/registries.conf"
if [[ ! -f ${registries_conf} ]]; then
sudo mkdir -p `dirname ${registries_conf}`
cat << EOF | sudo tee ${registries_conf}
[registries.search]
registries = ['docker.io']
EOF
fi
# CRI-O from the kubic repo places runc in a different location, not even
# in $PATH, to avoid conflicting with the runc package from the official
# repo. We need to point cri-o at it.
iniset -sudo ${CRIO_CONF} crio.runtime.runtimes.runc runtime_path \
\"/usr/lib/cri-o-runc/sbin/runc\"
if [ -n "${CNI_CONF_DIR}" ]; then
iniset -sudo ${CRIO_CONF} crio.network network_dir \
\"${CNI_CONF_DIR}\"
fi
if [ -n "${CNI_PLUGIN_DIR}" ]; then
iniset -sudo ${CRIO_CONF} crio.network plugin_dir \
\"${CNI_PLUGIN_DIR}\"
fi
# By default CRI-O doesn't allow ICMP between containers, although it
# is usually expected for testing purposes.
if [ "${CRIO_ALLOW_ICMP}" == "True" ]; then
if grep -q 'default_sysctls =' ${CRIO_CONF}; then
export CRIO_KEY="default_sysctls"
export CRIO_VAL='[ "net.ipv4.ping_group_range=0 2147483647", ]'
_update_config
else
iniset -sudo ${CRIO_CONF} crio.runtime default_sysctls \
'[ "net.ipv4.ping_group_range=0 2147483647", ]'
fi
fi
elif is_fedora; then
local lsb_dist=${os_VENDOR,,}
if [[ "$lsb_dist" = "centos" ]]; then
# CentOS packages put the runc binary in a different place...
iniset -sudo ${CRIO_CONF} crio.runtime runtime \"/usr/sbin/runc\"
# CentOS version seems to only work with cgroupfs...
iniset -sudo ${CRIO_CONF} crio.runtime cgroup_manager \"cgroupfs\"
fi
fi
sudo systemctl --no-block restart crio.service
}
function stop_crio {
sudo systemctl stop crio.service || true
}
function _update_config {
sudo -E python3 - <<EOF
"""
Update the list named by CRIO_KEY in the cri-o configuration, given in the form:
some_key = [ some,
value
]
or just an empty list:
some_key = [
]
with the CRIO_VAL value.
Note, CRIO_VAL must include square brackets.
"""
import os
import re
crio_key = os.environ.get('CRIO_KEY')
crio_val = os.environ.get('CRIO_VAL')
crio_conf = os.environ.get('CRIO_CONF')
pat = re.compile(rf'{crio_key}\s*=\s*\[[^\]]*\]', flags=re.S | re.M)
with open(crio_conf) as fobj:
conf = fobj.read()
with open(crio_conf, 'w') as fobj:
search = pat.search(conf)
if search:
start, end = search.span()
conf = conf[:start] + f'{crio_key} = {crio_val}' + conf[end:]
fobj.write(conf)
EOF
}
# Restore xtrace
$_XTRACE_DOCKER

devstack/lib/docker

@@ -26,26 +26,16 @@ DOCKER_ENGINE_PORT=${DOCKER_ENGINE_PORT:-2375}
DOCKER_CLUSTER_STORE=${DOCKER_CLUSTER_STORE:-}
DOCKER_GROUP=${DOCKER_GROUP:-$STACK_USER}
DOCKER_CGROUP_DRIVER=${DOCKER_CGROUP_DRIVER:-}
# TODO(hongbin): deprecate and remove clear container
ENABLE_CLEAR_CONTAINER=$(trueorfalse False ENABLE_CLEAR_CONTAINER)
ENABLE_KATA_CONTAINERS=$(trueorfalse False ENABLE_KATA_CONTAINERS)
ENABLE_CONTAINERD_CRI=$(trueorfalse False ENABLE_CONTAINERD_CRI)
ENABLE_LIVE_RESTORE=$(trueorfalse False ENABLE_LIVE_RESTORE)
ENABLE_IPV6=$(trueorfalse False ENABLE_IPV6)
KATA_BRANCH=${KATA_BRANCH:-master}
KATA_RUNTIME=${KATA_RUNTIME:-kata-runtime}
CONTAINERD_CONF_DIR=/etc/containerd
CONTAINERD_CONF=$CONTAINERD_CONF_DIR/config.toml
# Functions
# ---------
function check_docker {
if is_ubuntu; then
dpkg -s docker-engine > /dev/null 2>&1 || dpkg -s docker-ce > /dev/null 2>&1
dpkg -s docker-engine > /dev/null 2>&1 || dpkg -s docker-ce > /dev/null 2>&1
else
rpm -q docker-engine > /dev/null 2>&1 || rpm -q docker > /dev/null 2>&1 || rpm -q docker-ce > /dev/null 2>&1
rpm -q docker-engine > /dev/null 2>&1 || rpm -q docker > /dev/null 2>&1 || rpm -q docker-ce > /dev/null 2>&1
fi
}
@@ -56,12 +46,8 @@ function install_docker {
local lsb_dist=${os_VENDOR,,}
local dist_version=${os_CODENAME}
if [[ "$lsb_dist" != "centosstream" ]]; then
local arch
arch=$(dpkg --print-architecture)
fi
local arch=$(dpkg --print-architecture)
if is_ubuntu; then
apt_get install apparmor
if [[ ${dist_version} == 'trusty' ]]; then
if uname -r | grep -q -- '-generic' && dpkg -l 'linux-image-*-generic' | grep -qE '^ii|^hi' 2>/dev/null; then
apt_get install linux-image-extra-$(uname -r) linux-image-extra-virtual
@@ -76,27 +62,12 @@ function install_docker {
${dist_version} \
stable"
REPOS_UPDATED=False apt_get_update
if [ -n "${UBUNTU_DOCKER_VERSION}" ]; then
apt_get install docker-ce=$UBUNTU_DOCKER_VERSION
else
apt_get install docker-ce
fi
apt_get install docker-ce
elif is_fedora; then
if [[ "$lsb_dist" = "centos" ]]; then
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
elif [[ "$lsb_dist" = "centosstream" ]]; then
sudo yum-config-manager \
--add-repo \
https://download.docker.com/linux/centos/docker-ce.repo
sudo yum-config-manager \
--add-repo \
https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64 #noqa
sudo yum-config-manager \
--enable \
packages.cloud.google.com_yum_repos_kubernetes-el7-x86_64
sudo dnf -y install kubeadm --nogpgcheck
elif [[ "$lsb_dist" = "fedora" ]]; then
sudo dnf config-manager \
--add-repo \
@@ -104,23 +75,9 @@ function install_docker {
fi
yum_install docker-ce
fi
if [[ "$ENABLE_KATA_CONTAINERS" == "True" ]]; then
# Kata Containers can't run inside VM, so check whether virtualization
# is enabled or not
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
if is_ubuntu; then
install_kata_container_ubuntu
elif is_fedora; then
install_kata_container_fedora
fi
else
(>&2 echo "WARNING: Kata Containers needs the CPU extensions svm or vmx which is not enabled. Skipping Kata Containers installation.")
fi
# TODO(hongbin): deprecate and remove clear container
elif [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
if [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
# Clear Container can't run inside VM, so check whether virtualization
# is enabled or not
(>&2 echo "WARNING: Clear Container support is deprecated in Train release and will be removed in U release.")
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
if is_ubuntu; then
install_clear_container_ubuntu
@@ -131,27 +88,9 @@ function install_docker {
(>&2 echo "WARNING: Clear Container needs the CPU extensions svm or vmx which is not enabled. Skipping Clear Container installation.")
fi
fi
if [[ "$ENABLE_CONTAINERD_CRI" == "True" ]]; then
source $DEST/devstack-plugin-container/devstack/lib/cni/plugins
install_cni_plugins
source $DEST/devstack-plugin-container/devstack/lib/tools/crictl
install_crictl
fi
}
function configure_docker {
if [[ ${ENABLE_CONTAINERD_CRI} == "True" ]]; then
source $DEST/devstack-plugin-container/devstack/lib/cni/plugins
configure_cni_plugins
configure_containerd
source $DEST/devstack-plugin-container/devstack/lib/tools/crictl
configure_crictl
fi
# After ./unstack the service is already stopped, so a non-zero exit code is OK here
sudo systemctl stop docker.service || true
@@ -160,18 +99,7 @@ function configure_docker {
cluster_store_opts+="\"cluster-store\": \"$DOCKER_CLUSTER_STORE\","
fi
local runtime_opts=""
if [[ "$ENABLE_KATA_CONTAINERS" == "True" ]]; then
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
runtime_opts+="\"runtimes\": {
\"$KATA_RUNTIME\": {
\"path\": \"/usr/bin/kata-runtime\"
}
},
\"default-runtime\": \"$KATA_RUNTIME\","
fi
# TODO(hongbin): deprecate and remove clear container
elif [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
(>&2 echo "WARNING: Clear Container support is deprecated in Train release and will be removed in U release.")
if [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
runtime_opts+="\"runtimes\": {
\"cor\": {
@@ -182,31 +110,17 @@ function configure_docker {
fi
local docker_config_file=/etc/docker/daemon.json
local debug
local live_restore
local ipv6
if [[ "$ENABLE_DEBUG_LOG_LEVEL" == "True" ]]; then
debug=true
else
debug=false
fi
if [[ "$ENABLE_LIVE_RESTORE" == "True" ]]; then
live_restore=true
else
live_restore=false
fi
if [[ "$ENABLE_IPV6" == "True" ]]; then
ipv6=true
else
ipv6=false
fi
sudo mkdir -p $(dirname ${docker_config_file})
cat <<EOF | sudo tee $docker_config_file >/dev/null
{
$cluster_store_opts
$runtime_opts
"debug": ${debug},
"live-restore": ${live_restore},
"ipv6": ${ipv6},
"group": "$DOCKER_GROUP",
EOF
if [[ -n "$DOCKER_CGROUP_DRIVER" ]]; then
@@ -238,42 +152,10 @@ EOF
sudo systemctl --no-block restart docker.service
}
function configure_containerd {
sudo mkdir -p $CONTAINERD_CONF_DIR
sudo chown -R $STACK_USER $CONTAINERD_CONF_DIR
stack_user_gid=$(getent group $STACK_USER | cut -d: -f3)
cat <<EOF | sudo tee $CONTAINERD_CONF >/dev/null
[grpc]
gid = $stack_user_gid
[debug]
level = "debug"
EOF
if [[ "$ENABLE_KATA_CONTAINERS" == "True" ]]; then
cat <<EOF | sudo tee -a $CONTAINERD_CONF >/dev/null
[plugins]
[plugins.cri]
[plugins.cri.containerd]
[plugins.cri.containerd.runtimes.${KATA_RUNTIME}]
runtime_type = "io.containerd.kata.v2"
EOF
fi
sudo systemctl --no-block restart containerd.service
}
function stop_docker {
sudo systemctl stop docker.service || true
}
function cleanup_docker {
uninstall_package docker-ce
rm -f $CONTAINERD_CONF
}
# TODO(hongbin): deprecate and remove clear container
function install_clear_container_ubuntu {
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/clearlinux:/preview:/clear-containers-2.1/xUbuntu_$(lsb_release -rs)/ /' >> /etc/apt/sources.list.d/cc-oci-runtime.list"
curl -fsSL http://download.opensuse.org/repositories/home:/clearlinux:/preview:/clear-containers-2.1/xUbuntu_$(lsb_release -rs)/Release.key | sudo apt-key add -
@@ -281,7 +163,6 @@ function install_clear_container_ubuntu {
apt_get install cc-oci-runtime
}
# TODO(hongbin): deprecate and remove clear container
function install_clear_container_fedora {
source /etc/os-release
local lsb_dist=${os_VENDOR,,}
@@ -293,31 +174,5 @@ function install_clear_container_fedora {
yum_install cc-oci-runtime linux-container
}
function install_kata_container_ubuntu {
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/xUbuntu_${os_RELEASE}/ /' \
> /etc/apt/sources.list.d/kata-containers.list"
curl -sL http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/xUbuntu_${os_RELEASE}/Release.key \
| sudo apt-key add -
REPOS_UPDATED=False apt_get_update
apt_get install kata-runtime kata-proxy kata-shim
}
function install_kata_container_fedora {
source /etc/os-release
if [[ -x $(command -v dnf 2>/dev/null) ]]; then
sudo dnf -y install dnf-plugins-core
sudo -E dnf config-manager --add-repo \
"http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/Fedora_${VERSION_ID}/home:katacontainers:releases:$(arch):${KATA_BRANCH}.repo"
elif [[ -x $(command -v yum 2>/dev/null) ]]; then
# all RH platforms (fedora, centos, rhel) have this pkg
sudo yum -y install yum-utils
sudo -E yum-config-manager --add-repo \
"http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/CentOS_${VERSION_ID}/home:katacontainers:releases:$(arch):${KATA_BRANCH}.repo"
else
die $LINENO "Unable to find or auto-install Kata Containers"
fi
yum_install kata-runtime kata-proxy kata-shim
}
# Restore xtrace
$_XTRACE_DOCKER


@@ -1,158 +0,0 @@
#!/bin/bash
# Dependencies:
#
# - functions
# - ``STACK_USER`` must be defined
# stack.sh
# --------
# - install_k8s
# The following variables are assumed to be defined by certain functions:
#
# - ``http_proxy`` ``https_proxy`` ``no_proxy``
# Save trace setting
_XTRACE_DOCKER=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
K8S_TOKEN=${K8S_TOKEN:-""}
K8S_API_SERVER_IP=${K8S_API_SERVER_IP:-$SERVICE_HOST}
K8S_NODE_IP=${K8S_NODE_IP:-$HOST_IP}
K8S_API_SERVER_PORT=${K8S_API_SERVER_PORT:-6443}
K8S_POD_NETWORK_CIDR=${K8S_POD_NETWORK_CIDR:-10.244.0.0/16}
K8S_SERVICE_NETWORK_CIDR=${K8S_SERVICE_NETWORK_CIDR:-10.96.0.0/12}
K8S_VERSION=${K8S_VERSION:-1.19.0-00}
K8S_NETWORK_ADDON=${K8S_NETWORK_ADDON:-flannel}
# Functions
# ---------
function is_k8s_enabled {
[[ ,${ENABLED_SERVICES} =~ ,"k8s-" ]] && return 0
return 1
}
function install_kubeadm {
if is_ubuntu; then
apt_get install apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo add-apt-repository -y \
"deb https://apt.kubernetes.io/ kubernetes-xenial main"
REPOS_UPDATED=False apt_get_update
apt_get install kubelet=$K8S_VERSION kubeadm=$K8S_VERSION kubectl=$K8S_VERSION
sudo apt-mark hold kubelet kubeadm kubectl
# NOTE(hongbin): This works around an issue where kubelet picks the wrong
# IP address when the node has multiple network interfaces.
# See https://github.com/kubernetes/kubeadm/issues/203
echo "KUBELET_EXTRA_ARGS=--node-ip=$K8S_NODE_IP" | sudo tee -a /etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet
else
(>&2 echo "WARNING: kubeadm installation is not supported in this distribution.")
fi
}
function kubeadm_init {
local kubeadm_config_file
kubeadm_config_file=$(mktemp)
cat <<EOF | tee $kubeadm_config_file >/dev/null
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: "${KUBEADMIN_IMAGE_REPOSITORY}"
etcd:
external:
endpoints:
- "http://${SERVICE_HOST}:${ETCD_PORT}"
networking:
podSubnet: "${K8S_POD_NETWORK_CIDR}"
serviceSubnet: "${K8S_SERVICE_NETWORK_CIDR}"
---
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- token: "${K8S_TOKEN}"
ttl: 0s
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: "${K8S_API_SERVER_IP}"
bindPort: ${K8S_API_SERVER_PORT}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
EOF
sudo kubeadm config images pull --image-repository=${KUBEADMIN_IMAGE_REPOSITORY}
sudo kubeadm init --config $kubeadm_config_file --ignore-preflight-errors Swap
local kube_config_file=$HOME/.kube/config
sudo mkdir -p $(dirname ${kube_config_file})
sudo cp /etc/kubernetes/admin.conf $kube_config_file
safe_chown $STACK_USER:$STACK_USER $kube_config_file
if [[ "$K8S_NETWORK_ADDON" == "flannel" ]]; then
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/4ff77dc7c35851913587f7daccf25d754e77aa65/Documentation/kube-flannel.yml
fi
}
function kubeadm_join {
local kubeadm_config_file
kubeadm_config_file=$(mktemp)
cat <<EOF | tee $kubeadm_config_file >/dev/null
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
bootstrapToken:
apiServerEndpoint: "${K8S_API_SERVER_IP}:${K8S_API_SERVER_PORT}"
token: "${K8S_TOKEN}"
unsafeSkipCAVerification: true
tlsBootstrapToken: "${K8S_TOKEN}"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
EOF
sudo kubeadm join --config $kubeadm_config_file --ignore-preflight-errors Swap
}
function start_collect_logs {
wait_for_kube_service 180 component=kube-controller-manager
wait_for_kube_service 60 component=kube-apiserver
wait_for_kube_service 30 component=kube-scheduler
wait_for_kube_service 30 k8s-app=kube-proxy
run_process kube-controller-manager "/usr/bin/kubectl logs -n kube-system -f -l component=kube-controller-manager"
run_process kube-apiserver "/usr/bin/kubectl logs -n kube-system -f -l component=kube-apiserver"
run_process kube-scheduler "/usr/bin/kubectl logs -n kube-system -f -l component=kube-scheduler"
run_process kube-proxy "/usr/bin/kubectl logs -n kube-system -f -l k8s-app=kube-proxy"
}
function wait_for_kube_service {
local timeout=$1
local selector=$2
local rval=0
time_start "wait_for_service"
timeout $timeout bash -x <<EOF || rval=$?
NAME=""
while [[ "\$NAME" == "" ]]; do
sleep 1
NAME=\$(kubectl wait --for=condition=Ready pod -n kube-system -l $selector -o name)
done
EOF
time_stop "wait_for_service"
# Figure out what's happening on platforms where this doesn't work
if [[ "$rval" != 0 ]]; then
echo "Didn't find kube service after $timeout seconds"
kubectl get pods -n kube-system -l $selector
fi
return $rval
}
function kubeadm_reset {
sudo kubeadm reset --force
}
# Restore xtrace
$_XTRACE_DOCKER

devstack/lib/tools/crictl

@@ -1,76 +0,0 @@
#!/bin/bash
#
# lib/tools/crictl
# CRI command line tools functions
# Dependencies:
# ``functions`` file
# ``STACK_USER`` has to be defined
# Save trace setting
_XTRACE_CONTAINER_TOOLS_CRICTL=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
CRICTL_BIN_DIR=/usr/local/bin
CRICTL_VERSION=${CRICTL_VERSION:-v1.17.0}
CRICTL_SHA256_AMD64=${CRICTL_SHA256_AMD64:-"7b72073797f638f099ed19550d52e9b9067672523fc51b746e65d7aa0bafa414"}
CRICTL_SHA256_ARM64=${CRICTL_SHA256_ARM64:-"d89afd89c2852509fafeaff6534d456272360fcee732a8d0cb89476377387e12"}
CRICTL_SHA256_PPC64=${CRICTL_SHA256_PPC64:-"a61c52b9ac5bffe94ae4c09763083c60f3eccd30eb351017b310f32d1cafb855"}
CRICTL_SHA256_S390X=${CRICTL_SHA256_S390X:-"0db445f0b74ecb51708b710480a462b728174155c5f2709a39d1cc2dc975e350"}
# Make sure we download the binary for the correct architecture
if is_arch "x86_64"; then
CRICTL_ARCH="amd64"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_AMD64}
elif is_arch "aarch64"; then
CRICTL_ARCH="arm64"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_ARM64}
elif is_arch "ppc64le"; then
CRICTL_ARCH="ppc64le"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_PPC64}
elif is_arch "s390x"; then
CRICTL_ARCH="s390x"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_S390X}
else
exit_distro_not_supported "invalid hardware type"
fi
CRICTL_DOWNLOAD_URL=${CRICTL_DOWNLOAD_URL:-https://github.com/kubernetes-sigs/cri-tools/releases/download}
CRICTL_DOWNLOAD_FILE=crictl-$CRICTL_VERSION-linux-$CRICTL_ARCH.tar.gz
CRICTL_DOWNLOAD_LOCATION=$CRICTL_DOWNLOAD_URL/$CRICTL_VERSION/$CRICTL_DOWNLOAD_FILE
# Installs crictl tools.
function install_crictl {
echo "Installing CRI command-line tools"
# Download and cache the crictl tar for subsequent use
local crictl_file
crictl_file="$(get_extra_file $CRICTL_DOWNLOAD_LOCATION)"
if [ ! -f "$FILES/crictl" ]; then
echo "${CRICTL_SHA256} $crictl_file" > $FILES/crictl.sha256sum
# remove the damaged file when checksum fails
sha256sum -c $FILES/crictl.sha256sum || { sudo rm -f "$crictl_file"; exit 1; }
tar xzvf $crictl_file -C $FILES
sudo install -o "$STACK_USER" -m 0555 -D "$FILES/crictl" \
"$CRICTL_BIN_DIR/crictl"
fi
}
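The download guard in `install_crictl` — record a checksum, verify it, and delete the file on mismatch so the next run re-downloads — can be sketched standalone (all paths here are hypothetical temporaries, not the real `$FILES` cache):

```shell
#!/bin/bash
set -e
# Simulate a cached download and verify it before extraction,
# mirroring the sha256sum -c guard in install_crictl.
workdir=$(mktemp -d)
echo "payload" > "$workdir/crictl.tar.gz"
sha256sum "$workdir/crictl.tar.gz" > "$workdir/crictl.sha256sum"
verified=no
# On mismatch, remove the damaged file so a retry fetches a fresh copy
if sha256sum -c "$workdir/crictl.sha256sum" >/dev/null; then
    verified=yes
else
    rm -f "$workdir/crictl.tar.gz"
fi
echo "verified=$verified"
rm -rf "$workdir"
```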
# Configures crictl tools.
function configure_crictl {
local crictl_config_file=/etc/crictl.yaml
cat <<EOF | sudo tee $crictl_config_file >/dev/null
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: true
EOF
}
# Restore xtrace
$_XTRACE_CONTAINER_TOOLS_CRICTL


@@ -6,8 +6,6 @@ set -o xtrace
echo_summary "container's plugin.sh was called..."
source $DEST/devstack-plugin-container/devstack/lib/docker
source $DEST/devstack-plugin-container/devstack/lib/crio
source $DEST/devstack-plugin-container/devstack/lib/k8s
(set -o posix; set)
if is_service_enabled container; then
@@ -15,52 +13,20 @@ if is_service_enabled container; then
echo_summary "Installing container engine"
if [[ ${CONTAINER_ENGINE} == "docker" ]]; then
check_docker || install_docker
elif [[ ${CONTAINER_ENGINE} == "crio" ]]; then
check_crio || install_crio
fi
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
echo_summary "Configuring container engine"
if [[ ${CONTAINER_ENGINE} == "docker" ]]; then
configure_docker
elif [[ ${CONTAINER_ENGINE} == "crio" ]]; then
configure_crio
fi
fi
if [[ "$1" == "unstack" ]]; then
if [[ ${CONTAINER_ENGINE} == "docker" ]]; then
stop_docker
elif [[ ${CONTAINER_ENGINE} == "crio" ]]; then
stop_crio
fi
fi
if [[ "$1" == "clean" ]]; then
if [[ ${CONTAINER_ENGINE} == "docker" ]]; then
cleanup_docker
fi
fi
fi
if is_k8s_enabled; then
if [[ "$1" == "stack" && "$2" == "install" ]]; then
install_kubeadm
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
if is_service_enabled k8s-master; then
kubeadm_init
elif is_service_enabled k8s-node; then
kubeadm_join
fi
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
if is_service_enabled k8s-master; then
start_collect_logs
fi
fi
if [[ "$1" == "unstack" ]]; then
kubeadm_reset
fi
if [[ "$1" == "clean" ]]; then
# nothing needed here
:


@@ -1,35 +1,7 @@
# Devstack settings
# Supported options are "docker" and "crio".
CONTAINER_ENGINE=${CONTAINER_ENGINE:-docker}
# TODO(hongbin): deprecate and remove clear container
ENABLE_CLEAR_CONTAINER=${ENABLE_CLEAR_CONTAINER:-false}
ENABLE_KATA_CONTAINERS=${ENABLE_KATA_CONTAINERS:-false}
ENABLE_LIVE_RESTORE=${ENABLE_LIVE_RESTORE:-false}
ENABLE_IPV6=${ENABLE_IPV6:-false}
K8S_NETWORK_ADDON=${K8S_NETWORK_ADDON:-flannel}
ENABLE_CONTAINERD_CRI=${ENABLE_CONTAINERD_CRI:-false}
CRIO_VERSION=${CRIO_VERSION:-"1.18:/1.18.0"}
CRIO_ALLOW_ICMP=${CRIO_ALLOW_ICMP:-true}
CNI_CONF_DIR=${CNI_CONF_DIR:-}
CNI_PLUGIN_DIR=${CNI_PLUGIN_DIR:-}
UBUNTU_DOCKER_VERSION=${UBUNTU_DOCKER_VERSION:-}
# Enable container services
enable_service container
# Enable k8s services
if [[ ,${ENABLED_SERVICES} =~ ,"k8s-master" ]]; then
enable_service kube-controller-manager
enable_service kube-apiserver
enable_service kube-scheduler
enable_service kube-proxy
fi
# Customize kubeadm container images repository
KUBEADMIN_IMAGE_REPOSITORY=${KUBEADMIN_IMAGE_REPOSITORY:-"k8s.gcr.io"}
# Configure crio pause image
CRIO_PAUSE_IMAGE=${CRIO_PAUSE_IMAGE:-"k8s.gcr.io/pause:3.6"}
CRIO_PAUSE_COMMAND=${CRIO_PAUSE_COMMAND:-"/pause"}


@@ -1,15 +0,0 @@
{
"cniVersion": "0.2.0",
"name": "mynet",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "10.22.0.0/16",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}


@@ -1,5 +0,0 @@
{
"cniVersion": "0.2.0",
"name": "lo",
"type": "loopback"
}


@@ -1,3 +1,80 @@
- hosts: all
roles:
- fetch_docker_log
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*nose_results.html
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testr_results.html.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.testrepository/tmp*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testrepository.subunit.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}/tox'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.tox/*/log/*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs


@@ -1,3 +0,0 @@
- hosts: all
roles:
- run-devstack


@@ -1,8 +1,69 @@
- hosts: all
name: Verify that Docker is installed correctly by running the hello-world image
name: Autoconverted job legacy-devstack-plugin-container-dsvm from old job gate-devstack-plugin-container-dsvm-nv
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
sudo -H -u stack docker run hello-world
cat > clonemap.yaml << EOF
clonemap:
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
EOF
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
export PYTHONUNBUFFERED=true
export DEVSTACK_GATE_TEMPEST=0
export BRANCH_OVERRIDE=default
if [ "$BRANCH_OVERRIDE" != "default" ] ; then
export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
fi
export PROJECTS="openstack/devstack-plugin-container $PROJECTS"
# Keep localrc to be able to set some vars in post_test_hook
export KEEP_LOCALRC=1
function gate_hook {
bash -xe $BASE/new/devstack-plugin-container/contrib/gate_hook.sh
}
export -f gate_hook
function post_test_hook {
bash -xe $BASE/new/devstack-plugin-container/contrib/post_test_hook.sh fullstack
}
export -f post_test_hook
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'


@@ -1,4 +0,0 @@
- hosts: all
roles:
- fetch_docker_log
- fetch_kubelet_log


@@ -1,3 +0,0 @@
- hosts: all
roles:
- orchestrate-devstack


@@ -1,29 +0,0 @@
- hosts: controller
name: Verify that k8s is installed correctly by running a pod
tasks:
- shell:
cmd: |
set -e
set -x
kubectl get nodes
kubectl get pods --namespace kube-system
tmpfile=$(mktemp)
cat <<EOT > $tmpfile
apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
EOT
kubectl create -f $tmpfile
kubectl wait --for=condition=Ready pod myapp-pod
become: true
become_user: stack


@@ -1,11 +0,0 @@
---
prelude: >
Support installing Kata Containers.
features:
- |
This release adds support for Kata Containers and configures it
to work with Docker.
deprecations:
- |
Support for Clear Containers is deprecated in this release and will be
removed in the next release.


@@ -1 +0,0 @@
Collect docker log from test run


@@ -1,22 +0,0 @@
- name: Ensure log path exists
become: yes
file:
path: "{{ ansible_user_dir }}/logs"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: 0775
- name: Store docker log in {{ ansible_user_dir }}/logs
become: yes
shell:
cmd: |
sudo journalctl -o short-precise --unit docker | sudo tee {{ ansible_user_dir }}/logs/docker.log > /dev/null
- name: Set docker.log file permissions
become: yes
file:
path: '{{ ansible_user_dir }}/logs/docker.log'
owner: '{{ ansible_user }}'
group: '{{ ansible_user }}'
mode: 0644


@@ -1 +0,0 @@
Collect kubelet log from test run


@@ -1,22 +0,0 @@
- name: Ensure log path exists
become: yes
file:
path: "{{ ansible_user_dir }}/logs"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: 0775
- name: Store kubelet log in {{ ansible_user_dir }}/logs
become: yes
shell:
cmd: |
sudo journalctl -o short-precise --unit kubelet | sudo tee {{ ansible_user_dir }}/logs/kubelet.log > /dev/null
- name: Set kubelet.log file permissions
become: yes
file:
path: '{{ ansible_user_dir }}/logs/kubelet.log'
owner: '{{ ansible_user }}'
group: '{{ ansible_user }}'
mode: 0644

tox.ini

@@ -1,35 +0,0 @@
[tox]
minversion = 3.18.0
skipsdist = True
envlist = bashate
[testenv]
usedevelop = False
install_command = pip install {opts} {packages}
[testenv:bashate]
basepython = python3
# if you want to test out some changes you have made to bashate
# against devstack, just set BASHATE_INSTALL_PATH=/path/... to your
# modified bashate tree
deps =
{env:BASHATE_INSTALL_PATH:bashate==0.5.1}
allowlist_externals = bash
commands = bash -c "find {toxinidir} \
-not \( -type d -name .?\* -prune \) \
-not \( -type d -name doc -prune \) \
-not \( -type f -name localrc -prune \) \
-type f \
-not -name \*~ \
-not -name \*.md \
-not -name stack-screenrc \
-not -name \*.orig \
-not -name \*.rej \
\( \
-name \*.sh -or \
-name \*rc -or \
-name functions\* -or \
-wholename \*/inc/\* -or \
-wholename \*/lib/\* \
\) \
-print0 | xargs -0 bashate -v -iE006 -eE005,E042"
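The NUL-delimited `find ... -print0 | xargs -0` handoff in the `bashate` command above is what keeps unusual filenames intact across the pipe; a minimal sketch with `echo` standing in for `bashate` (the temporary file names are assumptions for illustration):

```shell
#!/bin/bash
set -e
workdir=$(mktemp -d)
touch "$workdir/stack.sh" "$workdir/local rc.sh"
# -print0 emits NUL-separated names; xargs -0 splits only on NUL,
# so "local rc.sh" reaches the checker as a single argument.
count=$(find "$workdir" -name '*.sh' -print0 | xargs -0 -n1 echo | wc -l)
echo "count=$count"
rm -rf "$workdir"
```

Splitting on whitespace instead (plain `find | xargs`) would hand the checker two bogus paths for the second file, which is why the tox target pays the cost of the `-print0`/`-0` pair.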