3 Commits

Author SHA1 Message Date
OpenDev Sysadmins
541d79af77 OpenDev Migration Patch
This commit was bulk generated and pushed by the OpenDev sysadmins
as a part of the Git hosting and code review systems migration
detailed in these mailing list posts:

http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003603.html
http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004920.html

Attempts have been made to correct repository namespaces and
hostnames based on simple pattern matching, but it's possible some
were updated incorrectly or missed entirely. Please reach out to us
via the contact information listed at https://opendev.org/ with any
questions you may have.
2019-04-19 19:42:34 +00:00
Ian Wienand
cdb06f60eb Replace openstack.org git:// URLs with https://
This is a mechanically generated change to replace openstack.org
git:// URLs with https:// equivalents.

This is in aid of a planned future move of the git hosting
infrastructure to a self-hosted instance of gitea (https://gitea.io),
which does not support the git wire protocol at this stage.

This update should result in no functional change.

For more information see the thread at

 http://lists.openstack.org/pipermail/openstack-discuss/2019-March/003825.html

Change-Id: I6758879f5f50b2e14aed55c4427f5f42614c7f3f
2019-03-24 20:33:29 +00:00
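The mechanical change described in this commit boils down to a host-preserving scheme substitution. A sketch of the idea (the migration used its own tooling, so the exact command below is an assumption, not the real script):

```shell
# Sketch only: rewrite git:// URLs for git.openstack.org to https:// equivalents.
# The actual tooling used for the bulk change is not shown in the commit.
rewrite_url() {
    echo "$1" | sed 's|git://git\.openstack\.org|https://git.openstack.org|g'
}
rewrite_url "git://git.openstack.org/openstack/devstack-plugin-container"
```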
9690a7709b Update .gitreview for stable/stein
Change-Id: Id63d9b0fdade34cce3edfc8a985443e4f1dbafa5
2019-03-08 01:12:48 +00:00
28 changed files with 240 additions and 778 deletions

.gitignore vendored

@@ -1 +0,0 @@
.tox


@@ -2,4 +2,4 @@
host=review.opendev.org
port=29418
project=openstack/devstack-plugin-container.git
defaultbranch=stable/xena
defaultbranch=stable/stein


@@ -1,7 +1,6 @@
- job:
name: devstack-plugin-container-dsvm
parent: devstack
pre-run: playbooks/devstack-plugin-container-dsvm/pre.yaml
parent: legacy-dsvm-base
run: playbooks/devstack-plugin-container-dsvm/run.yaml
post-run: playbooks/devstack-plugin-container-dsvm/post.yaml
timeout: 4200
@@ -9,55 +8,12 @@
- openstack/devstack
- openstack/devstack-gate
- openstack/devstack-plugin-container
vars:
devstack_localrc:
USE_PYTHON3: true
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
- job:
name: devstack-plugin-container-k8s
parent: devstack-minimal
nodeset: openstack-two-node-focal
pre-run: playbooks/devstack-plugin-container-k8s/pre.yaml
run: playbooks/devstack-plugin-container-k8s/run.yaml
post-run: playbooks/devstack-plugin-container-k8s/post.yaml
timeout: 7200
required-projects:
- openstack/devstack
- openstack/devstack-gate
- openstack/devstack-plugin-container
vars:
devstack_services:
# Ignore any default set by devstack. Emit a "disable_all_services".
base: false
etcd3: true
container: true
k8s-master: true
devstack_localrc:
K8S_TOKEN: "9agf12.zsu5uh2m4pzt3qba"
USE_PYTHON3: true
devstack_plugins:
devstack-plugin-container: https://opendev.org/openstack/devstack-plugin-container
group-vars:
subnode:
devstack_services:
# Ignore any default set by devstack. Emit a "disable_all_services".
base: false
container: true
k8s-node: true
devstack_localrc:
K8S_TOKEN: "9agf12.zsu5uh2m4pzt3qba"
USE_PYTHON3: true
- project:
check:
jobs:
- openstack-tox-bashate
- devstack-plugin-container-dsvm
- devstack-plugin-container-k8s:
- devstack-plugin-container-dsvm:
voting: false
gate:
jobs:
- openstack-tox-bashate
- devstack-plugin-container-dsvm
- noop


@@ -1,19 +0,0 @@
The source repository for this project can be found at:
https://opendev.org/openstack/devstack-plugin-container
Pull requests submitted through GitHub are not monitored.
To start contributing to OpenStack, follow the steps in the contribution guide
to set up and use Gerrit:
https://docs.openstack.org/contributors/code-and-documentation/quick-start.html
Bugs should be filed on Launchpad:
https://bugs.launchpad.net/devstack
For more specific information about contributing to this repository, see the
Devstack contributor guide:
https://docs.openstack.org/devstack/latest/contributor/contributing.html


@@ -2,8 +2,8 @@
Container Plugin
================
This plugin enables installation of a container engine and Kubernetes on
Devstack. The default container engine is Docker.
This plugin enables installation of a container engine on Devstack. The default
container engine is Docker (currently this plugin supports only Docker).
====================
Enabling in Devstack
@@ -21,59 +21,11 @@ For more info on devstack installation follow the below link:
2. Add this repo as an external repository
------------------------------------------
This plugin supports installing Kubernetes or the container engine only.
To install the container engine only, use the following config:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
END
To install Kata Containers, use the following config:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
ENABLE_KATA_CONTAINERS=True
END
To install Kubernetes, use the following config on the master node:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
enable_service etcd3
enable_service container
enable_service k8s-master
# kubeadm token generate
K8S_TOKEN="9agf12.zsu5uh2m4pzt3qba"
...
END
And use the following config on the worker node:
.. code-block:: ini
cat > /opt/stack/devstack/local.conf << END
[[local|localrc]]
SERVICE_HOST=10.0.0.11 # change this to controller's IP address
enable_plugin devstack-plugin-container https://git.openstack.org/openstack/devstack-plugin-container
enable_service container
enable_service k8s-node
# kubeadm token generate
K8S_TOKEN="9agf12.zsu5uh2m4pzt3qba"
...
END
3. Run devstack
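The `K8S_TOKEN` in the snippets above is a kubeadm bootstrap token; per the inline comment it comes from `kubeadm token generate` and must match the `[a-z0-9]{6}.[a-z0-9]{16}` format. A quick sanity check of a token (a sketch using the sample value from the snippets):

```shell
# Validate the kubeadm bootstrap token format: 6 chars, a dot, 16 chars.
K8S_TOKEN="9agf12.zsu5uh2m4pzt3qba"
if echo "$K8S_TOKEN" | grep -Eq '^[a-z0-9]{6}\.[a-z0-9]{16}$'; then
    echo "token format ok"
fi
```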

contrib/gate_hook.sh Normal file

@@ -0,0 +1,21 @@
#!/bin/bash -x
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
#
# This script is executed inside gate_hook function in devstack gate.
# Keep all devstack settings here instead of project-config for easier
# maintenance if we want to change devstack config settings in the future.
$BASE/new/devstack-gate/devstack-vm-gate.sh

contrib/post_test_hook.sh Normal file

@@ -0,0 +1,48 @@
#!/bin/bash -x
#
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
# This script is executed inside post_test_hook function in devstack gate.
# Sleep for a while until all services have started
sleep 5
# Check if a function already exists
function function_exists {
declare -f -F "$1" > /dev/null
}
if ! function_exists echo_summary; then
function echo_summary {
echo "$@"
}
fi
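`declare -f -F` exits nonzero when the named function is undefined, which is what makes the guard above work. A minimal standalone demonstration (the function names are illustrative):

```shell
# declare -f -F succeeds only if the name is a defined shell function.
function function_exists {
    declare -f -F "$1" > /dev/null
}
function demo_fn { :; }
function_exists demo_fn && echo "demo_fn exists"
function_exists no_such_fn || echo "no_such_fn missing"
```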
# Save trace setting
XTRACE=$(set +o | grep xtrace)
set -o xtrace
echo_summary "Devstack-plugin-container's post_test_hook.sh was called..."
(set -o posix; set)
# Verify that Docker is installed correctly by running the hello-world image
sudo -H -u stack docker run hello-world
EXIT_CODE=$?
# Copy over docker systemd unit journals.
mkdir -p $WORKSPACE/logs
sudo journalctl -o short-precise --unit docker | sudo tee $WORKSPACE/logs/docker.txt > /dev/null
$XTRACE
exit $EXIT_CODE


@@ -1,94 +0,0 @@
#!/bin/bash
#
# lib/cni/plugins
# Common CNI plugins functions
# Dependencies:
# ``functions`` file
# ``STACK_USER`` has to be defined
# Save trace setting
_XTRACE_CONTAINER_CNI_PLUGINS=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
CNI_PLUGINS_BIN_DIR=/opt/cni/bin
# install all plugins by default
CNI_PLUGINS_INSTALL_PLUGINS=${CNI_PLUGINS_INSTALL_PLUGINS:-flannel,ptp,host-local,portmap,tuning,vlan,host-device,sample,dhcp,ipvlan,macvlan,loopback,bridge}
CNI_PLUGINS_CONF_SOURCE_DIR=${CNI_PLUGINS_CONF_SOURCE_DIR:-$DEST/devstack-plugin-container/etc/cni/net.d}
CNI_PLUGINS_CONF_DIR=${CNI_PLUGINS_CONF_DIR:-/etc/cni/net.d}
CNI_PLUGINS_VERSION=${CNI_PLUGINS_VERSION:-v0.7.1}
CNI_PLUGINS_SHA256_AMD64=${CNI_PLUGINS_SHA256_AMD64:-"6ecc5c7dbb8e4296b0d0d017e5440618e19605b9aa3b146a2c29af492f299dc7"}
CNI_PLUGINS_SHA256_ARM64=${CNI_PLUGINS_SHA256_ARM64:-"258080b94bfc54bd54fd0ea7494efc31806aa4b2836ba3f2d189e0fc16fab0ef"}
CNI_PLUGINS_SHA256_PPC64=${CNI_PLUGINS_SHA256_PPC64:-"a515c45a52e752249bb0e9feac1654c5d38974df6a36148778f6eeab9826f706"}
CNI_PLUGINS_SHA256_S390X=${CNI_PLUGINS_SHA256_S390X:-"24e31be69a012395f1026cd37d125f5f81001cfc36434d8f7a17b36bc5f1e6ad"}
# Make sure we download the CNI plugins for the correct architecture
if is_arch "x86_64"; then
CNI_PLUGINS_ARCH="amd64"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_AMD64}
elif is_arch "aarch64"; then
CNI_PLUGINS_ARCH="arm64"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_ARM64}
elif is_arch "ppc64le"; then
CNI_PLUGINS_ARCH="ppc64le"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_PPC64}
elif is_arch "s390x"; then
CNI_PLUGINS_ARCH="s390x"
CNI_PLUGINS_SHA256=${CNI_PLUGINS_SHA256:-$CNI_PLUGINS_SHA256_S390X}
else
exit_distro_not_supported "invalid hardware type"
fi
CNI_PLUGINS_DOWNLOAD_URL=${CNI_PLUGINS_DOWNLOAD_URL:-https://github.com/containernetworking/plugins/releases/download}
CNI_PLUGINS_DOWNLOAD_FILE=cni-plugins-$CNI_PLUGINS_ARCH-$CNI_PLUGINS_VERSION.tgz
CNI_PLUGINS_DOWNLOAD_LOCATION=$CNI_PLUGINS_DOWNLOAD_URL/$CNI_PLUGINS_VERSION/$CNI_PLUGINS_DOWNLOAD_FILE
# Installs standard cni plugins.
function install_cni_plugins {
echo "Installing CNI standard plugins"
# Download and cache the cni plugins tgz for subsequent use
local cni_plugins_file
cni_plugins_file="$(get_extra_file $CNI_PLUGINS_DOWNLOAD_LOCATION)"
if [ ! -d "$FILES/cniplugins" ]; then
echo "${CNI_PLUGINS_SHA256} $cni_plugins_file" > $FILES/cniplugins.sha256sum
# remove the damaged file when checksum fails
sha256sum -c $FILES/cniplugins.sha256sum || (sudo rm -f $cni_plugins_file; exit 1)
mkdir $FILES/cniplugins
tar xzvf $cni_plugins_file -C $FILES/cniplugins
fi
for plugin in ${CNI_PLUGINS_INSTALL_PLUGINS//,/ }; do
if [ -e "$FILES/cniplugins/$plugin" ]; then
echo "Install plugin: $plugin"
sudo install -o "$STACK_USER" -m 0555 -D "$FILES/cniplugins/$plugin" \
"$CNI_PLUGINS_BIN_DIR/$plugin"
else
echo "Skip installing plugin: $plugin"
fi
done
}
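The cache-verify-extract pattern in install_cni_plugins can be exercised in isolation. A sketch with a throwaway file and its known SHA-256 digest:

```shell
# Verify a downloaded file against a pinned SHA-256 before unpacking (sketch).
tmpfile=$(mktemp)
printf 'hello' > "$tmpfile"
# Known SHA-256 of the five bytes "hello".
sha="2cf24dba5fb0a30e26e83b2ac5b9e29e1b161e5c1fa7425e73043362938b9824"
echo "$sha  $tmpfile" > "$tmpfile.sha256sum"
sha256sum -c "$tmpfile.sha256sum" && echo "checksum ok"
```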
# Configure cni plugins.
function configure_cni_plugins {
echo "Configuring CNI plugins"
for plugin in ${CNI_PLUGINS_INSTALL_PLUGINS//,/ }; do
local source_config_file
source_config_file=$(ls ${CNI_PLUGINS_CONF_SOURCE_DIR}/*${plugin}.conf 2> /dev/null || true)
if [ -n "$source_config_file" ]; then
echo "Found config file for plugin: $plugin"
sudo install -o "$STACK_USER" -m 0664 -t "$CNI_PLUGINS_CONF_DIR" -D \
"${source_config_file}"
else
echo "Config file not found for plugin: $plugin"
fi
done
}
# Restore xtrace
$_XTRACE_CONTAINER_CNI_PLUGINS


@@ -26,10 +26,10 @@ CRIO_ENGINE_SOCKET_FILE=${CRIO_ENGINE_SOCKET_FILE:-/var/run/crio/crio.sock}
function check_crio {
if is_ubuntu; then
dpkg -l | grep cri-o > /dev/null 2>&1
else
false
# TODO: CentOS/Fedora support.
fi
}
@@ -40,22 +40,13 @@ function install_crio {
local lsb_dist=${os_VENDOR,,}
local dist_version=${os_CODENAME}
local kubic_obs_project_key="2472d6d0d2f66af87aba8da34d64390375060aa4"
local os="x${os_VENDOR}_${os_RELEASE}"
local arch=$(dpkg --print-architecture)
if is_ubuntu; then
apt_get install apt-transport-https ca-certificates \
software-properties-common
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 \
--recv ${kubic_obs_project_key}
sudo apt-add-repository "deb https://download.opensuse.org/"`
`"repositories/devel:/kubic:/libcontainers:/stable/${os}/ /"
sudo apt-add-repository "deb http://download.opensuse.org/"`
`"repositories/devel:/kubic:/libcontainers:/stable:/cri-o:/"`
`"${CRIO_VERSION}/${os}/ /"
apt_get install apt-transport-https ca-certificates software-properties-common
sudo add-apt-repository -y ppa:projectatomic/ppa
# Installing podman and containerd will get us compatible versions of
# cri-o and runc. And we need podman to manage container images anyway.
apt_get install podman buildah cri-o-runc cri-o
apt_get install podman buildah
elif is_fedora; then
if [[ "$lsb_dist" = "centos" ]]; then
sudo yum-config-manager \
@@ -83,6 +74,16 @@ function configure_crio {
iniset -sudo ${crio_conf} crio.runtime log_level \"info\"
fi
if is_ubuntu; then
# On Ubuntu a special vendored version of runc is installed with
# cri-o, so cri-o will not work with the system's version of runc.
# Moreover, the vendored runc is not placed into /usr/bin, where
# crio.conf states that it will be. We fix that by linking the vendored
# binary into /usr/bin.
if [[ ! -e /usr/bin/runc ]]; then
sudo ln -s /usr/lib/cri-o-runc/sbin/runc /usr/bin/runc
sudo chmod +x /usr/bin/runc
fi
# At least for 18.04 we need to set up /etc/containers/registries.conf
# with some initial content. That's another bug with that PPA.
local registries_conf


@@ -26,26 +26,18 @@ DOCKER_ENGINE_PORT=${DOCKER_ENGINE_PORT:-2375}
DOCKER_CLUSTER_STORE=${DOCKER_CLUSTER_STORE:-}
DOCKER_GROUP=${DOCKER_GROUP:-$STACK_USER}
DOCKER_CGROUP_DRIVER=${DOCKER_CGROUP_DRIVER:-}
# TODO(hongbin): deprecate and remove clear container
ENABLE_CLEAR_CONTAINER=$(trueorfalse False ENABLE_CLEAR_CONTAINER)
ENABLE_KATA_CONTAINERS=$(trueorfalse False ENABLE_KATA_CONTAINERS)
ENABLE_CONTAINERD_CRI=$(trueorfalse False ENABLE_CONTAINERD_CRI)
ENABLE_LIVE_RESTORE=$(trueorfalse False ENABLE_LIVE_RESTORE)
ENABLE_IPV6=$(trueorfalse False ENABLE_IPV6)
KATA_BRANCH=${KATA_BRANCH:-master}
KATA_RUNTIME=${KATA_RUNTIME:-kata-runtime}
CONTAINERD_CONF_DIR=/etc/containerd
CONTAINERD_CONF=$CONTAINERD_CONF_DIR/config.toml
# Functions
# ---------
function check_docker {
if is_ubuntu; then
dpkg -s docker-engine > /dev/null 2>&1 || dpkg -s docker-ce > /dev/null 2>&1
else
rpm -q docker-engine > /dev/null 2>&1 || rpm -q docker > /dev/null 2>&1 || rpm -q docker-ce > /dev/null 2>&1
fi
}
@@ -56,10 +48,8 @@ function install_docker {
local lsb_dist=${os_VENDOR,,}
local dist_version=${os_CODENAME}
local arch
arch=$(dpkg --print-architecture)
if is_ubuntu; then
apt_get install apparmor
if [[ ${dist_version} == 'trusty' ]]; then
if uname -r | grep -q -- '-generic' && dpkg -l 'linux-image-*-generic' | grep -qE '^ii|^hi' 2>/dev/null; then
apt_get install linux-image-extra-$(uname -r) linux-image-extra-virtual
@@ -87,23 +77,9 @@ function install_docker {
fi
yum_install docker-ce
fi
if [[ "$ENABLE_KATA_CONTAINERS" == "True" ]]; then
# Kata Containers can't run inside VM, so check whether virtualization
# is enabled or not
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
if is_ubuntu; then
install_kata_container_ubuntu
elif is_fedora; then
install_kata_container_fedora
fi
else
(>&2 echo "WARNING: Kata Containers needs the CPU extensions svm or vmx which is not enabled. Skipping Kata Containers installation.")
fi
# TODO(hongbin): deprecate and remove clear container
elif [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
if [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
# Clear Container can't run inside VM, so check whether virtualization
# is enabled or not
(>&2 echo "WARNING: Clear Container support is deprecated in Train release and will be removed in U release.")
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
if is_ubuntu; then
install_clear_container_ubuntu
@@ -114,27 +90,9 @@ function install_docker {
(>&2 echo "WARNING: Clear Container needs the CPU extensions svm or vmx which is not enabled. Skipping Clear Container installation.")
fi
fi
if [[ "$ENABLE_CONTAINERD_CRI" == "True" ]]; then
source $DEST/devstack-plugin-container/devstack/lib/cni/plugins
install_cni_plugins
source $DEST/devstack-plugin-container/devstack/lib/tools/crictl
install_crictl
fi
}
function configure_docker {
if [[ ${ENABLE_CONTAINERD_CRI} == "True" ]]; then
source $DEST/devstack-plugin-container/devstack/lib/cni/plugins
configure_cni_plugins
configure_containerd
source $DEST/devstack-plugin-container/devstack/lib/tools/crictl
configure_crictl
fi
# After an ./unstack it will be stopped. So it is ok if it returns exit-code == 1
sudo systemctl stop docker.service || true
@@ -143,18 +101,7 @@ function configure_docker {
cluster_store_opts+="\"cluster-store\": \"$DOCKER_CLUSTER_STORE\","
fi
local runtime_opts=""
if [[ "$ENABLE_KATA_CONTAINERS" == "True" ]]; then
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
runtime_opts+="\"runtimes\": {
\"$KATA_RUNTIME\": {
\"path\": \"/usr/bin/kata-runtime\"
}
},
\"default-runtime\": \"$KATA_RUNTIME\","
fi
# TODO(hongbin): deprecate and remove clear container
elif [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
(>&2 echo "WARNING: Clear Container support is deprecated in Train release and will be removed in U release.")
if [[ "$ENABLE_CLEAR_CONTAINER" == "True" ]]; then
if sudo grep -E 'svm|vmx' /proc/cpuinfo &> /dev/null; then
runtime_opts+="\"runtimes\": {
\"cor\": {
@@ -221,42 +168,10 @@ EOF
sudo systemctl --no-block restart docker.service
}
function configure_containerd {
sudo mkdir -p $CONTAINERD_CONF_DIR
sudo chown -R $STACK_USER $CONTAINERD_CONF_DIR
stack_user_gid=$(getent group $STACK_USER | cut -d: -f3)
cat <<EOF | sudo tee $CONTAINERD_CONF >/dev/null
[grpc]
gid = $stack_user_gid
[debug]
level = "debug"
EOF
if [[ "$ENABLE_KATA_CONTAINERS" == "True" ]]; then
cat <<EOF | sudo tee -a $CONTAINERD_CONF >/dev/null
[plugins]
[plugins.cri]
[plugins.cri.containerd]
[plugins.cri.containerd.runtimes.${KATA_RUNTIME}]
runtime_type = "io.containerd.kata.v2"
EOF
fi
sudo systemctl --no-block restart containerd.service
}
function stop_docker {
sudo systemctl stop docker.service || true
}
function cleanup_docker {
uninstall_package docker-ce
rm -f $CONTAINERD_CONF
}
# TODO(hongbin): deprecate and remove clear container
function install_clear_container_ubuntu {
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/clearlinux:/preview:/clear-containers-2.1/xUbuntu_$(lsb_release -rs)/ /' >> /etc/apt/sources.list.d/cc-oci-runtime.list"
curl -fsSL http://download.opensuse.org/repositories/home:/clearlinux:/preview:/clear-containers-2.1/xUbuntu_$(lsb_release -rs)/Release.key | sudo apt-key add -
@@ -264,7 +179,6 @@ function install_clear_container_ubuntu {
apt_get install cc-oci-runtime
}
# TODO(hongbin): deprecate and remove clear container
function install_clear_container_fedora {
source /etc/os-release
local lsb_dist=${os_VENDOR,,}
@@ -276,31 +190,5 @@ function install_clear_container_fedora {
yum_install cc-oci-runtime linux-container
}
function install_kata_container_ubuntu {
sudo sh -c "echo 'deb http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/xUbuntu_${os_RELEASE}/ /' \
> /etc/apt/sources.list.d/kata-containers.list"
curl -sL http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/xUbuntu_${os_RELEASE}/Release.key \
| sudo apt-key add -
REPOS_UPDATED=False apt_get_update
apt_get install kata-runtime kata-proxy kata-shim
}
function install_kata_container_fedora {
source /etc/os-release
if [[ -x $(command -v dnf 2>/dev/null) ]]; then
sudo dnf -y install dnf-plugins-core
sudo -E dnf config-manager --add-repo \
"http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/Fedora_${VERSION_ID}/home:katacontainers:releases:$(arch):${KATA_BRANCH}.repo"
elif [[ -x $(command -v yum 2>/dev/null) ]]; then
# all rh platforms (fedora, centos, rhel) have this pkg
sudo yum -y install yum-utils
sudo -E yum-config-manager --add-repo \
"http://download.opensuse.org/repositories/home:/katacontainers:/releases:/$(arch):/${KATA_BRANCH}/CentOS_${VERSION_ID}/home:katacontainers:releases:$(arch):${KATA_BRANCH}.repo"
else
die $LINENO "Unable to find or auto-install Kata Containers"
fi
yum_install kata-runtime kata-proxy kata-shim
}
# Restore xtrace
$_XTRACE_DOCKER


@@ -1,158 +0,0 @@
#!/bin/bash
# Dependencies:
#
# - functions
# - ``STACK_USER`` must be defined
# stack.sh
# --------
# - install_k8s
# The following variables are assumed to be defined by certain functions:
#
# - ``http_proxy`` ``https_proxy`` ``no_proxy``
# Save trace setting
_XTRACE_DOCKER=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
K8S_TOKEN=${K8S_TOKEN:-""}
K8S_API_SERVER_IP=${K8S_API_SERVER_IP:-$SERVICE_HOST}
K8S_NODE_IP=${K8S_NODE_IP:-$HOST_IP}
K8S_API_SERVER_PORT=${K8S_API_SERVER_PORT:-6443}
K8S_POD_NETWORK_CIDR=${K8S_POD_NETWORK_CIDR:-10.244.0.0/16}
K8S_SERVICE_NETWORK_CIDR=${K8S_SERVICE_NETWORK_CIDR:-10.96.0.0/12}
K8S_VERSION=${K8S_VERSION:-1.19.0-00}
K8S_NETWORK_ADDON=${K8S_NETWORK_ADDON:-flannel}
# Functions
# ---------
function is_k8s_enabled {
[[ ,${ENABLED_SERVICES} =~ ,"k8s-" ]] && return 0
return 1
}
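The leading comma in `,${ENABLED_SERVICES}` anchors the match at a service-name boundary, so only services whose names start with `k8s-` count. A standalone sketch (service lists are illustrative):

```shell
# Match only service names that begin with "k8s-", using the comma as a boundary.
is_k8s_enabled() {
    [[ ,${ENABLED_SERVICES} =~ ,"k8s-" ]]
}
ENABLED_SERVICES="etcd3,container,k8s-master"
is_k8s_enabled && echo "k8s enabled"
ENABLED_SERVICES="etcd3,container"
is_k8s_enabled || echo "k8s disabled"
```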
function install_kubeadm {
if is_ubuntu; then
apt_get install apt-transport-https curl
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
sudo add-apt-repository -y \
"deb https://apt.kubernetes.io/ kubernetes-xenial main"
REPOS_UPDATED=False apt_get_update
apt_get install kubelet=$K8S_VERSION kubeadm=$K8S_VERSION kubectl=$K8S_VERSION
sudo apt-mark hold kubelet kubeadm kubectl
# NOTE(hongbin): This work-around an issue that kubelet pick a wrong
# IP address if the node has multiple network interfaces.
# See https://github.com/kubernetes/kubeadm/issues/203
echo "KUBELET_EXTRA_ARGS=--node-ip=$K8S_NODE_IP" | sudo tee -a /etc/default/kubelet
sudo systemctl daemon-reload && sudo systemctl restart kubelet
else
(>&2 echo "WARNING: kubeadm installation is not supported in this distribution.")
fi
}
function kubeadm_init {
local kubeadm_config_file
kubeadm_config_file=$(mktemp)
cat <<EOF | tee $kubeadm_config_file >/dev/null
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
imageRepository: "${KUBEADMIN_IMAGE_REPOSITORY}"
etcd:
external:
endpoints:
- "http://${SERVICE_HOST}:${ETCD_PORT}"
networking:
podSubnet: "${K8S_POD_NETWORK_CIDR}"
serviceSubnet: "${K8S_SERVICE_NETWORK_CIDR}"
---
apiVersion: kubeadm.k8s.io/v1beta1
bootstrapTokens:
- token: "${K8S_TOKEN}"
ttl: 0s
kind: InitConfiguration
localAPIEndpoint:
advertiseAddress: "${K8S_API_SERVER_IP}"
bindPort: ${K8S_API_SERVER_PORT}
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
EOF
sudo kubeadm config images pull --image-repository=${KUBEADMIN_IMAGE_REPOSITORY}
sudo kubeadm init --config $kubeadm_config_file --ignore-preflight-errors Swap
local kube_config_file=$HOME/.kube/config
sudo mkdir -p $(dirname ${kube_config_file})
sudo cp /etc/kubernetes/admin.conf $kube_config_file
safe_chown $STACK_USER:$STACK_USER $kube_config_file
if [[ "$K8S_NETWORK_ADDON" == "flannel" ]]; then
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/4ff77dc7c35851913587f7daccf25d754e77aa65/Documentation/kube-flannel.yml
fi
}
function kubeadm_join {
local kubeadm_config_file
kubeadm_config_file=$(mktemp)
cat <<EOF | tee $kubeadm_config_file >/dev/null
apiVersion: kubeadm.k8s.io/v1beta1
kind: JoinConfiguration
discovery:
bootstrapToken:
apiServerEndpoint: "${K8S_API_SERVER_IP}:${K8S_API_SERVER_PORT}"
token: "${K8S_TOKEN}"
unsafeSkipCAVerification: true
tlsBootstrapToken: "${K8S_TOKEN}"
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
failSwapOn: false
EOF
sudo kubeadm join --config $kubeadm_config_file --ignore-preflight-errors Swap
}
function start_collect_logs {
wait_for_kube_service 180 component=kube-controller-manager
wait_for_kube_service 60 component=kube-apiserver
wait_for_kube_service 30 component=kube-scheduler
wait_for_kube_service 30 k8s-app=kube-proxy
run_process kube-controller-manager "/usr/bin/kubectl logs -n kube-system -f -l component=kube-controller-manager"
run_process kube-apiserver "/usr/bin/kubectl logs -n kube-system -f -l component=kube-apiserver"
run_process kube-scheduler "/usr/bin/kubectl logs -n kube-system -f -l component=kube-scheduler"
run_process kube-proxy "/usr/bin/kubectl logs -n kube-system -f -l k8s-app=kube-proxy"
}
function wait_for_kube_service {
local timeout=$1
local selector=$2
local rval=0
time_start "wait_for_service"
timeout $timeout bash -x <<EOF || rval=$?
NAME=""
while [[ "\$NAME" == "" ]]; do
sleep 1
NAME=\$(kubectl wait --for=condition=Ready pod -n kube-system -l $selector -o name)
done
EOF
time_stop "wait_for_service"
# Figure out what's happening on platforms where this doesn't work
if [[ "$rval" != 0 ]]; then
echo "Didn't find kube service after $timeout seconds"
kubectl get pods -n kube-system -l $selector
fi
return $rval
}
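The `timeout`-plus-heredoc construct in wait_for_kube_service is a generic way to poll until a condition holds, with an overall deadline. A reduced sketch of the same shape (the condition and names here are illustrative, not from the plugin):

```shell
# Poll a condition with an overall deadline, as wait_for_kube_service does.
wait_for_condition() {
    local deadline=$1; shift
    timeout "$deadline" bash -c "until $*; do sleep 1; done"
}
marker=$(mktemp)
wait_for_condition 5 "test -e $marker" && echo "condition met"
```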
function kubeadm_reset {
sudo kubeadm reset --force
}
# Restore xtrace
$_XTRACE_DOCKER


@@ -1,76 +0,0 @@
#!/bin/bash
#
# lib/tools/crictl
# CRI command line tools functions
# Dependencies:
# ``functions`` file
# ``STACK_USER`` has to be defined
# Save trace setting
_XTRACE_CONTAINER_TOOLS_CRICTL=$(set +o | grep xtrace)
set +o xtrace
# Defaults
# --------
CRICTL_BIN_DIR=/usr/local/bin
CRICTL_VERSION=${CRICTL_VERSION:-v1.17.0}
CRICTL_SHA256_AMD64=${CRICTL_SHA256_AMD64:-"7b72073797f638f099ed19550d52e9b9067672523fc51b746e65d7aa0bafa414"}
CRICTL_SHA256_ARM64=${CRICTL_SHA256_ARM64:-"d89afd89c2852509fafeaff6534d456272360fcee732a8d0cb89476377387e12"}
CRICTL_SHA256_PPC64=${CRICTL_SHA256_PPC64:-"a61c52b9ac5bffe94ae4c09763083c60f3eccd30eb351017b310f32d1cafb855"}
CRICTL_SHA256_S390X=${CRICTL_SHA256_S390X:-"0db445f0b74ecb51708b710480a462b728174155c5f2709a39d1cc2dc975e350"}
# Make sure we download crictl for the correct architecture
if is_arch "x86_64"; then
CRICTL_ARCH="amd64"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_AMD64}
elif is_arch "aarch64"; then
CRICTL_ARCH="arm64"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_ARM64}
elif is_arch "ppc64le"; then
CRICTL_ARCH="ppc64le"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_PPC64}
elif is_arch "s390x"; then
CRICTL_ARCH="s390x"
CRICTL_SHA256=${CRICTL_SHA256:-$CRICTL_SHA256_S390X}
else
exit_distro_not_supported "invalid hardware type"
fi
CRICTL_DOWNLOAD_URL=${CRICTL_DOWNLOAD_URL:-https://github.com/kubernetes-sigs/cri-tools/releases/download}
CRICTL_DOWNLOAD_FILE=crictl-$CRICTL_VERSION-linux-$CRICTL_ARCH.tar.gz
CRICTL_DOWNLOAD_LOCATION=$CRICTL_DOWNLOAD_URL/$CRICTL_VERSION/$CRICTL_DOWNLOAD_FILE
# Installs crictl tools.
function install_crictl {
echo "Installing CRI command-line tools"
# Download and cache the crictl tar for subsequent use
local crictl_file
crictl_file="$(get_extra_file $CRICTL_DOWNLOAD_LOCATION)"
if [ ! -f "$FILES/crictl" ]; then
echo "${CRICTL_SHA256} $crictl_file" > $FILES/crictl.sha256sum
# remove the damaged file when checksum fails
sha256sum -c $FILES/crictl.sha256sum || (sudo rm -f $crictl_file; exit 1)
tar xzvf $crictl_file -C $FILES
sudo install -o "$STACK_USER" -m 0555 -D "$FILES/crictl" \
"$CRICTL_BIN_DIR/crictl"
fi
}
# Configure crictl tools.
function configure_crictl {
local crictl_config_file=/etc/crictl.yaml
cat <<EOF | sudo tee $crictl_config_file >/dev/null
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 10
debug: true
EOF
}
# Restore xtrace
$_XTRACE_CONTAINER_TOOLS_CRICTL


@@ -7,7 +7,6 @@ set -o xtrace
echo_summary "container's plugin.sh was called..."
source $DEST/devstack-plugin-container/devstack/lib/docker
source $DEST/devstack-plugin-container/devstack/lib/crio
source $DEST/devstack-plugin-container/devstack/lib/k8s
(set -o posix; set)
if is_service_enabled container; then
@@ -35,32 +34,6 @@ if is_service_enabled container; then
fi
fi
if [[ "$1" == "clean" ]]; then
if [[ ${CONTAINER_ENGINE} == "docker" ]]; then
cleanup_docker
fi
fi
fi
if is_k8s_enabled; then
if [[ "$1" == "stack" && "$2" == "install" ]]; then
install_kubeadm
elif [[ "$1" == "stack" && "$2" == "post-config" ]]; then
if is_service_enabled k8s-master; then
kubeadm_init
elif is_service_enabled k8s-node; then
kubeadm_join
fi
elif [[ "$1" == "stack" && "$2" == "extra" ]]; then
if is_service_enabled k8s-master; then
start_collect_logs
fi
fi
if [[ "$1" == "unstack" ]]; then
kubeadm_reset
fi
if [[ "$1" == "clean" ]]; then
# nothing needed here
:


@@ -2,25 +2,9 @@
# Supported options are "docker" and "crio".
CONTAINER_ENGINE=${CONTAINER_ENGINE:-docker}
# TODO(hongbin): deprecate and remove clear container
ENABLE_CLEAR_CONTAINER=${ENABLE_CLEAR_CONTAINER:-false}
ENABLE_KATA_CONTAINERS=${ENABLE_KATA_CONTAINERS:-false}
ENABLE_LIVE_RESTORE=${ENABLE_LIVE_RESTORE:-false}
ENABLE_IPV6=${ENABLE_IPV6:-false}
K8S_NETWORK_ADDON=${K8S_NETWORK_ADDON:-flannel}
ENABLE_CONTAINERD_CRI=${ENABLE_CONTAINERD_CRI:-false}
CRIO_VERSION=${CRIO_VERSION:-"1.18:/1.18.0"}
# Enable container services
enable_service container
# Enable k8s services
if [[ ,${ENABLED_SERVICES} =~ ,"k8s-master" ]]; then
enable_service kube-controller-manager
enable_service kube-apiserver
enable_service kube-scheduler
enable_service kube-proxy
fi
# Customize kubeadm container images repository
KUBEADMIN_IMAGE_REPOSITORY=${KUBEADMIN_IMAGE_REPOSITORY:-"k8s.gcr.io"}


@@ -1,15 +0,0 @@
{
"cniVersion": "0.2.0",
"name": "mynet",
"type": "bridge",
"bridge": "cni0",
"isGateway": true,
"ipMasq": true,
"ipam": {
"type": "host-local",
"subnet": "10.22.0.0/16",
"routes": [
{ "dst": "0.0.0.0/0" }
]
}
}


@@ -1,5 +0,0 @@
{
"cniVersion": "0.2.0",
"name": "lo",
"type": "loopback"
}


@@ -1,3 +1,80 @@
- hosts: all
roles:
- fetch_docker_log
- hosts: primary
tasks:
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*nose_results.html
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testr_results.html.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.testrepository/tmp*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=**/*testrepository.subunit.gz
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}/tox'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/.tox/*/log/*
- --include=*/
- --exclude=*
- --prune-empty-dirs
- name: Copy files from {{ ansible_user_dir }}/workspace/ on node
synchronize:
src: '{{ ansible_user_dir }}/workspace/'
dest: '{{ zuul.executor.log_root }}'
mode: pull
copy_links: true
verify_host: true
rsync_opts:
- --include=/logs/**
- --include=*/
- --exclude=*
- --prune-empty-dirs


@@ -1,3 +0,0 @@
- hosts: all
roles:
- run-devstack


@@ -1,8 +1,69 @@
- hosts: all
name: Verify that Docker is installed correctly by running the hello-world image
name: Autoconverted job legacy-devstack-plugin-container-dsvm from old job gate-devstack-plugin-container-dsvm-nv
tasks:
- name: Ensure legacy workspace directory
file:
path: '{{ ansible_user_dir }}/workspace'
state: directory
- shell:
cmd: |
set -e
set -x
sudo -H -u stack docker run hello-world
cat > clonemap.yaml << EOF
clonemap:
- name: openstack/devstack-gate
dest: devstack-gate
EOF
/usr/zuul-env/bin/zuul-cloner -m clonemap.yaml --cache-dir /opt/git \
https://opendev.org \
openstack/devstack-gate
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
cat << 'EOF' >>"/tmp/dg-local.conf"
[[local|localrc]]
enable_plugin devstack-plugin-container https://opendev.org/openstack/devstack-plugin-container
EOF
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'
- shell:
cmd: |
set -e
set -x
export PYTHONUNBUFFERED=true
export DEVSTACK_GATE_TEMPEST=0
export BRANCH_OVERRIDE=default
if [ "$BRANCH_OVERRIDE" != "default" ] ; then
export OVERRIDE_ZUUL_BRANCH=$BRANCH_OVERRIDE
fi
export PROJECTS="openstack/devstack-plugin-container $PROJECTS"
# Keep localrc to be able to set some vars in post_test_hook
export KEEP_LOCALRC=1
function gate_hook {
bash -xe $BASE/new/devstack-plugin-container/contrib/gate_hook.sh
}
export -f gate_hook
function post_test_hook {
bash -xe $BASE/new/devstack-plugin-container/contrib/post_test_hook.sh fullstack
}
export -f post_test_hook
cp devstack-gate/devstack-vm-gate-wrap.sh ./safe-devstack-vm-gate-wrap.sh
./safe-devstack-vm-gate-wrap.sh
executable: /bin/bash
chdir: '{{ ansible_user_dir }}/workspace'
environment: '{{ zuul | zuul_legacy_vars }}'


@@ -1,4 +0,0 @@
- hosts: all
roles:
- fetch_docker_log
- fetch_kubelet_log


@@ -1,3 +0,0 @@
- hosts: all
roles:
- orchestrate-devstack


@@ -1,29 +0,0 @@
- hosts: controller
name: Verify that k8s is installed correctly by running a pod
tasks:
- shell:
cmd: |
set -e
set -x
kubectl get nodes
kubectl get pods --namespace kube-system
tmpfile=$(mktemp)
cat <<EOT > $tmpfile
apiVersion: v1
kind: Pod
metadata:
name: myapp-pod
labels:
app: myapp
spec:
containers:
- name: myapp-container
image: busybox
command: ['sh', '-c', 'echo Hello Kubernetes! && sleep 3600']
EOT
kubectl create -f $tmpfile
kubectl wait --for=condition=Ready pod myapp-pod
become: true
become_user: stack


@@ -1,11 +0,0 @@
---
prelude: >
Support installing Kata Containers.
features:
- |
    This release adds support for Kata Containers and configures it to
    work with Docker.
deprecations:
- |
    Support for Clear Containers is deprecated in this release and will be
    removed in the next release.


@@ -1 +0,0 @@
Collect docker log from test run


@@ -1,22 +0,0 @@
- name: Ensure log path exists
become: yes
file:
path: "{{ ansible_user_dir }}/logs"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: 0775
- name: Store docker log in {{ ansible_user_dir }}/logs
become: yes
shell:
cmd: |
sudo journalctl -o short-precise --unit docker | sudo tee {{ ansible_user_dir }}/logs/docker.log > /dev/null
- name: Set docker.log file permissions
become: yes
file:
path: '{{ ansible_user_dir }}/logs/docker.log'
owner: '{{ ansible_user }}'
group: '{{ ansible_user }}'
mode: 0644


@@ -1 +0,0 @@
Collect kubelet log from test run


@@ -1,22 +0,0 @@
- name: Ensure log path exists
become: yes
file:
path: "{{ ansible_user_dir }}/logs"
state: directory
owner: "{{ ansible_user }}"
group: "{{ ansible_user }}"
mode: 0775
- name: Store kubelet log in {{ ansible_user_dir }}/logs
become: yes
shell:
cmd: |
sudo journalctl -o short-precise --unit kubelet | sudo tee {{ ansible_user_dir }}/logs/kubelet.log > /dev/null
- name: Set kubelet.log file permissions
become: yes
file:
path: '{{ ansible_user_dir }}/logs/kubelet.log'
owner: '{{ ansible_user }}'
group: '{{ ansible_user }}'
mode: 0644

tox.ini

@@ -1,35 +0,0 @@
[tox]
minversion = 1.6
skipsdist = True
envlist = bashate
[testenv]
usedevelop = False
install_command = pip install {opts} {packages}
[testenv:bashate]
basepython = python3
# if you want to test out some changes you have made to bashate
# against devstack, just set BASHATE_INSTALL_PATH=/path/... to your
# modified bashate tree
deps =
{env:BASHATE_INSTALL_PATH:bashate==0.5.1}
whitelist_externals = bash
commands = bash -c "find {toxinidir} \
-not \( -type d -name .?\* -prune \) \
-not \( -type d -name doc -prune \) \
-not \( -type f -name localrc -prune \) \
-type f \
-not -name \*~ \
-not -name \*.md \
-not -name stack-screenrc \
-not -name \*.orig \
-not -name \*.rej \
\( \
-name \*.sh -or \
-name \*rc -or \
-name functions\* -or \
-wholename \*/inc/\* -or \
-wholename \*/lib/\* \
\) \
-print0 | xargs -0 bashate -v -iE006 -eE005,E042"