This is available in all supported Cinder branches.
Change-Id: I988d599425d4733e5e40e8b9126454ebcfdcdf0c
Signed-off-by: Eric Harney <eharney@redhat.com>
Replace debian-bookworm with debian-trixie for the base job in order
to use a newer version of qemu that has patch [0] that addresses an
issue where test_boot_cloned_encrypted_volume fails when run on a
fast machine.
The default python in debian-trixie is Python 3.13, which is a supported
python version for the Gazpacho development cycle [1].
Also updates the package prereq file to add some necessary packages
that aren't included by default in trixie.
[0] https://lists.gnu.org/archive/html/qemu-devel/2025-01/msg01071.html
[1] https://governance.openstack.org/tc/reference/runtimes/2026.1.html
Related-bug: #2121941
Change-Id: I0db46ae97e61186f7bc2e0c940cf27278d742146
Signed-off-by: Brian Rosmaita <rosmaita.fossdev@gmail.com>
Tentacle is the latest Ceph release [1], and this patch bumps
devstack-plugin-ceph to deploy this version.
[1] https://docs.ceph.com/en/tentacle/
Change-Id: Id303b97d4ad6bcf1da9f0c39a113bba24a60481e
Signed-off-by: Francesco Pantano <fpantano@redhat.com>
It is possible to set up one-way or two-way replication between pools.
The rbd-mirror daemon is responsible for pulling image updates from the
remote peer cluster and applying them to the image within the local
cluster. This patch makes it possible to enable this daemon as part of
the deployment, but the actual configuration between sites remains out
of scope for the deployment script.
This work aims to support [1].
[1] https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/948293
Change-Id: Id1a66c244b9be33b6df63e6504500c77d59a1b9d
Signed-off-by: Francesco Pantano <fpantano@redhat.com>
Ceph Squid was released on 2024-09-26. It's the
latest supported release, and EOL is planned for
2026-09. [1]
This roughly corresponds to the support timeline
for OpenStack 2025.1 ("Epoxy") and 2025.2 ("Flamingo")[2].
We could plan to change the default version tested
again when Ceph's Tentacle release arrives (estimated: late '25).
[1] https://docs.ceph.com/en/latest/releases/#active-releases
[2] https://releases.openstack.org/
Depends-On: I11a3c8f573e5540840a23459d698197a9c3a8f4c
Change-Id: If6009392198b1d9e9c35e4fabae344a550a07796
Still seeing oom-killer trigger in Cinder Ceph CI
jobs with ceph-osd being a top memory consumer.
It appears to default to trying to use 4GB of memory, so
change this to 2GB to see if it helps.
Also increases the job timeout to 3h for a bit more
headroom, since jobs are regularly taking more than 2h
to complete.
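The memory cap described above can be expressed through Ceph's central config; a minimal sketch, assuming a running cephadm-deployed cluster with an admin keyring available (the plugin may set this differently):

```shell
# Cap each OSD's memory target at 2 GiB instead of the ~4 GiB default.
# The value is in bytes; this requires a reachable Ceph cluster.
ceph config set osd osd_memory_target 2147483648

# Inspect the value that will now apply to all OSDs.
ceph config get osd osd_memory_target
```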
Change-Id: I71a46a452914256f36299623a29ed7eebf8e61d2
When the Ceph ingress IP is not set, devstack is currently failing
to install with CephFS NFS. When it attempts to create the NFS
cluster, it fails because the VIP value is not set.
The VIP should be the same as the ceph ingress IP, but it is only
being set in the manila job definitions, so if you don't provide
it in the local.conf file manually, the value will be empty.
This change fixes it by defaulting the ceph ingress IP to the
host IP in case it wasn't provided in the local.conf.
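The defaulting described above can be sketched with shell parameter expansion; `HOST_IP` is normally set by devstack itself, and the value here is only for a self-contained demo:

```shell
# HOST_IP is normally provided by devstack; set it here for the demo.
HOST_IP=192.0.2.10

# Keep any user-provided value from local.conf, otherwise fall back
# to the devstack host IP, so the NFS cluster VIP is never empty.
CEPH_INGRESS_IP=${CEPH_INGRESS_IP:-$HOST_IP}
echo "$CEPH_INGRESS_IP"
```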
Change-Id: Ib2db0faa5381da9e3d391ff0f887eb92dff9c295
Signed-off-by: Carlos Eduardo <ces.eduardo98@gmail.com>
This mode of deployment isn't supported by the Ceph
community, and was always a chimera that we were
feeding/maintaining.
Ceph's tool of choice for bootstrapping and installing a Ceph
cluster is the Ceph Orchestrator (via the
cephadm tool).
We're also cleaning up the old/unused and poorly tested
"CONTAINERIZED_CEPH" option. When using the Ceph Orchestrator,
Ceph daemons run within podman containers on the
devstack host.
Change-Id: I5f75cb829383d7acd536e24c70cc4418d93c13bc
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
Manila supports using a standalone NFS-Ganesha server
as well as a ceph orchestrator deployed NFS-Ganesha cluster
("ceph nfs service"). We've only ever allowed using
ceph orch deployed NFS with ceph orch deployed clusters
through this devstack plugin. With this change,
the plugin can optionally deploy a standalone
NFS-Ganesha service with a ceph orch deployed
ceph cluster. This will greatly simplify testing when we sunset
the package based installation/deployment of ceph.
Depends-On: I2198eee3892b2bb0eb835ec66e21b708152b33a9
Change-Id: If983bb5d5a5fc0c16c1cead84b5fa30ea961d21b
Implements: bp/cephadm-deploy
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
During the devstack setup, if you have set `CEPHADM_DEPLOY=True`,
we are not considering cephfsnative as the default driver, even
though that was the expected behavior before.
If the driver is not set in the local.conf either, the devstack
setup fails.
That is fixed by enforcing `cephfsnative` as the default value for
the Ceph backend. In case users want to deploy devstack with Ceph
NFS, they can continue using the same workflow (setting
`MANILA_CEPH_DRIVER=cephfsnfs`).
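A minimal sketch of the enforced default (the variable name and values come from the commit; the defaulting idiom is illustrative):

```shell
# Default to the CephFS native driver unless local.conf already set one.
MANILA_CEPH_DRIVER=${MANILA_CEPH_DRIVER:-cephfsnative}
echo "$MANILA_CEPH_DRIVER"
```

Deployers who want Ceph NFS keep the same workflow: local.conf carries `MANILA_CEPH_DRIVER=cephfsnfs`, and the default above never fires.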
Change-Id: I51660fa2466fff873f2230e683661b53874bf862
Signed-off-by: Carlos Eduardo <ces.eduardo98@gmail.com>
Fixes the ingress daemon for Ceph versions 18.0+
by setting the correct VIP.
Ingress daemon was added in Caracal, and when
backported to Bobcat encountered CI failures.
In this patch I've added verbose MDS logging
capabilities, and fixed the failures that were
present in Bobcat stable CI, cephfs-nfs-cephadm
job. History is in patchset 908940.
Resubmitting starting at master to backport
to the stable branches in the correct order.
Depends-On: I5b7fd5b2b557203189c25fa2a988d790e7fda3eb
Change-Id: Ia1671de5c770d1cf5a3cd58e05fe5204f5bbc3c6
download.ceph.com used to maintain named versions of ceph
releases, which were hardlinked folders corresponding
to the latest minor release from a stable branch; we've noticed
that these folders can be deleted. Let's instead look for
cephadm under the corresponding numeric release tag folder.
Change-Id: Ic39b48fb2dd48f47d5b3c6165e4f4c6b1c47cc7d
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
For releases older than reef (e.g., octopus, pacific), packages are not
always available under the el9 subdirectory.
This patch introduces a switch case to make sure we're able to match the
right version of cephadm.
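The idea can be sketched as a small shell `case`; the directory names below are assumptions for illustration, not the plugin's literal strings:

```shell
# Map a Ceph release name to the distro subdirectory where the
# cephadm binary can be fetched: releases older than reef are not
# always published under el9, so fall back to el8 for those.
cephadm_subdir() {
    case "$1" in
        octopus|pacific)
            echo "el8" ;;
        *)
            echo "el9" ;;
    esac
}

cephadm_subdir pacific
cephadm_subdir reef
```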
Change-Id: I0ee37b832f1ea47961528f074f2d42492b0ac755
Ceph release tags adhere to a versioning scheme x.y.z [1], where:
- x = major release number (e.g.: quincy is 17, reef is 18)
- y = 1 or 2, where 1 is a release candidate, and 2 is a stable release
- z = patch/updates
We shouldn't hardcode a patch version in the default container
image we're fetching in our jobs, unless absolutely necessary
for some bugfix/feature that we rely on.
[1] https://docs.ceph.com/en/latest/releases/general/
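Concretely, pinning only the major release in the default image reference looks roughly like this; the registry path is the one the Ceph community publishes, and the exact tags shown are illustrative:

```shell
# Prefer a floating major tag over a hardcoded patch release:
CEPH_IMAGE=quay.io/ceph/ceph:v18        # follows the latest 18.2.z
# rather than:
# CEPH_IMAGE=quay.io/ceph/ceph:v18.2.0  # frozen at one patch level
```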
Related-Bug: #1989273
Change-Id: Iea541d2edefc871bcac2d965997c88462fcbe521
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
Reverts a bad rebase to fix the cephadm binary source.
Also adds catatonit to the rpms file to fix CI issues.
Change-Id: Ie1b1dc0ef2508eae38ae7954fb0bb62653780644
Add podman, ceph-common, and jq as part of the preinstall dependencies.
Add REMOTE_CEPH capabilities to the CEPHADM deployment.
Remove the condition that ran set_min_client only if cinder is enabled;
this should be set in any case.
Get the FSID from ceph.conf in /etc/ceph to avoid an unnecessary override.
Part of an effort to test multinode deployments with cephadm.
Needed-By: I5162815b66d3f3e8cf8c1e246b61b0ea06c1a270
Change-Id: I84249ae268dfe00a112c67e5170b679acb318a25
The current code only adds the service secret to libvirt when Ceph has
been enabled in Nova, but it should also be enabled if it has only been
enabled in Cinder.
This patch changes devstack/plugin.sh to write the service secret to
libvirt whenever Nova or Cinder is using Ceph.
This would be the counterpart of the code we already have in
_undefine_virsh_secret, where we already check whether it's enabled in
either of the services before removing the secret.
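A sketch of the widened check; the flag names `ENABLE_CEPH_NOVA` and `ENABLE_CEPH_CINDER` follow the plugin's conventions but are used here illustratively:

```shell
# Define the libvirt service secret when either consumer of the RBD
# backend is enabled, mirroring the check in _undefine_virsh_secret.
should_define_virsh_secret() {
    [[ "$ENABLE_CEPH_NOVA" == "True" || "$ENABLE_CEPH_CINDER" == "True" ]]
}
```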
Change-Id: I1067d52b7a435fcef7996eea6479d598be842dca
Affects only the package-based install script.
Pacific is going to be EOL'ed soon [1]; we do not
expect folks to deploy the next version of OpenStack
(2023.2/Bobcat) with Ceph Pacific. Moreover, all our
CI jobs use Quincy by default, and the cephadm
script uses Quincy by default.
[1] https://docs.ceph.com/en/latest/releases/index.html#active-releases
Change-Id: I6abdb1241e99d76bcee35b331e1059c4fe48296b
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
This reverts commit 863a01b032.
This is a partial revert, only for the pin to focal; it leaves the
other broken jobs commented out.
Update paste-deploy workaround to be used always.
Add qemu-block-extra and podman deps to the debs list.
Running on the newer Ceph and distro causes quite different
performance characteristics, which make tests that used to pass fail
more often. This change includes some performance optimizations to
help reduce the memory footprint, and also depends on changes to
tempest tests that improve their reliability by enabling
validation via SSH.
This also moves the cephadm job to be the voting/gating job as that
seems to be the clear consensus about "the future" of how we deploy
ceph for testing.
Depends-On: https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/881764
Co-Authored-By: Dan Smith <dms@danplanet.com>
Change-Id: I899822fec863f43cd6c58b25cf4688c6a3ac1e9b
The cephfs-nfs job was turned off [1] for perma-failing.
This commit adds the original non-voting job back into the
check queue and fixes some installation issues:
1) use ceph "quincy" release: Ceph Pacific's end of life
is 2023-06-01 [2]. The manila community thinks deployers
are more likely to use quincy with the 2023.2 (bobcat)
release of OpenStack.
2) run the job with centos-stream-9: There are no packages
currently available for Jammy Jellyfish on download.ceph.com [3].
The OS shouldn't really matter for this CI job that is meant to
test feature functionality provided by manila. At this time, we'd
like to stick with builds provided by the ceph community instead
of the distro since it may take a while to get bugfixes into distro
builds.
3) The install script uses "nfs-ganesha" builds for ubuntu and
centos hosted by the nfs-ganesha community [4]. We will not rely on
the ceph community to provide the latest builds for
nfs-ganesha any longer.
This commit also cleans up the unnecessary condition in the
ceph script file pertaining to configuring ceph packages for
Jammy Jellyfish. This step wasn't doing anything.
Ubuntu packages don't work at the moment and that requires some more
investigation.
[1] Id2ae61979505de5efb47ce90a2bac8aac2fc5484
[2] https://docs.ceph.com/en/latest/releases/
[3] https://www.spinics.net/lists/ceph-users/msg74312.html
[4] https://download.nfs-ganesha.org/
Change-Id: I40dfecfbbe21b2f4b3e4efd903980b5b146c4202
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
Ubuntu 22.04 does provide Ceph Quincy (17.2.*) out of the box, thus
there's no urgent need for community repos and we can simply rely
on distro-provided ones. We can revisit this logic once the community
publishes Ceph packages for Ubuntu Jammy (22.04).
Change-Id: I169971ef77f11ceb01a5db87441051dcb33555f7
The CephFS driver supports snapshots, therefore the snapshot_support
extra spec for the cephfsnfstype share type we create should be True.
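With the python-manilaclient CLI, the resulting extra spec can be expressed roughly as follows; the share type name comes from the commit, while the exact call the plugin makes may differ:

```shell
# Mark the cephfsnfstype share type as supporting snapshots.
manila type-key cephfsnfstype set snapshot_support=True
```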
Change-Id: I97b58697f27824a97cfd31ed21d79916b9e270cc
As part of change I1826f2970528928a31b32a664013380e38bbd7c9
we added a configuration option to the manila cephfs nfs driver
when deployed using cephadm.
We also need this option set here so the m-shr service
picks the right helper when deploying with DevStack.
Change-Id: If99714e07f4b75c76db29a660ad8d1e93f7055e5
The latest Ceph stable version is Quincy. This patch
sets Quincy as the default for the cephadm deploy.
This change won't have an impact on the script
currently used in our CI.
Change-Id: I2d87ec0e93853cd0852944b30a87f6127f491550
The configuration that we are using by default, which
sets pg_num and pgp_num values in bootstrap_ceph,
breaks devstack-plugin-ceph when using the latest Ceph version (Quincy).
This patch removes the config for pg_num and pgp_num,
so we delegate pool autoscaling to Ceph.
Closes-Bug: #1983107
Change-Id: Iecd949ef2258ae8a6ded596219bb993aeff20de5
It's not a required configuration item, and it's not required
with cephadm-deployed NFS-Ganesha daemon/s.
Change-Id: I54380f1cb905dfa5ab287ba423561aa75bc1d2f4
Signed-off-by: Goutham Pacha Ravi <gouthampravi@gmail.com>
Add the option to deploy the Ceph cluster
with the cephadm tool.
Depends-On: I799521f008123b8e42b2021c1c11d374b834bec3
Co-Authored-By: Francesco Pantano <fpantano@redhat.com>
Change-Id: Id2a704b136b9e47b7b88ef586282cb5d0f754cf1
This patch adds a new variable for when the user wants to set the minimum
client version used in a Manila/Glance/Nova/Cinder test job in addition
to using devstack.
A new configuration option, CEPH_MIN_CLIENT_VERSION, has been
introduced to specify a Ceph minimum client version that allows proper
handling when deleting images and snapshots with dependencies, etc. The
default value is null.
Co-Authored-By: Sofia Enriquez <lsofia.enriquez@gmail.com>
Change-Id: Id8e581893ee4b373b268acc7c59b670985cedc2f
Recently cinder has utilized clone v2 support of Ceph for its
RBD backend. Since then, if you attempt to delete an image from
glance that has a dependent volume, all future uses of that
image will fail in an error state, despite the fact that the image
itself is still inside Ceph/Glance. This issue is reproducible
if you are using a ceph client version greater than 'luminous'.
To resolve this issue, the glance RBD driver now checks whether the
original image has any dependency before deleting/removing its
snapshot, and returns a 409 response if it has any dependency. To
check this dependency, glance needs 'read' access to the cinder and
nova side RBD pools.
This change allows the glance keyring read access on the cinder and
nova side RBD pools.
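The added capability amounts to something like the following `ceph auth caps` call; the client name and pool names (`client.glance`, `images`, `volumes`, `vms`) are typical devstack defaults and are assumptions here:

```shell
# Keep glance's full access to its own pool, and grant read-only
# access to the cinder (volumes) and nova (vms) pools so the
# clone v2 dependency check can run.
ceph auth caps client.glance \
    mon 'profile rbd' \
    osd 'profile rbd pool=images, profile rbd-read-only pool=volumes, profile rbd-read-only pool=vms'
```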
Related-Bug: #1954883
Change-Id: I2e6221e6de23920998bb5f32b2323704b3c89f74
This change ensures that python3-logutils is removed so that
it can be installed by pip later.
Before today, in passing builds, python3-logutils was not pulled in by neutron deps, so
it was not installed or uninstalled, and all was good:
- https://zuul.openstack.org/build/590c5996ca1b402486bfe1c7e1d08535/log/job-output.txt
But from today (10th Dec), python3-logutils is pulled in by neutron deps and the failure started:
- https://zuul.opendev.org/t/openstack/build/722c6caf8e454849b897a43bcf617dd2/log/job-output.txt#9419
The root cause of why this issue started happening today is not known. Maybe it's
pecan===1.4.1 (I8ee467bbb363f428a005f92554812bfdae95881a making it install, but there
is no change for logutils as a dep in pecan's previous version 1.3.3 either [1]). Or it may
be ceph/ubuntu packaging.
But it is clear that python3-logutils is coming from somewhere and causing a gate blocker
in the Nova gate; let's remove it, and later we can find the root cause if anyone needs this
package for their ceph job.
[1] https://github.com/pecan/pecan/blob/1.3.3/requirements.txt#L5
Closes-bug: #1954427
Change-Id: Icb63649b252fd6eb229adeae454b5ec3c6b79cad
A few URLs from some ganesha repositories did not correspond
to actual valid URLs where packages could be found in
the repository.
These broken URLs have now been fixed.
Change-Id: If27b488cfec29731b74e7db774c4811b0e34c14e
When we deploy devstack, we need to initialize the RBD pool.
Not doing so means that functionality like rbd trash purge
scheduling will not work correctly.
Ref: https://docs.ceph.com/en/latest/start/quick-rbd/
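Following the quick-start guide linked above, initialization is a single call per pool; the pool name here is illustrative, and the commands require a running cluster:

```shell
# Create the pool (if it doesn't already exist) and mark it for use
# by RBD; without "rbd pool init", features such as scheduled trash
# purge may not work correctly.
ceph osd pool create volumes
rbd pool init volumes
```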
Change-Id: I5b0b3b83fb7ef805929fdcd106a5c8a988b05ec4
This seems to be a leftover from older code. It doesn't make
much sense to allow all Fedora versions that
match f[0-9][0-9] and then check for specific
Fedora versions in the following lines.
Remove this check and just allow some specific
versions.
Change-Id: Ie14a453f96689f574f1b388ab8f6e5467a59b7f7
This will change the version of Ceph from Octopus to Pacific and,
with it, the version of Ganesha from v3.3 to v3.5, which is the
version shipped in the Ceph Pacific container.
Change-Id: I1b31ef9dd13e1d56284f8d9f8be03e3fee0eb0a7