Add back test case 42543#1358

Open
JianLi-RH wants to merge 3 commits into openshift:main from JianLi-RH:adding_auth_file_to_AdmReleaseExtract_method

Conversation

@JianLi-RH
Contributor

@JianLi-RH JianLi-RH commented Mar 24, 2026

For a non-stable cluster (a nightly cluster, for example), we have to pass --registry-config to the oc adm release extract command; otherwise it fails with an authentication error, for example:

[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          133m    Unable to apply 4.22.0-0.nightly-2026-03-23-022245: an unknown error has occurred: MultipleErrors
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ 
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ oc adm release extract --to=/tmp/OTA-42543-manifest-437546753
error: unable to read image registry.ci.openshift.org/ocp/release@sha256:c14da8aabe5fea93da47839bac175cb6f211f37447e91b0c78bce38e896fc322: unauthorized: authentication required
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$

The extract command works fine when we give it an auth file, for example:

 [jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ oc extract secret/pull-secret -n openshift-config --confirm --to=.
.dockerconfigjson
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ oc adm release extract --to=/tmp/OTA-42543-manifest-437546753 -a .dockerconfigjson
Extracted release payload created at 2026-03-23T02:24:35Z
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ 

I have tested the new AdmReleaseExtract method with the test case below:

	g.It("can use oc to extract manifests", func() {
		restCfg, err := util.GetRestConfig()
		o.Expect(err).NotTo(o.HaveOccurred(), "Failed to load Kubernetes configuration. Please ensure KUBECONFIG environment variable is set.")

		kubeClient, err := util.GetKubeClient(restCfg)
		o.Expect(err).NotTo(o.HaveOccurred(), "Failed to create Kubernetes client")

		authFile, err := util.GetAuthFile(context.Background(), kubeClient, "openshift-config", "pull-secret", ".dockerconfigjson")
		o.Expect(err).NotTo(o.HaveOccurred())

		targetFolder := "/tmp/manifests"
		ocClient, err := oc.NewOC(ocapi.Options{Logger: logger})
		o.Expect(err).NotTo(o.HaveOccurred())
		o.Expect(ocClient).NotTo(o.BeNil())
		err = ocClient.AdmReleaseExtract(ocapi.ReleaseExtractOptions{To: targetFolder, AuthFile: authFile})
		o.Expect(err).NotTo(o.HaveOccurred())
		defer os.RemoveAll(targetFolder)
		defer os.RemoveAll(authFile)
	})

The case passed:

[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ _output/linux/amd64/cluster-version-operator-tests run-test "[Jira:\"Cluster Version Operator\"] cluster-version-operator-tests can use oc to extract manifests"
  Running Suite:  - /home/jianl/1_code/cluster-version-operator
  =============================================================
  Random Seed: 1774338059 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [Jira:"Cluster Version Operator"] cluster-version-operator-tests can use oc to extract manifests
  /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:43
  cluster-version-operator-tests "level"=0 "msg"="Running command succeeded." "cmd"="/usr/local/sbin/oc" "args"="adm release extract --to=/tmp/manifests --registry-config=/tmp/ota-RSIMHWNPMLUGKW4EV5PQTL2ZPO"
  • [29.594 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 29.594 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[
  {
    "name": "[Jira:\"Cluster Version Operator\"] cluster-version-operator-tests can use oc to extract manifests",
    "lifecycle": "blocking",
    "duration": 29594,
    "startTime": "2026-03-24 07:40:59.615925 UTC",
    "endTime": "2026-03-24 07:41:29.210040 UTC",
    "result": "passed",
    "output": ""
  }
]
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$

To add back test case 42543, I made the following changes to the case:

  1. Add a new label: ConnectedOnly (actually I don't know how to use it yet)
  2. Skip the case via:
err = util.SkipIfNetworkRestricted(ctx, restCfg, util.FauxinnatiAPIURL)
o.Expect(err).NotTo(o.HaveOccurred(), "Failed to determine if cluster is network restricted")
  3. Add the auth file to ReleaseExtractOptions:
		authFile, err := util.GetAuthFile(context.Background(), kubeClient, "openshift-config", "pull-secret", ".dockerconfigjson")
		o.Expect(err).NotTo(o.HaveOccurred())
		defer func() { _ = os.Remove(authFile) }()
		manifestDir := ocapi.ReleaseExtractOptions{To: tempDir, AuthFile: authFile}

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 24, 2026
@coderabbitai

coderabbitai bot commented Mar 24, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a blocking Conformance test ensuring CVO does not install resources annotated with release.openshift.io/delete=true, supporting test utilities and pod file-reading helpers, a public dynamic client wrapper, and optional registry auth wiring for oc adm release extract.

Changes

Cohort / File(s) — Summary

  • Conformance Test Definition (.openshift-tests-extension/openshift_payload_cluster-version-operator.json): Appended a new blocking Conformance test entry: cluster-version-operator should not install resources annotated with release.openshift.io/delete=true (source openshift:payload:cluster-version-operator, labels 42543, Conformance, High).
  • Conformance Test Implementation (test/cvo/cvo.go): Added a Ginkgo test that skips on HyperShift/MicroShift, finds the CVO pod, scans /release-manifests/ for files with the release.openshift.io/delete annotation, parses manifests, and asserts referenced resources are NotFound.
  • Dynamic Client Wrapper (pkg/cvo/external/dynamicclient/client.go): Added dynamicclient.New(config *rest.Config, gvk schema.GroupVersionKind, namespace string) (dynamic.ResourceInterface, error) delegating to the internal dynamicclient factory.
  • Test Utilities (test/util/util.go): Added exported helpers GetAuthFile, GetPodsByNamespace, ListFilesInPodContainer, GetFileContentInPodContainer, and ParseManifest for secret-to-tempfile extraction, pod listing, exec-based file listing/reading, and manifest parsing.
  • CLI/API Authentication Support (test/oc/api/api.go, test/oc/cli/cli.go): Added AuthFile string to ReleaseExtractOptions and updated AdmReleaseExtract to include --registry-config=<AuthFile> when AuthFile is non-empty.
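
The flag wiring described above can be sketched as follows; ReleaseExtractOptions here is a simplified stand-in for the real struct in test/oc/api/api.go, and buildExtractArgs is a hypothetical helper, not the PR's actual code:

```go
package main

import "fmt"

// ReleaseExtractOptions mirrors (in simplified form) the options struct
// described above; AuthFile is the newly added optional field.
type ReleaseExtractOptions struct {
	To       string
	AuthFile string
}

// buildExtractArgs assembles the argument list for `oc adm release extract`,
// appending --registry-config only when an auth file was provided, so the
// command keeps working on clusters where no auth file is needed.
func buildExtractArgs(o ReleaseExtractOptions) []string {
	args := []string{"adm", "release", "extract", "--to=" + o.To}
	if o.AuthFile != "" {
		args = append(args, "--registry-config="+o.AuthFile)
	}
	return args
}

func main() {
	fmt.Println(buildExtractArgs(ReleaseExtractOptions{To: "/tmp/manifests"}))
	fmt.Println(buildExtractArgs(ReleaseExtractOptions{To: "/tmp/manifests", AuthFile: ".dockerconfigjson"}))
}
```

Making the flag conditional on a non-empty AuthFile keeps the change backward compatible for existing callers.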

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes


@openshift-ci
Contributor

openshift-ci bot commented Mar 24, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: JianLi-RH
Once this PR has been reviewed and has the lgtm label, please assign davidhurta for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@JianLi-RH
Contributor Author

JianLi-RH commented Mar 24, 2026

@hongkailiu Because my PR #1309 was reverted by #1353, I'm adding the case back in this PR.

For any tests on connected/disconnected clusters, please read the comments below.

@JianLi-RH
Contributor Author

Confirmed, the extract command does not work on a disconnected cluster:

[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ oc adm release extract --to=/tmp/manifests --to=/tmp/manifests --registry-config=.dockerconfigjson 
error: unable to read image registry.ci.openshift.org/ocp/release@sha256:c14da8aabe5fea93da47839bac175cb6f211f37447e91b0c78bce38e896fc322: unauthorized: authentication required
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ 

/cc @hongkailiu @jiajliu

@openshift-ci openshift-ci bot requested review from hongkailiu and jiajliu March 24, 2026 09:43
@JianLi-RH
Contributor Author

/unhold

@JianLi-RH
Contributor Author

JianLi-RH commented Mar 24, 2026

For a connected cluster, the test passed.

launch 4.22 aws

Cluster-bot log: https://prow.ci.openshift.org/view/gs/test-platform-results/logs/release-openshift-origin-installer-launch-aws-modern/2036384394893070336

[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ _output/linux/amd64/cluster-version-operator-tests run-test "[Jira:\"Cluster Version Operator\"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true"
  Running Suite:  - /home/jianl/1_code/cluster-version-operator
  =============================================================
  Random Seed: 1774352243 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [Jira:"Cluster Version Operator"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true [Conformance, High, 42543, ConnectedOnly]
  /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:112
  "level"=0 "msg"="microshift-version configmap not found"
    STEP: Setting up oc @ 03/24/26 19:37:28.725
    STEP: Extracting manifests in the release @ 03/24/26 19:37:28.725
  cluster-version-operator-tests "level"=0 "msg"="Extract manifests to: /tmp/OTA-42543-manifest-2629628469"
  cluster-version-operator-tests "level"=0 "msg"="Running command succeeded." "cmd"="/usr/local/sbin/oc" "args"="adm release extract --to=/tmp/OTA-42543-manifest-2629628469"
    STEP: Checking if getting manifests with release.openshift.io/delete on the cluster led to not-found error @ 03/24/26 19:38:39.443
  • [94.919 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 94.920 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[
  {
    "name": "[Jira:\"Cluster Version Operator\"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true",
    "lifecycle": "blocking",
    "duration": 94920,
    "startTime": "2026-03-24 11:37:23.082908 UTC",
    "endTime": "2026-03-24 11:38:58.003456 UTC",
    "result": "passed",
    "output": "  STEP: Setting up oc @ 03/24/26 19:37:28.725\n  STEP: Extracting manifests in the release @ 03/24/26 19:37:28.725\n  STEP: Checking if getting manifests with release.openshift.io/delete on the cluster led to not-found error @ 03/24/26 19:38:39.443\n"
  }
]
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$

@JianLi-RH
Contributor Author

JianLi-RH commented Mar 24, 2026

For a disconnected cluster, I couldn't create a working cluster (Cluster-bot does not seem to support disconnected installs, and the Jenkins Flexy-install job has an issue: https://jenkins-csb-openshift-qe-mastern.dno.corp.redhat.com/job/ocp-common/job/Flexy-install/372175/), so the case failed:

[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ oc get clusterversion
NAME      VERSION   AVAILABLE   PROGRESSING   SINCE   STATUS
version             False       True          49m     Unable to apply 4.22.0-0.nightly-2026-03-23-022245: an unknown error has occurred: MultipleErrors
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ oc get pod -n openshift-cluster-version
No resources found in openshift-cluster-version namespace.
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ 
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ _output/linux/amd64/cluster-version-operator-tests run-test "[Jira:\"Cluster Version Operator\"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true"
  Running Suite:  - /home/jianl/1_code/cluster-version-operator
  =============================================================
  Random Seed: 1774351623 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [Jira:"Cluster Version Operator"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true [Conformance, High, 42543, ConnectedOnly]
  /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:112
  "level"=0 "msg"="microshift-version configmap not found"
    [FAILED] in [It] - /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:119 @ 03/24/26 19:27:04.783
  • [FAILED] [1.512 seconds]
  [Jira:"Cluster Version Operator"] cluster-version-operator [It] should not install resources annotated with release.openshift.io/delete=true [Conformance, High, 42543, ConnectedOnly]
  /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:112

    [FAILED] Failed to determine if cluster is network restricted
    Unexpected error:
        <*fmt.wrapError | 0xc0002d1440>: 
        could not find CVO pod: %!w(<nil>)
        {
            msg: "could not find CVO pod: %!w(<nil>)",
            err: nil,
        }
    occurred
    In [It] at: /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:119 @ 03/24/26 19:27:04.783
  ------------------------------

  Summarizing 1 Failure:
    [FAIL] [Jira:"Cluster Version Operator"] cluster-version-operator [It] should not install resources annotated with release.openshift.io/delete=true [Conformance, High, 42543, ConnectedOnly]
    /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:119

  Ran 1 of 1 Specs in 1.513 seconds
  FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
[
  {
    "name": "[Jira:\"Cluster Version Operator\"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true",
    "lifecycle": "blocking",
    "duration": 1513,
    "startTime": "2026-03-24 11:27:03.270399 UTC",
    "endTime": "2026-03-24 11:27:04.784067 UTC",
    "result": "failed",
    "output": "  [FAILED] in [It] - /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:119 @ 03/24/26 19:27:04.783\n",
    "error": "fail [/home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:119]: Failed to determine if cluster is network restricted\nUnexpected error:\n    \u003c*fmt.wrapError | 0xc0002d1440\u003e: \n    could not find CVO pod: %!w(\u003cnil\u003e)\n    {\n        msg: \"could not find CVO pod: %!w(\u003cnil\u003e)\",\n        err: nil,\n    }\noccurred"
  }
]
Error: 1 tests failed
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ 

@JianLi-RH JianLi-RH changed the title [WIP] Add auth file to oc extract command [WIP] Add back test case 42543 Mar 24, 2026
@JianLi-RH JianLi-RH force-pushed the adding_auth_file_to_AdmReleaseExtract_method branch from cd22772 to c5b4eda Compare March 24, 2026 12:11
The `oc adm release extract` command accepts the --registry-config option to set an auth file.
To avoid future issues, add the auth file to the AdmReleaseExtract() method.

This is also what we did in openshift-tests-private:
https://github.com/openshift/openshift-tests-private/blob/49b10e687053061ecc87989335f2908b59c66f98/test/extended/ota/cvo/utils.go#L541-L547
The automation case 42543 was reverted by openshift#1353.
This PR fixes the issue on disconnected environments and adds back the case that was introduced by
openshift#1309
@JianLi-RH JianLi-RH force-pushed the adding_auth_file_to_AdmReleaseExtract_method branch from c5b4eda to 773b61b Compare March 24, 2026 12:20
@JianLi-RH JianLi-RH changed the title [WIP] Add back test case 42543 Add back test case 42543 Mar 24, 2026
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 24, 2026
@petr-muller
Member

Looking at what the test is actually doing, could it iterate over the extracted manifests in CVO's container filesystem directly in the cluster? That way the test would not need to do any heavy operations like release extract, it would not need to deal with a pull secret... It would be faster, too.

@JianLi-RH
Contributor Author

@petr-muller "Could it iterate over the extracted manifests in CVO's container filesystem directly in the cluster" sounds interesting; let me find out how to do it. Thanks.

@JianLi-RH
Contributor Author

/hold

@openshift-ci openshift-ci bot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 25, 2026
@jiajliu
Contributor

jiajliu commented Mar 25, 2026

Confirmed, the extract command does not work on a disconnected cluster:

[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ oc adm release extract --to=/tmp/manifests --to=/tmp/manifests --registry-config=.dockerconfigjson 
error: unable to read image registry.ci.openshift.org/ocp/release@sha256:c14da8aabe5fea93da47839bac175cb6f211f37447e91b0c78bce38e896fc322: unauthorized: authentication required
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ 

If the root cause is an authorization issue with the extract command, shall we fix the extract issue first and drop the skip on disconnected clusters? I checked the history of case 42543 in CI with OTP tests; the case passed in a disconnected cluster as expected, for example:

passed: (18.4s) 2026-03-22T06:03:37 "[sig-updates] OTA cvo should Author:jianl-High-42543-the removed resources are not created in a fresh installed cluster"

By the way, it's suggested to rerun the failed job e2e-metal-ipi-ovn-ipv6 after the fix (see #1353 (comment)).

@JianLi-RH
Contributor Author

@petr-muller Thank you for the suggestion, it works fine.

The code changes can be found in commit: 0734315

There are some new methods:

  1. GetPodsByNamespace(): used to get the CVO pod
  2. ListFilesInPodContainer(): used to list files in a pod container; here I use a customized command to reduce the number of files (fetching file content takes a long time)
  3. GetFileContentInPodContainer(): used to get file content from a pod container
  4. ParseManifest(): parses YAML file content into a Manifest object
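
One plausible shape for the "customized command" in item 2 is to pre-filter /release-manifests/ inside the container so only annotated files are fetched. The command construction and output parsing below are an illustrative sketch under that assumption, not the PR's actual implementation:

```go
package main

import (
	"fmt"
	"strings"
)

// listAnnotatedFilesCommand builds a shell command that, when exec'd in the
// CVO container, prints only the manifest files containing the given
// annotation, avoiding a full content fetch of every file.
func listAnnotatedFilesCommand(dir, annotation string) []string {
	return []string{"sh", "-c",
		fmt.Sprintf("grep -l %q %s/*.yaml", annotation, dir)}
}

// parseFileList turns the newline-separated exec output into a slice of
// file paths, skipping blank lines.
func parseFileList(output string) []string {
	var files []string
	for _, line := range strings.Split(output, "\n") {
		if line = strings.TrimSpace(line); line != "" {
			files = append(files, line)
		}
	}
	return files
}

func main() {
	cmd := listAnnotatedFilesCommand("/release-manifests", "release.openshift.io/delete")
	fmt.Println(strings.Join(cmd, " "))
	fmt.Println(parseFileList("/release-manifests/a.yaml\n/release-manifests/b.yaml\n"))
}
```

Only the files surviving the grep then need a per-file GetFileContentInPodContainer call, which is where the time savings come from.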

Here is the test result:

[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ _output/linux/amd64/cluster-version-operator-tests run-test "[Jira:\"Cluster Version Operator\"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true"
  Running Suite:  - /home/jianl/1_code/cluster-version-operator
  =============================================================
  Random Seed: 1774414472 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [Jira:"Cluster Version Operator"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true [Conformance, High, 42543, ConnectedOnly]
  /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:107
  "level"=0 "msg"="microshift-version configmap not found"
  cluster-version-operator-tests "level"=0 "msg"="CVO pod found" "name"="cluster-version-operator-54df765cd4-rf7sf"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_cloud-credential-operator_01-service-delete.yaml"=", GVK:" "/v1, Kind=Service"=", Name: " "controller-manager-service"=", Namespace: " "openshift-cloud-credential-operator"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_90_cluster-authentication-operator_03_prometheusrule.yaml"=", GVK:" "monitoring.coreos.com/v1, Kind=PrometheusRule"=", Name: " "authentication-operator"=", Namespace: " "openshift-authentication-operator"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_31_cluster-baremetal-operator_06_deployment-hostedcluster-delete.yaml"=", GVK:" "apps/v1, Kind=Deployment"=", Name: " "cluster-baremetal-operator-hostedcluster"=", Namespace: " "openshift-machine-api"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_30_cluster-api_00_tombstones-4.22-tpnu.yaml"=", GVK:" "/v1, Kind=ServiceAccount"=", Name: " "cluster-capi-operator"=", Namespace: " "openshift-cluster-api"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_90_machine-config_90_deletion.yaml"=", GVK:" "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"=", Name: " "default-account-openshift-machine-config-operator"=", Namespace: " ""="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_00-pprof-config.yaml"=", GVK:" "/v1, Kind=ConfigMap"=", Name: " "collect-profiles-config"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_00-pprof-rbac.yaml"=", GVK:" "rbac.authorization.k8s.io/v1, Kind=Role"=", Name: " "collect-profiles"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_00-pprof-secret.yaml"=", GVK:" "/v1, Kind=Secret"=", Name: " "pprof-cert"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_07-collect-profiles.cronjob.yaml"=", GVK:" "batch/v1, Kind=CronJob"=", Name: " "collect-profiles"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_07-collect-profiles.networkpolicy.yaml"=", GVK:" "networking.k8s.io/v1, Kind=NetworkPolicy"=", Name: " "collect-profiles"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  • [22.158 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 22.158 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[
  {
    "name": "[Jira:\"Cluster Version Operator\"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true",
    "lifecycle": "blocking",
    "duration": 22159,
    "startTime": "2026-03-25 04:54:32.945833 UTC",
    "endTime": "2026-03-25 04:54:55.105344 UTC",
    "result": "passed",
    "output": ""
  }
]
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ 

If it works as expected, I will remove the prior case (but I want to keep GetAuthFile()).

@JianLi-RH JianLi-RH force-pushed the adding_auth_file_to_AdmReleaseExtract_method branch from 0734315 to 5823e70 Compare March 25, 2026 05:05

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/cvo/cvo.go`:
- Around line 116-119: The code dereferences pods[0] after
util.GetPodsByNamespace without checking the slice length and may pick a
non-running CVO causing panics or flaky execs; update the logic around pods (the
variable returned from GetPodsByNamespace in test/cvo/cvo.go) to first assert
len(pods) > 0 and return/test-fail if empty, then choose a pod in Running state
(e.g., inspect pod.Status.Phase or pod.Status.Conditions) rather than assuming
pods[0]; replace direct pods[0] usage (and the same pattern later around the
subsequent pod access) with the guarded selection of a runnable pod and handle
the no-runnable-pod case cleanly.
- Around line 136-142: The test currently parses only the first YAML document
via util.ParseManifest(fileContent) and then checks
manifest.Obj.GetAnnotations() for the annotation variable, which misses later
documents; change the logic to iterate over all YAML documents in fileContent
(either by using a multi-doc parser such as util.ParseManifests if available, or
by splitting on the YAML document separator and calling util.ParseManifest for
each document), and for each parsed manifest check
manifest.Obj.GetAnnotations()[annotation] == "true" and handle accordingly so
deleted resources in later documents are detected.

In `@test/util/util.go`:
- Around line 125-127: Replace the insecure manual temp file creation for
authFile with a call to os.CreateTemp to create a file with restrictive
permissions (0600), write secretData into that file (using the returned
*os.File) and close it, and remove the manual filepath.Join("/tmp/",
fmt.Sprintf("ota-%s", rand.Text())) logic; specifically update the block that
defines authFile, writes secretData, and handles errors so it uses
os.CreateTemp, writes to the created file, sets/ensures file mode 0600, closes
the file, and defers/removes the temp file when done.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 1d7539f7-6dda-4fd2-a47f-2681ad9c4954

📥 Commits

Reviewing files that changed from the base of the PR and between 0276666 and 0734315.

📒 Files selected for processing (6)
  • .openshift-tests-extension/openshift_payload_cluster-version-operator.json
  • pkg/cvo/external/dynamicclient/client.go
  • test/cvo/cvo.go
  • test/oc/api/api.go
  • test/oc/cli/cli.go
  • test/util/util.go

Comment on lines +116 to +119
pods, err := util.GetPodsByNamespace(ctx, kubeClient, external.DefaultCVONamespace, map[string]string{"k8s-app": "cluster-version-operator"})
o.Expect(err).NotTo(o.HaveOccurred())
logger.Info("CVO pod found", "name", pods[0].Name)


⚠️ Potential issue | 🟠 Major

Guard pod lookup before indexing and target a runnable CVO pod.

Line 118 dereferences pods[0] without validating list length, and the current selection may pick a non-running pod, causing panic/flaky exec failures.

Suggested fix
 	pods, err := util.GetPodsByNamespace(ctx, kubeClient, external.DefaultCVONamespace, map[string]string{"k8s-app": "cluster-version-operator"})
 	o.Expect(err).NotTo(o.HaveOccurred())
-	logger.Info("CVO pod found", "name", pods[0].Name)
+	o.Expect(pods).NotTo(o.BeEmpty(), "Expected at least one CVO pod")
+
+	cvoPod := pods[0]
+	o.Expect(cvoPod.Status.Phase).To(o.Equal(corev1.PodRunning), "Expected selected CVO pod to be Running")
+	logger.Info("CVO pod found", "name", cvoPod.Name)
@@
-	files, err := util.ListFilesInPodContainer(ctx, restCfg, command, external.DefaultCVONamespace, pods[0].Name, "cluster-version-operator")
+	files, err := util.ListFilesInPodContainer(ctx, restCfg, command, external.DefaultCVONamespace, cvoPod.Name, "cluster-version-operator")

Also applies to: 123-124

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/cvo/cvo.go` around lines 116 - 119, The code dereferences pods[0] after
util.GetPodsByNamespace without checking the slice length and may pick a
non-running CVO causing panics or flaky execs; update the logic around pods (the
variable returned from GetPodsByNamespace in test/cvo/cvo.go) to first assert
len(pods) > 0 and return/test-fail if empty, then choose a pod in Running state
(e.g., inspect pod.Status.Phase or pod.Status.Conditions) rather than assuming
pods[0]; replace direct pods[0] usage (and the same pattern later around the
subsequent pod access) with the guarded selection of a runnable pod and handle
the no-runnable-pod case cleanly.

Comment on lines +136 to +142
manifest, err := util.ParseManifest(fileContent)
o.Expect(err).NotTo(o.HaveOccurred(), fmt.Sprintf("Failed to parse manifest content of file %s in CVO pod", f))

ann := manifest.Obj.GetAnnotations()
if ann[annotation] != "true" {
continue
}

⚠️ Potential issue | 🟠 Major

This checks only one document per YAML file and can miss deleted resources.

util.ParseManifest(fileContent) returns a single manifest, but release-manifest files may contain multiple YAML documents. If release.openshift.io/delete=true appears in a later document, the test can incorrectly pass.

Suggested fix direction
-			manifest, err := util.ParseManifest(fileContent)
-			o.Expect(err).NotTo(o.HaveOccurred(), fmt.Sprintf("Failed to parse manifest content of file %s in CVO pod", f))
-
-			ann := manifest.Obj.GetAnnotations()
-			if ann[annotation] != "true" {
-				continue
-			}
-			// check object absence ...
+			manifests, err := util.ParseManifests(fileContent) // parse all docs
+			o.Expect(err).NotTo(o.HaveOccurred(), fmt.Sprintf("Failed to parse manifest content of file %s in CVO pod", f))
+			for _, manifest := range manifests {
+				ann := manifest.Obj.GetAnnotations()
+				if ann[annotation] != "true" {
+					continue
+				}
+				// check object absence ...
+			}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/cvo/cvo.go` around lines 136 - 142, The test currently parses only the
first YAML document via util.ParseManifest(fileContent) and then checks
manifest.Obj.GetAnnotations() for the annotation variable, which misses later
documents; change the logic to iterate over all YAML documents in fileContent
(either by using a multi-doc parser such as util.ParseManifests if available, or
by splitting on the YAML document separator and calling util.ParseManifest for
each document), and for each parsed manifest check
manifest.Obj.GetAnnotations()[annotation] == "true" and handle accordingly so
deleted resources in later documents are detected.
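
The multi-document handling this finding asks for can be approximated by splitting the file content on the YAML document separator before parsing each chunk. This is a sketch under the assumption that a simple string split suffices; a streaming YAML decoder would be more robust against separators embedded in strings:

```go
package main

import (
	"fmt"
	"strings"
)

// splitYAMLDocuments splits a manifest file's content on the `---` document
// separator so each document can be parsed and checked for the
// release.openshift.io/delete annotation individually.
func splitYAMLDocuments(content string) []string {
	var docs []string
	for _, chunk := range strings.Split(content, "\n---\n") {
		if strings.TrimSpace(chunk) != "" {
			docs = append(docs, chunk)
		}
	}
	return docs
}

func main() {
	content := "kind: Service\n---\nkind: ConfigMap\n"
	for i, doc := range splitYAMLDocuments(content) {
		fmt.Printf("doc %d: %s\n", i, strings.TrimSpace(doc))
	}
}
```

Each returned chunk would then be fed to ParseManifest (or a multi-document ParseManifests, if one exists) so annotations in later documents are not missed.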


@coderabbitai coderabbitai bot left a comment


♻️ Duplicate comments (1)
test/util/util.go (1)

125-130: ⚠️ Potential issue | 🟠 Major

Protect pull-secret material with secure temp-file creation and permissions

Line 126 writes registry credentials with 0644, making the auth file broadly readable; plus manual /tmp filename construction is weaker than atomic temp-file creation. Please switch to os.CreateTemp and keep mode 0600.

Suggested fix
-	authFile := filepath.Join("/tmp/", fmt.Sprintf("ota-%s", rand.Text()))
-	err = os.WriteFile(authFile, secretData, 0644)
-	if err != nil {
-		return "", fmt.Errorf("error writing file %s: %v", authFile, err)
-	}
-	return authFile, nil
+	f, err := os.CreateTemp("", "ota-*")
+	if err != nil {
+		return "", fmt.Errorf("error creating auth temp file: %w", err)
+	}
+	defer func() {
+		if err != nil {
+			_ = os.Remove(f.Name())
+		}
+	}()
+	if err := f.Chmod(0o600); err != nil {
+		_ = f.Close()
+		return "", fmt.Errorf("error setting auth temp file permissions: %w", err)
+	}
+	if _, err := f.Write(secretData); err != nil {
+		_ = f.Close()
+		return "", fmt.Errorf("error writing auth temp file %s: %w", f.Name(), err)
+	}
+	if err := f.Close(); err != nil {
+		return "", fmt.Errorf("error closing auth temp file %s: %w", f.Name(), err)
+	}
+	return f.Name(), nil
As per coding guidelines, `**`: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/util/util.go` around lines 125 - 130, Replace the manual
filepath.Join("/tmp/", ...) + os.WriteFile flow with os.CreateTemp to atomically
create a secure temp file (instead of constructing authFile by hand); write
secretData into the returned *os.File, ensure the file is closed, explicitly
enforce mode 0600 (file.Chmod(0600)) and handle/cleanup on errors, and then
return the temp filename (authFile) — update usages around the authFile variable
and remove the original os.WriteFile call.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@test/util/util.go`:
- Around line 125-130: Replace the manual filepath.Join("/tmp/", ...) +
os.WriteFile flow with os.CreateTemp to atomically create a secure temp file
(instead of constructing authFile by hand); write secretData into the returned
*os.File, ensure the file is closed, explicitly enforce mode 0600
(file.Chmod(0600)) and handle/cleanup on errors, and then return the temp
filename (authFile) — update usages around the authFile variable and remove the
original os.WriteFile call.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 9f81f7d2-d4a5-4de5-8a14-8fb5b59e2b5c

📥 Commits

Reviewing files that changed from the base of the PR and between 0734315 and 5823e70.

📒 Files selected for processing (2)
  • test/cvo/cvo.go
  • test/util/util.go
🚧 Files skipped from review as they are similar to previous changes (1)
  • test/cvo/cvo.go

@JianLi-RH JianLi-RH force-pushed the adding_auth_file_to_AdmReleaseExtract_method branch from 5823e70 to cada5ac on March 25, 2026 07:08
@JianLi-RH
Contributor Author

@jiajliu With the new approach #1358 (comment), the case passed on a disconnected cluster:
My cluster: https://jenkins-csb-openshift-qe-mastern.dno.corp.redhat.com/job/ocp-common/job/Flexy-install/372247/parameters/
Profile: private-templates/functionality-testing/aos-4_21/ipi-on-aws/versioned-installer-customer_vpc-disconnected
Test result:

[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$ _output/linux/amd64/cluster-version-operator-tests run-test "[Jira:\"Cluster Version Operator\"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true"
  Running Suite:  - /home/jianl/1_code/cluster-version-operator
  =============================================================
  Random Seed: 1774422533 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [Jira:"Cluster Version Operator"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true [Conformance, High, 42543]
  /home/jianl/1_code/cluster-version-operator/test/cvo/cvo.go:107
  "level"=0 "msg"="microshift-version configmap not found"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_cloud-credential-operator_01-service-delete.yaml"=", GVK:" "/v1, Kind=Service"=", Name: " "controller-manager-service"=", Namespace: " "openshift-cloud-credential-operator"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_90_cluster-authentication-operator_03_prometheusrule.yaml"=", GVK:" "monitoring.coreos.com/v1, Kind=PrometheusRule"=", Name: " "authentication-operator"=", Namespace: " "openshift-authentication-operator"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_31_cluster-baremetal-operator_06_deployment-hostedcluster-delete.yaml"=", GVK:" "apps/v1, Kind=Deployment"=", Name: " "cluster-baremetal-operator-hostedcluster"=", Namespace: " "openshift-machine-api"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_90_machine-config_90_deletion.yaml"=", GVK:" "rbac.authorization.k8s.io/v1, Kind=ClusterRoleBinding"=", Name: " "default-account-openshift-machine-config-operator"=", Namespace: " ""="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_00-pprof-config.yaml"=", GVK:" "/v1, Kind=ConfigMap"=", Name: " "collect-profiles-config"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_00-pprof-rbac.yaml"=", GVK:" "rbac.authorization.k8s.io/v1, Kind=Role"=", Name: " "collect-profiles"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_00-pprof-secret.yaml"=", GVK:" "/v1, Kind=Secret"=", Name: " "pprof-cert"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_07-collect-profiles.cronjob.yaml"=", GVK:" "batch/v1, Kind=CronJob"=", Name: " "collect-profiles"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  cluster-version-operator-tests "level"=0 "msg"="Checking file: " "/release-manifests/0000_50_olm_07-collect-profiles.networkpolicy.yaml"=", GVK:" "networking.k8s.io/v1, Kind=NetworkPolicy"=", Name: " "collect-profiles"=", Namespace: " "openshift-operator-lifecycle-manager"="<no-value>"
  • [24.300 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 24.301 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
[
  {
    "name": "[Jira:\"Cluster Version Operator\"] cluster-version-operator should not install resources annotated with release.openshift.io/delete=true",
    "lifecycle": "blocking",
    "duration": 24301,
    "startTime": "2026-03-25 07:08:53.607793 UTC",
    "endTime": "2026-03-25 07:09:17.909023 UTC",
    "result": "passed",
    "output": ""
  }
]
[jianl@jianl-thinkpadt14gen4 cluster-version-operator]$

@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (2)
test/cvo/cvo.go (2)

134-140: ⚠️ Potential issue | 🟠 Major

Parse all YAML documents, not only the first one

Line 134 calls util.ParseManifest(fileContent), which returns only the first non-empty document. Multi-doc manifest files can hide release.openshift.io/delete=true in later docs, causing false passes.

Proposed fix direction
-			manifest, err := util.ParseManifest(fileContent)
-			o.Expect(err).NotTo(o.HaveOccurred(), fmt.Sprintf("Failed to parse manifest content of file %s in CVO pod", f))
-
-			ann := manifest.Obj.GetAnnotations()
-			if ann[annotation] != "true" {
-				continue
-			}
-			logger.Info("Checking file: ", f, ", GVK:", manifest.GVK.String(), ", Name: ", manifest.Obj.GetName(), ", Namespace: ", manifest.Obj.GetNamespace())
-			client, err := dynamicclient.New(restCfg, manifest.GVK, manifest.Obj.GetNamespace())
-			o.Expect(err).NotTo(o.HaveOccurred())
-			_, err = client.Get(ctx, manifest.Obj.GetName(), metav1.GetOptions{})
-			o.Expect(apierrors.IsNotFound(err)).To(o.BeTrue(),
-				fmt.Sprintf("The deleted manifest should not be installed, but actually installed: manifest: %s %s in namespace %s from file %q, error: %v",
-					manifest.GVK, manifest.Obj.GetName(), manifest.Obj.GetNamespace(), f, err))
+			manifests, err := util.ParseManifests(fileContent) // add helper that decodes all YAML docs
+			o.Expect(err).NotTo(o.HaveOccurred(), fmt.Sprintf("Failed to parse manifest content of file %s in CVO pod", f))
+			for _, manifest := range manifests {
+				ann := manifest.Obj.GetAnnotations()
+				if ann[annotation] != "true" {
+					continue
+				}
+				logger.Info("Checking file: ", f, ", GVK:", manifest.GVK.String(), ", Name: ", manifest.Obj.GetName(), ", Namespace: ", manifest.Obj.GetNamespace())
+				client, err := dynamicclient.New(restCfg, manifest.GVK, manifest.Obj.GetNamespace())
+				o.Expect(err).NotTo(o.HaveOccurred())
+				_, err = client.Get(ctx, manifest.Obj.GetName(), metav1.GetOptions{})
+				o.Expect(apierrors.IsNotFound(err)).To(o.BeTrue(),
+					fmt.Sprintf("The deleted manifest should not be installed, but actually installed: manifest: %s %s in namespace %s from file %q, error: %v",
+						manifest.GVK, manifest.Obj.GetName(), manifest.Obj.GetNamespace(), f, err))
+			}

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/cvo/cvo.go` around lines 134 - 140, The code only parses the first YAML
document via util.ParseManifest(fileContent) and will miss later docs that might
contain the release.openshift.io/delete=true annotation; update the logic to
iterate over all YAML documents in fileContent (split on document separators
like "---" or use a YAML multi-doc parser), call util.ParseManifest (or
equivalent) for each non-empty doc, then for each parsed manifest check
manifest.Obj.GetAnnotations() (the ann map) for ann[annotation] == "true" and
continue accordingly; ensure you reference the same variables (fileContent,
manifest, ann, annotation, f) so the loop replaces the single-call parsing and
correctly handles multi-doc manifests.

114-127: ⚠️ Potential issue | 🟠 Major

Select a running CVO pod before exec calls

Line 121 and Line 126 still use pods[0] from a label-only list. That can pick a non-running pod and make this test flaky/fail intermittently.

Proposed fix
+	corev1 "k8s.io/api/core/v1"
@@
 		pods, err := util.GetPodsByNamespace(ctx, kubeClient, external.DefaultCVONamespace, map[string]string{"k8s-app": "cluster-version-operator"})
 		o.Expect(err).NotTo(o.HaveOccurred())
 		o.Expect(pods).NotTo(o.BeEmpty(), "Expected at least one CVO pod")
+
+		var cvoPodName string
+		for _, pod := range pods {
+			if pod.Status.Phase == corev1.PodRunning {
+				cvoPodName = pod.Name
+				break
+			}
+		}
+		o.Expect(cvoPodName).NotTo(o.BeEmpty(), "Expected at least one running CVO pod")
@@
-		files, err := util.ListFilesInPodContainer(ctx, restCfg, command, external.DefaultCVONamespace, pods[0].Name, "cluster-version-operator")
+		files, err := util.ListFilesInPodContainer(ctx, restCfg, command, external.DefaultCVONamespace, cvoPodName, "cluster-version-operator")
@@
-			fileContent, err := util.GetFileContentInPodContainer(ctx, restCfg, external.DefaultCVONamespace, pods[0].Name, "cluster-version-operator", f)
+			fileContent, err := util.GetFileContentInPodContainer(ctx, restCfg, external.DefaultCVONamespace, cvoPodName, "cluster-version-operator", f)

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/cvo/cvo.go` around lines 114 - 127, The code currently uses pods[0] (the
first pod from util.GetPodsByNamespace) when calling
util.ListFilesInPodContainer and util.GetFileContentInPodContainer which can be
a non-running pod; update the test to choose a running CVO pod first (e.g.,
iterate the returned pods slice and pick the one with Status.Phase ==
v1.PodRunning or use any helper that returns a running pod), assign it to a
variable like runningPod, and replace pods[0].Name with runningPod.Name in the
calls to ListFilesInPodContainer and GetFileContentInPodContainer (ensure
imports/types for v1.PodPhase are available).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@test/cvo/cvo.go`:
- Around line 134-140: The code only parses the first YAML document via
util.ParseManifest(fileContent) and will miss later docs that might contain the
release.openshift.io/delete=true annotation; update the logic to iterate over
all YAML documents in fileContent (split on document separators like "---" or
use a YAML multi-doc parser), call util.ParseManifest (or equivalent) for each
non-empty doc, then for each parsed manifest check manifest.Obj.GetAnnotations()
(the ann map) for ann[annotation] == "true" and continue accordingly; ensure you
reference the same variables (fileContent, manifest, ann, annotation, f) so the
loop replaces the single-call parsing and correctly handles multi-doc manifests.
- Around line 114-127: The code currently uses pods[0] (the first pod from
util.GetPodsByNamespace) when calling util.ListFilesInPodContainer and
util.GetFileContentInPodContainer which can be a non-running pod; update the
test to choose a running CVO pod first (e.g., iterate the returned pods slice
and pick the one with Status.Phase == v1.PodRunning or use any helper that
returns a running pod), assign it to a variable like runningPod, and replace
pods[0].Name with runningPod.Name in the calls to ListFilesInPodContainer and
GetFileContentInPodContainer (ensure imports/types for v1.PodPhase are
available).

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: 88c4ae5d-0d47-47d1-90ae-8196b0510b2a

📥 Commits

Reviewing files that changed from the base of the PR and between 5823e70 and cada5ac.

📒 Files selected for processing (3)
  • .openshift-tests-extension/openshift_payload_cluster-version-operator.json
  • test/cvo/cvo.go
  • test/util/util.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • .openshift-tests-extension/openshift_payload_cluster-version-operator.json
  • test/util/util.go

@JianLi-RH JianLi-RH force-pushed the adding_auth_file_to_AdmReleaseExtract_method branch from cada5ac to 6af3a47 on March 25, 2026 07:19
There are some new methods:
1. GetPodsByNamespace() is used to get the CVO pod.
2. ListFilesInPodContainer() is used to list files in a pod container; here I use a customized command to reduce the number of files (fetching file content takes a long time).
3. GetFileContentInPodContainer() is used to get file content from a pod container.
4. ParseManifest() parses YAML file content into a Manifest object.
@JianLi-RH JianLi-RH force-pushed the adding_auth_file_to_AdmReleaseExtract_method branch from 6af3a47 to a9869a5 on March 25, 2026 07:47
@coderabbitai coderabbitai bot left a comment

♻️ Duplicate comments (2)
test/cvo/cvo.go (2)

114-127: ⚠️ Potential issue | 🟠 Major

Pick a runnable CVO pod instead of hard-coding pods[0].

Line 121 and Line 126 execute into pods[0] without ensuring it is Running. This can make the test flaky when the first listed pod is Pending/Terminating.

Suggested fix
+	corev1 "k8s.io/api/core/v1"
...
 	pods, err := util.GetPodsByNamespace(ctx, kubeClient, external.DefaultCVONamespace, map[string]string{"k8s-app": "cluster-version-operator"})
 	o.Expect(err).NotTo(o.HaveOccurred())
 	o.Expect(pods).NotTo(o.BeEmpty(), "Expected at least one CVO pod")
+
+	var cvoPodName string
+	for i := range pods {
+		if pods[i].Status.Phase == corev1.PodRunning {
+			cvoPodName = pods[i].Name
+			break
+		}
+	}
+	o.Expect(cvoPodName).NotTo(o.BeEmpty(), "Expected at least one Running CVO pod")
...
-	files, err := util.ListFilesInPodContainer(ctx, restCfg, command, external.DefaultCVONamespace, pods[0].Name, "cluster-version-operator")
+	files, err := util.ListFilesInPodContainer(ctx, restCfg, command, external.DefaultCVONamespace, cvoPodName, "cluster-version-operator")
...
-			fileContent, err := util.GetFileContentInPodContainer(ctx, restCfg, external.DefaultCVONamespace, pods[0].Name, "cluster-version-operator", f)
+			fileContent, err := util.GetFileContentInPodContainer(ctx, restCfg, external.DefaultCVONamespace, cvoPodName, "cluster-version-operator", f)

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/cvo/cvo.go` around lines 114 - 127, The code currently uses pods[0] when
calling util.ListFilesInPodContainer and util.GetFileContentInPodContainer which
can be flaky if that pod is not Running; change the selection to pick a runnable
CVO pod by iterating the slice returned by util.GetPodsByNamespace and selecting
the first pod whose Status.Phase == "Running" and has the Ready condition true
(or equivalent readiness check) before using its Name in ListFilesInPodContainer
and GetFileContentInPodContainer; if no runnable pod is found, return/fail with
a clear error message.

134-140: ⚠️ Potential issue | 🟠 Major

Parse all YAML documents in each manifest file.

Line 134 uses util.ParseManifest(fileContent), which parses only one document. Multi-doc manifest files can hide release.openshift.io/delete=true in later documents, causing false passes.

As per coding guidelines: "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/cvo/cvo.go` around lines 134 - 140, The code currently calls
util.ParseManifest(fileContent) which parses only the first YAML document, so
change the logic to parse and iterate over all YAML documents in fileContent:
either call an existing helper that returns multiple manifests (e.g.,
util.ParseManifests(fileContent) or similar) and loop over the returned slice,
or split fileContent into separate YAML documents and call util.ParseManifest on
each document; for each manifest ensure you inspect
manifest.Obj.GetAnnotations() and check ann[annotation] == "true" before
continuing. Ensure the code iterates all manifests in the file rather than only
the first.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@test/cvo/cvo.go`:
- Around line 114-127: The code currently uses pods[0] when calling
util.ListFilesInPodContainer and util.GetFileContentInPodContainer which can be
flaky if that pod is not Running; change the selection to pick a runnable CVO
pod by iterating the slice returned by util.GetPodsByNamespace and selecting the
first pod whose Status.Phase == "Running" and has the Ready condition true (or
equivalent readiness check) before using its Name in ListFilesInPodContainer and
GetFileContentInPodContainer; if no runnable pod is found, return/fail with a
clear error message.
- Around line 134-140: The code currently calls util.ParseManifest(fileContent)
which parses only the first YAML document, so change the logic to parse and
iterate over all YAML documents in fileContent: either call an existing helper
that returns multiple manifests (e.g., util.ParseManifests(fileContent) or
similar) and loop over the returned slice, or split fileContent into separate
YAML documents and call util.ParseManifest on each document; for each manifest
ensure you inspect manifest.Obj.GetAnnotations() and check ann[annotation] ==
"true" before continuing. Ensure the code iterates all manifests in the file
rather than only the first.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro

Run ID: fe24f23a-63c3-4f8a-be92-1445d7452b4b

📥 Commits

Reviewing files that changed from the base of the PR and between 6af3a47 and a9869a5.

📒 Files selected for processing (3)
  • .openshift-tests-extension/openshift_payload_cluster-version-operator.json
  • test/cvo/cvo.go
  • test/util/util.go
🚧 Files skipped from review as they are similar to previous changes (2)
  • .openshift-tests-extension/openshift_payload_cluster-version-operator.json
  • test/util/util.go

@openshift-ci
Contributor

openshift-ci bot commented Mar 25, 2026

@JianLi-RH: The following tests failed, say /retest to rerun all failed tests or /retest-required to rerun all mandatory failed tests:

Test name | Commit | Details | Required | Rerun command
--- | --- | --- | --- | ---
ci/prow/e2e-agnostic-ovn | a9869a5 | link | true | /test e2e-agnostic-ovn
ci/prow/e2e-hypershift | a9869a5 | link | true | /test e2e-hypershift

Full PR test history. Your PR dashboard.

Details

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.


Labels

do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command.
