Conversation


@kvaps (Member) commented Jan 16, 2026

Summary

  • Split telemetry collection between cozystack-operator (cluster metrics) and cozystack-controller (application metrics)
  • Add build-time version injection via ldflags instead of runtime flag
  • Replace bundle/oidc telemetry with package_info metric
  • Replace cozy_workloads_count with cozy_application_count

Metrics

cozystack-operator:

  • cozy_cluster_info - cluster version info
  • cozy_nodes_count - node counts by OS/kernel
  • cozy_cluster_capacity - total CPU, memory, and GPU resources
  • cozy_loadbalancers_count - count of LoadBalancer Services
  • cozy_pvs_count - PVs by driver and size
  • cozy_package_info - installed packages with variants

cozystack-controller:

  • cozy_application_count - application instances per ApplicationDefinition kind

Test plan

  • Build the cozystack-operator and cozystack-controller images
  • Deploy to a test cluster
  • Verify telemetry is sent to the endpoint
  • Check that the metrics format matches the documentation

Split telemetry collection between cozystack-operator and cozystack-controller:

cozystack-operator now collects cluster-level metrics:
- cozy_cluster_info (cozystack_version, kubernetes_version)
- cozy_nodes_count (os, kernel)
- cozy_cluster_capacity (cpu, memory, nvidia.com/* resources)
- cozy_loadbalancers_count
- cozy_pvs_count (driver, size)
- cozy_package_info (name, variant)

cozystack-controller now collects application-level metrics:
- cozy_application_count (kind) - counts HelmReleases per ApplicationDefinition kind (see the sketch below)
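
As a rough illustration of the metric shapes (this is not code from this PR; whether the collectors use prometheus/client_golang or build the telemetry payload by hand is not shown here, and all variable names are assumptions), the lists above map onto gauge vectors like so:

package telemetry

import "github.com/prometheus/client_golang/prometheus"

// Sketch only: a few of the metrics above as client_golang gauge
// vectors, with label sets taken from the lists in this description.
var (
    clusterInfo = prometheus.NewGaugeVec(prometheus.GaugeOpts{
        Name: "cozy_cluster_info",
        Help: "Cluster version info.",
    }, []string{"cozystack_version", "kubernetes_version"})

    nodesCount = prometheus.NewGaugeVec(prometheus.GaugeOpts{
        Name: "cozy_nodes_count",
        Help: "Nodes by OS and kernel.",
    }, []string{"os", "kernel"})

    applicationCount = prometheus.NewGaugeVec(prometheus.GaugeOpts{
        Name: "cozy_application_count",
        Help: "Application instances per ApplicationDefinition kind.",
    }, []string{"kind"})
)

func init() {
    // Registration with the default registry is also an assumption.
    prometheus.MustRegister(clusterInfo, nodesCount, applicationCount)
}

A collector would then set one value per label combination, e.g. applicationCount.WithLabelValues(kind).Set(float64(n)) for each ApplicationDefinition kind.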

Other changes:
- Add pkg/version for build-time version injection via ldflags (see the sketch after this list)
- Remove the --cozystack-version flag (the version is now embedded at build time)
- Remove bundle/oidc configuration from telemetry (replaced by package_info)
- Remove cozy_workloads_count metric (replaced by cozy_application_count)
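
For context, the ldflags pattern usually looks like the sketch below; the exact contents of pkg/version are an assumption here, not copied from this PR:

// pkg/version/version.go (hypothetical sketch of the pattern)
package version

// Version is the fallback for local builds. Release builds override it
// at link time, e.g. (module path elided as <module>):
//
//   go build -ldflags "-X <module>/pkg/version.Version=v0.42.0" ./cmd/...
var Version = "dev"

Call sites then read version.Version directly, which is what makes the old --cozystack-version flag redundant.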

Co-Authored-By: Claude <noreply@anthropic.com>
Signed-off-by: Andrei Kvapil <kvapss@gmail.com>

coderabbitai bot commented Jan 16, 2026

Important

Review skipped

Draft detected.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.



@gemini-code-assist

Summary of Changes

Hello @kvaps, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the telemetry system within Cozystack to provide a more granular and accurate view of both cluster infrastructure and application deployments. By separating telemetry responsibilities between the operator and controller, the system can now collect more relevant data for each component. Additionally, the change streamlines version reporting through build-time injection and updates several metric definitions to better reflect the system's state.

Highlights

  • Telemetry Split: Telemetry collection has been split between cozystack-operator (for cluster metrics) and cozystack-controller (for application metrics), allowing for more focused data gathering.
  • Build-Time Version Injection: The Cozystack version is now injected at build time using ldflags, replacing the previous runtime flag approach for improved consistency and reliability.
  • Metric Refinements: The bundle/oidc telemetry has been replaced with a package_info metric, and cozy_workloads_count has been updated to cozy_application_count for clearer application-centric reporting.
  • New Operator Telemetry Collector: A new OperatorCollector has been introduced to specifically handle cluster-level metrics such as node counts, cluster capacity, load balancers, persistent volumes, and installed packages.



gemini-code-assist bot left a comment


Code Review

This pull request effectively refactors the telemetry collection by splitting responsibilities between the operator and the controller, which improves modularity. The change to inject the version at build time using ldflags is a good improvement over runtime flags. My review includes a high-severity suggestion to address significant code duplication between the new telemetry collectors, which would improve long-term maintainability. I've also included medium-severity comments to encourage the use of structured logging for better observability. Overall, this is a great set of changes that improves the telemetry architecture.

Comment on lines +46 to +67
func (c *OperatorCollector) Start(ctx context.Context) error {
    if c.config.Disabled {
        return nil
    }

    c.ticker = time.NewTicker(c.config.Interval)
    c.stopCh = make(chan struct{})

    // Initial collection
    c.collect(ctx)

    for {
        select {
        case <-ctx.Done():
            c.ticker.Stop()
            close(c.stopCh)
            return nil
        case <-c.ticker.C:
            c.collect(ctx)
        }
    }
}

Severity: high

There is significant code duplication between OperatorCollector in this file and Collector in internal/telemetry/collector.go. The Start, NeedLeaderElection, and sendMetrics methods are identical, and the struct definitions are very similar. This makes the code harder to maintain.

I recommend refactoring to use a common baseCollector that can be embedded by both OperatorCollector and Collector to hold the shared logic. This would centralize the common functionality, making it easier to manage.
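
A minimal sketch of that refactor, assuming the Config fields (Disabled, Interval) and the collect signature visible in this PR; the names and the NeedLeaderElection return value are illustrative, not taken from the codebase:

package telemetry

import (
    "context"
    "time"
)

// Config mirrors the fields the original Start uses (Disabled, Interval);
// the real Config in this PR may carry more.
type Config struct {
    Disabled bool
    Interval time.Duration
}

// baseCollector holds the lifecycle logic currently duplicated between
// OperatorCollector and Collector. A constructor on the embedding type
// is expected to wire collect to its concrete collect method.
type baseCollector struct {
    config  Config
    ticker  *time.Ticker
    stopCh  chan struct{}
    collect func(ctx context.Context)
}

func (b *baseCollector) Start(ctx context.Context) error {
    if b.config.Disabled {
        return nil
    }

    b.ticker = time.NewTicker(b.config.Interval)
    b.stopCh = make(chan struct{})

    // Initial collection, then collect on every tick.
    b.collect(ctx)

    for {
        select {
        case <-ctx.Done():
            b.ticker.Stop()
            close(b.stopCh)
            return nil
        case <-b.ticker.C:
            b.collect(ctx)
        }
    }
}

// NeedLeaderElection gates telemetry to the elected leader; returning
// true here is an assumption about the current behavior.
func (b *baseCollector) NeedLeaderElection() bool { return true }

// Embedding pulls Start and NeedLeaderElection in unchanged.
type OperatorCollector struct {
    baseCollector
    // operator-specific dependencies (client, etc.) go here.
}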

// Get cluster ID from kube-system namespace
var kubeSystemNS corev1.Namespace
if err := c.client.Get(ctx, types.NamespacedName{Name: "kube-system"}, &kubeSystemNS); err != nil {
    logger.Info(fmt.Sprintf("Failed to get kube-system namespace: %v", err))

Severity: medium

Throughout this file and internal/telemetry/collector.go, log messages are constructed using fmt.Sprintf. It is a best practice to use structured logging with key-value pairs for variable data. This makes logs more machine-readable and easier to query.
For example, this line could be improved.

Suggested change:
-    logger.Info(fmt.Sprintf("Failed to get kube-system namespace: %v", err))
+    logger.Info("Failed to get kube-system namespace", "error", err)
