diff --git a/README.md b/README.md
index abf6003..aebd2ac 100644
--- a/README.md
+++ b/README.md
@@ -1,2 +1,356 @@
 # lib-connection
+Deckhouse connections to nodes over SSH, and to kube-api both over SSH and directly.
+
+The library provides interfaces and its own implementations of an SSH client and a Kubernetes client.
+It also provides special providers for obtaining these clients (more on this below).
+Please DO NOT CREATE client implementations directly without need; use the providers instead.
+
+## Global settings
+
+Library routines need some global settings to run.
+They are described by the `Settings` interface [here](./pkg/settings/settings.go). An implementation can be created
+with the `NewBaseProviders` constructor. The following settings are available:
+- `LoggerProvider` - a func that provides a logger. By default, a silent logger is used.
+If you need debug logs, provide a logger with debug logging enabled.
+- `NodeTmpDir` - used for uploading bundles and some additional temp files to the remote node.
+Default: `/opt/deckhouse/tmp`.
+- `NodeBinPath` - currently used only by kube-proxy, which adds this path to the `PATH` env,
+because kubectl is kept under our own path on the node. Default: `/opt/deckhouse/bin`.
+- `IsDebug` - enables some debug routines.
+- `TmpDir` - root tmp dir. Default: `os.TempDir() + "/dhctl"`.
+- `AuthSock` - ssh-agent auth sock; if not set, `os.Getenv("SSH_AUTH_SOCK")` is used on every call.
+- `EnvsPrefix` - prefix for env variables in the flags parsers. Default: empty string.
+- `OnShutdown` - a function for adding routines to run at the end of your logic. Default: empty function.
+You can use the `tomb` package from dhctl for this.
+
+## SSH client
+
+The SSH client interface (`SSHClient`) is described [here](./pkg/ssh.go).
+With this interface you can run commands, upload and download files, run scripts and bundles,
+bring up tunnels and reverse tunnels, and start a kubernetes proxy so that a kubernetes client can be created over SSH.
+
+There are currently 3 implementations of `SSHClient`:
+- [cli](./pkg/ssh/clissh) - uses the `ssh` and `scp` binaries for SSH routines. If you keep these
+binaries under your own bin path, add that path to the `PATH` env before use.
+- [go](./pkg/ssh/gossh) - uses [our fork](https://github.com/deckhouse/lib-gossh) of the [crypto](https://pkg.go.dev/golang.org/x/crypto/ssh)
+library. We use the fork because it adds additional logging.
+- [testssh](./pkg/ssh/testssh) - our mock for testing purposes, without a real SSH connection.
+
+All implementations contain monitors and automatically reconnect SSH, tunnels, and kube-proxy if the
+connection fails.
+
+`Script` implementations contain the `ExecuteBundle` method for running a list of scripts
+(named a `bundle`) while reporting progress ([see the implementation here](./pkg/ssh/utils/bundle.go)).
+By default, it runs the `bashible` bundle from [deckhouse](https://github.com/deckhouse/deckhouse/blob/main/candi/bashible/bashible.sh.tpl).
+If you need to run your own bundle, pass `BundlerOption` options via the `Script.WithBundlerOpts` method.
+
+The library also provides the `Interface` interface for running commands and script routines on the local machine.
+
+## Kube client
+
+The Kubernetes client interface (`KubeClient`) is described [here](./pkg/kube.go). It implements the `client-go` client
+interface with some additional methods.
+
+There are currently two implementations of this interface:
+- [KubernetesClient](./pkg/kube/client.go) - [uses](https://github.com/flant/kube-client) the kube-client library; this
+implementation can work with a kubeconfig, a rest client, a local run, and over SSH with kube-proxy.
+- [ErrorKubernetesClient](./pkg/kube/error_client.go) - always returns an error for all calls. It exists
+to prevent using a closed kube client (more on this below).
+
+A `KubeClient` can be stopped with the `Stop` method. When used over an SSH connection, it stops the kube-proxy,
+and also the client itself if the `full` flag is passed.
+Also, the `Stop` method switches the inner `KubeClient` to an `ErrorKubernetesClient` to
+prevent using the closed client and to avoid further attempts to reach kube-proxy.
+
+## Clients providers interfaces
+
+The library implements its own provider interfaces for obtaining clients, for lightweight usage in your routines.
+
+### SSHProvider
+
+Described [here](./pkg/ssh.go) as `SSHProvider`. It has the following methods:
+- `Client` - provides an SSH client with the default settings passed to the provider. Implementations should cache
+the current client. Use this method for getting an `SSHClient`. Please do not stop this client directly.
+- `SwitchClient` - switches the current `SSHClient` to new settings. This is needed when you first connect with the
+defaults but later in your logic need a new connection. For example, you connect to a master, create a new user, and
+should continue working as the new user. It closes the current `SSHClient` if one was obtained via the `Client`
+method, but is safe if `Client` was never called. Warning! This method returns an `SSHClient`, but DO NOT SAVE it
+in your structures. Use `Client` for getting the current client.
+Example usage:
+```go
+package my
+func do(){
+    // init provider
+    // provider.Client()
+    // ...
+    // creating a new user over the default client
+    // provider.SwitchClient()
+    // provider.Client()
+    // ...
+    // provider.Client()
+    // ...
+}
+```
+- `SwitchToDefault` - use it if you need the default-configuration client again after `SwitchClient`.
+For example, you connect to a master, create a new user, do all routines as the new user,
+and then continue with the default one. It closes the current `SSHClient` if one was obtained via the `Client` or
+`SwitchClient` method, but is safe if neither was called. Warning! This method returns an
+`SSHClient`, but DO NOT SAVE it in your structures. Use `Client` for getting the current client.
+Example usage:
+```go
+package my
+func do(){
+    // init provider
+    // provider.Client()
+    // ...
+    // creating a new user over the default client
+    // provider.SwitchClient()
+    // provider.Client()
+    // ...
+    // provider.SwitchToDefault()
+    // provider.Client()
+    // delete the created user over the default client
+    // ...
+}
+```
+- `NewAdditionalClient` - creates a new additional client with the default configuration. Use it when you want
+another connection without affecting the current client. The provider saves all clients created via this method
+for cleanup. If a client is no longer needed, you can stop it with its `Stop` method.
+- `NewStandaloneClient` - creates a new standalone client. Use it when you need to connect to other hosts.
+The provider saves all clients created via this method for cleanup. If a client is no longer needed,
+you can stop it with its `Stop` method.
+- `Cleanup` - the provider may create files for its routines, such as private keys passed from the configuration.
+These files are deleted by this call. It also stops the current client and all additional clients created with
+`NewAdditionalClient` and `NewStandaloneClient`. It is safe to call when the provider has no current or
+additional clients, and also when some or all clients were already stopped. The current client and all additional
+clients are removed from the provider. Use this method at the end of your logic.
+
+There are three implementations of `SSHProvider`: `DefaultSSHProvider`, `SSHProvider` in the `testssh` package,
+and `ErrorSSHProvider`.
+
+#### DefaultSSHProvider
+
+`DefaultSSHProvider` provides clients with the default configuration passed to it.
+
+The configuration can be provided with [this package](./pkg/ssh/config/config.go).
+
+You can create this configuration (the `ConnectionConfig` struct) directly,
+by [parsing flags](./pkg/ssh/config/parse_flags.go), or by parsing a
+[configuration document](./pkg/ssh/config/parse_config.go). The document schemas are described
+[here](./pkg/ssh/config/openapi/).
+If you need these schemas in your project
+(for example, to render documentation from the specs), you can download them in CI, in a makefile, or directly.
+You can see how to download the specs over the GitHub API in the [makefile](./Makefile) `validation/license/download`
+target.
+
+###### ParseConnectionConfig
+
+`ParseConnectionConfig` takes a reader with documents and returns a `ConnectionConfig` struct.
+By default, `ParseConnectionConfig` does not allow a configuration without hosts or with unknown kinds.
+To change this, use the `ParseWithRequiredSSHHost` and `ParseWithSkipUnknownKinds` options.
+`ParseConnectionConfig` also performs additional checks, e.g. that private keys can be parsed (with the provided
+password, if one is set) and that `legacyMode` and `modernMode` are not both set.
+
+###### ParseFlags
+
+`FlagsParser` builds a `ConnectionConfig` from cli arguments. It uses the `https://github.com/spf13/pflag` package for parsing.
+All flags can be overridden with env variables [described in](./pkg/ssh/config/parse_flags.go). You can
+provide a prefix for the env variables with the `WithEnvsPrefix` method. Parsing is done in the following order:
+```go
+package my
+
+import (
+    "flag"
+    "os"
+)
+
+func do() error {
+    // create and prepare the parser
+    parser := NewFlagsParser()
+    parser.WithEnvsPrefix("DHCTL")
+    // init flags; you can also pass your own flag set, the parser skips unknown flags
+    fset := flag.NewFlagSet("my-set", flag.ExitOnError)
+    flags, err := parser.InitFlags(fset)
+    if err != nil {
+        return err
+    }
+    // or you can provide your own arguments slice
+    err = flags.Parse(os.Args[1:])
+    if err != nil {
+        return err
+    }
+
+    // you can use ValidateOption to configure parsing
+    config, err := parser.ExtractConfigAfterParse(flags)
+    if err != nil {
+        return err
+    }
+    _ = config
+
+    return nil
+}
+```
+
+The flags parser uses a copy of the passed flag set for parsing. If you need to parse another flag set,
+you can get a new flag set with the `FlagSet` method and parse it yourself.
+After parsing, extract the `ConnectionConfig` with the `ExtractConfigAfterParse` method.
+
+By default, hosts are not required for parsing; you can change this with `ParseWithRequiredSSHHost`.
+This is because we may parse the SSH configuration and the kube configuration together, and if a kubeconfig
+path is given we should skip all SSH flags, so an empty SSH flag set is valid in that case.
+We can still use the `OverSSH` method in the kube configuration. But warning: you can use SSH routines and kube
+in one piece of logic while kubeconfig is used for the kube connection.
+`ExtractConfigAfterParse` adds some defaults when flags are not passed: port and bastion port (22 by default),
+user and bastion user (the current user from the `USER` env, or obtained via syscalls).
+Also, by default the flags parser adds the `~/.ssh/id_rsa` private key. In some cases this is not wanted:
+when the user authenticates with a password (without a private key), or wants to use ssh-agent private keys only.
+To force password auth, the user should pass `--force-no-private-keys` together with `--ask-become-pass`.
+To force ssh-agent private keys only, the user should pass `--force-no-private-keys` together with
+`--use-agent-with-no-private-keys` and set `SSH_AUTH_SOCK` (in this case the parser checks that this
+env value points to an existing file).
+The flags parser also performs some additional checks on the parsed flags:
+- private key files must parse as valid private keys. If a private key is protected with a password, the parser
+asks for the key password from the terminal. If you need your own extraction logic, set an extractor with the
+`WithPrivateKeyPasswordExtractor` method
+- `--ssh-legacy-mode` and `--ssh-modern-mode` must not both be provided
+- if `--ask-become-pass` and/or `--ask-bastion-pass` is passed, the parser asks for the passwords from the terminal.
+If you need your own password-prompting logic, provide a func with the `WithAsk` method, like here:
+```go
+package my
+func do() {
+    // ...
+    parser.WithAsk(func(prompt string) ([]byte, error) {
+        switch prompt {
+        case "[bastion] Password: ":
+            return []byte("not secure bastion password"), nil
+        case "[sudo] Password: ":
+            return []byte("not secure sudo password"), nil
+        default:
+            return nil, fmt.Errorf("unknown prompt %s", prompt)
+        }
+    })
+}
+```
+- the parser also checks that an auth method was provided (private keys, sudo pass, or agent private keys).
+
+The user can pass a document file with the connection config via the `--connection-config` flag. If this flag is
+provided, the parser returns a `ConnectionConfig` parsed with `ParseConnectionConfig`. If the user passes the
+connection config path together with other flags, the parser returns an error.
+
+###### Create `ConnectionConfig` directly
+
+If you create a `ConnectionConfig` and want to use ssh-agent only, set the `ForceUseSSHAgent` field to true.
+`AgentPrivateKey` can treat its `Key` field as content or as a file path. If you provide the key as a file,
+set the `IsPath` field to true.
+
+##### DefaultSSHProvider logic
+
+The user can pass private keys in a `ConnectionConfig` as a file path or as content. When passed as content,
+`DefaultSSHProvider` creates temp files with the private keys, because the internal logic processes private keys
+as files. All such files are deleted on the `Cleanup` call.
+Also, when creating any client (additional, standalone, switched), the provider adds the private keys from the
+default configuration by default. For example, when you switch the client, you do not need to add the private keys
+from the current client yourself for a safe switch.
+
+`DefaultSSHProvider` chooses the client implementation by the following rules:
+- if the `SSHClientWithForceGoSSH` option is provided, it returns go-ssh
+- if `ForceModern` is set in the configuration, it returns go-ssh
+- if `ForceLegacy` is set in the configuration, it returns cli-ssh
+- if the configuration does not contain private keys, it returns go-ssh, because cli-ssh does not support
+password authentication
+- by default, it returns cli-ssh. Warning! This behaviour can change in the future.
+
+By default, the provider does not start the client; if you need that, pass the `SSHClientWithStartAfterCreate` option.
+
+#### ErrorSSHProvider
+
+This provider returns an error for every call. It can be used with `KubeProvider` if you are sure
+that the kube client is not used over SSH.
+
+#### SSHProvider in `testssh`
+
+You can pass this provider in unit tests. It saves all switch calls, so you can assert on them.
+
+### KubeProvider
+
+Provides a kubernetes client. It has the following methods:
+- `Client` - gets the current client, or initializes a new one if none is set. The client is cached.
+If your client is used in a retry loop, call `Client` on every iteration.
+Please do not save the client in your structures; call `Client` for every kube-api routine.
+And do not stop this client directly.
+- `NewAdditionalClient` - initializes a new client. Use it when you do not want to affect the current client.
+If you no longer need the client, call its `Stop` method to stop the client and its inferiors.
+All clients created with this method are saved in the provider.
+- `NewAdditionalClientWithoutInitialize` - creates a new client but does not initialize it. To start the client,
+use `client.InitContext`. Use it when you do not want to affect the current client.
+All clients created with this method are saved in the provider.
+- `Cleanup` - stops all additional clients obtained from `NewAdditionalClient` and `NewAdditionalClientWithoutInitialize`.
+The current client is also stopped, but not fully, because when working over SSH the current client can be used
+in other routines. `Cleanup` is safe to call on stopped clients.
+
+There are currently two implementations:
+- `DefaultKubeProvider` - provides the default client based on its config
+- `FakeKubeProvider` - provides fake clients for use in tests.
+
+#### DefaultKubeProvider
+
+`DefaultKubeProvider` creates a kube provider depending on the passed user configuration.
+The configuration is described [here](./pkg/kube/config.go).
+
+The kube client is created in the following order:
+- if `config.KubeConfigInCluster` is set, the provider uses the `in-cluster` configuration. This
+should be used for creating a kube client inside containers in a k8s cluster
+- if `config.KubeConfig` (a path to a kubeconfig) is set, that kubeconfig is used for the connection
+- if `config.RestConfig` is set, that configuration is used to connect to the kube API. This is needed
+if you want to connect with a BearerToken
+- if `config.LocalKubeClient` is set, a direct connection on the same host is used
+- by default, kube-proxy over SSH is used.
+
+##### Parse configuration from flags
+
+You can use `kube.FlagsParser` to extract the configuration from cli flags.
+This parser follows the same rules as the [ssh flags parser](#parseflags). Via flags, a client can be configured
+only with a kubeconfig path (optionally with a context from that kubeconfig) or in `in-cluster` mode. For other
+options, like local or rest config, prepare the configuration in code.
+`FlagsParser` performs the following additional checks:
+- fails if `in-cluster` mode is passed together with a kubeconfig path
+- if a kubeconfig is provided, checks that it is a valid kubeconfig
+- if a context is passed, checks that the kubeconfig contains this context.
+
+Warning! The parser also checks the `KUBECONFIG` env. If this env is set, the parser uses its value as the
+kubeconfig path.
+
+##### Provider initialization and logic
+
+To init the provider, you pass a special interface, `RunnerInterface`; it provides routines for the additional
+logic used depending on the configuration. To get an implementation, use `GetRunnerInterface`.
+This function checks that the configuration is not conflicting (only one connection method is used).
+For the kubeconfig, `in-cluster`, and rest config modes, the implementations do not contain complex logic,
+but for SSH the logic is complex.
+
+###### Kube-proxy (over ssh) mode
+
+`RunnerInterfaceSSH` receives an `SSHProvider` that provides the client for starting kube-proxy.
+
+On a `Client` call, the provider (in fact `RunnerInterfaceSSH`) uses the `SSHProvider.Client()` method.
+On every call, the provider checks that the ssh-client configuration is the same as the current one.
+If it is, the currently saved kube client is returned. Otherwise, the provider initializes a new kube client
+with the obtained `SSHClient`. During initialization it also checks that the SSH host is available and switches to
+another host if needed. After initializing the new kube client, it stops the current kube client, but not fully.
+This logic keeps `KubeProvider` simple to use: you do not need to track SSH switches in your logic.
+And that is why you need a `Client` call for every kube API interaction.
+
+`NewAdditionalClient` and `NewAdditionalClientWithoutInitialize` always create a new ssh-client with
+`sshProvider.NewAdditionalClient`. That is why you can stop these kube clients fully. All these clients are
+saved internally for cleanup.
+
+Before returning a new kube client, the provider checks that the kube API is available.
+
+`Cleanup` - stops all additional clients fully, but the current one not fully (only kube-proxy), because
+the current kube client uses the current ssh client, and that client can be used in subsequent operations in your code.
+
+#### FakeKubeProvider
+
+Provides a fake kube client.
+
+On creation, `FakeKubeProvider` creates the current kube client and returns it from all methods.
+This is useful for testing resources when you use additional clients in one place without saving those
+clients in your code. After exercising your methods, you can use a `Client` call to get the kube client and
+assert on resources.
+`KubernetesClient.InitContext` is safe to call with the fake client \ No newline at end of file diff --git a/go.mod b/go.mod index 835a7a6..1e3491c 100644 --- a/go.mod +++ b/go.mod @@ -7,6 +7,7 @@ require ( github.com/bramvdbogaerde/go-scp v1.6.0 github.com/deckhouse/lib-dhctl v0.13.0 github.com/deckhouse/lib-gossh v0.3.0 + github.com/flant/kube-client v1.5.1 github.com/go-openapi/spec v0.19.8 github.com/google/uuid v1.6.0 github.com/hashicorp/go-multierror v1.1.1 @@ -15,6 +16,7 @@ require ( github.com/spf13/pflag v1.0.10 github.com/stretchr/testify v1.11.1 golang.org/x/term v0.39.0 + k8s.io/apimachinery v0.32.10 k8s.io/client-go v0.32.10 sigs.k8s.io/yaml v1.6.0 ) @@ -23,9 +25,14 @@ require ( github.com/DataDog/gostackparse v0.7.0 // indirect github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535 // indirect github.com/avelino/slugify v0.0.0-20180501145920-855f152bd774 // indirect + github.com/beorn7/perks v1.0.1 // indirect + github.com/blang/semver/v4 v4.0.0 // indirect + github.com/cespare/xxhash/v2 v2.3.0 // indirect github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect github.com/deckhouse/deckhouse/pkg/log v0.1.1-0.20251230144142-2bad7c3d1edf // indirect + github.com/emicklei/go-restful/v3 v3.11.0 // indirect github.com/fxamacker/cbor/v2 v2.7.0 // indirect + github.com/go-errors/errors v1.4.2 // indirect github.com/go-logr/logr v1.4.2 // indirect github.com/go-openapi/analysis v0.19.10 // indirect github.com/go-openapi/errors v0.19.7 // indirect @@ -38,8 +45,14 @@ require ( github.com/go-openapi/validate v0.19.12 // indirect github.com/go-stack/stack v1.8.0 // indirect github.com/gogo/protobuf v1.3.2 // indirect + github.com/golang/protobuf v1.5.4 // indirect + github.com/google/btree v1.0.1 // indirect + github.com/google/gnostic-models v0.6.8 // indirect + github.com/google/go-cmp v0.6.0 // indirect github.com/google/gofuzz v1.2.0 // indirect + github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 // indirect
github.com/gookit/color v1.5.2 // indirect + github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 // indirect github.com/hashicorp/errwrap v1.0.0 // indirect github.com/josharian/intern v1.0.0 // indirect github.com/json-iterator/go v1.1.12 // indirect @@ -47,25 +60,41 @@ require ( github.com/mitchellh/mapstructure v1.3.2 // indirect github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd // indirect github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 // indirect github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect + github.com/peterbourgon/diskv v2.0.1+incompatible // indirect github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect + github.com/prometheus/client_golang v1.19.1 // indirect + github.com/prometheus/client_model v0.6.1 // indirect + github.com/prometheus/common v0.55.0 // indirect + github.com/prometheus/procfs v0.15.1 // indirect github.com/rogpeppe/go-internal v1.14.1 // indirect github.com/werf/logboek v0.5.5 // indirect github.com/x448/float16 v0.8.4 // indirect + github.com/xlab/treeprint v1.2.0 // indirect github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778 // indirect go.mongodb.org/mongo-driver v1.5.1 // indirect go.yaml.in/yaml/v2 v2.4.2 // indirect + go.yaml.in/yaml/v3 v3.0.3 // indirect golang.org/x/crypto v0.47.0 // indirect golang.org/x/net v0.48.0 // indirect golang.org/x/oauth2 v0.23.0 // indirect + golang.org/x/sync v0.19.0 // indirect golang.org/x/sys v0.40.0 // indirect golang.org/x/text v0.33.0 // indirect golang.org/x/time v0.7.0 // indirect + google.golang.org/protobuf v1.35.1 // indirect + gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect gopkg.in/inf.v0 v0.9.1 // indirect gopkg.in/yaml.v3 v3.0.1 // indirect - k8s.io/apimachinery v0.32.10 // indirect + k8s.io/api v0.32.10 // indirect + k8s.io/apiextensions-apiserver v0.32.10 // indirect + k8s.io/cli-runtime v0.32.10 // indirect 
k8s.io/klog/v2 v2.130.1 // indirect + k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect + sigs.k8s.io/kustomize/api v0.18.0 // indirect + sigs.k8s.io/kustomize/kyaml v0.18.1 // indirect sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect ) diff --git a/go.sum b/go.sum index c801d09..c7ffd53 100644 --- a/go.sum +++ b/go.sum @@ -16,8 +16,14 @@ github.com/asaskevich/govalidator v0.0.0-20200428143746-21a406dcc535/go.mod h1:o github.com/avelino/slugify v0.0.0-20180501145920-855f152bd774 h1:HrMVYtly2IVqg9EBooHsakQ256ueojP7QuG32K71X/U= github.com/avelino/slugify v0.0.0-20180501145920-855f152bd774/go.mod h1:5wi5YYOpfuAKwL5XLFYopbgIl/v7NZxaJpa/4X6yFKE= github.com/aws/aws-sdk-go v1.34.28/go.mod h1:H7NKnBqNVzoTJpGfLrQkkD+ytBA93eiDYi/+8rV9s48= +github.com/beorn7/perks v1.0.1 h1:VlbKKnNfV8bJzeqoa4cOKqO6bYr3WgKZxO8Z16+hsOM= +github.com/beorn7/perks v1.0.1/go.mod h1:G2ZrVWU2WbWT9wwq4/hrbKbnv/1ERSJQ0ibhJ6rlkpw= +github.com/blang/semver/v4 v4.0.0 h1:1PFHFE6yCCTv8C1TeyNNarDzntLi7wMI5i/pzqYIsAM= +github.com/blang/semver/v4 v4.0.0/go.mod h1:IbckMUScFkM3pff0VJDNKRiT6TG/YpiHIM2yvyW5YoQ= github.com/bramvdbogaerde/go-scp v1.6.0 h1:lDh0lUuz1dbIhJqlKLwWT7tzIRONCp1Mtx3pgQVaLQo= github.com/bramvdbogaerde/go-scp v1.6.0/go.mod h1:on2aH5AxaFb2G0N5Vsdy6B0Ml7k9HuHSwfo1y0QzAbQ= +github.com/cespare/xxhash/v2 v2.3.0 h1:UL815xU9SqsFlibzuggzjXhog7bL6oX9BbNZnL2UFvs= +github.com/cespare/xxhash/v2 v2.3.0/go.mod h1:VGX0DQ3Q6kWi7AoAeZDth3/j3BFtOZR5XLFGgcrjCOs= github.com/creack/pty v1.1.9/go.mod h1:oKZEueFk5CKHvIhNR5MUki03XCEU+Q6VDXinZuGJ33E= github.com/davecgh/go-spew v1.1.0/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= github.com/davecgh/go-spew v1.1.1/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38= @@ -33,10 +39,14 @@ github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDD github.com/docker/go-units v0.4.0/go.mod 
h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g= github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/flant/kube-client v1.5.1 h1:9UTPMxZqAPHUQzWS/4yE5hEPNIYhS+gGmegfi3r2lvQ= +github.com/flant/kube-client v1.5.1/go.mod h1:hpJZ0FnDKHW3r5q5SYQgBrTw9k94q4+dcnJ4uOGYBHc= github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q= github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q= +github.com/go-errors/errors v1.4.2 h1:J6MZopCL4uSllY1OfXM374weqZFFItUbrImctkmUxIA= +github.com/go-errors/errors v1.4.2/go.mod h1:sIVyrIiJhuEF+Pj9Ebtd6P/rEYROXFi3BopGUQ5a5Og= github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= github.com/go-logr/logr v1.4.2/go.mod h1:9T104GzyrTigFIr8wt5mBrctHMim0Nb2HLGrmQ40KvY= github.com/go-openapi/analysis v0.0.0-20180825180245-b006789cd277/go.mod h1:k70tL6pCuVxPJOHXQ+wIac1FUrvNkHolPie/cLEU6hI= @@ -113,6 +123,8 @@ github.com/go-openapi/validate v0.19.12/go.mod h1:Rzou8hA/CBw8donlS6WNEUQupNvUZ0 github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= github.com/gobuffalo/attrs v0.0.0-20190224210810-a9411de4debd/go.mod h1:4duuawTqi2wkkpB4ePgWMaai6/Kc6WEz83bhFwpHzj0= github.com/gobuffalo/depgen 
v0.0.0-20190329151759-d478694a28d3/go.mod h1:3STtPUQYuzV0gBVOY3vy6CfMm/ljR4pABfrTeHNLHUY= github.com/gobuffalo/depgen v0.1.0/go.mod h1:+ifsuy7fhi15RWncXQQKjWS9JPkdah5sZvtHc2RXGlg= @@ -142,6 +154,8 @@ github.com/gogo/protobuf v1.3.2/go.mod h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69 github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= +github.com/google/btree v1.0.1 h1:gK4Kx5IaGY9CD5sPJ36FHiBJ6ZXl0kilRiiCj+jdYp4= +github.com/google/btree v1.0.1/go.mod h1:xXMiIv4Fb/0kKde4SpL7qlzvu5cMJDRkFDxJfI9uaxA= github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I= github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= @@ -153,6 +167,8 @@ github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeN github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo= +github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144= github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510 h1:El6M4kTTCOh6aBiKaUGG7oYTSPP8MxqL4YI3kZKwcP4= github.com/google/shlex v0.0.0-20191202100458-e7afc7fbc510/go.mod h1:pupxD2MaaD3pAXIBCelhxNneeOaAeabZDe5s4K6zSpQ= github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= @@ -161,6 +177,8 @@ github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= github.com/google/uuid v1.6.0/go.mod 
h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/gookit/color v1.5.2 h1:uLnfXcaFjlrDnQDT+NCBcfhrXqYTx/rcCa6xn01Y8yI= github.com/gookit/color v1.5.2/go.mod h1:w8h4bGiHeeBpvQVePTutdbERIUf3oJE5lZ8HM0UgXyg= +github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79 h1:+ngKgrYPPJrOjhax5N+uePQ0Fh1Z7PheYoUI/0nzkPA= +github.com/gregjones/httpcache v0.0.0-20190611155906-901d90724c79/go.mod h1:FecbI9+v66THATjSRHfNgh1IVFe/9kFxbXtjV0ctIMA= github.com/hashicorp/errwrap v1.0.0 h1:hLrqtEDnRye3+sgx6z4qVLNuviH3MR5aQ0ykNJa/UYA= github.com/hashicorp/errwrap v1.0.0/go.mod h1:YH+1FKiLXxHSkmPseP+kNlulaMuP3n2brvKWEqk/Jc4= github.com/hashicorp/go-multierror v1.1.1 h1:H5DkEtf6CXdFp0N0Em5UCwQpXMWke8IA0+lD48awMYo= @@ -207,15 +225,23 @@ github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= github.com/modern-go/reflect2 v1.0.2/go.mod h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= +github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00 h1:n6/2gBQ3RWajuToeY6ZtZTIKv2v7ThUy5KKusIT0yc0= +github.com/monochromegane/go-gitignore v0.0.0-20200626010858-205db1a8cc00/go.mod h1:Pm3mSP3c5uWn86xMLZ5Sa7JB9GsEZySvHYXCTK4E9q4= github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/name212/govalue v1.1.0 h1:kSdUVs21cM5bFp7RW5sWPrwQ0RzC/Xhk3f+A+dUL6TM= github.com/name212/govalue v1.1.0/go.mod h1:3mLA4mFb82esucQHCOIAnUjN7e7AZnRYEfxeaHLKjho= github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod 
h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= +github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM= +github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo= +github.com/onsi/gomega v1.35.1 h1:Cwbd75ZBPxFSuZ6T+rN/WCb/gOc6YgFBXLlZLhC7Ds4= +github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog= github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k= github.com/pelletier/go-toml v1.4.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo= github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE= +github.com/peterbourgon/diskv v2.0.1+incompatible h1:UBdAOUP5p4RWqPBg048CAvpKN+vxiaj6gdUUzhl4XmI= +github.com/peterbourgon/diskv v2.0.1+incompatible/go.mod h1:uqqh8zWWbv1HBMNONnaR/tNboyR3/BZd58JJSHlUSCU= github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= @@ -223,12 +249,22 @@ github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINE github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= +github.com/prometheus/client_golang v1.19.1 h1:wZWJDwK+NameRJuPGDhlnFgx8e8HN3XHQeLaYJFJBOE= +github.com/prometheus/client_golang v1.19.1/go.mod h1:mP78NwGzrVks5S2H6ab8+ZZGJLZUq1hoULYBAYBw1Ho= +github.com/prometheus/client_model v0.6.1 h1:ZKSh/rekM+n3CeS952MLRAdFwIKqeY8b62p8ais2e9E= +github.com/prometheus/client_model v0.6.1/go.mod h1:OrxVMOVHjw3lKMa8+x6HeMGkHMQyHDk9E3jmP2AmGiY= +github.com/prometheus/common v0.55.0 
h1:KEi6DK7lXW/m7Ig5i47x0vRzuBsHuvJdi5ee6Y3G1dc= +github.com/prometheus/common v0.55.0/go.mod h1:2SECS4xJG1kd8XF9IcM1gMX6510RAEL65zxzNImwdc8= +github.com/prometheus/procfs v0.15.1 h1:YagwOFzUgYfKKHX6Dr+sHT7km/hxC76UB0learggepc= +github.com/prometheus/procfs v0.15.1/go.mod h1:fB45yRUv8NstnjriLhBQLuOUt+WW4BsoGhij/e3PBqk= github.com/rogpeppe/go-internal v1.1.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.2.2/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFRclV5y23lUDJ4= github.com/rogpeppe/go-internal v1.14.1 h1:UQB4HGPB6osV0SQTLymcB4TgvyWu6ZyliaW0tI/otEQ= github.com/rogpeppe/go-internal v1.14.1/go.mod h1:MaRKkUm5W0goXpeCfT7UZI6fk/L7L7so1lCWt35ZSgc= github.com/sergi/go-diff v1.0.0/go.mod h1:0CfEIISq7TuYL3j771MWULgwwjU+GofnZX9QAmXWZgo= +github.com/sergi/go-diff v1.2.0 h1:XU+rvMAioB0UC3q1MFrIQy4Vo5/4VsRDQQXHsEya6xQ= +github.com/sergi/go-diff v1.2.0/go.mod h1:STckp+ISIX8hZLjrqAeVduY0gWCT9IjLuqbuNXdaHfM= github.com/sirupsen/logrus v1.4.0/go.mod h1:LxeOpSwHxABJmUn/MG1IvRgCAasNZTLOkJPxbbu5VWo= github.com/sirupsen/logrus v1.4.1/go.mod h1:ni0Sbl8bgC9z8RoU9G6nDWqqs/fq4eDPysMBDgk/93Q= github.com/sirupsen/logrus v1.4.2/go.mod h1:tLMulIdttU9McNUspp0xgXVQah82FyeX6MwdIuYE2rE= @@ -241,6 +277,8 @@ github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+ github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE= github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw= github.com/stretchr/objx v0.5.0/go.mod h1:Yh+to48EsGEfYuaHDzXPcE3xhTkx73EhmCGUpEOglKo= +github.com/stretchr/objx v0.5.2 h1:xuMeJ0Sdp5ZMRXx/aWO6RZxdr3beISkG5/G/aIRr3pY= +github.com/stretchr/objx v0.5.2/go.mod h1:FRsXN1f5AsAjCGJKqEizvkpNtU+EGNCLh3NxZ/8L+MA= github.com/stretchr/testify v1.2.2/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs= github.com/stretchr/testify v1.3.0/go.mod 
h1:M5WIy9Dh21IEIfnGCwXGc5bZfKNJtfHm1UVUgZn+9EI= github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81PSLYec5m4= @@ -263,6 +301,8 @@ github.com/xdg-go/scram v1.0.2/go.mod h1:1WAq6h33pAW+iRreB34OORO2Nf7qel3VV3fjBj+ github.com/xdg-go/stringprep v1.0.2/go.mod h1:8F9zXuvzgwmyT5DUm4GUfZGDdT3W+LCvS6+da4O5kxM= github.com/xdg/scram v0.0.0-20180814205039-7eeb5667e42c/go.mod h1:lB8K/P019DLNhemzwFU4jHLhdvlE6uDZjXFejJXr49I= github.com/xdg/stringprep v0.0.0-20180714160509-73f8eece6fdc/go.mod h1:Jhud4/sHMO4oL310DaZAKk9ZaJ08SJfe+sJh0HrGL1Y= +github.com/xlab/treeprint v1.2.0 h1:HzHnuAF1plUN2zGlAFHbSQP2qJ0ZAD3XF5XD7OesXRQ= +github.com/xlab/treeprint v1.2.0/go.mod h1:gj5Gd3gPdKtR1ikdDK6fnFLdmIS0X30kTTuNd/WEJu0= github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778 h1:QldyIu/L63oPpyvQmHgvgickp1Yw510KJOqX7H24mg8= github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs= github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA= @@ -274,6 +314,8 @@ go.mongodb.org/mongo-driver v1.3.0/go.mod h1:MSWZXKOynuguX+JSvwP8i+58jYCXxbia8HS go.mongodb.org/mongo-driver v1.3.4/go.mod h1:MSWZXKOynuguX+JSvwP8i+58jYCXxbia8HS3gZBapIE= go.mongodb.org/mongo-driver v1.5.1 h1:9nOVLGDfOaZ9R0tBumx/BcuqkbFpyTCU2r/Po7A2azI= go.mongodb.org/mongo-driver v1.5.1/go.mod h1:gRXCHX4Jo7J0IJ1oDQyUxF7jfy19UfxniMS4xxMmUqw= +go.uber.org/goleak v1.3.0 h1:2K3zAYmnTNqV73imy9J1T3WC+gmCePx2hEGkimedGto= +go.uber.org/goleak v1.3.0/go.mod h1:CoHD4mav9JJNrW/WLlf7HGZPjdw8EucARQHekz1X6bE= go.yaml.in/yaml/v2 v2.4.2 h1:DzmwEr2rDGHl7lsFgAHxmNz/1NlQ7xLIrlN2h5d1eGI= go.yaml.in/yaml/v2 v2.4.2/go.mod h1:081UH+NErpNdqlCXm3TtEran0rJZGxAYx9hb/ELlsPU= go.yaml.in/yaml/v3 v3.0.3 h1:bXOww4E/J3f66rav3pX3m8w6jDE4knZjGOw8b5Y6iNE= @@ -312,6 +354,8 @@ golang.org/x/sync v0.0.0-20190412183630-56d357773e84/go.mod h1:RxMgew5VJxzue5/jJ golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod 
h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.19.0 h1:vV+1eWNmZ5geRlYjzm2adRgW2/mcpevXNg50YZtPCE4= +golang.org/x/sync v0.19.0/go.mod h1:9KTHXmSnoGruLpwFjVSX0lNNA75CykiMECbovNTZqGI= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -348,6 +392,8 @@ golang.org/x/tools v0.0.0-20190617190820-da514acc4774/go.mod h1:/rFqwRUd4F7ZHNgw golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.40.0 h1:yLkxfA+Qnul4cs9QA3KnlFu0lVmd8JJfoq+E41uSutA= +golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc= golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= @@ -368,16 +414,21 @@ gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.8/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= 
-gopkg.in/yaml.v2 v2.3.0 h1:clyUAQHOM3G0M3f5vQj7LuJrETvjVot3Z5el9nffUtU= gopkg.in/yaml.v2 v2.3.0/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= +gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY= +gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ= gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.0-20200605160147-a5ece683394c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= k8s.io/api v0.32.10 h1:ocp4turNfa1V40TuBW/LuA17TeXG9g/GI2ebg0KxBNk= k8s.io/api v0.32.10/go.mod h1:AsMsc4b6TuampYqgMEGSv0HBFpRS4BlKTXAVCAa7oF4= +k8s.io/apiextensions-apiserver v0.32.10 h1:mAZT8fX/jM9pl7qWkFhhsjQZ8ZkmAhEivfUNw8uKXmo= +k8s.io/apiextensions-apiserver v0.32.10/go.mod h1:wEvqU9kFUQOYminqrroY6+fvSs6iMb7QiiFmcN3b6KY= k8s.io/apimachinery v0.32.10 h1:SAg2kUPLYRcBJQj66oniP1BnXSqw+l1GvJFsJlBmVvQ= k8s.io/apimachinery v0.32.10/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE= +k8s.io/cli-runtime v0.32.10 h1:NdVJeZ27+frB/Gf7siv38nagk2uw1avvWYiq5flv/Yk= +k8s.io/cli-runtime v0.32.10/go.mod h1:4zBnMXj6rsJH8b1DE4TEYBcw+AZ7MtZ4YPaq+EkzVOo= k8s.io/client-go v0.32.10 h1:MFmIjsKtcnn7mStjrJG1ZW2WzLsKKn6ZtL9hHM/W0xU= k8s.io/client-go v0.32.10/go.mod h1:qJy/Ws3zSwnu/nD75D+/of1uxbwWHxrYT5P3FuobVLI= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= @@ -388,6 +439,10 @@ k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6J k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8= sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo= +sigs.k8s.io/kustomize/api v0.18.0 
h1:hTzp67k+3NEVInwz5BHyzc9rGxIauoXferXyjv5lWPo= +sigs.k8s.io/kustomize/api v0.18.0/go.mod h1:f8isXnX+8b+SGLHQ6yO4JG1rdkZlvhaCf/uZbLVMb0U= +sigs.k8s.io/kustomize/kyaml v0.18.1 h1:WvBo56Wzw3fjS+7vBjN6TeivvpbW9GmRaWZ9CIVmt4E= +sigs.k8s.io/kustomize/kyaml v0.18.1/go.mod h1:C3L2BFVU1jgcddNBE1TxuVLgS46TjObMwW5FT9FcjYo= sigs.k8s.io/structured-merge-diff/v4 v4.4.2 h1:MdmvkGuXi/8io6ixD5wud3vOLwc1rj0aNqRlpuvjmwA= sigs.k8s.io/structured-merge-diff/v4 v4.4.2/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4= sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= diff --git a/hack/kind/cluster-kube-proxy.yml b/hack/kind/cluster-kube-proxy.yml deleted file mode 100644 index 5449d97..0000000 --- a/hack/kind/cluster-kube-proxy.yml +++ /dev/null @@ -1,19 +0,0 @@ -# Copyright 2026 Flant JSC -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. -# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -kind: Cluster -apiVersion: kind.x-k8s.io/v1alpha4 -name: test-connection-kube-proxy -nodes: - - role: control-plane \ No newline at end of file diff --git a/pkg/kube.go b/pkg/kube.go new file mode 100644 index 0000000..172bd46 --- /dev/null +++ b/pkg/kube.go @@ -0,0 +1,67 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package pkg
+
+import (
+	"context"
+
+	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
+	"k8s.io/apimachinery/pkg/runtime/schema"
+	"k8s.io/client-go/dynamic"
+	"k8s.io/client-go/kubernetes"
+)
+
+type KubeClient interface {
+	kubernetes.Interface
+	Dynamic() dynamic.Interface
+	APIResourceList(apiVersion string) ([]*metav1.APIResourceList, error)
+	APIResource(apiVersion, kind string) (*metav1.APIResource, error)
+	GroupVersionResource(apiVersion, kind string) (schema.GroupVersionResource, error)
+	InvalidateDiscoveryCache()
+}
+
+type KubeProvider interface {
+	// Client creates a new client and initializes it.
+	// When working over SSH, the current SSH client is used.
+	// The created client is cached. If the SSH client was switched,
+	// a new client may be created; in that case the current client is
+	// stopped, but not fully, because when working over SSH it may
+	// still be used by other routines.
+	Client(ctx context.Context) (KubeClient, error)
+
+	// NewAdditionalClient creates a new additional client and initializes it.
+	// When working over SSH, a new SSH client is created for it.
+	// Call kube.Stop when the client is no longer needed; kube.Stop is safe
+	// for all clients, not only those working over SSH.
+	// The provider also tracks these clients so they can be stopped in Cleanup.
+	NewAdditionalClient(ctx context.Context) (KubeClient, error)
+
+	// NewAdditionalClientWithoutInitialize creates a new additional client
+	// without initializing it.
+	// When working over SSH, a new SSH client is created for it.
+	// Call kube.Stop when the client is no longer needed; kube.Stop is safe
+	// for all clients, not only those working over SSH.
+	// The provider also tracks these clients so they can be stopped in Cleanup.
+	NewAdditionalClientWithoutInitialize(ctx context.Context) (KubeClient, error)
+
+	// Cleanup stops all additional clients obtained from NewAdditionalClient
+	// and NewAdditionalClientWithoutInitialize.
+	// The current client is also stopped, but not fully, because when working
+	// over SSH it may still be used by other routines.
+	Cleanup(ctx context.Context) error
+}
diff --git a/pkg/kube/client.go b/pkg/kube/client.go
new file mode 100644
index 0000000..89310bb
--- /dev/null
+++ b/pkg/kube/client.go
@@ -0,0 +1,324 @@
+// Copyright 2026 Flant JSC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package kube
+
+import (
+	"context"
+	"fmt"
+	"time"
+
+	// skip linting: it is unclear why golangci-lint fails here
+	//nolint:goimports
+	"github.com/deckhouse/lib-dhctl/pkg/retry"
+	//nolint:goimports
+	klient "github.com/flant/kube-client/client"
+	//nolint:goimports
+	"github.com/name212/govalue"
+	//nolint:goimports
+	"k8s.io/apimachinery/pkg/runtime/schema"
+	// oidc allows using an oidc provider in kubeconfig
+	//nolint:goimports
+	_ "k8s.io/client-go/plugin/pkg/client/auth/oidc"
+
+	connection "github.com/deckhouse/lib-connection/pkg"
+	"github.com/deckhouse/lib-connection/pkg/settings"
+	"github.com/deckhouse/lib-connection/pkg/ssh"
+	"github.com/deckhouse/lib-connection/pkg/ssh/gossh"
+	"github.com/deckhouse/lib-connection/pkg/ssh/local"
+)
+
+var (
+	_ connection.KubeClient = &KubernetesClient{}
+
+	ErrStoppedKubeClient = fmt.Errorf("use stopped kube client")
+)
+
+type ClientLoopParams struct {
+	StartingKubeProxy retry.Params
+}
+
+var defaultStartKubeProxyLoopParamsOps = []retry.ParamsBuilderOpt{
+	retry.WithWait(1 * time.Second),
+}
+
+// KubernetesClient connects to the Kubernetes API server through an SSH tunnel and kubectl proxy.
+type KubernetesClient struct {
+	connection.KubeClient
+	NodeInterface connection.Interface
+	KubeProxy     connection.KubeProxy
+
+	loopsParams ClientLoopParams
+
+	settings settings.Settings
+}
+
+func NewKubernetesClient(sett settings.Settings) *KubernetesClient {
+	return &KubernetesClient{
+		settings: sett,
+	}
+}
+
+func NewFakeKubernetesClient() *KubernetesClient {
+	return &KubernetesClient{KubeClient: klient.NewFake(nil)}
+}
+
+func NewFakeKubernetesClientWithListGVR(gvr map[schema.GroupVersionResource]string) *KubernetesClient {
+	return &KubernetesClient{KubeClient: klient.NewFake(gvr)}
+}
+
+func (k *KubernetesClient) WithNodeInterface(client connection.Interface) *KubernetesClient {
+	if !govalue.Nil(client) {
+		k.NodeInterface = client
+	}
+	return k
+}
+
+func (k *KubernetesClient) WithLoopsParams(p ClientLoopParams) *KubernetesClient {
+	k.loopsParams = p
+	return k
+}
+
+func (k *KubernetesClient) NodeInterfaceAsSSHClient() connection.SSHClient {
+	if govalue.Nil(k.NodeInterface) {
+		return nil
+	}
+
+	cl, ok := k.NodeInterface.(*ssh.NodeInterfaceWrapper)
+	if !ok {
+		return nil
+	}
+
+	return cl.Client()
+}
+
+type InitOpts struct {
+	NoStartKubeProxy bool
+	UseLocalPort     int
+}
+
+type InitOpt func(*InitOpts)
+
+func InitWithNoStartKubeProxy() InitOpt {
+	return func(initOpts *InitOpts) {
+		initOpts.NoStartKubeProxy = true
+	}
+}
+
+func InitWithLocalPort(port int) InitOpt {
+	return func(initOpts *InitOpts) {
+		initOpts.UseLocalPort = port
+	}
+}
+
+// Init initializes the kubernetes client.
+// Deprecated: use InitContext instead.
+// Warning! Use InitWithNoStartKubeProxy for testing purposes only.
+func (k *KubernetesClient) Init(params *Config, opts ...InitOpt) error {
+	return k.InitContext(context.Background(), params, opts...)
+}
+
+// InitContext initializes the kubernetes client.
+// Warning! Use InitWithNoStartKubeProxy for testing purposes only.
+func (k *KubernetesClient) InitContext(ctx context.Context, params *Config, opts ...InitOpt) error {
+	return k.initContext(ctx, params, opts...)
+}
+
+func (k *KubernetesClient) initContext(ctx context.Context, params *Config, opts ...InitOpt) error {
+	options := &InitOpts{
+		UseLocalPort: -1,
+	}
+
+	for _, opt := range opts {
+		opt(options)
+	}
+
+	if isFake(k.KubeClient) {
+		return nil
+	}
+
+	kubeClient := klient.New()
+	kubeClient.WithRateLimiterSettings(30, 60)
+	_, isLocalRun := k.NodeInterface.(*local.NodeInterface)
+
+	switch {
+	case params.KubeConfigInCluster:
+	case params.KubeConfig != "":
+		kubeClient.WithContextName(params.KubeConfigContext)
+		kubeClient.WithConfigPath(params.KubeConfig)
+	case params.RestConfig != nil:
+		kubeClient.WithRestConfig(params.RestConfig)
+	case isLocalRun:
+		if !options.NoStartKubeProxy {
+			_, err := k.StartKubernetesProxy(ctx, options)
+			if err != nil {
+				return err
+			}
+		}
+	default:
+		if !options.NoStartKubeProxy {
+			port, err := k.StartKubernetesProxy(ctx, options)
+			if err != nil {
+				return err
+			}
+			kubeClient.WithServer("http://localhost:" + port)
+		}
+	}
+
+	// Initialize the kube client.
+	err := kubeClient.Init()
+	if err != nil {
+		return fmt.Errorf("initialize kube client: %w", err)
+	}
+
+	k.KubeClient = kubeClient
+	return nil
+}
+
+// StartKubernetesProxy initializes kubectl-proxy on the remote host and establishes an SSH tunnel to it.
+func (k *KubernetesClient) StartKubernetesProxy(ctx context.Context, opts *InitOpts) (string, error) {
+	wrapper, ok := k.NodeInterface.(*ssh.NodeInterfaceWrapper)
+	if !ok {
+		return "6445", nil
+	}
+
+	port, err := k.startRemoteKubeProxy(ctx, wrapper.Client(), opts)
+	if err != nil {
+		return "", fmt.Errorf("start kube proxy: %w", err)
+	}
+
+	return port, nil
+}
+
+func (k *KubernetesClient) startRemoteKubeProxy(ctx context.Context, sshCl connection.SSHClient, opts *InitOpts) (string, error) {
+	logger := k.settings.Logger()
+	startLoopParams := retry.SafeCloneOrNewParams(k.loopsParams.StartingKubeProxy, defaultStartKubeProxyLoopParamsOps...).
+		Clone(
+			retry.WithName("Starting kube proxy"),
+			retry.WithLogger(logger),
+			retry.WithAttempts(sshCl.Session().CountHosts()),
+		)
+
+	port := ""
+	err := retry.NewLoopWithParams(startLoopParams).
+		RunContext(ctx, func() error {
+			logger.InfoF("Using host %s\n", sshCl.Session().Host())
+
+			k.KubeProxy = sshCl.KubeProxy()
+			var err error
+			port, err = k.KubeProxy.Start(opts.UseLocalPort)
+
+			if err != nil {
+				sshCl.Session().ChoiceNewHost()
+				return fmt.Errorf("start kube proxy: %v", err)
+			}
+
+			return nil
+		})
+
+	if err != nil {
+		return "", err
+	}
+
+	logger.InfoF("Proxy started on port %s\n", port)
+
+	return port, nil
+}
+
+// Stop stops the client.
+// Pass full to stop the client fully; for example, when working over SSH,
+// a full stop also stops the kube-proxy.
+// It is safe to call with a nil client.
+func Stop(client connection.KubeClient, full bool) {
+	if govalue.Nil(client) {
+		return
+	}
+
+	kubeClient, ok := client.(*KubernetesClient)
+	if !ok {
+		return
+	}
+
+	stopProxyAndSSH(kubeClient, full)
+
+	errorClient, err := NewErrorKubernetesClient(ErrStoppedKubeClient)
+	if err == nil {
+		kubeClient.KubeClient = errorClient
+	}
+}
+
+// IsLive checks that the client is live (can connect to the API).
+// You can pass retry loop params as the first variadic option;
+// if none are passed, 2 attempts with a 2-second wait are used.
+func IsLive(ctx context.Context, client connection.KubeClient, loopParams ...retry.Params) error {
+	if govalue.Nil(client) {
+		return nil
+	}
+
+	kubeClient, ok := client.(*KubernetesClient)
+	if !ok {
+		return fmt.Errorf("not a KubernetesClient")
+	}
+
+	if govalue.Nil(kubeClient.KubeClient) {
+		return fmt.Errorf("kube client is not initialized")
+	}
+
+	var retryParams retry.Params
+	if len(loopParams) > 0 {
+		retryParams = loopParams[0]
+	}
+
+	readyLoopParams := retry.SafeCloneOrNewParams(retryParams, defaultLiveLoopParamsOpts...).Clone(
+		retry.WithName("Waiting for Kubernetes API to become Ready"),
+		retry.WithLogger(kubeClient.settings.Logger()),
+	)
+
+	return retry.NewLoopWithParams(readyLoopParams).RunContext(ctx, func() error {
+		_, err := client.Discovery().ServerVersion()
+		if err == nil {
+			return nil
+		}
+		return fmt.Errorf("kubernetes API is not Ready: %w",
err) + }) +} + +func stopProxyAndSSH(kubeClient *KubernetesClient, full bool) { + if !govalue.Nil(kubeClient.KubeProxy) { + kubeClient.KubeProxy.Stop(-1) + kubeClient.KubeProxy = nil + } + + if !full { + return + } + + wrapper, ok := kubeClient.NodeInterface.(*ssh.NodeInterfaceWrapper) + if !ok { + return + } + + sshClient := wrapper.Client() + if _, ok := sshClient.(*gossh.Client); ok { + sshClient.Stop() + } +} + +var defaultLiveLoopParamsOpts = []retry.ParamsBuilderOpt{ + retry.WithWait(2 * time.Second), + retry.WithAttempts(2), +} diff --git a/pkg/kube/config.go b/pkg/kube/config.go index 5a69fea..3ec7e53 100644 --- a/pkg/kube/config.go +++ b/pkg/kube/config.go @@ -14,12 +14,59 @@ package kube -import "k8s.io/client-go/rest" +import ( + "fmt" + "strings" + + "github.com/name212/govalue" + "k8s.io/client-go/rest" +) type Config struct { - KubeConfig string - KubeConfigContext string + KubeConfig string + KubeConfigContext string + KubeConfigInCluster bool + LocalKubeClient bool RestConfig *rest.Config } + +func (c *Config) IsConflict() error { + modesSet := c.getModes() + + if len(modesSet) > 1 { + return fmt.Errorf("conflicting kube flags: set modes: %s", strings.Join(modesSet, " ")) + } + + return nil +} + +func (c *Config) IsRest() bool { + return !govalue.Nil(c.RestConfig) +} + +func (c *Config) OverSSH() bool { + modesSet := c.getModes() + + return len(modesSet) == 0 +} + +func (c *Config) getModes() []string { + modes := map[string]bool{ + "kubeconfig": c.KubeConfig != "", + "in-cluster": c.KubeConfigInCluster, + "local": c.LocalKubeClient, + "rest": c.IsRest(), + } + + modesSet := make([]string, 0) + + for mode, isSet := range modes { + if isSet { + modesSet = append(modesSet, mode) + } + } + + return modesSet +} diff --git a/pkg/kube/config_test.go b/pkg/kube/config_test.go new file mode 100644 index 0000000..c8798bf --- /dev/null +++ b/pkg/kube/config_test.go @@ -0,0 +1,172 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, 
Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package kube + +import ( + "fmt" + "testing" + + "github.com/stretchr/testify/require" + "k8s.io/client-go/rest" +) + +func TestConfigIsConflict(t *testing.T) { + type testCase struct { + name string + config *Config + } + + t.Run("no conflict", func(t *testing.T) { + configs := []testCase{ + { + name: "empty", + config: &Config{}, + }, + { + name: "kube config", + config: &Config{ + KubeConfig: "/tmp/not-exists.rgg4g4.yaml", + }, + }, + { + name: "in cluster", + config: &Config{ + KubeConfigInCluster: true, + }, + }, + { + name: "local", + config: &Config{ + LocalKubeClient: true, + }, + }, + { + name: "rest", + config: &Config{ + RestConfig: &rest.Config{}, + }, + }, + } + + for _, c := range configs { + t.Run(c.name, func(t *testing.T) { + err := c.config.IsConflict() + require.NoError(t, err, "should not conflict") + }) + } + }) + + t.Run("conflict", func(t *testing.T) { + configs := []testCase{ + { + name: "local with kube config", + config: &Config{ + LocalKubeClient: true, + KubeConfig: "/tmp/not-exists.rgg4g4.yaml", + }, + }, + { + name: "kube config with in cluster", + config: &Config{ + KubeConfig: "/tmp/not-exists.rgg4g4.yaml", + KubeConfigInCluster: true, + }, + }, + { + name: "local with in cluster", + config: &Config{ + LocalKubeClient: true, + KubeConfigInCluster: true, + }, + }, + { + name: "kube config and rest", + config: &Config{ + KubeConfig: "/tmp/not-exists.rgg4g4.yaml", + RestConfig: &rest.Config{}, + }, + }, + { + name: 
"local with kube config and rest", + config: &Config{ + LocalKubeClient: true, + KubeConfig: "/tmp/not-exists.rgg4g4.yaml", + RestConfig: &rest.Config{}, + }, + }, + { + name: "all", + config: &Config{ + KubeConfigInCluster: true, + LocalKubeClient: true, + KubeConfig: "/tmp/not-exists.rgg4g4.yaml", + RestConfig: &rest.Config{}, + }, + }, + } + + for _, c := range configs { + t.Run(c.name, func(t *testing.T) { + err := c.config.IsConflict() + require.Error(t, err, "should conflict") + }) + } + }) +} + +func TestOverSSH(t *testing.T) { + type testCase struct { + name string + config *Config + } + + configs := []testCase{ + { + name: "kube config", + config: &Config{ + KubeConfig: "/tmp/not-exists.rgg4g4.yaml", + }, + }, + { + name: "in cluster", + config: &Config{ + KubeConfigInCluster: true, + }, + }, + { + name: "local", + config: &Config{ + LocalKubeClient: true, + }, + }, + { + name: "rest", + config: &Config{ + RestConfig: &rest.Config{}, + }, + }, + } + + for _, c := range configs { + t.Run(fmt.Sprintf("set %s", c.name), func(t *testing.T) { + require.False(t, c.config.OverSSH(), "should not over ssh") + }) + } + + t.Run("over ssh", func(t *testing.T) { + cfg := &Config{} + require.True(t, cfg.OverSSH(), "should over ssh") + }) +} diff --git a/pkg/kube/error_client.go b/pkg/kube/error_client.go new file mode 100644 index 0000000..3082ba7 --- /dev/null +++ b/pkg/kube/error_client.go @@ -0,0 +1,93 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. 
+// See the License for the specific language governing permissions and +// limitations under the License. + +package kube + +import ( + "fmt" + "net/http" + + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/client-go/dynamic" + "k8s.io/client-go/kubernetes" + "k8s.io/client-go/rest" + + connection "github.com/deckhouse/lib-connection/pkg" +) + +var ( + _ connection.KubeClient = &ErrorKubernetesClient{} +) + +type ErrorKubernetesClient struct { + kubernetes.Interface + + dynamic dynamic.Interface + err error +} + +func NewErrorKubernetesClient(errToReturn error) (*ErrorKubernetesClient, error) { + config := &rest.Config{ + Host: "127.0.0.1:0", + Transport: &errorRoundTripper{err: errToReturn}, + } + + k, err := kubernetes.NewForConfig(config) + if err != nil { + return nil, err + } + + d, err := dynamic.NewForConfig(config) + if err != nil { + return nil, err + } + + return &ErrorKubernetesClient{ + Interface: k, + dynamic: d, + err: errToReturn, + }, nil +} + +func (i *ErrorKubernetesClient) Dynamic() dynamic.Interface { + return i.dynamic +} + +func (i *ErrorKubernetesClient) APIResourceList(apiVersion string) ([]*metav1.APIResourceList, error) { + return nil, fmt.Errorf("cannot get APIResourceList for %s: %w", apiVersion, i.err) +} + +func (i *ErrorKubernetesClient) APIResource(apiVersion, kind string) (*metav1.APIResource, error) { + return nil, fmt.Errorf("cannot get APIResource for %s/%s: %w", apiVersion, kind, i.err) +} + +func (i *ErrorKubernetesClient) GroupVersionResource(apiVersion, kind string) (schema.GroupVersionResource, error) { + return schema.GroupVersionResource{}, fmt.Errorf("cannot get GroupVersionResource for %s/%s: %w", apiVersion, kind, i.err) +} + +func (i *ErrorKubernetesClient) InvalidateDiscoveryCache() {} + +type errorRoundTripper struct { + err error +} + +// RoundTrip implements the http.RoundTripper interface. 
+func (r *errorRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) { + if req == nil { + return nil, r.err + } + + return nil, fmt.Errorf("cannot send request %s %s: %w", req.Method, req.RequestURI, r.err) +} diff --git a/pkg/kube/error_client_test.go b/pkg/kube/error_client_test.go new file mode 100644 index 0000000..a720397 --- /dev/null +++ b/pkg/kube/error_client_test.go @@ -0,0 +1,225 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package kube + +import ( + "context" + "fmt" + "testing" + + "github.com/stretchr/testify/require" + v1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + "k8s.io/apimachinery/pkg/apis/meta/v1/unstructured" + "k8s.io/apimachinery/pkg/runtime/schema" + "k8s.io/apimachinery/pkg/types" + applyconf "k8s.io/client-go/applyconfigurations/core/v1" + "k8s.io/client-go/kubernetes/scheme" +) + +func TestErrorKubernetesClient(t *testing.T) { + failError := fmt.Errorf("use error kube client") + getErrorClient := func(t *testing.T) *ErrorKubernetesClient { + c, err := NewErrorKubernetesClient(failError) + require.NoError(t, err, "error client should be created") + return c + } + + assertError := func(t *testing.T, do func() error) { + doNotPanics := func() { + err := do() + + require.Error(t, err, "should return error") + require.ErrorIs(t, err, failError) + } + + require.NotPanics(t, doNotPanics, "should not panic") + } + + t.Run("default interface", func(t *testing.T) { + client := getErrorClient(t) + + ctx := context.TODO() + cmClient := client.CoreV1().ConfigMaps("default") + + assertError(t, func() error { + _, err := cmClient.Get(ctx, "foo", metav1.GetOptions{}) + return err + }) + + assertError(t, func() error { + _, err := cmClient.List(ctx, metav1.ListOptions{}) + return err + }) + + cm := v1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Namespace: "default", + Name: "foo", + }, + Data: map[string]string{ + "key": "value", + }, + } + + assertError(t, func() error { + _, err := cmClient.Create(ctx, &cm, metav1.CreateOptions{}) + return err + }) + + assertError(t, func() error { + _, err := cmClient.Update(ctx, &cm, metav1.UpdateOptions{}) + return err + }) + + assertError(t, func() error { + applyCm := applyconf.ConfigMapApplyConfiguration{} + applyCm.WithName("foo").WithData(map[string]string{"key": "value"}) + _, err := cmClient.Apply(ctx, &applyCm, metav1.ApplyOptions{}) + return err + }) + + assertError(t, func() error { + return 
cmClient.Delete(ctx, "foo", metav1.DeleteOptions{}) + }) + + assertError(t, func() error { + return cmClient.DeleteCollection(ctx, metav1.DeleteOptions{}, metav1.ListOptions{}) + }) + + assertError(t, func() error { + data := []byte(`{"var": "foo"}`) + _, err := cmClient.Patch(ctx, "foo", types.JSONPatchType, data, metav1.PatchOptions{}) + return err + }) + + assertError(t, func() error { + _, err := cmClient.Watch(ctx, metav1.ListOptions{}) + return err + }) + + // without namespace + assertError(t, func() error { + _, err := client.CoreV1().Nodes().Get(ctx, "foo", metav1.GetOptions{}) + return err + }) + }) + + t.Run("dynamic", func(t *testing.T) { + client := getErrorClient(t) + + ctx := context.TODO() + + gvr := schema.GroupVersionResource{ + Group: "deckhouse.io", + Version: "v1", + Resource: "nodeusers", + } + + dClient := client.Dynamic().Resource(gvr).Namespace("default") + + assertError(t, func() error { + _, err := dClient.Get(ctx, "foo", metav1.GetOptions{}) + return err + }) + + assertError(t, func() error { + _, err := dClient.List(ctx, metav1.ListOptions{}) + return err + }) + + obj := &unstructured.Unstructured{} + docData := []byte(` +apiVersion: deckhouse.io/v1 +kind: NodeUser +metadata: + name: tes +spec: + isSudoer: false + nodeGroups: + - '*' + passwordHash: "6" + sshPublicKey: ssh-rsa AAA + uid: 1001 +`) + _, _, err := scheme.Codecs.UniversalDecoder().Decode(docData, nil, obj) + require.NoError(t, err, "should marshal to unstructured") + + assertError(t, func() error { + _, err := dClient.Create(ctx, obj, metav1.CreateOptions{}) + return err + }) + + assertError(t, func() error { + _, err := dClient.Update(ctx, obj, metav1.UpdateOptions{}) + return err + }) + + assertError(t, func() error { + _, err := dClient.Apply(ctx, "foo", obj, metav1.ApplyOptions{}) + return err + }) + + assertError(t, func() error { + return dClient.Delete(ctx, "foo", metav1.DeleteOptions{}) + }) + + assertError(t, func() error { + return dClient.DeleteCollection(ctx, 
metav1.DeleteOptions{}, metav1.ListOptions{}) + }) + + assertError(t, func() error { + data := []byte(`{"var": "foo"}`) + _, err := dClient.Patch(ctx, "foo", types.JSONPatchType, data, metav1.PatchOptions{}) + return err + }) + + assertError(t, func() error { + _, err := dClient.Watch(ctx, metav1.ListOptions{}) + return err + }) + + // without namespace + assertError(t, func() error { + _, err := client.Dynamic().Resource(gvr).Get(ctx, "foo", metav1.GetOptions{}) + return err + }) + }) + + t.Run("our interface", func(t *testing.T) { + client := getErrorClient(t) + + assertError(t, func() error { + _, err := client.APIResourceList("v1") + return err + }) + + assertError(t, func() error { + _, err := client.APIResource("v1", "Node") + return err + }) + + assertError(t, func() error { + _, err := client.GroupVersionResource("v1", "Node") + return err + }) + + doInvalidate := func() { + client.InvalidateDiscoveryCache() + } + + require.NotPanics(t, doInvalidate, "InvalidateDiscoveryCache should not panic") + }) +} diff --git a/pkg/kube/parse_flags.go b/pkg/kube/parse_flags.go index 55068ad..c5b2b4a 100644 --- a/pkg/kube/parse_flags.go +++ b/pkg/kube/parse_flags.go @@ -97,6 +97,14 @@ func (f *Flags) RewriteFromEnvs() error { return nil } +func (f *Flags) FlagSet() (*flag.FlagSet, error) { + if err := f.baseFlags.IsInitialized(); err != nil { + return nil, err + } + + return f.baseFlags.FlagSet(), nil +} + type FlagsParser struct { *baseflags.BaseParser } diff --git a/pkg/kube/parse_flags_test.go b/pkg/kube/parse_flags_test.go index 253fa39..49704a0 100644 --- a/pkg/kube/parse_flags_test.go +++ b/pkg/kube/parse_flags_test.go @@ -314,7 +314,7 @@ func TestParseFlags(t *testing.T) { appendKubeConfigArgument(ts, "/tmp/not-exsists-2dfr.yaml") }, - hasErrorContains: "Cannot get kube config file info for /tmp/not-exsists-2dfr.yaml", + hasErrorContains: "cannot get kube config file info for /tmp/not-exsists-2dfr.yaml", }, { @@ -325,7 +325,7 @@ func TestParseFlags(t *testing.T) { 
appendKubeConfigArgument(ts, dir)
			},

-			hasErrorContains: "should be regular file",
+			hasErrorContains: "should be a file not dir",
		},

		{
diff --git a/pkg/kube/utils.go b/pkg/kube/utils.go
new file mode 100644
index 0000000..6679d28
--- /dev/null
+++ b/pkg/kube/utils.go
@@ -0,0 +1,41 @@
+// Copyright 2026 Flant JSC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package kube
+
+import (
+	klient "github.com/flant/kube-client/client"
+	"github.com/name212/govalue"
+	"k8s.io/client-go/kubernetes/fake"
+
+	connection "github.com/deckhouse/lib-connection/pkg"
+)
+
+func isFake(kubeClient connection.KubeClient) bool {
+	if govalue.Nil(kubeClient) {
+		return false
+	}
+
+	client, ok := kubeClient.(*klient.Client)
+	if !ok {
+		return false
+	}
+
+	if govalue.Nil(client.Interface) {
+		return false
+	}
+
+	_, fk := client.Interface.(*fake.Clientset)
+	return fk
+}
diff --git a/pkg/provider/README.md b/pkg/provider/README.md
new file mode 100644
index 0000000..bfc96c4
--- /dev/null
+++ b/pkg/provider/README.md
@@ -0,0 +1,8 @@
+# provider package
+
+Contains the kube and SSH providers.
+
+SSH provider integration tests are located in `../ssh/testssh/provider_test.go`.
+
+The kube provider has no unit tests, only integration tests.
+Integration tests are located in `../tests/provider/kube_test.go`.
\ No newline at end of file diff --git a/pkg/provider/kube.go b/pkg/provider/kube.go new file mode 100644 index 0000000..f4ea014 --- /dev/null +++ b/pkg/provider/kube.go @@ -0,0 +1,285 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package provider + +import ( + "context" + "fmt" + "sync" + "time" + + "github.com/deckhouse/lib-dhctl/pkg/log" + "github.com/deckhouse/lib-dhctl/pkg/retry" + "github.com/name212/govalue" + "k8s.io/apimachinery/pkg/runtime/schema" + + connection "github.com/deckhouse/lib-connection/pkg" + "github.com/deckhouse/lib-connection/pkg/kube" + "github.com/deckhouse/lib-connection/pkg/settings" +) + +var ( + _ connection.KubeProvider = &DefaultKubeProvider{} + _ connection.KubeProvider = &FakeKubeProvider{} +) + +type KubeProviderLoopsParams struct { + InitClient retry.Params + WaitingReady retry.Params +} + +type DefaultKubeProvider struct { + mu sync.Mutex + + sett settings.Settings + config *kube.Config + + currentClient connection.KubeClient + additionalClients []connection.KubeClient + + runnerInterface RunnerInterface + + loopsParams KubeProviderLoopsParams +} + +// NewDefaultKubeProvider +// if use rest config sshProvider can be nil +func NewDefaultKubeProvider(sett settings.Settings, config *kube.Config, runnerInterface RunnerInterface) *DefaultKubeProvider { + return &DefaultKubeProvider{ + sett: sett, + config: config, + runnerInterface: runnerInterface, + additionalClients: 
make([]connection.KubeClient, 0),
+	}
+}
+
+func (p *DefaultKubeProvider) WithLoopsParams(l KubeProviderLoopsParams) *DefaultKubeProvider {
+	p.loopsParams = l
+	return p
+}
+
+func (p *DefaultKubeProvider) Client(ctx context.Context) (connection.KubeClient, error) {
+	p.mu.Lock()
+	defer p.mu.Unlock()
+
+	switched, err := p.runnerInterface.IsSwitched(ctx)
+	if err != nil {
+		return nil, err
+	}
+
+	if govalue.Nil(p.currentClient) || switched {
+		// the old client is not stopped fully here because the runner may reuse the current ssh client
+		client, err := p.createAndInitClient(ctx, false, SetNodeInterfaceOptWithRunChecks())
+		if err != nil {
+			// finalize to drop per-session variables
+			p.runnerInterface.Finalize(true)
+			return nil, err
+		}
+
+		kube.Stop(p.currentClient, false)
+
+		p.currentClient = client
+		p.runnerInterface.Finalize(false)
+
+		return client, nil
+	}
+
+	return p.currentClient, nil
+}
+
+func (p *DefaultKubeProvider) NewAdditionalClient(ctx context.Context) (connection.KubeClient, error) {
+	// lock is needed to safely call RunnerInterface.SetNodeInterface
+	p.mu.Lock()
+	defer p.mu.Unlock()
+
+	// an additional client used over ssh needs to be stopped fully
+	client, err := p.createAndInitClient(
+		ctx,
+		true,
+		SetNodeInterfaceOptWithRunChecks(),
+		SetNodeInterfaceOptWithNewNodeInterface(),
+	)
+
+	if err != nil {
+		return nil, err
+	}
+
+	p.additionalClients = append(p.additionalClients, client)
+	return client, nil
+}
+
+// NewAdditionalClientWithoutInitialize
+// creates a new additional client without initializing it
+func (p *DefaultKubeProvider) NewAdditionalClientWithoutInitialize(ctx context.Context) (connection.KubeClient, error) {
+	// lock is needed to safely call RunnerInterface.SetNodeInterface
+	p.mu.Lock()
+	defer p.mu.Unlock()
+
+	client, err := p.newClient(ctx, true, SetNodeInterfaceOptWithNewNodeInterface())
+	if err != nil {
+		return nil, err
+	}
+
+	p.additionalClients = append(p.additionalClients, client)
+	return client, nil
+}
+
+func (p *DefaultKubeProvider) 
Cleanup(context.Context) error { + kube.Stop(p.currentClient, false) + p.currentClient = nil + + for _, client := range p.additionalClients { + kube.Stop(client, true) + } + + p.additionalClients = make([]connection.KubeClient, 0) + + return nil +} + +func (p *DefaultKubeProvider) AdditionalClientsCount() int { + return len(p.additionalClients) +} + +func (p *DefaultKubeProvider) HasCurrent() bool { + return !govalue.Nil(p.currentClient) +} + +func (p *DefaultKubeProvider) newClient(ctx context.Context, stopOnError bool, opts ...SetNodeInterfaceOpt) (*kube.KubernetesClient, error) { + client := kube.NewKubernetesClient(p.sett) + if err := p.runnerInterface.SetNodeInterface(ctx, client, opts...); err != nil { + if stopOnError { + kube.Stop(client, stopOnError) + } + return nil, err + } + + return client, nil +} + +func (p *DefaultKubeProvider) createAndInitClient(ctx context.Context, forceStopOnError bool, opts ...SetNodeInterfaceOpt) (connection.KubeClient, error) { + config := p.config + + if err := config.IsConflict(); err != nil { + return nil, err + } + + initOpts := p.runnerInterface.InitOptions() + + logger := p.sett.Logger() + + var client *kube.KubernetesClient + + err := logger.Process(log.ProcessCommon, "Connect to Kubernetes API", func() error { + // await availability if need here + newClient, err := p.newClient(ctx, forceStopOnError, opts...) 
+		if err != nil {
+			return err
+		}
+
+		if err := p.connectToKubernetesAPI(ctx, newClient, initOpts); err != nil {
+			// if connecting to the api failed, the client must be stopped
+			kube.Stop(newClient, forceStopOnError)
+			return err
+		}
+
+		client = newClient
+
+		return nil
+	})
+
+	if err != nil {
+		return nil, err
+	}
+
+	return client, nil
+}
+
+func (p *DefaultKubeProvider) connectToKubernetesAPI(ctx context.Context, client *kube.KubernetesClient, kubeInitOpts []kube.InitOpt) error {
+	logger := p.sett.Logger()
+
+	initClientLoopParams := retry.SafeCloneOrNewParams(p.loopsParams.InitClient, defaultInitClientParamsOpts...).Clone(
+		retry.WithName("Get Kubernetes API client"),
+		retry.WithLogger(logger),
+	)
+
+	err := retry.NewLoopWithParams(initClientLoopParams).RunContext(ctx, func() error {
+		if err := client.InitContext(ctx, p.config, kubeInitOpts...); err != nil {
+			return fmt.Errorf("open kubernetes connection: %v", err)
+		}
+		return nil
+	})
+
+	if err != nil {
+		return err
+	}
+
+	time.Sleep(50 * time.Millisecond) // short tick to prevent a probable failure on the first request
+
+	readyLoopParams := retry.SafeCloneOrNewParams(p.loopsParams.WaitingReady, defaultWaitingReadyParamsOpts...)
+ + return kube.IsLive(ctx, client, readyLoopParams) +} + +var defaultInitClientParamsOpts = []retry.ParamsBuilderOpt{ + retry.WithWait(5 * time.Second), + retry.WithAttempts(45), +} + +var defaultWaitingReadyParamsOpts = []retry.ParamsBuilderOpt{ + retry.WithWait(5 * time.Second), + retry.WithAttempts(45), +} + +type FakeKubeProvider struct { + current *kube.KubernetesClient +} + +func NewFakeKubeProvider(gvrs ...map[schema.GroupVersionResource]string) *FakeKubeProvider { + resGVR := make(map[schema.GroupVersionResource]string) + for _, gvrMap := range gvrs { + for gvr, kind := range gvrMap { + resGVR[gvr] = kind + } + } + + return &FakeKubeProvider{ + current: newFake(resGVR), + } +} + +func (p *FakeKubeProvider) Client(context.Context) (connection.KubeClient, error) { + return p.current, nil +} + +func (p *FakeKubeProvider) NewAdditionalClient(ctx context.Context) (connection.KubeClient, error) { + return p.current, nil +} + +func (p *FakeKubeProvider) NewAdditionalClientWithoutInitialize(ctx context.Context) (connection.KubeClient, error) { + return p.current, nil +} + +func (p *FakeKubeProvider) Cleanup(context.Context) error { + return nil +} + +func newFake(gvr map[schema.GroupVersionResource]string) *kube.KubernetesClient { + if len(gvr) == 0 { + return kube.NewFakeKubernetesClient() + } + + return kube.NewFakeKubernetesClientWithListGVR(gvr) +} diff --git a/pkg/provider/kube_runner.go b/pkg/provider/kube_runner.go new file mode 100644 index 0000000..95e1eff --- /dev/null +++ b/pkg/provider/kube_runner.go @@ -0,0 +1,150 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. 
+// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package provider + +import ( + "context" + "fmt" + + "github.com/name212/govalue" + + connection "github.com/deckhouse/lib-connection/pkg" + "github.com/deckhouse/lib-connection/pkg/kube" + "github.com/deckhouse/lib-connection/pkg/settings" + "github.com/deckhouse/lib-connection/pkg/ssh/local" +) + +type SetNodeInterfaceOpts struct { + RunChecks bool + NewNodeInterface bool +} + +type SetNodeInterfaceOpt func(opts *SetNodeInterfaceOpts) + +func SetNodeInterfaceOptWithRunChecks() SetNodeInterfaceOpt { + return func(opts *SetNodeInterfaceOpts) { + opts.RunChecks = true + } +} + +func SetNodeInterfaceOptWithNewNodeInterface() SetNodeInterfaceOpt { + return func(opts *SetNodeInterfaceOpts) { + opts.NewNodeInterface = true + } +} + +type RunnerInterface interface { + IsSwitched(ctx context.Context) (bool, error) + SetNodeInterface(ctx context.Context, client *kube.KubernetesClient, opts ...SetNodeInterfaceOpt) error + Finalize(isError bool) + InitOptions() []kube.InitOpt +} + +type RunnerInterfaceOpt func(RunnerInterface) + +func GetRunnerInterface(config *kube.Config, sett settings.Settings, sshProvider connection.SSHProvider, opts ...RunnerInterfaceOpt) (RunnerInterface, error) { + r, err := getRunner(config, sett, sshProvider) + if err != nil { + return nil, err + } + + for _, o := range opts { + o(r) + } + + return r, nil +} + +func getRunner(config *kube.Config, sett settings.Settings, sshProvider connection.SSHProvider) (RunnerInterface, error) { + if err := config.IsConflict(); err != nil { + return nil, err + } + + switch { + case 
config.KubeConfigInCluster:
+		fallthrough
+	case config.KubeConfig != "":
+		fallthrough
+	case config.IsRest():
+		return &RunnerInterfaceNoAction{}, nil
+	case config.LocalKubeClient:
+		return NewRunnerInterfaceLocal(sett), nil
+	}
+
+	if govalue.Nil(sshProvider) {
+		return nil, fmt.Errorf("no SSH provider specified for creating a kubernetes client over ssh")
+	}
+
+	return NewRunnerInterfaceSSH(sett, sshProvider), nil
+}
+
+type RunnerInterfaceNoAction struct{}
+
+func (*RunnerInterfaceNoAction) IsSwitched(context.Context) (bool, error) {
+	return false, nil
+}
+
+func (*RunnerInterfaceNoAction) Finalize(bool) {}
+
+func (*RunnerInterfaceNoAction) SetNodeInterface(context.Context, *kube.KubernetesClient, ...SetNodeInterfaceOpt) error {
+	return nil
+}
+
+func (*RunnerInterfaceNoAction) InitOptions() []kube.InitOpt {
+	return nil
+}
+
+type RunnerInterfaceLocal struct {
+	node *local.NodeInterface
+}
+
+func NewRunnerInterfaceLocal(sett settings.Settings) *RunnerInterfaceLocal {
+	return &RunnerInterfaceLocal{
+		node: local.NewNodeInterface(sett),
+	}
+}
+
+func (r *RunnerInterfaceLocal) IsSwitched(context.Context) (bool, error) {
+	return false, nil
+}
+
+func (r *RunnerInterfaceLocal) Finalize(bool) {}
+
+func (r *RunnerInterfaceLocal) SetNodeInterface(_ context.Context, client *kube.KubernetesClient, _ ...SetNodeInterfaceOpt) error {
+	client.WithNodeInterface(r.node)
+	return nil
+}
+
+func (r *RunnerInterfaceLocal) InitOptions() []kube.InitOpt {
+	return nil
+}
+
+func RunnerInterfaceWithSSHLoopsParams(p RunnerInterfaceSSHLoopsParams) RunnerInterfaceOpt {
+	return func(r RunnerInterface) {
+		sshRI, ok := r.(*RunnerInterfaceSSH)
+		if ok {
+			sshRI.WithLoopParams(p)
+		}
+	}
+}
+
+func RunnerInterfaceWithInitOpts(opts ...kube.InitOpt) RunnerInterfaceOpt {
+	return func(r RunnerInterface) {
+		sshRI, ok := r.(*RunnerInterfaceSSH)
+		if ok {
+			sshRI.WithInitOptions(opts...)
+ } + } +} diff --git a/pkg/provider/kube_ssh_runner.go b/pkg/provider/kube_ssh_runner.go new file mode 100644 index 0000000..c3d1824 --- /dev/null +++ b/pkg/provider/kube_ssh_runner.go @@ -0,0 +1,167 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package provider + +import ( + "context" + "fmt" + + "github.com/deckhouse/lib-dhctl/pkg/retry" + "github.com/name212/govalue" + + connection "github.com/deckhouse/lib-connection/pkg" + "github.com/deckhouse/lib-connection/pkg/kube" + "github.com/deckhouse/lib-connection/pkg/settings" + "github.com/deckhouse/lib-connection/pkg/ssh" + "github.com/deckhouse/lib-connection/pkg/ssh/gossh" + "github.com/deckhouse/lib-connection/pkg/ssh/session" +) + +type RunnerInterfaceSSHLoopsParams struct { + AwaitAvailabilityOverSSH retry.Params +} + +type RunnerInterfaceSSH struct { + sett settings.Settings + + sshProvider connection.SSHProvider + + currentSSHClient connection.SSHClient + fromSwitchCall connection.SSHClient + + currentSSHClientSession *session.SessionWithPrivateKeys + + loopsParams RunnerInterfaceSSHLoopsParams + + initOpts []kube.InitOpt +} + +func NewRunnerInterfaceSSH(sett settings.Settings, sshProvider connection.SSHProvider) *RunnerInterfaceSSH { + return &RunnerInterfaceSSH{ + sshProvider: sshProvider, + sett: sett, + } +} + +func (r *RunnerInterfaceSSH) WithLoopParams(p RunnerInterfaceSSHLoopsParams) *RunnerInterfaceSSH { + r.loopsParams = p + return r +} + +func (r 
*RunnerInterfaceSSH) WithInitOptions(opts ...kube.InitOpt) *RunnerInterfaceSSH { + r.initOpts = opts + return r +} + +func (r *RunnerInterfaceSSH) IsSwitched(ctx context.Context) (bool, error) { + sshClient, err := r.sshProvider.Client(ctx) + if err != nil { + return false, err + } + + fromClient := &session.SessionWithPrivateKeys{ + Session: sshClient.Session(), + Keys: sshClient.PrivateKeys(), + } + + r.fromSwitchCall = sshClient + + return !session.CompareWithKeys(fromClient, r.currentSSHClientSession), nil +} + +func (r *RunnerInterfaceSSH) SetNodeInterface(ctx context.Context, client *kube.KubernetesClient, opts ...SetNodeInterfaceOpt) error { + options := &SetNodeInterfaceOpts{} + for _, opt := range opts { + opt(options) + } + + cleanupIfChecksFailed := noCleanupOnFailChecks + + // can use fromSwitchCall because DefaultKubeProvider use mutex for all interfaces + sshClient := r.fromSwitchCall + if govalue.Nil(sshClient) { + // this case if call NewAdditionalClient + var err error + sshClient, cleanupIfChecksFailed, err = r.getCurrentForAdditional(ctx, options) + if err != nil { + return err + } + } + + if options.RunChecks { + if err := sshClient.Check().WithDelaySeconds(1).AwaitAvailability(ctx, r.loopsParams.AwaitAvailabilityOverSSH); err != nil { + cleanupIfChecksFailed() + return fmt.Errorf("await master available: %v", err) + } + } + + client.WithNodeInterface(ssh.NewNodeInterfaceWrapper(sshClient, r.sett)) + return nil +} + +func (r *RunnerInterfaceSSH) Finalize(isError bool) { + if !isError && !govalue.Nil(r.fromSwitchCall) { + r.currentSSHClient = r.fromSwitchCall + r.updateSessionFromCurrent() + } + + r.fromSwitchCall = nil +} + +func (r *RunnerInterfaceSSH) InitOptions() []kube.InitOpt { + return r.initOpts +} + +func (r *RunnerInterfaceSSH) updateSessionFromCurrent() { + r.currentSSHClientSession = &session.SessionWithPrivateKeys{ + Session: r.currentSSHClient.Session().Copy(), + Keys: r.currentSSHClient.PrivateKeys(), + } +} + +func 
noCleanupOnFailChecks() {} + +func (r *RunnerInterfaceSSH) getCurrentForAdditional(ctx context.Context, opts *SetNodeInterfaceOpts) (connection.SSHClient, func(), error) { + if opts.NewNodeInterface { + client, err := r.sshProvider.NewAdditionalClient(ctx) + if err != nil { + return nil, noCleanupOnFailChecks, err + } + + cleanup := func() { + // need stop only gossh client because cli ssh init agent for all + if _, ok := client.(*gossh.Client); ok { + client.Stop() + } + } + + return client, cleanup, nil + } + + // need use if call NewAdditionalClient* before Client + if !govalue.Nil(r.currentSSHClient) { + return r.currentSSHClient, noCleanupOnFailChecks, nil + } + + client, err := r.sshProvider.Client(ctx) + if err != nil { + return nil, noCleanupOnFailChecks, err + } + + r.currentSSHClient = client + r.updateSessionFromCurrent() + + return client, noCleanupOnFailChecks, nil +} diff --git a/pkg/provider/provider.go b/pkg/provider/provider.go deleted file mode 100644 index 81d1dfd..0000000 --- a/pkg/provider/provider.go +++ /dev/null @@ -1,53 +0,0 @@ -// Copyright 2026 Flant JSC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -package provider - -import ( - "context" - - "github.com/hashicorp/go-multierror" - - connection "github.com/deckhouse/lib-connection/pkg" - "github.com/deckhouse/lib-connection/pkg/settings" - sshconfig "github.com/deckhouse/lib-connection/pkg/ssh/config" -) - -var ( - _ connection.Provider = &DefaultProvider{} -) - -type DefaultProvider struct { - sshProvider connection.SSHProvider -} - -func NewDefaultProvider(sett settings.Settings, sshConnectionConfig *sshconfig.ConnectionConfig) *DefaultProvider { - return &DefaultProvider{ - sshProvider: NewDefaultSSHProvider(sett, sshConnectionConfig), - } -} - -func (p *DefaultProvider) SSHProvider() connection.SSHProvider { - return p.sshProvider -} - -func (p *DefaultProvider) Cleanup(ctx context.Context) error { - var errs *multierror.Error - - if err := p.sshProvider.Cleanup(ctx); err != nil { - errs = multierror.Append(errs, err) - } - - return errs.ErrorOrNil() -} diff --git a/pkg/provider/ssh.go b/pkg/provider/ssh.go index e5e1bab..c4cbaeb 100644 --- a/pkg/provider/ssh.go +++ b/pkg/provider/ssh.go @@ -36,6 +36,7 @@ import ( var ( _ connection.SSHProvider = &DefaultSSHProvider{} + _ connection.SSHProvider = &ErrorSSHProvider{} ) type SSHClientOptions struct { @@ -43,6 +44,7 @@ type SSHClientOptions struct { ForceGoSSH bool LoopsParams gossh.ClientLoopsParams StartClient bool + ClientID string } type SSHClientOption func(options *SSHClientOptions) @@ -70,6 +72,12 @@ func SSHClientWithLoopsParams(params gossh.ClientLoopsParams) SSHClientOption { } } +func SSHClientWithID(id string) SSHClientOption { + return func(options *SSHClientOptions) { + options.ClientID = id + } +} + type DefaultSSHProvider struct { mu sync.Mutex @@ -252,6 +260,28 @@ func (p *DefaultSSHProvider) WithOptions(opts ...SSHClientOption) *DefaultSSHPro return p } +func (p *DefaultSSHProvider) WithID(id string) *DefaultSSHProvider { + p.mu.Lock() + defer p.mu.Unlock() + + p.options.ClientID = id + + return p +} + +// AdditionalClients +// please use 
for testing purposes only!
+func (p *DefaultSSHProvider) AdditionalClients() []connection.SSHClient {
+	dest := make([]connection.SSHClient, len(p.additionalClients))
+	copy(dest, p.additionalClients)
+
+	return dest
+}
+
+func (p *DefaultSSHProvider) HasCurrent() bool {
+	return !govalue.Nil(p.currentClient)
+}
+
 func (p *DefaultSSHProvider) doGetCurrentClient(ctx context.Context) (connection.SSHClient, error) {
 	if !govalue.Nil(p.currentClient) {
 		return p.currentClient, nil
@@ -303,10 +333,10 @@ func (p *DefaultSSHProvider) createClient(ctx context.Context, parent *session.S
 func (p *DefaultSSHProvider) constructClient(ctx context.Context, sess *session.Session, privateKeys []session.AgentPrivateKey) connection.SSHClient {
 	if p.useGoSSH(true) {
 		return gossh.NewClient(ctx, p.sett, sess, privateKeys).
-			WithLoopsParams(p.options.LoopsParams)
+			WithLoopsParams(p.options.LoopsParams).WithID(p.options.ClientID)
 	}
 
-	return clissh.NewClient(p.sett, sess, privateKeys, p.options.InitializeNewAgent)
+	return clissh.NewClient(p.sett, sess, privateKeys, p.options.InitializeNewAgent).WithID(p.options.ClientID)
 }
 
 func (p *DefaultSSHProvider) stopCurrentClientIfNeed() {
@@ -653,3 +683,47 @@ func fileExists(path string) (bool, error) {
 
 	return true, nil
 }
+
+type ErrorSSHProvider struct {
+	err error
+}
+
+// NewErrorSSHProvider
+// Special provider that always returns an error for all operations
+// except Cleanup.
+// It can be used with GetRunnerInterface if you are sure that
+// you do not use KubeClient over SSH.
+func NewErrorSSHProvider(err error) *ErrorSSHProvider {
+	if err == nil {
+		err = fmt.Errorf("ErrorSSHProvider: error not provided")
+	}
+	return &ErrorSSHProvider{err: err}
+}
+
+func (p *ErrorSSHProvider) Client(context.Context) (connection.SSHClient, error) {
+	return nil, p.returnError("Client")
+}
+
+func (p *ErrorSSHProvider) NewAdditionalClient(context.Context) (connection.SSHClient, error) {
+	return nil, p.returnError("NewAdditionalClient")
+}
+
+func (p 
*ErrorSSHProvider) NewStandaloneClient(context.Context, *session.Session, []session.AgentPrivateKey, ...connection.StandaloneClientOpt) (connection.SSHClient, error) { + return nil, p.returnError("NewStandaloneClient") +} + +func (p *ErrorSSHProvider) SwitchClient(context.Context, *session.Session, []session.AgentPrivateKey) (connection.SSHClient, error) { + return nil, p.returnError("SwitchClient") +} + +func (p *ErrorSSHProvider) SwitchToDefault(context.Context) (connection.SSHClient, error) { + return nil, p.returnError("SwitchToDefault") +} + +func (p *ErrorSSHProvider) Cleanup(context.Context) error { + return nil +} + +func (p *ErrorSSHProvider) returnError(op string) error { + return fmt.Errorf("cannot provide ssh client with %s: %w", op, p.err) +} diff --git a/pkg/provider/ssh_test.go b/pkg/provider/ssh_test.go index fe1b74f..df833b4 100644 --- a/pkg/provider/ssh_test.go +++ b/pkg/provider/ssh_test.go @@ -1517,6 +1517,55 @@ func TestSSHProviderClient(t *testing.T) { require.NotEqual(t, privateKeyPathBeforeCleanup, provider.privateKeysTmp, "should create new tmp dir") }) }) + + t.Run("No auth methods", func(t *testing.T) { + test := newTest(t) + config := testCreateSSHConnectionConfigWithPrivateKeyContent(t, connectionConfigParams{ + mode: sshconfig.Mode{ + ForceLegacy: true, + }, + test: test, + bastionPort: nil, + port: nil, + }) + + config.Config.PrivateKeys = make([]sshconfig.AgentPrivateKey, 0) + config.Config.SudoPassword = "" + config.Config.ForceUseSSHAgent = false + + sett := test.Settings() + + provider := newTestProvider(sett, config) + + ctx := context.TODO() + + _, err := provider.Client(ctx) + require.Error(t, err, "should fail") + + _, err = provider.NewAdditionalClient(ctx) + require.Error(t, err, "should fail") + + sess := session.NewSession(session.Input{ + User: "uuser", + Port: "22013", + BecomePass: "not secure standalone", + AvailableHosts: []session.Host{ + { + Host: "192.168.101.9", + Name: "192.168.101.9", + }, + }, + }) + + _, err = 
provider.NewStandaloneClient(ctx, sess, nil) + require.Error(t, err, "should fail") + + _, err = provider.SwitchClient(ctx, sess, nil) + require.Error(t, err, "should fail") + + _, err = provider.SwitchToDefault(ctx) + require.Error(t, err, "should fail") + }) } func newTest(t *testing.T) *tests.Test { diff --git a/pkg/settings/consts.go b/pkg/settings/consts.go new file mode 100644 index 0000000..a00c9d9 --- /dev/null +++ b/pkg/settings/consts.go @@ -0,0 +1,19 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package settings + +const ( + SSHAgentAuthSockEnv = "SSH_AUTH_SOCK" +) diff --git a/pkg/settings/settings.go b/pkg/settings/settings.go index b70eac9..041dffb 100644 --- a/pkg/settings/settings.go +++ b/pkg/settings/settings.go @@ -115,7 +115,7 @@ func (b *BaseProviders) AuthSock() string { return b.params.AuthSock } - return os.Getenv("SSH_AUTH_SOCK") + return os.Getenv(SSHAgentAuthSockEnv) } func (b *BaseProviders) EnvsPrefix() string { @@ -134,6 +134,12 @@ func CloneWithEnvsPrefix(prefix string) CloneOpt { } } +func CloneWithAuthSock(path string) CloneOpt { + return func(p *BaseProviders) { + p.params.AuthSock = path + } +} + func (b *BaseProviders) Clone(opts ...CloneOpt) *BaseProviders { clone := *b @@ -143,14 +149,3 @@ func (b *BaseProviders) Clone(opts ...CloneOpt) *BaseProviders { return &clone } - -// SetDefaultLogger -// Deprecated: -// for backward compatibility please pass logger to all structure directly -func SetDefaultLogger(logger log.Logger) { - defaultLogger = logger -} - -func SetNodeTmpPath(path string) { - defaultNodeTmpPath = path -} diff --git a/pkg/interface.go b/pkg/ssh.go similarity index 98% rename from pkg/interface.go rename to pkg/ssh.go index b080f19..4dead9b 100644 --- a/pkg/interface.go +++ b/pkg/ssh.go @@ -84,11 +84,6 @@ type SSHProvider interface { Cleanup(ctx context.Context) error } -type Provider interface { - SSHProvider() SSHProvider - Cleanup(ctx context.Context) error -} - type Interface interface { Command(name string, args ...string) Command File() File @@ -287,3 +282,9 @@ type SSHClient interface { IsStopped() bool } + +type KubeProxyCommand interface { + Command + WaitError() error + Stop() +} diff --git a/pkg/ssh/base/kubeproxy/kube_proxy.go b/pkg/ssh/base/kubeproxy/kube_proxy.go new file mode 100644 index 0000000..647ed42 --- /dev/null +++ b/pkg/ssh/base/kubeproxy/kube_proxy.go @@ -0,0 +1,552 @@ +// Copyright 2025 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use 
this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package kubeproxy + +import ( + "fmt" + "math/rand" + "os" + "regexp" + "strconv" + "sync" + "time" + + "github.com/name212/govalue" + + connection "github.com/deckhouse/lib-connection/pkg" + "github.com/deckhouse/lib-connection/pkg/settings" + "github.com/deckhouse/lib-connection/pkg/ssh/session" +) + +type StartCommandParams struct { + OnStart func() + StdoutHandler func(string) + WaitHandler func(err error) + Cmd string +} + +type Runner interface { + StartCommand(params StartCommandParams) (connection.KubeProxyCommand, error) + UpTunnel(localPort int, kubeProxyPort string) (connection.Tunnel, string, error) + ClientID() string +} + +type BaseKubeProxy struct { + session *session.Session + sett settings.Settings + + runner Runner + + KubeProxyPort string + LocalPort string + + proxyMutex sync.RWMutex + proxy connection.KubeProxyCommand + + tunnelMu sync.RWMutex + tunnel connection.Tunnel + + stop bool + port string + localPort int + + monitorsMu sync.Mutex + healthMonitorsByStartID map[int]chan struct{} +} + +func NewBaseKubeProxy(runner Runner, sett settings.Settings, sess *session.Session) *BaseKubeProxy { + return &BaseKubeProxy{ + runner: runner, + sett: sett, + session: sess, + port: "0", + localPort: DefaultLocalAPIPort, + healthMonitorsByStartID: make(map[int]chan struct{}), + } +} + +func (k *BaseKubeProxy) Start(useLocalPort int) (string, error) { + startID := rand.Int() + + k.debugWithID(startID, "Call start Kube-proxy port:%d", useLocalPort) + + success := false + defer func() { + 
k.stop = false
+		if !success {
+			k.debugWithID(startID, "Kube-proxy was not started. Try to clear all")
+			k.Stop(startID)
+		}
+		k.debugWithID(startID, "Kube-proxy starting on %d was finished", k.localPort)
+	}()
+
+	proxyCommandErrorCh := make(chan error, 1)
+	var proxy connection.KubeProxyCommand
+	var port string
+	var err error
+	for {
+		proxy, port, err = k.runKubeProxy(proxyCommandErrorCh, startID)
+		if err != nil {
+			k.debugWithID(startID, "Got error from runKubeProxy func: %v", err)
+			return "", err
+		}
+
+		k.stop = false
+		portNum, err := strconv.Atoi(port)
+		if err != nil {
+			continue
+		}
+		if portNum > 1024 {
+			break
+		}
+		k.debugWithID(startID, "Proxy runs on privileged port %s and will be stopped and restarted", port)
+		k.Stop(startID)
+	}
+
+	k.debugWithID(startID, "Proxy was started successfully")
+
+	k.setProxy(proxy)
+	k.port = port
+
+	tunnelErrorCh := make(chan error)
+	tun, localPort, lastError := k.upTunnel(port, useLocalPort, tunnelErrorCh, startID)
+	if lastError != nil {
+		k.debugWithID(startID, "Got error from upTunnel func: %v", lastError)
+		return "", fmt.Errorf("tunnel up error: max retries reached, last error: %w", lastError)
+	}
+
+	k.setTunnel(tun)
+	k.localPort = localPort
+
+	monitorCh := k.createMonitorCh(startID)
+	go k.healthMonitor(
+		proxyCommandErrorCh,
+		tunnelErrorCh,
+		monitorCh,
+		startID,
+	)
+
+	success = true
+
+	return fmt.Sprintf("%d", k.localPort), nil
+}
+
+func (k *BaseKubeProxy) StopAll() {
+	k.Stop(-1)
+}
+
+func (k *BaseKubeProxy) Stop(startID int) {
+	if k == nil {
+		return
+	}
+
+	if k.stop {
+		k.debugWithID(startID, "Stop kube-proxy: kube proxy already stopped. 
Skip") + return + } + + if startID < 1 { + for id := range k.healthMonitorsByStartID { + k.stopHealthMonitor(id) + } + } else { + k.stopHealthMonitor(startID) + } + + proxy, err := k.getProxy() + if err == nil { + k.debugWithID(startID, "Stop proxy command") + proxy.Stop() + k.debugWithID(startID, "Proxy command stopped") + k.setProxy(nil) + k.port = "0" + } + + tun, err := k.getTunnel() + if err == nil { + k.debugWithID(startID, "Stop tunnel") + tun.Stop() + k.debugWithID(startID, "Tunnel stopped") + k.setTunnel(nil) + } + + k.stop = true +} + +func (k *BaseKubeProxy) stopHealthMonitor(startID int) { + k.monitorsMu.Lock() + defer k.monitorsMu.Unlock() + + ch, ok := k.healthMonitorsByStartID[startID] + if !ok || ch == nil { + return + } + + ch <- struct{}{} + delete(k.healthMonitorsByStartID, startID) +} + +func (k *BaseKubeProxy) createMonitorCh(startID int) chan struct{} { + k.monitorsMu.Lock() + defer k.monitorsMu.Unlock() + + c := make(chan struct{}, 1) + k.healthMonitorsByStartID[startID] = c + + return c +} + +func (k *BaseKubeProxy) tryToRestartFully(startID int) { + k.debugWithID(startID, "Try restart kube proxy fully") + sleep := 4 * time.Second + + for { + k.Stop(startID) + + _, err := k.Start(k.localPort) + + if err == nil { + k.stop = false + k.debugWithID(startID, "Proxy was restarted successfully") + return + } + + // need warn for human + msg, args := k.appendIDs( + startID, + "Proxy was not restarted: %v. Sleep %s seconds before next attempt", + err, + sleep.String(), + ) + + k.sett.Logger().WarnF(msg, args...) 
+
+		time.Sleep(sleep)
+
+		k.session.ChoiceNewHost()
+		k.debugWithID(startID, "New host selected on fully restart %s", k.session.Host())
+	}
+}
+
+func (k *BaseKubeProxy) healthMonitor(
+	proxyErrorCh, tunnelErrorCh chan error,
+	stopCh chan struct{},
+	startID int,
+) {
+	k.debugWithID(startID, "Kube proxy health monitor started")
+	defer k.debugWithID(startID, "Kube proxy health monitor stopped")
+
+	proxyErrorCount := 0
+	for {
+		k.debugWithID(startID, "Kube proxy monitor step")
+		select {
+		case err := <-proxyErrorCh:
+			k.debugWithID(startID, "Proxy failed with error %v", err)
+			// if the proxy crashed, we need to restart kube-proxy fully,
+			// both proxy and tunnel (the tunnel depends on the proxy)
+			k.tryToRestartFully(startID)
+			// if we restarted the proxy fully,
+			// this monitor must finish because a new monitor was started
+			return
+
+		case err := <-tunnelErrorCh:
+			k.debugWithID(startID, "Tunnel failed. Stopping previous tunnel: %v", err)
+			// we need to fully stop the old tunnel before starting a new one
+			tun, err := k.getTunnel()
+			if err == nil {
+				tun.Stop()
+			}
+
+			k.debugWithID(startID, "Tunnel stopped before restart. Starting new tunnel...")
+
+			if proxyErrorCount < 3 {
+				var err error
+				tun, _, err := k.upTunnel(k.port, k.localPort, tunnelErrorCh, startID)
+				if err != nil {
+					k.debugWithID(startID, "Tunnel was not up: %v. 
Try to restart fully", err)
+					k.tryToRestartFully(startID)
+					return
+				} else {
+					k.setTunnel(tun)
+				}
+
+				proxyErrorCount++
+			} else {
+				k.tryToRestartFully(startID)
+				return
+			}
+
+			k.debugWithID(startID, "Tunnel re-upped successfully")
+
+		case <-stopCh:
+			k.debugWithID(startID, "Got kube proxy stopped message")
+			return
+		}
+	}
+}
+
+func (k *BaseKubeProxy) upTunnel(
+	kubeProxyPort string,
+	useLocalPort int,
+	tunnelErrorCh chan error,
+	startID int,
+) (connection.Tunnel, int, error) {
+	k.debugWithID(startID,
+		"Starting up tunnel with proxy port %s and local port %d",
+		kubeProxyPort,
+		useLocalPort,
+	)
+
+	rewriteLocalPort := false
+	localPort := useLocalPort
+
+	portProvider := NewPortProvider(useLocalPort)
+
+	if useLocalPort < 1 {
+		k.debugWithID(startID,
+			"Incorrect local port %d, using default %d",
+			useLocalPort,
+			DefaultLocalAPIPort,
+		)
+		localPort = portProvider.Next()
+		rewriteLocalPort = true
+	}
+
+	maxRetries := 5
+	retries := 0
+	var lastError error
+	var tun connection.Tunnel
+	for {
+		k.debugWithID(startID, "Start iteration %d for up tunnel on %d", retries, localPort)
+		proxy, getProxyErr := k.getProxy()
+		if getProxyErr != nil {
+			return nil, 0, fmt.Errorf("failed to get proxy: %v", getProxyErr)
+		}
+
+		if proxy.WaitError() != nil {
+			lastError = fmt.Errorf("proxy failed while restarting tunnel")
+			break
+		}
+
+		newTun, tunnelAddress, upTunnelErr := k.runner.UpTunnel(localPort, kubeProxyPort)
+		if upTunnelErr != nil {
+			k.debugWithID(startID, "Starting tunnel failed. Cleaning...")
+
+			if !govalue.Nil(tun) {
+				tun.Stop()
+			}
+
+			lastError = fmt.Errorf("tunnel '%s': %w", tunnelAddress, upTunnelErr)
+			k.debugWithID(startID, "Starting tunnel failed. 
Error: %v", lastError) + + if rewriteLocalPort { + localPort = portProvider.Next() + k.debugWithID(startID, "New local port %d", localPort) + } + + retries++ + if retries >= maxRetries { + k.debugWithID(startID, "Last iteration finished") + tun = nil + break + } + } else { + k.debugWithID(startID, "Tunnel was started on %s. Starting health monitor", tunnelAddress) + go newTun.HealthMonitor(tunnelErrorCh) + lastError = nil + tun = newTun + break + } + } + + dbgMsg := fmt.Sprintf("Tunnel up on local port %d", localPort) + if lastError != nil { + dbgMsg = fmt.Sprintf("Tunnel was not up: %v", lastError) + } + + k.debugWithID(startID, "%s", dbgMsg) + + return tun, localPort, lastError +} + +var portRe = regexp.MustCompile(`Starting to serve on .*?:(\d+)`) + +func (k *BaseKubeProxy) runKubeProxy( + waitCh chan error, + startID int, +) (connection.KubeProxyCommand, string, error) { + k.debugWithID(startID, "Begin starting proxy") + + cmd := k.proxyCMD(startID) + + port := "" + portReady := make(chan struct{}, 1) + + stdOutHandler := func(line string) { + m := portRe.FindStringSubmatch(line) + if len(m) == 2 && m[1] != "" { + port = m[1] + k.debugWithID(startID, "Got proxy port = %s on host %s", port, k.session.Host()) + portReady <- struct{}{} + } + } + + onStart := make(chan struct{}, 1) + + onStartHandler := func() { + k.debugWithID(startID, "Proxy command started") + onStart <- struct{}{} + } + + waitHandler := func(err error) { + k.debugWithID(startID, "Proxy command wait error: %v", err) + waitCh <- err + } + + k.debugWithID(startID, "Starting proxy command") + + proxy, err := k.runner.StartCommand(StartCommandParams{ + OnStart: onStartHandler, + StdoutHandler: stdOutHandler, + WaitHandler: waitHandler, + Cmd: cmd, + }) + + if err != nil { + k.debugWithID(startID, "Start proxy command error: %v", err) + return nil, "", fmt.Errorf("start kubectl proxy: %w", err) + } + + k.debugWithID(startID, "Proxy command was started") + + returnWaitErr := func(err error) error { + 
k.debugWithID(startID, "Proxy command waiting error: %v", err)
+		template := `Proxy exited suddenly: %s%s
+Status: %w`
+		return fmt.Errorf(template, string(proxy.StdoutBytes()), string(proxy.StderrBytes()), err)
+	}
+
+	// we need to check that kube-proxy has started
+	// by waiting for the port pattern in its output,
+	// but we may receive an error instead, delivered via waitCh
+	select {
+	case <-onStart:
+	case err := <-waitCh:
+		return nil, "", returnWaitErr(err)
+	}
+
+	// Wait for proxy startup
+	t := time.NewTicker(20 * time.Second)
+	defer t.Stop()
+	select {
+	case e := <-waitCh:
+		return nil, "", returnWaitErr(e)
+	case <-t.C:
+		k.debugWithID(startID, "Starting proxy command timeout")
+		return nil, "", fmt.Errorf("timeout waiting for api proxy port")
+	case <-portReady:
+		if port == "" {
+			k.debugWithID(startID, "Starting proxy command: empty port")
+			return nil, "", fmt.Errorf("got empty port from kubectl proxy")
+		}
+	}
+
+	k.debugWithID(startID, "Proxy process started with port: %s", port)
+	return proxy, port, nil
+}
+
+func (k *BaseKubeProxy) proxyCMD(startID int) string {
+	kubectlProxy := fmt.Sprintf(
+		// --disable-filter is needed to exec into etcd pods
+		"kubectl proxy --as=dhctl --as-group=system:masters --port=%s --kubeconfig /etc/kubernetes/admin.conf --disable-filter",
+		k.port,
+	)
+	if v := os.Getenv("KUBE_PROXY_ACCEPT_HOSTS"); v != "" {
+		kubectlProxy += fmt.Sprintf(" --accept-hosts='%s'", v)
+	}
+	command := fmt.Sprintf("PATH=$PATH:%s/; %s", k.sett.NodeBinPath(), kubectlProxy)
+
+	k.debugWithID(startID, "Proxy command for start: %s", command)
+
+	return command
+}
+
+var errEmpty = fmt.Errorf("empty")
+
+func (k *BaseKubeProxy) setProxy(c connection.KubeProxyCommand) {
+	k.proxyMutex.Lock()
+	defer k.proxyMutex.Unlock()
+
+	k.proxy = c
+}
+
+func (k *BaseKubeProxy) getProxy() (connection.KubeProxyCommand, error) {
+	k.proxyMutex.RLock()
+	defer k.proxyMutex.RUnlock()
+
+	c := k.proxy
+	if govalue.Nil(c) {
+		return nil, errEmpty
+	}
+
+	
return c, nil +} + +func (k *BaseKubeProxy) setTunnel(t connection.Tunnel) { + k.tunnelMu.Lock() + defer k.tunnelMu.Unlock() + + k.tunnel = t +} + +func (k *BaseKubeProxy) getTunnel() (connection.Tunnel, error) { + k.tunnelMu.RLock() + defer k.tunnelMu.RUnlock() + + t := k.tunnel + if govalue.Nil(t) { + return nil, errEmpty + } + + return t, nil +} + +func (k *BaseKubeProxy) appendIDs(id int, f string, args ...any) (string, []any) { + if id > 0 { + f = "[%d] " + f + args = append([]any{id}, args...) + } + + clientID := k.runner.ClientID() + if clientID != "" { + f = "[%s] " + f + args = append([]any{clientID}, args...) + } + + return f, args +} + +func (k *BaseKubeProxy) debugWithID(id int, f string, args ...any) { + f, args = k.appendIDs(id, f, args...) + k.sett.Logger().DebugF(f, args...) +} + +func ExtractTunnelAddressFromEnv(localPort int, kubeProxyPort string) string { + if v := os.Getenv("KUBE_PROXY_BIND_ADDR"); v != "" { + return fmt.Sprintf("%s:%d:localhost:%s", v, localPort, kubeProxyPort) + } + + return "" +} diff --git a/pkg/ssh/base/kubeproxy/port_provider.go b/pkg/ssh/base/kubeproxy/port_provider.go new file mode 100644 index 0000000..18eb134 --- /dev/null +++ b/pkg/ssh/base/kubeproxy/port_provider.go @@ -0,0 +1,56 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package kubeproxy + +import ( + "github.com/deckhouse/lib-connection/pkg/utils/rand" +) + +const ( + DefaultLocalAPIPort = 22322 + + kubeProxyPortRangeStart = 22340 + kubeProxyPortRangeEnd = 22499 +) + +type PortProvider struct { + used map[int]struct{} + startPort int +} + +func NewPortProvider(startPort int) *PortProvider { + return &PortProvider{ + used: make(map[int]struct{}), + startPort: startPort, + } +} + +func (p *PortProvider) Next() int { + if p.startPort > 0 { + return p.startPort + } + + if len(p.used) == 0 { + return p.addToUsedAndReturn(DefaultLocalAPIPort) + } + + nextPort := rand.RangeExclude(kubeProxyPortRangeStart, kubeProxyPortRangeEnd, p.used) + return p.addToUsedAndReturn(nextPort) +} + +func (p *PortProvider) addToUsedAndReturn(port int) int { + p.used[port] = struct{}{} + return port +} diff --git a/pkg/ssh/base/kubeproxy/port_provider_test.go b/pkg/ssh/base/kubeproxy/port_provider_test.go new file mode 100644 index 0000000..d93e128 --- /dev/null +++ b/pkg/ssh/base/kubeproxy/port_provider_test.go @@ -0,0 +1,55 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+
+package kubeproxy
+
+import (
+	"testing"
+
+	"github.com/stretchr/testify/require"
+)
+
+func TestPortProvider(t *testing.T) {
+	t.Run("Use non default", func(t *testing.T) {
+		const usedPort = 22222
+		provider := NewPortProvider(usedPort)
+
+		for i := 0; i < 5; i++ {
+			got := provider.Next()
+			require.Equal(t, usedPort, got, "should always return the same port")
+			require.Len(t, provider.used, 0, "should not save the port in used")
+		}
+	})
+
+	t.Run("Use default", func(t *testing.T) {
+		provider := NewPortProvider(-1)
+
+		// the first call returns the default port
+		got := provider.Next()
+		require.Equal(t, DefaultLocalAPIPort, got, "should return the default port")
+
+		// subsequent calls return random ports from the range
+		for i := 1; i < 5; i++ {
+			got := provider.Next()
+
+			require.NotEqual(t, DefaultLocalAPIPort, got, "should not be the default port")
+
+			require.True(t, got >= 22340, "port should be in range")
+			require.True(t, got <= 22499, "port should be in range")
+
+			require.Len(t, provider.used, i+1, "used ports should have the expected length")
+			require.Contains(t, provider.used, got, "used ports should contain the new port")
+		}
+	})
+}
diff --git a/pkg/ssh/clissh/agent.go b/pkg/ssh/clissh/agent.go
index 32b4de2..d569416 100644
--- a/pkg/ssh/clissh/agent.go
+++ b/pkg/ssh/clissh/agent.go
@@ -17,7 +17,6 @@ package clissh
 import (
 	"fmt"
 	"net"
-	"os"
 	"sync"
 	"time"
 
@@ -98,7 +97,7 @@ func (a *Agent) Start() error {
 	a.agent = cmd.NewAgent(a.sshSettings, a.agentSettings)
 
 	if len(a.agentSettings.PrivateKeys) == 0 {
-		a.agent.WithAuthSock(os.Getenv("SSH_AUTH_SOCK"))
+		a.agent.WithAuthSock(a.sshSettings.AuthSock())
 		return nil
 	}
diff --git a/pkg/ssh/clissh/client.go b/pkg/ssh/clissh/client.go
index 389327e..d505a0d 100644
--- a/pkg/ssh/clissh/client.go
+++ b/pkg/ssh/clissh/client.go
@@ -53,6 +53,8 @@ type Client struct {
 	kubeProxies []*KubeProxy
 
 	stopped bool
+
+	id string
 }
 
 func (s *Client) OnlyPreparePrivateKeys() error {
@@ -94,7 +96,7 @@ func (s *Client) Command(name string, arg ...string) connection.Command {
 // KubeProxy is used to 
start kubectl proxy and create a tunnel from local port to proxy port func (s *Client) KubeProxy() connection.KubeProxy { - p := NewKubeProxy(s.settings, s.SessionSettings) + p := NewKubeProxy(s) s.kubeProxies = append(s.kubeProxies, p) return p } @@ -173,6 +175,11 @@ func (s *Client) IsStopped() bool { return s.stopped } +func (s *Client) WithID(id string) *Client { + s.id = id + return s +} + func (s *Client) stopAgent() { if govalue.Nil(s.Agent) { return diff --git a/pkg/ssh/clissh/cmd/scp.go b/pkg/ssh/clissh/cmd/scp.go index 1ab401c..5e84a47 100644 --- a/pkg/ssh/clissh/cmd/scp.go +++ b/pkg/ssh/clissh/cmd/scp.go @@ -82,7 +82,10 @@ func (s *SCP) WithPreserve(preserve bool) *SCP { func (s *SCP) SCP(ctx context.Context) *SCP { // env := append(os.Environ(), s.Env...) - env := append(os.Environ(), s.Session.AgentSettings.AuthSockEnv()) + env := os.Environ() + if s.Session.AgentSettings != nil { + env = append(os.Environ(), s.Session.AgentSettings.AuthSockEnv()) + } // set absolute path to the ssh binary, because scp contains predefined absolute path to ssh binary (/ssh/bin/ssh) as we set in the building process of the static ssh utils sshPathArgs := []string{"-S", fmt.Sprintf("%s/bin/ssh", os.Getenv("PWD"))} diff --git a/pkg/ssh/clissh/cmd/ssh.go b/pkg/ssh/clissh/cmd/ssh.go index aabc8ab..24bd734 100644 --- a/pkg/ssh/clissh/cmd/ssh.go +++ b/pkg/ssh/clissh/cmd/ssh.go @@ -71,7 +71,9 @@ func (s *SSH) WithCommand(name string, arg ...string) *SSH { // TODO move connection settings from ExecuteCmd func (s *SSH) Cmd(ctx context.Context) *exec.Cmd { env := append(os.Environ(), s.Env...) 
- env = append(env, s.Session.AgentSettings.AuthSockEnv()) + if s.Session.AgentSettings != nil { + env = append(env, s.Session.AgentSettings.AuthSockEnv()) + } // ssh connection settings // ANSIBLE_SSH_ARGS="${ANSIBLE_SSH_ARGS:-"-C diff --git a/pkg/ssh/clissh/kube-proxy.go b/pkg/ssh/clissh/kube-proxy.go deleted file mode 100644 index bb75e85..0000000 --- a/pkg/ssh/clissh/kube-proxy.go +++ /dev/null @@ -1,412 +0,0 @@ -// Copyright 2021 Flant JSC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
- -package clissh - -import ( - "context" - "fmt" - "math/rand" - "os" - "regexp" - "time" - - connection "github.com/deckhouse/lib-connection/pkg" - "github.com/deckhouse/lib-connection/pkg/settings" - "github.com/deckhouse/lib-connection/pkg/ssh/session" -) - -var ( - _ connection.KubeProxy = &KubeProxy{} -) - -const DefaultLocalAPIPort = 22322 - -type KubeProxy struct { - Session *session.Session - - settings settings.Settings - - KubeProxyPort string - LocalPort string - - proxy *Command - tunnel *Tunnel - - stop bool - port string - localPort int - - healthMonitorsByStartID map[int]chan struct{} -} - -func NewKubeProxy(sett settings.Settings, sess *session.Session) *KubeProxy { - return &KubeProxy{ - settings: sett, - Session: sess, - port: "0", - localPort: DefaultLocalAPIPort, - healthMonitorsByStartID: make(map[int]chan struct{}), - } -} - -func (k *KubeProxy) Start(useLocalPort int) (string, error) { - startID := rand.Int() - - logger := k.settings.Logger() - logger.DebugF("Kube-proxy start id=[%d]; port:%d\n", startID, useLocalPort) - - success := false - defer func() { - k.stop = false - if !success { - logger.DebugF("[%d] Kube-proxy was not started. 
Try to clear all\n", startID) - k.Stop(startID) - } - logger.DebugF("[%d] Kube-proxy starting was finished\n", startID) - }() - - proxyCommandErrorCh := make(chan error, 1) - proxy, port, err := k.runKubeProxy(proxyCommandErrorCh, startID) - if err != nil { - logger.DebugF("[%d] Got error from runKubeProxy func: %v\n", startID, err) - return "", err - } - - logger.DebugF("[%d] Proxy was started successfully\n", startID) - - k.proxy = proxy - k.port = port - - tunnelErrorCh := make(chan error) - tun, localPort, lastError := k.upTunnel(port, useLocalPort, tunnelErrorCh, startID) - if lastError != nil { - logger.DebugF("[%d] Got error from upTunnel func: %v\n", startID, err) - return "", fmt.Errorf("tunnel up error: max retries reached, last error: %w", lastError) - } - - k.tunnel = tun - k.localPort = localPort - - k.healthMonitorsByStartID[startID] = make(chan struct{}, 1) - go k.healthMonitor( - proxyCommandErrorCh, - tunnelErrorCh, - k.healthMonitorsByStartID[startID], - startID, - ) - - success = true - - return fmt.Sprintf("%d", k.localPort), nil -} - -func (k *KubeProxy) StopAll() { - for startID := range k.healthMonitorsByStartID { - k.Stop(startID) - } -} - -func (k *KubeProxy) Stop(startID int) { - if k == nil { - return - } - - logger := k.settings.Logger() - - if k.stop { - logger.DebugF("[%d] Stop kube-proxy: kube proxy already stopped. 
Skip.\n", startID) - return - } - - if k.healthMonitorsByStartID[startID] != nil { - k.healthMonitorsByStartID[startID] <- struct{}{} - delete(k.healthMonitorsByStartID, startID) - } - if k.proxy != nil { - logger.DebugF("[%d] Stop proxy command\n", startID) - k.proxy.Stop() - logger.DebugF("[%d] Proxy command stopped\n", startID) - k.proxy = nil - k.port = "0" - } - if k.tunnel != nil { - logger.DebugF("[%d] Stop tunnel\n", startID) - k.tunnel.Stop() - logger.DebugF("[%d] Tunnel stopped\n", startID) - k.tunnel = nil - } - k.stop = true -} - -func (k *KubeProxy) tryToRestartFully(startID int) { - logger := k.settings.Logger() - logger.DebugF("[%d] Try restart kubeproxy fully\n", startID) - for { - k.Stop(startID) - - _, err := k.Start(k.localPort) - - if err == nil { - k.stop = false - logger.DebugF("[%d] Proxy was restarted successfully\n", startID) - return - } - - const sleepTimeout = 5 - - // need warn for human - logger.WarnF( - "Proxy was not restarted: %v. Sleep %d seconds before next attempt.\n", - err, - sleepTimeout, - ) - time.Sleep(sleepTimeout * time.Second) - - k.Session.ChoiceNewHost() - logger.DebugF("[%d] New host selected %v\n", startID, k.Session.Host()) - } -} - -func (k *KubeProxy) proxyCMD(startID int) *Command { - kubectlProxy := fmt.Sprintf( - // --disable-filter is needed to exec into etcd pods - "kubectl proxy --as=dhctl --as-group=system:masters --port=%s --kubeconfig /etc/kubernetes/admin.conf --disable-filter", - k.port, - ) - if v := os.Getenv("KUBE_PROXY_ACCEPT_HOSTS"); v != "" { - kubectlProxy += fmt.Sprintf(" --accept-hosts='%s'", v) - } - command := fmt.Sprintf("PATH=$PATH:%s/; %s", k.settings.NodeBinPath(), kubectlProxy) - - k.settings.Logger().DebugF("[%d] Proxy command for start: %s\n", startID, command) - - cmd := NewCommand(k.settings, k.Session, command) - cmd.Sudo(k.ctx()) - cmd.Executor = cmd.Executor.CaptureStderr(nil).CaptureStdout(nil) - return cmd -} - -func (k *KubeProxy) healthMonitor( - proxyErrorCh, tunnelErrorCh 
chan error, - stopCh chan struct{}, - startID int, -) { - logger := k.settings.Logger() - - defer logger.DebugF("[%d] Kubeproxy health monitor stopped\n", startID) - logger.DebugF("[%d] Kubeproxy health monitor started\n", startID) - - for { - logger.DebugF("[%d] Kubeproxy Monitor step\n", startID) - select { - case err := <-proxyErrorCh: - logger.DebugF("[%d] Proxy failed with error %v\n", startID, err) - // if proxy crushed, we need to restart kube-proxy fully - // with proxy and tunnel (tunnel depends on proxy) - k.tryToRestartFully(startID) - // if we restart proxy fully - // this monitor must be finished because new monitor was started - return - - case err := <-tunnelErrorCh: - logger.DebugF("[%d] Tunnel failed %v. Stopping previous tunnel\n", startID, err) - // we need fully stop tunnel because - k.tunnel.Stop() - - logger.DebugF("[%d] Tunnel stopped before restart. Starting new tunnel...\n", startID) - - k.tunnel, _, err = k.upTunnel(k.port, k.localPort, tunnelErrorCh, startID) - if err != nil { - logger.DebugF("[%d] Tunnel was not up: %v. 
Try to restart fully\n", startID, err) - k.tryToRestartFully(startID) - return - } - - logger.DebugF("[%d] Tunnel re up successfully\n") - - case <-stopCh: - logger.DebugF("[%d] Kubeproxy monitor stopped") - return - } - } -} - -func (k *KubeProxy) ctx() context.Context { - return context.Background() -} - -func (k *KubeProxy) upTunnel( - kubeProxyPort string, - useLocalPort int, - tunnelErrorCh chan error, - startID int, -) (*Tunnel, int, error) { - logger := k.settings.Logger() - - logger.DebugF( - "[%d] Starting up tunnel with proxy port %s and local port %d\n", - startID, - kubeProxyPort, - useLocalPort, - ) - - rewriteLocalPort := false - localPort := useLocalPort - - if useLocalPort < 1 { - logger.DebugF( - "[%d] Incorrect local port %d use default %d\n", - startID, - useLocalPort, - DefaultLocalAPIPort, - ) - localPort = DefaultLocalAPIPort - rewriteLocalPort = true - } - - maxRetries := 5 - retries := 0 - var lastError error - var tun *Tunnel - for { - logger.DebugF("[%d] Start %d iteration for up tunnel\n", startID, retries) - - if k.proxy.WaitError() != nil { - lastError = fmt.Errorf("proxy was failed while restart tunnel") - break - } - - // try to start tunnel from localPort to proxy port - var tunnelAddress string - if v := os.Getenv("KUBE_PROXY_BIND_ADDR"); v != "" { - tunnelAddress = fmt.Sprintf("%s:%d:localhost:%s", v, localPort, kubeProxyPort) - } else { - tunnelAddress = fmt.Sprintf("%d:localhost:%s", localPort, kubeProxyPort) - } - - logger.DebugF("[%d] Try up tunnel on %v\n", startID, tunnelAddress) - tun = NewTunnel(k.settings, k.Session, "L", tunnelAddress) - err := tun.Up(k.ctx()) - if err != nil { - logger.DebugF("[%d] Start tunnel was failed. Cleaning...\n", startID) - tun.Stop() - lastError = fmt.Errorf("tunnel '%s': %w", tunnelAddress, err) - logger.DebugF("[%d] Start tunnel was failed. 
Error: %v\n", startID, lastError) - if rewriteLocalPort { - localPort++ - logger.DebugF("[%d] New local port %d\n", startID, localPort) - } - - retries++ - if retries >= maxRetries { - logger.DebugF("[%d] Last iteration finished\n", startID) - tun = nil - break - } - } else { - logger.DebugF("[%d] Tunnel was started. Starting health monitor\n", startID) - go tun.HealthMonitor(tunnelErrorCh) - lastError = nil - break - } - } - - dbgMsg := fmt.Sprintf("Tunnel up on local port %d", localPort) - if lastError != nil { - dbgMsg = fmt.Sprintf("Tunnel was not up: %v", lastError) - } - - logger.DebugF("[%d] %s\n", startID, dbgMsg) - - return tun, localPort, lastError -} - -func (k *KubeProxy) runKubeProxy( - waitCh chan error, - startID int, -) (*Command, string, error) { - logger := k.settings.Logger() - - logger.DebugF("[%d] Begin starting proxy\n", startID) - proxy := k.proxyCMD(startID) - - port := "" - portReady := make(chan struct{}, 1) - portRe := regexp.MustCompile(`Starting to serve on .*?:(\d+)`) - - proxy.WithStdoutHandler(func(line string) { - m := portRe.FindStringSubmatch(line) - if len(m) == 2 && m[1] != "" { - port = m[1] - logger.DebugF("Got proxy port = %s on host %s\n", port, k.Session.Host()) - portReady <- struct{}{} - } - }) - - onStart := make(chan struct{}, 1) - proxy.OnCommandStart(func() { - logger.DebugF("[%d] Command started\n", startID) - onStart <- struct{}{} - }) - - proxy.WithWaitHandler(func(err error) { - logger.DebugF("[%d] Wait error: %v\n", startID, err) - waitCh <- err - }) - - logger.DebugF("[%d] Start proxy command\n", startID) - err := proxy.Start() - if err != nil { - logger.DebugF("[%d] Start proxy command error: %v\n", startID, err) - return nil, "", fmt.Errorf("start kubectl proxy: %w", err) - } - - logger.DebugF("[%d] Proxy command was started\n", startID) - - returnWaitErr := func(err error) error { - logger.DebugF("[%d] Proxy command waiting error: %v\n", startID, err) - template := `Proxy exited suddenly: %s%s -Status: %w` - 
return fmt.Errorf(template, string(proxy.StdoutBytes()), string(proxy.StderrBytes()), err) - } - - // we need to check that kubeproxy was started - // that checking wait string pattern in output - // but we may receive error and this error will get from waitCh - select { - case <-onStart: - case err := <-waitCh: - return nil, "", returnWaitErr(err) - } - - // Wait for proxy startup - t := time.NewTicker(20 * time.Second) - defer t.Stop() - select { - case e := <-waitCh: - return nil, "", returnWaitErr(e) - case <-t.C: - logger.DebugF("[%d] Starting proxy command timeout\n", startID) - return nil, "", fmt.Errorf("timeout waiting for api proxy port") - case <-portReady: - if port == "" { - logger.DebugF("[%d] Starting proxy command: empty port\n", startID) - return nil, "", fmt.Errorf("got empty port from kubectl proxy") - } - } - - logger.DebugF("[%d] Proxy process started with port: %s\n", startID, port) - return proxy, port, nil -} diff --git a/pkg/ssh/clissh/kube_proxy.go b/pkg/ssh/clissh/kube_proxy.go new file mode 100644 index 0000000..d9f9a5a --- /dev/null +++ b/pkg/ssh/clissh/kube_proxy.go @@ -0,0 +1,88 @@ +// Copyright 2021 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+
+package clissh
+
+import (
+	"context"
+	"fmt"
+
+	connection "github.com/deckhouse/lib-connection/pkg"
+	"github.com/deckhouse/lib-connection/pkg/ssh/base/kubeproxy"
+)
+
+var (
+	_ connection.KubeProxy = &KubeProxy{}
+)
+
+type KubeProxy struct {
+	*kubeproxy.BaseKubeProxy
+}
+
+func NewKubeProxy(client *Client) *KubeProxy {
+	runner := newKubeProxyRunner(context.Background(), client)
+	return &KubeProxy{
+		BaseKubeProxy: kubeproxy.NewBaseKubeProxy(runner, client.Settings(), client.Session()),
+	}
+}
+
+type kubeProxyRunner struct {
+	client *Client
+	ctx    context.Context
+}
+
+func newKubeProxyRunner(ctx context.Context, client *Client) *kubeProxyRunner {
+	return &kubeProxyRunner{
+		client: client,
+		ctx:    ctx,
+	}
+}
+
+func (r *kubeProxyRunner) StartCommand(params kubeproxy.StartCommandParams) (connection.KubeProxyCommand, error) {
+	cmd := NewCommand(r.client.Settings(), r.client.Session(), params.Cmd)
+	cmd.Sudo(r.ctx)
+
+	cmd.OnCommandStart(params.OnStart)
+	cmd.WithStdoutHandler(params.StdoutHandler)
+	cmd.WithWaitHandler(params.WaitHandler)
+
+	cmd.Executor = cmd.Executor.CaptureStderr(nil).CaptureStdout(nil)
+
+	if err := cmd.Start(); err != nil {
+		return nil, err
+	}
+
+	return cmd, nil
+}
+
+func (r *kubeProxyRunner) UpTunnel(localPort int, kubeProxyPort string) (connection.Tunnel, string, error) {
+	address := kubeproxy.ExtractTunnelAddressFromEnv(localPort, kubeProxyPort)
+	if address == "" {
+		address = fmt.Sprintf("%d:127.0.0.1:%s", localPort, kubeProxyPort)
+	}
+
+	r.client.settings.Logger().DebugF("Try up tunnel for kube proxy on %s", address)
+
+	tun := r.client.Tunnel(address)
+
+	if err := tun.Up(r.ctx); err != nil {
+		return nil, address, err
+	}
+
+	return tun, address, nil
+}
+
+func (r *kubeProxyRunner) ClientID() string {
+	return r.client.id
+}
diff --git a/pkg/ssh/clissh/process/executor.go b/pkg/ssh/clissh/process/executor.go
index c9dbb86..9e3c63c 100644
--- a/pkg/ssh/clissh/process/executor.go
+++ b/pkg/ssh/clissh/process/executor.go
@@ -373,7 +373,7 @@ func (e *Executor) readFromStreams(stdoutReadPipe io.Reader, stdoutHandlerWriteP
 		return
 	}
 
-	logger.DebugF("Start read from streams for command: ", e.cmd.String())
+	logger.DebugF("Start read from streams for command: %s", e.cmd.String())
 
 	buf := make([]byte, 16)
 	var matchersDone bool
diff --git a/pkg/ssh/config/config.go b/pkg/ssh/config/config.go
index 684e165..bd5775e 100644
--- a/pkg/ssh/config/config.go
+++ b/pkg/ssh/config/config.go
@@ -46,6 +46,8 @@ type Config struct {
 	BastionPassword string `json:"sshBastionPassword,omitempty"`
 
 	ExtraArgs string `json:"sshExtraArgs,omitempty"`
+
+	ForceUseSSHAgent bool `json:"forceUseSSHAgent,omitempty"`
 }
 
 func (c *Config) FillDefaults() *Config {
@@ -94,6 +96,8 @@ func (c *Config) Clone() *Config {
 		BastionPassword: c.BastionPassword,
 
 		ExtraArgs: c.ExtraArgs,
+
+		ForceUseSSHAgent: c.ForceUseSSHAgent,
 	}
 }
 
@@ -114,7 +118,7 @@ func (c *Config) BastionPortString() string {
 }
 
 func (c *Config) HaveAuthMethods() bool {
-	if len(c.PrivateKeys) > 0 || c.SudoPassword != "" {
+	if len(c.PrivateKeys) > 0 || c.SudoPassword != "" || c.ForceUseSSHAgent {
 		return true
 	}
diff --git a/pkg/ssh/config/config_test.go b/pkg/ssh/config/config_test.go
index 2d301d5..d8c9701 100644
--- a/pkg/ssh/config/config_test.go
+++ b/pkg/ssh/config/config_test.go
@@ -162,4 +162,89 @@ func TestConfigClone(t *testing.T) {
 			c.BastionPort = intPtr(3335)
 		})
 	})
+
+	t.Run("force use ssh agent", func(t *testing.T) {
+		cfg := &Config{
+			Mode: Mode{
+				ForceLegacy: false,
+				ForceModern: true,
+			},
+
+			User: "user",
+			Port: intPtr(2220),
+
+			ForceUseSSHAgent: true,
+		}
+
+		cpy := cfg.Clone()
+
+		assertCloned(t, cfg, cpy)
+		assertNotAffected(t, cfg, cpy, func(c *Config) {
+			c.ForceModern = false
+			c.Port = intPtr(2222)
+			c.ForceUseSSHAgent = false
+		})
+	})
+}
+
+func TestHaveAuthMethod(t *testing.T) {
+	type testCase struct {
+		name     string
+		cfg      *Config
+		expected bool
+	}
+
+	tests := []testCase{
+		{
+			name: "have private keys",
+			cfg: &Config{
+				User: "user",
+				Port: intPtr(2228),
+				PrivateKeys: []AgentPrivateKey{
+					{
+						Key:        "content",
+						Passphrase: "not secure key",
+						IsPath:     false,
+					},
+				},
+			},
+			expected: true,
+		},
+
+		{
+			name: "have sudo password",
+			cfg: &Config{
+				User:         "user",
+				Port:         intPtr(2228),
+				SudoPassword: "not secure",
+			},
+			expected: true,
+		},
+
+		{
+			name: "force agent",
+			cfg: &Config{
+				User:             "user",
+				Port:             intPtr(2228),
+				ForceUseSSHAgent: true,
+			},
+			expected: true,
+		},
+
+		{
+			name: "no methods",
+			cfg: &Config{
+				User: "user",
+				Port: intPtr(2228),
+			},
+			expected: false,
+		},
+	}
+
+	for _, tst := range tests {
+		t.Run(tst.name, func(t *testing.T) {
+			r := tst.cfg.HaveAuthMethods()
+			require.Equal(t, tst.expected, r, "have valid result of HaveAuthMethods")
+		})
+	}
 }
diff --git a/pkg/ssh/config/openapi/ssh_configuration.yaml b/pkg/ssh/config/openapi/ssh_configuration.yaml
index f236432..97df0d9 100644
--- a/pkg/ssh/config/openapi/ssh_configuration.yaml
+++ b/pkg/ssh/config/openapi/ssh_configuration.yaml
@@ -23,6 +23,7 @@ apiVersions:
       anyOf:
         - required: [apiVersion, kind, sshUser, sshAgentPrivateKeys]
         - required: [apiVersion, kind, sshUser, sudoPassword]
+        - required: [apiVersion, kind, sshUser, forceUseSSHAgent]
       x-examples:
         - apiVersion: dhctl.deckhouse.io/v1
           kind: SSHConfig
@@ -89,4 +90,9 @@ apiVersions:
           description: |
             Switch to modern SSH mode (gossh).
           type: boolean
+        forceUseSSHAgent:
+          description: |
+            Force use of the SSH agent passed via the SSH_AUTH_SOCK environment variable if sshAgentPrivateKeys and sudoPassword are not provided.
+          type: boolean
+          default: false
diff --git a/pkg/ssh/config/opts.go b/pkg/ssh/config/opts.go
index 0e20ecb..e67caed 100644
--- a/pkg/ssh/config/opts.go
+++ b/pkg/ssh/config/opts.go
@@ -17,10 +17,11 @@ package config
 const DefaultPort = 22
 
 type validateOptions struct {
-	omitDocInError  bool
-	strictUnmarshal bool
-	requiredSSHHost bool
-	noPrettyError   bool
+	omitDocInError   bool
+	strictUnmarshal  bool
+	requiredSSHHost  bool
+	noPrettyError    bool
+	skipUnknownKinds bool
 }
 
 type ValidateOption func(o *validateOptions)
@@ -48,3 +49,9 @@ func ParseWithNoPrettyError(v bool) ValidateOption {
 		o.noPrettyError = v
 	}
 }
+
+func ParseWithSkipUnknownKinds(v bool) ValidateOption {
+	return func(o *validateOptions) {
+		o.skipUnknownKinds = v
+	}
+}
diff --git a/pkg/ssh/config/parse_config.go b/pkg/ssh/config/parse_config.go
index c2163d0..a03a630 100644
--- a/pkg/ssh/config/parse_config.go
+++ b/pkg/ssh/config/parse_config.go
@@ -18,6 +18,7 @@ import (
 	"errors"
 	"fmt"
 	"io"
+	"slices"
 	"strings"
 
 	"github.com/deckhouse/lib-dhctl/pkg/log"
@@ -33,6 +34,8 @@ const (
 	sshHostKind   = "SSHHost"
 )
 
+var supportedKinds = []string{sshConfigKind, sshHostKind}
+
 func ParseConnectionConfig(reader io.Reader, sett settings.Settings, opts ...ValidateOption) (*ConnectionConfig, error) {
 	options := &validateOptions{
 		requiredSSHHost: true,
@@ -76,6 +79,16 @@ func ParseConnectionConfig(reader io.Reader, sett settings.Settings, opts ...Val
 			continue
 		}
 
+		if !slices.Contains(supportedKinds, index.Kind) {
+			if options.skipUnknownKinds {
+				logger.DebugF("Skip document %d with unknown kind %s", i, index.Kind)
+			} else {
+				errs.appendUnknownKind(index, i)
+			}
+
+			continue
+		}
+
 		logger.DebugF("Process validate and parse connection config document %d for index %v", i, index)
 
 		err = validator.ValidateWithIndex(index, &docData, validatorOpts...)
diff --git a/pkg/ssh/config/parse_config_test.go b/pkg/ssh/config/parse_config_test.go
index 3567e8b..1c5b700 100644
--- a/pkg/ssh/config/parse_config_test.go
+++ b/pkg/ssh/config/parse_config_test.go
@@ -149,6 +149,65 @@ sudoPassword: "not_secure_password"
 			},
 		},
 
+		{
+			name: "only connection: with unknown kind and skip it",
+			input: `
+apiVersion: dhctl.deckhouse.io/v1
+kind: SSHConfig
+sshPort: 22
+sshUser: ubuntu
+sudoPassword: "not_secure_password"
+---
+apiVersion: dhctl.deckhouse.io/v1
+kind: Unknown
+key: key
+val: 1
+---
+apiVersion: dhctl.deckhouse.io/v1
+kind: SSHHost
+host: "192.168.0.10"
+`,
+			hasErrorContains: "",
+			opts: []ValidateOption{
+				ParseWithRequiredSSHHost(true),
+				ParseWithSkipUnknownKinds(true),
+			},
+			expected: &ConnectionConfig{
+				Config: &Config{
+					Port:         intPtr(22),
+					User:         "ubuntu",
+					SudoPassword: "not_secure_password",
+					BastionPort:  nil,
+				},
+				Hosts: []Host{
+					{
+						Host: "192.168.0.10",
+					},
+				},
+			},
+		},
+
+		{
+			name: "only connection: force agent",
+			input: `
+apiVersion: dhctl.deckhouse.io/v1
+kind: SSHConfig
+sshPort: 22
+sshUser: ubuntu
+forceUseSSHAgent: true
+`,
+			hasErrorContains: "",
+			opts:             noRequiredHostsOpts,
+			expected: &ConnectionConfig{
+				Config: &Config{
+					Port:             intPtr(22),
+					User:             "ubuntu",
+					BastionPort:      nil,
+					ForceUseSSHAgent: true,
+				},
+			},
+		},
+
 		{
 			name: "only connection: correct no port",
 			input: `
diff --git a/pkg/ssh/config/parse_flags.go b/pkg/ssh/config/parse_flags.go
index c87d3b0..e2cb678 100644
--- a/pkg/ssh/config/parse_flags.go
+++ b/pkg/ssh/config/parse_flags.go
@@ -35,30 +35,32 @@ import (
 )
 
 const (
-	AgentPrivateKeysEnv   = "SSH_AGENT_PRIVATE_KEYS"
-	BastionHostEnv        = "SSH_BASTION_HOST"
-	BastionUserEnv        = "SSH_BASTION_USER"
-	BastionPortEnv        = "SSH_BASTION_PORT"
-	UserEnv               = "SSH_USER"
-	HostsEnv              = "SSH_HOSTS"
-	PortEnv               = "SSH_PORT"
-	ExtraArgsEnv          = "SSH_EXTRA_ARGS"
-	ConnectionConfigEnv   = "CONNECTION_CONFIG"
-	LegacyModeEnv         = "SSH_LEGACY_MODE"
-	ModernModeEnv         = "SSH_MODERN_MODE"
-	AskBastionPasswordEnv = "ASK_BASTION_PASS"
-	AskSudoPasswordEnv    = "ASK_BECOME_PASS"
-	ForceNoPrivateKeysEnv = "FORCE_NO_PRIVATE_KEYS"
+	AgentPrivateKeysEnv          = "SSH_AGENT_PRIVATE_KEYS"
+	BastionHostEnv               = "SSH_BASTION_HOST"
+	BastionUserEnv               = "SSH_BASTION_USER"
+	BastionPortEnv               = "SSH_BASTION_PORT"
+	UserEnv                      = "SSH_USER"
+	HostsEnv                     = "SSH_HOSTS"
+	PortEnv                      = "SSH_PORT"
+	ExtraArgsEnv                 = "SSH_EXTRA_ARGS"
+	ConnectionConfigEnv          = "CONNECTION_CONFIG"
+	LegacyModeEnv                = "SSH_LEGACY_MODE"
+	ModernModeEnv                = "SSH_MODERN_MODE"
+	AskBastionPasswordEnv        = "ASK_BASTION_PASS"
+	AskSudoPasswordEnv           = "ASK_BECOME_PASS"
+	ForceNoPrivateKeysEnv        = "FORCE_NO_PRIVATE_KEYS"
+	UseAgentWithNoPrivateKeysEnv = "USE_AGENT_WITH_NO_PRIVATE_KEYS"
 )
 
 const (
-	sshHostsFlag           = "ssh-host"
-	legacyModeFlag         = "ssh-legacy-mode"
-	modernModeFlag         = "ssh-modern-mode"
-	connectionConfigFlag   = "connection-config"
-	askSudoPasswordFlag    = "ask-become-pass"
-	privateKeysFlag        = "ssh-agent-private-keys"
-	forceNoPrivateKeysFlag = "force-no-private-keys"
+	sshHostsFlag                  = "ssh-host"
+	legacyModeFlag                = "ssh-legacy-mode"
+	modernModeFlag                = "ssh-modern-mode"
+	connectionConfigFlag          = "connection-config"
+	askSudoPasswordFlag           = "ask-become-pass"
+	privateKeysFlag               = "ssh-agent-private-keys"
+	forceNoPrivateKeysFlag        = "force-no-private-keys"
+	useAgentWithNoPrivateKeysFlag = "use-agent-with-no-private-keys"
 )
 
 type Flags struct {
@@ -81,7 +83,8 @@ type Flags struct {
 	AskBastionPass bool
 	AskSudoPass    bool
 
-	forceNoPrivateKeys bool
+	forceNoPrivateKeys        bool
+	useAgentWithNoPrivateKeys bool
 
 	baseFlags *baseflags.BaseFlags
 }
@@ -146,8 +149,10 @@ func (f *Flags) FillDefaults() error {
 	}
 
 	// if not use private keys force ask sudo pass
-	if f.forceNoPrivateKeys && !f.AskSudoPass {
-		f.AskSudoPass = true
+	if f.forceNoPrivateKeys {
+		if !f.AskSudoPass && !f.useAgentWithNoPrivateKeys {
+			f.AskSudoPass = true
+		}
 	}
 
 	return nil
@@ -165,6 +170,7 @@ func (f *Flags) RewriteFromEnvs() error {
 		env.NewVar(BastionPortEnv, &f.BastionPort),
 		env.NewVar(PortEnv, &f.Port),
 		env.NewVar(ForceNoPrivateKeysEnv, &f.forceNoPrivateKeys),
+		env.NewVar(UseAgentWithNoPrivateKeysEnv, &f.useAgentWithNoPrivateKeys),
 		privateKeysVal,
 		env.NewVar(BastionHostEnv, &f.BastionHost),
 		env.NewVar(BastionUserEnv, &f.BastionUser),
@@ -191,6 +197,14 @@ func (f *Flags) RewriteFromEnvs() error {
 	return nil
 }
 
+func (f *Flags) FlagSet() (*flag.FlagSet, error) {
+	if err := f.baseFlags.IsInitialized(); err != nil {
+		return nil, err
+	}
+
+	return f.baseFlags.FlagSet(), nil
+}
+
 func (f *Flags) userExtractor() func() (string, error) {
 	var currentUser *string
@@ -430,6 +444,20 @@ func (p *FlagsParser) InitFlags(set *flag.FlagSet) (*Flags, error) {
 		),
 	)
 
+	set.BoolVar(
+		&flags.useAgentWithNoPrivateKeys,
+		useAgentWithNoPrivateKeysFlag,
+		false,
+		envsExtractor.AddEnvToUsage(
+			fmt.Sprintf(
+				"Do not ask for sudo password if private keys are not provided. Use with '--%s'. Forces using the ssh agent over %s",
+				forceNoPrivateKeysFlag,
+				settings.SSHAgentAuthSockEnv,
+			),
+			UseAgentWithNoPrivateKeysEnv,
+		),
+	)
+
 	return flags, nil
 }
@@ -489,6 +517,13 @@ func (p *FlagsParser) ExtractConfigAfterParse(flags *Flags, opts ...ValidateOpti
 		})
 	}
 
+	if flags.forceNoPrivateKeys && flags.useAgentWithNoPrivateKeys {
+		authSockPath := p.Settings().AuthSock()
+		if err := file.IsExists(authSockPath, "auth socket from env "+settings.SSHAgentAuthSockEnv); err != nil {
+			return nil, err
+		}
+	}
+
 	err := validateOnlyUniqueHosts(hosts, options).flagsError()
 	if err != nil {
 		return nil, err
@@ -529,17 +564,21 @@ func (p *FlagsParser) ExtractConfigAfterParse(flags *Flags, opts ...ValidateOpti
 			BastionPassword: passwords.Bastion,
 
 			SudoPassword: passwords.Sudo,
+
+			ForceUseSSHAgent: flags.useAgentWithNoPrivateKeys,
 		},
 		Hosts: hosts,
 	}
 
 	if !res.Config.HaveAuthMethods() {
 		return nil, fmt.Errorf(
-			"No auth methods configured. Please pass --%s and/or --%s or --%s and --%s",
+			"No auth methods configured. Please pass --%s and/or --%s or --%s with --%s or --%s with --%s",
 			privateKeysFlag,
 			askSudoPasswordFlag,
 			forceNoPrivateKeysFlag,
 			askSudoPasswordFlag,
+			forceNoPrivateKeysFlag,
+			useAgentWithNoPrivateKeysFlag,
 		)
 	}
diff --git a/pkg/ssh/config/parse_flags_test.go b/pkg/ssh/config/parse_flags_test.go
index dd8958b..84b3fa7 100644
--- a/pkg/ssh/config/parse_flags_test.go
+++ b/pkg/ssh/config/parse_flags_test.go
@@ -590,6 +590,45 @@ func TestParseFlags(t *testing.T) {
 			},
 		},
 
+		{
+			name: "force no private keys with use agent",
+
+			arguments: []string{
+				"--force-no-private-keys",
+				"--use-agent-with-no-private-keys",
+			},
+
+			hasErrorContains: "",
+
+			privateKeyExtractor: defaultPrivateKeyExtractor(currentHomeDir),
+
+			before: func(t *testing.T, ts *test, logger log.Logger) {
+				p := ts.test.MustCreateTmpFile(t, "", false, "auth_sock")
+				ts.test.WithAuthSock(p)
+			},
+
+			expected: &ConnectionConfig{
+				Config: &Config{
+					Mode: Mode{
+						ForceLegacy: false,
+						ForceModern: false,
+					},
+					User: currentUserName,
+					Port: intPtr(22),
+
+					SudoPassword: "",
+
+					PrivateKeys: make([]AgentPrivateKey, 0),
+
+					BastionUser: currentUserName,
+					BastionPort: intPtr(22),
+
+					ForceUseSSHAgent: true,
+				},
+				Hosts: make([]Host, 0),
+			},
+		},
+
 		{
 			name: "connection config",
@@ -747,7 +786,7 @@ sshBastionPassword: "not_secure_password_bastion"
 			arguments: []string{
 				"--connection-config=/tmp/not_exists.86t6ff6d.yaml",
 			},
-			hasErrorContains: "Cannot get connection config file info for /tmp/not_exists.86t6ff6d.yaml",
+			hasErrorContains: "cannot get connection config file info for /tmp/not_exists.86t6ff6d.yaml",
 		},
 
 		{
@@ -758,7 +797,7 @@ sshBastionPassword: "not_secure_password_bastion"
 				configPath := tst.test.MustMkSubDirs(t, "connection-config-dir")
 				tst.arguments = append(tst.arguments, fmt.Sprintf("--connection-config=%s", configPath))
 			},
-			hasErrorContains: "should be regular file",
+			hasErrorContains: "should be a file not dir",
 		},
 
 		{
@@ -875,17 +914,47 @@ sshBastionPassword: "not_secure_password_bastion"
 				"--force-no-private-keys",
 			},
 
-			hasErrorContains: "No auth methods configured. Please pass --ssh-agent-private-keys and/or --ask-become-pass or --force-no-private-keys and --ask-become-pass",
+			hasErrorContains: "No auth methods configured. Please pass --ssh-agent-private-keys and/or --ask-become-pass or --force-no-private-keys with --ask-become-pass or --force-no-private-keys with --use-agent-with-no-private-keys",
 
 			privateKeyExtractor: defaultPrivateKeyExtractor(currentHomeDir),
 		},
+
+		{
+			name: "force no private keys with use agent no sock env",
+
+			arguments: []string{
+				"--force-no-private-keys",
+				"--use-agent-with-no-private-keys",
+			},
+
+			hasErrorContains: "pass empty path for auth socket from env SSH_AUTH_SOCK",
+
+			privateKeyExtractor: defaultPrivateKeyExtractor(currentHomeDir),
+		},
+
+		{
+			name: "force no private keys with use agent incorrect sock env",
+
+			arguments: []string{
+				"--force-no-private-keys",
+				"--use-agent-with-no-private-keys",
+			},
+
+			hasErrorContains: "auth socket from env SSH_AUTH_SOCK path",
+
+			privateKeyExtractor: defaultPrivateKeyExtractor(currentHomeDir),
+
+			before: func(t *testing.T, ts *test, logger log.Logger) {
+				p := ts.test.MustMkSubDirs(t, "auth_sock")
+				ts.test.WithAuthSock(p)
+			},
+		},
 	}
 
 	for _, testCase := range testCases {
 		t.Run(testCase.name, func(t *testing.T) {
 			tst := tests.ShouldNewTest(t, testCase.name)
-			sett := tst.Settings()
-			logger := sett.Logger()
+			logger := tst.Settings().Logger()
 
 			testCase.test = tst
@@ -896,6 +965,8 @@ sshBastionPassword: "not_secure_password_bastion"
 				testCase.before(t, &testCase, logger)
 			}
 
+			sett := tst.Settings()
+
 			parser := NewFlagsParser(sett)
 			parser.WithEnvsPrefix(testCase.envsPrefix)
@@ -1138,7 +1209,7 @@ func TestParseFlagsAndExtractConfigNoArgs(t *testing.T) {
 
 func TestParseFlagsHelp(t *testing.T) {
 	tests.AssertParseFlagsHelp(t, tests.AssertParseFlagsHelpParams{
-		ExpectedFlags: 14,
+		ExpectedFlags: 15,
 		Name:          "ssh-flags",
 		Provider: func(sett settings.Settings, envsPrefix string) tests.TestFlagsParser {
 			parser := NewFlagsParser(sett)
diff --git a/pkg/ssh/gossh/client.go b/pkg/ssh/gossh/client.go
index 4a40b8d..6f6688a 100644
--- a/pkg/ssh/gossh/client.go
+++ b/pkg/ssh/gossh/client.go
@@ -109,6 +109,8 @@ type Client struct {
 
 	silent  bool
 	stopped bool
+
+	id string
 }
 
 func (s *Client) WithLoopsParams(p ClientLoopsParams) *Client {
@@ -137,14 +139,14 @@ func (s *Client) Command(name string, arg ...string) connection.Command {
 
 // KubeProxy is used to start kubectl proxy and create a tunnel from local port to proxy port
 func (s *Client) KubeProxy() connection.KubeProxy {
-	p := NewKubeProxy(s, s.sessionClient)
+	p := NewKubeProxy(s)
 	s.kubeProxies = append(s.kubeProxies, p)
 	return p
 }
 
 // File is used to upload and download files and directories
 func (s *Client) File() connection.File {
-	return NewSSHFile(s.settings, s.sshClient)
+	return NewSSHFile(s.settings, s)
 }
 
 // UploadScript is used to upload script and execute it on remote server
@@ -259,6 +261,11 @@ func (s *Client) IsStopped() bool {
 	return s.stopped
 }
 
+func (s *Client) WithID(id string) *Client {
+	s.id = id
+	return s
+}
+
 func (s *Client) stopAfterStartFailed(cause string, err error) error {
 	s.stopAllAndLogErrors(cause)
 	return err
@@ -538,7 +545,7 @@ func (s *Client) authMethods(password string) ([]gossh.AuthMethod, error) {
 	}
 
 	if len(authMethods) == 0 {
-		return nil, fmt.Errorf("Private keys or SSH_AUTH_SOCK environment variable or become password should passed")
+		return nil, fmt.Errorf("Private keys or %s environment variable or become password should be passed", settings.SSHAgentAuthSockEnv)
 	}
 
 	return authMethods, nil
diff --git a/pkg/ssh/gossh/command.go b/pkg/ssh/gossh/command.go
index cd07e70..bd822a0 100644
--- a/pkg/ssh/gossh/command.go
+++ b/pkg/ssh/gossh/command.go
@@ -101,7 +101,10 @@ func NewSSHCommand(client *Client, name string, arg ...string) *SSHCommand {
 	}
 
 	// todo move new session to Start()
-	session, _ := client.NewSSHSession()
+	session, err := client.NewSSHSession()
+	if err != nil {
+		client.settings.Logger().DebugF("Cannot create new SSH session for command '%s': %v", name, err)
+	}
 
 	return &SSHCommand{
 		// Executor: process.NewDefaultExecutor(sess.Run(cmd)),
diff --git a/pkg/ssh/gossh/common_test.go b/pkg/ssh/gossh/common_test.go
index 623beb4..53c8187 100644
--- a/pkg/ssh/gossh/common_test.go
+++ b/pkg/ssh/gossh/common_test.go
@@ -18,8 +18,6 @@ import (
 	"context"
 	"fmt"
 	"net"
-	"regexp"
-	"strings"
 	"testing"
 	"time"
@@ -48,6 +46,12 @@ func assertFilesViaRemoteRun(t *testing.T, sshClient *Client, cmd string, expect
 
 func startContainerAndClientWithContainer(t *testing.T, test *tests.Test, opts ...tests.TestContainerWrapperSettingsOpts) (*Client, *tests.TestContainerWrapper) {
 	container := tests.NewTestContainerWrapper(t, test, opts...)
+	sshClient := startClient(t, test, container)
+
+	return sshClient, container
+}
+
+func startClient(t *testing.T, test *tests.Test, container *tests.TestContainerWrapper) *Client {
 	sess := tests.Session(container)
 	keys := container.AgentPrivateKeys()
@@ -71,7 +75,7 @@ func startContainerAndClientWithContainer(t *testing.T, test *tests.Test, opts .
 
 	registerStopClient(t, sshClient)
 
-	return sshClient, container
+	return sshClient
 }
 
 func startContainerAndClient(t *testing.T, test *tests.Test) *Client {
@@ -103,47 +107,18 @@ func registerStopTunnel(t *testing.T, tunnel *Tunnel) {
 func startContainerAndClientAndKind(t *testing.T, test *tests.Test, opts ...tests.TestContainerWrapperSettingsOpts) (*Client, *tests.TestContainerWrapper) {
 	sshClient, container := startContainerAndClientWithContainer(t, test, opts...)
-	err := tests.CreateKINDCluster()
-	require.NoError(t, err)
-
-	t.Cleanup(func() {
-		_ = tests.DeleteKindCluster()
+	kindCluster := tests.CreateKINDCluster(t, &tests.KINDClusterCreateParams{
+		Test:        test,
+		ClusterName: "kube-proxy",
+		Containers: []*tests.SSHContainersForKind{
+			{
+				Client:    sshClient,
+				Container: container,
+			},
+		},
 	})
 
-	err = container.Container.DockerNetworkConnect(false, "kind")
-	require.NoError(t, err)
-
-	ip, err := tests.GetKINDControlPlaneIP()
-	require.NoError(t, err)
-	ip = strings.TrimSpace(ip)
-
-	kubeconfig, err := tests.GetKINDKubeconfig()
-	require.NoError(t, err)
-
-	re := regexp.MustCompile("127[.]0[.]0[.]1:[0-9]{4,5}")
-	newKubeconfig := re.ReplaceAllString(kubeconfig, ip+":6443")
-
-	err = container.Container.CreateDirectory("/config/.kube")
-	require.NoError(t, err)
-
-	// TODO revome it. w/o sleep file upload failed
-	time.Sleep(30 * time.Second)
-
-	config := test.MustCreateTmpFile(t, newKubeconfig, false, "config")
-	file := sshClient.File()
-	err = retry.NewLoop("uploading kubeconfig", 20, 3*time.Second).Run(func() error {
-		return file.Upload(context.Background(), config, ".kube/config")
-	})
-
-	require.NoError(t, err)
-
-	err = container.Container.DownloadKubectl("v1.35.0")
-	require.NoError(t, err)
-
-	err = container.Container.CreateDirectory("/etc/kubernetes/")
-	require.NoError(t, err)
-	err = container.Container.ExecToContainer("symlink of kubeconfig", "ln", "-s", "/config/.kube/config", "/etc/kubernetes/admin.conf")
-	require.NoError(t, err)
+	kindCluster.RegisterCleanup(t)
 
 	return sshClient, container
 }
diff --git a/pkg/ssh/gossh/file.go b/pkg/ssh/gossh/file.go
index 25e5dcf..e332acb 100644
--- a/pkg/ssh/gossh/file.go
+++ b/pkg/ssh/gossh/file.go
@@ -41,10 +41,10 @@ var (
 
 type SSHFile struct {
 	settings  settings.Settings
-	sshClient *gossh.Client
+	sshClient *Client
 }
 
-func NewSSHFile(sett settings.Settings, client *gossh.Client) *SSHFile {
+func NewSSHFile(sett settings.Settings, client *Client) *SSHFile {
 	return &SSHFile{
 		sshClient: client,
 		settings:  sett,
@@ -59,11 +59,12 @@ func (f *SSHFile) Upload(ctx context.Context, srcPath, remotePath string) error
 		return fmt.Errorf("failed to open local file: %w", err)
 	}
 
-	session, err := f.sshClient.NewSession()
+	session, cleanup, err := f.newSession()
 	if err != nil {
 		return err
 	}
-	defer session.Close()
+
+	defer cleanup()
 
 	if fType != "DIR" {
 		localFile, err := os.Open(srcPath)
@@ -72,7 +73,7 @@ func (f *SSHFile) Upload(ctx context.Context, srcPath, remotePath string) error
 		}
 		defer localFile.Close()
 
-		rType, err := getRemoteFileStat(f.sshClient, remotePath, logger)
+		rType, err := f.getRemoteFileStat(remotePath, logger)
 		if err != nil {
 			if !strings.ContainsAny(err.Error(), "No such file or directory") {
 				return err
@@ -131,7 +132,7 @@ func (f *SSHFile) UploadBytes(ctx context.Context, data []byte, remotePath strin
 func (f *SSHFile) Download(ctx context.Context, remotePath, dstPath string) error {
 	logger := f.settings.Logger()
 
-	fType, err := getRemoteFileStat(f.sshClient, remotePath, logger)
+	fType, err := f.getRemoteFileStat(remotePath, logger)
 	if err != nil {
 		return err
 	}
@@ -152,12 +153,12 @@ func (f *SSHFile) Download(ctx context.Context, remotePath, dstPath string) erro
 			return fmt.Errorf("failed to open local file: %w", err)
 		}
 		defer localFile.Close()
-		if err := CopyFromRemote(ctx, localFile, remotePath, f.sshClient); err != nil {
+		if err := f.copyFromRemote(ctx, localFile, remotePath); err != nil {
 			return fmt.Errorf("failed to copy file from remote host: %w", err)
 		}
 	} else {
 		// recursive copy logic
-		filesString, err := getRemoteFilesList(f.sshClient, remotePath)
+		filesString, err := f.getRemoteFilesList(remotePath)
 		if err != nil {
 			return err
 		}
@@ -210,16 +211,30 @@ func (f *SSHFile) DownloadBytes(ctx context.Context, remotePath string) ([]byte,
 	return data, nil
 }
 
-func getRemoteFileStat(client *gossh.Client, remoteFilePath string, logger log.Logger) (string, error) {
+func (f *SSHFile) newSession() (*gossh.Session, func(), error) {
+	session, err := f.sshClient.NewSSHSession()
+	if err != nil {
+		return nil, nil, err
+	}
+
+	cleanup := func() {
+		f.sshClient.UnregisterSession(session)
+		_ = session.Close()
+	}
+
+	return session, cleanup, nil
+}
+
+func (f *SSHFile) getRemoteFileStat(remoteFilePath string, logger log.Logger) (string, error) {
 	if remoteFilePath == "." {
 		return "DIR", nil
 	}
 
-	session, err := client.NewSession()
+	session, cleanup, err := f.newSession()
 	if err != nil {
 		return "", fmt.Errorf("failed to create session: %w", err)
 	}
-	defer session.Close()
+	defer cleanup()
 
 	command := fmt.Sprint("LC_ALL=en_US.utf8 stat -c %F " + remoteFilePath)
 	output, err := session.CombinedOutput(command)
@@ -237,12 +252,12 @@ func getRemoteFileStat(client *gossh.Client, remoteFilePath string, logger log.L
 	return "", err
 }
 
-func getRemoteFilesList(client *gossh.Client, remoteFilePath string) (string, error) {
-	session, err := client.NewSession()
+func (f *SSHFile) getRemoteFilesList(remoteFilePath string) (string, error) {
+	session, cleanup, err := f.newSession()
 	if err != nil {
 		return "", fmt.Errorf("failed to create session: %w", err)
 	}
-	defer session.Close()
+	defer cleanup()
 
 	command := fmt.Sprint("ls " + remoteFilePath)
 	output, err := session.CombinedOutput(command)
@@ -411,12 +426,12 @@ func wait(wg *sync.WaitGroup, ctx context.Context) error {
 	}
 }
 
-func CopyFromRemote(ctx context.Context, file *os.File, remotePath string, sshClient *gossh.Client) error {
-	session, err := sshClient.NewSession()
+func (f *SSHFile) copyFromRemote(ctx context.Context, file *os.File, remotePath string) error {
+	session, cleanup, err := f.newSession()
 	if err != nil {
 		return fmt.Errorf("Error creating ssh session in copy from remote: %v", err)
 	}
-	defer session.Close()
+	defer cleanup()
 
 	wg := sync.WaitGroup{}
 	errCh := make(chan error, 4)
diff --git a/pkg/ssh/gossh/kube-proxy.go b/pkg/ssh/gossh/kube-proxy.go
deleted file mode 100644
index b15e433..0000000
--- a/pkg/ssh/gossh/kube-proxy.go
+++ /dev/null
@@ -1,435 +0,0 @@
-// Copyright 2025 Flant JSC
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-//	http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-package gossh
-
-import (
-	"context"
-	"fmt"
-	"math/rand"
-	"os"
-	"regexp"
-	"strconv"
-	"time"
-
-	connection "github.com/deckhouse/lib-connection/pkg"
-	"github.com/deckhouse/lib-connection/pkg/ssh/session"
-)
-
-var (
-	_ connection.KubeProxy = &KubeProxy{}
-)
-
-const DefaultLocalAPIPort = 22322
-
-type KubeProxy struct {
-	Session   *session.Session
-	sshClient *Client
-
-	KubeProxyPort string
-	LocalPort     string
-
-	proxy  *SSHCommand
-	tunnel *Tunnel
-
-	stop      bool
-	port      string
-	localPort int
-
-	healthMonitorsByStartID map[int]chan struct{}
-}
-
-func NewKubeProxy(client *Client, sess *session.Session) *KubeProxy {
-	return &KubeProxy{
-		sshClient:               client,
-		Session:                 sess,
-		port:                    "0",
-		localPort:               DefaultLocalAPIPort,
-		healthMonitorsByStartID: make(map[int]chan struct{}),
-	}
-}
-
-func (k *KubeProxy) Start(useLocalPort int) (string, error) {
-	startID := rand.Int()
-
-	logger := k.sshClient.settings.Logger()
-
-	logger.DebugF("Kube-proxy start id=[%d]; port:%d", startID, useLocalPort)
-
-	success := false
-	defer func() {
-		k.stop = false
-		if !success {
-			logger.DebugF("[%d] Kube-proxy was not started. Try to clear all", startID)
-			k.Stop(startID)
-		}
-		logger.DebugF("[%d] Kube-proxy starting was finished", startID)
-	}()
-
-	proxyCommandErrorCh := make(chan error, 1)
-	var proxy *SSHCommand
-	var port string
-	var err error
-	for {
-		proxy, port, err = k.runKubeProxy(proxyCommandErrorCh, startID)
-		if err != nil {
-			logger.DebugF("[%d] Got error from runKubeProxy func: %v\n", startID, err)
-			return "", err
-		}
-
-		k.stop = false
-		portNum, err := strconv.Atoi(port)
-		if err != nil {
-			continue
-		}
-		if portNum > 1024 {
-			break
-		}
-		logger.DebugF("Proxy run on privileged port %s and will be stopped and restarted\n", port)
-		k.Stop(startID)
-	}
-
-	logger.DebugF("[%d] Proxy was started successfully\n", startID)
-
-	k.proxy = proxy
-	k.port = port
-
-	tunnelErrorCh := make(chan error)
-	tun, localPort, lastError := k.upTunnel(port, useLocalPort, tunnelErrorCh, startID)
-	if lastError != nil {
-		logger.DebugF("[%d] Got error from upTunnel func: %v\n", startID, err)
-		return "", fmt.Errorf("tunnel up error: max retries reached, last error: %w", lastError)
-	}
-
-	k.tunnel = tun
-	k.localPort = localPort
-
-	k.healthMonitorsByStartID[startID] = make(chan struct{}, 1)
-	go k.healthMonitor(
-		proxyCommandErrorCh,
-		tunnelErrorCh,
-		k.healthMonitorsByStartID[startID],
-		startID,
-	)
-
-	success = true
-
-	return fmt.Sprintf("%d", k.localPort), nil
-}
-
-func (k *KubeProxy) StopAll() {
-	for startID := range k.healthMonitorsByStartID {
-		k.Stop(startID)
-	}
-}
-
-func (k *KubeProxy) Stop(startID int) {
-	if k == nil {
-		return
-	}
-
-	logger := k.sshClient.settings.Logger()
-
-	if k.stop {
-		logger.DebugF("[%d] Stop kube-proxy: kube proxy already stopped. Skip.\n", startID)
-		return
-	}
-
-	if k.healthMonitorsByStartID[startID] != nil {
-		k.healthMonitorsByStartID[startID] <- struct{}{}
-		delete(k.healthMonitorsByStartID, startID)
-	}
-
-	if k.proxy != nil {
-		logger.DebugF("[%d] Stop proxy command\n", startID)
-		k.proxy.Stop()
-		logger.DebugF("[%d] Proxy command stopped\n", startID)
-		k.proxy = nil
-		k.port = "0"
-	}
-	if k.tunnel != nil {
-		logger.DebugF("[%d] Stop tunnel\n", startID)
-		k.tunnel.Stop()
-		logger.DebugF("[%d] Tunnel stopped\n", startID)
-		k.tunnel = nil
-	}
-	k.stop = true
-}
-
-func (k *KubeProxy) tryToRestartFully(startID int) {
-	logger := k.sshClient.settings.Logger()
-	logger.DebugF("[%d] Try restart kubeproxy fully\n", startID)
-	for {
-		k.Stop(startID)
-
-		_, err := k.Start(k.localPort)
-
-		if err == nil {
-			k.stop = false
-			logger.DebugF("[%d] Proxy was restarted successfully\n", startID)
-			return
-		}
-
-		const sleepTimeout = 5
-
-		// need warn for human
-		logger.WarnF(
-			"Proxy was not restarted: %v. Sleep %d seconds before next attempt.\n",
-			err,
-			sleepTimeout,
-		)
-		time.Sleep(sleepTimeout * time.Second)
-
-		k.Session.ChoiceNewHost()
-		logger.DebugF("[%d] New host selected %v\n", startID, k.Session.Host())
-	}
-}
-
-func (k *KubeProxy) proxyCMD(startID int) *SSHCommand {
-	kubectlProxy := fmt.Sprintf(
-		// --disable-filter is needed to exec into etcd pods
-		"kubectl proxy --as=dhctl --as-group=system:masters --port=%s --kubeconfig /etc/kubernetes/admin.conf --disable-filter",
-		k.port,
-	)
-	if v := os.Getenv("KUBE_PROXY_ACCEPT_HOSTS"); v != "" {
-		kubectlProxy += fmt.Sprintf(" --accept-hosts='%s'", v)
-	}
-	command := fmt.Sprintf("PATH=$PATH:%s/; %s", k.sshClient.settings.NodeBinPath(), kubectlProxy)
-
-	k.sshClient.settings.Logger().DebugF("[%d] Proxy command for start: %s\n", startID, command)
-
-	cmd := NewSSHCommand(k.sshClient, command)
-	cmd.Sudo(k.ctx())
-	return cmd
-}
-
-func (k *KubeProxy) ctx() context.Context {
-	return context.Background()
-}
-
-func (k *KubeProxy) healthMonitor(
-	proxyErrorCh, tunnelErrorCh chan error,
-	stopCh chan struct{},
-	startID int,
-) {
-	logger := k.sshClient.settings.Logger()
-
-	defer logger.DebugF("[%d] Kubeproxy health monitor stopped\n", startID)
-	logger.DebugF("[%d] Kubeproxy health monitor started\n", startID)
-
-	proxyErrorCount := 0
-	for {
-		logger.DebugF("[%d] Kubeproxy Monitor step\n", startID)
-		select {
-		case err := <-proxyErrorCh:
-			logger.DebugF("[%d] Proxy failed with error %v\n", startID, err)
-			// if proxy crushed, we need to restart kube-proxy fully
-			// with proxy and tunnel (tunnel depends on proxy)
-			k.tryToRestartFully(startID)
-			// if we restart proxy fully
-			// this monitor must be finished because new monitor was started
-			return
-
-		case err := <-tunnelErrorCh:
-			logger.DebugF("[%d] Tunnel failed %v. Stopping previous tunnel\n", startID, err)
-			// we need fully stop tunnel because
-			k.tunnel.Stop()
-
-			logger.DebugF("[%d] Tunnel stopped before restart. Starting new tunnel...\n", startID)
-
-			if proxyErrorCount < 3 {
-				k.tunnel, _, err = k.upTunnel(k.port, k.localPort, tunnelErrorCh, startID)
-				if err != nil {
-					logger.DebugF("[%d] Tunnel was not up: %v. Try to restart fully\n", startID, err)
-					k.tryToRestartFully(startID)
-					return
-				}
-				proxyErrorCount++
-			} else {
-				k.tryToRestartFully(startID)
-				return
-			}
-
-			logger.DebugF("[%d] Tunnel re up successfully\n")
-
-		case <-stopCh:
-			logger.DebugF("[%d] Kubeproxy monitor stopped")
-			return
-		}
-	}
-}
-
-func (k *KubeProxy) upTunnel(
-	kubeProxyPort string,
-	useLocalPort int,
-	tunnelErrorCh chan error,
-	startID int,
-) (*Tunnel, int, error) {
-	logger := k.sshClient.settings.Logger()
-
-	logger.DebugF(
-		"[%d] Starting up tunnel with proxy port %s and local port %d\n",
-		startID,
-		kubeProxyPort,
-		useLocalPort,
-	)
-
-	rewriteLocalPort := false
-	localPort := useLocalPort
-
-	if useLocalPort < 1 {
-		logger.DebugF(
-			"[%d] Incorrect local port %d use default %d\n",
-			startID,
-			useLocalPort,
-			DefaultLocalAPIPort,
-		)
-		localPort = DefaultLocalAPIPort
-		rewriteLocalPort = true
-	}
-
-	maxRetries := 5
-	retries := 0
-	var lastError error
-	var tun *Tunnel
-	for {
-		logger.DebugF("[%d] Start %d iteration for up tunnel\n", startID, retries)
-
-		if k.proxy.WaitError() != nil {
-			lastError = fmt.Errorf("proxy was failed while restart tunnel")
-			break
-		}
-
-		// try to start tunnel from localPort to proxy port
-		var tunnelAddress string
-		if v := os.Getenv("KUBE_PROXY_BIND_ADDR"); v != "" {
-			tunnelAddress = fmt.Sprintf("%s:%d:localhost:%s", v, localPort, kubeProxyPort)
-		} else {
-			tunnelAddress = fmt.Sprintf("%s:%s:localhost:%d", "127.0.0.1", kubeProxyPort, localPort)
-		}
-
-		logger.DebugF("[%d] Try up tunnel on %v\n", startID, tunnelAddress)
-		tun = NewTunnel(k.sshClient, tunnelAddress)
-		err := tun.Up(k.ctx())
-		if err != nil {
-			logger.DebugF("[%d] Start tunnel was failed. Cleaning...\n", startID)
-			tun.Stop()
-			lastError = fmt.Errorf("tunnel '%s': %w", tunnelAddress, err)
-			logger.DebugF("[%d] Start tunnel was failed. Error: %v\n", startID, lastError)
-			if rewriteLocalPort {
-				localPort++
-				logger.DebugF("[%d] New local port %d\n", startID, localPort)
-			}
-
-			retries++
-			if retries >= maxRetries {
-				logger.DebugF("[%d] Last iteration finished\n", startID)
-				tun = nil
-				break
-			}
-		} else {
-			logger.DebugF("[%d] Tunnel was started. Starting health monitor\n", startID)
-			go tun.HealthMonitor(tunnelErrorCh)
-			lastError = nil
-			break
-		}
-	}
-
-	dbgMsg := fmt.Sprintf("Tunnel up on local port %d", localPort)
-	if lastError != nil {
-		dbgMsg = fmt.Sprintf("Tunnel was not up: %v", lastError)
-	}
-
-	logger.DebugF("[%d] %s\n", startID, dbgMsg)
-
-	return tun, localPort, lastError
-}
-
-func (k *KubeProxy) runKubeProxy(
-	waitCh chan error,
-	startID int,
-) (*SSHCommand, string, error) {
-	logger := k.sshClient.settings.Logger()
-
-	logger.DebugF("[%d] Begin starting proxy\n", startID)
-	proxy := k.proxyCMD(startID)
-
-	port := ""
-	portReady := make(chan struct{}, 1)
-	portRe := regexp.MustCompile(`Starting to serve on .*?:(\d+)`)
-
-	proxy.WithStdoutHandler(func(line string) {
-		m := portRe.FindStringSubmatch(line)
-		if len(m) == 2 && m[1] != "" {
-			port = m[1]
-			logger.DebugF("Got proxy port = %s on host %s\n", port, k.Session.Host())
-			portReady <- struct{}{}
-		}
-	})
-
-	onStart := make(chan struct{}, 1)
-	proxy.OnCommandStart(func() {
-		logger.DebugF("[%d] Command started\n", startID)
-		onStart <- struct{}{}
-	})
-
-	proxy.WithWaitHandler(func(err error) {
-		logger.DebugF("[%d] Wait error: %v\n", startID, err)
-		waitCh <- err
-	})
-
-	logger.DebugF("[%d] Start proxy command\n", startID)
-	err := proxy.Start()
-	if err != nil {
-		logger.DebugF("[%d] Start proxy command error: %v\n", startID, err)
-		return nil, "", fmt.Errorf("start kubectl proxy: %w", err)
-	}
-
-	logger.DebugF("[%d] Proxy command was started\n", startID)
-
-	returnWaitErr := func(err error) error {
-		logger.DebugF("[%d] Proxy command waiting error: %v\n", startID, err)
-		template := `Proxy exited suddenly: %s%s
-Status: %w` - return fmt.Errorf(template, string(proxy.StdoutBytes()), string(proxy.StderrBytes()), err) - } - - // we need to check that kubeproxy was started - // that checking wait string pattern in output - // but we may receive error and this error will get from waitCh - select { - case <-onStart: - case err := <-waitCh: - return nil, "", returnWaitErr(err) - } - - // Wait for proxy startup - t := time.NewTicker(20 * time.Second) - defer t.Stop() - select { - case e := <-waitCh: - return nil, "", returnWaitErr(e) - case <-t.C: - logger.DebugF("[%d] Starting proxy command timeout\n", startID) - return nil, "", fmt.Errorf("timeout waiting for api proxy port") - case <-portReady: - if port == "" { - logger.DebugF("[%d] Starting proxy command: empty port\n", startID) - return nil, "", fmt.Errorf("got empty port from kubectl proxy") - } - } - - logger.DebugF("[%d] Proxy process started with port: %s\n", startID, port) - return proxy, port, nil -} diff --git a/pkg/ssh/gossh/kube-proxy_test.go b/pkg/ssh/gossh/kube-proxy_test.go deleted file mode 100644 index 6ab86b8..0000000 --- a/pkg/ssh/gossh/kube-proxy_test.go +++ /dev/null @@ -1,89 +0,0 @@ -// Copyright 2025 Flant JSC -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
-
-package gossh
-
-import (
-	"context"
-	"fmt"
-	"testing"
-	"time"
-
-	"github.com/deckhouse/lib-dhctl/pkg/retry"
-	"github.com/stretchr/testify/require"
-
-	"github.com/deckhouse/lib-connection/pkg/tests"
-)
-
-func TestKubeProxy(t *testing.T) {
-	test := tests.ShouldNewIntegrationTest(t, "TestKubeProxy")
-
-	sshClient, container := startContainerAndClientAndKind(t, test)
-
-	cmd := NewSSHCommand(sshClient, "kubectl", "get", "no")
-	out, err := cmd.CombinedOutput(context.Background())
-	test.Logger.InfoF("kubectl get no\n%s", out)
-	require.NoError(t, err)
-
-	t.Run("Kubeproxy with HealthMonitor", func(t *testing.T) {
-		kp := sshClient.KubeProxy()
-		port, err := kp.Start(-1)
-		require.NoError(t, err)
-
-		checkKubeProxy(t, test, port, false)
-
-		// restart container case
-		restartSleep := 5 * time.Second
-		err = container.Container.SoftRestart(true, restartSleep)
-		require.NoError(t, err)
-
-		// wait for ssh client/tunnel/kubeproxy restart
-		time.Sleep(20 * time.Second)
-		checkKubeProxy(t, test, port, false)
-
-		// network issue case
-		err = container.Container.FailAndUpConnection(restartSleep)
-		require.NoError(t, err)
-
-		// wait for ssh client/tunnel/kubeproxy restart
-		time.Sleep(20 * time.Second)
-		checkKubeProxy(t, test, port, false)
-
-		kp.StopAll()
-	})
-}
-
-func checkKubeProxy(t *testing.T, test *tests.Test, localServerPort string, wantError bool) {
-	url := fmt.Sprintf("http://127.0.0.1:%s/api/v1/nodes", localServerPort)
-
-	requestLoop := retry.NewEmptyParams(
-		retry.WithName("Check kube proxy available by %s", url),
-		retry.WithAttempts(10),
-		retry.WithWait(500*time.Millisecond),
-		retry.WithLogger(test.Logger),
-	)
-
-	_, err := tests.DoGetRequest(
-		url,
-		requestLoop,
-		tests.NewPrefixLogger(test.Logger).WithPrefix(test.FullName()),
-	)
-
-	assert := require.NoError
-	if wantError {
-		assert = require.Error
-	}
-
-	assert(t, err, "check local tunnel. Want error %v", wantError)
-}
diff --git a/pkg/ssh/gossh/kube_proxy.go b/pkg/ssh/gossh/kube_proxy.go
new file mode 100644
index 0000000..4150941
--- /dev/null
+++ b/pkg/ssh/gossh/kube_proxy.go
@@ -0,0 +1,86 @@
+// Copyright 2025 Flant JSC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package gossh
+
+import (
+	"context"
+	"fmt"
+
+	connection "github.com/deckhouse/lib-connection/pkg"
+	"github.com/deckhouse/lib-connection/pkg/ssh/base/kubeproxy"
+)
+
+var (
+	_ connection.KubeProxy = &KubeProxy{}
+)
+
+type KubeProxy struct {
+	*kubeproxy.BaseKubeProxy
+}
+
+func NewKubeProxy(client *Client) *KubeProxy {
+	runner := newKubeProxyRunner(context.Background(), client)
+	return &KubeProxy{
+		BaseKubeProxy: kubeproxy.NewBaseKubeProxy(runner, client.settings, client.Session()),
+	}
+}
+
+type kubeProxyRunner struct {
+	client *Client
+	ctx    context.Context
+}
+
+func newKubeProxyRunner(ctx context.Context, client *Client) *kubeProxyRunner {
+	return &kubeProxyRunner{
+		client: client,
+		ctx:    ctx,
+	}
+}
+
+func (r *kubeProxyRunner) StartCommand(params kubeproxy.StartCommandParams) (connection.KubeProxyCommand, error) {
+	cmd := NewSSHCommand(r.client, params.Cmd)
+	cmd.Sudo(r.ctx)
+
+	cmd.OnCommandStart(params.OnStart)
+	cmd.WithStdoutHandler(params.StdoutHandler)
+	cmd.WithWaitHandler(params.WaitHandler)
+
+	if err := cmd.Start(); err != nil {
+		return nil, err
+	}
+
+	return cmd, nil
+}
+
+func (r *kubeProxyRunner) UpTunnel(localPort int, kubeProxyPort string) (connection.Tunnel, string, error) {
+	address := kubeproxy.ExtractTunnelAddressFromEnv(localPort, kubeProxyPort)
+	if address == "" {
+		address = fmt.Sprintf("127.0.0.1:%s:127.0.0.1:%d", kubeProxyPort, localPort)
+	}
+
+	r.client.settings.Logger().DebugF("Try up tunnel for kube proxy on %s", address)
+
+	tun := NewTunnel(r.client, address)
+
+	if err := tun.Up(r.ctx); err != nil {
+		return nil, address, err
+	}
+
+	return tun, address, nil
+}
+
+func (r *kubeProxyRunner) ClientID() string {
+	return r.client.id
+}
diff --git a/pkg/ssh/gossh/kube_proxy_test.go b/pkg/ssh/gossh/kube_proxy_test.go
new file mode 100644
index 0000000..92d6c18
--- /dev/null
+++ b/pkg/ssh/gossh/kube_proxy_test.go
@@ -0,0 +1,176 @@
+// Copyright 2025 Flant JSC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+package gossh
+
+import (
+	"context"
+	"fmt"
+	"testing"
+	"time"
+
+	"github.com/stretchr/testify/require"
+
+	"github.com/deckhouse/lib-connection/pkg/ssh/base/kubeproxy"
+	"github.com/deckhouse/lib-connection/pkg/tests"
+)
+
+func TestKubeProxy(t *testing.T) {
+	test := tests.ShouldNewIntegrationTest(t, "TestKubeGoProxy")
+
+	sshClient, container := prepareContainerForTestKubeProxy(t, test)
+
+	waitRestart := func(op string) {
+		sleep := 20 * time.Second
+		test.GetLogger().InfoF("Waiting %s for %s to finish", sleep.String(), op)
+		time.Sleep(sleep)
+	}
+
+	assertPort := func(t *testing.T, got string, expected int) {
+		require.Equal(t, fmt.Sprintf("%d", expected), got, "proxy should start with port %d", expected)
+	}
+
+	excludes := []int{container.LocalPort(), kubeproxy.DefaultLocalAPIPort}
+
+	portForStopProxy := tests.RandPortExclude(excludes)
+	excludes = append(excludes, portForStopProxy)
+	portForStopClient := tests.RandPortExclude(excludes)
+
+	assertProxyStoppedAndNotRestarted := func(t *testing.T, test *tests.Test) {
+		sett := test.Settings()
+
+		// stop all
+		tests.AssertLogMessagesCount(t, sett, "Proxy command stopped", 1)
+		tests.AssertLogMessagesCount(t, sett, "Tunnel stopped", 1)
+		tests.AssertLogMessagesCount(t, sett, "Kube proxy health monitor started", 1)
+		tests.AssertLogMessagesCount(t, sett, "Kube proxy health monitor stopped", 1)
+		tests.AssertLogMessagesCount(t, sett, "Got kube proxy stopped message", 1)
+
+		// not restart proxy
+		tests.AssertNoLogMessage(t, sett, "Stopping previous tunnel")
+		tests.AssertNoLogMessage(t, sett, "Tunnel failed. Stopping previous tunnel")
+		tests.AssertNoLogMessage(t, sett, "Tunnel stopped before restart. Starting new tunnel")
+		tests.AssertNoLogMessage(t, sett, "Tunnel re up successfully")
+		tests.AssertNoLogMessage(t, sett, "Try restart kube proxy fully")
+		tests.AssertNoLogMessage(t, sett, "New host selected on fully restart")
+	}
+
+	stopClient := func(client *Client) func() {
+		return func() {
+			client.Stop()
+		}
+	}
+
+	t.Run("Kube proxy with HealthMonitor", func(t *testing.T) {
+		kp := sshClient.KubeProxy()
+		port, err := kp.Start(-1)
+		require.NoError(t, err, "failed to start kube proxy")
+		assertPort(t, port, kubeproxy.DefaultLocalAPIPort)
+
+		tests.AssertKubeProxy(t, test, port, false)
+
+		sshClient.WithID("Restart container")
+
+		// restart container case
+		restartSleep := 5 * time.Second
+		test.GetLogger().InfoF("Restart container with wait %s", restartSleep.String())
+		err = container.Container.SoftRestart(true, restartSleep)
+		require.NoError(t, err, "container should restart")
+
+		// wait for ssh client/tunnel/kubeproxy restart
+		waitRestart("restart container")
+		tests.AssertKubeProxy(t, test, port, false)
+
+		sshClient.WithID("")
+
+		// network issue case
+		err = container.Container.FailAndUpConnection(restartSleep)
+		require.NoError(t, err)
+
+		// wait for ssh client/tunnel/kubeproxy restart
+		waitRestart("network issue")
+		tests.AssertKubeProxy(t, test, port, false)
+
+		kp.StopAll()
+
+		waitRestart("stop all")
+	})
+
+	t.Run("Stop kube proxy", func(t *testing.T) {
+		stopProxyTest := tests.ShouldNewIntegrationTest(t, "TestKubeGoProxyStop")
+		sshClientForStopProxy := startClient(t, stopProxyTest, container)
+
+		kp := sshClientForStopProxy.KubeProxy()
+
+		port, err := kp.Start(portForStopProxy)
+		require.NoError(t, err, "proxy should start with port %d", portForStopProxy)
+		assertPort(t, port, portForStopProxy)
+
+		tests.AssertKubeProxy(t, stopProxyTest, port, false)
+
+		kp.StopAll()
+
+		waitRestart("stop kube proxy")
+
+		tests.AssertKubeProxy(t, stopProxyTest, port, true)
+
+		stopAll := func() {
+			kp.StopAll()
+		}
+
+		require.NotPanics(t, stopAll, "second StopAll should not panic")
+
+		waitRestart("second stop all")
+
+		assertProxyStoppedAndNotRestarted(t, stopProxyTest)
+		tests.AssertLogMessagesCount(t, stopProxyTest.Settings(), "Stop kube-proxy: kube proxy already stopped. Skip", 1)
+
+		require.NotPanics(t, stopClient(sshClientForStopProxy), "stop client after stop proxy should not panic")
+	})
+
+	t.Run("Stop client", func(t *testing.T) {
+		stopClientTest := tests.ShouldNewIntegrationTest(t, "TestKubeGoProxyStopClient")
+		sshClientForStopClient := startClient(t, stopClientTest, container)
+
+		kp := sshClientForStopClient.KubeProxy()
+
+		port, err := kp.Start(portForStopClient)
+		require.NoError(t, err, "proxy should start with port %d", portForStopClient)
+		assertPort(t, port, portForStopClient)
+
+		tests.AssertKubeProxy(t, stopClientTest, port, false)
+
+		sshClientForStopClient.Stop()
+
+		waitRestart("stop client")
+
+		tests.AssertKubeProxy(t, stopClientTest, port, true)
+
+		assertProxyStoppedAndNotRestarted(t, stopClientTest)
+
+		require.NotPanics(t, stopClient(sshClientForStopClient), "second stop client should not panic")
+	})
+}
+
+func prepareContainerForTestKubeProxy(t *testing.T, test *tests.Test) (*Client, *tests.TestContainerWrapper) {
+	sshClient, container := startContainerAndClientAndKind(t, test)
+
+	test.GetLogger().InfoF("Try to check run kubectl on ssh container...")
+	cmd := NewSSHCommand(sshClient, "kubectl", "get", "no")
+	out, err := cmd.CombinedOutput(context.Background())
+	test.Logger.InfoF("kubectl get no\n%s", out)
+	require.NoError(t, err)
+
+	return sshClient, container
+}
diff --git a/pkg/ssh/gossh/tunnel.go b/pkg/ssh/gossh/tunnel.go
index dbc4cf9..4201e34 100644
--- a/pkg/ssh/gossh/tunnel.go
+++ b/pkg/ssh/gossh/tunnel.go
@@ -35,13 +35,16 @@ var (
 )
 
 type Tunnel struct {
+	globalMu sync.Mutex
+
 	sshClient *Client
 	address   string
-	tunMutex sync.Mutex
+	started bool
+
+	stopCh chan struct{}
-	started bool
-	stopCh  chan struct{}
+	tunMutex sync.RWMutex
 
 	remoteListener net.Listener
 	errorCh chan error
@@ -61,13 +64,11 @@ func (t *Tunnel) Up(ctx context.Context) error {
 }
 
 func (t *Tunnel) upNewTunnel(ctx context.Context, oldId int) (int, error) {
-	logger := t.sshClient.settings.Logger()
-
-	t.tunMutex.Lock()
-	defer t.tunMutex.Unlock()
+	t.globalMu.Lock()
+	defer t.globalMu.Unlock()
 
 	if t.started {
-		logger.DebugF("[%d] Tunnel already up\n", oldId)
+		t.debugWithID(oldId, "Tunnel already up")
 		return -1, fmt.Errorf("already up")
 	}
 
@@ -80,39 +81,37 @@ func (t *Tunnel) upNewTunnel(ctx context.Context, oldId int) (int, error) {
 
 	remoteBind, remotePort, localBind, localPort := parts[0], parts[1], parts[2], parts[3]
 
-	logger.DebugF("[%d] Remote bind: %s remote port: %s local bind: %s local port: %s\n", id, remoteBind, remotePort, localBind, localPort)
-
-	logger.DebugF("[%d] Start tunnel\n", id)
+	t.debugWithID(
+		id,
+		"Start tunnel. Remote bind: %s remote port: %s local bind: %s local port: %s",
+		remoteBind,
+		remotePort,
+		localBind,
+		localPort,
+	)
 
 	remoteAddress := net.JoinHostPort(remoteBind, remotePort)
 	localAddress := net.JoinHostPort(localBind, localPort)
 
 	listener, err := net.Listen("tcp", localAddress)
 	if err != nil {
-		return -1, errors.Wrap(err, fmt.Sprintf("failed to listen local on %s", localAddress))
+		return -1, fmt.Errorf("failed to listen local on %s: %w", localAddress, err)
 	}
 
-	tcpListener, ok := listener.(*net.TCPListener)
-	if !ok {
-		_ = listener.Close()
-		return -1, fmt.Errorf("Failed to up tunnel: got not TCPListner")
-	}
+	t.setListener(listener)
 
-	logger.DebugF("[%d] Listen remote on %s successful", id, localAddress)
-
-	logger.DebugF("[%d] Tunnel %s up. Starting accept tunnel connection", id, localAddress)
+	t.debugWithID(id, "Listen remote on %s successful. Starting monitors...", localAddress)
 
 	go t.monitorContext(ctx, id)
-	go t.acceptTunnelConnection(ctx, id, remoteAddress, tcpListener)
+	go t.acceptTunnelConnection(ctx, id, remoteAddress)
 
-	t.remoteListener = listener
 	t.started = true
 
 	return id, nil
 }
 
-func (t *Tunnel) remoteConn(ctx context.Context, remoteAddress string) (net.Conn, error) {
-	cctx, cancel := context.WithTimeout(ctx, 10*time.Second)
+func (t *Tunnel) dialRemote(ctx context.Context, remoteAddress string) (net.Conn, error) {
+	cctx, cancel := context.WithTimeout(ctx, 5*time.Second)
 	defer cancel()
 
 	remoteConn, err := t.sshClient.GetClient().DialContext(cctx, "tcp", remoteAddress)
@@ -123,19 +122,45 @@
+func (t *Tunnel) remoteConn(ctx context.Context, remoteAddress string) (net.Conn, error) {
+	// use cycle for prevent connection refuse error on start
+
+	// no use retry package because silent logger can tee logs to file
+	// it is huge for logger
+	var lastErr error
+	for i := 0; i < 3; i++ {
+		conn, err := t.dialRemote(ctx, remoteAddress)
+		if err != nil {
+			lastErr = err
+			time.Sleep(50 * time.Millisecond)
+			continue
+		}
+		return conn, nil
+	}
+
+	return nil, lastErr
+}
+
 func (t *Tunnel) monitorContext(ctx context.Context, id int) {
 	<-ctx.Done()
-	t.stop(id, true)
+	t.stop(id)
 	t.errorCh <- ctx.Err()
 }
 
-func (t *Tunnel) acceptNext(ctx context.Context, id int, remoteAddress string, listener *net.TCPListener) (net.Conn, net.Conn, error) {
+var emptyListenerErr = errors.New("empty listener")
+
+func (t *Tunnel) acceptNext(ctx context.Context, id int, remoteAddress string) (net.Conn, net.Conn, error) {
 	select {
 	case <-ctx.Done():
 		return nil, nil, ctx.Err()
 	default:
 	}
 
+	listener, err := t.getListener()
+	if err != nil {
+		return nil, nil, err
+	}
+
 	localConn, err := listener.Accept()
 	if err != nil {
@@ -154,11 +179,20 @@ func (t *Tunnel) acceptNext(ctx context.Context, id int, remoteAddress string, l
 	return localConn, remoteConn, nil
 }
 
-func (t *Tunnel) acceptTunnelConnection(ctx context.Context, id int, remoteAddress string, listener *net.TCPListener) {
+func (t *Tunnel) acceptTunnelConnection(ctx context.Context, id int, remoteAddress string) {
+	t.debugWithID(id, "Start accepting tunnel connection")
+	defer t.debugWithID(id, "Accepting tunnel connection stopped")
+
 	for {
-		// todo handle listener closed case and break cycle
-		localConn, remoteConn, err := t.acceptNext(ctx, id, remoteAddress, listener)
+		localConn, remoteConn, err := t.acceptNext(ctx, id, remoteAddress)
 		if err != nil {
+			// after stop we can get error on using closed connection; it should not send error,
+			// it is valid operation
+			if errors.Is(err, emptyListenerErr) {
+				t.debugWithID(id, "Accept tunnel connection stopped because listener set to nil")
+				return
+			}
+
 			t.errorCh <- err
 
 			if isContextError(err) {
@@ -189,10 +223,14 @@ func (t *Tunnel) acceptTunnelConnection(ctx context.Context, id int, remoteAddre
 }
 
 func (t *Tunnel) HealthMonitor(errorOutCh chan<- error) {
-	logger := t.sshClient.settings.Logger()
+	if _, err := t.getListener(); err != nil || !t.started {
+		t.debug("Call HealthMonitor. Tunnel stopped")
+		errorOutCh <- fmt.Errorf("tunnel stopped")
+		return
+	}
 
-	defer logger.DebugF("Tunnel health monitor stopped\n")
-	logger.DebugF("Tunnel health monitor started\n")
+	defer t.debug("Tunnel health monitor stopped")
+	t.debug("Tunnel health monitor started")
 
 	t.stopCh = make(chan struct{}, 1)
 
@@ -201,43 +239,42 @@
 		case err := <-t.errorCh:
 			errorOutCh <- err
 		case <-t.stopCh:
-			if !govalue.Nil(t.remoteListener) {
-				_ = t.remoteListener.Close()
-			}
 			return
 		}
 	}
 }
 
 func (t *Tunnel) Stop() {
-	t.stop(-1, true)
+	t.stop(-1)
 }
 
-func (t *Tunnel) stop(id int, full bool) {
-	logger := t.sshClient.settings.Logger()
-
-	t.tunMutex.Lock()
-	defer t.tunMutex.Unlock()
+func (t *Tunnel) stop(id int) {
+	t.globalMu.Lock()
+	defer t.globalMu.Unlock()
 
 	if !t.started {
-		logger.DebugF("[%d] Tunnel already stopped\n", id)
+		t.debugWithID(id, "Tunnel already stopped")
 		return
 	}
 
-	logger.DebugF("[%d] Stop tunnel\n", id)
-	defer logger.DebugF("[%d] End stop tunnel\n", id)
+	t.debugWithID(id, "Stop tunnel")
+	defer t.debugWithID(id, "End stop tunnel")
 
-	if full && t.stopCh != nil {
-		logger.DebugF("[%d] Stop tunnel health monitor\n", id)
+	if t.stopCh != nil {
+		t.debugWithID(id, "Stop tunnel health monitor")
 		t.stopCh <- struct{}{}
 	}
 
-	err := t.remoteListener.Close()
-	if err != nil {
-		logger.DebugF("[%d] Cannot close listener: %s\n", id, err.Error())
+	listener, err := t.getListener()
+	if err == nil {
+		t.debugWithID(id, "Close listener")
+		err := listener.Close()
+		if err != nil && !errors.Is(err, net.ErrClosed) {
+			t.debugWithID(id, "Cannot close listener: %v", err)
+		}
 	}
 
-	t.remoteListener = nil
+	t.setListener(nil)
 	t.started = false
 }
 
@@ -245,6 +282,34 @@ func (t *Tunnel) String() string {
 	return fmt.Sprintf("%s:%s", "L", t.address)
 }
 
-func (t *Tunnel) debug(format string, args ...interface{}) {
+func (t *Tunnel) setListener(l net.Listener) {
+	t.tunMutex.Lock()
+	defer t.tunMutex.Unlock()
+
+	t.remoteListener = l
+}
+
+func (t *Tunnel) getListener() (net.Listener, error) {
+	t.tunMutex.RLock()
+	defer t.tunMutex.RUnlock()
+
+	listener := t.remoteListener
+	if govalue.Nil(listener) {
+		return nil, emptyListenerErr
+	}
+
+	return listener, nil
+}
+
+func (t *Tunnel) debug(format string, args ...any) {
+	t.sshClient.settings.Logger().DebugF(format, args...)
+}
+
+func (t *Tunnel) debugWithID(id int, format string, args ...any) {
+	if id > 0 {
+		format = "[%d] " + format
+		args = append([]any{id}, args...)
+	}
+	t.sshClient.settings.Logger().DebugF(format, args...)
 }
diff --git a/pkg/ssh/gossh/tunnel_test.go b/pkg/ssh/gossh/tunnel_test.go
index da234e0..02abf6a 100644
--- a/pkg/ssh/gossh/tunnel_test.go
+++ b/pkg/ssh/gossh/tunnel_test.go
@@ -28,50 +28,9 @@ import (
 )
 
 func TestTunnel(t *testing.T) {
-	test := tests.ShouldNewIntegrationTest(t, "TestTunnel")
+	test := tests.ShouldNewIntegrationTest(t, "TestGoTunnel")
 
-	sshClient, container := startContainerAndClientWithContainer(t, test)
-	sshClient.WithLoopsParams(ClientLoopsParams{
-		NewSession: retry.NewEmptyParams(
-			retry.WithAttempts(5),
-			retry.WithWait(250*time.Millisecond),
-		),
-	})
-
-	// we don't have /opt/deckhouse in the container, so we should create it before start any UploadScript with sudo
-	err := container.Container.CreateDeckhouseDirs()
-	require.NoError(t, err, "could not create deckhouse dirs")
-
-	remoteServerPort := tests.RandPortExclude([]int{container.Container.RemotePort()})
-	remoteServerScript := fmt.Sprintf(`#!/bin/bash
-while true ; do {
-	echo -ne "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\n" ;
-	echo -n "OK";
-} | nc -l -p %d ;
-done`, remoteServerPort)
-
-	const remoteServerFile = "/tmp/server.sh"
-	localServerFile := test.MustCreateTmpFile(t, remoteServerScript, true, "remote_server", "server.sh")
-
-	err = sshClient.File().Upload(context.TODO(), localServerFile, remoteServerFile)
-	require.NoError(t, err)
-
-	runRemoteServerSession, err := sshClient.NewSSHSession()
-	require.NoError(t, err)
-
-	t.Cleanup(func() {
-		err := runRemoteServerSession.Signal(ssh.SIGKILL)
-		if err != nil {
-			test.Logger.ErrorF("error killing remote server: %v", err)
-		}
-		err = runRemoteServerSession.Close()
-		if err != nil {
-			test.Logger.ErrorF("error closing remote server session: %v", err)
-		}
-	})
-
-	err = runRemoteServerSession.Start(remoteServerFile)
-	require.NoError(t, err, "error starting remote server")
+	sshClient, container, remoteServerPort := prepareContainerForTunnelTest(t, test)
 
 	localsReservedPorts := []int{container.LocalPort()}
@@ -116,7 +75,7 @@ done`, remoteServerPort)
 			ctx := context.TODO()
 
 			tun := NewTunnel(sshClient, c.address)
-			err = tun.Up(ctx)
+			err := tun.Up(ctx)
 			registerStopTunnel(t, tun)
 
 			if !c.wantErr {
@@ -136,7 +95,8 @@ done`, remoteServerPort)
 	t.Run("Health monitor", func(t *testing.T) {
 		upTunnelWithMonitor := func(t *testing.T, ctx context.Context, address string) chan error {
 			tun := NewTunnel(sshClient, address)
-			err = tun.Up(ctx)
+			err := tun.Up(ctx)
+			require.NoError(t, err, "failed to up tunnel")
 			registerStopTunnel(t, tun)
 
 			// starting HealthMonitor
@@ -182,6 +142,85 @@ done`, remoteServerPort)
 	})
 }
 
+func TestTunnelStop(t *testing.T) {
+	test := tests.ShouldNewIntegrationTest(t, "TestGoTunnelStop", tests.TestWithDebug(false))
+
+	sshClient, container, remoteServerPort := prepareContainerForTunnelTest(t, test)
+
+	localPort := container.LocalPort()
+
+	localServerPort := tests.RandPortExclude([]int{localPort})
+
+	tun := NewTunnel(sshClient, tunnelAddressString(localServerPort, remoteServerPort))
+	err := tun.Up(context.TODO())
+	require.NoError(t, err, "failed to up tunnel")
+
+	// starting HealthMonitor
+	errChan := make(chan error, 10)
+	go tun.HealthMonitor(errChan)
+
+	checkLocalTunnel(t, test, localServerPort, false)
+
+	waitAfter := func(op string) {
+		sleep := 3 * time.Second
+		test.GetLogger().InfoF("Waiting %s to perform operation after %s", sleep.String(), op)
+		time.Sleep(sleep)
+	}
+
+	assertErrorChannel := func(t *testing.T, errChan chan error, errContains string) {
+		var err error
+		var chStatus bool
+		select {
+		case err, chStatus = <-errChan:
+		default:
+		}
+
+		if errContains == "" {
+			require.False(t, chStatus, "should not be closed")
+			require.NoError(t, err, "should not have error in channel")
+			return
+		}
+
+		require.True(t, chStatus, "should not be closed")
+		require.Error(t, err, "should have error in channel")
+		require.Contains(t, err.Error(), errContains)
+	}
+
+	tun.Stop()
+
+	waitAfter("first stop")
+
+	tests.AssertLogMessage(t, test.Settings(), "Tunnel health monitor stopped")
+
+	assertErrorChannel(t, errChan, "")
+
+	checkTunnelFailed := func() {
+		checkLocalTunnel(t, test, localServerPort, true)
+	}
+
+	require.NotPanics(t, checkTunnelFailed, "check after stop should not panic")
+
+	anotherErrChan := make(chan error, 10)
+	startMonitor := func() {
+		tun.HealthMonitor(anotherErrChan)
+	}
+
+	require.NotPanics(t, startMonitor, "startMonitor should not panic")
+	waitAfter("health monitor after stop")
+
+	tests.AssertLogMessage(t, test.Settings(), "Call HealthMonitor. Tunnel stopped")
+	assertErrorChannel(t, anotherErrChan, "tunnel stopped")
+
+	secondStopTunnel := func() {
+		tun.Stop()
+	}
+
+	require.NotPanics(t, secondStopTunnel, "second Stop should not panic")
+	waitAfter("second stop")
+
+	tests.AssertLogMessage(t, test.Settings(), "Tunnel already stopped")
+}
+
 func checkLocalTunnel(t *testing.T, test *tests.Test, localServerPort int, wantError bool) {
 	url := fmt.Sprintf("http://127.0.0.1:%d", localServerPort)
@@ -205,3 +244,54 @@ func checkLocalTunnel(t *testing.T, test *tests.Test, localServerPort int, wantE
 	assert(t, err, "check local tunnel. Want error %v", wantError)
 }
+
+func prepareContainerForTunnelTest(t *testing.T, test *tests.Test) (*Client, *tests.TestContainerWrapper, int) {
+	sshClient, container := startContainerAndClientWithContainer(t, test)
+	sshClient.WithLoopsParams(ClientLoopsParams{
+		NewSession: retry.NewEmptyParams(
+			retry.WithAttempts(5),
+			retry.WithWait(250*time.Millisecond),
+		),
+	})
+
+	// we don't have /opt/deckhouse in the container, so we should create it before start any UploadScript with sudo
+	err := container.Container.CreateDeckhouseDirs()
+	require.NoError(t, err, "could not create deckhouse dirs")
+
+	remoteServerPort := tests.RandPortExclude([]int{container.Container.RemotePort()})
+	remoteServerScript := fmt.Sprintf(`#!/bin/bash
+while true ; do {
+	echo -ne "HTTP/1.0 200 OK\r\nContent-Length: 2\r\n\r\n" ;
+	echo -n "OK";
+} | nc -l -p %d ;
+done`, remoteServerPort)
+
+	const remoteServerFile = "/tmp/server.sh"
+	localServerFile := test.MustCreateTmpFile(t, remoteServerScript, true, "remote_server", "server.sh")
+
+	err = sshClient.File().Upload(context.TODO(), localServerFile, remoteServerFile)
+	require.NoError(t, err)
+
+	runRemoteServerSession, err := sshClient.NewSSHSession()
+	require.NoError(t, err)
+
+	t.Cleanup(func() {
+		err := runRemoteServerSession.Signal(ssh.SIGKILL)
+		if err != nil {
+			test.Logger.ErrorF("error killing remote server: %v", err)
+		}
+		err = runRemoteServerSession.Close()
+		if err != nil {
+			test.Logger.ErrorF("error closing remote server session: %v", err)
+		}
+	})
+
+	err = runRemoteServerSession.Start(remoteServerFile)
+	require.NoError(t, err, "error starting remote server")
+
+	t.Cleanup(func() {
+		sshClient.Stop()
+	})
+
+	return sshClient, container, remoteServerPort
+}
diff --git a/pkg/ssh/gossh/upload-script.go b/pkg/ssh/gossh/upload-script.go
index c5a14de..9e38594 100644
--- a/pkg/ssh/gossh/upload-script.go
+++ b/pkg/ssh/gossh/upload-script.go
@@ -120,7 +120,7 @@ func (u *SSHUploadScript) Execute(ctx context.Context) ([]byte, error) {
 	remotePath := utils.ExecuteRemoteScriptPath(u, scriptName, false)
 
 	logger.DebugF("Uploading script %s to %s\n", u.ScriptPath, remotePath)
-	err := NewSSHFile(u.sshClient.settings, u.sshClient.sshClient).Upload(ctx, u.ScriptPath, remotePath)
+	err := NewSSHFile(u.sshClient.settings, u.sshClient).Upload(ctx, u.ScriptPath, remotePath)
 	if err != nil {
 		return nil, fmt.Errorf("upload: %v", err)
 	}
diff --git a/pkg/ssh/session/session.go b/pkg/ssh/session/session.go
index 6276247..342f849 100644
--- a/pkg/ssh/session/session.go
+++ b/pkg/ssh/session/session.go
@@ -19,6 +19,8 @@ import (
 	"sort"
 	"strings"
 	"sync"
+
+	"github.com/deckhouse/lib-connection/pkg/settings"
 )
 
 type Input struct {
@@ -47,7 +49,7 @@ type AgentPrivateKey struct {
 func (s *AgentSettings) AuthSockEnv() string {
 	if s.AuthSock != "" {
-		return fmt.Sprintf("SSH_AUTH_SOCK=%s", s.AuthSock)
+		return fmt.Sprintf("%s=%s", settings.SSHAgentAuthSockEnv, s.AuthSock)
 	}
 	return ""
 }
@@ -275,6 +277,7 @@ func (s *Session) Copy() *Session {
 	ses.BastionUser = s.BastionUser
 	ses.BastionPassword = s.BastionPassword
 	ses.ExtraArgs = s.ExtraArgs
+	ses.BecomePass = s.BecomePass
 	ses.host = s.host
 
 	if s.AgentSettings != nil {
@@ -323,3 +326,115 @@ func (s *Session) selectNewHost() {
 
 	s.host = host.Host
 }
+
+func Compare(first, second *Session) bool {
+	if first == nil && second == nil {
+		return true
+	}
+
+	if first == nil {
+		return false
+	}
+
+	if second == nil {
+		return false
+	}
+
+	if first.User != second.User {
+		return false
+	}
+
+	if first.Port != second.Port {
+		return false
+	}
+
+	if first.BecomePass != second.BecomePass {
+		return false
+	}
+
+	if first.BastionUser != second.BastionUser {
+		return false
+	}
+
+	if first.BastionHost != second.BastionHost {
+		return false
+	}
+
+	if first.BastionPort != second.BastionPort {
+		return false
+	}
+
+	if first.BastionPassword != second.BastionPassword {
+		return false
+	}
+
+	if first.ExtraArgs != second.ExtraArgs {
+		return false
+	}
+
+	firstAvailableHosts := first.AvailableHosts()
+	secondAvailableHosts := second.AvailableHosts()
+
+	if len(firstAvailableHosts) != len(secondAvailableHosts) {
+		return false
+	}
+
+	firstHosts := make(map[string]struct{}, len(firstAvailableHosts))
+	for _, host := range firstAvailableHosts {
+		firstHosts[host.Host] = struct{}{}
+	}
+
+	for _, host := range secondAvailableHosts {
+		if _, ok := firstHosts[host.Host]; !ok {
+			return false
+		}
+	}
+
+	return true
+}
+
+type SessionWithPrivateKeys struct {
+	Session *Session
+	Keys    []AgentPrivateKey
+}
+
+func CompareWithKeys(first, second *SessionWithPrivateKeys) bool {
+	if first == nil && second == nil {
+		return true
+	}
+
+	if first == nil {
+		return false
+	}
+
+	if second == nil {
+		return false
+	}
+
+	if !Compare(first.Session, second.Session) {
+		return false
+	}
+
+	if len(first.Keys) != len(second.Keys) {
+		return false
+	}
+
+	firstKeys := make(map[string]struct{}, len(first.Keys))
+
+	for _, key := range first.Keys {
+		firstKeys[privateKeyString(key)] = struct{}{}
+	}
+
+	for _, key := range second.Keys {
+		str := privateKeyString(key)
+		if _, ok := firstKeys[str]; !ok {
+			return false
+		}
+	}
+
+	return true
+}
+
+func privateKeyString(key AgentPrivateKey) string {
+	return fmt.Sprintf("%s#%s", key.Key, key.Passphrase)
+}
diff --git a/pkg/ssh/testssh/kube_proxy_test.go b/pkg/ssh/testssh/kube_proxy_test.go
new file mode 100644
index 0000000..dde84f8
--- /dev/null
+++ b/pkg/ssh/testssh/kube_proxy_test.go
@@ -0,0 +1,198 @@
+// Copyright 2025 Flant JSC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and +// limitations under the License. + +package testssh + +import ( + "context" + "fmt" + "strconv" + "testing" + "time" + + "github.com/deckhouse/lib-dhctl/pkg/retry" + "github.com/stretchr/testify/require" + + connection "github.com/deckhouse/lib-connection/pkg" + "github.com/deckhouse/lib-connection/pkg/ssh/base/kubeproxy" + "github.com/deckhouse/lib-connection/pkg/ssh/clissh" + sshconfig "github.com/deckhouse/lib-connection/pkg/ssh/config" + "github.com/deckhouse/lib-connection/pkg/ssh/gossh" + "github.com/deckhouse/lib-connection/pkg/tests" +) + +func TestKubeProxy(t *testing.T) { + runTests := []runTest{ + { + name: "Go", + mode: sshconfig.Mode{ + ForceModern: true, + }, + }, + + { + name: "Cli", + mode: sshconfig.Mode{ + ForceLegacy: true, + }, + }, + } + + baseTest := tests.ShouldNewIntegrationTest(t, "TestBaseKubeProxy") + + container := startContainerAndKind(t, baseTest) + + assertGetRandomPort := func(t *testing.T, port string) { + intPort, err := strconv.Atoi(port) + require.NoError(t, err, "should convert port to int") + + require.True(t, intPort >= 22340, "should port in range %d", intPort) + require.True(t, intPort <= 22499, "should port in range %d", intPort) + } + + for _, rt := range runTests { + t.Run(rt.name, func(t *testing.T) { + test := tests.ShouldNewIntegrationTest(t, "TestKubeProxy"+rt.name) + + wait := func(op string) { + test.GetLogger().InfoF("%s", op) + time.Sleep(10 * time.Second) + } + + client := startClientForContainer(t, test, rt, container) + + // use default port + firstProxy := client.KubeProxy() + firstProxyPort, err := firstProxy.Start(-1) + require.NoError(t, err, "proxy should start") + require.Equal(t, fmt.Sprintf("%d", kubeproxy.DefaultLocalAPIPort), firstProxyPort, "should start on default port") + + tests.AssertKubeProxy(t, test, firstProxyPort, false) + + // second proxy start on default but get random port + secondProxy := client.KubeProxy() + 
secondProxyPort, err := secondProxy.Start(-1)
+ require.NoError(t, err, "second proxy should start")
+ assertGetRandomPort(t, secondProxyPort)
+
+ tests.AssertKubeProxy(t, test, secondProxyPort, false)
+
+ // third proxy also requests the default port but gets a random port
+ thirdProxy := client.KubeProxy()
+ thirdProxyPort, err := thirdProxy.Start(-1)
+ require.NoError(t, err, "third proxy should start")
+ assertGetRandomPort(t, thirdProxyPort)
+
+ tests.AssertKubeProxy(t, test, thirdProxyPort, false)
+
+ // fourth proxy starts on a custom port
+ customPort := tests.RandRange(30001, 30199)
+ test.GetLogger().InfoF("Got custom Port: %d", customPort)
+ fourthProxy := client.KubeProxy()
+ fourthProxyPort, err := fourthProxy.Start(customPort)
+ require.NoError(t, err, "fourth proxy should start")
+ require.Equal(t, fmt.Sprintf("%d", customPort), fourthProxyPort, "should start on custom port")
+
+ tests.AssertKubeProxy(t, test, fourthProxyPort, false)
+
+ anotherOnCustomProxy := client.KubeProxy()
+ _, err = anotherOnCustomProxy.Start(customPort)
+ require.Error(t, err, "proxy should not start on the same port")
+
+ secondProxy.Stop(-1)
+ fourthProxy.Stop(-1)
+
+ wait("stopping proxies")
+
+ stopped := []string{
+ fourthProxyPort,
+ secondProxyPort,
+ }
+
+ for _, port := range stopped {
+ tests.AssertKubeProxy(t, test, port, true)
+ }
+
+ notAffected := []string{
+ firstProxyPort,
+ thirdProxyPort,
+ }
+
+ for _, port := range notAffected {
+ tests.AssertKubeProxy(t, test, port, false)
+ }
+ })
+ }
+}
+
+func startContainerAndKind(t *testing.T, test *tests.Test, opts ...tests.TestContainerWrapperSettingsOpts) *tests.TestContainerWrapper {
+ container := tests.NewTestContainerWrapper(t, test, opts...) 
+ + rt := runTest{ + name: "start kind", + mode: sshconfig.Mode{ + ForceModern: true, + }, + } + + kindCluster := tests.CreateKINDCluster(t, &tests.KINDClusterCreateParams{ + Test: test, + ClusterName: "kube-proxy-general", + Containers: []*tests.SSHContainersForKind{ + { + Client: startClientForContainer(t, test, rt, container), + Container: container, + }, + }, + }) + + kindCluster.RegisterCleanup(t) + + return container +} + +func startClientForContainer(t *testing.T, test *tests.Test, rt runTest, container *tests.TestContainerWrapper) connection.SSHClient { + sess := tests.Session(container) + keys := container.AgentPrivateKeys() + + defaultLoop := retry.NewEmptyParams( + retry.WithWait(2*time.Second), + retry.WithAttempts(7), + ) + + sshSettings := test.Settings() + ctx := context.TODO() + + var sshClient connection.SSHClient + + if rt.mode.ForceModern { + sshClient = gossh.NewClient(ctx, sshSettings, sess, keys).WithLoopsParams(gossh.ClientLoopsParams{ + ConnectToBastion: defaultLoop.Clone(), + ConnectToHostViaBastion: defaultLoop.Clone(), + ConnectToHostDirectly: defaultLoop.Clone(), + NewSession: defaultLoop.Clone(), + CheckReverseTunnel: defaultLoop.Clone(), + }) + } else { + sshClient = clissh.NewClient(sshSettings, sess, keys, true) + } + + err := sshClient.Start() + // expecting no error on client start + require.NoError(t, err) + + registerStopClient(t, sshClient) + + return sshClient +} diff --git a/pkg/tests/helpers.go b/pkg/tests/helpers.go index 20f76f6..b806731 100644 --- a/pkg/tests/helpers.go +++ b/pkg/tests/helpers.go @@ -20,6 +20,7 @@ import ( "encoding/pem" "fmt" "os" + "regexp" "strings" "testing" "time" @@ -230,15 +231,15 @@ func Name(t *testing.T) string { return prepareTestNames(t.Name()) } -func findLogMsg(t *testing.T, sett settings.Settings, msgInLog string) string { +func findLogMsg(t *testing.T, sett settings.Settings, msgInLog string) []string { loggerInterface := sett.Logger() logger, ok := loggerInterface.(*log.InMemoryLogger) 
require.True(t, ok, "logger is not of type *log.InMemoryLogger") - getMatch, err := logger.FirstMatch(&log.Match{ - Prefix: []string{ - msgInLog, + getMatch, err := logger.AllMatches(&log.Match{ + Regex: []*regexp.Regexp{ + regexp.MustCompile(fmt.Sprintf(`.*%s.*`, regexp.QuoteMeta(msgInLog))), }, }) @@ -249,10 +250,20 @@ func findLogMsg(t *testing.T, sett settings.Settings, msgInLog string) string { func AssertLogMessage(t *testing.T, sett settings.Settings, msgInLog string) { getMatch := findLogMsg(t, sett, msgInLog) - require.Contains(t, getMatch, msgInLog, "should contain %s", msgInLog) + require.Len(t, getMatch, 1, "should have one match %s", msgInLog) + require.Contains(t, getMatch[0], msgInLog, "should contain %s", msgInLog) } func AssertNoLogMessage(t *testing.T, sett settings.Settings, msgInLog string) { getMatch := findLogMsg(t, sett, msgInLog) - require.Empty(t, getMatch, "should not find log msg %s", msgInLog) + require.Len(t, getMatch, 0, "should not have any match %s", msgInLog) +} + +func AssertLogMessagesCount(t *testing.T, sett settings.Settings, msgInLog string, expected int) { + getMatch := findLogMsg(t, sett, msgInLog) + require.Len(t, getMatch, expected, "should have %d matches %s", expected, msgInLog) + + for _, m := range getMatch { + require.Contains(t, m, msgInLog, "should contain %s", msgInLog) + } } diff --git a/pkg/tests/kind.go b/pkg/tests/kind.go index 7b38db6..6d53868 100644 --- a/pkg/tests/kind.go +++ b/pkg/tests/kind.go @@ -15,75 +15,450 @@ package tests import ( + "context" + "encoding/json" "fmt" - "os" "os/exec" + "regexp" + "strings" + "testing" "time" "github.com/deckhouse/lib-dhctl/pkg/retry" + "github.com/stretchr/testify/require" + "k8s.io/client-go/rest" + "k8s.io/client-go/tools/clientcmd" + + connection "github.com/deckhouse/lib-connection/pkg" ) const ( - KindConfigPath = "../../../hack/kind/cluster-kube-proxy.yml" - KindClusterName = "k8s-test" - KindBinary = "../../../bin/kind" + KindBinary = "../../../bin/kind" + 
kindConfig = ` +kind: Cluster +apiVersion: kind.x-k8s.io/v1alpha4 +nodes: + - role: control-plane +` +) + +var ( + extractPortRe = regexp.MustCompile(`server:\s+https:\/\/127\.0\.0\.1:([0-9]{2,5})`) ) -func CreateKINDCluster() error { - // checking out, what kind config exists - _, err := os.Stat(KindConfigPath) +type SSHContainersForKind struct { + Client connection.SSHClient + Container *TestContainerWrapper +} + +type KINDClusterCreateParams struct { + Test *Test + ClusterName string + Containers []*SSHContainersForKind + + NoPrepareLocalKubectlInSSHContainer bool +} + +type KINDCluster struct { + Name string + ControlPlaneIP string + ControlPlanePort string + + test *Test + kubeconfig string + restConfig *rest.Config +} + +func (c *KINDCluster) appendClusterNameArg(args []string) []string { + return append(args, fmt.Sprintf("--name=%s", c.Name)) +} + +func (c *KINDCluster) runKind(args ...string) (string, error) { + cmd := exec.Command(KindBinary, c.appendClusterNameArg(args)...) + out, err := cmd.CombinedOutput() + return string(out), err +} + +func (c *KINDCluster) RegisterCleanup(t *testing.T) { + t.Cleanup(func() { + if err := c.Delete(); err != nil { + c.test.GetLogger().ErrorF("Failed to delete cluster %s: %s", c.Name, err) + } + }) +} + +func (c *KINDCluster) Delete() error { + logger := c.test.GetLogger() + logger.InfoF("Deleting KIND cluster %s...", c.Name) + out, err := c.runKind("delete", "cluster") if err != nil { + logger.ErrorF("Failed to delete KIND cluster %s: %v:\n%s", c.Name, err, out) return err } - // args to command - args := []string{"create", "cluster", "--name=" + KindClusterName, "--config=" + KindConfigPath} - cmd := exec.Command(KindBinary, args...) 
- out, err := cmd.CombinedOutput()
+
+ logger.InfoF("KIND Cluster %s deleted:\n%s", c.Name, out)
+ return nil
+}
+
+// extractPort returns the API server port and the whole matched "server:" string.
+func (c *KINDCluster) extractPort() (string, string) {
+ submatches := extractPortRe.FindStringSubmatch(c.kubeconfig)
+ if len(submatches) != 2 {
+ return "", ""
+ }
+
+ return submatches[1], submatches[0]
+}
+
+func (c *KINDCluster) containerName() string {
+ return fmt.Sprintf("%s-control-plane", c.Name)
+}
+
+func (c *KINDCluster) Kubeconfig() string {
+ return c.kubeconfig
+}
+
+// KubeconfigWithIP returns the kubeconfig with the server address replaced.
+// ip and port may be empty; if ip is empty, the raw kubeconfig is returned.
+func (c *KINDCluster) KubeconfigWithIP(ip string, port string) string {
+ if ip == "" {
+ return c.kubeconfig
+ }
+
+ extractedPort, full := c.extractPort()
+
+ if port == "" {
+ port = extractedPort
+ }
+
+ replace := fmt.Sprintf("server: https://%s:%s", ip, port)
+
+ return strings.ReplaceAll(c.kubeconfig, full, replace)
+}
+
+func (c *KINDCluster) RESTConfig() (*rest.Config, error) {
+ if c.restConfig != nil {
+ return c.copyREST(), nil
+ }
+
+ config, err := clientcmd.Load([]byte(c.Kubeconfig()))
+ if err != nil {
+ return nil, err
+ }
+
+ cluster, ok := config.Clusters[fmt.Sprintf("kind-%s", c.Name)]
+ if !ok {
+ return nil, fmt.Errorf("cluster %s not found in kubeconfig", c.Name)
+ }
+
+ ca := cluster.CertificateAuthorityData
+ if len(ca) == 0 {
+ return nil, fmt.Errorf("no CA data for cluster %s", c.Name)
+ }
+
+ saName := "test-kube-admin"
+
+ _, err = c.runKubectlInSystemNs("Create SA for token", "create", "serviceaccount", saName)
+ if err != nil {
+ return nil, err
+ }
+
+ roleBindingArgs := []string{
+ "create",
+ "clusterrolebinding",
+ "test-kube-admin-binding",
+ "--clusterrole=cluster-admin",
+ fmt.Sprintf("--serviceaccount=kube-system:%s", saName),
+ }
+
+ _, err = c.runKubectlInSystemNs("Create role binding", roleBindingArgs...) 
if err != nil { - return fmt.Errorf("could not create kind cluster: %s: %w\n", out, err) + return nil, err } - return err + token, err := c.runKubectlInSystemNs("Create token", "create", "token", saName) + if err != nil { + return nil, err + } + + c.restConfig = &rest.Config{ + Host: fmt.Sprintf("https://127.0.0.1:%s", c.ControlPlanePort), + BearerToken: token, + TLSClientConfig: rest.TLSClientConfig{ + CAData: ca, + }, + } + + return c.copyREST(), nil } -func DeleteKindCluster() error { - args := []string{"delete", "cluster", "--name=" + KindClusterName} - cmd := exec.Command(KindBinary, args...) +func (c *KINDCluster) copyREST() *rest.Config { + ca := c.restConfig.TLSClientConfig.CAData + + cpy := *c.restConfig + cpyCA := make([]byte, len(ca)) + copy(cpyCA, ca) + + cpy.TLSClientConfig.CAData = cpyCA - return cmd.Run() + return &cpy } -func GetKINDControlPlaneIP() (string, error) { - getIPCmd := []string{ - "inspect", - "-f", "{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}", - KindClusterName + "-control-plane", +func (c *KINDCluster) runKubectlInSystemNs(name string, args ...string) (string, error) { + runArgs := []string{ + "kubectl", + "-n", + "kube-system", } - ip := "" - err := retry.NewSilentLoop("discovering IP of control plane noe", 10, 2*time.Second).Run(func() error { - cmd := exec.Command("docker", getIPCmd...) - out, err := cmd.Output() - if err != nil { - return err + runArgs = append(runArgs, args...) + + return execInKINDContainer(c, name, runArgs...) 
+}
+
+func CreateKINDCluster(t *testing.T, params *KINDClusterCreateParams) *KINDCluster {
+ test := params.Test
+
+ configPath := test.MustCreateTmpFile(t, kindConfig, false, "kind-config.yaml")
+ clusterName := fmt.Sprintf("test-connection-%s", params.ClusterName)
+
+ cluster := &KINDCluster{
+ test: test,
+ Name: clusterName,
+ }
+
+ // args to command
+ args := []string{
+ "create",
+ "cluster",
+ fmt.Sprintf("--config=%s", configPath),
+ }
+
+ test.GetLogger().InfoF("Creating KIND cluster %s...", clusterName)
+
+ out, err := cluster.runKind(args...)
+ require.NoError(t, err, "could not create kind cluster: %s", out)
+
+ test.GetLogger().InfoF("KIND cluster %s created:\n%s", clusterName, out)
+
+ cluster.ControlPlaneIP, err = getKINDControlPlaneIP(cluster)
+ checkErrorDuringCreateCluster(t, cluster, err, "failed to get kind control plane IP")
+
+ cluster.kubeconfig, err = getKINDKubeconfig(cluster)
+ checkErrorDuringCreateCluster(t, cluster, err, "failed to get kind kubeconfig")
+
+ cluster.ControlPlanePort, _ = cluster.extractPort()
+
+ kubectlPreparator := newLocalKubectlPreparator(cluster)
+
+ for _, sshContainer := range params.Containers {
+ container := sshContainer.Container.Container
+ containerName := container.ContainerSettings().ContainerName
+
+ err = container.DockerNetworkConnect(false, "kind")
+ checkErrorDuringCreateCluster(t, cluster, err, "failed to connect ssh container %s to kind cluster", containerName)
+
+ if !params.NoPrepareLocalKubectlInSSHContainer {
+ kubectlPreparator.prepareLocalKubeCtlInSSHContainer(t, sshContainer)
+ } else {
+ params.Test.GetLogger().InfoF("Skipping prepare local kubectl in ssh container %s", containerName)
+ }
+ }
+
+ return cluster
+}
+
+func execInKINDContainer(cluster *KINDCluster, name string, args ...string) (string, error) {
+ a := []string{
+ "exec",
+ cluster.containerName(),
+ }
+
+ a = append(a, args...)
+
+ return runDockerForKINDContainer(cluster, name, a...) 
+} + +func runDockerForKINDContainer(cluster *KINDCluster, name string, args ...string) (string, error) { + params := retry.NewEmptyParams( + retry.WithName("%s", name), + retry.WithAttempts(10), + retry.WithWait(2*time.Second), + retry.WithLogger(cluster.test.GetLogger()), + ) + + out := "" + + err := retry.NewLoopWithParams(params).Run(func() error { + var err error + out, err = RunDockerWithOut(args...) + out = strings.TrimSpace(out) + return err }) + if err != nil { return "", err } - return ip, nil + return out, nil } -func GetKINDKubeconfig() (string, error) { - args := []string{"get", "kubeconfig", "--name=" + KindClusterName} - cmd := exec.Command(KindBinary, args...) - out, err := cmd.CombinedOutput() +func getKubectlVersion(cluster *KINDCluster) (string, error) { + args := []string{ + "kubectl", + "version", + "--client", + "-o", + "json", + } + + out, err := execInKINDContainer(cluster, "Get kubectl version", args...) + if err != nil { + return "", err + } + + type clientVersion struct { + GitVersion string `json:"gitVersion"` + } + + type version struct { + ClientVersion clientVersion `json:"clientVersion"` + } + + v := version{} + err = json.Unmarshal([]byte(out), &v) if err != nil { - return "", fmt.Errorf("couldn't get kind kubeconfig: %s: %w", string(out), err) + return "", err } - return string(out), nil + if v.ClientVersion.GitVersion == "" { + return "", fmt.Errorf("failed to get kubectl version") + } + + return v.ClientVersion.GitVersion, nil +} + +func getKINDControlPlaneIP(cluster *KINDCluster) (string, error) { + args := []string{ + "inspect", + "-f", "{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}", + cluster.containerName(), + } + + return runDockerForKINDContainer(cluster, "Discovering IP of control plane node", args...) 
+}
+
+func getKINDKubeconfig(cluster *KINDCluster) (string, error) {
+ out, err := cluster.runKind("get", "kubeconfig")
+ if err != nil {
+ return "", fmt.Errorf("couldn't get kind kubeconfig: %s: %w", out, err)
+ }
+
+ return out, nil
+}
+
+func checkErrorDuringCreateCluster(t *testing.T, cluster *KINDCluster, err error, msg string, args ...any) {
+ t.Helper()
+
+ if err == nil {
+ return
+ }
+
+ deleteErr := cluster.Delete()
+ if deleteErr != nil {
+ cluster.test.GetLogger().ErrorF("Cannot delete kind cluster %s after create fail: %v", cluster.Name, deleteErr)
+ }
+
+ require.NoError(t, err, fmt.Sprintf(msg, args...))
+}
+
+type localKubectlPreparator struct {
+ kubectlVersion string
+ configPath string
+ cluster *KINDCluster
+}
+
+func newLocalKubectlPreparator(cluster *KINDCluster) *localKubectlPreparator {
+ return &localKubectlPreparator{
+ cluster: cluster,
+ }
+}
+
+func (p *localKubectlPreparator) getKubectlVersion(t *testing.T) string {
+ if p.kubectlVersion != "" {
+ return p.kubectlVersion
+ }
+
+ kubectlVersion, err := getKubectlVersion(p.cluster)
+ checkErrorDuringCreateCluster(t, p.cluster, err, "failed to get kubectl version")
+
+ p.kubectlVersion = kubectlVersion
+ return kubectlVersion
+}
+
+func (p *localKubectlPreparator) getKubeConfigPath(t *testing.T) string {
+ if p.configPath != "" {
+ return p.configPath
+ }
+
+ cluster := p.cluster
+
+ newKubeconfig := cluster.KubeconfigWithIP(cluster.ControlPlaneIP, "6443")
+
+ configTmp, err := cluster.test.CreateTmpFile(newKubeconfig, false, "kubeconfig")
+ checkErrorDuringCreateCluster(t, cluster, err, "failed to create kind config file to upload")
+
+ p.configPath = configTmp
+ return configTmp
+}
+
+func (p *localKubectlPreparator) prepareLocalKubeCtlInSSHContainer(t *testing.T, sshContainer *SSHContainersForKind) {
+ container := sshContainer.Container.Container
+ containerName := container.ContainerSettings().ContainerName
+ cluster := p.cluster
+ test := cluster.test
+
+ kubectlVersion := 
p.getKubectlVersion(t) + + downloadKubectlParams := retry.NewEmptyParams( + retry.WithName("Download kubectl to ssh container %s", containerName), + retry.WithAttempts(10), + retry.WithWait(2*time.Second), + retry.WithLogger(test.GetLogger()), + ) + err := retry.NewLoopWithParams(downloadKubectlParams).Run(func() error { + return container.DownloadKubectl(kubectlVersion) + }) + checkErrorDuringCreateCluster(t, cluster, err, "failed to download kubectl to ssh container %s", containerName) + + err = container.CreateDirectory("/config/.kube") + checkErrorDuringCreateCluster(t, cluster, err, "failed to create kube config directory in ssh container %s", containerName) + + file := sshContainer.Client.File() + uploadParams := retry.NewEmptyParams( + retry.WithName("Upload kubeconfig to ssh container %s", containerName), + retry.WithAttempts(10), + retry.WithWait(2*time.Second), + retry.WithLogger(test.GetLogger()), + ) + + configTmp := p.getKubeConfigPath(t) + + err = retry.NewLoopWithParams(uploadParams).Run(func() error { + return file.Upload(context.Background(), configTmp, "/config/.kube/config") + }) + checkErrorDuringCreateCluster(t, cluster, err, "failed to upload kubeconfig to ssh container") + + err = container.CreateDirectory("/etc/kubernetes/") + checkErrorDuringCreateCluster(t, cluster, err, "failed to create directory /etc/kubernetes on ssh container %s", containerName) + + err = container.ExecToContainer( + "symlink of kubeconfig", + "ln", + "-s", + "/config/.kube/config", + "/etc/kubernetes/admin.conf", + ) + checkErrorDuringCreateCluster(t, cluster, err, "failed to create link to kube config on ssh container %s", containerName) } diff --git a/pkg/tests/kube_proxy.go b/pkg/tests/kube_proxy.go new file mode 100644 index 0000000..b4a01fd --- /dev/null +++ b/pkg/tests/kube_proxy.go @@ -0,0 +1,76 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the 
License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. + +package tests + +import ( + "fmt" + "net" + "testing" + "time" + + "github.com/deckhouse/lib-dhctl/pkg/retry" + "github.com/stretchr/testify/require" +) + +func AssertKubeProxy(t *testing.T, test *Test, localServerPort string, wantError bool) { + url := fmt.Sprintf("http://127.0.0.1:%s/api/v1/nodes", localServerPort) + + test.GetLogger().InfoF("Assert kube proxy on '%s' want err: %v", url, wantError) + + prefixLogger := NewPrefixLogger(test.Logger).WithPrefix(test.FullName()) + + defaultParams := retry.NewEmptyParams( + retry.WithAttempts(10), + retry.WithWait(500*time.Millisecond), + retry.WithLogger(test.Logger), + ) + + if wantError { + dialTo := fmt.Sprintf("127.0.0.1:%s", localServerPort) + + dialParams := defaultParams.Clone(retry.WithName("Try to dial to %s", dialTo)) + getLoopParams := defaultParams.Clone( + retry.WithName("Do request to %s after dial", url), + retry.WithAttempts(3), + ) + + err := retry.NewLoopWithParams(dialParams).Run(func() error { + d, err := net.DialTimeout("tcp", dialTo, 5*time.Second) + if err == nil { + d.Close() + } + + _, errGet := DoGetRequest(url, getLoopParams, prefixLogger) + if errGet == nil && err != nil { + test.GetLogger().InfoF("Dial not success %v but get is success", err) + } + + return err + }) + + require.Error(t, err, "should not reach this destination %s", dialTo) + return + } + + requestLoop := defaultParams.Clone(retry.WithName("Check kube proxy available by %s", url)) + + _, err := DoGetRequest( + url, + requestLoop, + NewPrefixLogger(test.Logger).WithPrefix(test.FullName()), + ) + + 
require.NoError(t, err, "kube proxy should be available, want error %v", wantError)
+}
diff --git a/pkg/tests/parse_flags.go b/pkg/tests/parse_flags.go
index acf50c3..5a199dd 100644
--- a/pkg/tests/parse_flags.go
+++ b/pkg/tests/parse_flags.go
@@ -190,4 +190,6 @@ func AssertParseFlagsHelp(t *testing.T, params AssertParseFlagsHelpParams) {
 ),
 )
 }
+
+ logger.InfoF("Has valid help:\n%s", out)
 }
diff --git a/pkg/tests/provider/kube_test.go b/pkg/tests/provider/kube_test.go
new file mode 100644
index 0000000..75ec02e
--- /dev/null
+++ b/pkg/tests/provider/kube_test.go
@@ -0,0 +1,1139 @@
+// Copyright 2026 Flant JSC
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+ +package provider + +import ( + "context" + "fmt" + "strings" + "testing" + "time" + + "github.com/deckhouse/lib-dhctl/pkg/retry" + "github.com/name212/govalue" + "github.com/stretchr/testify/require" + v1 "k8s.io/api/core/v1" + metav1 "k8s.io/apimachinery/pkg/apis/meta/v1" + + connection "github.com/deckhouse/lib-connection/pkg" + "github.com/deckhouse/lib-connection/pkg/kube" + "github.com/deckhouse/lib-connection/pkg/provider" + "github.com/deckhouse/lib-connection/pkg/ssh" + "github.com/deckhouse/lib-connection/pkg/ssh/clissh" + sshconfig "github.com/deckhouse/lib-connection/pkg/ssh/config" + "github.com/deckhouse/lib-connection/pkg/ssh/gossh" + "github.com/deckhouse/lib-connection/pkg/tests" +) + +func TestDefaultKubeProvider(t *testing.T) { + cliRunTest := runTest{ + name: "Cli", + mode: sshconfig.Mode{ + ForceLegacy: true, + }, + } + + runTests := []runTest{ + { + name: "Go", + mode: sshconfig.Mode{ + ForceModern: true, + }, + }, + + cliRunTest, + } + + baseTest := tests.ShouldNewIntegrationTest( + t, + t.Name(), + tests.TestWithParallelRun(false), + ) + + firstContainer := tests.NewTestContainerWrapper(t, baseTest, tests.WithContainerName("first")) + secondContainer := tests.NewTestContainerWrapper( + t, + baseTest, + tests.WithContainerName("second"), + tests.WithConnectToContainerNetwork(firstContainer), + ) + + kindCluster := createKINDCluster(t, baseTest, firstContainer, secondContainer) + + t.Run("OverSSH", func(t *testing.T) { + t.Run("Client", func(t *testing.T) { + t.Run("SimpleGet", func(t *testing.T) { + for _, rt := range runTests { + t.Run(rt.name, func(t *testing.T) { + test := newSubTest(t, rt) + + defaultConfig := connectionConfigForContainer(firstContainer, rt.mode) + sshProvider := getSSHProvider(test, defaultConfig) + registerCleanupSSHProvider(t, test, sshProvider) + + kubeProviderConfig := &kube.Config{} + kubeProvider, _ := getKubeProvider(t, test, kubeProviderConfig, sshProvider) + registerCleanupKubeProvider(t, test, kubeProvider) + 
+ clients := assertSimpleGetKubeClient(t, test, kubeProvider) + + sshClients := make([]connection.SSHClient, 0, len(clients)) + for _, c := range clients { + sshClients = append(sshClients, extractSSHClient(t, c)) + } + + for i, client := range sshClients { + for _, a := range sshClients[i+1:] { + require.True(t, a == client, "ssh clients should be same") + } + } + }) + } + }) + + t.Run("GetClientAfterSwitch", func(t *testing.T) { + for _, rt := range runTests { + t.Run(rt.name, func(t *testing.T) { + test := newSubTest(t, rt) + + defaultConfig := connectionConfigForContainer(firstContainer, rt.mode) + sshProvider := getSSHProvider(test, defaultConfig) + registerCleanupSSHProvider(t, test, sshProvider) + + kubeProviderConfig := &kube.Config{} + kubeProvider, runnerIface := getKubeProvider(t, test, kubeProviderConfig, sshProvider) + registerCleanupKubeProvider(t, test, kubeProvider) + + ctx := context.TODO() + + firstClient, err := kubeProvider.Client(ctx) + require.NoError(t, err, "first client should be created") + + assertKubeClient(t, test, firstClient, true) + + firstSSHClient := extractSSHClient(t, firstClient) + + logClientSwitching(test) + _, err = sshProvider.SwitchClient(ctx, tests.Session(secondContainer), secondContainer.AgentPrivateKeys()) + require.NoError(t, err, "ssh client should be switched") + + // should use custom port for assert that client after switch stopped + provider.RunnerInterfaceWithInitOpts(kube.InitWithLocalPort(tests.RandPort()))(runnerIface) + + afterSwitchClient, err := kubeProvider.Client(ctx) + require.NoError(t, err, "after switch client should be created") + + require.False(t, firstClient == afterSwitchClient, "first client should not be equal to second client after switch") + + assertKubeClient(t, test, afterSwitchClient, true) + assertKubeClient(t, test, firstClient, false) + + afterSwitchSSHClient := extractSSHClient(t, afterSwitchClient) + require.False(t, firstSSHClient == afterSwitchSSHClient, "first ssh client should not 
be equal to second client after switch") + + logClientSwitching(test) + _, err = sshProvider.SwitchToDefault(ctx) + require.NoError(t, err, "ssh client should be switched to default") + + // should use custom port for assert that client after switch stopped + provider.RunnerInterfaceWithInitOpts(kube.InitWithLocalPort(tests.RandPort()))(runnerIface) + + afterSwitchToDefaultClient, err := kubeProvider.Client(ctx) + require.NoError(t, err, "after switch to default client should be created") + + require.False(t, firstClient == afterSwitchToDefaultClient, "first client should not be equal to second client after switch to default") + require.False(t, afterSwitchClient == afterSwitchToDefaultClient, "after switch client should not be equal to second client after switch to default") + + assertKubeClient(t, test, afterSwitchToDefaultClient, true) + assertKubeClient(t, test, afterSwitchClient, false) + + afterSwitchToDefaultSSHClient := extractSSHClient(t, afterSwitchToDefaultClient) + require.False(t, firstSSHClient == afterSwitchToDefaultSSHClient, "first SSH client should not be equal to after switch to default SSH client") + require.False(t, afterSwitchSSHClient == afterSwitchToDefaultSSHClient, "after switch SSH client should not be equal to after switch to default SSH client") + }) + } + }) + }) + + t.Run("NewAdditionalClient", func(t *testing.T) { + for _, rt := range runTests { + t.Run(rt.name, func(t *testing.T) { + test := newSubTest(t, rt) + + defaultConfig := connectionConfigForContainer(firstContainer, rt.mode) + sshProvider := getSSHProvider(test, defaultConfig) + registerCleanupSSHProvider(t, test, sshProvider) + + kubeProviderConfig := &kube.Config{} + kubeProvider, _ := getKubeProvider(t, test, kubeProviderConfig, sshProvider) + registerCleanupKubeProvider(t, test, kubeProvider) + + ctx := context.TODO() + + firstClient, err := kubeProvider.Client(ctx) + require.NoError(t, err, "first client should be created") + + assertKubeClient(t, test, firstClient, 
true)
+
+ additionalClients := make([]connection.KubeClient, 0, 2)
+
+ firstAdditionalClient, err := kubeProvider.NewAdditionalClient(ctx)
+ require.NoError(t, err, "additional client should be created")
+ additionalClients = append(additionalClients, firstAdditionalClient)
+
+ secondAdditionalClient, err := kubeProvider.NewAdditionalClient(ctx)
+ require.NoError(t, err, "additional client should be created")
+ additionalClients = append(additionalClients, secondAdditionalClient)
+
+ require.Equal(t, kubeProvider.AdditionalClientsCount(), len(additionalClients), "additional clients should be added to provider")
+
+ assertAdditionalClientsOverSSH(t, test, firstClient, additionalClients, true)
+
+ logClientSwitching(test)
+ sshProvider.WithID(rt.getName(t) + "AfterSwitch")
+ _, err = sshProvider.SwitchClient(ctx, tests.Session(secondContainer), secondContainer.AgentPrivateKeys())
+ require.NoError(t, err, "ssh client should be switched")
+
+ clientAfterSwitch, err := kubeProvider.Client(ctx)
+ require.NoError(t, err, "client after switch should be created")
+ assertKubeClient(t, test, clientAfterSwitch, true)
+
+ firstAdditionalClientAfterSwitch, err := kubeProvider.NewAdditionalClient(ctx)
+ require.NoError(t, err, "additional client should be created")
+ additionalClients = append(additionalClients, firstAdditionalClientAfterSwitch)
+
+ secondAdditionalClientAfterSwitch, err := kubeProvider.NewAdditionalClient(ctx)
+ require.NoError(t, err, "additional client should be created")
+ additionalClients = append(additionalClients, secondAdditionalClientAfterSwitch)
+
+ require.Equal(t, kubeProvider.AdditionalClientsCount(), len(additionalClients), "additional clients should be added to provider")
+
+ assertAdditionalClientsOverSSH(t, test, clientAfterSwitch, additionalClients, true)
+
+ // stopping an additional client does not affect the others
+
+ stoppedClients := []connection.KubeClient{
+ firstAdditionalClient,
+ secondAdditionalClientAfterSwitch,
+ }
+
+ for _, c := range stoppedClients { 
+ kube.Stop(c, true) + } + + assertKubeClient(t, test, clientAfterSwitch, true) + + for _, c := range stoppedClients { + assertKubeClient(t, test, c, false) + assertSSHClientLive(t, test, extractSSHClient(t, c), false) + } + + liveClients := disJoinClients(additionalClients, stoppedClients) + for _, c := range liveClients { + assertKubeClient(t, test, c, true) + } + + assertSSHClientLive(t, test, extractSSHClient(t, clientAfterSwitch), true) + }) + } + }) + + t.Run("NewAdditionalClientWithoutInitialize", func(t *testing.T) { + for _, rt := range runTests { + t.Run(rt.name, func(t *testing.T) { + test := newSubTest(t, rt) + + defaultConfig := connectionConfigForContainer(firstContainer, rt.mode) + sshProvider := getSSHProvider(test, defaultConfig) + registerCleanupSSHProvider(t, test, sshProvider) + + kubeProviderConfig := &kube.Config{} + kubeProvider, _ := getKubeProvider(t, test, kubeProviderConfig, sshProvider) + registerCleanupKubeProvider(t, test, kubeProvider) + + assertNewAdditionalClientsWithoutInitialize(t, test, kubeProvider, assertAdditionalClientsOverSSH) + }) + } + }) + + t.Run("FailAdditionalChecks", func(t *testing.T) { + // test only cli because go-ssh will fail on start + // cli ssh always create client but it can connect after creation + + test := newSubTest(t, cliRunTest) + + defaultConfig := connectionConfigForContainer(firstContainer, cliRunTest.mode) + defaultConfig.Config.Port = tests.Ptr(tests.RandPort()) + sshProvider := getSSHProvider(test, defaultConfig) + registerCleanupSSHProvider(t, test, sshProvider) + + kubeProviderConfig := &kube.Config{} + kubeProvider, _ := getKubeProvider(t, test, kubeProviderConfig, sshProvider, + provider.RunnerInterfaceWithSSHLoopsParams(provider.RunnerInterfaceSSHLoopsParams{ + AwaitAvailabilityOverSSH: retry.NewEmptyParams( + retry.WithWait(2*time.Second), + retry.WithAttempts(4), + ), + }), + ) + registerCleanupKubeProvider(t, test, kubeProvider) + + ctx := context.TODO() + + assertError := func(t 
*testing.T, err error) {
+ require.Error(t, err, "client creation should fail")
+ require.Contains(t, err.Error(), "SSH connect failed to")
+ }
+
+ _, err := kubeProvider.Client(ctx)
+ assertError(t, err)
+
+ _, err = kubeProvider.NewAdditionalClient(ctx)
+ assertError(t, err)
+
+ require.Len(t, sshProvider.AdditionalClients(), 1, "additional ssh clients should be added to provider")
+
+ _, err = kubeProvider.NewAdditionalClientWithoutInitialize(ctx)
+ require.NoError(t, err, "client without initialize should be provided")
+
+ require.Len(t, sshProvider.AdditionalClients(), 2, "additional ssh clients should be added to provider")
+ })
+
+ t.Run("Cleanup", func(t *testing.T) {
+ getKubeForCleanupProvider := func(t *testing.T, test *tests.Test, rt runTest) *provider.DefaultKubeProvider {
+ defaultConfig := connectionConfigForContainer(firstContainer, rt.mode)
+ sshProvider := getSSHProvider(test, defaultConfig)
+ registerCleanupSSHProvider(t, test, sshProvider)
+
+ kubeProviderConfig := &kube.Config{}
+ kubeProvider, _ := getKubeProvider(t, test, kubeProviderConfig, sshProvider)
+ registerCleanupKubeProvider(t, test, kubeProvider)
+
+ return kubeProvider
+ }
+
+ t.Run("NoClients", func(t *testing.T) {
+ rt := runTest{name: "none"}
+ test := newSubTest(t, rt)
+
+ kubeProvider := getKubeForCleanupProvider(t, test, rt)
+
+ doClean := func() {
+ err := kubeProvider.Cleanup(context.TODO())
+ require.NoError(t, err, "should cleanup")
+ }
+
+ require.NotPanics(t, doClean, "should cleanup")
+ })
+
+ t.Run("OnlyDefault", func(t *testing.T) {
+ for _, rt := range runTests {
+ t.Run(rt.name, func(t *testing.T) {
+ test := newSubTest(t, rt)
+
+ kubeProvider := getKubeForCleanupProvider(t, test, rt)
+
+ defaultClient, err := kubeProvider.Client(context.TODO())
+ require.NoError(t, err, "should be created")
+
+ sshClient := extractSSHClient(t, defaultClient)
+
+ doClean := func() {
+ err := kubeProvider.Cleanup(context.TODO())
+ require.NoError(t, err, "should cleanup")
+ }
+
+ 
require.NotPanics(t, doClean, "should cleanup") + + require.False(t, kubeProvider.HasCurrent(), "should not have current") + assertSSHClientLive(t, test, sshClient, true) + }) + } + }) + + t.Run("WithAdditionals", func(t *testing.T) { + for _, rt := range runTests { + t.Run(rt.name, func(t *testing.T) { + test := newSubTest(t, rt) + + defaultConfig := connectionConfigForContainer(firstContainer, rt.mode) + sshProvider := getSSHProvider(test, defaultConfig) + registerCleanupSSHProvider(t, test, sshProvider) + + kubeProviderConfig := &kube.Config{} + kubeProvider, _ := getKubeProvider(t, test, kubeProviderConfig, sshProvider) + registerCleanupKubeProvider(t, test, kubeProvider) + + ctx := context.TODO() + + defaultClient, err := kubeProvider.Client(ctx) + require.NoError(t, err, "should be created") + + additionalClients := make([]connection.KubeClient, 0, 3) + + firstAdditional, err := kubeProvider.NewAdditionalClient(ctx) + require.NoError(t, err, "additional client should be created") + additionalClients = append(additionalClients, firstAdditional) + + secondAdditional, err := kubeProvider.NewAdditionalClient(ctx) + require.NoError(t, err, "additional client should be created") + additionalClients = append(additionalClients, secondAdditional) + + noInitClient, err := kubeProvider.NewAdditionalClientWithoutInitialize(ctx) + require.NoError(t, err, "additional client should be created") + + sshClient := extractSSHClient(t, defaultClient) + + doClean := func() { + err := kubeProvider.Cleanup(context.TODO()) + require.NoError(t, err, "should cleanup") + } + + require.NotPanics(t, doClean, "should cleanup") + + require.False(t, kubeProvider.HasCurrent(), "should not have current") + assertSSHClientLive(t, test, sshClient, true) + + require.Equal(t, 0, kubeProvider.AdditionalClientsCount(), "additional clients should be removed") + + for _, c := range additionalClients { + assertKubeClient(t, test, c, false) + assertSSHClientLive(t, test, extractSSHClient(t, c), false) + }
+ + assertSSHClientLive(t, test, extractSSHClient(t, noInitClient), false) + }) + } + }) + }) + }) + + t.Run("OverKubeconfig", func(t *testing.T) { + rt := runTest{name: "overKubeconfig"} + + getKubeconfigKubeProviderWithPort := func(t *testing.T, test *tests.Test, kind *tests.KINDCluster, port string) (*provider.DefaultKubeProvider, *provider.DefaultSSHProvider) { + defaultConfig := connectionConfigForContainer(firstContainer, rt.mode) + sshProvider := getSSHProvider(test, defaultConfig) + registerCleanupSSHProvider(t, test, sshProvider) + + kubeConfig := kind.KubeconfigWithIP("127.0.0.1", port) + + path := test.MustCreateTmpFile(t, kubeConfig, false, "kube-config.yaml") + + kubeProviderConfig := &kube.Config{ + KubeConfig: path, + } + + kubeProvider, _ := getKubeProvider(t, test, kubeProviderConfig, sshProvider) + registerCleanupKubeProvider(t, test, kubeProvider) + + return kubeProvider, sshProvider + } + + getKubeconfigKubeProvider := func(t *testing.T, test *tests.Test, kind *tests.KINDCluster) (*provider.DefaultKubeProvider, *provider.DefaultSSHProvider) { + return getKubeconfigKubeProviderWithPort(t, test, kind, kind.ControlPlanePort) + } + + t.Run("Client", func(t *testing.T) { + t.Run("SimpleGet", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider, sshProvider := getKubeconfigKubeProvider(t, test, kindCluster) + + assertSimpleGetKubeClient(t, test, kubeProvider) + + assertNoGetSSHConnection(t, sshProvider) + }) + }) + + t.Run("NewAdditionalClient", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider, sshProvider := getKubeconfigKubeProvider(t, test, kindCluster) + + assertNewAdditional(t, test, kubeProvider) + + assertNoGetSSHConnection(t, sshProvider) + }) + + t.Run("NewAdditionalClientWithoutInitialize", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider, sshProvider := getKubeconfigKubeProvider(t, test, kindCluster) + assertNewAdditionalClientsWithoutInitialize(t, test, kubeProvider, assertAdditionalClients) + 
+ assertNoGetSSHConnection(t, sshProvider) + }) + + t.Run("WithIncorrectConfig", func(t *testing.T) { + test := newSubTest(t, rt) + + assertIncorrectConfiguration(t, test, func(t *testing.T, test *tests.Test) *provider.DefaultKubeProvider { + port := fmt.Sprintf("%d", tests.RandPort()) + kubeProvider, _ := getKubeconfigKubeProviderWithPort(t, test, kindCluster, port) + return kubeProvider + }) + }) + + t.Run("Cleanup", func(t *testing.T) { + assertCleanupWithoutSSH(t, func(t *testing.T, test *tests.Test) *provider.DefaultKubeProvider { + kubeProvider, _ := getKubeconfigKubeProvider(t, test, kindCluster) + return kubeProvider + }) + }) + }) + + t.Run("OverRESTConfig", func(t *testing.T) { + rt := runTest{name: "overRESTKubeconfig"} + + getKubeconfigKubeProvider := func(t *testing.T, test *tests.Test, kind *tests.KINDCluster, token ...string) (*provider.DefaultKubeProvider, *provider.DefaultSSHProvider) { + defaultConfig := connectionConfigForContainer(firstContainer, rt.mode) + sshProvider := getSSHProvider(test, defaultConfig) + registerCleanupSSHProvider(t, test, sshProvider) + + restConfig, err := kind.RESTConfig() + require.NoError(t, err, "rest config should be created") + + if len(token) > 0 { + restConfig.BearerToken = token[0] + } + + kubeProviderConfig := &kube.Config{ + RestConfig: restConfig, + } + + kubeProvider, _ := getKubeProvider(t, test, kubeProviderConfig, sshProvider) + registerCleanupKubeProvider(t, test, kubeProvider) + + return kubeProvider, sshProvider + } + + t.Run("Client", func(t *testing.T) { + t.Run("SimpleGet", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider, sshProvider := getKubeconfigKubeProvider(t, test, kindCluster) + + assertSimpleGetKubeClient(t, test, kubeProvider) + + assertNoGetSSHConnection(t, sshProvider) + }) + }) + + t.Run("NewAdditionalClient", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider, sshProvider := getKubeconfigKubeProvider(t, test, kindCluster) + + assertNewAdditional(t, test, 
kubeProvider) + + assertNoGetSSHConnection(t, sshProvider) + }) + + t.Run("NewAdditionalClientWithoutInitialize", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider, sshProvider := getKubeconfigKubeProvider(t, test, kindCluster) + assertNewAdditionalClientsWithoutInitialize(t, test, kubeProvider, assertAdditionalClients) + + assertNoGetSSHConnection(t, sshProvider) + }) + + t.Run("WithIncorrectConfig", func(t *testing.T) { + test := newSubTest(t, rt) + + assertIncorrectConfiguration(t, test, func(t *testing.T, test *tests.Test) *provider.DefaultKubeProvider { + token := tests.RandString(24) + kubeProvider, _ := getKubeconfigKubeProvider(t, test, kindCluster, token) + return kubeProvider + }) + }) + + t.Run("Cleanup", func(t *testing.T) { + assertCleanupWithoutSSH(t, func(t *testing.T, test *tests.Test) *provider.DefaultKubeProvider { + kubeProvider, _ := getKubeconfigKubeProvider(t, test, kindCluster) + return kubeProvider + }) + }) + }) + + t.Run("LocalRun", func(t *testing.T) { + rt := runTest{name: "local_run"} + + getKubeconfigKubeProvider := func(t *testing.T, test *tests.Test, kind *tests.KINDCluster, rewriteEnv ...string) *provider.DefaultKubeProvider { + path := "" + + if len(rewriteEnv) > 0 { + path = rewriteEnv[0] + } else { + kubeConfig := kind.Kubeconfig() + path = test.MustCreateTmpFile(t, kubeConfig, false, "local-kube-config.yaml") + } + + tests.SetEnvs(t, map[string]string{ + "KUBECONFIG": path, + }) + + kubeProviderConfig := &kube.Config{ + LocalKubeClient: true, + } + + kubeProvider, _ := getKubeProvider(t, test, kubeProviderConfig, nil) + registerCleanupKubeProvider(t, test, kubeProvider) + + return kubeProvider + } + + t.Run("Client", func(t *testing.T) { + t.Run("SimpleGet", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider := getKubeconfigKubeProvider(t, test, kindCluster) + + assertSimpleGetKubeClient(t, test, kubeProvider) + }) + }) + + t.Run("NewAdditionalClient", func(t *testing.T) { + test := newSubTest(t, 
rt) + + kubeProvider := getKubeconfigKubeProvider(t, test, kindCluster) + + assertNewAdditional(t, test, kubeProvider) + }) + + t.Run("NewAdditionalClientWithoutInitialize", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider := getKubeconfigKubeProvider(t, test, kindCluster) + assertNewAdditionalClientsWithoutInitialize(t, test, kubeProvider, assertAdditionalClients) + }) + + t.Run("WithIncorrectConfig", func(t *testing.T) { + test := newSubTest(t, rt) + + assertIncorrectConfiguration(t, test, func(t *testing.T, test *tests.Test) *provider.DefaultKubeProvider { + path := "/tmp/not-exist-ewde" + kubeProvider := getKubeconfigKubeProvider(t, test, kindCluster, path) + return kubeProvider + }) + }) + + t.Run("Cleanup", func(t *testing.T) { + assertCleanupWithoutSSH(t, func(t *testing.T, test *tests.Test) *provider.DefaultKubeProvider { + return getKubeconfigKubeProvider(t, test, kindCluster) + }) + }) + }) +} + +func newSubTest(t *testing.T, rt runTest) *tests.Test { + return tests.ShouldNewIntegrationTest(t, rt.getName(t), tests.TestWithParallelRun(false)) +} + +type runTest struct { + mode sshconfig.Mode + name string +} + +func (r runTest) getName(t *testing.T) string { + nameParts := strings.Split(t.Name(), "/") + name := nameParts[len(nameParts)-2] + return fmt.Sprintf("KubeProvider%s%s", name, r.name) +} + +func createKINDCluster(t *testing.T, test *tests.Test, containers ...*tests.TestContainerWrapper) *tests.KINDCluster { + forKind := make([]*tests.SSHContainersForKind, 0, len(containers)) + for _, container := range containers { + client := gossh.NewClient( + context.TODO(), + test.Settings(), + tests.Session(container), + container.AgentPrivateKeys(), + ) + + err := client.Start() + require.NoError(t, err, "client should start for %s", container.Container.ContainerSettings().ContainerName) + + forKind = append(forKind, &tests.SSHContainersForKind{ + Container: container, + Client: client, + }) + } + + kindCluster := tests.CreateKINDCluster(t, 
&tests.KINDClusterCreateParams{ + Test: test, + ClusterName: "kube-provider-client", + Containers: forKind, + }) + + kindCluster.RegisterCleanup(t) + + for _, c := range forKind { + c.Client.Stop() + } + + return kindCluster +} + +func connectionConfigForContainer(container *tests.TestContainerWrapper, mode sshconfig.Mode) *sshconfig.ConnectionConfig { + containerPrivateKeys := container.AgentPrivateKeys() + privateKeys := make([]sshconfig.AgentPrivateKey, 0, len(containerPrivateKeys)) + for _, key := range containerPrivateKeys { + privateKeys = append(privateKeys, sshconfig.AgentPrivateKey{ + Key: key.Key, + Passphrase: key.Passphrase, + IsPath: true, + }) + } + + return &sshconfig.ConnectionConfig{ + Config: &sshconfig.Config{ + Mode: mode, + + User: container.Settings.Username, + Port: tests.Ptr(container.LocalPort()), + SudoPassword: container.Settings.Password, + + PrivateKeys: privateKeys, + }, + + Hosts: []sshconfig.Host{ + { + Host: "127.0.0.1", + }, + }, + } +} + +func getSSHProvider(test *tests.Test, config *sshconfig.ConnectionConfig) *provider.DefaultSSHProvider { + loopsParams := gossh.ClientLoopsParams{ + ConnectToHostDirectly: retry.NewEmptyParams( + retry.WithWait(2*time.Second), + retry.WithAttempts(10), + ), + NewSession: retry.NewEmptyParams( + retry.WithWait(1*time.Second), + retry.WithAttempts(5), + ), + } + + return provider.NewDefaultSSHProvider( + test.Settings(), + config, + provider.SSHClientWithLoopsParams(loopsParams), + provider.SSHClientWithStartAfterCreate(true), + provider.SSHClientWithID(test.FullName()), + ) +} + +func assertKubeClient(t *testing.T, test *tests.Test, client connection.KubeClient, success bool, errs ...string) { + const ( + key = "my-key" + ns = "default" + ) + + name := fmt.Sprintf("kube-cl-%s", tests.GenerateID(test.Name())) + content := tests.RandString(32) + + liveLoopParams := retry.NewEmptyParams( + retry.WithWait(1*time.Second), + retry.WithAttempts(4), + ) + + liveCtx := context.TODO() + err := 
kube.IsLive(liveCtx, client, liveLoopParams) + if !success { + require.Error(t, err, "should not be live") + for _, errSub := range errs { + require.Contains(t, err.Error(), errSub, "should contain %s", errSub) + } + return + } + require.NoError(t, err, "should be live") + + defaultParams := retry.NewEmptyParams( + retry.WithAttempts(4), + retry.WithWait(2*time.Second), + retry.WithLogger(test.GetLogger()), + ) + + createCMParams := defaultParams.Clone( + retry.WithName("Create ConfigMap %s/%s", ns, name), + ) + + err = retry.NewLoopWithParams(createCMParams).Run(func() error { + ctx, cancel := kubeRequestCtx() + defer cancel() + + cm := v1.ConfigMap{ + ObjectMeta: metav1.ObjectMeta{ + Name: name, + Namespace: ns, + }, + + Data: map[string]string{ + key: content, + }, + } + + _, err := client.CoreV1().ConfigMaps(ns).Create(ctx, &cm, metav1.CreateOptions{}) + + return err + }) + + require.NoError(t, err, "should create configmap") + + getCMParams := defaultParams.Clone( + retry.WithName("Get ConfigMap %s/%s", ns, name), + ) + + var gotCM *v1.ConfigMap + err = retry.NewLoopWithParams(getCMParams).Run(func() error { + ctx, cancel := kubeRequestCtx() + defer cancel() + cm, err := client.CoreV1().ConfigMaps(ns).Get(ctx, name, metav1.GetOptions{}) + if err != nil { + return err + } + gotCM = cm + return nil + }) + + require.NoError(t, err, "should get configmap") + require.NotNil(t, gotCM, "should get configmap") + require.Equal(t, content, gotCM.Data[key], "content should be equal") +} + +func kubeRequestCtx() (context.Context, context.CancelFunc) { + return context.WithTimeout(context.TODO(), 4*time.Second) +} + +func registerCleanupSSHProvider(t *testing.T, test *tests.Test, p *provider.DefaultSSHProvider) { + t.Cleanup(func() { + if err := p.Cleanup(context.TODO()); err != nil { + test.GetLogger().ErrorF("Failed to clean up %s ssh provider: %v", t.Name(), err) + } + }) +} + +func registerCleanupKubeProvider(t *testing.T, test *tests.Test, p *provider.DefaultKubeProvider) {
+ t.Cleanup(func() { + if err := p.Cleanup(context.TODO()); err != nil { + test.GetLogger().ErrorF("Failed to clean up %s kube provider: %v", t.Name(), err) + } + }) +} + +func getKubeProvider(t *testing.T, test *tests.Test, config *kube.Config, sshProvider connection.SSHProvider, opts ...provider.RunnerInterfaceOpt) (*provider.DefaultKubeProvider, provider.RunnerInterface) { + sett := test.Settings() + ri, err := provider.GetRunnerInterface(config, sett, sshProvider, opts...) + require.NoError(t, err, "runner interface should be provided") + + loopParams := retry.NewEmptyParams( + retry.WithAttempts(10), + retry.WithWait(2*time.Second), + ) + + return provider.NewDefaultKubeProvider(sett, config, ri).WithLoopsParams(provider.KubeProviderLoopsParams{ + InitClient: loopParams.Clone(), + WaitingReady: loopParams.Clone(), + }), ri +} + +func extractSSHClient(t *testing.T, kubeClient connection.KubeClient) connection.SSHClient { + kubeClientImpl, ok := kubeClient.(*kube.KubernetesClient) + require.True(t, ok, "kube client should be of type *kube.KubernetesClient") + + require.False(t, govalue.Nil(kubeClientImpl.NodeInterface), "kube client should have node interface") + + nodeWrapper, ok := kubeClientImpl.NodeInterface.(*ssh.NodeInterfaceWrapper) + require.True(t, ok, "node wrapper should be of type *ssh.NodeInterfaceWrapper") + + sshClient := nodeWrapper.Client() + + require.False(t, govalue.Nil(sshClient), "ssh client should not be nil") + + return sshClient +} + +func assertAdditionalClientsOverSSH(t *testing.T, test *tests.Test, clientFromClientCall connection.KubeClient, additional []connection.KubeClient, success bool) { + sshClientFromClientCall := extractSSHClient(t, clientFromClientCall) + + for i, client := range additional { + require.False(t, clientFromClientCall == client, "additional client should not be the default client") + + assertKubeClient(t, test, client, success) + + currentSSHClient := extractSSHClient(t, client) + require.False(t, currentSSHClient ==
sshClientFromClientCall, "additional ssh client should not be equal to the default client's ssh client") + + for _, a := range additional[i+1:] { + require.False(t, client == a, "additional client should not be equal to another additional client") + + additionalSSHClient := extractSSHClient(t, a) + require.False(t, additionalSSHClient == currentSSHClient, "additional ssh client should not be equal to another additional ssh client") + } + } +} + +func assertAdditionalClients(t *testing.T, test *tests.Test, clientFromClientCall connection.KubeClient, additional []connection.KubeClient, success bool) { + for i, client := range additional { + require.False(t, clientFromClientCall == client, "additional client should not be the default client") + + assertKubeClient(t, test, client, success) + + for _, a := range additional[i+1:] { + require.False(t, client == a, "additional client should not be equal to another additional client") + } + } +} + +func disJoinClients(all []connection.KubeClient, subSet []connection.KubeClient) []connection.KubeClient { + res := make([]connection.KubeClient, 0, len(subSet)) + for _, client := range all { + contain := false + for _, sub := range subSet { + if client == sub { + contain = true + break + } + } + + if !contain { + res = append(res, client) + } + } + + return res +} + +func assertSSHClientLive(t *testing.T, test *tests.Test, sshClient connection.SSHClient, live bool) { + if _, ok := sshClient.(*clissh.Client); ok { + test.GetLogger().InfoF("cli SSH client is always considered live") + live = true + } + + loopParams := retry.NewEmptyParams( + retry.WithAttempts(3), + retry.WithWait(2*time.Second), + retry.WithLogger(test.GetLogger()), + retry.WithName("Check that ssh client is live"), + ) + + out := "" + + err := retry.NewLoopWithParams(loopParams).Run(func() error { + cmd := sshClient.Command("echo", "-n", "RUN_OK") + o, err := cmd.CombinedOutput(context.TODO()) + if err != nil { + return err + } + + out = string(o) + return nil + }) + + if !live { +
require.Error(t, err, "ssh client should not be live") + return + } + + require.NoError(t, err, "ssh client should be live") + require.True(t, strings.HasSuffix(out, "RUN_OK"), "should have valid output for live check") +} + +func logClientSwitching(test *tests.Test) { + test.GetLogger().InfoF("Start switching ssh client") +} + +func assertNoGetSSHConnection(t *testing.T, sshProvider *provider.DefaultSSHProvider) { + require.False(t, sshProvider.HasCurrent(), "ssh provider should not have current client") + require.Len(t, sshProvider.AdditionalClients(), 0, "ssh provider should not have any additional clients") +} + +func assertSimpleGetKubeClient(t *testing.T, test *tests.Test, kubeProvider *provider.DefaultKubeProvider) []connection.KubeClient { + ctx := context.TODO() + + firstClient, err := kubeProvider.Client(ctx) + require.NoError(t, err, "first client should be created") + + assertKubeClient(t, test, firstClient, true) + + secondClient, err := kubeProvider.Client(ctx) + require.NoError(t, err, "second client should be created") + + require.True(t, firstClient == secondClient, "first client should be equal to second client") + + return []connection.KubeClient{firstClient, secondClient} +} + +type assertAdditional func(t *testing.T, test *tests.Test, clientFromClientCall connection.KubeClient, additional []connection.KubeClient, success bool) + +func assertNewAdditionalClientsWithoutInitialize(t *testing.T, test *tests.Test, kubeProvider *provider.DefaultKubeProvider, assert assertAdditional) { + ctx := context.TODO() + + firstClient, err := kubeProvider.Client(ctx) + require.NoError(t, err, "first client should be created") + + assertKubeClient(t, test, firstClient, true) + + additionalClients := make([]connection.KubeClient, 0, 2) + + firstAdditionalClient, err := kubeProvider.NewAdditionalClientWithoutInitialize(ctx) + require.NoError(t, err, "additional client should be created") + additionalClients = append(additionalClients, firstAdditionalClient) + +
secondAdditionalClient, err := kubeProvider.NewAdditionalClientWithoutInitialize(ctx) + require.NoError(t, err, "additional client should be created") + additionalClients = append(additionalClients, secondAdditionalClient) + + require.Equal(t, kubeProvider.AdditionalClientsCount(), len(additionalClients), "additional clients should be added to provider") + + assert(t, test, firstClient, additionalClients, false) + + // stopping additional clients that were not initialized should not fail and should not affect other clients + for _, c := range additionalClients { + kube.Stop(c, true) + } + + assertKubeClient(t, test, firstClient, true) +} + +type getProvidersWithoutSSH func(t *testing.T, test *tests.Test) *provider.DefaultKubeProvider + +func assertCleanupWithoutSSH(t *testing.T, getKubeForCleanupProvider getProvidersWithoutSSH) { + rt := runTest{name: "cleanup"} + + assertClean := func(t *testing.T, kubeProvider *provider.DefaultKubeProvider) { + doClean := func() { + err := kubeProvider.Cleanup(context.TODO()) + require.NoError(t, err, "should cleanup") + } + + require.NotPanics(t, doClean, "should cleanup") + require.False(t, kubeProvider.HasCurrent(), "should not have current") + require.Equal(t, 0, kubeProvider.AdditionalClientsCount(), "should not have any additional clients") + } + + t.Run("NoClients", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider := getKubeForCleanupProvider(t, test) + + assertClean(t, kubeProvider) + }) + + t.Run("OnlyDefault", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider := getKubeForCleanupProvider(t, test) + + defaultClient, err := kubeProvider.Client(context.TODO()) + require.NoError(t, err, "should be created") + + assertClean(t, kubeProvider) + + assertKubeClient(t, test, defaultClient, false, kube.ErrStoppedKubeClient.Error()) + }) + + t.Run("WithAdditional", func(t *testing.T) { + test := newSubTest(t, rt) + + kubeProvider := getKubeForCleanupProvider(t, test) + + ctx := context.TODO() + + defaultClient, err :=
kubeProvider.Client(ctx) + require.NoError(t, err, "should be created") + + additionalClients := make([]connection.KubeClient, 0, 3) + + firstAdditional, err := kubeProvider.NewAdditionalClient(ctx) + require.NoError(t, err, "additional client should be created") + additionalClients = append(additionalClients, firstAdditional) + + secondAdditional, err := kubeProvider.NewAdditionalClient(ctx) + require.NoError(t, err, "additional client should be created") + additionalClients = append(additionalClients, secondAdditional) + + _, err = kubeProvider.NewAdditionalClientWithoutInitialize(ctx) + require.NoError(t, err, "additional client should be created") + + assertClean(t, kubeProvider) + + assertKubeClient(t, test, defaultClient, false) + + for _, c := range additionalClients { + assertKubeClient(t, test, c, false) + } + }) +} + +func assertIncorrectConfiguration(t *testing.T, test *tests.Test, getKubeProvider getProvidersWithoutSSH) { + kubeProvider := getKubeProvider(t, test) + + ctx := context.TODO() + + kubeProvider.WithLoopsParams(provider.KubeProviderLoopsParams{ + WaitingReady: retry.NewEmptyParams( + retry.WithWait(2*time.Second), + retry.WithAttempts(4), + ), + }) + + _, err := kubeProvider.Client(ctx) + require.Error(t, err, "should not be created") + + _, err = kubeProvider.NewAdditionalClient(ctx) + require.Error(t, err, "additional client should not be created") + + _, err = kubeProvider.NewAdditionalClient(ctx) + require.Error(t, err, "additional client should not be created") + + _, err = kubeProvider.NewAdditionalClientWithoutInitialize(ctx) + require.NoError(t, err, "additional client without initialization should be created") + + require.Equal(t, kubeProvider.AdditionalClientsCount(), 1, "only the client without initialization should be stored") +} + +func assertNewAdditional(t *testing.T, test *tests.Test, kubeProvider *provider.DefaultKubeProvider) { + ctx := context.TODO() + + defaultClient, err := kubeProvider.Client(ctx) + require.NoError(t, err, "default client
should be created") + + assertKubeClient(t, test, defaultClient, true) + + additionalClients := make([]connection.KubeClient, 0, 2) + + firstAdditionalClient, err := kubeProvider.NewAdditionalClient(ctx) + require.NoError(t, err, "additional client should be created") + additionalClients = append(additionalClients, firstAdditionalClient) + + secondAdditionalClient, err := kubeProvider.NewAdditionalClient(ctx) + require.NoError(t, err, "additional client should be created") + additionalClients = append(additionalClients, secondAdditionalClient) + + require.Equal(t, len(additionalClients), kubeProvider.AdditionalClientsCount()) + + assertAdditionalClients(t, test, defaultClient, additionalClients, true) + + kube.Stop(secondAdditionalClient, true) + assertKubeClient(t, test, secondAdditionalClient, false, kube.ErrStoppedKubeClient.Error()) + + assertKubeClient(t, test, defaultClient, true) + assertKubeClient(t, test, firstAdditionalClient, true) +} diff --git a/pkg/tests/rand.go b/pkg/tests/rand.go index 0e2bbfc..5b6c09f 100644 --- a/pkg/tests/rand.go +++ b/pkg/tests/rand.go @@ -82,6 +82,10 @@ func RandPassword(n int) string { return randString(n, passwordRunes) } +func RandString(n int) string { + return randString(n, lettersRunes) +} + func randString(n int, letters []rune) string { randomizer := getRand() diff --git a/pkg/tests/ssh_container.go b/pkg/tests/ssh_container.go index bf3f630..316b8f9 100644 --- a/pkg/tests/ssh_container.go +++ b/pkg/tests/ssh_container.go @@ -732,7 +732,7 @@ func (c *SSHContainer) defaultRetryParams(name string) retry.Params { func (c *SSHContainer) DownloadKubectl(version string) error { args := []string{"curl", "-LO", "https://dl.k8s.io/release/" + version + "/bin/linux/amd64/kubectl"} - if err := c.ExecToContainer("kubectl", args...); err != nil { + if err := c.ExecToContainer("download kubectl", args...); err != nil { return err } diff --git a/pkg/tests/test.go b/pkg/tests/test.go index a9b5396..cd2c0db 100644 --- 
a/pkg/tests/test.go +++ b/pkg/tests/test.go @@ -40,6 +40,7 @@ type testOpts struct { logBuffer *bytes.Buffer prettyLogger bool isIntegration bool + authSock string } type TestOpt func(opts *testOpts) @@ -73,6 +74,12 @@ func TestWithParallelRun(p bool) TestOpt { } } +func TestWithAuthSock(p string) TestOpt { + return func(opts *testOpts) { + opts.authSock = p + } +} + func applyTestOpts(opts ...TestOpt) testOpts { options := testOpts{} for _, opt := range opts { @@ -153,11 +160,17 @@ func NewTest(testName string, opts ...TestOpt) (*Test, error) { resTest.Logger.InfoF("Created tmp dir '%s' for test '%s'", resTest.tmpDir, resTest.testName) - resTest.settings = settings.NewBaseProviders(settings.ProviderParams{ + params := settings.ProviderParams{ LoggerProvider: log.SimpleLoggerProvider(resTest.Logger), IsDebug: options.isDebug, TmpDir: resTest.tmpDir, - }) + } + + if options.authSock != "" { + params.AuthSock = options.authSock + } + + resTest.settings = settings.NewBaseProviders(params) return resTest, nil } @@ -175,6 +188,11 @@ func (s *Test) WithEnvsPrefix(p string) *Test { return s } +func (s *Test) WithAuthSock(p string) *Test { + s.settings = s.settings.Clone(settings.CloneWithAuthSock(p)) + return s +} + func (s *Test) GetLogger() *log.InMemoryLogger { return s.Logger } diff --git a/pkg/utils/file/reader.go b/pkg/utils/file/reader.go index 3dd07f7..7d9dd8b 100644 --- a/pkg/utils/file/reader.go +++ b/pkg/utils/file/reader.go @@ -24,18 +24,9 @@ import ( ) func Reader(path string, fileType string) (io.ReadCloser, error) { - fullPath, err := filepath.Abs(path) + fullPath, err := isExists(path, fileType, true) if err != nil { - return nil, fmt.Errorf("Cannot get abs path for %s: %w", path, err) - } - - stat, err := os.Stat(fullPath) - if err != nil { - return nil, fmt.Errorf("Cannot get %s file info for %s: %w", fileType, fullPath, err) - } - - if stat.IsDir() || !stat.Mode().IsRegular() { - return nil, fmt.Errorf("%s path '%s' should be regular file", fileType, 
fullPath) + return nil, err } return os.Open(fullPath) @@ -56,3 +47,34 @@ func ReadFile(path string, fileType string, logger ...log.Logger) ([]byte, error return io.ReadAll(reader) } + +func IsExists(path string, fileType string) error { + _, err := isExists(path, fileType, false) + return err +} + +func isExists(path string, fileType string, shouldRegular bool) (string, error) { + if path == "" { + return "", fmt.Errorf("pass empty path for %s", fileType) + } + + fullPath, err := filepath.Abs(path) + if err != nil { + return "", fmt.Errorf("cannot get abs path for %s: %w", path, err) + } + + stat, err := os.Stat(fullPath) + if err != nil { + return "", fmt.Errorf("cannot get %s file info for %s: %w", fileType, fullPath, err) + } + + if stat.IsDir() { + return "", fmt.Errorf("%s path '%s' should be a file not dir", fileType, fullPath) + } + + if shouldRegular && !stat.Mode().IsRegular() { + return "", fmt.Errorf("%s path '%s' should be regular file", fileType, fullPath) + } + + return fullPath, nil +} diff --git a/pkg/utils/rand/rand.go b/pkg/utils/rand/rand.go new file mode 100644 index 0000000..85f721d --- /dev/null +++ b/pkg/utils/rand/rand.go @@ -0,0 +1,42 @@ +// Copyright 2026 Flant JSC +// +// Licensed under the Apache License, Version 2.0 (the "License"); +// you may not use this file except in compliance with the License. +// You may obtain a copy of the License at +// +// http://www.apache.org/licenses/LICENSE-2.0 +// +// Unless required by applicable law or agreed to in writing, software +// distributed under the License is distributed on an "AS IS" BASIS, +// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +// See the License for the specific language governing permissions and +// limitations under the License. 
+ +package rand + +import ( + mathrand "math/rand" + "time" +) + +func RangeExclude(min, max int, exclude map[int]struct{}) int { + randomizer := getRand() + for i := 0; i < 100; i++ { + v := rangeWithRandomized(randomizer, min, max) + if _, ok := exclude[v]; ok { + continue + } + + return v + } + + panic("random range exclude failed after 100 iterations") +} + +func rangeWithRandomized(randomizer *mathrand.Rand, min, max int) int { + return randomizer.Intn(max-min) + min +} + +func getRand() *mathrand.Rand { + return mathrand.New(mathrand.NewSource(time.Now().UnixNano())) +} diff --git a/tests/go.mod b/tests/go.mod index c4a6a59..73be965 100644 --- a/tests/go.mod +++ b/tests/go.mod @@ -11,6 +11,8 @@ require ( github.com/deckhouse/deckhouse/pkg/log v0.1.1-0.20251230144142-2bad7c3d1edf // indirect github.com/deckhouse/lib-dhctl v0.13.0 // indirect github.com/deckhouse/lib-gossh v0.3.0 // indirect + github.com/emicklei/go-restful/v3 v3.11.0 // indirect + github.com/fxamacker/cbor/v2 v2.7.0 // indirect github.com/go-logr/logr v1.4.2 // indirect github.com/go-openapi/analysis v0.19.10 // indirect github.com/go-openapi/errors v0.19.7 // indirect @@ -23,25 +25,48 @@ require ( github.com/go-openapi/swag v0.23.0 // indirect github.com/go-openapi/validate v0.19.12 // indirect github.com/go-stack/stack v1.8.0 // indirect + github.com/gogo/protobuf v1.3.2 // indirect + github.com/golang/protobuf v1.5.4 // indirect + github.com/google/gnostic-models v0.6.8 // indirect github.com/google/go-cmp v0.6.0 // indirect + github.com/google/gofuzz v1.2.0 // indirect + github.com/google/uuid v1.6.0 // indirect github.com/gookit/color v1.5.2 // indirect github.com/hashicorp/errwrap v1.0.0 // indirect github.com/hashicorp/go-multierror v1.1.1 // indirect github.com/josharian/intern v1.0.0 // indirect + github.com/json-iterator/go v1.1.12 // indirect github.com/mailru/easyjson v0.7.7 // indirect github.com/mitchellh/mapstructure v1.3.2 // indirect + github.com/modern-go/concurrent 
v0.0.0-20180306012644-bacd9c7ef1dd // indirect + github.com/modern-go/reflect2 v1.0.2 // indirect + github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 // indirect github.com/name212/govalue v1.1.0 // indirect + github.com/pkg/errors v0.9.1 // indirect github.com/spf13/pflag v1.0.10 // indirect github.com/werf/logboek v0.5.5 // indirect + github.com/x448/float16 v0.8.4 // indirect github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778 // indirect go.mongodb.org/mongo-driver v1.5.1 // indirect go.yaml.in/yaml/v2 v2.4.2 // indirect golang.org/x/crypto v0.47.0 // indirect golang.org/x/net v0.48.0 // indirect + golang.org/x/oauth2 v0.23.0 // indirect golang.org/x/sys v0.40.0 // indirect golang.org/x/term v0.39.0 // indirect golang.org/x/text v0.33.0 // indirect + golang.org/x/time v0.7.0 // indirect + google.golang.org/protobuf v1.35.1 // indirect + gopkg.in/evanphx/json-patch.v4 v4.12.0 // indirect + gopkg.in/inf.v0 v0.9.1 // indirect + k8s.io/api v0.32.10 // indirect + k8s.io/apimachinery v0.32.10 // indirect + k8s.io/client-go v0.32.10 // indirect k8s.io/klog/v2 v2.130.1 // indirect + k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f // indirect + k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 // indirect + sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 // indirect + sigs.k8s.io/structured-merge-diff/v4 v4.4.2 // indirect sigs.k8s.io/yaml v1.6.0 // indirect ) diff --git a/tests/go.sum b/tests/go.sum index 9322eae..5aa406b 100644 --- a/tests/go.sum +++ b/tests/go.sum @@ -27,6 +27,10 @@ github.com/deckhouse/lib-gossh v0.3.0 h1:FUAlF8+fLnBCII9hXSNx+arZ4PH3H/6fzp5LBln github.com/deckhouse/lib-gossh v0.3.0/go.mod h1:6bT8jf2fkBPEhYBU35+vMBr5YscliTiS+Vr8v06C+70= github.com/docker/go-units v0.3.3/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= github.com/docker/go-units v0.4.0/go.mod h1:fgPhTUdO+D/Jk86RDLlptpiXQzgHJF7gydDDbaIK4Dk= +github.com/emicklei/go-restful/v3 v3.11.0 h1:rAQeMHw1c7zTmncogyy8VvRZwtkmkZ4FxERmMY4rD+g= 
+github.com/emicklei/go-restful/v3 v3.11.0/go.mod h1:6n3XBCmQQb25CM2LCACGz8ukIrRry+4bhvbpWn3mrbc= +github.com/fxamacker/cbor/v2 v2.7.0 h1:iM5WgngdRBanHcxugY4JySA0nk1wZorNOpTgCMedv5E= +github.com/fxamacker/cbor/v2 v2.7.0/go.mod h1:pxXPTn3joSm21Gbwsv0w9OSA2y1HFR9qXEeXQVeNoDQ= github.com/globalsign/mgo v0.0.0-20180905125535-1ca0a4f7cbcb/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q= github.com/globalsign/mgo v0.0.0-20181015135952-eeefdecb41b8/go.mod h1:xkRDCp4j0OGD1HRkm4kmhM+pmpv3AKq5SU7GMg4oO/Q= github.com/go-logr/logr v1.4.2 h1:6pFjapn8bFcIbiKo3XT4j/BhANplGihG6tvd+8rYgrY= @@ -105,6 +109,8 @@ github.com/go-openapi/validate v0.19.12/go.mod h1:Rzou8hA/CBw8donlS6WNEUQupNvUZ0 github.com/go-sql-driver/mysql v1.5.0/go.mod h1:DCzpHaOWr8IXmIStZouvnhqoel9Qv2LBy8hT2VhHyBg= github.com/go-stack/stack v1.8.0 h1:5SgMzNM5HxrEjV0ww2lTmX6E2Izsfxas4+YHWRs3Lsk= github.com/go-stack/stack v1.8.0/go.mod h1:v0f6uXyyMGvRgIKkXu+yp6POWl0qKG85gN/melR3HDY= +github.com/go-task/slim-sprig/v3 v3.0.0 h1:sUs3vkvUymDpBKi3qH1YSqBQk9+9D/8M2mN1vB6EwHI= +github.com/go-task/slim-sprig/v3 v3.0.0/go.mod h1:W848ghGpv3Qj3dhTPRyJypKRiqCdHZiAzKg9hl15HA8= github.com/gobuffalo/attrs v0.0.0-20190224210810-a9411de4debd/go.mod h1:4duuawTqi2wkkpB4ePgWMaai6/Kc6WEz83bhFwpHzj0= github.com/gobuffalo/depgen v0.0.0-20190329151759-d478694a28d3/go.mod h1:3STtPUQYuzV0gBVOY3vy6CfMm/ljR4pABfrTeHNLHUY= github.com/gobuffalo/depgen v0.1.0/go.mod h1:+ifsuy7fhi15RWncXQQKjWS9JPkdah5sZvtHc2RXGlg= @@ -129,12 +135,24 @@ github.com/gobuffalo/packd v0.1.0/go.mod h1:M2Juc+hhDXf/PnmBANFCqx4DM3wRbgDvnVWe github.com/gobuffalo/packr/v2 v2.0.9/go.mod h1:emmyGweYTm6Kdper+iywB6YK5YzuKchGtJQZ0Odn4pQ= github.com/gobuffalo/packr/v2 v2.2.0/go.mod h1:CaAwI0GPIAv+5wKLtv8Afwl+Cm78K/I/VCm/3ptBN+0= github.com/gobuffalo/syncx v0.0.0-20190224160051-33c29581e754/go.mod h1:HhnNqWY95UYwwW3uSASeV7vtgYkT2t16hJgV3AEPUpw= +github.com/gogo/protobuf v1.3.2 h1:Ov1cvc58UF3b5XjBnZv7+opcTcQFZebYjWzi34vdm4Q= +github.com/gogo/protobuf v1.3.2/go.mod 
h1:P1XiOD3dCwIKUDQYPy72D8LYyHL2YPYrpS2s69NZV8Q= +github.com/golang/protobuf v1.5.4 h1:i7eJL8qZTpSEXOPTxNKhASYpMn+8e5Q6AdndVa1dWek= +github.com/golang/protobuf v1.5.4/go.mod h1:lnTiLA8Wa4RWRcIUkrtSVa5nRhsEGBg48fD6rSs7xps= github.com/golang/snappy v0.0.1/go.mod h1:/XxbfmMg8lxefKM7IXC3fBNl/7bRcc72aCRzEWrmP2Q= +github.com/google/gnostic-models v0.6.8 h1:yo/ABAfM5IMRsS1VnXjTBvUb61tFIHozhlYvRgGre9I= +github.com/google/gnostic-models v0.6.8/go.mod h1:5n7qKqH0f5wFt+aWF8CW6pZLLNOfYuF5OpfBSENuI8U= github.com/google/go-cmp v0.2.0/go.mod h1:oXzfMopK8JAjlY9xF4vHSVASa0yLyX7SntLO5aqRK0M= github.com/google/go-cmp v0.3.0/go.mod h1:8QqcDgzrUqlUb/G2PQTWiueGozuR1884gddMywk6iLU= github.com/google/go-cmp v0.5.2/go.mod h1:v8dTdLbMG2kIc/vJvl+f65V22dbkXbowE6jgT/gNBxE= +github.com/google/go-cmp v0.5.9/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= github.com/google/go-cmp v0.6.0 h1:ofyhxvXcZhMsU5ulbFiLKl/XBFqE1GSq7atu8tAmTRI= github.com/google/go-cmp v0.6.0/go.mod h1:17dUlkBOakJ0+DkrSSNjCkIjxS6bF9zb3elmeNGIjoY= +github.com/google/gofuzz v1.0.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/gofuzz v1.2.0 h1:xRy4A+RhZaiKjJ1bPfwQ8sedCA+YS2YcCHW6ec7JMi0= +github.com/google/gofuzz v1.2.0/go.mod h1:dBl0BpW6vV/+mYPU4Po3pmUjxk6FQPldtuIdl/M65Eg= +github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db h1:097atOisP2aRj7vFgYQBbFN4U4JNXUNYpxael3UzMyo= +github.com/google/pprof v0.0.0-20241029153458-d1b30febd7db/go.mod h1:vavhavw2zAxS5dIdcRluK6cSGGPlZynqzFM8NdvU144= github.com/google/uuid v1.0.0/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.1.1/go.mod h1:TIyPZe4MgqvfeYDBFedMoGGpEw/LqOeaOT+nhxU+yHo= github.com/google/uuid v1.6.0 h1:NIvaJDMOsjHA8n1jAhLSgzrAzy1Hgr+hNrb57e+94F0= @@ -151,9 +169,13 @@ github.com/jmespath/go-jmespath/internal/testify v1.5.1/go.mod h1:L3OGu8Wl2/fWfC github.com/joho/godotenv v1.3.0/go.mod h1:7hK45KPybAkOC6peb+G5yklZfMxEjkZhHbwpqxOKXbg= github.com/josharian/intern v1.0.0 
h1:vlS4z54oSdjm0bgjRigI+G1HpF+tI+9rE5LLzOg8HmY= github.com/josharian/intern v1.0.0/go.mod h1:5DoeVV0s6jJacbCEi61lwdGj/aVlrQvzHFFd8Hwg//Y= +github.com/json-iterator/go v1.1.12 h1:PV8peI4a0ysnczrg+LtxykD8LfKY9ML6u2jnxaEnrnM= +github.com/json-iterator/go v1.1.12/go.mod h1:e30LSqwooZae/UwlEbR2852Gd8hjQvJoHmT4TnhNGBo= github.com/karrick/godirwalk v1.8.0/go.mod h1:H5KPZjojv4lE+QYImBI8xVtrBRgYrIVsaRPx4tDPEn4= github.com/karrick/godirwalk v1.10.3/go.mod h1:RoGL9dQei4vP9ilrpETWE8CLOZ1kiN0LhBygSwrAsHA= github.com/kisielk/errcheck v1.2.0/go.mod h1:/BMXB+zMLi60iA8Vv6Ksmxu/1UDYcXs4uQLJ+jE2L00= +github.com/kisielk/errcheck v1.5.0/go.mod h1:pFxgyoBC7bSaBwPgfKdkLd5X25qrDl4LWUI2bnpBCr8= +github.com/kisielk/gotool v1.0.0/go.mod h1:XhKaO+MFFWcvkIS/tQcRk01m1F5IRFswLeQ+oQHNcck= github.com/klauspost/compress v1.9.5/go.mod h1:RyIbtBH6LamlWaDj8nUwkbUhJ87Yi3uG0guNDohfE1A= github.com/konsorten/go-windows-terminal-sequences v1.0.1/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= github.com/konsorten/go-windows-terminal-sequences v1.0.2/go.mod h1:T0+1ngSBFLxvqU3pZ+m/2kptfBszLMUkC4ZK/EgS/cQ= @@ -178,15 +200,27 @@ github.com/markbates/safe v1.0.1/go.mod h1:nAqgmRi7cY2nqMc92/bSEeQA+R4OheNU2T1kN github.com/mitchellh/mapstructure v1.1.2/go.mod h1:FVVH3fgwuzCH5S8UJGiWEs2h04kUh9fWfEaFds41c1Y= github.com/mitchellh/mapstructure v1.3.2 h1:mRS76wmkOn3KkKAyXDu42V+6ebnXWIztFSYGN7GeoRg= github.com/mitchellh/mapstructure v1.3.2/go.mod h1:bFUtVrKA4DC2yAKiSyO/QUcy7e+RRV2QTWOzhPopBRo= +github.com/modern-go/concurrent v0.0.0-20180228061459-e0a39a4cb421/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd h1:TRLaZ9cD/w8PVh93nsPXa1VrQ6jlwL5oN8l14QlcNfg= +github.com/modern-go/concurrent v0.0.0-20180306012644-bacd9c7ef1dd/go.mod h1:6dJC0mAP4ikYIbvyc7fijjWJddQyLn8Ig3JB5CqoB9Q= +github.com/modern-go/reflect2 v1.0.2 h1:xBagoLtFs94CBntxluKeaWgTMpvLxC4ur3nMaC9Gz0M= +github.com/modern-go/reflect2 v1.0.2/go.mod 
h1:yWuevngMOJpCy52FWWMvUC8ws7m/LJsjYzDa0/r8luk= github.com/montanaflynn/stats v0.0.0-20171201202039-1bf9dbcd8cbe/go.mod h1:wL8QJuTMNUDYhXwkmfOly8iTdp5TEcJFWZD2D7SIkUc= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822 h1:C3w9PqII01/Oq1c1nUAm88MOHcQC9l5mIlSMApZMrHA= +github.com/munnerz/goautoneg v0.0.0-20191010083416-a7dc8b61c822/go.mod h1:+n7T8mK8HuQTcFwEeznm/DIxMOiR9yIdICNftLE1DvQ= github.com/name212/govalue v1.1.0 h1:kSdUVs21cM5bFp7RW5sWPrwQ0RzC/Xhk3f+A+dUL6TM= github.com/name212/govalue v1.1.0/go.mod h1:3mLA4mFb82esucQHCOIAnUjN7e7AZnRYEfxeaHLKjho= github.com/niemeyer/pretty v0.0.0-20200227124842-a10e7caefd8e/go.mod h1:zD1mROLANZcx1PVRCS0qkT7pwLkGfwJo4zjcN/Tysno= +github.com/onsi/ginkgo/v2 v2.21.0 h1:7rg/4f3rB88pb5obDgNZrNHrQ4e6WpjonchcpuBRnZM= +github.com/onsi/ginkgo/v2 v2.21.0/go.mod h1:7Du3c42kxCUegi0IImZ1wUQzMBVecgIHjR1C+NkhLQo= +github.com/onsi/gomega v1.35.1 h1:Cwbd75ZBPxFSuZ6T+rN/WCb/gOc6YgFBXLlZLhC7Ds4= +github.com/onsi/gomega v1.35.1/go.mod h1:PvZbdDc8J6XJEpDK4HCuRBm8a6Fzp9/DmhC9C7yFlog= github.com/pborman/uuid v1.2.0/go.mod h1:X/NO0urCmaxf9VXbdlT7C2Yzkj2IKimNn4k+gtPdI/k= github.com/pelletier/go-toml v1.4.0/go.mod h1:PN7xzY2wHTK0K9p34ErDQMlFxa51Fk0OUruD3k1mMwo= github.com/pelletier/go-toml v1.7.0/go.mod h1:vwGMzjaWMwyfHwgIBhI2YUM4fB6nL6lVAvS1LBMMhTE= github.com/pkg/errors v0.8.0/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pkg/errors v0.8.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= +github.com/pkg/errors v0.9.1 h1:FEBLx1zS214owpjy7qsBeixbURkuhQAwrK5UwLGTwt4= github.com/pkg/errors v0.9.1/go.mod h1:bwawxfHBFNV+L2hUp1rHADufV3IMtnDRdf1r5NINEl0= github.com/pmezard/go-difflib v1.0.0/go.mod h1:iKH77koFhYxTK1pcRnkKkqfTogsbg7gZNVY4sRDYZ/4= github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 h1:Jamvg5psRIccs7FGNTlIRMkT8wgtp5eCXdBlqhYGL6U= @@ -224,6 +258,8 @@ github.com/tidwall/pretty v1.0.0/go.mod h1:XNkn88O1ChpSDQmQeStsy+sBenx6DDtFZJxhV github.com/vektah/gqlparser v1.1.2/go.mod 
h1:1ycwN7Ij5njmMkPPAOaRFY4rET2Enx7IkVv3vaXspKw= github.com/werf/logboek v0.5.5 h1:RmtTejHJOyw0fub4pIfKsb7OTzD90ZOUyuBAXqYqJpU= github.com/werf/logboek v0.5.5/go.mod h1:Gez5J4bxekyr6MxTmIJyId1F61rpO+0/V4vjCIEIZmk= +github.com/x448/float16 v0.8.4 h1:qLwI1I70+NjRFUR3zs1JPUCgaCXSh3SW62uAKT1mSBM= +github.com/x448/float16 v0.8.4/go.mod h1:14CWIYCyZA/cWjXOioeEpHeN/83MdbZDRQHoFcYsOfg= github.com/xdg-go/pbkdf2 v1.0.0/go.mod h1:jrpuAogTd400dnrH08LKmI/xc1MbPOebTwRqcT5RDeI= github.com/xdg-go/scram v1.0.2/go.mod h1:1WAq6h33pAW+iRreB34OORO2Nf7qel3VV3fjBj+hCSs= github.com/xdg-go/stringprep v1.0.2/go.mod h1:8F9zXuvzgwmyT5DUm4GUfZGDdT3W+LCvS6+da4O5kxM= @@ -232,6 +268,8 @@ github.com/xdg/stringprep v0.0.0-20180714160509-73f8eece6fdc/go.mod h1:Jhud4/sHM github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778 h1:QldyIu/L63oPpyvQmHgvgickp1Yw510KJOqX7H24mg8= github.com/xo/terminfo v0.0.0-20210125001918-ca9a967f8778/go.mod h1:2MuV+tbUrU1zIOPMxZ5EncGwgmMJsa+9ucAQZXxsObs= github.com/youmark/pkcs8 v0.0.0-20181117223130-1be2e3e5546d/go.mod h1:rHwXgn7JulP+udvsHwJoVG1YGAP6VLg4y9I5dyZdqmA= +github.com/yuin/goldmark v1.1.27/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= +github.com/yuin/goldmark v1.2.1/go.mod h1:3hX8gzYuyVAZsxl0MRgGTJEmQBFcNTphYh9decYSb74= go.mongodb.org/mongo-driver v1.0.3/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM= go.mongodb.org/mongo-driver v1.1.1/go.mod h1:u7ryQJ+DOzQmeO7zB6MHyr8jkEQvC8vH7qLUO4lqsUM= go.mongodb.org/mongo-driver v1.3.0/go.mod h1:MSWZXKOynuguX+JSvwP8i+58jYCXxbia8HS3gZBapIE= @@ -249,24 +287,33 @@ golang.org/x/crypto v0.0.0-20190422162423-af44ce270edf/go.mod h1:WFFai1msRO1wXaE golang.org/x/crypto v0.0.0-20190530122614-20be4c3c3ed5/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190611184440-5c40567a22f8/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20190617133340-57b3e21c3d56/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= +golang.org/x/crypto 
v0.0.0-20191011191535-87dc89f01550/go.mod h1:yigFU9vqHzYiE8UmvKecakEJjdnWj3jj499lnFckfCI= golang.org/x/crypto v0.0.0-20200302210943-78000ba7a073/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= +golang.org/x/crypto v0.0.0-20200622213623-75b288015ac9/go.mod h1:LzIPMQfyMNhhGPhUkYOs5KpL4U8rLKemX1yGLhDgUto= golang.org/x/crypto v0.47.0 h1:V6e3FRj+n4dbpw86FJ8Fv7XVOql7TEwpHapKoMJ/GO8= golang.org/x/crypto v0.47.0/go.mod h1:ff3Y9VzzKbwSSEzWqJsJVBnWmRwRSHt/6Op5n9bQc4A= +golang.org/x/mod v0.2.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= +golang.org/x/mod v0.3.0/go.mod h1:s0Qsj1ACt9ePp/hMypM3fl4fZqREWJwdYDEqhRiZZUA= golang.org/x/net v0.0.0-20181005035420-146acd28ed58/go.mod h1:mL1N/T3taQHkDXs73rZJwtUhF3w3ftmwwsq0BUmARs4= golang.org/x/net v0.0.0-20190311183353-d8887717615a/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190320064053-1272bf9dcd53/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190404232315-eb5bcb51f2a3/go.mod h1:t9HGtf8HONx5eT2rtn7q6eTqICYqUVnKs3thJo3Qplg= golang.org/x/net v0.0.0-20190613194153-d28f0bde5980/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= +golang.org/x/net v0.0.0-20190620200207-3b0461eec859/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20190827160401-ba9fcec4b297/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200202094626-16171245cfb2/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200226121028-0de0cce0169b/go.mod h1:z5CRVTTTmAJ677TzLLGU+0bjPO0LkuOLi4/5GtJWs/s= golang.org/x/net v0.0.0-20200602114024-627f9648deb9/go.mod h1:qpuaurCH72eLCgpAm/N6yyVIVM9cpaDIP3A8BGJEC5A= +golang.org/x/net v0.0.0-20201021035429-f5854403a974/go.mod h1:sp8m0HH+o8qH0wwXwYZr8TS3Oi6o0r6Gce1SSxlDquU= golang.org/x/net v0.48.0 h1:zyQRTTrjc33Lhh0fBgT/H3oZq9WuvRR5gPC70xpDiQU= golang.org/x/net v0.48.0/go.mod h1:+ndRgGjkh8FGtu1w1FGbEC31if4VrNVMuKTgcAAnQRY= +golang.org/x/oauth2 v0.23.0 
h1:PbgcYx2W7i4LvjJWEbf0ngHV6qJYr86PkAV3bXdLEbs= +golang.org/x/oauth2 v0.23.0/go.mod h1:XYTD2NtWslqkgxebSiOHnXEap4TF09sJSc7H1sXbhtI= golang.org/x/sync v0.0.0-20190227155943-e225da77a7e6/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190412183630-56d357773e84/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190423024810-112230192c58/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sync v0.0.0-20190911185100-cd5d95a43a6e/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= +golang.org/x/sync v0.0.0-20201020160332-67f06af15bc9/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM= golang.org/x/sys v0.0.0-20180905080454-ebe1bf3edb33/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190215142949-d0b11bdaac8a/go.mod h1:STP8DvDyc/dI5b8T5hshtkjS+E42TnysNCUPdjciGhY= golang.org/x/sys v0.0.0-20190321052220-f7bb7a8bee54/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= @@ -277,6 +324,7 @@ golang.org/x/sys v0.0.0-20190422165155-953cdadca894/go.mod h1:h1NjWce9XRLGQEsW7w golang.org/x/sys v0.0.0-20190531175056-4c3a928424d2/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20190616124812-15dcb6c0061f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20200323222414-85ca7c5b95cd/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= +golang.org/x/sys v0.0.0-20200930185726-fdedc70b468f/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.0.0-20210330210617-4fbd30eecc44/go.mod h1:h1NjWce9XRLGQEsW7wpKNCjG9DtNlClVuFLEZdDNbEs= golang.org/x/sys v0.40.0 h1:DBZZqJ2Rkml6QMQsZywtnjnnGvHza6BTfYFWY9kjEWQ= golang.org/x/sys v0.40.0/go.mod h1:OgkHotnGiDImocRcuBABYBEXf8A9a87e/uXjp9XT3ks= @@ -284,9 +332,12 @@ golang.org/x/term v0.39.0 h1:RclSuaJf32jOqZz74CkPA9qFuVTX7vhLlpfj/IGWlqY= golang.org/x/term v0.39.0/go.mod h1:yxzUCTP/U+FzoxfdKmLaA0RV1WgE0VY7hXBwKtY/4ww= golang.org/x/text v0.3.0/go.mod 
h1:NqM8EUOU14njkJ3fqMW+pc6Ldnwhi/IjpwHt7yyuwOQ= golang.org/x/text v0.3.2/go.mod h1:bEr9sfX3Q8Zfm5fL9x+3itogRgK3+ptLWKqgva+5dAk= +golang.org/x/text v0.3.3/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.3.5/go.mod h1:5Zoc/QRtKVWzQhOtBMvqHzDpF6irO9z98xDceosuGiQ= golang.org/x/text v0.33.0 h1:B3njUFyqtHDUI5jMn1YIr5B0IE2U0qck04r6d4KPAxE= golang.org/x/text v0.33.0/go.mod h1:LuMebE6+rBincTi9+xWTY8TztLzKHc/9C1uBCG27+q8= +golang.org/x/time v0.7.0 h1:ntUhktv3OPE6TgYxXWv9vKvUSJyIFJlyohwbkEwPrKQ= +golang.org/x/time v0.7.0/go.mod h1:3BpzKBy/shNhVucY/MWOyx10tF3SFh9QdLuxbVysPQM= golang.org/x/tools v0.0.0-20180917221912-90fa682c2a6e/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20181030221726-6c7e314b6563/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= golang.org/x/tools v0.0.0-20190125232054-d66bd3c5d5a6/go.mod h1:n7NCudcB/nEzxVGmLbDWY5pfWTLqBcC2KZ6jyYvM4mQ= @@ -296,13 +347,27 @@ golang.org/x/tools v0.0.0-20190420181800-aa740d480789/go.mod h1:LCzVGOaR6xXOjkQ3 golang.org/x/tools v0.0.0-20190531172133-b3315ee88b7d/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190614205625-5aca471b1d59/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= golang.org/x/tools v0.0.0-20190617190820-da514acc4774/go.mod h1:/rFqwRUd4F7ZHNgwSSTFct+R/Kf4OFW1sUzUTQQTgfc= +golang.org/x/tools v0.0.0-20191119224855-298f0cb1881e/go.mod h1:b+2E5dAYhXwXZwtnZ6UAqBI28+e2cm9otk0dWdXHAEo= +golang.org/x/tools v0.0.0-20200619180055-7c47624df98f/go.mod h1:EkVYQZoAsY45+roYkvgYkIh4xh/qjgUK9TdY2XT94GE= +golang.org/x/tools v0.0.0-20210106214847-113979e3529a/go.mod h1:emZCQorbCU4vsT4fOWvOPXz4eW1wZW4PmDk9uLelYpA= +golang.org/x/tools v0.40.0 h1:yLkxfA+Qnul4cs9QA3KnlFu0lVmd8JJfoq+E41uSutA= +golang.org/x/tools v0.40.0/go.mod h1:Ik/tzLRlbscWpqqMRjyWYDisX8bG13FrdXp3o4Sr9lc= +golang.org/x/xerrors v0.0.0-20190717185122-a985d3407aa7/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors 
v0.0.0-20191011141410-1b5146add898/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= golang.org/x/xerrors v0.0.0-20191204190536-9bdfabe68543/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +golang.org/x/xerrors v0.0.0-20200804184101-5ec99f83aff1/go.mod h1:I/5z698sn9Ka8TeJc9MKroUUfqBBauWjQqLJ2OPfmY0= +google.golang.org/protobuf v1.35.1 h1:m3LfL6/Ca+fqnjnlqQXNpFPABW1UD7mjh8KO2mKFytA= +google.golang.org/protobuf v1.35.1/go.mod h1:9fA7Ob0pmnwhb644+1+CVWFRbNajQ6iRojtC/QF5bRE= gopkg.in/check.v1 v0.0.0-20161208181325-20d25e280405/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20180628173108-788fd7840127/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20200227125254-8fa46927fb4f/go.mod h1:Co6ibVJAznAaIkqp8huTwlJQCZ016jof/cbN4VW5Yz0= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c h1:Hei/4ADfdWqJk1ZMxUNpqntNwaWcugrBjAiHlqqRiVk= gopkg.in/check.v1 v1.0.0-20201130134442-10cb98267c6c/go.mod h1:JHkPIbrfpd72SG/EVd6muEfDQjcINNoR0C8j2r3qZ4Q= gopkg.in/errgo.v2 v2.1.0/go.mod h1:hNsd1EY+bozCKY1Ytp96fpM3vjJbqLJn88ws8XvfDNI= +gopkg.in/evanphx/json-patch.v4 v4.12.0 h1:n6jtcsulIzXPJaxegRbvFNNrZDjbij7ny3gmSPG+6V4= +gopkg.in/evanphx/json-patch.v4 v4.12.0/go.mod h1:p8EYWUEYMpynmqDbY58zCKCFZw8pRWMG4EsWvDvM72M= +gopkg.in/inf.v0 v0.9.1 h1:73M5CoZyi3ZLMOyDlQh031Cx6N9NDJ2Vvfl76EDAgDc= +gopkg.in/inf.v0 v0.9.1/go.mod h1:cWUDdTG/fYaXco+Dcufb5Vnc6Gp2YChqWtbxRZE0mXw= gopkg.in/yaml.v2 v2.2.1/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.2/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= gopkg.in/yaml.v2 v2.2.4/go.mod h1:hI93XBmqTisBFMUTm0b8Fm+jr3Dg1NNxqwp+5A1VGuI= @@ -313,7 +378,22 @@ gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C gopkg.in/yaml.v3 v3.0.0-20200605160147-a5ece683394c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA= gopkg.in/yaml.v3 v3.0.1/go.mod 
h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM= +k8s.io/api v0.32.10 h1:ocp4turNfa1V40TuBW/LuA17TeXG9g/GI2ebg0KxBNk= +k8s.io/api v0.32.10/go.mod h1:AsMsc4b6TuampYqgMEGSv0HBFpRS4BlKTXAVCAa7oF4= +k8s.io/apimachinery v0.32.10 h1:SAg2kUPLYRcBJQj66oniP1BnXSqw+l1GvJFsJlBmVvQ= +k8s.io/apimachinery v0.32.10/go.mod h1:GpHVgxoKlTxClKcteaeuF1Ul/lDVb74KpZcxcmLDElE= +k8s.io/client-go v0.32.10 h1:MFmIjsKtcnn7mStjrJG1ZW2WzLsKKn6ZtL9hHM/W0xU= +k8s.io/client-go v0.32.10/go.mod h1:qJy/Ws3zSwnu/nD75D+/of1uxbwWHxrYT5P3FuobVLI= k8s.io/klog/v2 v2.130.1 h1:n9Xl7H1Xvksem4KFG4PYbdQCQxqc/tTUyrgXaOhHSzk= k8s.io/klog/v2 v2.130.1/go.mod h1:3Jpz1GvMt720eyJH1ckRHK1EDfpxISzJ7I9OYgaDtPE= +k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f h1:GA7//TjRY9yWGy1poLzYYJJ4JRdzg3+O6e8I+e+8T5Y= +k8s.io/kube-openapi v0.0.0-20241105132330-32ad38e42d3f/go.mod h1:R/HEjbvWI0qdfb8viZUeVZm0X6IZnxAydC7YU42CMw4= +k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738 h1:M3sRQVHv7vB20Xc2ybTt7ODCeFj6JSWYFzOFnYeS6Ro= +k8s.io/utils v0.0.0-20241104100929-3ea5e8cea738/go.mod h1:OLgZIPagt7ERELqWJFomSt595RzquPNLL48iOWgYOg0= +sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3 h1:/Rv+M11QRah1itp8VhT6HoVx1Ray9eB4DBr+K+/sCJ8= +sigs.k8s.io/json v0.0.0-20241010143419-9aa6b5e7a4b3/go.mod h1:18nIHnGi6636UCz6m8i4DhaJ65T6EruyzmoQqI2BVDo= +sigs.k8s.io/structured-merge-diff/v4 v4.4.2 h1:MdmvkGuXi/8io6ixD5wud3vOLwc1rj0aNqRlpuvjmwA= +sigs.k8s.io/structured-merge-diff/v4 v4.4.2/go.mod h1:N8f93tFZh9U6vpxwRArLiikrE5/2tiu1w1AGfACIGE4= +sigs.k8s.io/yaml v1.4.0/go.mod h1:Ejl7/uTz7PSA4eKMyQCUTnhZYNmLIl+5c2lQPGR2BPY= sigs.k8s.io/yaml v1.6.0 h1:G8fkbMSAFqgEFgh4b1wmtzDnioxFCUgTZhlbj5P9QYs= sigs.k8s.io/yaml v1.6.0/go.mod h1:796bPqUfzR/0jLAl6XjHl3Ck7MiyVv8dbTdyT3/pMf4=