K9s/release v0.31.8 (#2528)

* [Maint] Fix race condition issue

* [Bug] Fix #2501

* [Maint] Allow reference to resource aliases for plugins

* [Feat] Intro cp namespace command + misc cleanup

* [Maint] Rev k8s v0.29.1

* [Bug] Fix #1033, #1558

* [Bug] Fix #2527

* [Bug] Fix #2520

* rel v0.31.8
Fernand Galiana 2024-02-06 19:21:28 -07:00 committed by GitHub
parent 763a6b0e00
commit 90a810ffc2
33 changed files with 980 additions and 119 deletions


@ -11,7 +11,7 @@ DATE ?= $(shell TZ=UTC date -j -f "%s" ${SOURCE_DATE_EPOCH} +"%Y-%m-%dT%H:
else
DATE ?= $(shell date -u -d @${SOURCE_DATE_EPOCH} +"%Y-%m-%dT%H:%M:%SZ")
endif
VERSION ?= v0.31.7
VERSION ?= v0.31.8
IMG_NAME := derailed/k9s
IMAGE := ${IMG_NAME}:${VERSION}


@ -0,0 +1,102 @@
<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/k9s.png" align="center" width="800" height="auto"/>
# Release v0.31.8
## Notes
Thank you to all who contributed by flushing out issues and enhancements for K9s!
I'll try to mark some of these issues as fixed. But if you don't mind, grab the latest rev
and see if we're happier with some of the fixes!
If you've filed an issue, please help me verify and close it.
Your support, kindness and awesome suggestions to make K9s better are, as ever, very much noted and appreciated!
Also big thanks to all who have allocated their own time to help others, both on Slack and on this repo!!
As you may know, K9s is not pimped out by corps with deep pockets, thus if you feel K9s is helping your Kubernetes journey,
please consider joining our [sponsorship program](https://github.com/sponsors/derailed) and/or make some noise on social! [@kitesurfer](https://twitter.com/kitesurfer)
On Slack? Please join us [K9slackers](https://join.slack.com/t/k9sers/shared_invite/enQtOTA5MDEyNzI5MTU0LWQ1ZGI3MzliYzZhZWEyNzYxYzA3NjE0YTk1YmFmNzViZjIyNzhkZGI0MmJjYzhlNjdlMGJhYzE2ZGU1NjkyNTM)
## Maintenance Release!
Thank you all for pitching in and helping flesh out issues!!
Please make sure to add gory details to issues, i.e. relevant configs, debug logs, etc...
Comments like `same here!` or `me too!` don't really cut it for us to zero in ;(
Everyone has slightly different settings/platforms, so every little bit of info helps with the resolution, even if seemingly irrelevant.
---
## Videos Are In The Can!
Please dial [K9s Channel](https://www.youtube.com/channel/UC897uwPygni4QIjkPCpgjmw) for upcoming content...
* [K9s v0.31.0 Configs+Sneak peek](https://youtu.be/X3444KfjguE)
* [K9s v0.30.0 Sneak peek](https://youtu.be/mVBc1XneRJ4)
* [Vulnerability Scans](https://youtu.be/ULkl0MsaidU)
---
## ♫ Sounds Behind The Release ♭
Going back to the classics...
* [Ambulance Blues - Neil Young](https://www.youtube.com/watch?v=bCQisTEdBwY)
* [Christopher Columbus - Burning Spear](https://www.youtube.com/watch?v=5qbMKTY_Cr0)
* [Feelin' the Same - Clinton Fearon](https://www.youtube.com/watch?v=aRPF2Yta_cs)
---
## A Word From Our Sponsors...
To all the good folks below that opted to `pay it forward` and join our sponsorship program, I salute you!!
* [Andreas Frangopoulos](https://github.com/qubeio)
* [Tu Hoang](https://github.com/rebyn)
* [Shoshin Nikita](https://github.com/ShoshinNikita)
* [Dima Altukhov](https://github.com/alt-dima)
* [wpbeckwith](https://github.com/wpbeckwith)
* [a-thomas-22](https://github.com/a-thomas-22)
* [kmath313](https://github.com/kmath313)
* [Jörgen](https://github.com/wthrbtn)
* [Eckl, Máté](https://github.com/ecklm)
* [Jacky Nguyen](https://github.com/nktpro)
* [Chris Bradley](https://github.com/chrisbradleydev)
* [Vytautas Kubilius](https://github.com/vytautaskubilius)
* [Patrick Christensen](https://github.com/BuriedStPatrick)
* [Ollie Lowson](https://github.com/ollielowson-wcbs)
* [Mike Macaulay](https://github.com/mmacaula)
* [David Birks](https://github.com/dbirks)
* [James Hounshell](https://github.com/jameshounshell)
* [elapse2039](https://github.com/elapse2039)
* [Vinicius Xavier](https://github.com/vinixaavier)
* [Phuc Phung](https://github.com/Foxhound401)
* [ollielowson](https://github.com/ollielowson)
> Sponsorship cancellations since the last release: **4!** 🥹
---
## Resolved Issues
* [#2527](https://github.com/derailed/k9s/issues/2527) Multiple k9s panels open in parallel for the same cluster breaks config.yaml
* [#2520](https://github.com/derailed/k9s/issues/2520) pods with init container with restartPolicy: Always stay in Init status
* [#2501](https://github.com/derailed/k9s/issues/2501) Cannot add plugins to helm scope bug
* [#2492](https://github.com/derailed/k9s/issues/2492) API Resources "carry over" between contexts, causing errors if they share shortnames
* [#1158](https://github.com/derailed/k9s/issues/1158) Removing a helm release incorrectly determines the namespace of resources
* [#1033](https://github.com/derailed/k9s/issues/1033) Helm delete deletes only the helm entry but not the deployment
---
## Contributed PRs
Please be sure to give `Big Thanks!` and `ATTA Girls/Boys!` to all the fine contributors for making K9s better for all of us!!
* [#2509](https://github.com/derailed/k9s/pull/2509) Fix Toggle Faults filtering
* [#2511](https://github.com/derailed/k9s/pull/2511) adding the f command to pf extender view
* [#2518](https://github.com/derailed/k9s/pull/2518) Added defaultsToFullScreen flag for Live/Details view,logs
---
<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/imhotep_logo.png" width="32" height="auto"/> © 2024 Imhotep Software LLC. All materials licensed under [Apache v2.0](http://www.apache.org/licenses/LICENSE-2.0)

go.mod

@ -11,7 +11,7 @@ require (
github.com/anchore/syft v0.100.0
github.com/atotto/clipboard v0.1.4
github.com/cenkalti/backoff/v4 v4.2.1
github.com/derailed/popeye v0.11.2
github.com/derailed/popeye v0.11.3
github.com/derailed/tcell/v2 v2.3.1-rc.3
github.com/derailed/tview v0.8.3
github.com/fatih/color v1.16.0
@ -22,7 +22,7 @@ require (
github.com/olekukonko/tablewriter v0.0.5
github.com/petergtz/pegomock v2.9.0+incompatible
github.com/rakyll/hey v0.1.4
github.com/rs/zerolog v1.31.0
github.com/rs/zerolog v1.32.0
github.com/sahilm/fuzzy v0.1.0
github.com/spf13/cobra v1.8.0
github.com/stretchr/testify v1.8.4

go.sum

@ -381,8 +381,8 @@ github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc h1:U9qPSI2PIWSS1
github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc/go.mod h1:J7Y8YcW2NihsgmVo/mv3lAwl/skON4iLHjSsI+c5H38=
github.com/deitch/magic v0.0.0-20230404182410-1ff89d7342da h1:ZOjWpVsFZ06eIhnh4mkaceTiVoktdU67+M7KDHJ268M=
github.com/deitch/magic v0.0.0-20230404182410-1ff89d7342da/go.mod h1:B3tI9iGHi4imdLi4Asdha1Sc6feLMTfPLXh9IUYmysk=
github.com/derailed/popeye v0.11.2 h1:8MKMjYBJdYNktTKeh98TeT127jZY6CFAsurrENoTZCY=
github.com/derailed/popeye v0.11.2/go.mod h1:HygqX7A8BwidorJjJUnWDZ5AvbxHIU7uRwXgOtn9GwY=
github.com/derailed/popeye v0.11.3 h1:gQUp6zuSIRDBdyLS1Ln0nFs8FbQ+KGE+iQxe0w4Ug8M=
github.com/derailed/popeye v0.11.3/go.mod h1:HygqX7A8BwidorJjJUnWDZ5AvbxHIU7uRwXgOtn9GwY=
github.com/derailed/tcell/v2 v2.3.1-rc.3 h1:9s1fmyRcSPRlwr/C9tcpJKCujbrtmPpST6dcMUD2piY=
github.com/derailed/tcell/v2 v2.3.1-rc.3/go.mod h1:nf68BEL8fjmXQHJT3xZjoZFs2uXOzyJcNAQqGUEMrFY=
github.com/derailed/tview v0.8.3 h1:jhN7LW7pfCWf7Z6VC5Dpi/1usavOBZxz2mY90//TMsU=
@ -1013,8 +1013,8 @@ github.com/rogpeppe/go-internal v1.3.0/go.mod h1:M8bDsm7K2OlrFYOpmOWEs/qY81heoFR
github.com/rogpeppe/go-internal v1.11.0 h1:cWPaGQEPrBb5/AsnsZesgZZ9yb1OQ+GOISoDNXVBh4M=
github.com/rogpeppe/go-internal v1.11.0/go.mod h1:ddIwULY96R17DhadqLgMfk9H9tvdUzkipdSkR5nkCZA=
github.com/rs/xid v1.5.0/go.mod h1:trrq9SKmegXys3aeAKXMUTdJsYXVwGY3RLcfgqegfbg=
github.com/rs/zerolog v1.31.0 h1:FcTR3NnLWW+NnTwwhFWiJSZr4ECLpqCm6QsEnyvbV4A=
github.com/rs/zerolog v1.31.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/rs/zerolog v1.32.0 h1:keLypqrlIjaFsbmJOBdB/qvyF8KEtCWHwobLp5l/mQ0=
github.com/rs/zerolog v1.32.0/go.mod h1:/7mN4D5sKwJLZQ2b/znpjC3/GQWY/xaDXUM0kKWRHss=
github.com/rubenv/sql-migrate v1.5.2 h1:bMDqOnrJVV/6JQgQ/MxOpU+AdO8uzYYA/TxFUBzFtS0=
github.com/rubenv/sql-migrate v1.5.2/go.mod h1:H38GW8Vqf8F0Su5XignRyaRcbXbJunSWxs+kmzlg0Is=
github.com/russross/blackfriday/v2 v2.0.1/go.mod h1:+Rmxgy9KzJVeS9/2gXHxylqXiyQDYRxCVz55jmeOWTM=


@ -20,7 +20,7 @@ const (
defaultCallTimeoutDuration time.Duration = 15 * time.Second
// UsePersistentConfig caches client config to avoid reloads.
UsePersistentConfig = true
UsePersistentConfig = false
)
// Config tracks a kubernetes configuration.
@ -85,6 +85,24 @@ func (c *Config) SwitchContext(name string) error {
return nil
}
func (c *Config) Clone(ns string) (*genericclioptions.ConfigFlags, error) {
flags := genericclioptions.NewConfigFlags(false)
ct, err := c.CurrentContextName()
if err != nil {
return nil, err
}
cl, err := c.CurrentClusterName()
if err != nil {
return nil, err
}
flags.Context, flags.ClusterName = &ct, &cl
flags.Namespace = &ns
flags.Timeout = c.Flags().Timeout
flags.KubeConfig = c.Flags().KubeConfig
return flags, nil
}
// CurrentClusterName returns the currently active cluster name.
func (c *Config) CurrentClusterName() (string, error) {
if isSet(c.flags.ClusterName) {


@ -6,7 +6,6 @@ package data
import (
"fmt"
"io"
"os"
"sync"
"github.com/derailed/k9s/internal/client"
@ -29,6 +28,8 @@ func NewConfig(ct *api.Context) *Config {
// Validate ensures config is in norms.
func (c *Config) Validate(conn client.Connection, ks KubeSettings) {
c.mx.Lock()
defer c.mx.Unlock()
if c.Context == nil {
c.Context = NewContext()
@ -42,19 +43,3 @@ func (c *Config) Dump(w io.Writer) {
fmt.Fprintf(w, "%s\n", string(bb))
}
// Save saves the config to disk.
func (c *Config) Save(path string) error {
c.mx.RLock()
defer c.mx.RUnlock()
if err := EnsureDirPath(path, DefaultDirMod); err != nil {
return err
}
cfg, err := yaml.Marshal(c)
if err != nil {
return err
}
return os.WriteFile(path, cfg, DefaultFileMod)
}


@ -8,6 +8,7 @@ import (
"fmt"
"os"
"path/filepath"
"sync"
"github.com/derailed/k9s/internal/config/json"
"github.com/rs/zerolog/log"
@ -18,6 +19,7 @@ import (
// Dir tracks context configurations.
type Dir struct {
root string
mx sync.Mutex
}
// NewDir returns a new instance.
@ -50,14 +52,32 @@ func (d *Dir) Load(n string, ct *api.Context) (*Config, error) {
func (d *Dir) genConfig(path string, ct *api.Context) (*Config, error) {
cfg := NewConfig(ct)
if err := cfg.Save(path); err != nil {
if err := d.Save(path, cfg); err != nil {
return nil, err
}
return cfg, nil
}
func (d *Dir) Save(path string, c *Config) error {
d.mx.Lock()
defer d.mx.Unlock()
if err := EnsureDirPath(path, DefaultDirMod); err != nil {
return err
}
cfg, err := yaml.Marshal(c)
if err != nil {
return err
}
return os.WriteFile(path, cfg, DefaultFileMod)
}
func (d *Dir) loadConfig(path string) (*Config, error) {
d.mx.Lock()
defer d.mx.Unlock()
bb, err := os.ReadFile(path)
if err != nil {
return nil, err


@ -78,7 +78,7 @@ func (k *K9s) Save() error {
data.MainConfigFile,
)
return k.getActiveConfig().Save(path)
return k.dir.Save(path, k.getActiveConfig())
}
// Merge merges k9s configs.
@ -157,7 +157,6 @@ func (k *K9s) ActiveContextName() string {
// ActiveContext returns the currently active context.
func (k *K9s) ActiveContext() (*data.Context, error) {
if cfg := k.getActiveConfig(); cfg != nil && cfg.Context != nil {
return cfg.Context, nil
}


@ -15,6 +15,7 @@ import (
"helm.sh/helm/v3/pkg/action"
metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/cli-runtime/pkg/genericclioptions"
)
var (
@ -31,7 +32,7 @@ type HelmChart struct {
// List returns a collection of resources.
func (h *HelmChart) List(ctx context.Context, ns string) ([]runtime.Object, error) {
cfg, err := ensureHelmConfig(h.Client(), ns)
cfg, err := ensureHelmConfig(h.Client().Config().Flags(), ns)
if err != nil {
return nil, err
}
@ -55,7 +56,7 @@ func (h *HelmChart) List(ctx context.Context, ns string) ([]runtime.Object, erro
// Get returns a resource.
func (h *HelmChart) Get(_ context.Context, path string) (runtime.Object, error) {
ns, n := client.Namespaced(path)
cfg, err := ensureHelmConfig(h.Client(), ns)
cfg, err := ensureHelmConfig(h.Client().Config().Flags(), ns)
if err != nil {
return nil, err
}
@ -70,7 +71,7 @@ func (h *HelmChart) Get(_ context.Context, path string) (runtime.Object, error)
// GetValues returns values for a release
func (h *HelmChart) GetValues(path string, allValues bool) ([]byte, error) {
ns, n := client.Namespaced(path)
cfg, err := ensureHelmConfig(h.Client(), ns)
cfg, err := ensureHelmConfig(h.Client().Config().Flags(), ns)
if err != nil {
return nil, err
}
@ -87,7 +88,7 @@ func (h *HelmChart) GetValues(path string, allValues bool) ([]byte, error) {
// Describe returns the chart notes.
func (h *HelmChart) Describe(path string) (string, error) {
ns, n := client.Namespaced(path)
cfg, err := ensureHelmConfig(h.Client(), ns)
cfg, err := ensureHelmConfig(h.Client().Config().Flags(), ns)
if err != nil {
return "", err
}
@ -102,7 +103,7 @@ func (h *HelmChart) Describe(path string) (string, error) {
// ToYAML returns the chart manifest.
func (h *HelmChart) ToYAML(path string, showManaged bool) (string, error) {
ns, n := client.Namespaced(path)
cfg, err := ensureHelmConfig(h.Client(), ns)
cfg, err := ensureHelmConfig(h.Client().Config().Flags(), ns)
if err != nil {
return "", err
}
@ -122,10 +123,13 @@ func (h *HelmChart) Delete(_ context.Context, path string, _ *metav1.DeletionPro
// Uninstall uninstalls a HelmChart.
func (h *HelmChart) Uninstall(path string, keepHist bool) error {
ns, n := client.Namespaced(path)
cfg, err := ensureHelmConfig(h.Client(), ns)
flags := h.Client().Config().Flags()
flags.Namespace = &ns
cfg, err := ensureHelmConfig(flags, ns)
if err != nil {
return err
}
u := action.NewUninstall(cfg)
u.KeepHistory = keepHist
res, err := u.Run(n)
@ -140,13 +144,13 @@ func (h *HelmChart) Uninstall(path string, keepHist bool) error {
}
// ensureHelmConfig return a new configuration.
func ensureHelmConfig(c client.Connection, ns string) (*action.Configuration, error) {
func ensureHelmConfig(flags *genericclioptions.ConfigFlags, ns string) (*action.Configuration, error) {
cfg := new(action.Configuration)
err := cfg.Init(c.Config().Flags(), ns, os.Getenv("HELM_DRIVER"), helmLogger)
err := cfg.Init(flags, ns, os.Getenv("HELM_DRIVER"), helmLogger)
return cfg, err
}
func helmLogger(s string, args ...interface{}) {
log.Debug().Msgf("%s %v", s, args)
func helmLogger(fmt string, args ...interface{}) {
log.Debug().Msgf("[Helm] "+fmt, args...)
}


@ -39,7 +39,7 @@ func (h *HelmHistory) List(ctx context.Context, _ string) ([]runtime.Object, err
}
ns, n := client.Namespaced(path)
cfg, err := ensureHelmConfig(h.Client(), ns)
cfg, err := ensureHelmConfig(h.Client().Config().Flags(), ns)
if err != nil {
return nil, err
}
@ -65,7 +65,7 @@ func (h *HelmHistory) Get(_ context.Context, path string) (runtime.Object, error
}
ns, n := client.Namespaced(fqn)
cfg, err := ensureHelmConfig(h.Client(), ns)
cfg, err := ensureHelmConfig(h.Client().Config().Flags(), ns)
if err != nil {
return nil, err
}
@ -134,7 +134,7 @@ func (h *HelmHistory) GetValues(path string, allValues bool) ([]byte, error) {
func (h *HelmHistory) Rollback(_ context.Context, path, rev string) error {
ns, n := client.Namespaced(path)
cfg, err := ensureHelmConfig(h.Client(), ns)
cfg, err := ensureHelmConfig(h.Client().Config().Flags(), ns)
if err != nil {
return err
}
@ -152,7 +152,7 @@ func (h *HelmHistory) Rollback(_ context.Context, path, rev string) error {
// Delete uninstall a Helm.
func (h *HelmHistory) Delete(_ context.Context, path string, _ *metav1.DeletionPropagation, _ Grace) error {
ns, n := client.Namespaced(path)
cfg, err := ensureHelmConfig(h.Client(), ns)
cfg, err := ensureHelmConfig(h.Client().Config().Flags(), ns)
if err != nil {
return err
}


@ -54,6 +54,7 @@ func (p *Pod) IsHappy(po v1.Pod) bool {
return false
}
}
return true
}


@ -319,8 +319,8 @@ func loadK9s(m ResourceMetas) {
func loadHelm(m ResourceMetas) {
m[client.NewGVR("helm")] = metav1.APIResource{
Name: "chart",
Kind: "Chart",
Name: "helm",
Kind: "Helm",
Namespaced: true,
Verbs: []string{"delete"},
Categories: []string{helmCat},


@ -1,3 +1,4 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright Authors of K9s
package dao


@ -1,3 +1,6 @@
// SPDX-License-Identifier: Apache-2.0
// Copyright Authors of K9s
package dao_test
import (
@ -65,13 +68,11 @@ func (f *testFactory) Forwarders() watch.Forwarders {
}
func (f *testFactory) DeleteForwarder(string) {}
type testResource struct{}
func load(n string) *unstructured.Unstructured {
raw, _ := os.ReadFile(fmt.Sprintf("testdata/%s.json", n))
var o unstructured.Unstructured
json.Unmarshal(raw, &o)
_ = json.Unmarshal(raw, &o)
return &o
}


@ -4,6 +4,7 @@
package render
import (
"context"
"math"
"sort"
"strconv"
@ -28,7 +29,7 @@ func computeVulScore(m metav1.ObjectMeta, spec *v1.PodSpec) string {
return "0"
}
ii := ExtractImages(spec)
vul.ImgScanner.Enqueue(ii...)
vul.ImgScanner.Enqueue(context.Background(), ii...)
return vul.ImgScanner.Score(ii...)
}


@ -74,13 +74,11 @@ func (j Job) Render(o interface{}, ns string, r *Row) error {
}
func (Job) diagnose(ready string, completed *metav1.Time) error {
if completed == nil {
return nil
}
tokens := strings.Split(ready, "/")
if tokens[0] != tokens[1] {
return fmt.Errorf("expecting %s completion got %s", tokens[1], tokens[0])
}
return nil
}


@ -335,7 +335,7 @@ func (p *Pod) Phase(po *v1.Pod) string {
status = po.Status.Reason
}
status, ok := p.initContainerPhase(po.Status, len(po.Spec.InitContainers), status)
status, ok := p.initContainerPhase(po, status)
if ok {
return status
}
@ -374,13 +374,16 @@ func (*Pod) containerPhase(st v1.PodStatus, status string) (string, bool) {
return status, running
}
func (*Pod) initContainerPhase(st v1.PodStatus, initCount int, status string) (string, bool) {
for i, cs := range st.InitContainerStatuses {
s := checkContainerStatus(cs, i, initCount)
if s == "" {
continue
func (*Pod) initContainerPhase(po *v1.Pod, status string) (string, bool) {
count := len(po.Spec.InitContainers)
rs := make(map[string]bool, count)
for _, c := range po.Spec.InitContainers {
rs[c.Name] = restartableInitCO(c.RestartPolicy)
}
for i, cs := range po.Status.InitContainerStatuses {
if s := checkInitContainerStatus(cs, i, count, rs[cs.Name]); s != "" {
return s, true
}
return s, true
}
return status, false
@ -389,7 +392,7 @@ func (*Pod) initContainerPhase(st v1.PodStatus, initCount int, status string) (s
// ----------------------------------------------------------------------------
// Helpers..
func checkContainerStatus(cs v1.ContainerStatus, i, initCount int) string {
func checkInitContainerStatus(cs v1.ContainerStatus, count, initCount int, restartable bool) string {
switch {
case cs.State.Terminated != nil:
if cs.State.Terminated.ExitCode == 0 {
@ -402,11 +405,15 @@ func checkContainerStatus(cs v1.ContainerStatus, i, initCount int) string {
return "Init:Signal:" + strconv.Itoa(int(cs.State.Terminated.Signal))
}
return "Init:ExitCode:" + strconv.Itoa(int(cs.State.Terminated.ExitCode))
case restartable && cs.Started != nil && *cs.Started:
if cs.Ready {
return ""
}
case cs.State.Waiting != nil && cs.State.Waiting.Reason != "" && cs.State.Waiting.Reason != "PodInitializing":
return "Init:" + cs.State.Waiting.Reason
default:
return "Init:" + strconv.Itoa(i) + "/" + strconv.Itoa(initCount)
}
return "Init:" + strconv.Itoa(count) + "/" + strconv.Itoa(initCount)
}
// PosStatus computes pod status.
@ -429,7 +436,7 @@ func PodStatus(pod *v1.Pod) string {
case container.State.Terminated != nil && container.State.Terminated.ExitCode == 0:
continue
case container.State.Terminated != nil:
if len(container.State.Terminated.Reason) == 0 {
if container.State.Terminated.Reason == "" {
if container.State.Terminated.Signal != 0 {
reason = fmt.Sprintf("Init:Signal:%d", container.State.Terminated.Signal)
} else {
@ -494,3 +501,7 @@ func hasPodReadyCondition(conditions []v1.PodCondition) bool {
return false
}
func restartableInitCO(p *v1.ContainerRestartPolicy) bool {
return p != nil && *p == v1.ContainerRestartPolicyAlways
}


@ -13,6 +13,288 @@ import (
mv1beta1 "k8s.io/metrics/pkg/apis/metrics/v1beta1"
)
func Test_checkInitContainerStatus(t *testing.T) {
true := true
uu := map[string]struct {
status v1.ContainerStatus
e string
count, total int
restart bool
}{
"none": {
e: "Init:0/0",
},
"restart": {
status: v1.ContainerStatus{
Name: "ic1",
Started: &true,
State: v1.ContainerState{},
},
restart: true,
e: "Init:0/0",
},
"no-restart": {
status: v1.ContainerStatus{
Name: "ic1",
Started: &true,
State: v1.ContainerState{},
},
e: "Init:0/0",
},
"terminated-reason": {
status: v1.ContainerStatus{
Name: "ic1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
ExitCode: 1,
Reason: "blah",
},
},
},
e: "Init:blah",
},
"terminated-signal": {
status: v1.ContainerStatus{
Name: "ic1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
ExitCode: 1,
Signal: 9,
},
},
},
e: "Init:Signal:9",
},
"terminated-code": {
status: v1.ContainerStatus{
Name: "ic1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
ExitCode: 1,
},
},
},
e: "Init:ExitCode:1",
},
"terminated-restart": {
status: v1.ContainerStatus{
Name: "ic1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
Reason: "blah",
},
},
},
},
"waiting": {
status: v1.ContainerStatus{
Name: "ic1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: "blah",
},
},
},
e: "Init:blah",
},
"waiting-init": {
status: v1.ContainerStatus{
Name: "ic1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: "PodInitializing",
},
},
},
e: "Init:0/0",
},
"running": {
status: v1.ContainerStatus{
Name: "ic1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
e: "Init:0/0",
},
}
for k := range uu {
u := uu[k]
t.Run(k, func(t *testing.T) {
assert.Equal(t, u.e, checkInitContainerStatus(u.status, u.count, u.total, u.restart))
})
}
}
func Test_containerPhase(t *testing.T) {
uu := map[string]struct {
status v1.PodStatus
e string
ok bool
}{
"none": {},
"empty": {
status: v1.PodStatus{
Phase: PhaseUnknown,
},
},
"waiting": {
status: v1.PodStatus{
Phase: PhaseUnknown,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: "waiting",
},
},
},
},
},
e: "waiting",
},
"terminated": {
status: v1.PodStatus{
Phase: PhaseUnknown,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
Reason: "done",
},
},
},
},
},
e: "done",
},
"terminated-sig": {
status: v1.PodStatus{
Phase: PhaseUnknown,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
Signal: 9,
},
},
},
},
},
e: "Signal:9",
},
"terminated-code": {
status: v1.PodStatus{
Phase: PhaseUnknown,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
ExitCode: 2,
},
},
},
},
},
e: "ExitCode:2",
},
"running": {
status: v1.PodStatus{
Phase: PhaseUnknown,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
Ready: true,
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
},
ok: true,
},
}
var p Pod
for k := range uu {
u := uu[k]
t.Run(k, func(t *testing.T) {
s, ok := p.containerPhase(u.status, "")
assert.Equal(t, u.ok, ok)
assert.Equal(t, u.e, s)
})
}
}
func Test_restartableInitCO(t *testing.T) {
always, never := v1.ContainerRestartPolicyAlways, v1.ContainerRestartPolicy("never")
uu := map[string]struct {
p *v1.ContainerRestartPolicy
e bool
}{
"empty": {},
"set": {
p: &always,
e: true,
},
"unset": {
p: &never,
},
}
for k := range uu {
u := uu[k]
t.Run(k, func(t *testing.T) {
assert.Equal(t, u.e, restartableInitCO(u.p))
})
}
}
func Test_gatherPodMx(t *testing.T) {
uu := map[string]struct {
cc []v1.Container


@ -227,11 +227,31 @@ func TestCheckPodStatus(t *testing.T) {
},
e: render.PhaseRunning,
},
"gated": {
pod: v1.Pod{
Status: v1.PodStatus{
Conditions: []v1.PodCondition{
{Type: v1.PodScheduled, Reason: v1.PodReasonSchedulingGated},
},
Phase: v1.PodRunning,
InitContainerStatuses: []v1.ContainerStatus{},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
},
},
e: v1.PodReasonSchedulingGated,
},
"backoff": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
InitContainerStatuses: []v1.ContainerStatus{},
Phase: v1.PodRunning,
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
@ -246,6 +266,256 @@ func TestCheckPodStatus(t *testing.T) {
},
e: render.PhaseImagePullBackOff,
},
"backoff-init": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: render.PhaseImagePullBackOff,
},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: render.PhaseImagePullBackOff,
},
},
},
},
},
},
e: "Init:ImagePullBackOff",
},
"init-terminated-cool": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: render.PhaseImagePullBackOff,
},
},
},
},
},
},
e: "Init:0/0",
},
"init-terminated-reason": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
ExitCode: 1,
Reason: "blah",
},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: render.PhaseImagePullBackOff,
},
},
},
},
},
},
e: "Init:blah",
},
"init-terminated-sig": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
ExitCode: 2,
Signal: 9,
},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: render.PhaseImagePullBackOff,
},
},
},
},
},
},
e: "Init:Signal:9",
},
"init-terminated-code": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
ExitCode: 2,
},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: render.PhaseImagePullBackOff,
},
},
},
},
},
},
e: "Init:ExitCode:2",
},
"co-reason": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
Reason: "blah",
},
},
},
},
},
},
e: "blah",
},
"co-reason-ready": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
Ready: true,
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
},
},
e: "Running",
},
"co-reason-completed": {
pod: v1.Pod{
Status: v1.PodStatus{
Conditions: []v1.PodCondition{
{Type: v1.PodReady, Status: v1.ConditionTrue},
},
Phase: render.PhaseCompleted,
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
Ready: true,
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
},
},
e: "Running",
},
"co-sig": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
ExitCode: 2,
Signal: 9,
},
},
},
},
},
},
e: "Signal:9",
},
"co-code": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Terminated: &v1.ContainerStateTerminated{
ExitCode: 2,
},
},
},
},
},
},
e: "ExitCode:2",
},
"co-ready": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: v1.PodRunning,
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
},
},
e: "Running",
},
}
for k := range uu {
@ -254,7 +524,123 @@ func TestCheckPodStatus(t *testing.T) {
assert.Equal(t, u.e, render.PodStatus(&u.pod))
})
}
}
func TestCheckPhase(t *testing.T) {
always := v1.ContainerRestartPolicyAlways
uu := map[string]struct {
pod v1.Pod
e string
}{
"unknown": {
pod: v1.Pod{
Status: v1.PodStatus{
Phase: render.PhaseUnknown,
},
},
e: render.PhaseUnknown,
},
"terminating": {
pod: v1.Pod{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: testTime()},
},
Status: v1.PodStatus{
Phase: render.PhaseUnknown,
Reason: "bla",
},
},
e: render.PhaseTerminating,
},
"terminating-toast-node": {
pod: v1.Pod{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: testTime()},
},
Status: v1.PodStatus{
Phase: render.PhaseUnknown,
Reason: render.NodeUnreachablePodReason,
},
},
e: render.PhaseUnknown,
},
"restartable": {
pod: v1.Pod{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: testTime()},
},
Spec: v1.PodSpec{
InitContainers: []v1.Container{
{
Name: "ic1",
RestartPolicy: &always,
},
},
},
Status: v1.PodStatus{
Phase: render.PhaseUnknown,
Reason: "bla",
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
},
},
},
},
e: "Init:0/1",
},
"waiting": {
pod: v1.Pod{
ObjectMeta: metav1.ObjectMeta{
DeletionTimestamp: &metav1.Time{Time: testTime()},
},
Spec: v1.PodSpec{
InitContainers: []v1.Container{
{
Name: "ic1",
RestartPolicy: &always,
},
},
Containers: []v1.Container{
{
Name: "c1",
},
},
},
Status: v1.PodStatus{
Phase: render.PhaseUnknown,
Reason: "bla",
InitContainerStatuses: []v1.ContainerStatus{
{
Name: "ic1",
State: v1.ContainerState{
Running: &v1.ContainerStateRunning{},
},
},
},
ContainerStatuses: []v1.ContainerStatus{
{
Name: "c1",
State: v1.ContainerState{
Waiting: &v1.ContainerStateWaiting{
Reason: "bla",
},
},
},
},
},
},
e: "Init:0/1",
},
}
var p render.Pod
for k := range uu {
u := uu[k]
t.Run(k, func(t *testing.T) {
assert.Equal(t, u.e, p.Phase(&u.pod))
})
}
}
// ----------------------------------------------------------------------------


@ -214,7 +214,7 @@ func (c *Configurator) activeConfig() (cluster string, context string, ok bool)
if err != nil {
return
}
cluster, context = ct.ClusterName, c.Config.K9s.ActiveContextName()
cluster, context = ct.GetClusterName(), c.Config.K9s.ActiveContextName()
if cluster != "" && context != "" {
ok = true
}
@ -254,14 +254,13 @@ func (c *Configurator) loadSkinFile(s synchronizer) {
log.Debug().Msgf("Loading skin file: %q", skinFile)
if err := c.Styles.Load(skinFile); err != nil {
if errors.Is(err, os.ErrNotExist) {
s.Flash().Warnf("Skin file %q not found in skins dir: %s", filepath.Base(skinFile), config.AppSkinsDir)
log.Warn().Msgf("Skin file %q not found in skins dir: %s", filepath.Base(skinFile), config.AppSkinsDir)
c.updateStyles("")
} else {
s.Flash().Errf("Failed to parse skin file -- %s: %s.", filepath.Base(skinFile), err)
log.Error().Msgf("Failed to parse skin file -- %s: %s.", filepath.Base(skinFile), err)
c.updateStyles(skinFile)
}
} else {
s.Flash().Infof("Skin file loaded: %q", skinFile)
c.updateStyles(skinFile)
}
}


@ -22,7 +22,7 @@ const AllScopes = "all"
type Runner interface {
App() *App
GetSelectedItem() string
Aliases() []string
Aliases() map[string]struct{}
EnvFn() EnvFunc
}
@ -44,13 +44,13 @@ func includes(aliases []string, s string) bool {
return false
}
func inScope(scopes, aliases []string) bool {
func inScope(scopes []string, aliases map[string]struct{}) bool {
if hasAll(scopes) {
return true
}
for _, s := range scopes {
if includes(aliases, s) {
return true
if _, ok := aliases[s]; ok {
return ok
}
}
@ -119,8 +119,9 @@ func pluginActions(r Runner, aa ui.KeyActions) error {
if err := pp.Load(r.App().Config.ContextPluginsPath()); err != nil {
errs = errors.Join(errs, err)
}
aliases := r.Aliases()
for k, plugin := range pp.Plugins {
if !inScope(plugin.Scopes, r.Aliases()) {
if !inScope(plugin.Scopes, aliases) {
continue
}
key, err := asKey(plugin.ShortCut)
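The hunk above swaps the linear `includes` scan for a `map[string]struct{}` set lookup. A condensed, standalone sketch of the resulting pattern (the real code tests the `"all"` wildcard in a separate `hasAll` helper before the loop):

```go
package main

import "fmt"

const allScopes = "all"

// inScope reports whether any requested scope matches a known alias.
// Set membership on a map[string]struct{} replaces the O(n) slice scan.
func inScope(scopes []string, aliases map[string]struct{}) bool {
	for _, s := range scopes {
		if s == allScopes {
			return true
		}
		if _, ok := aliases[s]; ok {
			return true
		}
	}
	return false
}

func main() {
	aliases := map[string]struct{}{"pods": {}, "po": {}}
	fmt.Println(inScope([]string{"deploy", "po"}, aliases)) // true
	fmt.Println(inScope([]string{"svc"}, aliases))          // false
	fmt.Println(inScope([]string{allScopes}, nil))          // true
}
```

Hoisting `aliases := r.Aliases()` out of the plugin loop, as the hunk does, also means the alias set is built once per key-binding pass instead of once per plugin.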


@ -53,15 +53,16 @@ func TestIncludes(t *testing.T) {
func TestInScope(t *testing.T) {
uu := map[string]struct {
ss, aa []string
e bool
ss []string
aa map[string]struct{}
e bool
}{
"empty": {},
"yes": {e: true, ss: []string{"blee", "duh", "fred"}, aa: []string{"blee", "fred", "duh"}},
"no": {ss: []string{"blee", "duh", "fred"}, aa: []string{"blee1", "fred1"}},
"empty scopes": {aa: []string{"blee1", "fred1"}},
"yes": {e: true, ss: []string{"blee", "duh", "fred"}, aa: map[string]struct{}{"blee": {}, "fred": {}, "duh": {}}},
"no": {ss: []string{"blee", "duh", "fred"}, aa: map[string]struct{}{"blee1": {}, "fred1": {}}},
"empty scopes": {aa: map[string]struct{}{"blee1": {}, "fred1": {}}},
"empty aliases": {ss: []string{"blee1", "fred1"}},
"all": {e: true, ss: []string{AllScopes}, aa: []string{"blee1", "fred1"}},
"all": {e: true, ss: []string{AllScopes}, aa: map[string]struct{}{"blee1": {}, "fred1": {}}},
}
for k := range uu {


@ -434,9 +434,6 @@ func (a *App) switchNS(ns string) error {
if err := a.Config.SetActiveNamespace(ns); err != nil {
return err
}
if err := a.Config.Save(); err != nil {
return err
}
return a.factory.SetActiveNS(ns)
}
@ -517,6 +514,10 @@ func (a *App) BailOut() {
}
}()
if err := a.Config.Save(); err != nil {
log.Error().Err(err).Msg("config save failed!")
}
if err := nukeK9sShell(a); err != nil {
log.Error().Err(err).Msgf("nuking k9s shell pod")
}


@ -229,8 +229,8 @@ func (b *Browser) SetContextFn(f ContextFunc) { b.contextFn = f }
func (b *Browser) GetTable() *Table { return b.Table }
// Aliases returns all available aliases.
func (b *Browser) Aliases() []string {
return append(b.meta.ShortNames, b.meta.SingularName, b.meta.Name)
func (b *Browser) Aliases() map[string]struct{} {
return aliasesFor(b.meta, b.app.command.AliasesFor(b.meta.Name))
}
// ----------------------------------------------------------------------------
@ -449,9 +449,6 @@ func (b *Browser) switchNamespaceCmd(evt *tcell.EventKey) *tcell.EventKey {
if err := b.app.Config.SetActiveNamespace(b.GetModel().GetNamespace()); err != nil {
log.Error().Err(err).Msg("Config save NS failed!")
}
if err := b.app.Config.Save(); err != nil {
log.Error().Err(err).Msg("Config save failed!")
}
return nil
}
@ -539,6 +536,8 @@ func (b *Browser) namespaceActions(aa ui.KeyActions) {
if !b.meta.Namespaced || b.GetTable().Path != "" {
return
}
aa[ui.KeyN] = ui.NewKeyAction("Copy Namespace", b.cpNsCmd, false)
b.namespaces = make(map[int]string, data.MaxFavoritesNS)
aa[ui.Key0] = ui.NewKeyAction(client.NamespaceAll, b.switchNamespaceCmd, true)
b.namespaces[0] = client.NamespaceAll


@ -37,6 +37,18 @@ func NewCommand(app *App) *Command {
}
}
// AliasesFor gathers all known aliases for a given resource.
func (c *Command) AliasesFor(s string) []string {
aa := make([]string, 0, 10)
for k, v := range c.alias.Alias {
if v == s {
aa = append(aa, k)
}
}
return aa
}
// Init initializes the command.
func (c *Command) Init(path string) error {
c.alias = dao.NewAlias(c.app.factory)
@ -128,9 +140,6 @@ func (c *Command) xrayCmd(p *cmd.Interpreter) error {
if err := c.app.switchNS(ns); err != nil {
return err
}
if err := c.app.Config.Save(); err != nil {
return err
}
return c.exec(p, client.NewGVR("xrays"), NewXray(gvr), true)
}
@ -309,9 +318,6 @@ func (c *Command) exec(p *cmd.Interpreter, gvr client.GVR, comp model.Component,
if clearStack {
cmd := contextRX.ReplaceAllString(p.GetLine(), "")
c.app.Config.SetActiveView(cmd)
if err := c.app.Config.Save(); err != nil {
log.Error().Err(err).Msg("Config save failed!")
}
}
if err := c.app.inject(comp, clearStack); err != nil {
return err


@ -23,8 +23,27 @@ import (
"github.com/derailed/tview"
"github.com/rs/zerolog/log"
"github.com/sahilm/fuzzy"
v1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)
func aliasesFor(m v1.APIResource, aa []string) map[string]struct{} {
rr := make(map[string]struct{})
rr[m.Name] = struct{}{}
for _, a := range aa {
rr[a] = struct{}{}
}
if m.ShortNames != nil {
for _, a := range m.ShortNames {
rr[a] = struct{}{}
}
}
if m.SingularName != "" {
rr[m.SingularName] = struct{}{}
}
return rr
}
func clipboardWrite(text string) error {
return clipboard.WriteAll(text)
}


@ -164,7 +164,7 @@ func (v *LiveView) bindKeys() {
}
if v.model != nil && v.model.GVR().IsDecodable() {
v.actions.Add(ui.KeyActions{
ui.KeyT: ui.NewKeyAction("Toggle Encoded / Decoded", v.toggleEncodedDecodedCmd, true),
ui.KeyX: ui.NewKeyAction("Toggle Decode", v.toggleEncodedDecodedCmd, true),
})
}
}


@ -9,7 +9,6 @@ import (
"github.com/derailed/k9s/internal/render"
"github.com/derailed/k9s/internal/ui"
"github.com/derailed/tcell/v2"
"github.com/rs/zerolog/log"
)
const (
@ -69,11 +68,6 @@ func (n *Namespace) useNamespace(fqn string) {
n.App().Flash().Err(err)
return
}
n.App().Flash().Infof("Namespace %s is now active!", ns)
if err := n.App().Config.Save(); err != nil {
log.Error().Err(err).Msg("Config file save failed!")
}
}
func (n *Namespace) decorate(td *render.TableData) {


@ -116,7 +116,7 @@ func (p *Pod) bindKeys(aa ui.KeyActions) {
}
aa.Add(ui.KeyActions{
ui.KeyN: ui.NewKeyAction("Show Node", p.showNode, true),
ui.KeyO: ui.NewKeyAction("Show Node", p.showNode, true),
ui.KeyShiftR: ui.NewKeyAction("Sort Ready", p.GetTable().SortColCmd(readyCol, true), false),
ui.KeyShiftT: ui.NewKeyAction("Sort Restart", p.GetTable().SortColCmd("RESTARTS", false), false),
ui.KeyShiftS: ui.NewKeyAction("Sort Status", p.GetTable().SortColCmd(statusCol, true), false),


@ -220,7 +220,22 @@ func (t *Table) cpCmd(evt *tcell.EventKey) *tcell.EventKey {
t.app.Flash().Err(err)
return nil
}
t.app.Flash().Info("Current selection copied to clipboard...")
t.app.Flash().Info("Resource name copied to clipboard...")
return nil
}
func (t *Table) cpNsCmd(evt *tcell.EventKey) *tcell.EventKey {
path := t.GetSelectedItem()
if path == "" {
return evt
}
ns, _ := client.Namespaced(path)
if err := clipboardWrite(ns); err != nil {
t.app.Flash().Err(err)
return nil
}
t.app.Flash().Info("Resource namespace copied to clipboard...")
return nil
}
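The new `cpNsCmd` above extracts the namespace from the selected item's path via `client.Namespaced`. A minimal sketch of that split, under the assumption that selection paths have the usual `namespace/name` shape (`splitNamespaced` is a hypothetical stand-in, not the real k9s helper):

```go
package main

import (
	"fmt"
	"strings"
)

// splitNamespaced splits a "namespace/name" path into its parts. Assumed
// stand-in for client.Namespaced as used by cpNsCmd above.
func splitNamespaced(path string) (ns, name string) {
	if i := strings.IndexRune(path, '/'); i >= 0 {
		return path[:i], path[i+1:]
	}
	return "", path // cluster-scoped paths carry no namespace segment
}

func main() {
	ns, name := splitNamespaced("kube-system/coredns-787d4945fb")
	fmt.Println(ns)   // kube-system
	fmt.Println(name) // coredns-787d4945fb
}
```

`cpNsCmd` discards the name half and ships only the namespace to the clipboard, which is what the new `cp namespace` binding advertises.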


@ -247,8 +247,8 @@ func (x *Xray) k9sEnv() Env {
}
// Aliases returns all available aliases.
func (x *Xray) Aliases() []string {
return append(x.meta.ShortNames, x.meta.SingularName, x.meta.Name)
func (x *Xray) Aliases() map[string]struct{} {
return aliasesFor(x.meta, x.app.command.AliasesFor(x.meta.Name))
}
func (x *Xray) logsCmd(prev bool) func(evt *tcell.EventKey) *tcell.EventKey {


@ -4,6 +4,7 @@
package vul
import (
"context"
"errors"
"fmt"
"sync"
@ -33,6 +34,12 @@ import (
var ImgScanner *imageScanner
const (
imgChanSize = 3
imgScanTimeout = 2 * time.Second
scanConcurrency = 2
)
type imageScanner struct {
store *store.Store
dbCloser *db.Closer
@ -60,6 +67,7 @@ func (s *imageScanner) ShouldExcludes(m metav1.ObjectMeta) bool {
func (s *imageScanner) GetScan(img string) (*Scan, bool) {
s.mx.RLock()
defer s.mx.RUnlock()
scan, ok := s.scans[img]
return scan, ok
@ -106,6 +114,7 @@ func (s *imageScanner) Stop() {
if s.dbCloser != nil {
s.dbCloser.Close()
s.dbCloser = nil
}
}
@ -127,27 +136,35 @@ func (s *imageScanner) isInitialized() bool {
return s.initialized
}
func (s *imageScanner) Enqueue(images ...string) {
func (s *imageScanner) Enqueue(ctx context.Context, images ...string) {
if !s.isInitialized() {
return
}
for _, i := range images {
go func(img string) {
if _, ok := s.GetScan(img); ok {
return
}
sc := newScan(img)
s.setScan(img, sc)
if err := s.scan(img, sc); err != nil {
log.Warn().Err(err).Msgf("Scan failed for img %s --", img)
}
}(i)
ctx, cancel := context.WithTimeout(ctx, imgScanTimeout)
defer cancel()
for _, img := range images {
if _, ok := s.GetScan(img); ok {
continue
}
go s.scanWorker(ctx, img)
}
}
func (s *imageScanner) scan(img string, sc *Scan) error {
func (s *imageScanner) scanWorker(ctx context.Context, img string) {
defer log.Debug().Msgf("ScanWorker bailing out!")
log.Debug().Msgf("ScanWorker processing: %q", img)
sc := newScan(img)
s.setScan(img, sc)
if err := s.scan(ctx, img, sc); err != nil {
log.Warn().Err(err).Msgf("Scan failed for img %s --", img)
}
}
func (s *imageScanner) scan(ctx context.Context, img string, sc *Scan) error {
defer func(t time.Time) {
log.Debug().Msgf("Scan %s images: %v", img, time.Since(t))
log.Debug().Msgf("ScanTime %q: %v", img, time.Since(t))
}(time.Now())
var errs error


@ -1,6 +1,6 @@
name: k9s
base: core20
version: 'v0.31.7'
version: 'v0.31.8'
summary: K9s is a CLI to view and manage your Kubernetes clusters.
description: |
K9s is a CLI to view and manage your Kubernetes clusters. By leveraging a terminal UI, you can easily traverse Kubernetes resources and view the state of your clusters in a single powerful session.