checkpoint

mine
derailed 2020-01-19 12:13:21 -07:00
parent 97540ded19
commit 4d557fb813
25 changed files with 277 additions and 96 deletions

assets/k9s_xray.png (new binary file, 50 KiB)

assets/skins/dracula.png (new binary file, 731 KiB)


@@ -10,10 +10,6 @@ Also if you dig this tool, please make some noise on social! [@kitesurfer](https
---
<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/k9s_helm.png" align="center" width="300" height="auto"/>
This was a long week in the saddle. You guys have been so awesome and supportive through these last few drops. Thank you!!
### Searchable Logs
There have been quite a few requests for this feature, and it should now be generally available in this drop. It works the same as the resource view, i.e. `/fred`; you can also specify a fuzzy filter using `/-f blee-duh`. The paint is still fresh on that deal, and I am not super confident it will work nominally, as I had to rework the logs to enable it. So it's totally possible I've hosed something in the process.
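For illustration, here is a minimal Go sketch of how a plain filter versus a `-f` fuzzy filter could behave. The function names and the subsequence-matching heuristic are assumptions for this example, not k9s's actual implementation.

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// fuzzyMatch reports whether every character of pattern appears in s
// in order (a simple subsequence heuristic, ASCII-only for brevity).
// Illustrative only; k9s's real matcher may differ.
func fuzzyMatch(s, pattern string) bool {
	i := 0
	for _, r := range s {
		if i < len(pattern) && r == rune(pattern[i]) {
			i++
		}
	}
	return i == len(pattern)
}

// filterLogs keeps only the lines matching expr. A leading "-f "
// switches to fuzzy matching, mirroring the `/-f blee-duh` syntax.
func filterLogs(lines []string, expr string) []string {
	var out []string
	if f := strings.TrimPrefix(expr, "-f "); f != expr {
		for _, l := range lines {
			if fuzzyMatch(l, f) {
				out = append(out, l)
			}
		}
		return out
	}
	re := regexp.MustCompile(expr)
	for _, l := range lines {
		if re.MatchString(l) {
			out = append(out, l)
		}
	}
	return out
}

func main() {
	logs := []string{"pod fred started", "pod blee crashed", "svc duh ready"}
	fmt.Println(filterLogs(logs, "fred"))   // plain filter
	fmt.Println(filterLogs(logs, "-f pfs")) // fuzzy: p..f..s in order
}
```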
@@ -40,7 +36,6 @@ k9s:
...
```
### K9s Slackers
I've enabled a [K9s slack channel](https://join.slack.com/t/k9sers/shared_invite/enQtOTAzNTczMDYwNjc5LWJlZjRkNzE2MzgzYWM0MzRiYjZhYTE3NDc1YjNhYmM2NTk2MjUxMWNkZGMzNjJiYzEyZmJiODBmZDYzOGQ5NWM) dedicated to all K9ers. This will be a place for us to meet and discuss ideas and use cases. I'll be honest here: I am not a big Slack aficionado, as I don't do very well with interrupt-driven workflows. But I think it will be a great resource for us all.


@@ -0,0 +1,78 @@
<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/k9s_small.png" align="right" width="200" height="auto"/>
# Release v0.13.0
## Notes
Thank you to all who contributed to flushing out issues and enhancements for K9s! I'll try to mark some of these issues as fixed, but if you don't mind, grab the latest rev and see if we're happier with some of the fixes! If you've filed an issue, please help me verify and close it. Your support, kindness, and awesome suggestions to make K9s better are, as ever, very much noticed and appreciated!
Also if you dig this tool, please make some noise on social! [@kitesurfer](https://twitter.com/kitesurfer)
---
### GH Sponsor
I know a lot of you have asked in the past for other ways to contribute to this project, i.e. liquids budget or Prozac supplies, whichever best applies here... So I've enabled GitHub Sponsors, and the button should now be available on this repo.
I'd like to personally thank the following folks for their support and efforts with this project, as I know some of you have been around since its inception almost a year ago!
* [Norbert Csibra](https://github.com/ncsibra)
* [Andrew Roth](https://github.com/RothAndrew)
* [James Smith](https://github.com/sedders123)
Big thanks in full effect to you all; I am so humbled and honored by your gesture!
### Dracula Skin
Since we're in the thank-you phase, might as well lasso in `Josh Symmonds` for contributing the `Dracula` K9s skin, now available in this repo under the skins directory. Here is a sneak peek of what K9s looks like under that skin. I am hopeful that like-minded, `graphically` inclined K9ers will contribute cool skins to this project for us to share/use in our Kubernetes clusters.
<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/skins/dracula.png"/>
### XRay Vision!
<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/k9s_xray.png"/>
Since we launched K9s, we've longed for a view that would display the relationships among resources. For instance, pods may reference configmaps/secrets directly via volumes, or indirectly with containers referencing configmaps/secrets via, say, env vars. Knowing which pods/deployments use a given configmap may involve some serious `kubectl` wizardry. K9s now has xray vision, which lets you view and traverse these relationships/associations.
For this, we are introducing a new command aka `xray`. Xray initially supports the following resources (more to come later...):
1. Deployments
2. Services
3. StatefulSets
4. DaemonSets
To enable cluster xray vision for deployments, simply type `:xray deploy`. You can also enter resource aliases/shortnames, or use the alias `x` for `xray`. Some of the commands available in table view mode are available here too, i.e. describe, view, shell, logs, delete, etc...
Xray will not only tell you when a resource is considered `TOAST`, i.e. in a bad state, but will also tell you if a dependency is actually broken, via the `TOAST_REF` status. For example: a pod referencing a configmap that has been deleted from the cluster.
The Xray view also supports filtering the resources via regex, label, or fuzzy filters. This affords more of an application view across several resources.
As it stands, Xray will check for the following resource dependencies:
* pods
* containers
* configmaps
* secrets
* serviceaccounts
* persistentvolumeclaims
Keep in mind these can be expensive traversals, and the view is eventually consistent, since dependent resources are lazy-loaded.
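The lazy-load behavior can be sketched in Go: the view renders immediately while dependents are fetched on a goroutine and filled in once they arrive. The types and names below are illustrative, not k9s code; in k9s the data would come from informer caches.

```go
package main

import (
	"fmt"
	"sync"
)

// node is an illustrative tree node whose children are loaded lazily.
type node struct {
	mu       sync.Mutex
	id       string
	children []string
}

// loadChildren fetches dependents on a goroutine; the UI keeps
// rendering and picks up the children once done is closed.
func (n *node) loadChildren(fetch func(string) []string, done chan<- struct{}) {
	go func() {
		deps := fetch(n.id) // potentially an expensive traversal
		n.mu.Lock()
		n.children = deps
		n.mu.Unlock()
		close(done)
	}()
}

func main() {
	n := &node{id: "default/fred"}
	done := make(chan struct{})
	n.loadChildren(func(string) []string {
		return []string{"v1/configmaps/fred-cm", "v1/secrets/fred-sec"}
	}, done)
	<-done // a real view would instead redraw on each change notification
	fmt.Println(len(n.children))
}
```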
We hope you'll find this feature useful! Keep in mind this is an initial drop; more will be coming in this area in subsequent releases. As always, your comments/suggestions are encouraged and welcomed.
### Breaking Change Header Toggle
It turns out that using `h` to toggle the header was a bad move, as it is used by view navigation. So we changed that shortcut to `Ctrl-h` to toggle the header expansion/collapse.
---
## Resolved Bugs/Features
* [Issue #494](https://github.com/derailed/k9s/issues/494)
* [Issue #490](https://github.com/derailed/k9s/issues/490)
* [Issue #488](https://github.com/derailed/k9s/issues/488)
* [Issue #486](https://github.com/derailed/k9s/issues/486)
---
<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/imhotep_logo.png" width="32" height="auto"/> © 2020 Imhotep Software LLC. All materials licensed under [Apache v2.0](http://www.apache.org/licenses/LICENSE-2.0)


@@ -87,11 +87,6 @@ func (a *APIClient) CanI(ns, gvr string, verbs []string) (auth bool, err error)
ns = AllNamespaces
}
key := makeCacheKey(ns, gvr, verbs)
defer func(t time.Time) string {
log.Debug().Msgf("AUTH elapsed %t--%q %v", auth, key, time.Since(t))
return "s"
}(time.Now())
if v, ok := a.cache.Get(key); ok {
if auth, ok = v.(bool); ok {
return auth, nil
@@ -107,7 +102,6 @@ func (a *APIClient) CanI(ns, gvr string, verbs []string) (auth bool, err error)
return auth, err
}
if !resp.Status.Allowed {
log.Debug().Msgf(" NO %q ;(", v)
a.cache.Add(key, false, cacheExpiry)
return auth, fmt.Errorf("`%s access denied for user on %q:%s", v, ns, gvr)
}


@@ -10,7 +10,6 @@ import (
"github.com/derailed/k9s/internal/client"
"github.com/derailed/k9s/internal/config"
"github.com/derailed/k9s/internal/render"
"github.com/rs/zerolog/log"
"k8s.io/apimachinery/pkg/runtime"
)
@@ -66,7 +65,6 @@ func (a *Alias) List(ctx context.Context, _ string) ([]runtime.Object, error) {
// AsGVR returns a matching gvr if it exists.
func (a *Alias) AsGVR(cmd string) (client.GVR, bool) {
gvr, ok := a.Aliases.Get(cmd)
log.Debug().Msgf("ASGVR %q %q %v", cmd, gvr, ok)
if ok {
return client.NewGVR(gvr), true
}
@@ -75,6 +73,7 @@ func (a *Alias) AsGVR(cmd string) (client.GVR, bool) {
// Get fetch a resource.
func (a *Alias) Get(_ context.Context, _ string) (runtime.Object, error) {
// BOZO!! NYI
panic("NYI!")
}


@@ -208,14 +208,12 @@ func loadPreferred(f Factory, m ResourceMetas) error {
}
func loadCRDs(f Factory, m ResourceMetas) {
log.Debug().Msgf("Loading CRDs...")
const crdGVR = "apiextensions.k8s.io/v1beta1/customresourcedefinitions"
oo, err := f.List(crdGVR, "", true, labels.Everything())
if err != nil {
log.Warn().Err(err).Msgf("Fail CRDs load")
return
}
log.Debug().Msgf(">>> CRDS count %d", len(oo))
for _, o := range oo {
meta, errs := extractMeta(o)


@@ -227,8 +227,6 @@ func (t *Tree) reconcile(ctx context.Context) error {
t.fireTreeTreeChanged(t.root)
}
log.Debug().Msgf("TREE ROOT returns %d children", len(t.root.Children))
return nil
}


@@ -75,7 +75,7 @@ func (e Event) Render(o interface{}, ns string, r *Row) error {
ev.Reason,
ev.Source.Component,
strconv.Itoa(int(ev.Count)),
Truncate(ev.Message, 80),
ev.Message,
toAge(ev.LastTimestamp))
return nil


@@ -175,10 +175,9 @@ func (a *App) clusterUpdater(ctx context.Context) {
for {
select {
case <-ctx.Done():
log.Debug().Msg("Cluster updater canceled!")
log.Debug().Msg("ClusterInfo updater canceled!")
return
case <-time.After(clusterRefresh):
// BOZO!! refact - should not hold ui for updating clusterinfo
a.refreshClusterInfo()
}
}
@@ -190,7 +189,6 @@ func (a *App) refreshClusterInfo() {
log.Error().Msgf("Something is wrong with the connection. Bailing out!")
a.BailOut()
}
a.QueueUpdateDraw(func() {
if !a.showHeader {
a.refreshIndicator()
@@ -291,13 +289,7 @@ func (a *App) BailOut() {
// Run starts the application loop
func (a *App) Run() {
ctx, cancel := context.WithCancel(context.Background())
defer cancel()
a.Halt()
if err := a.StylesUpdater(ctx, a); err != nil {
log.Error().Err(err).Msg("Unable to track skin changes")
}
a.Resume()
go func() {
<-time.After(splashTime * time.Second)


@@ -116,7 +116,7 @@ func (c *Command) specialCmd(cmd string) bool {
return true
case "x", "xray":
if err := c.xrayCmd(cmd); err != nil {
log.Error().Err(err).Msgf("Invalid command")
c.app.Flash().Err(err)
}
return true
default:


@@ -265,14 +265,14 @@ func (x *Xray) shellCmd(evt *tcell.EventKey) *tcell.EventKey {
return nil
}
log.Debug().Msgf("STATUS %q", ref.Status)
if ref.Status != "" {
x.app.Flash().Errf("%s is not in a running state", ref.Path)
return nil
}
if ref.Parent != nil {
x.shellIn(ref.Parent.Path, ref.Path)
_, co := client.Namespaced(ref.Path)
x.shellIn(ref.Parent.Path, co)
} else {
log.Error().Msgf("No parent found on container node %q", ref.Path)
}
@@ -544,7 +544,6 @@ func (x *Xray) update(node *xray.TreeNode) {
// XrayDataChanged notifies the model data changed.
func (x *Xray) TreeChanged(node *xray.TreeNode) {
log.Debug().Msgf("Tree Changed %d", len(node.Children))
x.count = node.Count(x.gvr.String())
x.update(x.filter(node))
x.UpdateTitle()
@@ -589,10 +588,8 @@ func (x *Xray) defaultContext() context.Context {
func (x *Xray) Start() {
x.Stop()
log.Debug().Msgf("XRAY STARTING! -- %q", x.selectedNode)
x.cmdBuff.AddListener(x.app.Cmd())
x.cmdBuff.AddListener(x)
// x.app.SetFocus(x)
ctx := x.defaultContext()
ctx, x.cancelFn = context.WithCancel(ctx)
@@ -602,7 +599,6 @@ func (x *Xray) Start() {
// Stop terminates watch loop.
func (x *Xray) Stop() {
log.Debug().Msgf("XRAY STOPPING!")
if x.cancelFn == nil {
return
}


@@ -63,11 +63,6 @@ func (f *Factory) Terminate() {
// List returns a resource collection.
func (f *Factory) List(gvr, ns string, wait bool, labels labels.Selector) ([]runtime.Object, error) {
defer func(t time.Time) {
log.Debug().Msgf("FACTORY-LIST [%t] %q::%q elapsed %v", wait, ns, gvr, time.Since(t))
}(time.Now())
log.Debug().Msgf("List %q:%q", ns, gvr)
inf, err := f.CanForResource(ns, gvr, client.MonitorAccess)
if err != nil {
return nil, err
@@ -87,18 +82,12 @@ func (f *Factory) List(gvr, ns string, wait bool, labels labels.Selector) ([]run
// Get retrieves a given resource.
func (f *Factory) Get(gvr, path string, wait bool, sel labels.Selector) (runtime.Object, error) {
defer func(t time.Time) {
log.Debug().Msgf("FACTORY-GET [%t] %q--%q elapsed %v", wait, gvr, path, time.Since(t))
}(time.Now())
ns, n := namespaced(path)
log.Debug().Msgf("GET %q:%q::%q", ns, gvr, n)
inf, err := f.CanForResource(ns, gvr, []string{client.GetVerb})
if err != nil {
return nil, err
}
DumpFactory(f)
if wait {
f.waitForCacheSync(ns)
}
@@ -121,19 +110,13 @@ func (f *Factory) waitForCacheSync(ns string) {
if !ok {
return
}
log.Debug().Msgf("!!!!!! WAIT FOR CACHE-SYNC %q", ns)
// Hang for a sec for the cache to refresh if still not done bail out!
c := make(chan struct{})
go func(c chan struct{}) {
<-time.After(defaultWaitTime)
log.Debug().Msgf("Wait for sync timed out!")
close(c)
}(c)
mm := fac.WaitForCacheSync(c)
for k, v := range mm {
log.Debug().Msgf("%t -- %s", v, k)
}
log.Debug().Msgf("Sync completed for ns %q", ns)
_ = fac.WaitForCacheSync(c)
}
// WaitForCacheSync waits for all factories to update their cache.
@@ -196,7 +179,6 @@ func (f *Factory) ForResource(ns, gvr string) informers.GenericInformer {
log.Error().Err(fmt.Errorf("MEOW! No informer for %q:%q", ns, gvr))
return inf
}
log.Debug().Msgf("FOR_RESOURCE %q:%q", ns, gvr)
fact.Start(f.stopChan)
return inf


@@ -91,7 +91,7 @@ func addRef(f dao.Factory, parent *TreeNode, gvr, id string, optional *bool) {
func validate(f dao.Factory, n *TreeNode, _ *bool) {
res, err := f.Get(n.GVR, n.ID, false, labels.Everything())
if err != nil || res == nil {
log.Debug().Msgf("Fail to located ref %q::%q -- %#v-%#v", n.GVR, n.ID, err, res)
log.Warn().Err(err).Msgf("Missing ref %q::%q", n.GVR, n.ID)
n.Extras[StatusKey] = MissingRefStatus
return
}


@@ -106,7 +106,7 @@ func makeFactory() testFactory {
}
type testFactory struct {
rows []runtime.Object
rows map[string][]runtime.Object
}
var _ dao.Factory = testFactory{}
@@ -115,14 +115,16 @@ func (f testFactory) Client() client.Connection {
return nil
}
func (f testFactory) Get(gvr, path string, wait bool, sel labels.Selector) (runtime.Object, error) {
if len(f.rows) > 0 {
return f.rows[0], nil
oo, ok := f.rows[gvr]
if ok && len(oo) > 0 {
return oo[0], nil
}
return nil, nil
}
func (f testFactory) List(gvr, ns string, wait bool, sel labels.Selector) ([]runtime.Object, error) {
if len(f.rows) > 0 {
return f.rows, nil
oo, ok := f.rows[gvr]
if ok {
return oo, nil
}
return nil, nil
}


@@ -27,7 +27,10 @@ func TestDeployRender(t *testing.T) {
var re xray.Deployment
for k := range uu {
f := makeFactory()
f.rows = []runtime.Object{load(t, "po")}
f.rows = map[string][]runtime.Object{
"v1/pods": []runtime.Object{load(t, "po")},
"v1/serviceaccounts": []runtime.Object{load(t, "sa")},
}
u := uu[k]
t.Run(k, func(t *testing.T) {


@@ -27,7 +27,7 @@ func TestDaemonSetRender(t *testing.T) {
var re xray.DaemonSet
for k := range uu {
f := makeFactory()
f.rows = []runtime.Object{load(t, "po")}
f.rows = map[string][]runtime.Object{"v1/pods": []runtime.Object{load(t, "po")}}
u := uu[k]
t.Run(k, func(t *testing.T) {
o := load(t, u.file)


@@ -10,16 +10,13 @@ import (
"github.com/derailed/k9s/internal/dao"
"github.com/derailed/k9s/internal/render"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/labels"
"k8s.io/apimachinery/pkg/runtime"
"k8s.io/kubernetes/pkg/util/node"
)
type Pod struct{}
func (p *Pod) Status(po *v1.Pod) {
}
func (p *Pod) Render(ctx context.Context, ns string, o interface{}) error {
pwm, ok := o.(*render.PodWithMetrics)
if !ok {
@@ -37,6 +34,25 @@ func (p *Pod) Render(ctx context.Context, ns string, o interface{}) error {
return fmt.Errorf("no factory found in context")
}
node := NewTreeNode("v1/pods", client.FQN(po.Namespace, po.Name))
parent, ok := ctx.Value(KeyParent).(*TreeNode)
if !ok {
return fmt.Errorf("Expecting a TreeNode but got %T", ctx.Value(KeyParent))
}
parent.Add(node)
if err := p.containerRefs(ctx, node, po.Namespace, po.Spec); err != nil {
return err
}
p.podVolumeRefs(f, node, po.Namespace, po.Spec.Volumes)
if err := p.serviceAccountRef(f, ctx, node, po.Namespace, po.Spec.ServiceAccountName); err != nil {
return err
}
return p.validate(node, po)
}
func (p *Pod) validate(node *TreeNode, po v1.Pod) error {
phase := p.phase(&po)
ss := po.Status.ContainerStatuses
cr, _, _ := p.statuses(ss)
@@ -48,32 +64,50 @@ func (p *Pod) Render(ctx context.Context, ns string, o interface{}) error {
status = CompletedStatus
}
node := NewTreeNode("v1/pods", client.FQN(po.Namespace, po.Name))
node.Extras[StatusKey] = status
node.Extras[StateKey] = strconv.Itoa(cr) + "/" + strconv.Itoa(len(ss))
parent, ok := ctx.Value(KeyParent).(*TreeNode)
if !ok {
return fmt.Errorf("Expecting a TreeNode but got %T", ctx.Value(KeyParent))
}
parent.Add(node)
ctx = context.WithValue(ctx, KeyParent, node)
var cre Container
for i := 0; i < len(po.Spec.InitContainers); i++ {
if err := cre.Render(ctx, ns, render.ContainerRes{Container: &po.Spec.InitContainers[i]}); err != nil {
return err
}
}
for i := 0; i < len(po.Spec.Containers); i++ {
if err := cre.Render(ctx, ns, render.ContainerRes{Container: &po.Spec.Containers[i]}); err != nil {
return err
}
}
p.podVolumeRefs(f, node, po.Namespace, po.Spec.Volumes)
return nil
}
func (*Pod) containerRefs(ctx context.Context, parent *TreeNode, ns string, spec v1.PodSpec) error {
ctx = context.WithValue(ctx, KeyParent, parent)
var cre Container
for i := 0; i < len(spec.InitContainers); i++ {
if err := cre.Render(ctx, ns, render.ContainerRes{Container: &spec.InitContainers[i]}); err != nil {
return err
}
}
for i := 0; i < len(spec.Containers); i++ {
if err := cre.Render(ctx, ns, render.ContainerRes{Container: &spec.Containers[i]}); err != nil {
return err
}
}
return nil
}
func (*Pod) serviceAccountRef(f dao.Factory, ctx context.Context, parent *TreeNode, ns, sa string) error {
if sa == "" {
return nil
}
id := client.FQN(ns, sa)
o, err := f.Get("v1/serviceaccounts", id, false, labels.Everything())
if err != nil {
return err
}
if o == nil {
addRef(f, parent, "v1/serviceaccounts", id, nil)
return nil
}
var saRE ServiceAccount
ctx = context.WithValue(ctx, KeyParent, parent)
return saRE.Render(ctx, ns, o)
}
func (*Pod) podVolumeRefs(f dao.Factory, parent *TreeNode, ns string, vv []v1.Volume) {
for _, v := range vv {
sec := v.VolumeSource.Secret


@@ -19,13 +19,13 @@ func TestPodRender(t *testing.T) {
"plain": {
file: "po",
level1: 1,
level2: 2,
level2: 3,
status: xray.OkStatus,
},
"withInit": {
file: "init",
level1: 1,
level2: 1,
level2: 2,
status: xray.OkStatus,
},
}

internal/xray/sa.go (new file, 50 lines)

@@ -0,0 +1,50 @@
package xray
import (
"context"
"fmt"
"github.com/derailed/k9s/internal"
"github.com/derailed/k9s/internal/client"
"github.com/derailed/k9s/internal/dao"
v1 "k8s.io/api/core/v1"
"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
"k8s.io/apimachinery/pkg/runtime"
)
type ServiceAccount struct{}
func (s *ServiceAccount) Render(ctx context.Context, ns string, o interface{}) error {
raw, ok := o.(*unstructured.Unstructured)
if !ok {
return fmt.Errorf("ServiceAccount render expecting *Unstructured, but got %T", o)
}
var sa v1.ServiceAccount
err := runtime.DefaultUnstructuredConverter.FromUnstructured(raw.Object, &sa)
if err != nil {
return err
}
f, ok := ctx.Value(internal.KeyFactory).(dao.Factory)
if !ok {
return fmt.Errorf("no factory found in context")
}
node := NewTreeNode("v1/serviceaccounts", client.FQN(sa.Namespace, sa.Name))
node.Extras[StatusKey] = OkStatus
parent, ok := ctx.Value(KeyParent).(*TreeNode)
if !ok {
return fmt.Errorf("Expecting a TreeNode but got %T", ctx.Value(KeyParent))
}
parent.Add(node)
for _, sec := range sa.Secrets {
addRef(f, node, "v1/secrets", client.FQN(sa.Namespace, sec.Name), nil)
}
for _, sec := range sa.ImagePullSecrets {
addRef(f, node, "v1/secrets", client.FQN(sa.Namespace, sec.Name), nil)
}
return nil
}

internal/xray/sa_test.go (new file, 40 lines)

@@ -0,0 +1,40 @@
package xray_test
import (
"context"
"testing"
"github.com/derailed/k9s/internal"
"github.com/derailed/k9s/internal/xray"
"github.com/stretchr/testify/assert"
)
func TestSARender(t *testing.T) {
uu := map[string]struct {
file string
level1, level2 int
status string
}{
"plain": {
file: "sa",
level1: 1,
level2: 2,
status: xray.OkStatus,
},
}
var re xray.ServiceAccount
for k := range uu {
u := uu[k]
t.Run(k, func(t *testing.T) {
o := load(t, u.file)
root := xray.NewTreeNode("serviceaccounts", "serviceaccounts")
ctx := context.WithValue(context.Background(), xray.KeyParent, root)
ctx = context.WithValue(ctx, internal.KeyFactory, makeFactory())
assert.Nil(t, re.Render(ctx, "", o))
assert.Equal(t, u.level1, root.CountChildren())
assert.Equal(t, u.level2, root.Children[0].CountChildren())
})
}
}


@@ -29,7 +29,7 @@ func TestStatefulSetRender(t *testing.T) {
u := uu[k]
t.Run(k, func(t *testing.T) {
f := makeFactory()
f.rows = []runtime.Object{load(t, "po")}
f.rows = map[string][]runtime.Object{"v1/pods": []runtime.Object{load(t, "po")}}
o := load(t, u.file)
root := xray.NewTreeNode("statefulsets", "statefulsets")


@@ -27,7 +27,7 @@ func TestServiceRender(t *testing.T) {
var re xray.Service
for k := range uu {
f := makeFactory()
f.rows = []runtime.Object{load(t, "po")}
f.rows = map[string][]runtime.Object{"v1/pods": []runtime.Object{load(t, "po")}}
u := uu[k]
t.Run(k, func(t *testing.T) {


@@ -0,0 +1,23 @@
{
"apiVersion": "v1",
"kind": "ServiceAccount",
"metadata": {
"annotations": {
"kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"ServiceAccount\",\"metadata\":{\"annotations\":{},\"name\":\"zorg\",\"namespace\":\"default\"},\"secrets\":[{\"name\":\"zorg\"}]}\n"
},
"creationTimestamp": "2020-01-19T16:31:41Z",
"name": "zorg",
"namespace": "default",
"resourceVersion": "3667084",
"selfLink": "/api/v1/namespaces/default/serviceaccounts/zorg",
"uid": "be8959a7-e324-4cfd-88c1-5fd45c028be6"
},
"secrets": [
{
"name": "zorg"
},
{
"name": "zorg-token-rhhzn"
}
]
}


@@ -116,17 +116,14 @@ func (t *TreeNode) Diff(d *TreeNode) bool {
}
if t.CountChildren() != d.CountChildren() {
log.Debug().Msgf("SIZE-DIFF")
return true
}
if t.ID != d.ID || t.GVR != d.GVR || !reflect.DeepEqual(t.Extras, d.Extras) {
log.Debug().Msgf("ID DIFF")
return true
}
for i := 0; i < len(t.Children); i++ {
if t.Children[i].Diff(d.Children[i]) {
log.Debug().Msgf("CHILD-DIFF")
return true
}
}
@@ -355,7 +352,7 @@ func toEmoji(gvr string) string {
case "containers":
return "🐳"
case "v1/serviceaccounts":
return "🛎"
return "💁‍♀️"
case "v1/persistentvolumes":
return "📚"
case "v1/persistentvolumeclaims":
@@ -363,14 +360,14 @@ func toEmoji(gvr string) string {
case "v1/secrets":
return "🔒"
case "v1/configmaps":
return "🗄"
return "🔑"
default:
return "📎"
}
}
func (t TreeNode) colorize() string {
const colorFmt = "%s %s [%s::b]%s[::]"
const colorFmt = "%s [gray::-][%s[gray::-]] [%s::b]%s[::]"
_, n := client.Namespaced(t.ID)
color, flag := "white", "[green::b]OK"