K8s update (#1041)
* rev k8s
* add rel notes
* checkpoint
* add helm
* cleaning up
* cleaning up
* rel notes
* check deps
* rev apis + bugs
* cleanup cronjob view
* update rel notes
* clean up + docsmine
parent 58010eedfd
commit 561a0c1f41
@@ -0,0 +1,31 @@ (new workflow file)

```yaml
name: K9s Checks

on:
  workflow_dispatch:
  push:
    branches:
      - master
    tags:
      - rc*
  pull_request:
    branches:
      - master
jobs:
  build:
    runs-on: ubuntu-20.04
    steps:
      - name: Install Go
        uses: actions/setup-go@v2
        with:
          go-version: 1.16

      - name: Setup GO env
        run: go env -w CGO_ENABLED=0

      - name: Checkout Code
        uses: actions/checkout@v2

      - name: Run Tests
        run: make test
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```
Makefile

```diff
@@ -3,7 +3,7 @@ PACKAGE := github.com/derailed/$(NAME)
 GIT := $(shell git rev-parse --short HEAD)
 SOURCE_DATE_EPOCH ?= $(shell date +%s)
 DATE := $(shell date -u -d @${SOURCE_DATE_EPOCH} +%FT%T%Z)
-VERSION ?= v0.24.1
+VERSION ?= v0.24.3
 IMG_NAME := derailed/k9s
 IMAGE := ${IMG_NAME}:${VERSION}
```
@@ -0,0 +1,64 @@ (new release notes file)

```markdown
<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/k9s_small.png" align="right" width="200" height="auto"/>

# Release v0.24.3

## Notes

Thank you to all that contributed with flushing out issues and enhancements for K9s! I'll try to mark some of these issues as fixed. But if you don't mind, grab the latest rev and see if we're happier with some of the fixes! If you've filed an issue, please help me verify and close. Your support, kindness and awesome suggestions to make K9s better are as ever very much noted and appreciated!

If you feel K9s is helping your Kubernetes journey, please consider joining our [sponsorship program](https://github.com/sponsors/derailed) and/or make some noise on social! [@kitesurfer](https://twitter.com/kitesurfer)

On Slack? Please join us [K9slackers](https://join.slack.com/t/k9sers/shared_invite/enQtOTA5MDEyNzI5MTU0LWQ1ZGI3MzliYzZhZWEyNzYxYzA3NjE0YTk1YmFmNzViZjIyNzhkZGI0MmJjYzhlNjdlMGJhYzE2ZGU1NjkyNTM)

## A Word From Our Sponsors...

I would like to extend a `Big Thank You` to the following generous K9s friends for joining our sponsorship program and supporting this project!

* [Levkov](https://github.com/levkov)
* [Michael McCafferty](https://github.com/mikemcc)
* [Stephan Skydan](https://github.com/sskydan)
* [Terrac Skiens](https://github.com/bluefishforsale)
* [Zafer Abo-Samra](https://github.com/Inbiten)
* [Gabriel Martinez](https://github.com/GMartinez-Sisti)
* [Pierre Lebrun](https://github.com/pierreyves-lebrun)
* [Luc Suryo](https://github.com/my10c)
* [Sean O'Brien](https://github.com/sob)

## Maintenance Release!

o Update Kubernetes to v0.20.5

## There are some that call me... Alpha!

K9s is, and will remain, open-source software. As such it is free, and we will continue to maintain this repo!

That said, in order to support our efforts, we've recently launched [K9sAlpha](https://k9salpha.io), a freemium version of K9s. K9sAlpha unlocks additional features and enhancements.

If you would like to support us, you can either join our GitHub sponsors or purchase a K9sAlpha license. If you are an active member of our GitHub sponsorship program, you are eligible for a free K9sAlpha license. Please reach out for your shoe-phone and contact us for your personalized license key.

<img src="https://k9salpha.io/assets/k9salpha-blue.png" align="center" width="300" height="auto"/>

---

## Resolved Issues

* [Issue #1038](https://github.com/derailed/k9s/issues/1038) Release Cronjob API
* [Issue #1035](https://github.com/derailed/k9s/issues/1035) Update Ingress API Group
* [Issue #1028](https://github.com/derailed/k9s/issues/1028) Go compile
* [Issue #1024](https://github.com/derailed/k9s/issues/1024) Add Pod Readiness/Nominated cols
* [Issue #1013](https://github.com/derailed/k9s/issues/1013) Panic string negative repeat count
* [Issue #1005](https://github.com/derailed/k9s/issues/1005) No x86_64 binaries
* [Issue #735](https://github.com/derailed/k9s/issues/735) Shell into windows containers

## Resolved PRs

* [PR #1022](https://github.com/derailed/k9s/pull/1022) Update release
* [PR #1012](https://github.com/derailed/k9s/pull/1012) Fix typo for cluster based skins
* [PR #1009](https://github.com/derailed/k9s/pull/1009) Add webi installer info
* [PR #1004](https://github.com/derailed/k9s/pull/1004) Correction CronJob ApiVersion
* [PR #1026](https://github.com/derailed/k9s/pull/1026) Add option to hide logo
* [PR #997](https://github.com/derailed/k9s/pull/997) Shell into windows containers

---

<img src="https://raw.githubusercontent.com/derailed/k9s/master/assets/imhotep_logo.png" width="32" height="auto"/> © 2020 Imhotep Software LLC. All materials licensed under [Apache v2.0](http://www.apache.org/licenses/LICENSE-2.0)
```
cmd/root.go

```diff
@@ -1,7 +1,6 @@
 package cmd

 import (
-	"flag"
 	"fmt"
 	"runtime/debug"
@@ -13,7 +12,6 @@ import (
 	"github.com/rs/zerolog/log"
 	"github.com/spf13/cobra"
 	"k8s.io/cli-runtime/pkg/genericclioptions"
-	"k8s.io/klog"
 )

 const (
@@ -41,24 +39,6 @@ func init() {
 	rootCmd.AddCommand(versionCmd(), infoCmd())
 	initK9sFlags()
 	initK8sFlags()
-
-	var flags flag.FlagSet
-	klog.InitFlags(&flags)
-	if err := flags.Set("logtostderr", "false"); err != nil {
-		panic(err)
-	}
-	if err := flags.Set("alsologtostderr", "false"); err != nil {
-		panic(err)
-	}
-	if err := flags.Set("stderrthreshold", "fatal"); err != nil {
-		panic(err)
-	}
-	if err := flags.Set("v", "0"); err != nil {
-		panic(err)
-	}
-	if err := flags.Set("log_file", config.K9sLogs); err != nil {
-		panic(err)
-	}
 }

 // Execute root command
```
go.mod

```diff
@@ -1,48 +1,46 @@
 module github.com/derailed/k9s

-go 1.15
+go 1.16

 replace (
 	github.com/docker/distribution => github.com/docker/distribution v0.0.0-20191216044856-a8371794149d
 	github.com/docker/docker => github.com/moby/moby v17.12.0-ce-rc1.0.20200618181300-9dc6525e6118+incompatible
 )

 require (
 	github.com/Azure/go-autorest v14.0.0+incompatible // indirect
-	github.com/atotto/clipboard v0.1.2
+	github.com/atotto/clipboard v0.1.4
-	github.com/cenkalti/backoff v2.2.1+incompatible
+	github.com/cenkalti/backoff/v4 v4.1.0
 	github.com/derailed/popeye v0.9.0
-	github.com/derailed/tview v0.4.9
+	github.com/derailed/tview v0.4.10
 	github.com/drone/envsubst v1.0.2 // indirect
 	github.com/fatih/color v1.10.0
 	github.com/fsnotify/fsnotify v1.4.9
 	github.com/fvbommel/sortorder v1.0.2
-	github.com/gdamore/tcell/v2 v2.0.1-0.20201017141208-acf90d56d591
+	github.com/gdamore/tcell/v2 v2.2.0
 	github.com/ghodss/yaml v1.0.0
 	github.com/golang/protobuf v1.4.2 // indirect
 	github.com/kylelemons/godebug v1.1.0 // indirect
-	github.com/mattn/go-runewidth v0.0.9
+	github.com/mattn/go-runewidth v0.0.10
 	github.com/openfaas/faas v0.0.0-20200207215241-6afae214e3ec
 	github.com/openfaas/faas-cli v0.0.0-20200124160744-30b7cec9634c
 	github.com/openfaas/faas-provider v0.15.0
-	github.com/petergtz/pegomock v2.7.0+incompatible
+	github.com/petergtz/pegomock v2.9.0+incompatible
 	github.com/rakyll/hey v0.1.4
 	github.com/rs/zerolog v1.20.0
 	github.com/ryanuber/go-glob v1.0.0 // indirect
 	github.com/sahilm/fuzzy v0.1.0
-	github.com/spf13/cobra v1.1.1
-	github.com/stretchr/testify v1.6.1
-	golang.org/x/net v0.0.0-20200519113804-d87ec0cfa476 // indirect
-	golang.org/x/sys v0.0.0-20200519105757-fe76b779f299 // indirect
-	golang.org/x/text v0.3.2
-	google.golang.org/genproto v0.0.0-20200515170657-fc4c6c6a6587 // indirect
-	google.golang.org/grpc v1.29.1 // indirect
-	gopkg.in/yaml.v2 v2.2.8
-	helm.sh/helm/v3 v3.2.0
-	k8s.io/api v0.18.8
-	k8s.io/apimachinery v0.18.8
-	k8s.io/cli-runtime v0.18.8
-	k8s.io/client-go v0.18.8
+	github.com/spf13/cobra v1.1.3
+	github.com/stretchr/testify v1.7.0
+	golang.org/x/text v0.3.5
+	gopkg.in/yaml.v2 v2.4.0
+	helm.sh/helm/v3 v3.5.2
+	k8s.io/api v0.20.5
+	k8s.io/apimachinery v0.20.5
+	k8s.io/cli-runtime v0.20.5
+	k8s.io/client-go v0.20.5
 	k8s.io/klog v1.0.0
-	k8s.io/kubectl v0.18.2
-	k8s.io/metrics v0.18.8
-	rsc.io/letsencrypt v0.0.3 // indirect
+	k8s.io/klog/v2 v2.4.0 // indirect
+	k8s.io/kubectl v0.20.5
+	k8s.io/metrics v0.20.5
 	sigs.k8s.io/yaml v1.2.0
 	vbom.ml/util v0.0.0-20180919145318-efcd4e0f9787 // indirect
 )
```
```diff
@@ -84,6 +84,11 @@ func (a *Aliases) Define(gvr string, aliases ...string) {
 	a.mx.Lock()
 	defer a.mx.Unlock()

+	// BOZO!! Could not get full events struct using this api group??
+	if gvr == "events.k8s.io/v1/events" || gvr == "extensions/v1beta1" {
+		return
+	}
+
 	for _, alias := range aliases {
 		if _, ok := a.Alias[alias]; ok {
 			continue
```
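Alias registration above is first-wins: an alias already claimed by another GVR is left untouched. A compact sketch of that rule, assuming a plain map (hypothetical `define` helper; the real `Aliases.Define` also guards the map with a mutex):

```go
package main

import "fmt"

// define registers aliases for a GVR, skipping any alias that is
// already taken — the same first-wins rule as Aliases.Define.
func define(registry map[string]string, gvr string, aliases ...string) {
	for _, a := range aliases {
		if _, ok := registry[a]; ok {
			continue
		}
		registry[a] = gvr
	}
}

func main() {
	r := map[string]string{}
	define(r, "apps/v1/deployments", "dp", "deploy")
	define(r, "extensions/v1beta1/deployments", "deploy") // skipped: "deploy" is taken
	fmt.Println(r["deploy"])                              // prints apps/v1/deployments
}
```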
```diff
@@ -26,6 +26,6 @@ func (c *CustomResourceDefinition) List(ctx context.Context, _ string) ([]runtim
 		labelSel = sel.AsSelector()
 	}

-	const gvr = "apiextensions.k8s.io/v1beta1/customresourcedefinitions"
+	const gvr = "apiextensions.k8s.io/v1/customresourcedefinitions"
 	return c.Factory.List(gvr, "-", false, labelSel)
 }
```
```diff
@@ -16,7 +16,11 @@ import (
 	"k8s.io/apimachinery/pkg/util/rand"
 )

-const maxJobNameSize = 42
+const (
+	maxJobNameSize = 42
+	cronJobGVR     = "batch/v1beta1/cronjobs"
+	jobGVR         = "batch/v1/jobs"
+)

 var (
 	_ Accessor = (*CronJob)(nil)
@@ -31,7 +35,7 @@ type CronJob struct {
 // Run a CronJob.
 func (c *CronJob) Run(path string) error {
 	ns, _ := client.Namespaced(path)
-	auth, err := c.Client().CanI(ns, "batch/v1/jobs", []string{client.GetVerb, client.CreateVerb})
+	auth, err := c.Client().CanI(ns, jobGVR, []string{client.GetVerb, client.CreateVerb})
 	if err != nil {
 		return err
 	}
@@ -39,7 +43,7 @@ func (c *CronJob) Run(path string) error {
 		return fmt.Errorf("user is not authorized to run jobs")
 	}

-	o, err := c.Factory.Get("batch/v1beta1/cronjobs", path, true, labels.Everything())
+	o, err := c.Factory.Get(cronJobGVR, path, true, labels.Everything())
 	if err != nil {
 		return err
 	}
@@ -107,28 +111,40 @@ func (c *CronJob) ScanSA(ctx context.Context, fqn string, wait bool) (Refs, erro
 	return refs, nil
 }

-// SetSuspend a CronJob.
-func (c *CronJob) SetSuspend(ctx context.Context, path string, suspend bool) error {
-	ns, n := client.Namespaced(path)
-	auth, err := c.Client().CanI(ns, "batch/v1beta1/CronJob", []string{client.GetVerb, client.UpdateVerb})
+// ToggleSuspend toggles suspend/resume on a CronJob.
+func (c *CronJob) ToggleSuspend(ctx context.Context, path string) error {
+	ns, _ := client.Namespaced(path)
+
+	auth, err := c.Client().CanI(ns, cronJobGVR, []string{client.GetVerb, client.UpdateVerb})
 	if err != nil {
 		return err
 	}
 	if !auth {
-		return fmt.Errorf("user is not authorized to update a CronJob")
+		return fmt.Errorf("user is not authorized to run jobs")
 	}

+	o, err := c.Get(ctx, path)
+	if err != nil {
+		return err
+	}
+	var cj batchv1beta1.CronJob
+	err = runtime.DefaultUnstructuredConverter.FromUnstructured(o.(*unstructured.Unstructured).Object, &cj)
+	if err != nil {
+		return errors.New("expecting CronJob resource")
+	}
+
 	dial, err := c.Client().Dial()
 	if err != nil {
 		return err
 	}
-	cronjob, err := dial.BatchV1beta1().CronJobs(ns).Get(ctx, n, metav1.GetOptions{})
-	if err != nil {
-		return err
-	}
-
-	cronjob.Spec.Suspend = &suspend
-	_, err = dial.BatchV1beta1().CronJobs(ns).Update(ctx, cronjob, metav1.UpdateOptions{})
+	if cj.Spec.Suspend != nil {
+		current := !*cj.Spec.Suspend
+		cj.Spec.Suspend = &current
+	} else {
+		suspended := true
+		cj.Spec.Suspend = &suspended
+	}
+	_, err = dial.BatchV1beta1().CronJobs(ns).Update(ctx, &cj, metav1.UpdateOptions{})

 	return err
 }
```
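The toggle above hinges on `CronJob.Spec.Suspend` being a `*bool`: nil means the field was never set, so the first toggle must allocate a fresh value. A minimal sketch of that pointer flip (hypothetical standalone `toggle` helper, not code from the commit):

```go
package main

import "fmt"

// toggle flips a CronJob-style *bool suspend flag. A nil pointer is
// treated as "not suspended", so the first toggle suspends.
func toggle(suspend *bool) *bool {
	if suspend != nil {
		current := !*suspend
		return &current
	}
	b := true
	return &b
}

func main() {
	s := toggle(nil)
	fmt.Println(*s) // prints true: first toggle suspends
	s = toggle(s)
	fmt.Println(*s) // prints false: second toggle resumes
}
```

Returning a new pointer each time avoids mutating a value that other readers may still hold.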
```diff
@@ -73,7 +73,7 @@ func (o DrainOptions) toDrainHelper(k kubernetes.Interface, w io.Writer) drain.H
 		Client: k,
 		GracePeriodSeconds: o.GracePeriodSeconds,
 		Timeout: o.Timeout,
-		DeleteLocalData: o.DeleteLocalData,
+		DeleteEmptyDirData: o.DeleteEmptyDirData,
 		IgnoreAllDaemonSets: o.IgnoreAllDaemonSets,
 		Out: w,
 		ErrOut: w,
```
```diff
@@ -43,7 +43,7 @@ func AccessorFor(f Factory, gvr client.GVR) (Accessor, error) {
 		client.NewGVR("v1/nodes"): &Node{},
 		client.NewGVR("apps/v1/deployments"): &Deployment{},
-		client.NewGVR("apps/v1/daemonsets"): &DaemonSet{},
-		client.NewGVR("extensions/v1beta1/daemonsets"): &DaemonSet{},
+		client.NewGVR("apps/v1/daemonsets"): &DaemonSet{},
 		client.NewGVR("apps/v1/statefulsets"): &StatefulSet{},
 		client.NewGVR("batch/v1beta1/cronjobs"): &CronJob{},
 		client.NewGVR("batch/v1/jobs"): &Job{},
```
```diff
@@ -298,6 +298,9 @@ func loadPreferred(f Factory, m ResourceMetas) error {
 	for _, r := range rr {
 		for _, res := range r.APIResources {
 			gvr := client.FromGVAndR(r.GroupVersion, res.Name)
+			if isDeprecated(gvr) {
+				continue
+			}
 			res.Group, res.Version = gvr.G(), gvr.V()
 			if res.SingularName == "" {
 				res.SingularName = strings.ToLower(res.Kind)
```
```diff
@@ -309,8 +312,17 @@ func loadPreferred(f Factory, m ResourceMetas) error {
 	return nil
 }

+var deprecatedGVRs = map[client.GVR]struct{}{
+	client.NewGVR("extensions/v1beta1/ingresses"): {},
+}
+
+func isDeprecated(gvr client.GVR) bool {
+	_, ok := deprecatedGVRs[gvr]
+	return ok
+}
+
 func loadCRDs(f Factory, m ResourceMetas) {
-	const crdGVR = "apiextensions.k8s.io/v1beta1/customresourcedefinitions"
+	const crdGVR = "apiextensions.k8s.io/v1/customresourcedefinitions"
	oo, err := f.List(crdGVR, client.ClusterScope, false, labels.Everything())
 	if err != nil {
 		log.Warn().Err(err).Msgf("Fail CRDs load")
```
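The deprecation guard is a plain set-membership test over an empty-struct map. A self-contained sketch with plain strings standing in for `client.GVR` (the `isDeprecated` name mirrors the commit; the string keys are illustrative):

```go
package main

import "fmt"

// deprecated lists group/version/resource strings to skip during API
// discovery, mirroring the commit's deprecatedGVRs set. struct{} values
// cost no memory; only key presence matters.
var deprecated = map[string]struct{}{
	"extensions/v1beta1/ingresses": {},
}

func isDeprecated(gvr string) bool {
	_, ok := deprecated[gvr]
	return ok
}

func main() {
	fmt.Println(isDeprecated("extensions/v1beta1/ingresses"))  // prints true
	fmt.Println(isDeprecated("networking.k8s.io/v1/ingresses")) // prints false
}
```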
```diff
@@ -347,7 +359,10 @@ func extractMeta(o runtime.Object) (metav1.APIResource, []error) {
 	m.Name, errs = extractStr(meta, "name", errs)

 	m.Group, errs = extractStr(spec, "group", errs)
-	m.Version, errs = extractStr(spec, "version", errs)
+	versions, errs := extractSlice(spec, "versions", errs)
+	if len(versions) > 0 {
+		m.Version = versions[0]
+	}

 	var scope string
 	scope, errs = extractStr(spec, "scope", errs)
```
```diff
@@ -383,11 +398,20 @@ func extractSlice(m map[string]interface{}, n string, errs []error) ([]string, [
 		return s, append(errs, fmt.Errorf("failed to extract slice %s -- %#v", n, m))
 	}

-	ss := make([]string, len(ii))
-	for i, name := range ii {
-		ss[i], ok = name.(string)
-		if !ok {
-			return ss, append(errs, fmt.Errorf("expecting string shortnames"))
+	ss := make([]string, 0, len(ii))
+	for _, name := range ii {
+		switch o := name.(type) {
+		case string:
+			ss = append(ss, o)
+		case map[string]interface{}:
+			s, ok := o["name"].(string)
+			if ok {
+				ss = append(ss, s)
+			} else {
+				errs = append(errs, fmt.Errorf("unable to find key %q in map", n))
+			}
+		default:
+			errs = append(errs, fmt.Errorf("unknown field type %T for key %q", o, n))
 		}
 	}
```
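The rewritten extractSlice handles both shapes the CRD APIs produce: v1beta1 short names arrive as plain strings, while v1 `versions` entries are objects carrying a `name` key. A sketch of that type switch (hypothetical `namesOf` helper over generic `interface{}` values):

```go
package main

import "fmt"

// namesOf collects names from a slice whose elements may be plain
// strings or maps with a "name" key — the two shapes extractSlice
// now tolerates. Unrecognized elements are silently skipped here.
func namesOf(ii []interface{}) []string {
	ss := make([]string, 0, len(ii))
	for _, v := range ii {
		switch o := v.(type) {
		case string:
			ss = append(ss, o)
		case map[string]interface{}:
			if s, ok := o["name"].(string); ok {
				ss = append(ss, s)
			}
		}
	}
	return ss
}

func main() {
	vv := []interface{}{"dr", map[string]interface{}{"name": "v1alpha3"}}
	fmt.Println(namesOf(vv)) // prints [dr v1alpha3]
}
```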
```diff
@@ -4,7 +4,7 @@
   "metadata": {
     "annotations": {
       "helm.sh/resource-policy": "keep",
-      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apiextensions.k8s.io/v1beta1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{\"helm.sh/resource-policy\":\"keep\"},\"labels\":{\"app\":\"istio-pilot\",\"chart\":\"istio\",\"heritage\":\"Tiller\",\"release\":\"istio\"},\"name\":\"destinationrules.networking.istio.io\"},\"spec\":{\"additionalPrinterColumns\":[{\"JSONPath\":\".spec.host\",\"description\":\"The name of a service from the service registry\",\"name\":\"Host\",\"type\":\"string\"},{\"JSONPath\":\".metadata.creationTimestamp\",\"description\":\"CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\\n\\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata\",\"name\":\"Age\",\"type\":\"date\"}],\"group\":\"networking.istio.io\",\"names\":{\"categories\":[\"istio-io\",\"networking-istio-io\"],\"kind\":\"DestinationRule\",\"listKind\":\"DestinationRuleList\",\"plural\":\"destinationrules\",\"shortNames\":[\"dr\"],\"singular\":\"destinationrule\"},\"scope\":\"Namespaced\",\"version\":\"v1alpha3\"}}\n"
+      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apiextensions.k8s.io/v1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{\"helm.sh/resource-policy\":\"keep\"},\"labels\":{\"app\":\"istio-pilot\",\"chart\":\"istio\",\"heritage\":\"Tiller\",\"release\":\"istio\"},\"name\":\"destinationrules.networking.istio.io\"},\"spec\":{\"additionalPrinterColumns\":[{\"JSONPath\":\".spec.host\",\"description\":\"The name of a service from the service registry\",\"name\":\"Host\",\"type\":\"string\"},{\"JSONPath\":\".metadata.creationTimestamp\",\"description\":\"CreationTimestamp is a timestamp representing the server time when this object was created. It is not guaranteed to be set in happens-before order across separate operations. Clients may not set this value. It is represented in RFC3339 form and is in UTC.\\n\\nPopulated by the system. Read-only. Null for lists. More info: https://git.k8s.io/community/contributors/devel/api-conventions.md#metadata\",\"name\":\"Age\",\"type\":\"date\"}],\"group\":\"networking.istio.io\",\"names\":{\"categories\":[\"istio-io\",\"networking-istio-io\"],\"kind\":\"DestinationRule\",\"listKind\":\"DestinationRuleList\",\"plural\":\"destinationrules\",\"shortNames\":[\"dr\"],\"singular\":\"destinationrule\"},\"scope\":\"Namespaced\",\"version\":\"v1alpha3\"}}\n"
     },
     "creationTimestamp": "2019-12-30T16:13:02Z",
     "generation": 1,
```
```diff
@@ -77,7 +77,7 @@ type DrainOptions struct {
 	GracePeriodSeconds int
 	Timeout time.Duration
 	IgnoreAllDaemonSets bool
-	DeleteLocalData bool
+	DeleteEmptyDirData bool
 	Force bool
 }
```
```diff
@@ -18,13 +18,13 @@ var Registry = map[string]ResourceMeta{
 		DAO: &dao.Dir{},
 		Renderer: &render.Dir{},
 	},
-	"pulses": {
-		DAO: &dao.Pulse{},
-	},
+	"helm": {
+		DAO: &dao.Helm{},
+		Renderer: &render.Helm{},
+	},
+	"pulses": {
+		DAO: &dao.Pulse{},
+	},
 	"openfaas": {
 		DAO: &dao.OpenFaas{},
 		Renderer: &render.OpenFaas{},
@@ -135,15 +135,6 @@ var Registry = map[string]ResourceMeta{
 	},

-	// Extensions...
-	"extensions/v1beta1/daemonsets": {
-		Renderer: &render.DaemonSet{},
-	},
-	"extensions/v1beta1/ingresses": {
-		Renderer: &render.Ingress{},
-	},
-	"extensions/v1beta1/networkpolicies": {
-		Renderer: &render.NetworkPolicy{},
-	},
 	"networking.k8s.io/v1/networkpolicies": {
 		Renderer: &render.NetworkPolicy{},
 	},
@@ -176,9 +167,6 @@ var Registry = map[string]ResourceMeta{
 	"apiextensions.k8s.io/v1/customresourcedefinitions": {
 		Renderer: &render.CustomResourceDefinition{},
 	},
-	"apiextensions.k8s.io/v1beta1/customresourcedefinitions": {
-		Renderer: &render.CustomResourceDefinition{},
-	},

 	// Storage...
 	"storage.k8s.io/v1/storageclasses": {
```
```diff
@@ -33,7 +33,7 @@ func TestTableReconcile(t *testing.T) {
 	err := ta.reconcile(ctx)
 	assert.Nil(t, err)
 	data := ta.Peek()
-	assert.Equal(t, 20, len(data.Header))
+	assert.Equal(t, 22, len(data.Header))
 	assert.Equal(t, 1, len(data.RowEvents))
 	assert.Equal(t, client.NamespaceAll, data.Namespace)
 }
@@ -106,7 +106,7 @@ func TestTableHydrate(t *testing.T) {

 	assert.Nil(t, hydrate("blee", oo, rr, render.Pod{}))
 	assert.Equal(t, 1, len(rr))
-	assert.Equal(t, 20, len(rr[0].Fields))
+	assert.Equal(t, 22, len(rr[0].Fields))
 }

 func TestTableGenericHydrate(t *testing.T) {
@@ -33,7 +33,7 @@ func TestTableRefresh(t *testing.T) {
 	ctx = context.WithValue(ctx, internal.KeyWithMetrics, false)
 	ta.Refresh(ctx)
 	data := ta.Peek()
-	assert.Equal(t, 20, len(data.Header))
+	assert.Equal(t, 22, len(data.Header))
 	assert.Equal(t, 1, len(data.RowEvents))
 	assert.Equal(t, client.NamespaceAll, data.Namespace)
 	assert.Equal(t, 1, l.count)
```
```diff
@@ -6,7 +6,7 @@ import (

 	"github.com/derailed/k9s/internal/client"
 	v1 "k8s.io/api/core/v1"
-	"k8s.io/api/extensions/v1beta1"
+	netv1 "k8s.io/api/networking/v1"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 	"k8s.io/apimachinery/pkg/runtime"
 )
@@ -39,7 +39,7 @@ func (i Ingress) Render(o interface{}, ns string, r *Row) error {
 	if !ok {
 		return fmt.Errorf("Expected Ingress, but got %T", o)
 	}
-	var ing v1beta1.Ingress
+	var ing netv1.Ingress
 	err := runtime.DefaultUnstructuredConverter.FromUnstructured(raw.Object, &ing)
 	if err != nil {
 		return err
@@ -77,7 +77,7 @@ func toAddress(lbs v1.LoadBalancerStatus) string {
 	return strings.Join(res, ",")
 }

-func toTLSPorts(tls []v1beta1.IngressTLS) string {
+func toTLSPorts(tls []netv1.IngressTLS) string {
 	if len(tls) != 0 {
 		return "80, 443"
 	}
@@ -85,7 +85,7 @@ func toTLSPorts(tls []v1beta1.IngressTLS) string {
 	return "80"
 }

-func toHosts(rr []v1beta1.IngressRule) string {
+func toHosts(rr []netv1.IngressRule) string {
 	hh := make([]string, 0, len(rr))
 	for _, r := range rr {
 		if r.Host == "" {
```
```diff
@@ -5,7 +5,7 @@ import (
 	"strings"

 	"github.com/derailed/k9s/internal/client"
-	v1beta1 "k8s.io/api/extensions/v1beta1"
+	netv1 "k8s.io/api/networking/v1"
 	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
 	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
 	"k8s.io/apimachinery/pkg/runtime"
@@ -42,7 +42,7 @@ func (n NetworkPolicy) Render(o interface{}, ns string, r *Row) error {
 	if !ok {
 		return fmt.Errorf("Expected NetworkPolicy, but got %T", o)
 	}
-	var np v1beta1.NetworkPolicy
+	var np netv1.NetworkPolicy
 	err := runtime.DefaultUnstructuredConverter.FromUnstructured(raw.Object, &np)
 	if err != nil {
 		return err
@@ -71,7 +71,7 @@ func (n NetworkPolicy) Render(o interface{}, ns string, r *Row) error {

 // Helpers...

-func ingress(ii []v1beta1.NetworkPolicyIngressRule) (string, string, string) {
+func ingress(ii []netv1.NetworkPolicyIngressRule) (string, string, string) {
 	var ports, sels, blocks []string
 	for _, i := range ii {
 		if p := portsToStr(i.Ports); p != "" {
@@ -88,7 +88,7 @@ func ingress(ii []netv1.NetworkPolicyIngressRule) (string, string, string) {
 	return strings.Join(ports, ","), strings.Join(sels, ","), strings.Join(blocks, ",")
 }

-func egress(ee []v1beta1.NetworkPolicyEgressRule) (string, string, string) {
+func egress(ee []netv1.NetworkPolicyEgressRule) (string, string, string) {
 	var ports, sels, blocks []string
 	for _, e := range ee {
 		if p := portsToStr(e.Ports); p != "" {
@@ -105,7 +105,7 @@ func egress(ee []netv1.NetworkPolicyEgressRule) (string, string, string) {
 	return strings.Join(ports, ","), strings.Join(sels, ","), strings.Join(blocks, ",")
 }

-func portsToStr(pp []v1beta1.NetworkPolicyPort) string {
+func portsToStr(pp []netv1.NetworkPolicyPort) string {
 	ports := make([]string, 0, len(pp))
 	for _, p := range pp {
 		proto, port := NAValue, NAValue
@@ -120,7 +120,7 @@ func portsToStr(pp []netv1.NetworkPolicyPort) string {
 	return strings.Join(ports, ",")
 }

-func peersToStr(pp []v1beta1.NetworkPolicyPeer) (string, string) {
+func peersToStr(pp []netv1.NetworkPolicyPeer) (string, string) {
 	sels := make([]string, 0, len(pp))
 	ips := make([]string, 0, len(pp))
 	for _, p := range pp {
@@ -138,7 +138,7 @@ func peersToStr(pp []netv1.NetworkPolicyPeer) (string, string) {
 	return strings.Join(sels, ","), strings.Join(ips, ",")
 }

-func renderBlock(b *v1beta1.IPBlock) string {
+func renderBlock(b *netv1.IPBlock) string {
 	s := b.CIDR

 	if len(b.Except) == 0 {
@@ -155,7 +155,7 @@ func renderBlock(b *netv1.IPBlock) string {
 	return s + "[" + strings.Join(b.Except, ",") + "]"
 }

-func renderPeer(i v1beta1.NetworkPolicyPeer) string {
+func renderPeer(i netv1.NetworkPolicyPeer) string {
 	var s string

 	if i.PodSelector != nil {
```
```diff
@@ -76,6 +76,8 @@ func (Pod) Header(ns string) Header {
 		HeaderColumn{Name: "QOS", Wide: true},
 		HeaderColumn{Name: "LABELS", Wide: true},
 		HeaderColumn{Name: "VALID", Wide: true},
+		HeaderColumn{Name: "NOMINATED NODE", Wide: true},
+		HeaderColumn{Name: "READINESS GATES", Wide: true},
 		HeaderColumn{Name: "AGE", Time: true, Decorator: AgeDecorator},
 	}
 }
@@ -118,6 +120,8 @@ func (p Pod) Render(o interface{}, ns string, row *Row) error {
 		p.mapQOS(po.Status.QOSClass),
 		mapToStr(po.Labels),
 		asStatus(p.diagnose(phase, cr, len(ss))),
+		asNominated(po.Status.NominatedNodeName),
+		asReadinessGate(po.Spec.ReadinessGates),
 		toAge(po.ObjectMeta.CreationTimestamp),
 	}
@@ -138,6 +142,24 @@ func (p Pod) diagnose(phase string, cr, ct int) error {
 // ----------------------------------------------------------------------------
 // Helpers...

+func asNominated(n string) string {
+	if n == "" {
+		return MissingValue
+	}
+	return n
+}
+
+func asReadinessGate(gg []v1.PodReadinessGate) string {
+	if len(gg) == 0 {
+		return MissingValue
+	}
+	ss := make([]string, 0, len(gg))
+	for _, g := range gg {
+		ss = append(ss, string(g.ConditionType))
+	}
+	return strings.Join(ss, ",")
+}
+
 // PodWithMetrics represents a pod and its metrics.
 type PodWithMetrics struct {
 	Raw *unstructured.Unstructured
```
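Both new columns render a placeholder when the data is absent and a comma-joined list otherwise. A sketch of the readiness-gate formatting with plain strings standing in for `v1.PodReadinessGate` condition types (`missingValue` is an assumed stand-in for the package's `MissingValue`):

```go
package main

import (
	"fmt"
	"strings"
)

const missingValue = "-" // assumed stand-in for render.MissingValue

// gates joins readiness-gate condition types into one cell value,
// falling back to the missing-value marker when the pod declares none.
func gates(gg []string) string {
	if len(gg) == 0 {
		return missingValue
	}
	return strings.Join(gg, ",")
}

func main() {
	fmt.Println(gates(nil)) // prints -
	fmt.Println(gates([]string{"www.example.com/gate-1", "pod-ready"}))
}
```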
```diff
@@ -1,10 +1,10 @@
 {
-  "apiVersion": "apiextensions.k8s.io/v1beta1",
+  "apiVersion": "apiextensions.k8s.io/v1",
   "kind": "CustomResourceDefinition",
   "metadata": {
     "annotations": {
       "helm.sh/hook": "crd-install",
-      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apiextensions.k8s.io/v1beta1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{\"helm.sh/hook\":\"crd-install\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app\":\"mixer\",\"istio\":\"mixer-adapter\",\"k8s-app\":\"istio\",\"package\":\"adapter\"},\"name\":\"adapters.config.istio.io\",\"namespace\":\"\"},\"spec\":{\"group\":\"config.istio.io\",\"names\":{\"categories\":[\"istio-io\",\"policy-istio-io\"],\"kind\":\"adapter\",\"plural\":\"adapters\",\"singular\":\"adapter\"},\"scope\":\"Namespaced\",\"version\":\"v1alpha2\"}}\n"
+      "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apiextensions.k8s.io/v1\",\"kind\":\"CustomResourceDefinition\",\"metadata\":{\"annotations\":{\"helm.sh/hook\":\"crd-install\"},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"app\":\"mixer\",\"istio\":\"mixer-adapter\",\"k8s-app\":\"istio\",\"package\":\"adapter\"},\"name\":\"adapters.config.istio.io\",\"namespace\":\"\"},\"spec\":{\"group\":\"config.istio.io\",\"names\":{\"categories\":[\"istio-io\",\"policy-istio-io\"],\"kind\":\"adapter\",\"plural\":\"adapters\",\"singular\":\"adapter\"},\"scope\":\"Namespaced\",\"version\":\"v1alpha2\"}}\n"
     },
     "creationTimestamp": "2019-02-05T22:04:29Z",
     "generation": 1,
@@ -17,7 +17,7 @@
     },
     "name": "adapters.config.istio.io",
     "resourceVersion": "37115599",
-    "selfLink": "/apis/apiextensions.k8s.io/v1beta1/customresourcedefinitions/adapters.config.istio.io",
+    "selfLink": "/apis/apiextensions.k8s.io/v1/customresourcedefinitions/adapters.config.istio.io",
     "uid": "029b8c3e-2992-11e9-81cd-42010a80005b"
   },
   "spec": {
```
```diff
@@ -1,5 +1,5 @@
 {
-  "apiVersion": "extensions/v1beta1",
+  "apiVersion": "apps/v1",
   "kind": "Deployment",
   "metadata": {
     "annotations": {
@@ -14,7 +14,7 @@
     "name": "icx-db",
     "namespace": "icx",
     "resourceVersion": "37116271",
-    "selfLink": "/apis/extensions/v1beta1/namespaces/icx/deployments/icx-db",
+    "selfLink": "/apis/apps/v1/namespaces/icx/deployments/icx-db",
     "uid": "6f6143bc-a5f3-11e9-990f-42010a800218"
   },
   "spec": {
```
|
||||
|
|
|
|||
|
|
@@ -1,9 +1,9 @@
 {
-    "apiVersion": "extensions/v1beta1",
+    "apiVersion": "apps/v1",
     "kind": "DaemonSet",
     "metadata": {
         "annotations": {
-            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"fluentd-gcp\",\"kubernetes.io/cluster-service\":\"true\",\"version\":\"v3.2.0\"},\"name\":\"fluentd-gcp-v3.2.0\",\"namespace\":\"kube-system\"},\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"k8s-app\":\"fluentd-gcp\",\"kubernetes.io/cluster-service\":\"true\",\"version\":\"v3.2.0\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"STACKDRIVER_METADATA_AGENT_URL\",\"value\":\"http://$(NODE_NAME):8799\"}],\"image\":\"gcr.io/stackdriver-agents/stackdriver-logging-agent:0.6-1.6.0-1\",\"livenessProbe\":{\"exec\":{\"command\":[\"/bin/sh\",\"-c\",\"LIVENESS_THRESHOLD_SECONDS=${LIVENESS_THRESHOLD_SECONDS:-300}; STUCK_THRESHOLD_SECONDS=${LIVENESS_THRESHOLD_SECONDS:-900}; if [ ! -e /var/log/fluentd-buffers ]; then\\n exit 1;\\nfi; touch -d \\\"${STUCK_THRESHOLD_SECONDS} seconds ago\\\" /tmp/marker-stuck; if [[ -z \\\"$(find /var/log/fluentd-buffers -type f -newer /tmp/marker-stuck -print -quit)\\\" ]]; then\\n rm -rf /var/log/fluentd-buffers;\\n exit 1;\\nfi; touch -d \\\"${LIVENESS_THRESHOLD_SECONDS} seconds ago\\\" /tmp/marker-liveness; if [[ -z \\\"$(find /var/log/fluentd-buffers -type f -newer /tmp/marker-liveness -print -quit)\\\" ]]; then\\n exit 1;\\nfi;\\n\"]},\"initialDelaySeconds\":600,\"periodSeconds\":60},\"name\":\"fluentd-gcp\",\"volumeMounts\":[{\"mountPath\":\"/var/log\",\"name\":\"varlog\"},{\"mountPath\":\"/var/lib/docker/containers\",\"name\":\"varlibdockercontainers\",\"readOnly\":true},{\"mountPath\":\"/etc/google-fluentd/config.d\",\"name\":\"config-volume\"}]},{\"command\":[\"/monitor\",\"--stackdriver-prefix=container.googleapis.com/internal/addons\",\"--api-override=https://monitoring.googleapis.com/\",\"--source=fluentd:http://localhost:24231?whitelisted=stackdriver_successful_requests_count,stackdriver_failed_requests_count,stackdriver_ingested_entries_count,stackdriver_dropped_entries_count\",\"--pod-id=$(POD_NAME)\",\"--namespace-id=$(POD_NAMESPACE)\"],\"env\":[{\"name\":\"POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"k8s.gcr.io/prometheus-to-sd:v0.3.1\",\"name\":\"prometheus-to-sd-exporter\"}],\"dnsPolicy\":\"Default\",\"hostNetwork\":true,\"nodeSelector\":{\"beta.kubernetes.io/fluentd-ds-ready\":\"true\"},\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"fluentd-gcp\",\"terminationGracePeriodSeconds\":60,\"tolerations\":[{\"effect\":\"NoExecute\",\"operator\":\"Exists\"},{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/var/log\"},\"name\":\"varlog\"},{\"hostPath\":{\"path\":\"/var/lib/docker/containers\"},\"name\":\"varlibdockercontainers\"},{\"configMap\":{\"name\":\"fluentd-gcp-config-old-v1.2.5\"},\"name\":\"config-volume\"}]}},\"updateStrategy\":{\"type\":\"RollingUpdate\"}}}\n"
+            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"apps/v1\",\"kind\":\"DaemonSet\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"k8s-app\":\"fluentd-gcp\",\"kubernetes.io/cluster-service\":\"true\",\"version\":\"v3.2.0\"},\"name\":\"fluentd-gcp-v3.2.0\",\"namespace\":\"kube-system\"},\"spec\":{\"template\":{\"metadata\":{\"annotations\":{\"scheduler.alpha.kubernetes.io/critical-pod\":\"\"},\"labels\":{\"k8s-app\":\"fluentd-gcp\",\"kubernetes.io/cluster-service\":\"true\",\"version\":\"v3.2.0\"}},\"spec\":{\"containers\":[{\"env\":[{\"name\":\"NODE_NAME\",\"valueFrom\":{\"fieldRef\":{\"apiVersion\":\"v1\",\"fieldPath\":\"spec.nodeName\"}}},{\"name\":\"STACKDRIVER_METADATA_AGENT_URL\",\"value\":\"http://$(NODE_NAME):8799\"}],\"image\":\"gcr.io/stackdriver-agents/stackdriver-logging-agent:0.6-1.6.0-1\",\"livenessProbe\":{\"exec\":{\"command\":[\"/bin/sh\",\"-c\",\"LIVENESS_THRESHOLD_SECONDS=${LIVENESS_THRESHOLD_SECONDS:-300}; STUCK_THRESHOLD_SECONDS=${LIVENESS_THRESHOLD_SECONDS:-900}; if [ ! -e /var/log/fluentd-buffers ]; then\\n exit 1;\\nfi; touch -d \\\"${STUCK_THRESHOLD_SECONDS} seconds ago\\\" /tmp/marker-stuck; if [[ -z \\\"$(find /var/log/fluentd-buffers -type f -newer /tmp/marker-stuck -print -quit)\\\" ]]; then\\n rm -rf /var/log/fluentd-buffers;\\n exit 1;\\nfi; touch -d \\\"${LIVENESS_THRESHOLD_SECONDS} seconds ago\\\" /tmp/marker-liveness; if [[ -z \\\"$(find /var/log/fluentd-buffers -type f -newer /tmp/marker-liveness -print -quit)\\\" ]]; then\\n exit 1;\\nfi;\\n\"]},\"initialDelaySeconds\":600,\"periodSeconds\":60},\"name\":\"fluentd-gcp\",\"volumeMounts\":[{\"mountPath\":\"/var/log\",\"name\":\"varlog\"},{\"mountPath\":\"/var/lib/docker/containers\",\"name\":\"varlibdockercontainers\",\"readOnly\":true},{\"mountPath\":\"/etc/google-fluentd/config.d\",\"name\":\"config-volume\"}]},{\"command\":[\"/monitor\",\"--stackdriver-prefix=container.googleapis.com/internal/addons\",\"--api-override=https://monitoring.googleapis.com/\",\"--source=fluentd:http://localhost:24231?whitelisted=stackdriver_successful_requests_count,stackdriver_failed_requests_count,stackdriver_ingested_entries_count,stackdriver_dropped_entries_count\",\"--pod-id=$(POD_NAME)\",\"--namespace-id=$(POD_NAMESPACE)\"],\"env\":[{\"name\":\"POD_NAME\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.name\"}}},{\"name\":\"POD_NAMESPACE\",\"valueFrom\":{\"fieldRef\":{\"fieldPath\":\"metadata.namespace\"}}}],\"image\":\"k8s.gcr.io/prometheus-to-sd:v0.3.1\",\"name\":\"prometheus-to-sd-exporter\"}],\"dnsPolicy\":\"Default\",\"hostNetwork\":true,\"nodeSelector\":{\"beta.kubernetes.io/fluentd-ds-ready\":\"true\"},\"priorityClassName\":\"system-node-critical\",\"serviceAccountName\":\"fluentd-gcp\",\"terminationGracePeriodSeconds\":60,\"tolerations\":[{\"effect\":\"NoExecute\",\"operator\":\"Exists\"},{\"effect\":\"NoSchedule\",\"operator\":\"Exists\"}],\"volumes\":[{\"hostPath\":{\"path\":\"/var/log\"},\"name\":\"varlog\"},{\"hostPath\":{\"path\":\"/var/lib/docker/containers\"},\"name\":\"varlibdockercontainers\"},{\"configMap\":{\"name\":\"fluentd-gcp-config-old-v1.2.5\"},\"name\":\"config-volume\"}]}},\"updateStrategy\":{\"type\":\"RollingUpdate\"}}}\n"
         },
         "creationTimestamp": "2019-04-12T23:35:36Z",
         "generation": 2,
@@ -16,7 +16,7 @@
         "name": "fluentd-gcp-v3.2.0",
         "namespace": "kube-system",
         "resourceVersion": "34805583",
-        "selfLink": "/apis/extensions/v1beta1/namespaces/kube-system/daemonsets/fluentd-gcp-v3.2.0",
+        "selfLink": "/apis/apps/v1/namespaces/kube-system/daemonsets/fluentd-gcp-v3.2.0",
         "uid": "ac95611f-5d7b-11e9-af05-42010a800018"
     },
     "spec": {
@@ -1,12 +1,12 @@
 {
-    "apiVersion": "extensions/v1beta1",
+    "apiVersion": "networking.k8s.io/v1",
     "kind": "Ingress",
     "metadata": {
         "labels": {
             "role": "ingress"
         },
         "annotations": {
-            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"extensions/v1beta1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/rewrite-target\":\"/\"},\"name\":\"test-ingress\",\"namespace\":\"default\"},\"spec\":{\"rules\":[{\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"test\",\"servicePort\":80},\"path\":\"/testpath\"}]}}]}}\n",
+            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"networking.k8s.io/v1\",\"kind\":\"Ingress\",\"metadata\":{\"annotations\":{\"nginx.ingress.kubernetes.io/rewrite-target\":\"/\"},\"name\":\"test-ingress\",\"namespace\":\"default\"},\"spec\":{\"rules\":[{\"http\":{\"paths\":[{\"backend\":{\"serviceName\":\"test\",\"servicePort\":80},\"path\":\"/testpath\"}]}}]}}\n",
             "nginx.ingress.kubernetes.io/rewrite-target": "/"
         },
         "creationTimestamp": "2019-08-30T20:53:52Z",
@@ -14,7 +14,7 @@
         "name": "test-ingress",
         "namespace": "default",
         "resourceVersion": "49801063",
-        "selfLink": "/apis/extensions/v1beta1/namespaces/default/ingresses/test-ingress",
+        "selfLink": "/apis/networking.k8s.io/v1/namespaces/default/ingresses/test-ingress",
         "uid": "45e44c1d-cb68-11e9-990f-42010a800218"
     },
     "spec": {
@@ -1,5 +1,5 @@
 {
-    "apiVersion": "extensions/v1beta1",
+    "apiVersion": "networking.k8s.io/v1",
     "kind": "NetworkPolicy",
     "metadata": {
         "annotations": {
@@ -10,7 +10,7 @@
         "name": "fred",
         "namespace": "default",
         "resourceVersion": "48999995",
-        "selfLink": "/apis/extensions/v1beta1/namespaces/default/networkpolicies/fred",
+        "selfLink": "/apis/networking.k8s.io/v1/namespaces/default/networkpolicies/fred",
         "uid": "e4aada4d-c8fd-11e9-990f-42010a800218"
     },
     "spec": {
@@ -1,5 +1,5 @@
 {
-    "apiVersion": "extensions/v1beta1",
+    "apiVersion": "networking.k8s.io/v1",
     "kind": "ReplicaSet",
     "metadata": {
         "annotations": {
@@ -26,7 +26,7 @@
             }
         ],
         "resourceVersion": "37116270",
-        "selfLink": "/apis/extensions/v1beta1/namespaces/icx/replicasets/icx-db-7d4b578979",
+        "selfLink": "/apis/networking.k8s.io/v1/namespaces/icx/replicasets/icx-db-7d4b578979",
         "uid": "6f637a60-a5f3-11e9-990f-42010a800218"
     },
     "spec": {
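The fixture updates above all follow one pattern: workloads move off the deprecated `extensions/v1beta1` group onto the API groups that replaced it. A minimal sketch of that mapping as a lookup helper — the table below is an illustrative subset I'm supplying, not code from k9s:

```go
package main

import "fmt"

// replacements maps a deprecated "group/version:Kind" to the API
// group/version that superseded it (illustrative subset only).
var replacements = map[string]string{
	"extensions/v1beta1:Deployment":    "apps/v1",
	"extensions/v1beta1:DaemonSet":     "apps/v1",
	"extensions/v1beta1:Ingress":       "networking.k8s.io/v1",
	"extensions/v1beta1:NetworkPolicy": "networking.k8s.io/v1",
}

// upgradeAPIVersion returns the replacement apiVersion for a deprecated
// group/version+kind, or the original when no migration is known.
func upgradeAPIVersion(apiVersion, kind string) string {
	if gv, ok := replacements[apiVersion+":"+kind]; ok {
		return gv
	}
	return apiVersion
}

func main() {
	fmt.Println(upgradeAPIVersion("extensions/v1beta1", "Ingress"))
	fmt.Println(upgradeAPIVersion("batch/v1", "CronJob"))
}
```

The same substitution shows up in both the `apiVersion` fields and the `selfLink` paths of the fixtures.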
@@ -58,15 +58,21 @@ func (c *Cow) talk() {
 	if len(says) == 0 {
 		says = "Nothing to report here. Please move along..."
 	}
-	c.SetText(cowTalk(says))
+	x, _, w, _ := c.GetRect()
+	c.SetText(cowTalk(says, (x+w)/2))
 }
 
-func cowTalk(says string) string {
+func cowTalk(says string, w int) string {
-	msg := fmt.Sprintf("[red::]< [::b]Ruroh? %s[::-] >", says)
 	buff := make([]string, 0, len(cow)+3)
-	buff = append(buff, "[red::] "+strings.Repeat("─", len(says)+8))
-	buff = append(buff, msg)
+	buff = append(buff, " "+strings.Repeat("─", len(says)+8))
+	buff = append(buff, fmt.Sprintf("< [red::b]Ruroh? %s[-::-] >", says))
 	buff = append(buff, " "+strings.Repeat("─", len(says)+8))
-	spacer := strings.Repeat(" ", len(says)/2-8)
+	rCount := w/2 - 8
+	if rCount < 0 {
+		rCount = w / 2
+	}
+	spacer := strings.Repeat(" ", rCount)
 	for _, s := range cow {
 		buff = append(buff, "[red::b]"+spacer+s)
 	}
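The `cowTalk` change threads the view width through so the banner can be centered instead of indenting by the message length, with a guard for views too narrow for the fixed offset. A standalone sketch of just that spacer computation (names mirror the diff; the rest of the view is omitted):

```go
package main

import (
	"fmt"
	"strings"
)

// spacerFor mirrors the centering logic from the cowTalk change:
// indent by half the available width minus a fixed offset, falling
// back to half the width when the offset would go negative.
func spacerFor(w int) string {
	rCount := w/2 - 8
	if rCount < 0 {
		rCount = w / 2
	}
	return strings.Repeat(" ", rCount)
}

func main() {
	fmt.Println(len(spacerFor(40))) // wide view: 40/2 - 8 = 12
	fmt.Println(len(spacerFor(10))) // narrow view: falls back to 10/2 = 5
}
```

Without the guard, a terminal narrower than 16 columns would make `strings.Repeat` panic on a negative count.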
@@ -10,6 +10,7 @@ import (
 	"github.com/derailed/k9s/internal/dao"
 	"github.com/derailed/k9s/internal/render"
 	"github.com/derailed/k9s/internal/ui"
+	"github.com/derailed/k9s/internal/ui/dialog"
 	"github.com/derailed/tview"
 	"github.com/gdamore/tcell/v2"
 	"github.com/rs/zerolog/log"
@@ -67,12 +68,41 @@ func jobCtx(path, uid string) ContextFunc {
 
 func (c *CronJob) bindKeys(aa ui.KeyActions) {
 	aa.Add(ui.KeyActions{
-		tcell.KeyCtrlT: ui.NewKeyAction("Trigger", c.trigger, true),
-		ui.KeyShiftS:   ui.NewKeyAction("ToggleSuspend", c.toggleSuspend, true),
+		ui.KeyT: ui.NewKeyAction("Trigger", c.triggerCmd, true),
+		ui.KeyS: ui.NewKeyAction("Suspend/Resume", c.toggleSuspendCmd, true),
 	})
 }
 
-func (c *CronJob) toggleSuspend(evt *tcell.EventKey) *tcell.EventKey {
+func (c *CronJob) triggerCmd(evt *tcell.EventKey) *tcell.EventKey {
+	fqn := c.GetTable().GetSelectedItem()
+	if fqn == "" {
+		return evt
+	}
+
+	msg := fmt.Sprintf("Trigger Cronjob %s?", fqn)
+	dialog.ShowConfirm(c.App().Styles.Dialog(), c.App().Content.Pages, "Confirm Job Trigger", msg, func() {
+		res, err := dao.AccessorFor(c.App().factory, c.GVR())
+		if err != nil {
+			c.App().Flash().Err(fmt.Errorf("no accessor for %q", c.GVR()))
+			return
+		}
+		runner, ok := res.(dao.Runnable)
+		if !ok {
+			c.App().Flash().Err(fmt.Errorf("expecting a jobrunner resource for %q", c.GVR()))
+			return
+		}
+
+		if err := runner.Run(fqn); err != nil {
+			c.App().Flash().Errf("Cronjob trigger failed %v", err)
+			return
+		}
+		c.App().Flash().Infof("Triggering Job %s %s", c.GVR(), fqn)
+	}, func() {})
+
+	return nil
+}
+
+func (c *CronJob) toggleSuspendCmd(evt *tcell.EventKey) *tcell.EventKey {
 	sel := c.GetTable().GetSelectedItem()
 	if sel == "" {
 		return evt
@@ -86,18 +116,19 @@ func (c *CronJob) toggleSuspend(evt *tcell.EventKey) *tcell.EventKey {
 }
 
 func (c *CronJob) showSuspendDialog(sel string) {
-	cell := c.GetTable().GetCell(c.GetTable().GetSelectedRowIndex(), c.GetTable().NameColIndex() + 2)
+	cell := c.GetTable().GetCell(c.GetTable().GetSelectedRowIndex(), c.GetTable().NameColIndex()+2)
 	if cell == nil {
 		c.App().Flash().Errf("Unable to assert current status")
 		return
 	}
 	suspended := strings.TrimSpace(cell.Text) == "true"
 	title := "Suspend"
 	if suspended {
-		title = "Unsuspend"
+		title = "Resume"
 	}
 
 	confirm := tview.NewModalForm(fmt.Sprintf("<%s>", title), c.makeSuspendForm(sel, !suspended))
-	confirm.SetText(fmt.Sprintf("%s CronJob %s", title, sel))
+	confirm.SetText(fmt.Sprintf("%s CronJob %s?", title, sel))
 	confirm.SetDoneFunc(func(int, string) {
 		c.dismissDialog()
 	})
@@ -107,17 +138,20 @@ func (c *CronJob) showSuspendDialog(sel string) {
 
 func (c *CronJob) makeSuspendForm(sel string, suspend bool) *tview.Form {
 	f := c.makeStyledForm()
-	action := "suspended"
+	action := "suspend"
 	if !suspend {
-		action ="unsuspended"
+		action = "resume"
 	}
 
+	f.AddButton("Cancel", func() {
+		c.dismissDialog()
+	})
 	f.AddButton("OK", func() {
 		defer c.dismissDialog()
 
 		ctx, cancel := context.WithTimeout(context.Background(), c.App().Conn().Config().CallTimeout())
 		defer cancel()
-		if err := c.setSuspend(ctx, sel, suspend); err != nil {
+		if err := c.toggleSuspend(ctx, sel); err != nil {
 			log.Error().Err(err).Msgf("CronJOb %s %s failed", sel, action)
 			c.App().Flash().Err(err)
 		} else {
@@ -125,14 +159,10 @@ func (c *CronJob) makeSuspendForm(sel string, suspend bool) *tview.Form {
 		}
 	})
 
-	f.AddButton("Cancel", func() {
-		c.dismissDialog()
-	})
-
 	return f
 }
 
-func (c *CronJob) setSuspend(ctx context.Context, path string, suspend bool) error {
+func (c *CronJob) toggleSuspend(ctx context.Context, path string) error {
 	res, err := dao.AccessorFor(c.App().factory, c.GVR())
 	if err != nil {
 		return nil
@@ -142,7 +172,7 @@ func (c *CronJob) setSuspend(ctx context.Context, path string, suspend bool) err
 		return fmt.Errorf("expecting a scalable resource for %q", c.GVR())
 	}
 
-	return cronJob.SetSuspend(ctx, path, suspend)
+	return cronJob.ToggleSuspend(ctx, path)
 }
 
 func (c *CronJob) makeStyledForm() *tview.Form {
@@ -160,28 +190,3 @@ func (c *CronJob) makeStyledForm() *tview.Form {
 func (c *CronJob) dismissDialog() {
 	c.App().Content.RemovePage(suspendDialogKey)
 }
-
-func (c *CronJob) trigger(evt *tcell.EventKey) *tcell.EventKey {
-	sel := c.GetTable().GetSelectedItem()
-	if sel == "" {
-		return evt
-	}
-
-	res, err := dao.AccessorFor(c.App().factory, c.GVR())
-	if err != nil {
-		return nil
-	}
-	runner, ok := res.(dao.Runnable)
-	if !ok {
-		c.App().Flash().Err(fmt.Errorf("expecting a jobrunner resource for %q", c.GVR()))
-		return nil
-	}
-
-	if err := runner.Run(sel); err != nil {
-		c.App().Flash().Errf("Cronjob trigger failed %v", err)
-		return evt
-	}
-	c.App().Flash().Infof("Triggering Job %s %s", c.GVR(), sel)
-
-	return nil
-}
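The reworked trigger command keeps the same core shape as the old one: type-assert the resource accessor to a runner interface, then fire `Run` on the selected FQN, with the new code wrapping that in a confirmation dialog. A stripped-down sketch of that assert-then-run flow — `Runnable`/`Run` mirror the diff, while `fakeRunner` and `trigger` are hypothetical stand-ins with no dialog or UI:

```go
package main

import (
	"errors"
	"fmt"
)

// Runnable mirrors the interface the trigger command asserts:
// anything that can launch a run for a "namespace/name" resource.
type Runnable interface {
	Run(fqn string) error
}

// fakeRunner is a hypothetical stand-in for the CronJob accessor.
type fakeRunner struct{ triggered []string }

func (f *fakeRunner) Run(fqn string) error {
	if fqn == "" {
		return errors.New("empty selection")
	}
	f.triggered = append(f.triggered, fqn)
	return nil
}

// trigger follows the shape of triggerCmd: type-assert, run, report.
func trigger(res interface{}, fqn string) error {
	runner, ok := res.(Runnable)
	if !ok {
		return fmt.Errorf("expecting a runnable resource")
	}
	return runner.Run(fqn)
}

func main() {
	r := &fakeRunner{}
	if err := trigger(r, "fred/blee"); err != nil {
		fmt.Println("trigger failed:", err)
		return
	}
	fmt.Println("triggered:", r.triggered)
}
```

The type assertion is what lets one generic key handler serve any resource whose DAO happens to implement the behavior.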
@@ -48,8 +48,8 @@ func ShowDrain(view ResourceViewer, path string, defaults dao.DrainOptions, okFn
 	f.AddCheckbox("Ignore DaemonSets:", defaults.IgnoreAllDaemonSets, func(v bool) {
 		opts.IgnoreAllDaemonSets = v
 	})
-	f.AddCheckbox("Delete Local Data:", defaults.DeleteLocalData, func(v bool) {
-		opts.DeleteLocalData = v
+	f.AddCheckbox("Delete Local Data:", defaults.DeleteEmptyDirData, func(v bool) {
+		opts.DeleteEmptyDirData = v
 	})
 	f.AddCheckbox("Force:", defaults.Force, func(v bool) {
 		opts.Force = v
@@ -82,7 +82,8 @@ func (v *LiveView) Init(_ context.Context) error {
 // ResourceFailed notifies when their is an issue.
 func (v *LiveView) ResourceFailed(err error) {
 	v.text.SetTextAlign(tview.AlignCenter)
-	v.text.SetText(cowTalk(err.Error()))
+	x, _, w, _ := v.GetRect()
+	v.text.SetText(cowTalk(err.Error(), x+w))
 }
 
 // ResourceChanged notifies when the filter changes.
@@ -73,7 +73,7 @@ func (n *Node) drainCmd(evt *tcell.EventKey) *tcell.EventKey {
 	defaults := dao.DrainOptions{
 		GracePeriodSeconds:  -1,
 		Timeout:             5 * time.Second,
-		DeleteLocalData:     false,
+		DeleteEmptyDirData:  false,
 		IgnoreAllDaemonSets: false,
 	}
 	ShowDrain(n, path, defaults, drainNode)
@@ -37,6 +37,7 @@ func NewNamespace(gvr client.GVR) ResourceViewer {
 func (n *Namespace) bindKeys(aa ui.KeyActions) {
 	aa.Add(ui.KeyActions{
 		ui.KeyU:      ui.NewKeyAction("Use", n.useNsCmd, true),
+		ui.KeyShiftS: ui.NewKeyAction("Sort Status", n.GetTable().SortColCmd(statusCol, true), false),
 	})
 }
@@ -13,5 +13,5 @@ func TestNSCleanser(t *testing.T) {
 
 	assert.Nil(t, ns.Init(makeCtx()))
 	assert.Equal(t, "Namespaces", ns.Name())
-	assert.Equal(t, 6, len(ns.Hints()))
+	assert.Equal(t, 7, len(ns.Hints()))
 }
@@ -20,6 +20,13 @@ import (
 	"k8s.io/apimachinery/pkg/runtime"
 )
 
+const (
+	windowsOS      = "windows"
+	powerShell     = "powershell"
+	osBetaSelector = "beta.kubernetes.io/os"
+	osSelector     = "kubernetes.io/os"
+)
+
 // Pod represents a pod viewer.
 type Pod struct {
 	ResourceViewer
@@ -241,11 +248,15 @@ func resumeShellIn(a *App, c model.Component, path, co string) {
 	shellIn(a, path, co)
 }
 
-func shellIn(a *App, path, co string) {
-	os := getPodOS(a.factory, path)
-	args := computeShellArgs(path, co, a.Conn().Config().Flags().KubeConfig, os)
+func shellIn(a *App, fqn, co string) {
+	os, err := getPodOS(a.factory, fqn)
+	if err != nil {
+		log.Warn().Err(err).Msgf("os detect failed")
+	}
+	args := computeShellArgs(fqn, co, a.Conn().Config().Flags().KubeConfig, os)
 
 	c := color.New(color.BgGreen).Add(color.FgBlack).Add(color.Bold)
-	if !runK(a, shellOpts{clear: true, banner: c.Sprintf(bannerFmt, path, co), args: args}) {
+	if !runK(a, shellOpts{clear: true, banner: c.Sprintf(bannerFmt, fqn, co), args: args}) {
 		a.Flash().Err(errors.New("Shell exec failed"))
 	}
 }
@@ -293,11 +304,10 @@ func attachIn(a *App, path, co string) {
 
 func computeShellArgs(path, co string, kcfg *string, os string) []string {
 	args := buildShellArgs("exec", path, co, kcfg)
-	if os == "windows" {
-		return append(args, "--", "powershell")
-	} else {
-		return append(args, "--", "sh", "-c", shellCheck)
+	if os == windowsOS {
+		return append(args, "--", powerShell)
 	}
+	return append(args, "--", "sh", "-c", shellCheck)
 }
 
 func buildShellArgs(cmd, path, co string, kcfg *string) []string {
@@ -364,22 +374,20 @@ func podIsRunning(f dao.Factory, path string) bool {
 	return re.Phase(po) == render.Running
 }
 
-func getPodOS(f dao.Factory, path string) string {
-	po, err := fetchPod(f, path)
+func getPodOS(f dao.Factory, fqn string) (string, error) {
+	po, err := fetchPod(f, fqn)
 	if err != nil {
-		log.Error().Err(err).Msg("unable to fetch pod")
-		return ""
+		return "", err
 	}
+	if os, ok := po.Spec.NodeSelector[osBetaSelector]; ok {
+		return os, nil
+	}
+	os, ok := po.Spec.NodeSelector[osSelector]
+	if !ok {
+		return "", fmt.Errorf("no os information available")
+	}
 
-	if os, success := po.Spec.NodeSelector["beta.kubernetes.io/os"]; success {
-		return os
-	}
-
-	if os, success := po.Spec.NodeSelector["kubernetes.io/os"]; success {
-		return os
-	}
-
-	return ""
+	return os, nil
 }
 
 func resourceSorters(t *Table) ui.KeyActions {
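The reworked `getPodOS` keeps the two-label lookup but surfaces an error instead of silently returning an empty string. The same logic, sketched over a bare `nodeSelector` map rather than a fetched pod (`podOS` is a hypothetical standalone; the label keys are the real well-known node labels):

```go
package main

import (
	"errors"
	"fmt"
)

const (
	osBetaSelector = "beta.kubernetes.io/os" // legacy label, checked first
	osSelector     = "kubernetes.io/os"      // current well-known node label
)

// podOS mirrors getPodOS: prefer the deprecated beta label, then the
// GA label, and report an error when neither is present.
func podOS(nodeSelector map[string]string) (string, error) {
	if os, ok := nodeSelector[osBetaSelector]; ok {
		return os, nil
	}
	os, ok := nodeSelector[osSelector]
	if !ok {
		return "", errors.New("no os information available")
	}
	return os, nil
}

func main() {
	os, err := podOS(map[string]string{osSelector: "windows"})
	fmt.Println(os, err)
	_, err = podOS(map[string]string{})
	fmt.Println(err)
}
```

Returning `(string, error)` lets the caller log a warning and still fall through to the default `sh` shell when detection fails, which is exactly what the new `shellIn` does.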
@@ -9,43 +9,97 @@ import (
 func TestComputeShellArgs(t *testing.T) {
 	config, empty := "coolConfig", ""
+	_ = config
 	uu := map[string]struct {
-		path, co string
+		fqn, co, os string
 		cfg *string
 		e   string
 	}{
 		"config": {
 			"fred/blee",
 			"c1",
+			"darwin",
 			&config,
 			"exec -it -n fred blee --kubeconfig coolConfig -c c1 -- sh -c " + shellCheck,
 		},
-		"noconfig": {
+		"no-config": {
 			"fred/blee",
 			"c1",
+			"linux",
 			nil,
 			"exec -it -n fred blee -c c1 -- sh -c " + shellCheck,
 		},
-		"emptyConfig": {
-			"fred/blee",
-			"c1",
-			&empty,
-			"exec -it -n fred blee -c c1 -- sh -c " + shellCheck,
-		},
-		"singleContainer": {
+		"empty-config": {
+			"fred/blee",
+			"",
+			"",
+			&empty,
+			"exec -it -n fred blee -- sh -c " + shellCheck,
+		},
+		"single-container": {
 			"fred/blee",
 			"",
+			"linux",
 			&empty,
 			"exec -it -n fred blee -- sh -c " + shellCheck,
 		},
+		"windows": {
+			"fred/blee",
+			"c1",
+			windowsOS,
+			&empty,
+			"exec -it -n fred blee -c c1 -- powershell",
+		},
 	}
 
 	for k := range uu {
 		u := uu[k]
 		t.Run(k, func(t *testing.T) {
-			args := computeShellArgs(u.path, u.co, u.cfg)
-
+			args := computeShellArgs(u.fqn, u.co, u.cfg, u.os)
 			assert.Equal(t, u.e, strings.Join(args, " "))
 		})
 	}
 }
+
+// func TestComputeShellArgs(t *testing.T) {
+// 	config, empty := "coolConfig", ""
+// 	uu := map[string]struct {
+// 		path, co string
+// 		cfg      *string
+// 		e        string
+// 	}{
+// 		"config": {
+// 			"fred/blee",
+// 			"c1",
+// 			&config,
+// 			"exec -it -n fred blee --kubeconfig coolConfig -c c1 -- sh -c " + shellCheck,
+// 		},
+// 		"noconfig": {
+// 			"fred/blee",
+// 			"c1",
+// 			nil,
+// 			"exec -it -n fred blee -c c1 -- sh -c " + shellCheck,
+// 		},
+// 		"emptyConfig": {
+// 			"fred/blee",
+// 			"c1",
+// 			&empty,
+// 			"exec -it -n fred blee -c c1 -- sh -c " + shellCheck,
+// 		},
+// 		"singleContainer": {
+// 			"fred/blee",
+// 			"",
+// 			&empty,
+// 			"exec -it -n fred blee -- sh -c " + shellCheck,
+// 		},
+// 	}
+
+// 	for k := range uu {
+// 		u := uu[k]
+// 		t.Run(k, func(t *testing.T) {
+// 			args := computeShellArgs(u.path, u.co, u.cfg)
+
+// 			assert.Equal(t, u.e, strings.Join(args, " "))
+// 		})
+// 	}
+// }
@@ -105,7 +105,7 @@ func appsViewers(vv MetaViewers) {
 	vv[client.NewGVR("apps/v1/daemonsets")] = MetaViewer{
 		viewerFn: NewDaemonSet,
 	}
-	vv[client.NewGVR("extensions/v1beta1/daemonsets")] = MetaViewer{
+	vv[client.NewGVR("apps/v1/daemonsets")] = MetaViewer{
 		viewerFn: NewDaemonSet,
 	}
 }
@@ -147,7 +147,7 @@ func extViewers(vv MetaViewers) {
 	vv[client.NewGVR("apiextensions.k8s.io/v1/customresourcedefinitions")] = MetaViewer{
 		enterFn: showCRD,
 	}
-	vv[client.NewGVR("apiextensions.k8s.io/v1beta1/customresourcedefinitions")] = MetaViewer{
+	vv[client.NewGVR("apiextensions.k8s.io/v1/customresourcedefinitions")] = MetaViewer{
 		enterFn: showCRD,
 	}
 }
main.go
@@ -1,6 +1,7 @@
 package main
 
 import (
+	"flag"
 	"os"
 
 	"github.com/derailed/k9s/cmd"
@@ -8,12 +9,33 @@ import (
 	"github.com/rs/zerolog"
 	"github.com/rs/zerolog/log"
 	_ "k8s.io/client-go/plugin/pkg/client/auth"
+	"k8s.io/klog/v2"
 )
 
 func init() {
 	config.EnsurePath(config.K9sLogs, config.DefaultDirMod)
 }
 
+func init() {
+	klog.InitFlags(nil)
+
+	if err := flag.Set("logtostderr", "false"); err != nil {
+		panic(err)
+	}
+	if err := flag.Set("alsologtostderr", "false"); err != nil {
+		panic(err)
+	}
+	if err := flag.Set("stderrthreshold", "fatal"); err != nil {
+		panic(err)
+	}
+	if err := flag.Set("v", "0"); err != nil {
+		panic(err)
+	}
+	if err := flag.Set("log_file", config.K9sLogs); err != nil {
+		panic(err)
+	}
+}
+
 func main() {
 	mod := os.O_CREATE | os.O_APPEND | os.O_WRONLY
 	file, err := os.OpenFile(config.K9sLogs, mod, config.DefaultFileMod)