
The future of cloud native belongs to developers who dare to innovate, practice deliberately, and keep sharpening their craft; Kubernetes is the solid foundation on which you build intelligent infrastructure.

This article is aimed at developers who already know Kubernetes fundamentals. It systematically covers the advanced architecture patterns, API extensions, modern development practices, and enterprise-grade operations behind production cloud-native applications, helping you master current industry best practices and tooling.

Introduction

Drawing on hands-on experience with enterprise applications, this guide covers microservice architecture, DevOps workflows, security hardening, observability, and other key areas, helping you build production-grade cloud-native applications.

Through this guide, you will learn advanced application deployment patterns, Kubernetes API extension mechanisms, modern development workflows, and enterprise operations practices.

Advanced Application Deployment Patterns

Kubernetes provides a rich set of primitives and architectural patterns that support highly available, resilient deployments of complex enterprise applications. The following sections walk through the most common advanced deployment patterns, grounded in industry practice.

Container Architecture Patterns

The diagram below shows common container collaboration patterns within and across Pods.

```mermaid
graph TD
    subgraph "Intra-Pod Patterns"
        SIDECAR["Sidecar<br/>log collection, proxying, monitoring"]
        AMBASSADOR["Ambassador<br/>service proxying, routing"]
        ADAPTER["Adapter<br/>data transformation, normalization"]
    end

    subgraph "Inter-Pod Collaboration Patterns"
        LEADER_ELECTION["Leader Election<br/>primary selection"]
        WORK_QUEUE["Work Queue<br/>task distribution"]
        SCATTER_GATHER["Scatter-Gather<br/>parallel fan-out and aggregation"]
    end

    subgraph "Application-Level Patterns"
        BULKHEAD["Bulkhead<br/>fault isolation"]
        CIRCUIT_BREAKER["Circuit Breaker<br/>failure protection"]
        RETRY["Retry & Timeout"]
    end

    SIDECAR --> BULKHEAD
    AMBASSADOR --> CIRCUIT_BREAKER
    ADAPTER --> RETRY
```


The Sidecar Container Pattern

Sidecar containers separate cross-cutting concerns from the main application, improving maintainability and observability.

```yaml
# Sidecar example: log collection and monitoring
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app-with-sidecar
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app        # apps/v1 Deployments require an explicit selector
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
      - name: web-app
        image: nginx:1.21
        ports:
        - containerPort: 80
        volumeMounts:
        - name: log-volume       # share the log directory with the sidecar
          mountPath: /var/log/app
      - name: log-shipper
        image: fluent/fluent-bit:2.1
        volumeMounts:
        - name: log-volume
          mountPath: /var/log/app
        - name: config-volume
          mountPath: /fluent-bit/etc
      - name: metrics-exporter
        image: prometheus/blackbox-exporter:v0.24
        ports:
        - containerPort: 9115
      volumes:
      - name: log-volume
        emptyDir: {}
      - name: config-volume
        configMap:
          name: fluent-bit-config
```
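
The deployment above references a `fluent-bit-config` ConfigMap that is not shown. A minimal sketch of what it might contain, assuming the application writes logs under /var/log/app and ships them to a hypothetical Elasticsearch endpoint at elasticsearch.logging.svc:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: fluent-bit-config
data:
  fluent-bit.conf: |
    [SERVICE]
        Flush        5
        Log_Level    info
    [INPUT]
        Name         tail
        Path         /var/log/app/*.log
        Tag          app.*
    [OUTPUT]
        Name         es
        Match        app.*
        Host         elasticsearch.logging.svc
        Port         9200
```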

Typical production uses include log aggregation, monitoring agents, security proxies, and data synchronization.

Advanced Init Container Usage

Init containers prepare the runtime environment, handling tasks such as dependency checks and database migrations.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-init
spec:
  initContainers:
  - name: wait-for-db
    image: busybox:1.36
    command: ['sh', '-c', 'until nslookup db-service; do echo waiting for db; sleep 2; done;']
  - name: db-migration
    image: myapp/migration:v1.2
    env:
    - name: DB_HOST
      value: "db-service"
  containers:
  - name: app
    image: myapp:v2.1
    readinessProbe:
      httpGet:
        path: /health
        port: 8080
```

Advanced Pod Configuration

Well-tuned affinity rules, taint tolerations, and the Downward API improve scheduling efficiency and application resilience.

Affinity and Anti-Affinity Scheduling

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app       # matched by the podAntiAffinity rule below
    spec:
      affinity:
        nodeAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            preference:
              matchExpressions:
              - key: node-type
                operator: In
                values:
                - high-performance
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
          - labelSelector:
              matchLabels:
                app: web-app
            topologyKey: kubernetes.io/hostname
      containers:
      - name: web
        image: nginx:1.21
```

Advanced Taints and Tolerations

```bash
# Taint the node
kubectl taint nodes gpu-node-01 gpu=nvidia:NoSchedule
kubectl taint nodes gpu-node-01 workload=ai:PreferNoSchedule
```

```yaml
# Pod toleration configuration
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  tolerations:
  - key: "gpu"
    operator: "Equal"
    value: "nvidia"
    effect: "NoSchedule"
  - key: "workload"
    operator: "Equal"
    value: "ai"
    effect: "PreferNoSchedule"
    # Note: tolerationSeconds is only valid with the NoExecute effect,
    # so it is omitted here.
  containers:
  - name: gpu-app
    image: nvidia/cuda:11.8.0-runtime-ubuntu20.04
    resources:
      limits:
        nvidia.com/gpu: 1
```

The Downward API in Production

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-metadata
  labels:
    app: my-app
    version: v1.2.3
spec:
  containers:
  - name: app
    image: myapp:v1.2.3
    env:
    - name: POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    volumeMounts:
    - name: pod-info
      mountPath: /etc/pod-info
  volumes:
  - name: pod-info
    downwardAPI:
      items:
      - path: "labels"
        fieldRef:
          fieldPath: metadata.labels
      - path: "annotations"
        fieldRef:
          fieldPath: metadata.annotations
```

Advanced Workload Controllers

Properly configured controllers such as the HPA and CronJob provide automatic scaling and scheduled job execution.

Enhanced HorizontalPodAutoscaler (HPA) Configuration

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-app
  minReplicas: 3
  maxReplicas: 50
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 80
  - type: Pods
    pods:
      metric:
        name: packets_per_second
      target:
        type: AverageValue
        averageValue: 1000
  - type: Object
    object:
      metric:
        name: requests_per_second
      describedObject:
        apiVersion: networking.k8s.io/v1
        kind: Ingress
        name: web-app-ingress
      target:
        type: Value
        value: "5000"
  behavior:
    scaleDown:
      stabilizationWindowSeconds: 300
      policies:
      - type: Percent
        value: 10
        periodSeconds: 60
    scaleUp:
      stabilizationWindowSeconds: 60
      policies:
      - type: Percent
        value: 50
        periodSeconds: 60
      - type: Pods
        value: 5
        periodSeconds: 60
```

Advanced CronJob Scheduling

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: batch-job
spec:
  schedule: "0 */6 * * *"
  timeZone: "Asia/Shanghai"
  concurrencyPolicy: Forbid
  startingDeadlineSeconds: 300
  suspend: false
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      parallelism: 2
      completions: 2
      backoffLimit: 3
      activeDeadlineSeconds: 600
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: batch-processor
            image: myapp/batch:v1.0
            resources:
              requests:
                memory: "512Mi"
                cpu: "500m"
              limits:
                memory: "1Gi"
                cpu: "1000m"
```

Multi-Cluster Deployment Strategies

Multi-cluster architectures improve availability and disaster-recovery capabilities. Common synchronization mechanisms include the Federation API, configuration sync, and service discovery sync.

```mermaid
graph TD
    subgraph "Multi-Cluster Architecture"
        GLOBAL_LB["Global Load Balancer"]
        CLUSTER1["Cluster 1<br/>Primary Region"]
        CLUSTER2["Cluster 2<br/>Secondary Region"]
        CLUSTER3["Cluster 3<br/>Edge Cluster"]
    end

    subgraph "Synchronization Mechanisms"
        FEDERATION["Federation API<br/>cross-cluster resource sync"]
        CONFIG_SYNC["Config Sync<br/>configuration sync"]
        SERVICE_SYNC["Service Sync<br/>service discovery sync"]
    end

    GLOBAL_LB --> CLUSTER1
    GLOBAL_LB --> CLUSTER2
    GLOBAL_LB --> CLUSTER3

    FEDERATION --> CONFIG_SYNC
    FEDERATION --> SERVICE_SYNC

    CONFIG_SYNC --> CLUSTER1
    SERVICE_SYNC --> CLUSTER1
    CONFIG_SYNC --> CLUSTER2
    SERVICE_SYNC --> CLUSTER2
```


Multi-Cluster Management with Karmada

Karmada schedules resources across clusters and supports custom extensions.

```yaml
apiVersion: config.karmada.io/v1alpha1
kind: ResourceInterpreterCustomization
metadata:
  name: workload-customization
spec:
  target:
    apiVersion: apps/v1
    kind: Deployment
  customizations:
    retention:
      luaScript: |
        function Retain(desired, observed)
          if observed.spec.replicas > desired.spec.replicas then
            desired.spec.replicas = observed.spec.replicas
          end
          return desired
        end
---
apiVersion: policy.karmada.io/v1alpha1
kind: PropagationPolicy
metadata:
  name: nginx-propagation
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  placement:
    clusterAffinity:
      clusterNames:
      - cluster1
      - cluster2
    replicaScheduling:
      replicaSchedulingType: Divided
      replicaDivisionPreference: Weighted
      weightPreference:
        staticWeightList:
        - targetCluster:
            clusterNames:
            - cluster1
          weight: 2
        - targetCluster:
            clusterNames:
            - cluster2
          weight: 1
```
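
A PropagationPolicy decides where resources run; per-cluster differences, such as a regional image registry, can then be expressed with an OverridePolicy. A sketch, with the registry hostname purely illustrative:

```yaml
apiVersion: policy.karmada.io/v1alpha1
kind: OverridePolicy
metadata:
  name: nginx-override
spec:
  resourceSelectors:
    - apiVersion: apps/v1
      kind: Deployment
      name: nginx
  overrideRules:
    - targetCluster:
        clusterNames:
        - cluster2
      overriders:
        imageOverrider:
        - component: Registry      # rewrite only the registry part of the image
          operator: replace
          value: registry-cluster2.example.com   # hypothetical regional registry
```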

Kubernetes API Extension Patterns

Kubernetes supports several API extension mechanisms that enable automated, intelligent management of complex applications.

The Operator Pattern: Intelligent Automation

The diagram below shows the Operator architecture and its relationship to core resources.

```mermaid
graph TD
    subgraph "Operator Architecture"
        CRD["Custom Resource Definition"]
        CR["Custom Resource instance"]
        CONTROLLER["Operator controller<br/>reconcile loop"]
        KUBE_API["Kubernetes API<br/>state synchronization"]
    end

    subgraph "Application Management"
        DEPLOY["Deployment<br/>application rollout"]
        SERVICE["Service<br/>network access"]
        CONFIG["ConfigMap/Secret<br/>configuration"]
        MONITOR["Metrics<br/>Prometheus"]
    end

    USER["Developer"] -->|"kubectl apply"| CRD
    CRD --> CR
    CR --> CONTROLLER
    CONTROLLER -->|"manages"| DEPLOY
    CONTROLLER -->|"manages"| SERVICE
    CONTROLLER -->|"manages"| CONFIG
    CONTROLLER -->|"exposes"| MONITOR
```


The Operator pattern is the dominant way to extend the Kubernetes API, automating the full application lifecycle.

```yaml
# Example custom resource for a PostgreSQL Operator
apiVersion: postgresql.example.com/v1
kind: PostgreSQLCluster
metadata:
  name: prod-database
spec:
  version: "16"
  replicas: 3
  storage:
    size: 500Gi
    className: "fast-ssd"
  backup:
    schedule: "0 */6 * * *"
    retention: "30d"
  monitoring:
    enabled: true
```

Operator development spans multiple language ecosystems, including Go, Python, Java, and Rust.

```mermaid
graph TD
    subgraph "Go Ecosystem"
        SDK["Operator SDK v1.35+<br/>Red Hat"]
        KUBEBUILDER["Kubebuilder v4.x<br/>Kubernetes SIG"]
        CONTROLLER_RUNTIME["controller-runtime v0.19+<br/>core framework"]
    end

    subgraph "Other Languages"
        KOPF["Kopf<br/>Python decorators"]
        FABRIC8["Fabric8<br/>Java client"]
        KUBE_RS["kube-rs<br/>Rust"]
    end

    subgraph "Higher-Level Abstractions"
        CROSSPLANE["Crossplane<br/>infrastructure as code"]
        KUDO["KUDO<br/>declarative Operators"]
        CAPSULE["Capsule<br/>multi-tenancy"]
    end

    SDK --> CONTROLLER_RUNTIME
    KUBEBUILDER --> CONTROLLER_RUNTIME
    CROSSPLANE --> KUBE_RS
```


CRD Best Practices

A well-designed CRD schema improves API usability and safety.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: myapps.example.com
spec:
  group: example.com
  versions:
  - name: v1
    served: true
    storage: true
    subresources:
      status: {}       # expose /status so controllers can update status separately
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              replicas:
                type: integer
                minimum: 1
                maximum: 10
              image:
                type: string
                pattern: '^[^:]+:[^:]+$'
            required:
            - replicas
            - image
          status:
            type: object
            properties:
              phase:
                type: string
                enum: ["Pending", "Running", "Failed"]
              conditions:
                type: array
                items:
                  type: object
                  properties:
                    type:
                      type: string
                    status:
                      type: string
                      enum: ["True", "False", "Unknown"]
                    lastTransitionTime:
                      type: string
                      format: date-time
                    reason:
                      type: string
                    message:
                      type: string
  scope: Namespaced
  names:
    plural: myapps
    singular: myapp
    kind: MyApp
    shortNames:
    - ma
```
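
A conforming instance of this CRD then looks as follows; the schema rejects a replica count outside 1-10 or an image reference without a tag:

```yaml
apiVersion: example.com/v1
kind: MyApp
metadata:
  name: demo-app
spec:
  replicas: 3          # must be between 1 and 10
  image: nginx:1.21    # must match the '^[^:]+:[^:]+$' pattern
```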

API Aggregation Layer Extension

API aggregation suits complex enterprise extensions: you run a custom API server behind the standard Kubernetes API endpoint.

```mermaid
graph TD
    subgraph "API Server Aggregation"
        MAIN_API["Main API server<br/>core APIs"]
        AGGREGATOR["API aggregator<br/>kube-aggregator"]
        EXT_API["Extension API server<br/>custom implementation"]
    end

    subgraph "Client Access"
        KUBECTL["kubectl<br/>CLI"]
        CLIENT["REST client<br/>programmatic access"]
        DASHBOARD["Dashboard<br/>web UI"]
    end

    KUBECTL --> AGGREGATOR
    CLIENT --> AGGREGATOR
    DASHBOARD --> AGGREGATOR

    AGGREGATOR --> MAIN_API
    AGGREGATOR --> EXT_API

    EXT_API -->|"registers with"| AGGREGATOR
```

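An extension API server is registered with the aggregator through an APIService object. A minimal sketch; the group, service name, and namespace are illustrative, and a production setup would verify TLS with a caBundle instead of skipping it:

```yaml
apiVersion: apiregistration.k8s.io/v1
kind: APIService
metadata:
  name: v1beta1.custom.example.com
spec:
  group: custom.example.com        # hypothetical API group
  version: v1beta1
  groupPriorityMinimum: 1000
  versionPriority: 15
  service:
    name: custom-api-server        # Service fronting the extension API server
    namespace: custom-api
  insecureSkipTLSVerify: true      # sketch only; use caBundle in production
```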

Modern Development Practices

Modern cloud-native development emphasizes automation, declarative configuration, and service governance. The sections below cover the mainstream practices.

The GitOps Workflow

GitOps drives infrastructure changes through code, enabling automated deployment and rollback.

```mermaid
flowchart TD
    DEV["Development"] -->|"commit"| GIT["Git repository"]
    GIT -->|"triggers"| CI["CI pipeline"]
    CI -->|"build & test"| ARTIFACTS["Container images<br/>Helm charts"]
    ARTIFACTS -->|"deploy to"| STAGING["Staging"]
    STAGING -->|"verify"| QA["Quality checks"]
    QA -->|"merge to"| MAIN["Main branch"]
    MAIN -->|"auto-deploy"| PROD["Production"]

    subgraph "GitOps Toolchain"
        ARGO_CD["ArgoCD<br/>declarative deployment"]
        FLUX["Flux<br/>GitOps operator"]
        JENKINS_X["Jenkins X<br/>cloud-native CI/CD"]
    end

    ARGO_CD --> PROD
    FLUX --> PROD
```


Application Management with ArgoCD

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/myorg/my-app
    targetRevision: HEAD
    path: helm
  destination:
    server: https://kubernetes.default.svc
    namespace: production
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
    - CreateNamespace=true
    retry:
      limit: 5
      backoff:
        duration: 5s
        factor: 2
        maxDuration: 3m
```
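
The Application above uses `project: default`; in practice, repositories and target clusters are usually scoped with an AppProject. A sketch with illustrative names:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AppProject
metadata:
  name: production-apps
  namespace: argocd
spec:
  description: Production workloads
  sourceRepos:
  - https://github.com/myorg/*     # repositories applications may deploy from
  destinations:
  - server: https://kubernetes.default.svc
    namespace: production          # namespace applications may deploy into
```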

Service Mesh Integration

A service mesh strengthens microservice governance, providing traffic management, security hardening, and observability.

```mermaid
graph TD
    subgraph "Service Mesh Architecture"
        ISTIO["Istio<br/>control plane"]
        ENVOY["Envoy<br/>data plane"]
        PILOT["Pilot<br/>configuration"]
        CITADEL["Citadel<br/>security"]
    end

    subgraph "Application Integration"
        MICROSERVICES["Microservices"]
        GATEWAY["API gateway"]
        MONITORING["Monitoring"]
    end

    ISTIO --> PILOT
    PILOT --> ENVOY
    CITADEL --> ENVOY

    MICROSERVICES --> ENVOY
    GATEWAY --> ENVOY
    ENVOY --> MONITORING
```


Service Governance with Istio

```yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: my-app-routing
spec:
  hosts:
  - my-app.example.com
  http:
  - match:
    - headers:
        x-user-type:
          exact: premium
    route:
    - destination:
        host: my-app
        subset: premium
  - route:
    - destination:
        host: my-app
        subset: standard
---
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: my-app-subsets
spec:
  host: my-app
  subsets:
  - name: premium
    labels:
      version: v2
  - name: standard
    labels:
      version: v1
```
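
Beyond routing, the mesh's security layer can require mutual TLS between workloads. A minimal sketch enforcing strict mTLS, assuming the pods carry the `app: my-app` label:

```yaml
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: my-app-mtls
spec:
  selector:
    matchLabels:
      app: my-app    # assumed workload label
  mtls:
    mode: STRICT     # reject plaintext traffic to these pods
```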

Serverless Computing Patterns

Knative provides event-driven workloads and scale-to-zero autoscaling, a good fit for function-as-a-service scenarios.

```mermaid
graph TD
    subgraph "Knative Architecture"
        SERVING["Knative Serving<br/>serverless deployment"]
        EVENTING["Knative Eventing<br/>event-driven workloads"]
        FUNCTIONS["Knative Functions<br/>function runtime"]
    end

    subgraph "Workloads"
        HTTP_APPS["HTTP apps<br/>autoscaled"]
        EVENT_DRIVEN["Event-driven<br/>async processing"]
        FUNCTIONS_RUNTIME["Function-as-a-Service"]
    end

    SERVING --> HTTP_APPS
    EVENTING --> EVENT_DRIVEN
    FUNCTIONS --> FUNCTIONS_RUNTIME

    HTTP_APPS -->|"0 to N"| SCALE["Autoscaling"]
    EVENT_DRIVEN -->|"event-triggered"| PROCESS["Async processing"]
    FUNCTIONS_RUNTIME -->|"on demand"| COMPUTE["On-demand compute"]
```


Deploying a Knative Service

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello-world
spec:
  template:
    metadata:
      annotations:
        # Autoscaling is configured through revision template annotations,
        # not container-level fields.
        autoscaling.knative.dev/minScale: "0"
        autoscaling.knative.dev/maxScale: "10"
        autoscaling.knative.dev/target: "80"
    spec:
      containers:
      - image: gcr.io/knative-samples/helloworld-go
        env:
        - name: TARGET
          value: "Go Sample v1"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 1000m
            memory: 512Mi
```
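
For the eventing side shown in the architecture diagram, a Trigger subscribes a service to events flowing through a broker. A sketch assuming a Broker named `default` exists and using a hypothetical CloudEvents type:

```yaml
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: hello-world-trigger
spec:
  broker: default                  # assumes a Broker named "default"
  filter:
    attributes:
      type: dev.example.order      # hypothetical CloudEvents type to match
  subscriber:
    ref:
      apiVersion: serving.knative.dev/v1
      kind: Service
      name: hello-world            # the Knative Service defined above
```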

Enterprise Operations Practices

Enterprise operations focus on observability, security hardening, and disaster recovery to keep systems stable and data safe.

Observability Architecture

The diagram below shows the overall architecture for metrics, logging, and distributed tracing.

```mermaid
graph TD
    subgraph "Metrics Collection"
        PROMETHEUS["Prometheus<br/>time-series database"]
        NODE_EXPORTER["Node Exporter<br/>node metrics"]
        KUBE_STATE["kube-state-metrics<br/>Kubernetes state"]
        APP_METRICS["Application metrics<br/>custom exporters"]
    end

    subgraph "Log Aggregation"
        FLUENT_BIT["Fluent Bit<br/>log collector"]
        ELASTICSEARCH["Elasticsearch<br/>log storage"]
        KIBANA["Kibana<br/>log visualization"]
    end

    subgraph "Distributed Tracing"
        JAEGER["Jaeger<br/>tracing backend"]
        OPENTELEMETRY["OpenTelemetry<br/>observability standard"]
        APP_TRACING["Application tracing<br/>instrumentation"]
    end

    PROMETHEUS --> GRAFANA["Grafana<br/>dashboards"]
    FLUENT_BIT --> ELASTICSEARCH
    ELASTICSEARCH --> KIBANA
    JAEGER --> GRAFANA

    APP_METRICS --> PROMETHEUS
    APP_TRACING --> JAEGER
```


Prometheus Monitoring Configuration

```yaml
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: myapp-alerts
  namespace: monitoring
spec:
  groups:
  - name: myapp
    rules:
    - alert: HighRequestLatency
      expr: histogram_quantile(0.95, rate(http_request_duration_seconds_bucket[5m])) > 0.5
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "High request latency"
        description: "95th-percentile request latency exceeds 500ms"
    - alert: PodCrashLooping
      expr: increase(kube_pod_container_status_restarts_total[10m]) > 5
      for: 5m
      labels:
        severity: critical
      annotations:
        summary: "Pod crash looping"
        description: "Pod has restarted more than 5 times within 10 minutes"
```

Security Hardening Practices

Security hardening spans identity and authentication, network security, and runtime security, protecting both the cluster and its data.

```mermaid
graph TD
    subgraph "Identity & Authentication"
        OIDC["OIDC<br/>OpenID Connect"]
        CERT_MANAGER["cert-manager<br/>certificate management"]
        VAULT["HashiCorp Vault<br/>secrets management"]
    end

    subgraph "Network Security"
        NETWORK_POLICIES["Network Policies"]
        SERVICE_MESH["Service Mesh"]
        INGRESS_TLS["Ingress TLS<br/>transport encryption"]
    end

    subgraph "Runtime Security"
        POD_SECURITY["Pod Security Standards"]
        FALCO["Falco<br/>runtime detection"]
        KYVERNO["Kyverno<br/>policy engine"]
    end

    OIDC --> RBAC["RBAC<br/>role-based access control"]
    CERT_MANAGER --> TLS["mTLS<br/>mutual TLS"]
    VAULT --> SECRETS["Secret management"]

    NETWORK_POLICIES --> ISOLATION["Network isolation"]
    SERVICE_MESH --> ENCRYPTION["Traffic encryption"]
    INGRESS_TLS --> ENCRYPTION

    POD_SECURITY --> COMPLIANCE["Compliance checks"]
    FALCO --> MONITORING["Security monitoring"]
    KYVERNO --> POLICIES["Policy enforcement"]
```


Enforcing Pod Security Policies

```yaml
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: restrict-image-registries
spec:
  validationFailureAction: enforce
  rules:
  - name: validate-image-registry
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "Only container images from the trusted registry are allowed"
      pattern:
        spec:
          containers:
          - image: "registry.example.com/*"
  - name: require-security-context
    match:
      resources:
        kinds:
        - Pod
    validate:
      message: "A security context must be set"
      pattern:
        spec:
          securityContext:
            runAsNonRoot: true
            runAsUser: ">999"
          containers:
          - securityContext:
              allowPrivilegeEscalation: false
              capabilities:
                drop:
                - ALL
```
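
The network-security layer in the architecture above typically starts from namespace isolation. A common baseline is a default-deny ingress NetworkPolicy, which later policies then selectively open up:

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}    # an empty selector matches every pod in the namespace
  policyTypes:
  - Ingress          # with no ingress rules listed, all inbound traffic is denied
```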

Disaster Recovery and Backup

A disaster-recovery regime keeps clusters and data highly available and quick to restore.

```mermaid
graph TD
    subgraph "Backup Strategy"
        ETCD_BACKUP["etcd backup<br/>cluster state"]
        PVC_BACKUP["PVC backup<br/>persistent data"]
        CONFIG_BACKUP["Configuration backup<br/>application config"]
    end

    subgraph "Recovery Flow"
        CLUSTER_RESTORE["Cluster restore<br/>etcd restore"]
        APP_RESTORE["Application restore<br/>PVC restore"]
        VALIDATION["Validation<br/>functional testing"]
    end

    subgraph "Toolchain"
        VELERO["Velero<br/>Kubernetes backup"]
        KASTEN["Kasten K10<br/>enterprise backup"]
        PX_BACKUP["Portworx Backup<br/>cloud-native backup"]
    end

    ETCD_BACKUP --> CLUSTER_RESTORE
    PVC_BACKUP --> APP_RESTORE
    CONFIG_BACKUP --> APP_RESTORE

    VELERO --> ETCD_BACKUP
    KASTEN --> PVC_BACKUP
    PX_BACKUP --> CONFIG_BACKUP

    CLUSTER_RESTORE --> VALIDATION
    APP_RESTORE --> VALIDATION
```


Velero Backup Configuration

```yaml
# A Backup object has no schedule field; recurring backups are expressed
# as a Schedule whose template is the backup spec.
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: daily-app-backup
  namespace: velero
spec:
  schedule: "0 2 * * *"
  template:
    includedNamespaces:
    - production
    - staging
    includedResources:
    - deployments
    - services
    - configmaps
    - secrets
    - persistentvolumeclaims
    excludedResources:
    - events
    - pods
    storageLocation: aws-s3-backup
    ttl: 720h0m0s
    snapshotVolumes: true
    volumeSnapshotLocations:
    - aws-ebs-backup
---
apiVersion: velero.io/v1
kind: Schedule
metadata:
  name: weekly-full-backup
  namespace: velero
spec:
  schedule: "0 3 * * 0"
  template:
    includedNamespaces:
    - "*"
    storageLocation: aws-s3-backup
    ttl: 168h0m0s
```
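
Restoring is the other half of the exercise: a Restore object references a completed backup by name. A sketch; the backup instance name is hypothetical, since scheduled backups get timestamped names:

```yaml
apiVersion: velero.io/v1
kind: Restore
metadata:
  name: restore-production
  namespace: velero
spec:
  backupName: daily-app-backup-20250101020000   # hypothetical timestamped backup
  includedNamespaces:
  - production
  restorePVs: true    # also restore persistent volumes from snapshots
```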

Summary

Advanced Kubernetes development in 2025 has entered a new phase of intelligence, automation, and hardened security. By mastering the Operator pattern, GitOps workflows, service mesh integration, and enterprise operations practices, developers can build truly production-grade cloud-native applications.

Key takeaways:

  1. Use the Operator pattern and CRDs to automate application lifecycle management.
  2. Adopt GitOps workflows for declarative, auditable deployment and rollback.
  3. Integrate a service mesh for traffic management, security, and observability.
  4. Treat observability, security hardening, and disaster recovery as first-class concerns.

Continued learning and practice will help you build a more reliable, scalable, and secure Kubernetes application ecosystem.

References

  1. Kubernetes documentation - kubernetes.io
  2. Operator SDK documentation - sdk.operatorframework.io
  3. Karmada multi-cluster management - karmada.io
  4. ArgoCD GitOps - argoproj.github.io
  5. Istio service mesh - istio.io
  6. Knative serverless - knative.dev
  7. Prometheus monitoring - prometheus.io
  8. Kyverno policy engine - kyverno.io
  9. Velero backup and disaster recovery - velero.io
