
24 posts tagged with "kubernetes edge computing"


· 3 min read

The KubeEdge community is thrilled to announce the release of KubeEdge v1.12! This release introduces several exciting new features and enhancements, including alpha implementation of the next-generation Cloud Native Device Management Interface (DMI), a new version of the lightweight Edged engine, high-availability mode for EdgeMesh, edge node upgrades from the cloud, authorization for the Edge Kube-API endpoint, and more.

What's New in KubeEdge v1.12

Alpha Implementation of Next-Gen Cloud Native Device Management Interface (DMI)

DMI makes KubeEdge's IoT device management more pluggable and modular in a cloud-native way, covering Device Lifecycle Management, Device Operation, and Device Data Management.

  • Device Lifecycle Management: Simplifies IoT device lifecycle management, making it as easy as managing a pod (see the example after this list).

  • Device Operation: Provides the ability to operate devices through the Kubernetes API.

  • Device Data Management: Separates device data management from device management, allowing data to be consumed by local applications or synchronized to the cloud through a special tunnel.
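
As an illustration of the "as easy as managing a pod" point above, the sketch below shows how an IoT device can be declared through the Kubernetes API. It is only a minimal sketch: the fields follow the existing devices.kubeedge.io v1alpha2 Device CRD, and the DMI alpha that builds on this model may evolve the schema.

# Hedged sketch of a Device custom resource (devices.kubeedge.io v1alpha2);
# the DMI alpha may change this schema.
apiVersion: devices.kubeedge.io/v1alpha2
kind: Device
metadata:
  name: temperature-sensor-01
spec:
  deviceModelRef:
    name: temperature-model      # references a DeviceModel that describes the device's properties
  nodeSelector:                  # binds the device instance to a specific edge node
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - edge-node-1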

Next-Gen Edged Graduates to GA: Suitable for More Scenarios

The new version of the lightweight Edged engine, optimized from Kubelet and integrated into EdgeCore, has graduated to General Availability (GA) in this release. The new Edged will continue to communicate with the cloud through a reliable transmission tunnel, making it suitable for a wider range of scenarios.

Introducing High-Availability Mode for EdgeMesh

KubeEdge v1.12 introduces a high-availability mode for EdgeMesh. Unlike the previous centralized relay mode, the EdgeMesh HA mode can set up multiple relay nodes. When some relay nodes fail, other relay nodes can continue to provide relay services, avoiding single points of failure and improving system stability.
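
A minimal sketch of what an HA relay configuration might look like (for example, in the EdgeMesh Helm values). The relayNodes field is referenced in the upgrade notes below; the exact nesting and field names here are assumptions and may differ from the actual EdgeMesh v1.12 chart.

# Hedged sketch: two relay nodes so that relay service survives a single node failure.
relayNodes:
- nodeName: edge-relay-1
  advertiseAddress:
  - 192.0.2.10        # example address reachable by the nodes that need relaying
- nodeName: edge-relay-2
  advertiseAddress:
  - 192.0.2.11        # a second relay keeps the tunnel available if the first one fails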

Support Edge Node Upgrade from the Cloud

KubeEdge v1.12 introduces the NodeUpgradeJob v1alpha1 API to upgrade edge nodes from the cloud. With this API and its associated controller, users can upgrade selected edge nodes from the cloud and roll back to the original version if the upgrade fails.
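
A minimal sketch of a NodeUpgradeJob as described above. The API group, version, and field names are assumptions based on the v1alpha1 API and may differ from the released CRD.

# Hedged sketch of the v1alpha1 NodeUpgradeJob API.
apiVersion: operations.kubeedge.io/v1alpha1
kind: NodeUpgradeJob
metadata:
  name: upgrade-to-v1-12
spec:
  version: v1.12.0        # target KubeEdge version for the selected edge nodes
  nodeNames:              # explicit list of nodes to upgrade; a label selector could be used instead
  - edge-node-1
  - edge-node-2
  timeoutSeconds: 300     # if the upgrade does not complete in time, the node rolls back to the original version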

Support Authorization for Edge Kube-API Endpoint

Authorization for the Edge Kube-API Endpoint is now available in KubeEdge v1.12. Third-party plugins and applications that depend on Kubernetes APIs on edge nodes must use a bearer token to communicate with the kube-apiserver via the HTTPS server in MetaServer.

New GigE Mapper

KubeEdge v1.12 includes a new GigE Device Mapper with a Golang implementation, which is used to access GigE Vision protocol cameras.

Important Steps Before Upgrading

  • If you upgrade KubeEdge to v1.12, note that the EdgeCore configuration file has been upgraded to v1alpha2. You must modify the Edged section of your EdgeCore configuration to adapt to the new Edged.

  • If you want to use authorization for the Edge Kube-API Endpoint, please enable the RequireAuthorization feature through the feature gate in both CloudCore and EdgeCore (see the configuration sketch after this list). Once RequireAuthorization is enabled, MetaServer will only serve HTTPS requests.

  • If you want to upgrade EdgeMesh to v1.12, you no longer need to deploy the separate EdgeMesh-server, but you do need to configure relayNodes (as sketched in the EdgeMesh HA section above).

  • If you want to run EdgeMesh v1.12 on KubeEdge v1.12 and use HTTPS requests to communicate with KubeEdge, you must set kubeAPIConfig.metaServer.security.enable=true.
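
For reference, here is a minimal configuration sketch covering the last two points. The exact placement of these keys in the CloudCore, EdgeCore, and EdgeMesh configuration files is an assumption and may differ between releases; only the kubeAPIConfig.metaServer.security.enable path is taken verbatim from the note above.

# Hedged sketch only; key paths are assumptions except where noted.
# CloudCore / EdgeCore: enable the RequireAuthorization feature gate.
featureGates:
  requireAuthorization: true
---
# EdgeMesh: enable HTTPS access to the MetaServer (path quoted from the upgrade note).
kubeAPIConfig:
  metaServer:
    security:
      enable: true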

KubeEdge v1.12 brings exciting new features and improvements to the edge computing ecosystem. We invite you to explore the release and provide feedback to the community. Happy edge computing!

· 13 min read
Wack Xu

Abstract

The growing popularity of KubeEdge has raised community interest in its scalability and scale. Fully tested, Kubernetes clusters powered by KubeEdge can now stably support 100,000 concurrent edge nodes and manage more than one million pods. This report introduces the metrics used in the test, the test procedure, and how KubeEdge connects such a massive number of edge nodes.

Background

Fast-growing technologies, such as 5G networks, industrial Internet, and AI, are giving edge computing an important role in driving digital transformation. Cities, transportation, healthcare, manufacturing, and many other fields are becoming smart thanks to edge computing. According to Gartner, by 2023 the number of intelligent edge devices may be more than 20 times that of traditional IT devices, and by 2028 the embedding of sensors, storage, computing, and advanced AI functions in edge devices will grow steadily. IoT devices come in many types and in large quantities, and the ever-growing number of connected devices makes management and O&M increasingly challenging.

At the same time, users in the KubeEdge community are expecting large-scale edge deployments, and there are already some successful use cases. One project manages unmanned toll stations across China with nearly 100,000 edge nodes and more than 500,000 edge applications, and the numbers keep growing. Another case is a vehicle-cloud collaboration platform, the industry's first cloud-edge-device system, which enables fast software upgrade and iteration for software-defined vehicles. On this platform, each vehicle is connected as an edge node, and the number of edge nodes will reach millions.

Introduction to KubeEdge

KubeEdge is the industry's first cloud native edge computing framework designed for edge-cloud collaboration. Complementing Kubernetes for container orchestration and scheduling, KubeEdge allows applications, resources, data, and devices to collaborate between edges and the cloud. Devices, edges, and the cloud are now fully connected in edge computing.

In the KubeEdge architecture, the cloud is a unified control plane, which includes native Kubernetes management components and KubeEdge-developed CloudCore components. It listens to cloud resource changes and provides reliable, efficient cloud-edge messaging. At the edge side lie the EdgeCore components, including Edged, MetaManager, and EdgeHub. They receive messages from the cloud and manage the lifecycle of containers. The device mapper and event bus are responsible for device access.

[Figure: KubeEdge architecture]

Based on the Kubernetes control plane, KubeEdge allows nodes to be deployed in far more remote locations and thereby extends edge-cloud collaboration. Kubernetes supports 5,000 nodes and 150,000 pods, which is far from enough for edge computing in the Internet of Everything (IoE). The access of a large number of edge devices demands a scalable, centralized edge computing platform. To help users manage more at lower cost and with less effort, KubeEdge, fully compatible with Kubernetes, optimizes cloud-edge messaging and provides access support for massive numbers of edge nodes.

SLIs/SLOs

Scalability and performance are important features of Kubernetes clusters. Before performing the large-scale performance test, we need to define the measurement metrics. The Kubernetes community defines the following SLIs (Service Level Indicators) and SLOs (Service Level Objectives) to measure the cluster service quality.

  1. API Call Latency

| Status | SLI | SLO |
| --- | --- | --- |
| Official | Latency of mutating API calls for single objects for every (resource, verb) pair, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, for every (resource, verb) pair, excluding virtual and aggregated resources and Custom Resource Definitions, 99th percentile per cluster-day <= 1s |
| Official | Latency of non-streaming read-only API calls for every (resource, scope) pair, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, for every (resource, scope) pair, excluding virtual and aggregated resources and Custom Resource Definitions, 99th percentile per cluster-day: (a) <= 1s if scope=resource (b) <= 30s otherwise (if scope=namespace or scope=cluster) |
  2. Pod Startup Latency

| Status | SLI | SLO |
| --- | --- | --- |
| Official | Startup latency of schedulable stateless pods, excluding time to pull images and run init containers, measured from pod creation timestamp to when all its containers are reported as started and observed via watch, measured as 99th percentile over last 5 minutes | In default Kubernetes installation, 99th percentile per cluster-day <= 5s |
| WIP | Startup latency of schedulable stateful pods, excluding time to pull images, run init containers, provision volumes (in delayed binding mode) and unmount/detach volumes (from previous pod if needed), measured from pod creation timestamp to when all its containers are reported as started and observed via watch, measured as 99th percentile over last 5 minutes | TBD |

The community also defines indicators such as in-cluster network programming latency (latency for Service updates or changes in ready pods to be reflected to iptables/IPVS rules), in-cluster network latency, DNS programming latency (latency for Service updates or changes in ready pods to be reflected to the DNS server), and DNS latency. These indicators have not yet been quantified. This test was conducted to satisfy all SLIs/SLOs in the official state.

Kubernetes Scalability Dimensions and Thresholds

Kubernetes scalability does not just mean the number of nodes (Scalability != #Nodes). Other dimensions include the number of namespaces, pods, Services, secrets, and ConfigMaps. Configurations that Kubernetes supports create the Scalability Envelope (which keeps evolving):

[Figure: Kubernetes scalability envelope]

Obviously, it is impossible for a Kubernetes cluster to expand resource objects without limitation while satisfying SLIs/SLOs. Therefore, the industry defines the upper limits of Kubernetes resource objects.

1. Pods/node 30
2. Backends <= 50k & Services <= 10k & Backends/service <= 250
3. Pod churn 20/s
4. Secret & configmap/node 30
5. Namespaces <= 10k & Pods <= 150k & Pods/namespace <= 3k
6. …

Dimensions are not always independent: as you move farther along one dimension, your cross-section with respect to the other dimensions gets smaller. For example, expanding from 5,000 to 10,000 nodes affects the achievable limits of the other dimensions. Testing every combination would require an enormous amount of work, so this test focuses on the typical scenario: hosting 100,000 edge nodes and 1,000,000 pods in a single cluster while satisfying the SLIs/SLOs.

Test Tools

ClusterLoader2

ClusterLoader2 is an open source Kubernetes cluster performance test tool. It can test the Kubernetes SLIs/SLOs to check whether the cluster meets the service quality standards. It also visualizes data for locating cluster problems and optimizing cluster performance. After the test, users get a performance report with detailed test results.

Clusterloader2 performance metrics:

  • APIResponsivenessPrometheusSimple
  • APIResponsivenessPrometheus
  • CPUProfile
  • EtcdMetrics
  • MemoryProfile
  • MetricsForE2E
  • PodStartupLatency
  • ResourceUsageSummary
  • SchedulingMetrics
  • SchedulingThroughput
  • WaitForControlledPodsRunning
  • WaitForRunningPods

Edgemark

Edgemark is a performance test tool similar to Kubemark. In KubeEdge cluster scalability tests, it simulates the deployment of KubeEdge edge nodes, building ultra-large KubeEdge-powered Kubernetes clusters with limited resources. The objective is to expose control-plane problems that appear only in large-scale deployments. The following figure illustrates the Edgemark deployment:

[Figure: Edgemark deployment]

  • K8s master: the master node of the Kubernetes cluster
  • Edgemark master: the master node of the simulated Kubernetes cluster
  • CloudCore: the KubeEdge cloud management component, which is responsible for edge node access
  • hollow pod: a pod started in the actual cluster. It registers with the Edgemark master as a virtual edge node by starting Edgemark in it. The Edgemark master can schedule pods to this virtual edge node.
  • hollow edgeNode: a virtual node in the simulated cluster, registered from a hollow pod

Cluster Deployment Scheme for the Test

[Figure: Cluster deployment scheme for the test]

The Kubernetes control plane is deployed on one master node, with etcd, kube-apiserver, kube-scheduler, and kube-controller-manager each running as a single instance. The KubeEdge control plane is deployed with five CloudCore instances, which connect to kube-apiserver through the IP address of the master node. The CloudCore instances are exposed through a load balancer, and hollow EdgeNodes connect to a CloudCore instance selected by the load balancer's round-robin policy.

Test Environment Information

Control Plane OS Version

CentOS 7.9 64bit 3.10.0-1160.15.2.el7.x86_64

Kubernetes Version

Major:"1", Minor:"23", GitVersion:"v1.23.4", GitCommit:"e6c093d87ea4cbb530a7b2ae91e54c0842d8308a", GitTreeState:"clean", BuildDate:"2022-02-16T12:38:05Z", GoVersion:"go1.17.7", Compiler:"gc", Platform:"linux/amd64"

KubeEdge Version

KubeEdge v1.11.0-alpha.0

Master Node Configurations

  • CPU
Architecture:          x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 128
On-line CPU(s) list: 0-127
Thread(s) per core: 2
Core(s) per socket: 32
Socket(s): 2
NUMA node(s): 2
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz
Stepping: 6
CPU MHz: 2999.998
  • MEMORY
Total online memory:   256G
  • ETCD DISK
Type:   SAS_SSD
Size: 300GB

CloudCore Node Configurations

  • CPU
Architecture:          x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 12
On-line CPU(s) list: 0-11
Thread(s) per core: 2
Core(s) per socket: 6
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 106
Model name: Intel(R) Xeon(R) Platinum 8378A CPU @ 3.00GHz
Stepping: 6
CPU MHz: 2999.998
  • MEMORY
Total online memory:   48G

Component Parameter Configurations

1. kube-apiserver

--max-requests-inflight=2000
--max-mutating-requests-inflight=1000

2. kube-controller-manager

--kube-api-qps=100
--kube-api-burst=100

3. kube-scheduler

--kube-api-qps=200
--kube-api-burst=400

4. CloudCore

apiVersion: cloudcore.config.kubeedge.io/v1alpha1
kind: CloudCore
kubeAPIConfig:
  kubeConfig: ""
  master: ""
  qps: 60000
  burst: 80000
modules:
  cloudHub:
    advertiseAddress:
    - xx.xx.xx.xx
    nodeLimit: 30000
    tlsCAFile: /etc/kubeedge/ca/rootCA.crt
    tlsCertFile: /etc/kubeedge/certs/server.crt
    tlsPrivateKeyFile: /etc/kubeedge/certs/server.key
    unixsocket:
      address: unix:///var/lib/kubeedge/kubeedge.sock
      enable: false
    websocket:
      address: 0.0.0.0
      enable: true
      port: 10000
  cloudStream:
    enable: false
  deviceController:
    enable: false
  dynamicController:
    enable: false
  edgeController:
    buffer:
      configMapEvent: 102400
      deletePod: 10240
      endpointsEvent: 1
      podEvent: 102400
      queryConfigMap: 10240
      queryEndpoints: 1
      queryNode: 10240
      queryPersistentVolume: 1
      queryPersistentVolumeClaim: 1
      querySecret: 10240
      queryService: 1
      queryVolumeAttachment: 1
      ruleEndpointsEvent: 1
      rulesEvent: 1
      secretEvent: 1
      serviceEvent: 10240
      updateNode: 15240
      updateNodeStatus: 30000
      updatePodStatus: 102400
    enable: true
    load:
      deletePodWorkers: 5000
      queryConfigMapWorkers: 1000
      queryEndpointsWorkers: 1
      queryNodeWorkers: 5000
      queryPersistentVolumeClaimWorkers: 1
      queryPersistentVolumeWorkers: 1
      querySecretWorkers: 1000
      queryServiceWorkers: 1
      queryVolumeAttachmentWorkers: 1
      updateNodeStatusWorkers: 10000
      updateNodeWorkers: 5000
      updatePodStatusWorkers: 20000
      ServiceAccountTokenWorkers: 10000
    nodeUpdateFrequency: 60
  router:
    enable: false
  syncController:
    enable: true
Density Test

Test Execution

Before using ClusterLoader2 to perform the performance test, we defined the test policy using the configuration file. In this test, we used the official Kubernetes density case. The configuration file we used can be obtained here:

https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/testing/density/config.yaml

The following table describes the detailed Kubernetes resource configurations:

| Maximum type | Maximum value |
| --- | --- |
| Number of Nodes | 100,000 |
| Number of Pods | 1,000,000 |
| Number of Pods per node | 10 |
| Number of Namespaces | 400 |
| Number of Pods per Namespace | 2,500 |

For details about the test method and procedure, see the following links:

https://github.com/kubeedge/kubeedge/tree/master/build/edgemark

https://github.com/kubernetes/perf-tests/blob/master/clusterloader2/docs/GETTING_STARTED.md

Test Results

APIResponsivenessPrometheusSimple

  1. Mutating API latency (threshold = 1s):

    [Figure: mutating API latency]

  2. Read-only API call latency (scope=resource, threshold = 1s):

    [Figure: read-only API call latency, resource scope]

  3. Read-only API call latency (scope=namespace, threshold = 5s):

    [Figure: read-only API call latency, namespace scope]

  4. Read-only API call latency (scope=cluster, threshold = 30s):

    [Figure: read-only API call latency, cluster scope]

PodStartupLatency

| Metric | p50 (ms) | p90 (ms) | p99 (ms) | SLO (ms) |
| --- | --- | --- | --- | --- |
| pod_startup | 1688 | 2751 | 4087 | 5000 |
| create_to_schedule | 0 | 0 | 1000 | N/A |
| schedule_to_run | 1000 | 1000 | 1000 | N/A |
| run_to_watch | 1087 | 1674 | 2265 | N/A |
| schedule_to_watch | 1657 | 2724 | 3070 | N/A |

Note: In theory, these latencies should always be greater than 0. However, because kube-apiserver does not support RFC3339Nano, timestamp precision is limited to seconds, so when the latency is low, some of the values collected by ClusterLoader2 are reported as 0 due to precision loss.

Conclusion and Analysis

The preceding test results show that the API call latency and pod startup latency meet the SLIs/SLOs defined by the Kubernetes community. Therefore, KubeEdge-powered Kubernetes clusters can stably support 100,000 concurrent edge nodes and more than one million pods. In production, for reasons such as network security and partition management, edge nodes connect to the cloud only as O&M requires, so the number of edge nodes a single cluster can manage grows further in proportion to the share of offline edge nodes. In addition, sharding the control plane data so that different resources are stored in separate etcd spaces allows an even larger deployment scale.

KubeEdge's Support for Large-Scale Edge Node Access

1. Efficient Cloud-Edge Messaging

List-watch is a unified mechanism for asynchronous messaging of Kubernetes components. The list operation calls the list API of a resource to obtain full resource data through non-persistent HTTP connections. The watch operation calls the watch API of a resource to monitor resource change events and obtain incremental change data through persistent HTTP connections and block-based transmission encoding. In Kubernetes, in addition to the list-watch of a node, pods allocated to the node, and full service metadata, kubelet must also watch (by default) the running pods mounted with secrets and ConfigMaps as data volumes. The number of list-watch operations could explode with increasing nodes and pods, which heavily burdens kube-apiserver.

KubeEdge uses the two-way multiplexing edge-cloud message channel and supports the WebSocket (default) and QUIC protocols. EdgeCore at the edge initiates a connection request to CloudCore on the cloud. CloudCore list-watches Kubernetes resource changes, and delivers metadata to the edge through this two-way channel. EdgeCore uploads the metadata, such as edge node status and application status, to CloudCore through this channel. CloudCore reports the received metadata to kube-apiserver.

CloudCore aggregates the upstream and downstream data, so kube-apiserver only needs to handle a small number of list-watch requests from CloudCore. This effectively unburdens kube-apiserver and improves cluster performance.

Memory usage when the native Kubernetes kube-apiserver is used under the same node and pod scales:

[Figure: kube-apiserver memory usage with native Kubernetes]

Memory usage when kube-apiserver is used in a KubeEdge-powered Kubernetes cluster:

[Figure: kube-apiserver memory usage in a KubeEdge-powered Kubernetes cluster]

2. Reliable Incremental Cloud-Edge Data Transmission

In the case of complex edge network topology or poor networking quality, cloud-edge communication may be compromised by high network latency, intermittent/frequent disconnection, and other issues. When the network recovers and edge nodes want to reconnect to the cloud, a large number of full list requests will be generated, pressuring kube-apiserver. Large-scale deployments may amplify this challenge to system stability. To solve it, KubeEdge records the version of the metadata successfully sent to the edge. When the cloud-edge network is reconnected, the cloud sends incremental metadata starting from the recorded metadata version.

3. Lightweight Edge + Edge-Cloud Messaging Optimization

EdgeCore removes native kubelet features that are not used in edge deployments, such as in-tree volume plugins and cloud providers, trims the status information reported by nodes, and optimizes the resource usage of the edge agent. EdgeCore can run in as little as 70 MB of memory, on edge devices with as little as 100 MB of RAM. The WebSocket channel, edge-cloud message combination, and data trimming greatly reduce the communication pressure on the edge and the cloud and the access pressure on the control plane, ensuring the system works properly even under high latency and jitter.

When 100,000 edge nodes are connected, the number of ELB connections is 100,000.

[Figure: ELB connection count]

When 100,000 edge nodes and more than 1,000,000 pods are deployed, the inbound rate of the ELB network is about 3 MB/s in total; spread across 100,000 nodes, that is roughly 30 B/s, or about 0.25 kbit/s, of average uplink bandwidth per edge node.

[Figure: ELB network inbound rate]

Next Steps

Targeted tests will be performed on edge devices, edge-cloud messaging, and edge service mesh. In addition, for some edge scenarios, such as large-scale node network disconnection and reconnection, high latency of edge networks, and intermittent disconnection, new SLIs/SLOs need to be introduced to measure the cluster service quality and perform large-scale tests.

· 2 min read
Vincent Lin

As the first cloud native edge computing community, KubeEdge provides solutions for cloud-edge synergy and has been widely adopted in industries including transportation, energy, Internet, CDN, manufacturing, and smart campus. As KubeEdge deployments based on cloud-edge synergy accelerate, the community will continue to improve the security of KubeEdge in cloud-native edge computing scenarios.

The KubeEdge community attaches great importance to security and has set up SIG Security and a Security Team to design KubeEdge system security and respond quickly to security vulnerabilities. To conduct a more comprehensive security assessment of the KubeEdge project, the KubeEdge community, in cooperation with Ada Logics Ltd. and the Open Source Technology Improvement Fund (OSTIF), performed a holistic security audit of KubeEdge and produced a security audit report covering the threat model and security issues of the project. Thank you to Adam Korczynski and David Korczynski of Ada Logics for their professional and comprehensive evaluation of the KubeEdge project, which provides important guidance for its security protection. Thank you also to Amir Montazery and Derek Zimmer of OSTIF and the Cloud Native Computing Foundation (CNCF), who helped with this engagement.

The discovered security issues have been fixed and patched to the latest three minor release versions (v1.11.1, v1.10.2, v1.9.4) by KubeEdge maintainers, in accordance with the KubeEdge security policy. Security advisories have been published here.

For more details on the threat model and the mitigations, please see KubeEdge Threat Model And Security Protection Analysis: https://github.com/kubeedge/community/tree/master/sig-security/sig-security-audit/KubeEdge-threat-model-and-security-protection-analysis.md.

References:

Audit report: https://github.com/kubeedge/community/tree/master/sig-security/sig-security-audit/KubeEdge-security-audit-2022.pdf

OSTIF Blogpost: https://ostif.org/our-audit-of-kubeedge-is-complete-multiple-security-issues-found-and-fixed

CNCF Blogpost:

KubeEdge Threat Model And Security Protection Analysis: https://github.com/kubeedge/community/tree/master/sig-security/sig-security-audit/KubeEdge-threat-model-and-security-protection-analysis.md

· 4 min read

On June 21, 2022, KubeEdge released v1.11, introducing several exciting new features and enhancements that significantly improve node group management, mapper development, the installation experience, and overall stability.

v1.11 What's New

Release Highlights

Node Group Management

Users can now deploy applications to several node groups without writing a separate Deployment for every group. Node group management helps users to:

  • Manage nodes in groups

  • Spread apps among node groups

  • Run different versions of app instances in different node groups

  • Limit service endpoints in the same location as the client

Two new APIs have been introduced to implement Node Group Management:

  • NodeGroup API: represents a group of nodes that have the same labels.
  • EdgeApplication API: contains the template of the application organized by node groups, and the information on how to deploy different editions of the application to different node groups.

Refer to the links for more details (#3574, #3719).
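
A minimal sketch of the two APIs described above; the API group/version and field names are assumptions based on the release notes and may not match the final CRDs exactly.

# Hedged sketch: group nodes by label, then deploy one EdgeApplication across groups.
apiVersion: apps.kubeedge.io/v1alpha1
kind: NodeGroup
metadata:
  name: hangzhou
spec:
  matchLabels:                  # nodes carrying this label belong to the group
    location: hangzhou
---
apiVersion: apps.kubeedge.io/v1alpha1
kind: EdgeApplication
metadata:
  name: nginx-app
spec:
  workloadTemplate:             # base workload rendered into every targeted node group
    manifests:
    - apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: nginx
      spec:
        replicas: 2
        selector:
          matchLabels:
            app: nginx
        template:
          metadata:
            labels:
              app: nginx
          spec:
            containers:
            - name: nginx
              image: nginx:1.22
  workloadScope:
    targetNodeGroups:
    - name: hangzhou            # per-group overriders (e.g. a different image tag or replica count) go here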

Mapper SDK

Mapper SDK is a basic framework written in Go. Based on this framework, developers can implement a new mapper more easily. The SDK handles the connection to KubeEdge, provides data conversion, manages the basic properties and status of devices, and offers basic capabilities together with an abstract definition of the driver interface. Developers only need to implement the protocol-specific driver interface for their device to obtain a working mapper.

Refer to the link for more details (#70).

Beta sub-commands in Keadm to GA

Some new sub-commands in Keadm have moved to GA, including containerized deployment, offline installation, etc. The original init and join behaviors have been replaced by the implementation from beta init and beta join:

  • CloudCore will be running in containers and managed by Kubernetes Deployment by default.

  • Keadm now downloads releases that are packed as container images to edge nodes for node setup.

  • init: CloudCore Helm Chart is integrated into init, which can be used to deploy containerized CloudCore.

  • join: Installs EdgeCore as a system service from a Docker image, so there is no need to download it from a GitHub release.

  • reset: Reset the node, clean up the resources installed on the node by init or join. It will automatically detect the type of node to clean up.

  • manifest generate: Generate all the manifests to deploy the cloud-side components.

Refer to the link for more details (#3900).

Deprecation of original init and join

The original init and join sub-commands have been deprecated as they had issues with offline installation, etc.

Refer to the link for more details (#3900).

Next-gen Edged to Beta: Suitable for more scenarios

The new version of the lightweight engine Edged, optimized from Kubelet and integrated into edgecore, has moved to Beta. The new Edged will still communicate with the cloud through the reliable transmission tunnel.

Refer to the link for more details (Dev-Branch for beta: feature-new-edged).

Important Steps before Upgrading

If you want to use Keadm to deploy KubeEdge v1.11.0, please note that the behaviors of the init and join sub-commands have been changed.

Other Notable Changes

  • Add custom image repo for keadm join beta (#3654)

  • Keadm: beta join support remote runtime (#3655)

  • Use sync mode to update pod status (#3658)

  • Make log level configurable for local up kubeedge (#3664)

  • Use dependency to pull images (#3671)

  • Move apis and client under kubeedge/cloud/pkg/ to kubeedge/pkg/ (#3683)

  • Add subresource field in application for API with subresource (#3693)

  • Add Keadm beta e2e (#3699)

  • Keadm beta config images: support remote runtime (#3700)

  • Use unified image management (#3720)

  • Use armhf as default for armv7/v6 (#3723)

  • Add ErrStatus in api-server application (#3742)

  • Support compile binaries with kubeedge/build-tools image (#3756)

  • Add min TLS version for stream server (#3764)

  • Adding security policy (#3778)

  • Chart: add cert domain config in helm chart (#3802)

  • Add domain support for certgen.sh (#3808)

  • Remove default KubeConfig for cloudcore (#3836)

  • Helm: Allow annotation of the cloudcore service (#3856)

  • Add rate limiter for edgehub (#3862)

  • Sync pod status immediately when status update (#3891)

· 4 min read

On Mar 7, 2022, KubeEdge released v1.10. The new version introduces several enhancements, significantly improving the installation experience, performance testing, network communication, and Kubernetes version compatibility.

v1.10 What's New

Release Highlights

Installation Experience Improvement with Keadm

Keadm adds new sub-commands to improve the user experience, including containerized deployment, offline installation, and more. The new sub-commands are beta and config.

beta provides sub-commands that are still in testing but are functionally complete and can be used in advance: beta init, beta manifest generate, beta join, and beta reset.

  • beta init: The CloudCore Helm Chart is integrated into beta init, which can be used to deploy containerized CloudCore.

  • beta join: Installs EdgeCore as a system service from a Docker image, so there is no need to download it from a GitHub release.

  • beta reset: Resets the node and cleans up the resources installed on it by beta init or beta join. It automatically detects the type of node to clean up.

  • beta manifest generate: Generates all the manifests to deploy the cloud-side components.

config is used to configure the KubeEdge cluster for tasks such as cluster upgrade, API conversion, and image preloading. Image preloading is now supported through two sub-commands: config images list and config images pull.

  • config images list: List all images required for kubeedge installation.

  • config images pull: Pull all images required for kubeedge installation.

Refer to the links for more details. (#3517, #3540, #3554, #3534)

Preview version for Next-gen Edged: Suitable for more scenarios

This release previews a new version of the lightweight engine Edged, which is optimized from kubelet, integrated into EdgeCore, and consumes fewer resources. Users can customize the lightweight optimizations according to their needs.

Refer to the links for more details. (Dev-Branch for previewing: feature-new-edged)

Edgemark: Support large-scale KubeEdge cluster performance testing

Edgemark is a performance testing tool derived from Kubemark. Its primary use case is also scalability testing: it allows users to simulate edge clusters that can be much bigger than real ones.

Edgemark consists of two parts: real cloud-side components and a set of "hollow" edge nodes. In a hollow edge node, EdgeCore runs in a container, and its Edged module is injected with a mock CRI that does nothing, so the hollow edge node never actually starts containers or mounts volumes.

Refer to the link for more details. (#3637)

EdgeMesh proxy tunnel supports QUIC

Users can now choose QUIC as the protocol for EdgeMesh's proxy tunnel. In edge scenarios, nodes often sit in weak network environments, and compared with traditional TCP, QUIC offers better performance and QoS under such conditions.

Refer to the link for more details. (#281)

EdgeMesh supports proxying UDP applications

Some user services use the UDP protocol, and EdgeMesh can now proxy UDP applications as well.

Refer to the link for more details. (#295)

EdgeMesh supports SSH login between cloud-edge and edge-edge nodes

Edge nodes are generally located in private networks, yet operators often need to log in to and operate them over SSH. EdgeMesh provides a SOCKS5 proxy based on its internal tunnel, which forwards SSH requests from cloud or edge nodes to edge nodes.

Refer to the links for more details. (#258, #242)

Kubernetes Dependencies Upgrade

The vendored Kubernetes version has been upgraded to v1.22.6, so users can now use the features of the new version on both the cloud and the edge side.

Refer to the link for more details. (#3624)

Important Steps before Upgrading

If you want to deploy KubeEdge v1.10.0, please note that the Kubernetes dependency is v1.22.6.

Other Notable Changes

  • Remove dependency on os/exec and curl in favor of net/http (#3409, @mjlshen)

  • Optimize script when create stream cert (#3412, @gujun4990)

  • Cloudhub: prevent dropping volume messages (#3457, @moolen)

  • Modify the log view command after edgecore is running (#3456, @zc2638)

  • Optimize the iptables manager (#3461, @zhu733756)

  • Add script for build release (#3467, @gy95)

  • Using latest codes to do keadm_e2e (#3469, @gy95)

  • Change the resourceType of msg issued by synccontroller (#3496, @Rachel-Shao)

  • Add a basic image for building various components of KubeEdge (#3513, @zc2638)

  • Supporting crossbuild all components (#3515, [@fisher