Research and Implementation of Container Based Application Orchestration Service Technology

With the rapid development of cloud computing technology, Kubernetes (K8S), as the main orchestration tool for cloud-native applications, has become the preferred choice for enterprises and developers. This article studies container-based application orchestration service technology. Through a set of templates containing cloud resource descriptions, it quickly completes functions such as application creation and configuration, batch cloning of applications, and multi-environment deployment. It simplifies and automates the lifecycle management capabilities required by cloud applications, such as resource planning, application design, deployment, status monitoring, and scaling, so that users can complete infrastructure management and operation and maintenance work more conveniently, focus on innovation and research and development, and improve work efficiency. The practical effect of the technology described here depends to a certain extent on the capability of the underlying service resources, and templates must be created manually on first use. In production, a certain level of professional skill is required to create a good application orchestration template and to adjust and optimize resources for the production environment; only then can the effectiveness and efficiency of practical applications improve significantly.


Introduction
With the popularity of cloud-native technologies, more and more enterprises use Kubernetes to manage applications, automating the deployment, scaling, and management of containerized workloads. The shift from on-premises and single-cloud K8S deployments to hybrid and multi-cloud deployments is accelerating; the scale and number of clusters are growing explosively, and K8S environments now extend across data centers, public clouds, and edge environments. Teams therefore face various challenges, such as complex operation and maintenance, numerous application dependencies, inconvenient scaling, differences in cross-platform compatibility, and a lack of professional skills. Literature [1] gives a new method for optimizing the response speed of the power supply system in a K8S container cluster environment, but the method is highly specific and limited and is difficult to apply widely at present. Literatures [2][3][4] study the network transmission, load balancing, network environment, and compatibility of K8S container clusters and design different schemes to improve performance; however, these schemes often require extensive upgrades and optimization of existing systems, and the cost is difficult to estimate. In terms of container auto-scaling, literature [5] designs a new scaling algorithm based on a scheduling strategy that combines predictive and reactive scaling, which improves response delay and raises the quality of service of applications.
As an independent cloud service, the Application Orchestration Service (AOS) provides lifecycle management capabilities such as resource planning, application design, deployment, status monitoring, and scaling. It also applies a graphical designer to provide intuitive and convenient cloud resource provisioning and application deployment, realizing one-click cloud resource provisioning and application replication, so that users spend less time on infrastructure management and more on business innovation. Literature [6] observes that, as cloud computing trends toward multi-cloud development, orchestration technology that automatically configures, manages, and coordinates computer systems, applications, and services can reduce the management cost of large, complex systems; through template files generated by a graphical designer, it studies application orchestration efficiency and cloud resource utilization in depth and verifies the feasibility of an application orchestration scheme in a multi-cloud environment. Figure 1 shows the workflow of the application orchestration service.

Technical Proposal
In this paper, a PaaS container cloud platform is built, comprising four layers: cloud-native solutions, container cloud, container cloud base, and infrastructure services. A graphical application orchestration designer is integrated to provide unified management of multiple K8S distributions, which can quickly enable federated clusters, cloud-edge collaboration, microservices, and other capabilities, solving the problems of user migration and compatibility across different K8S versions. The platform provides full lifecycle management for applications and fine-grained management of resources such as compute, network, and storage. Taking clusters and applications as the core, it solves problems such as application dependencies, application deployment, and scalability. It provides integrated monitoring and independent operation and maintenance capabilities, combined with container-based self-recovery and self-scheduling strategies and interface-driven operation and maintenance configuration, which greatly reduces operation and maintenance costs and difficulty. Figure 2 shows the functional architecture of the container cloud platform.

Kubernetes Core Capabilities
The core capabilities of K8S mainly focus on cluster management, container orchestration, automatic scaling, load balancing and service discovery, high availability and fault tolerance, storage management, configuration and key management, logging and monitoring, etc., providing users with reliable, powerful and scalable containerization services [7].
At present, native K8S has some shortcomings. In cluster management, K8S can manage a cluster composed of multiple nodes, but it is a very complex system that requires a large number of parameters and options to be configured; this easily causes misconfiguration or inconsistency and increases the difficulty of maintenance and troubleshooting. Native K8S provides container orchestration, automatically scheduling and managing the deployment, scaling, and upgrading of containers to a certain extent, with configuration files used to define and configure applications, services, and environments [8]; however, the more complex the cluster environment and deployment tasks, the more complex those configuration files become. K8S can scale horizontally and vertically, but this automatic scaling mechanism is metric-based, which is not flexible enough for delay-sensitive applications (such as real-time computing) and cannot meet their real-time requirements; when load surges suddenly, the application may not scale in time, degrading performance. K8S provides built-in load balancing and service discovery, using an Ingress Controller as the load balancer by default, but a single load balancer with a simple algorithm cannot satisfy the load balancing needs of large-scale applications [9]. Through replica sets and failure recovery mechanisms, K8S ensures high availability and fault tolerance when nodes fail or containers crash. However, it lacks fault-domain awareness and cannot automatically spread Pods and replica sets across multiple fault domains, and during a rolling upgrade, old and new versions of Pods may coexist, which could render the entire application unusable if the upgrade fails. K8S provides a variety of storage plug-ins and abstract interfaces that can mount storage volumes, persist data, and share data. However, K8S only supports the creation and deletion of dynamic storage volumes and cannot dynamically expand or shrink them; without additional plug-ins it supports only basic volume types such as emptyDir, hostPath, and nfs, lacks support for storage systems such as Ceph and GlusterFS, and cannot perform fine-grained storage management for different business requirements. K8S provides configuration management that separates configuration parameters, managing and injecting them through ConfigMap and Secret objects [10]. However, it lacks centralized configuration and key management tools; without a fine-grained access control mechanism, the access of different roles cannot be tightly controlled, which may lead to configuration and key leakage. The lack of a built-in key rotation mechanism increases the risk of key management, and the lack of complete audit and monitoring functions makes it difficult to track and monitor the use and change of configurations and keys, increasing security risks [11].
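The metric-based autoscaling criticized above follows the Horizontal Pod Autoscaler's documented proportional rule, desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric). A minimal sketch of that rule (the function name and sample values are illustrative, not from the paper):

```python
import math

def hpa_desired_replicas(current_replicas: int, current_metric: float,
                         target_metric: float) -> int:
    """Kubernetes HPA scaling rule: scale the replica count in proportion
    to the ratio of observed metric to target metric, rounding up."""
    return math.ceil(current_replicas * current_metric / target_metric)

# A sudden load spike from 50% to 200% CPU against an 80% target:
print(hpa_desired_replicas(4, 200.0, 80.0))  # 10
```

Because the rule only reacts after the metric has already risen, a burst of load is served by too few replicas until the next reconciliation, which is exactly the delay-sensitivity problem described above.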

Container Cloud Platform
The container cloud platform is a PaaS platform with clusters and applications at its core, providing users with complete lifecycle management and one-stop operation and maintenance of containers [12]. The functions provided are shown in Table 1 below:

Table 1. Functions of the container cloud platform.

Function name: Functional description
Multi-cluster management: Manages multiple Kubernetes clusters at the same time, with cluster-specific configuration, making multi-cluster and multi-tenant management more convenient.
Enriched permission management: RBAC-based permission management administers department resources from a tenant/project perspective, which is better suited to multi-tenant scenarios.
Multi-version management: Templates manage each YAML file; Kubernetes resources can be updated from a specific historical version, and release history is preserved.
More diverse ecosystem: Cloud services connect with surrounding platforms such as monitoring and resource trees, adding capabilities that native Kubernetes lacks.
Simplicity of operation: Basic Kubernetes resource configuration can be added through forms, while an advanced mode supports writing YAML directly to create resources.
Rapid cloud adoption: Unified cluster management with no need to log in to servers ensures compliant, stable, and healthy operation of services, with fast application release, application version management, and more effective control and use of resources.
Security isolation: Spatial data isolation based on cluster, organization, project, etc.; isolation and auditing of operations based on clusters and applications.
Integrated monitoring and operations: Visual operation and maintenance based on clusters and applications; a complete log/monitoring system allows failures to be troubleshot promptly.

The following concepts can be formulated mathematically for a container cloud platform, and may vary from platform to platform and scenario to scenario:

1. Resource scheduling. The evaluation function in a resource scheduling algorithm may take CPU usage, memory usage, network bandwidth, and so on into account [13]:

Scheduling evaluation function = α * X + β * Y + γ * Z (1)

where X is CPU utilization, Y is memory utilization, and Z is network bandwidth.
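Formula (1) can be applied directly to rank candidate nodes; the weights and node utilizations below are illustrative assumptions, not values from the paper:

```python
def scheduling_score(x_cpu, y_mem, z_bw, alpha=0.5, beta=0.3, gamma=0.2):
    """Evaluation function (1): weighted sum of CPU utilization, memory
    utilization and network bandwidth. Weights are illustrative."""
    return alpha * x_cpu + beta * y_mem + gamma * z_bw

# Rank candidate nodes; with utilization inputs, lower score = less loaded.
nodes = {"node-a": (0.7, 0.5, 0.2), "node-b": (0.3, 0.4, 0.1)}
best = min(nodes, key=lambda n: scheduling_score(*nodes[n]))
print(best)  # node-b
```

Whether a high score is good or bad depends on whether the inputs measure load or free capacity; a real scheduler fixes that convention before comparing nodes.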

2. Load balancing. The weight allocation formula in the load balancing algorithm ensures load balance among nodes or container instances:

Weight = λ * A + μ * B + ν * C (2)

where A is the CPU margin, B is the memory margin, and C is the network bandwidth margin.

3. Container orchestration. The cost function in the container orchestration algorithm accounts for dependencies between containers, constraints, and so on:

Cost function = Σ_i (c_i * N_i) (3)

where c_i is the resource requirement of container i and N_i is the resource price of the node it runs on.
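Formula (2) can drive a simple weighted-random backend selection, so that instances with larger free margins receive proportionally more traffic. The coefficients and backend margins below are illustrative assumptions:

```python
import random

def node_weight(cpu_margin, mem_margin, bw_margin, lam=0.4, mu=0.4, nu=0.2):
    """Weight per formula (2), computed from free CPU (A), memory (B) and
    bandwidth (C) margins. Coefficients are illustrative."""
    return lam * cpu_margin + mu * mem_margin + nu * bw_margin

def pick_backend(backends):
    """Weighted-random choice: a backend's probability of being picked is
    proportional to its weight, i.e. to its free capacity."""
    weights = [node_weight(*m) for m in backends.values()]
    return random.choices(list(backends), weights=weights, k=1)[0]

backends = {"pod-1": (0.9, 0.8, 0.7), "pod-2": (0.1, 0.2, 0.3)}
print(pick_backend(backends))  # usually pod-1, which has far more headroom
```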

4. Elastic scaling. The thresholds and trigger conditions in the elastic scaling strategy drive decisions based on real-time load: if CPU usage exceeds the threshold, the number of nodes is increased; if CPU usage falls below the threshold, the number of nodes is reduced.

5. Service Level Agreement (SLA). Performance indicators and constraints defined in the SLA, for example:

SLA success rate = (successfully answered requests / total requests) * 100% (4)

6. Container resource limits. A container's resource limit ensures that the container does not exceed its scheduled resources at runtime, for example:

CPU limit = 0.5 (5)

meaning the container uses at most half a CPU core.

7. Network bandwidth control. A bandwidth control policy limits the network transmission rate of containers or services, for example:

Bandwidth control = 100 Mbps (6)

meaning the container's network rate is limited to 100 megabits per second.
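The threshold rule of item 4 and the success rate of formula (4) can be sketched together; the thresholds and request counts below are illustrative assumptions:

```python
def scale_decision(cpu_usage, nodes, upper=0.8, lower=0.3):
    """Threshold rule from item 4: add a node above the upper threshold,
    remove one below the lower threshold (thresholds are illustrative).
    Using two thresholds instead of one avoids oscillating decisions."""
    if cpu_usage > upper:
        return nodes + 1
    if cpu_usage < lower and nodes > 1:
        return nodes - 1
    return nodes

def sla_success_rate(successful, total):
    """Formula (4): successful responses over total requests, as a percentage."""
    return successful / total * 100.0

print(scale_decision(0.92, 3))      # 4
print(sla_success_rate(995, 1000))  # 99.5
```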

Key Technology
The theory behind the container cloud platform and AOS technology studied in this paper covers containerization, service orchestration, cloud-native technology, automated operation and maintenance, distributed systems, and networking.

Containerization
Application orchestration services are usually based on container technologies such as Docker and K8S. Containerization packages an application and its dependencies into an independent, portable container that can be deployed and managed in different runtime environments. Its goal is to provide a lightweight, rapidly deployable, extensible, and portable solution that meets the needs of modern application development and deployment. K8S is a container orchestration and management platform for automating the deployment, scaling, and management of containers; it provides a specification and resource management mechanism for containerized applications and can automatically schedule and scale containers to ensure high availability and elasticity [14].
The mathematics of containerization technology usually involves resource allocation, performance optimization, container orchestration, and so on. Some related concepts and formulas:

1) Container load balancing. The weight allocation formula in the container load balancing algorithm ensures load balance across container instances:

Weight = λ * x + μ * y (7)

where x is the CPU margin and y is the memory margin.

2) Container storage optimization. A storage optimization algorithm maximizes the utilization of storage resources:

Storage utilization = (used storage / total storage) * 100%

3) Container security assessment. A security assessment algorithm detects vulnerabilities and risks in container images.
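The storage-utilization ratio in item 2) is a direct percentage; a one-function sketch with illustrative byte counts:

```python
def storage_utilization(used_bytes: float, total_bytes: float) -> float:
    """Used storage over total storage, as a percentage, per item 2)."""
    return used_bytes / total_bytes * 100.0

print(storage_utilization(750, 1000))  # 75.0
```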

Service Orchestration Technology
Application orchestration services automate the deployment, management, and scheduling of applications by defining and managing the relationships and dependencies between application components. This involves service orchestration technologies such as Docker Compose and the K8S Service and Deployment objects [15].
Service orchestration in K8S involves several important concepts and components, including Pod, Service, ReplicaSet, and Deployment. A Pod, or container group, is the smallest scheduling unit of K8S and can contain one or more application containers; it realizes resource sharing and network communication between containers and can accommodate multiple interrelated applications. A Service is an abstract logical concept used to define the access policy and network entry point for a set of Pods; it can provide load balancing, service discovery, name resolution, and other functions, and groups related Pods into a service through a label selector. When a Pod changes, the Service automatically updates its backend Pod list to ensure that requests are forwarded correctly. A ReplicaSet, or replica set, is a K8S controller that guarantees the running state of a group of Pods; it specifies how many Pods should run, automatically recovers when a Pod fails or is deleted, and can define Pod startup strategy, container parameters, and other configuration to ensure Pods run in a consistent environment.
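The label-selector grouping described above can be sketched without a real cluster; the code below mimics (rather than calls) how a Service builds its endpoint list, and the pod names and labels are illustrative:

```python
def select_endpoints(pods, selector):
    """Return the names of pods whose labels contain every key/value pair
    in the selector, mimicking how a Service selects its backend Pods."""
    return [name for name, labels in pods.items()
            if all(labels.get(k) == v for k, v in selector.items())]

pods = {
    "web-1": {"app": "web", "tier": "frontend"},
    "web-2": {"app": "web", "tier": "frontend"},
    "db-1":  {"app": "db"},
}
print(select_endpoints(pods, {"app": "web"}))  # ['web-1', 'web-2']
```

When a Pod is added or relabeled, re-running the selection yields the updated backend list, which is the behavior the text attributes to the Service.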
Service orchestration refers to coordinating and managing the execution sequence and communication of multiple service instances in a distributed system to achieve a specific business process or workflow. Its mathematical formulation may involve models, algorithms, or optimization problems. Some concepts and formulas that may be related to service orchestration: 1. Workflow modeling. A directed graph represents a workflow, where nodes represent services or tasks and edges represent execution order.
The adjacency matrix of the graph represents the structure of the workflow.

2. Task scheduling. Formulas used in scheduling algorithms, such as Minimum Completion Time scheduling, where the completion time of task i is st_i + et_i, st_i being its start-up time and et_i its execution time.
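Minimum Completion Time scheduling can be sketched as a greedy loop that places each task on the machine where it would finish earliest (the machine's current load acts as the start-up time st_i); task durations and machine count below are illustrative:

```python
def mct_schedule(task_times, n_machines):
    """Greedy Minimum Completion Time: each task is assigned to the machine
    on which it completes earliest, i.e. minimizing load + execution time.
    Returns the per-task machine assignment and the resulting makespan."""
    loads = [0.0] * n_machines
    assignment = []
    for et in task_times:
        m = min(range(n_machines), key=lambda i: loads[i] + et)
        loads[m] += et
        assignment.append(m)
    return assignment, max(loads)

assignment, makespan = mct_schedule([4, 3, 2, 2], 2)
print(makespan)  # 6.0
```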

3. Resource allocation and optimization. The task allocation problem can be modeled mathematically, for example as an integer linear program with binary variables av_ij indicating whether task j is placed on resource i, subject to constraints such as

Σ_i av_ij = 1, av_ij ∈ {0, 1}, ∀j (13)

4. Service Level Agreement (SLA) definition. Mathematical expressions of performance metrics that might be included in an SLA, such as a threshold on service response time:

Service response time ≤ threshold (14)
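The assignment constraint of (13), every task on exactly one resource, can be illustrated with a brute-force search over all placements (a practical system would use an ILP solver instead); the cost matrix is an illustrative assumption:

```python
from itertools import product

def min_cost_assignment(cost):
    """Brute-force 0/1 assignment: each task (column j) is placed on exactly
    one resource (row i), satisfying constraint (13), minimizing total cost.
    cost[i][j] is the cost of running task j on resource i."""
    n_resources, n_tasks = len(cost), len(cost[0])
    best = None
    for choice in product(range(n_resources), repeat=n_tasks):
        total = sum(cost[i][j] for j, i in enumerate(choice))
        if best is None or total < best[0]:
            best = (total, choice)
    return best

cost = [[4, 2, 8],
        [3, 5, 1]]
print(min_cost_assignment(cost))  # (6, (1, 0, 1))
```

Enumerating one resource per task enforces Σ_i av_ij = 1 by construction, which is why no explicit constraint check is needed in the loop.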

Limitations and Future Work
Containerization is a method of packaging software so that it can run consistently on any infrastructure. Orchestration refers to the automated configuration, coordination, and management of these containers.
Here are some common limitations and areas for future work in this field:

Conclusion
In order to further improve the performance and efficiency of the container cloud and remedy the defects of native K8S, this paper provides application orchestration services and other capabilities that achieve the efficiency, stability, and ease of use of the container cloud platform by directly modifying component source code and integrating third-party open-source plugins. 1) Component source code modification: hundreds of modifications were made to open-source component code to enhance the overall stability and performance of the platform, including changing the default mount of the kubelet container startup log from EmptyDir to HostPath to meet log collection requirements; modifying the image deletion method to solve the problem that untagged images could not be deleted; integrating the Clair scanning tool into the image repository to enable security scanning of images; modifying the disk reclamation mechanism of Docker Registry so that residual images no longer leave unreclaimable disk space; modifying the pipeline state judgment of the Jenkins-K8S plugin to fix state desynchronization when running pipelines at scale; and modifying the K8S gray (canary) upgrade mechanism, automatic elastic scaling function, and more.
2) Application orchestration service: a graphical application orchestration designer is provided and the relevant code modified, so that users can automatically generate templates and automatically create and deploy applications by creating and editing an application orchestration topology, which greatly improves efficiency and convenience. At the same time, the Helm tool is combined with AOS, giving users richer choices and more convenient operation when orchestrating applications.
In this paper, through secondary development of K8S component source code and the container cloud platform, unified resource management is realized and the efficiency of middleware management is improved. The platform lowers the threshold for basic operation and maintenance and provides out-of-the-box middleware operation visualization and advanced operation and maintenance functions; unified monitoring quickly locates middleware faults and problems; and quick start and provisioning of middleware reduces deployment time, among other features and optimizations. However, barriers remain to using container cloud platforms and products such as K8S and AOS, which require users to have certain technical ability and experience. Cloud-native technology is still developing rapidly; as technology upgrades and demand grows, new problems may appear in subsequent use, so the function and performance of the container cloud platform must continue to be iteratively optimized to ensure the stability, security, and efficiency of the product.

Figure 1. Workflow of application orchestration service.

Figure 2. Functional architecture of container cloud platform.
2. Security Isolation: Container-based systems often share the same OS kernel, leading to potential security vulnerabilities; ensuring secure isolation between containers remains a critical challenge.
3. Complexity in Management: Despite automation, orchestrating a large number of containers can become complex; managing container lifecycles, dealing with dependencies, and ensuring high availability are non-trivial tasks.
4. Resource Overhead: Container orchestration can introduce additional resource overhead, especially in terms of CPU and memory usage, which can be significant in large-scale deployments.
5. Networking Challenges: Ensuring robust and secure networking between containers, especially across multiple hosts or cloud environments, can be complex.
6. Data Persistence and Storage: Managing stateful applications and ensuring data persistence in a containerized environment is challenging, as containers are inherently stateless.
7. Limited Standardization: Different container orchestration tools and platforms have varying features and capabilities, leading to a lack of standardization.