Edge computing with Kubernetes: Resource scheduling

What is Kubernetes?

Kubernetes, often abbreviated K8s, is a system for automating the deployment of applications in the cloud, as well as their distribution, configuration, coordination and administration (see kubernetes.io). Kubernetes can be used to ensure uptime and scalability for services and end systems, features that are essential for both civilian and military systems. K8s is often used as a service platform in modern communication and information systems. Kubernetes was originally developed by Google and is now maintained as an open-source project under the Cloud Native Computing Foundation (CNCF).

The main benefit of using Kubernetes is the flexibility and scalability the system provides, including automated software updates, service discovery and load balancing, service orchestration, security and configuration management, and self-healing. K8s therefore fits very well with modern agile system development based on DevOps and microservices.

K8s is also scalable across a set of computers (or nodes), whether physical hardware, virtual machines, or a combination of the two. When K8s is used across multiple nodes, it is referred to as a Kubernetes cluster. In cloud computing, Kubernetes typically runs in a data center. The solution can also be used for edge computing, but unstable communication and intermittent node connectivity then pose challenges.

Specific question for this thesis

We offer several theses related to studying improvements to K8s that make it more suitable for edge computing, for example for use in disaster relief and rescue operations. This thesis is well suited for collaboration with another student who selects a closely related thesis.

Resource scheduling

Kubernetes is quite good at making scheduling decisions based on the availability of RAM and CPU. This is sufficient for orchestration in clouds deployed in data centers, where the network between nodes is fast and stable.
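
As a rough illustration, the Go sketch below (Go is used here because Kubernetes and its scheduler plugins are written in Go) models the basic idea: nodes that cannot cover a pod's CPU and memory requests are filtered out, and the remaining nodes are ranked by their free capacity. The types, numbers and scoring rule are simplified assumptions for illustration, not the actual kube-scheduler code.

    package main

    import "fmt"

    // Simplified model of CPU/RAM-based scheduling: filter nodes that cannot
    // satisfy the pod's requests, then prefer the node with the most headroom
    // (a rough stand-in for kube-scheduler's least-allocated scoring).
    type Node struct {
        Name           string
        AllocatableCPU int64 // millicores
        AllocatableMem int64 // MiB
        RequestedCPU   int64 // already requested by running pods
        RequestedMem   int64
    }

    type PodRequest struct {
        CPU int64 // millicores
        Mem int64 // MiB
    }

    func fits(n Node, p PodRequest) bool {
        return n.AllocatableCPU-n.RequestedCPU >= p.CPU &&
            n.AllocatableMem-n.RequestedMem >= p.Mem
    }

    func score(n Node, p PodRequest) int64 {
        return (n.AllocatableCPU - n.RequestedCPU - p.CPU) +
            (n.AllocatableMem - n.RequestedMem - p.Mem)
    }

    func main() {
        nodes := []Node{
            {"node-a", 4000, 8192, 3600, 7000},
            {"node-b", 2000, 4096, 500, 1024},
        }
        pod := PodRequest{CPU: 500, Mem: 1024}

        best, bestScore := "", int64(-1)
        for _, n := range nodes {
            if !fits(n, pod) {
                continue // filtered: not enough free CPU or memory
            }
            if s := score(n, pod); s > bestScore {
                best, bestScore = n.Name, s
            }
        }
        fmt.Println("selected node:", best)
    }

Note that nothing in this model looks at the network between nodes, which is exactly the gap this thesis addresses.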

To make Kubernetes relevant for the orchestration of services at the edge of the network (Edge Computing, Fog Computing etc.), the communication between nodes must be handled as well. "Out there", network availability is not even guaranteed, and when networks are available, they may have very low capacity (4G, or even 3G), rapidly changing latency (due to route changes), or throughput that fluctuates frequently and widely.
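
Before a scheduler can take such link properties into account, they have to be measured. The Go sketch below shows one simple way this could be done, by timing a TCP connection attempt to a peer node; the address, port and probing method are illustrative assumptions, and a real solution would also have to track throughput and availability over time.

    package main

    import (
        "fmt"
        "net"
        "time"
    )

    // probeLatency estimates the connection set-up time to a peer node by
    // timing a TCP dial. A failure indicates that the node is currently
    // unreachable over the network.
    func probeLatency(addr string, timeout time.Duration) (time.Duration, error) {
        start := time.Now()
        conn, err := net.DialTimeout("tcp", addr, timeout)
        if err != nil {
            return 0, err
        }
        defer conn.Close()
        return time.Since(start), nil
    }

    func main() {
        // Hypothetical address of an agent running on another edge node.
        rtt, err := probeLatency("10.0.0.2:9100", 2*time.Second)
        if err != nil {
            fmt.Println("node unreachable:", err)
            return
        }
        fmt.Println("connection set-up time:", rtt)
    }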

This means that we need a new scheduler for Kubernetes, one that considers not only RAM and CPU but also network properties. In the thesis, you will explore the challenges that can arise, formulate them as part of a scheduling strategy, and implement and test your approach.
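
As a starting point, a network-aware scheduler could combine compute headroom with such measurements into a single node score. The Go sketch below is one possible shape of that idea; the metric names, weights and normalisation are assumptions made for illustration, and a real implementation would more likely be built as a kube-scheduler score plugin or a scheduler extender fed by live measurements.

    package main

    import (
        "fmt"
        "math"
    )

    // NodeStats holds the compute and network metrics a network-aware
    // scheduler might consider (all fields and units are illustrative).
    type NodeStats struct {
        Name         string
        FreeCPU      float64 // fraction of allocatable CPU still free, 0..1
        FreeMem      float64 // fraction of allocatable memory still free, 0..1
        RTTMillis    float64 // recent round-trip time to the node
        ThroughputMb float64 // recent measured throughput, Mbit/s
        Reachable    bool
    }

    // scoreNode blends compute headroom with link quality. Unreachable nodes
    // are rejected outright; otherwise lower RTT and higher throughput raise
    // the score. The weights are arbitrary and would be tuned experimentally.
    func scoreNode(s NodeStats) (float64, bool) {
        if !s.Reachable {
            return 0, false
        }
        linkQuality := math.Min(1, s.ThroughputMb/100) / (1 + s.RTTMillis/100)
        return 0.3*s.FreeCPU + 0.2*s.FreeMem + 0.5*linkQuality, true
    }

    func main() {
        nodes := []NodeStats{
            {"edge-a", 0.7, 0.6, 20, 50, true}, // decent 4G link
            {"edge-b", 0.9, 0.8, 400, 2, true}, // congested 3G link
            {"edge-c", 0.95, 0.9, 0, 0, false}, // currently unreachable
        }
        best, bestScore := "", -1.0
        for _, n := range nodes {
            if sc, ok := scoreNode(n); ok && sc > bestScore {
                best, bestScore = n.Name, sc
            }
        }
        fmt.Println("preferred node:", best)
    }

In this example, edge-b has more free CPU and memory, but edge-a wins because of its better link; making that kind of trade-off explicit is the core of the scheduling strategy to be developed.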

Learning outcome

Experience

  • in formulating, investigating and answering research questions
  • in working with Kubernetes, the widely used container orchestration system
  • in conducting experiments and evaluating and interpreting research results

Conditions

We expect that you:

  • have been admitted to a master's program in MatNat@UiO - primarily PROSA
  • take this as a long thesis
  • will participate actively in the weekly SINLab meetings
  • are interested in and have some knowledge of C/C++ programming
  • include the course IN5060 in the study plan, unless you have already completed a course on classical (non-ML) data analysis
  • include IN5700 or IN5020 in the study plan, unless you have already completed a course on distributed systems

Published Aug. 29, 2023 09:58 - Last modified Oct. 16, 2023 14:59

Supervisor(s)

Scope (credits)

60