Container orchestration platform used as the runtime foundation for scalable, resilient, and multi-tenant systems.
Kubernetes is used as the core runtime platform for deploying and operating containerized workloads, providing consistent abstractions for scaling, resilience, and service coordination.
It forms the execution layer on which application platforms, data systems, and machine learning workflows are built.
Workload Orchestration
Supports stateless, stateful, and batch workloads through primitives such as Deployments, StatefulSets, Jobs, and CronJobs.
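As a minimal sketch of how a stateless workload is declared (the name, image, and replica count below are placeholders, not details from any specific project):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-api                  # hypothetical service name
spec:
  replicas: 3                    # desired number of identical Pods
  selector:
    matchLabels:
      app: web-api
  template:
    metadata:
      labels:
        app: web-api
    spec:
      containers:
        - name: web-api
          image: registry.example.com/web-api:1.0.0   # placeholder image
          ports:
            - containerPort: 8080
```

A StatefulSet would take the place of the Deployment for workloads needing stable identity and storage, and a Job or CronJob for run-to-completion batch work.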
Service Networking & Discovery
Provides stable service endpoints, traffic routing, and policy-based network isolation.
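A sketch of the networking side, reusing the hypothetical web-api labels from the example above: a ClusterIP Service provides a stable DNS name and load-balances across matching Pods, while a NetworkPolicy limits which Pods may reach them.

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-api                  # reachable as web-api.<namespace>.svc
spec:
  selector:
    app: web-api                 # traffic is routed to Pods with this label
  ports:
    - port: 80
      targetPort: 8080
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-only
spec:
  podSelector:
    matchLabels:
      app: web-api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend      # only Pods labeled app=frontend may connect
```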
Scaling & Self-Healing
Enables automatic scaling, rolling updates, and recovery driven by declarative health checks and desired-state reconciliation.
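For example, a HorizontalPodAutoscaler can scale the hypothetical web-api Deployment on observed CPU utilization, while liveness and readiness probes on its Pod template drive restarts and gate rolling updates; the thresholds here are illustrative.

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-api
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web-api
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add replicas above ~70% average CPU
```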
Extensibility via APIs
Allows platform-specific behavior through CRDs, operators, and controllers.
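As an illustration of extending the API surface, a minimal CustomResourceDefinition; the group, kind, and fields (platform.example.com, MLPipeline, schedule, image) are hypothetical, and in practice a controller or operator would reconcile objects of this new type.

```yaml
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: mlpipelines.platform.example.com   # must be <plural>.<group>
spec:
  group: platform.example.com
  scope: Namespaced
  names:
    plural: mlpipelines
    singular: mlpipeline
    kind: MLPipeline
  versions:
    - name: v1alpha1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                schedule:
                  type: string   # e.g. a cron expression the controller acts on
                image:
                  type: string   # container image the pipeline runs
```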
Multi-Tenant Execution Model
Supports shared clusters with logical isolation and resource governance.
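A sketch of per-tenant governance, assuming a hypothetical team-a tenant: a dedicated Namespace plus a ResourceQuota capping aggregate resource requests, limits, and Pod count. RBAC bindings and NetworkPolicies would typically complete the isolation boundary.

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: team-a                   # hypothetical tenant namespace
---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-a-quota
  namespace: team-a
spec:
  hard:
    requests.cpu: "8"            # total CPU requests allowed in the namespace
    requests.memory: 16Gi
    limits.cpu: "16"
    limits.memory: 32Gi
    pods: "50"
```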
Designed and operated Kubernetes-based platforms to run application services, data pipelines, and ML workloads in production environments.
Key contributions included:
Kubernetes acted as the foundational runtime layer, enabling higher-level platforms such as ML orchestration, GitOps delivery, and observability to operate consistently and at scale.
CML Insights • July 2022 - 2025
How I used it: Scalable microservices with production-grade reliability
IFS R&D International, Sri Lanka • 2020 - 2022
How I used it: Orchestrated, scalable container workloads for production-grade cloud delivery