kcp is a highly multi-tenant Kubernetes control plane, built for SaaS service providers who want to offer a unified, API-driven platform to many customers while dramatically reducing both a) the cost of operating the platform and b) the cost of onboarding new services. It is designed to scale to tens of thousands of customers and to offer a Kubernetes-like experience, including execution of workloads, but with dramatically lower per-customer overhead.
Check out our concepts document and feel free to open an issue if something is not covered.
If kcp is a Kubernetes API server without pod-like APIs, how do resources like Deployments get scheduled?
kcp has a concept called the syncer, which is installed on each SyncTarget. The syncer negotiates with kcp a set of APIs to make accessible in the workspace. This may include Deployments or other resources you explicitly configure the syncer to synchronize to kcp. Once these APIs are available in your Workspace, you can create resources of those types. From there, the Location and Placement APIs help determine which Location your deployable resource lands on.
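As a rough sketch, a SyncTarget registered with kcp might look something like this (the API group, version, and fields are illustrative assumptions and can differ between kcp releases):

```yaml
# Illustrative only: a SyncTarget representing a physical cluster on which a
# syncer has been installed. The API group/version and labels shown here are
# assumptions, not the definitive kcp API.
apiVersion: workload.kcp.dev/v1alpha1
kind: SyncTarget
metadata:
  name: us-east-cluster
  labels:
    region: us-east   # Locations and Placements can select targets via labels
```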
Will kcp be able to pass the Kubernetes conformance tests in the CNCF conformance suites?
No. The Kubernetes conformance suites require that all Kubernetes APIs be supported, and kcp does not support all APIs out of the box (for instance, Pods).
Yes! All development is public, and we have started discussions about what is mature enough to present to various interested parties. It is worth noting that not all of kcp's goals are goals of Kubernetes, so while some items may be accepted upstream, we expect that at least some of the concepts in kcp will live outside the Kubernetes repository.
kcp depends on a fork of Kubernetes, so updating Kubernetes for the kcp project itself requires a rebase; we actively follow Kubernetes releases and rebase regularly. Updating Kubernetes for clusters attached to kcp works exactly as it does today, though you may choose different availability patterns for applications based on kcp's ability to cordon and drain clusters or relocate applications.
Yes.
We are in the early stages of brainstorming storage use cases. Please join the conversation if you have opinions or use cases in this area.
With multiple Workspaces on a single cluster, that implies Pods from multiple tenants are on the same host VM. Does this mean privileged Pods are forbidden, to avoid cross-contamination with host ports and host paths?
We aren't quite there yet. Security controls are especially important at the multi-tenant level and we'd love to hear your use cases in this area.
Controller patterns are something we are actively working on defining better. In general, operators and controllers from a service-providing team would run on their own compute (or shared compute) and point back to kcp in order to have a view of the resources they need to act upon. That view would be provided via a Virtual Workspace.
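For illustration, such a controller's kubeconfig would point at a virtual workspace URL rather than at an ordinary cluster endpoint. The URL shape and credentials below are hypothetical:

```yaml
# Hypothetical kubeconfig fragment: the controller authenticates to kcp and
# watches resources through a virtual workspace endpoint. The server URL is
# an assumed example, not a documented kcp path.
apiVersion: v1
kind: Config
clusters:
  - name: kcp-virtual-workspace
    cluster:
      server: https://kcp.example.com/services/apiexport/root:my-org/my-service
contexts:
  - name: controller
    context:
      cluster: kcp-virtual-workspace
      user: controller-sa
current-context: controller
users:
  - name: controller-sa
    user:
      token: <service-account-token>   # placeholder credential
```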
Yes. We are tracking read-through of resources and debugging as use cases we need to support. You can view a demo of the current work in our April 26 Community Call Recording.
Workspaces can contain other workspaces and workspaces are typed. Please see the Workspace documentation for more details.
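For a feel of what that looks like, here is a sketch of a typed workspace created inside a parent workspace (the API group, version, and type name are illustrative assumptions and vary across kcp releases):

```yaml
# Illustrative only: a child workspace with an assigned type. The type
# constrains what the workspace may contain and where it may be nested.
# API group/version and the type name are assumptions.
apiVersion: tenancy.kcp.dev/v1alpha1
kind: Workspace
metadata:
  name: team-a          # created within a parent workspace, e.g. root:my-org
spec:
  type:
    name: universal     # the workspace type
```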
Are custom admission controllers considered? How would that work across clusters if the API server and the actual service are located elsewhere?
Yes. Validating and mutating webhooks via an external URL are supported.
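Because the admission service typically runs somewhere other than kcp itself, the webhook configuration points at an external URL. A minimal sketch using the standard Kubernetes API (the URL, CA bundle, and rules are placeholders):

```yaml
# A standard ValidatingWebhookConfiguration calling out to an external URL,
# so no in-cluster Service is required. Endpoint and rules are examples only.
apiVersion: admissionregistration.k8s.io/v1
kind: ValidatingWebhookConfiguration
metadata:
  name: example-policy
webhooks:
  - name: deployments.example.com
    admissionReviewVersions: ["v1"]
    sideEffects: None
    clientConfig:
      url: https://admission.example.com/validate   # external endpoint
      caBundle: <base64-encoded-CA-bundle>          # placeholder
    rules:
      - apiGroups: ["apps"]
        apiVersions: ["v1"]
        operations: ["CREATE", "UPDATE"]
        resources: ["deployments"]
```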
They do, in the workload clusters where the pods live and run. The control plane doesn't have pods (at least not by default, today).
Let’s take something boring like FIPS compliance. Would a workspace be guaranteed to run according to the regulatory standards? I.e., if a workspace admin defines some FIPS requirements, does kcp ensure that the resulting pods actually run on an appropriate FIPS shard?
In kcp, an application should be able to describe the constraints it needs in its runtime environment. These may be technical requirements like GPUs or storage, regulatory requirements like data locality or FIPS, or some other cool thing we haven't thought of yet. kcp expects the Location and Placement APIs to handle finding a placement that fulfills those requirements.
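As a hedged sketch of how that might look with the scheduling APIs (API groups, versions, and field names here are assumptions and may differ between kcp releases), a Location advertises its properties as labels, and a Placement restricts scheduling to Locations matching the required constraints:

```yaml
# Illustrative only: a FIPS-compliant Location and a Placement that selects it.
# API groups/versions and field names are assumptions, not the definitive API.
apiVersion: scheduling.kcp.dev/v1alpha1
kind: Location
metadata:
  name: fips-us-east
  labels:
    compliance: fips     # advertised property of this location
    region: us-east
spec:
  resource:
    group: workload.kcp.dev
    resource: synctargets
    version: v1alpha1
---
apiVersion: scheduling.kcp.dev/v1alpha1
kind: Placement
metadata:
  name: fips-only
spec:
  locationSelectors:
    - matchLabels:
        compliance: fips # only FIPS locations are eligible for placement
```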
A shard in kcp is a single apiserver and etcd/database instance. Sharding is how kcp splits workspaces across many kcp instances, since a single etcd has storage limits.
You're in the right place. Clone this repo and run `make install WHAT=./cmd/kubectl-kcp`.