
We are running containers on Kubernetes on AWS. The cluster was created with the kube-up set of scripts, everything was provisioned correctly, and it has been working fine. We ran into a snag, however: our fairly large servers (c4.xlarge instances) are only allowed to run 40 pods. That is a small number for us, since we run many small pods, some of them rarely used. Is there a way to raise this limit from the salt master or the launch configuration? What is the best way to go about it?
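
(For reference, this is how we're reading that limit; it just lists the pod capacity each node's kubelet advertises to the API server:)

# Pod capacity each node currently advertises (we see 40 here)
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.capacity.pods}{"\n"}{end}'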

Thanks.

1 Answer


Fixed it! And from what I can tell, it's a pretty clean fix. I'm new to Salt, but I feel I have a decent grasp on it now; it's much simpler and less intimidating than I thought, and a really neat tool. Anyway, on to the fix:

Kubernetes provisions the master and the minions with Salt, and Salt's state files live under /srv/salt. After looking at the top.sls file, I found the kubelet folder (we need to change the flags passed to the kubelet). Poking through that folder, we find the init.sls file, which points to the kubelet.service file, which in turn uses salt://kubelet/default for its config. Perfect, that's just the /srv/salt/kubelet/default file. Yuck. Lots of mega conditionals, but it all boils down to that last line, DAEMON_ARGS=... If we want to do this the right way, modify the last line to add another variable:

{% set max_pods = "--max-pods=" + pillar.get('kubelet_max_pods', '40') %}

DAEMON_ARGS="{{daemon_args}} {{api_servers_with_port}} {{debugging_handlers}} {{hostname_override}} {{cloud_provider}} {{config}} {{manifest_url}} --allow_privileged={{pillar['allow_privileged']}} {{pillar['log_level']}} {{cluster_dns}} {{cluster_domain}} {{docker_root}} {{kubelet_root}} {{configure_cbr0}} {{cgroup_root}} {{system_container}} {{pod_cidr}} {{max_pods}}"

(Notice our new max_pods variable appended at the end.)
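
If you want to sanity-check the template edit before it hits every node, Salt can render and dry-run just the kubelet state against a single minion from the master; the minion ID below is a placeholder for one of your nodes:

# Dry-run the kubelet state on one minion without applying anything
salt 'ip-10-0-0-42.ec2.internal' state.sls kubelet test=True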

This template has access to our pillar files, Salt's "config" data. Those live right next to the state files, in /srv/pillar. After verifying that these pillar files are pushed to all hosts, we can add our key to one of them; I thought it best fit in cluster-params.sls:

kubelet_max_pods: '80'
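
Before relying on it, it's worth confirming the new key is actually visible to the minions; run this from the salt master (it assumes all of your nodes are matched by '*'):

# Push the new pillar data out and check that the minions see it
salt '*' saltutil.refresh_pillar
salt '*' pillar.get kubelet_max_pods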

I killed off my old nodes, and the replacements now have a maximum of 80 pods per host instead of the default 40.
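
To double-check a rebuilt node, you can look at the flag the kubelet was actually started with and the capacity the node now advertises (the node name below is a placeholder):

# On the node: confirm the running kubelet got the new flag
ps -ef | grep '[k]ubelet' | grep -o -- '--max-pods=[0-9]*'

# From anywhere with kubectl access: the node should now report pods: 80
kubectl describe node ip-10-0-0-42.ec2.internal | grep -i pods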
