Provisioning Fedora CoreOS on Amazon Web Services

This guide shows how to provision new Fedora CoreOS (FCOS) instances on the Amazon Web Services (AWS) cloud platform.

Prerequisites

Before provisioning an FCOS machine, you must have an Ignition configuration file containing your customizations. If you do not have one, see Producing an Ignition File.

Fedora CoreOS has a default core user that can be used to explore the OS. If you want to use it, finalize its configuration by providing e.g. an SSH key.
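
For example, a minimal Butane config that only authorizes an SSH public key for the core user could look like the following sketch (the key shown is a placeholder):

Example Butane config adding an SSH key for the core user
variant: fcos
version: 1.5.0
passwd:
  users:
    - name: core
      ssh_authorized_keys:
        - ssh-ed25519 AAAA... user@example.com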

If you do not want to use Ignition to get started, you can make use of the Afterburn support.

You also need to have access to an AWS account. The examples below use the aws command-line tool, which must be separately installed and configured beforehand.
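
Once the aws CLI is installed, you can set up credentials and verify that they work; for example (the default region you choose is up to you):

Configuring and verifying the aws CLI
aws configure               # prompts for access key, secret key, default region and output format
aws sts get-caller-identity # confirms that the configured credentials are valid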

Launching a VM instance

Minimal Example

New AWS instances can be directly created from the public FCOS images. You can find the latest AMI for each region from the download page.
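
If you prefer to script the AMI lookup instead of copying it from the download page, you can query the Fedora CoreOS stream metadata; a sketch, assuming jq is installed and the current stream metadata layout:

Looking up the latest AMI for a region from the stream metadata
STREAM='stable'    # the FCOS stream: stable, testing or next
REGION='us-east-1' # the target region
curl -s "https://builds.coreos.fedoraproject.org/streams/${STREAM}.json" \
    | jq -r ".architectures.x86_64.images.aws.regions[\"${REGION}\"].image"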

If you are only interested in exploring FCOS without further customization, you can use a registered SSH key-pair for the default core user.
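
If you have not registered an SSH key pair with EC2 yet, you can import an existing public key; for example (the key name and key path are placeholders):

Registering an SSH key pair
aws ec2 import-key-pair --region us-east-1 \
    --key-name 'my-key' \
    --public-key-material fileb://~/.ssh/id_ed25519.pub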

To test out FCOS this way, you'll need to run the aws ec2 run-instances command and provide some information to get the instance up and running. The following is an example command you can use:

Launching a new instance
NAME='instance1'
SSHKEY='my-key'     # the name of your SSH key: `aws ec2 describe-key-pairs`
IMAGE='ami-xxx'     # the AMI ID found on the download page
DISK='20'           # the size of the root disk in GiB
REGION='us-east-1'  # the target region
TYPE='m5.large'     # the instance type
SUBNET='subnet-xxx' # the subnet: `aws ec2 describe-subnets`
SECURITY_GROUPS='sg-xx' # the security group: `aws ec2 describe-security-groups`
aws ec2 run-instances                     \
    --region $REGION                      \
    --image-id $IMAGE                     \
    --instance-type $TYPE                 \
    --key-name $SSHKEY                    \
    --subnet-id $SUBNET                   \
    --security-group-ids $SECURITY_GROUPS \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=${NAME}}]" \
    --block-device-mappings "VirtualName=/dev/xvda,DeviceName=/dev/xvda,Ebs={VolumeSize=${DISK}}"
You can find out the instance's assigned IP address by running aws ec2 describe-instances.
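
For example, the following query filters on the Name tag set above and prints the public IP address (assuming the variables from the launch command are still set):

Finding the instance's public IP address
aws ec2 describe-instances --region $REGION               \
    --filters "Name=tag:Name,Values=${NAME}"              \
    --query 'Reservations[].Instances[].PublicIpAddress'  \
    --output text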

You should now be able to SSH into the instance using the associated IP address.

Example connecting
ssh core@<ip address>

Customized Example

In order to launch a customized FCOS instance, a valid Ignition configuration must be passed as its user data at creation time. You can use the same command as in the Minimal Example, but add the --user-data file://path/to/config.ign argument:

The SSH key for the core user is supplied via Afterburn in this example as well.
Launching and customizing a new instance
NAME='instance1'
SSHKEY='my-key'     # the name of your SSH key: `aws ec2 describe-key-pairs`
IMAGE='ami-xxx'     # the AMI ID found on the download page
DISK='20'           # the size of the root disk in GiB
REGION='us-east-1'  # the target region
TYPE='m5.large'     # the instance type
SUBNET='subnet-xxx' # the subnet: `aws ec2 describe-subnets`
SECURITY_GROUPS='sg-xx' # the security group: `aws ec2 describe-security-groups`
USERDATA='/path/to/config.ign' # path to your Ignition config
aws ec2 run-instances                     \
    --region $REGION                      \
    --image-id $IMAGE                     \
    --instance-type $TYPE                 \
    --key-name $SSHKEY                    \
    --subnet-id $SUBNET                   \
    --security-group-ids $SECURITY_GROUPS \
    --user-data "file://${USERDATA}"      \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=${NAME}}]" \
    --block-device-mappings "VirtualName=/dev/xvda,DeviceName=/dev/xvda,Ebs={VolumeSize=${DISK}}"
By design, cloud-init configuration and startup scripts are not supported on FCOS. Instead, it is recommended to encode any startup logic as systemd service units in the Ignition configuration.
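
For example, a Butane snippet along these lines (the unit name and command are only illustrations) runs a one-shot systemd service at boot instead of a startup script:

Encoding startup logic as a systemd unit
variant: fcos
version: 1.5.0
systemd:
  units:
    - name: hello-boot.service
      enabled: true
      contents: |
        [Unit]
        Description=Example one-shot startup task
        After=network-online.target
        Wants=network-online.target
        [Service]
        Type=oneshot
        ExecStart=/usr/bin/echo "Hello from Ignition"
        RemainAfterExit=yes
        [Install]
        WantedBy=multi-user.target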
You can find out the instance's assigned IP address by running aws ec2 describe-instances, as shown in the Minimal Example above.

You should now be able to SSH into the instance using the associated IP address.

Example connecting
ssh core@<ip address>

Remote Ignition configuration

As user data is limited to 16 KB, you may need to use an external source for your Ignition configuration. A common solution is to upload the config to an S3 bucket, as the following steps show:

Create a new S3 bucket
NAME='instance1'
aws s3 mb s3://$NAME-infra
Upload the Ignition file
NAME='instance1'
CONFIG='/path/to/config.ign' # path to your Ignition config
aws s3 cp $CONFIG s3://$NAME-infra/bootstrap.ign

You can verify that the file has been uploaded correctly:

List files in the bucket
NAME='instance1'
aws s3 ls s3://$NAME-infra/

Then create a minimal Butane config that pulls the remote Ignition file, as follows:

Retrieving a remote Ignition file from an S3 bucket
variant: fcos
version: 1.5.0
ignition:
  config:
    replace:
      source: s3://instance1-infra/bootstrap.ign
Convert the Butane config into an Ignition (JSON) file
butane -p config.bu -o config.ign
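
If you do not have butane installed locally, you can also run it from its container image; for example:

Running Butane from a container
podman run --interactive --rm quay.io/coreos/butane:release \
    --pretty --strict < config.bu > config.ign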

You need to create a role that includes the s3:GetObject permission and attach it to the instance through an instance profile. See the AWS role creation documentation for more information.

Create the role and instance profile
cat <<EOF >trustpolicyforec2.json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
EOF

# Create the role and attach the trust policy that allows EC2 to assume this role.
ROLE_NAME="my-role"
aws iam create-role --role-name ${ROLE_NAME} --assume-role-policy-document file://trustpolicyforec2.json

# Attach the AWS managed policy named AmazonS3ReadOnlyAccess to the role
aws iam attach-role-policy --policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess --role-name ${ROLE_NAME}

# Create the instance profile required by EC2 to contain the role
PROFILE="my-instance-profile"
aws iam create-instance-profile --instance-profile-name ${PROFILE}

# Finally, add the role to the instance profile
aws iam add-role-to-instance-profile --instance-profile-name ${PROFILE} --role-name ${ROLE_NAME}
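
Instead of the broad AmazonS3ReadOnlyAccess managed policy, you can attach an inline policy limited to the bootstrap object; a sketch, assuming the bucket and object names used above:

Attaching a policy scoped to the Ignition object
cat <<EOF >s3-bootstrap-policy.json
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:GetObject",
    "Resource": "arn:aws:s3:::instance1-infra/bootstrap.ign"
  }
}
EOF

# Attach the inline policy to the role created above
aws iam put-role-policy --role-name ${ROLE_NAME} \
    --policy-name s3-bootstrap-read \
    --policy-document file://s3-bootstrap-policy.json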

To launch the instance, you need to attach the instance profile you just created. From the command line, use --iam-instance-profile.

Launching and customizing a new instance with a remote Ignition file from an S3 bucket
NAME='instance1'
SSHKEY='my-key'          # the name of your SSH key: `aws ec2 describe-key-pairs`
IMAGE='ami-xxx'          # the AMI ID found on the download page
DISK='20'                # the size of the root disk in GiB
REGION='us-east-1'       # the target region
TYPE='m5.large'          # the instance type
SUBNET='subnet-xxx'      # the subnet: `aws ec2 describe-subnets`
SECURITY_GROUPS='sg-xxx' # the security group: `aws ec2 describe-security-groups`
USERDATA='/path/to/config.ign' # path to your Ignition config
PROFILE='xxx-profile'    # the name of an IAM instance profile `aws iam list-instance-profiles`
aws ec2 run-instances                     \
    --region $REGION                      \
    --image-id $IMAGE                     \
    --instance-type $TYPE                 \
    --key-name $SSHKEY                    \
    --subnet-id $SUBNET                   \
    --security-group-ids $SECURITY_GROUPS \
    --user-data "file://${USERDATA}"      \
    --iam-instance-profile Name=${PROFILE}     \
    --tag-specifications "ResourceType=instance,Tags=[{Key=Name,Value=${NAME}}]" \
    --block-device-mappings "VirtualName=/dev/xvda,DeviceName=/dev/xvda,Ebs={VolumeSize=${DISK}}"

Once the first boot has completed, make sure to delete the configuration, as it may contain sensitive data. See Configuration cleanup.

Configuration cleanup

If you need to include secrets in your Ignition configuration, you should store the full configuration in an S3 bucket and pass only a minimal configuration as user data. Once the instance has completed its first boot, clear the S3 bucket, as any process or container running on the instance could access it. See the Ignition documentation for more advice on secret management.

Deleting the Ignition configuration from the S3 bucket
NAME='instance1'
aws s3 rm s3://$NAME-infra/bootstrap.ign

Optionally, you can delete the whole bucket:

Deleting the S3 bucket
NAME='instance1'
aws s3 rb s3://$NAME-infra
The instance’s user data cannot be modified without stopping the instance.