Monday, March 20, 2017

Downloading and running docker images from S3 to an ec2 instance

I'm new to CloudFormation.

I have a script that creates a stack and an instance perfectly.

I now have a shell script that adds an application to the EC2 instance. However, this script keeps failing to download the files from S3.

I'm not sure what to do next, and a lot of similar questions online have contradictory answers. Can someone help me out?

Is using aws s3 cp the wrong approach?

"Type": "AWS::EC2::Instance",
"Metadata": {
  "UserData": {
    "Fn::Base64": {
      "Fn::Join": [
        "",
        [
          "#!/bin/bash -e\n",
          "aws s3 cp s3://path/automation.tar.gz/tmp\n",
          "aws s3 cp s3://path/oracle-instance.tar.gz/tmp\n",
          "aws s3 cp s3://path/DockerTools-4.5.0.0-a83.tar.gz/tmp\n",
          "cd /tmp\n",
          "tar -xzvf DockerTools-4.5.0.0-a83.tar.gz\n",
          "cd 4.5.0.0-a83\n",
          "chmod u+x *\n",
          "sudo ./dockerTools.sh installFull\n"
        ]
      ]
    }
  },

1 Answer

Answer 1

Using aws s3 cp in your instance's UserData cloudinit script is a good approach for downloading/installing an object from S3 onto a new EC2 instance. However, you need to ensure that your EC2 instance has the necessary S3 permissions to access the object being downloaded, which you can do with an IAM Role for EC2.
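
If the CLI and the role are both in place, a quick way to sanity-check this from the instance itself (SSH in first) is to confirm which credentials are in use and try the copy by hand. This is just a sketch; the bucket/key below is the placeholder path used in the template further down:

aws sts get-caller-identity                              # should report the instance role's identity rather than an access error
aws s3 cp s3://MyS3Bucket/path/automation.tar.gz /tmp/   # the same copy the user-data script performs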

To set up an IAM Role for EC2 in a CloudFormation template, use the AWS::IAM::InstanceProfile resource, referencing an AWS::IAM::Role resource whose AssumeRolePolicyDocument delegates access to ec2.amazonaws.com, and attach a Policy that grants least privilege (in this case, allowing s3:GetObject only on the specific S3 objects being downloaded).

Here's what a complete example would look like for your case (using the more concise YAML CloudFormation template syntax):

Description: Install S3 archives on a new EC2 instance with cloudinit
Parameters:
  ImageId:
    Description: Image ID to launch EC2 instances.
    Type: AWS::EC2::Image::Id
    # us-east-1 amzn-ami-hvm-2016.09.1.20161221-x86_64-gp2
    Default: "ami-9be6f38c"
  S3Path:
    Description: S3 bucket/object key path prefix
    Type: String
    Default: "MyS3Bucket/path"
Resources:
  EC2Role:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: 2012-10-17
        Statement:
        - Effect: Allow
          Principal: {Service: [ ec2.amazonaws.com ]}
          Action: ["sts:AssumeRole"]
      Path: /
      Policies:
      - PolicyName: EC2Policy
        PolicyDocument:
          Version: 2012-10-17
          Statement:
          - Effect: Allow
            Action: ['s3:GetObject']
            Resource: !Sub 'arn:aws:s3:::${S3Path}/*'
  RootInstanceProfile:
    Type: AWS::IAM::InstanceProfile
    Properties:
      Path: /
      Roles: [ !Ref EC2Role ]
  Instance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: !Ref ImageId
      InstanceType: m3.medium
      IamInstanceProfile: !Ref RootInstanceProfile
      UserData:
        "Fn::Base64":
          !Sub |
            #!/bin/bash -e
            S3_PATH=${S3Path}
            aws s3 cp s3://$S3_PATH/automation.tar.gz /tmp
            aws s3 cp s3://$S3_PATH/oracle-instance.tar.gz /tmp
            aws s3 cp s3://$S3_PATH/DockerTools-4.5.0.0-a83.tar.gz /tmp
            cd /tmp
            tar -xzvf DockerTools-4.5.0.0-a83.tar.gz
            cd 4.5.0.0-a83
            chmod u+x *
            ./dockerTools.sh installFull
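
If it's useful, one way to launch this template from the AWS CLI might look like the following (a sketch; the stack name and template file name are placeholders, and the S3Path value is the same placeholder prefix used above). The --capabilities CAPABILITY_IAM flag is required because the template creates IAM resources:

aws cloudformation create-stack \
  --stack-name docker-tools-demo \
  --template-body file://template.yaml \
  --capabilities CAPABILITY_IAM \
  --parameters ParameterKey=S3Path,ParameterValue=MyS3Bucket/path

# Optionally wait for creation to complete before checking the instance
aws cloudformation wait stack-create-complete --stack-name docker-tools-demo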

Note:

  • UserData belongs in the Properties section of the AWS::EC2::Instance resource, not the Metadata section as you have in your example.
  • sudo is not needed in a user-data script, because cloudinit already executes user-data as root.
  • You may also need to install the AWS CLI manually if you're using an AMI other than Amazon Linux (which comes with it pre-installed); see the sketch after these notes for one way to do that.

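For that last note, here's a minimal sketch of installing the CLI at the top of the user-data script, assuming a Debian/Ubuntu-based AMI (the package name and package manager will differ on other distributions):

#!/bin/bash -e
# Only needed on AMIs that don't ship the AWS CLI; Amazon Linux already includes it
apt-get update
apt-get install -y awscli
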
Finally, if you continue to have issues, you can look at the output logs on the instance (ssh into the instance and run cat /var/log/cfn-init.log and cat /var/log/cloud-init-output.log) to find any underlying script error output. (However, in your example above you won't have any relevant output in there to start, because until UserData is properly set in the Properties section your script isn't being executed at all.)
