AWS/Terraform Workshop #4: S3, IAM, Terraform remote state, Jenkins

Artem Nosulchik
Published in Universal Language
6 min read · Feb 10, 2017


This post is part of our AWS/Terraform Workshops series that explores our vision for Service Oriented Architecture (SOA). This installment closely examines AWS Simple Storage Service, Terraform remote state, and AWS Identity and Access Management. To learn more, check out our introductory workshop and new posts at the Smartling Engineering Blog.

Prerequisites

Preface

AWS Simple Storage Service (S3)

Amazon S3 stores data as objects within buckets. An object consists of a file and any metadata that describes that file.

Buckets are the containers for objects and there can be multiple buckets. You can control access to each bucket (i.e. who can create, delete, and list objects in that bucket), view access logs for each bucket and its objects, and choose the geographical region where Amazon S3 will store the bucket and its contents.

Permissions for buckets and objects. You can specify permissions and attach resource-based policies to specific buckets and objects to determine which parts of your AWS infrastructure can access those resources. It’s possible to control object creation, updates, deletion, and listing. By default, S3 resources are private and are available only to the resource owner (the entity that created the bucket and/or object).
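As an illustration only (a minimal sketch, not part of the workshop’s configuration; the bucket name, account ID, and IAM user below are placeholders), such a resource-based bucket policy can be attached with Terraform, the tool used throughout this workshop:

# Hypothetical bucket policy: allow one IAM user read-only access to "mybucket".
resource "aws_s3_bucket_policy" "mybucket_read_only" {
  bucket = "mybucket"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::123456789012:user/some-reader"},
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
  }
}
POLICY
}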

Object versioning. When turned on, object versioning enables you to maintain multiple versions of an object in one bucket, for example, my-image.jpg (version 111111) and my-image.jpg (version 222222). You might want to enable versioning to protect yourself from unintended overwrites and deletions, or to archive objects so that you can retrieve previous versions of them later on.

S3 access logs. In order to track requests for access to your bucket, you can enable access logging. Each access log record provides details about a single access request, such as the requester, bucket name, request time, request action, response status, and error code, if any.

To enable logging, you must specify the name of the S3 bucket that will store the access logs for the bucket you enabled logging on.
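In Terraform, a bucket with both versioning and access logging enabled might look roughly like the sketch below. It uses the older inline versioning/logging syntax of the AWS provider that matches this workshop’s vintage (recent provider versions split these into separate resources), and the bucket names are placeholders:

# Bucket that receives the access logs (placeholder name).
resource "aws_s3_bucket" "logs" {
  bucket = "mybucket-access-logs"
  acl    = "log-delivery-write"   # lets S3's log delivery group write log objects
}

# Bucket being logged, with versioning turned on to keep old object versions.
resource "aws_s3_bucket" "data" {
  bucket = "mybucket"
  acl    = "private"

  versioning {
    enabled = true
  }

  logging {
    target_bucket = "${aws_s3_bucket.logs.id}"
    target_prefix = "logs/"
  }
}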

Read more:

Terraform Remote State

By default, Terraform persists its state only to a local disk. When remote state storage is enabled, Terraform will automatically fetch the latest state from the remote server when necessary and if any updates are made, the newest state is persisted back to the remote server. In this mode, users do not need to durably store the state using version control or shared storage.

One of the backends supported by Terraform for remote state is AWS S3.

Remote state gives you more than just easier version control and safer storage. It also allows you to delegate the outputs to other teams. This allows your infrastructure to be broken down more easily into components that multiple teams can access. Put another way, remote state also allows teams to share infrastructure resources in a read-only way.
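For example, another team’s Terraform configuration can read the outputs of state stored in S3 through the terraform_remote_state data source. The sketch below uses the pre-0.12 syntax that matches the terraform remote config command used later in this workshop (newer Terraform versions use config = { ... } and reference values via .outputs), and the output name is a made-up placeholder:

# Read-only view of another project's state stored in the workshop's S3 bucket.
data "terraform_remote_state" "jenkins" {
  backend = "s3"

  config {
    bucket = "mybucket-w4-workshop"
    key    = "/terraform.tfstate"
    region = "us-east-1"
  }
}

# "jenkins_public_ip" is a hypothetical output defined in that remote configuration.
output "jenkins_ip_from_remote_state" {
  value = "${data.terraform_remote_state.jenkins.jenkins_public_ip}"
}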

Read more:

AWS Identity and Access Management (IAM)

The Identity and Access Management service helps you control access to AWS resources, including who can access them (authentication) and what resources they can use and in what ways (authorization).

Permissions and policies. You grant permissions by creating a policy, which is a document that lists the actions that a user can perform and the resources that the actions can affect. Example policy document:

{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::mybucket/*"
  }
}

Users, groups, roles, instance profiles. When you attach a policy to a user, that user gets the permissions specified in the policy document. A policy attached to a group applies to all users that are members of that group.
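As a hedged Terraform sketch (the group, user, and policy names are placeholders, not part of the workshop), attaching the policy above to a group rather than to individual users could look like this:

# Hypothetical group whose members all receive the same S3 permissions.
resource "aws_iam_group" "s3_admins" {
  name = "s3-admins"
}

resource "aws_iam_user" "alice" {
  name = "alice"
}

resource "aws_iam_group_membership" "s3_admins_members" {
  name  = "s3-admins-membership"
  group = "${aws_iam_group.s3_admins.name}"
  users = ["${aws_iam_user.alice.name}"]
}

# Inline policy attached to the group: every member gets full access to mybucket.
resource "aws_iam_group_policy" "s3_full_access" {
  name  = "s3-full-access"
  group = "${aws_iam_group.s3_admins.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": "s3:*",
    "Resource": "arn:aws:s3:::mybucket/*"
  }
}
POLICY
}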

IAM roles. An IAM role is similar to a user in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have any long-term credentials (password or access keys) associated with it. Instead, when a user assumes a role, temporary security credentials are created dynamically and provided to the user.

Instance profiles. An instance profile is a container for an IAM role that you can use to pass role information to an EC2 instance when the instance starts.
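Here is a hedged Terraform sketch of a role plus an instance profile for an EC2 instance (the resource names and the example permissions are placeholders; the workshop’s own *.tf files may structure this differently):

# Role that EC2 instances are allowed to assume.
resource "aws_iam_role" "jenkins" {
  name = "jenkins-role"

  assume_role_policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Principal": {"Service": "ec2.amazonaws.com"},
    "Action": "sts:AssumeRole"
  }
}
POLICY
}

# Permissions the role grants, e.g. read-only access to an S3 bucket.
resource "aws_iam_role_policy" "jenkins_s3_read" {
  name = "jenkins-s3-read"
  role = "${aws_iam_role.jenkins.id}"

  policy = <<POLICY
{
  "Version": "2012-10-17",
  "Statement": {
    "Effect": "Allow",
    "Action": ["s3:GetObject", "s3:ListBucket"],
    "Resource": ["arn:aws:s3:::mybucket", "arn:aws:s3:::mybucket/*"]
  }
}
POLICY
}

# Instance profile that carries the role onto an EC2 instance
# (very old AWS provider versions use "roles = [...]" instead of "role").
resource "aws_iam_instance_profile" "jenkins" {
  name = "jenkins-instance-profile"
  role = "${aws_iam_role.jenkins.name}"
}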

Read more:

Hands On

1. Go to the w4 directory in the cloned Smartling/aws-terraform-workshops git repository.

2. Create an S3 bucket for Terraform remote state:

a. cd remote_state, edit the file s3.tf.
b. Define the S3 bucket in the Terraform configuration (make sure versioning is enabled for it).
c. Apply the Terraform configuration:

$ terraform plan
$ terraform apply

Note: S3 bucket names must be globally unique across the AWS S3 service, so if someone has already taken your bucket name, just use something like mybucket-w4-workshop-yourname.

d. cd ../jenkins
e. Configure Terraform to use the newly created S3 bucket as remote state:

$ terraform remote config -backend=S3 -backend-config="bucket=mybucket-w4-workshop" -backend-config="key=/terraform.tfstate" -backend-config="region=us-east-1"

f. Check the .terraform/terraform.tfstate file – you should see a remote config section there, e.g.:

$ cat .terraform/terraform.tfstate
...
"remote": {
"type": "s3",
"config": {
"bucket": "mybucket-w4-workshop",
"key": "/terraform.tfstate",
"region": "us-east-1"
}
},
...
3. Deploy Jenkins using Terraform:

a. Define the missing resources in the Terraform configuration according to the comments in the *.tf files (a rough sketch of what an instance definition might look like appears after these steps).
b. Make sure the user-data for the Jenkins EC2 instance contains your public SSH key.
c. Run terraform plan and terraform apply.
d. Get the public IP address of the instance that hosts Jenkins and open http://<ip address>:8000 in a browser. You'll be asked for a password that can be obtained on the EC2 instance, so you'll need to access it via SSH. Note: it takes about 4 minutes for Jenkins to bootstrap before it shows the welcome/installation page.
e. Follow the instructions on the screen to install Jenkins.

4. Configure Jenkins to build, test, and deploy a sample project:

a. Go to the “Manage Jenkins → Manage Plugins” section and install the “Git plugin” (use the search bar with the exact name of the plugin).
b. Make sure you choose to restart Jenkins during the installation.
c. Create a new project:

i. Press “create new jobs” in Jenkins, specify a project name, and select Freestyle project.
ii. Configure the job with the Git repository of a sample project (choose some open source project on GitHub and configure the checkout step for it).
iii. Save changes.
iv. Go to the newly created project and press ‘Build Now’; look around and check that the build was successful (find out where the build’s log can be found).

d. Add build actions for the project:

i. Go to the project’s configuration.
ii. Add one or more build steps (choose Execute Shell and add sample commands like "ls -la", "pwd", "date", etc.).

5. Destroy AWS resources using the terraform destroy command:

$ cd jenkins
$ terraform destroy

$ cd ../remote_state
$ terraform destroy
Note: Terraform can’t delete the S3 bucket because it isn’t empty, so you may need to go to the S3 web console and delete all files (including all their versions) for the remote tfstate file before the bucket can be removed.
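Returning to step 3: the actual resources to define live in the workshop’s *.tf files, but as a rough, hedged sketch only (the AMI ID, SSH key, instance-profile name, and bootstrap commands below are placeholders, not the repository’s real user-data), the Jenkins instance definition could resemble:

# Hypothetical Jenkins host: real AMI, key material and bootstrap script
# come from the workshop's *.tf files and user-data, not from this sketch.
resource "aws_instance" "jenkins" {
  ami           = "ami-00000000"                      # placeholder AMI ID
  instance_type = "t2.micro"

  # Name of an instance profile such as the one sketched in the IAM section.
  iam_instance_profile = "jenkins-instance-profile"

  user_data = <<EOF
#!/bin/bash
# Add your public SSH key so you can log in and read the initial admin password
# (assumes an Ubuntu AMI with the default "ubuntu" user).
echo "ssh-rsa AAAA... you@example.com" >> /home/ubuntu/.ssh/authorized_keys
EOF

  tags {
    Name = "jenkins"
  }
}

output "jenkins_public_ip" {
  value = "${aws_instance.jenkins.public_ip}"
}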

Introductory story:

Series of workshops:

Did you find our workshops useful? Click the 💙 below!
