Friday, April 22, 2016

Using Signed URLs with CloudFront, CarrierWave and Rails

Setting it all up, and how to avoid some stupid pitfalls.


I created a Rails app that gives users the ability to upload images and files.
To improve the performance of the site, I serve all app assets (JavaScript, images, CSS files, etc.) from a CloudFront distribution. I then decided to also put a CloudFront distribution in front of the bucket I use for user-uploaded files.
Using the CarrierWave gem is quite straightforward, and even adding the CloudFront CDN was quite easy. Just in case you have no experience with it, here are the basics:

1. Add the carrierwave and carrierwave-aws gems - this is much better than using fog, which bloats your app with multiple unneeded gems. It also supports more of the AWS API.
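For reference, here is a minimal Gemfile sketch for this setup (version constraints omitted on purpose; mini_magick is only needed if you process images as in step 4):

```ruby
# Gemfile -- gems used in this post
gem 'carrierwave'
gem 'carrierwave-aws'
gem 'mini_magick' # only if you manipulate images with MiniMagick
```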

2. add a field to your relevant table to store the uploaded image or asset, like:
   add_column :users, :avatar, :string  

3. In your model class, mount an uploader for this field:
 mount_uploader :avatar, AvatarUploader  

4. Implement an uploader class (AvatarUploader in this example):
 class AvatarUploader < CarrierWave::Uploader::Base
   include CarrierWave::MiniMagick

   storage :aws

   # Override the directory where uploaded files will be stored.
   # This is a sensible default for uploaders that are meant to be mounted:
   def store_dir
     "uploads/#{model.class.to_s.underscore}/#{mounted_as}/#{model.id}"
   end

   # Create different versions of your uploaded files:
   # version :thumb do
   #   process resize_to_fit: [50, 50]
   # end

   # Add a white list of extensions which are allowed to be uploaded.
   # For images you might use something like this:
   def extension_white_list
     %w(jpg jpeg gif png)
   end

   # Auto-orient the image based on its EXIF data
   # (my fix for landscape images uploaded from an iPhone):
   def fix_exif_rotation
     manipulate! do |img|
       img.tap(&:auto_orient)
     end
   end

   process :fix_exif_rotation
   process resize_to_limit: [200, 200]
 end

In the above code, I include MiniMagick to manipulate the uploaded image (resizing it and fixing its orientation - the orientation fix addresses an issue with landscape-oriented images uploaded from iPhones).
If you want to use MiniMagick, you will also have to add the mini_magick gem.

5. You will then need to configure CarrierWave with your AWS credentials, bucket name and region, like this:
  CarrierWave.configure do |config|
    config.storage    = :aws
    config.aws_bucket = ENV['S3_BUCKET'] || 'default-bucket'
    config.aws_acl    = 'public-read'
    config.aws_attributes = {
      expires: 1.week.from_now.httpdate,
      cache_control: 'max-age=604800'
    }
    config.aws_credentials = {
      access_key_id:     ENV['S3_ACCESS_KEY'] || 'your-key',
      secret_access_key: ENV['S3_SECRET_KEY'] || 'your-secret',
      region:            'eu-west-1' # Required
    }
  end

BTW, you can use local file storage for development by changing the storage to :file in both the carrier_wave initializer and your uploader class.
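To make that concrete, here is a tiny standalone sketch of the idea (the `storage_for` helper is hypothetical; in a real app you would simply branch on `Rails.env` inside the initializer and the uploader):

```ruby
# Hypothetical helper: pick the CarrierWave storage backend per environment,
# so development and test keep files on local disk while production uses S3.
def storage_for(env)
  env == 'production' ? :aws : :file
end

# In the initializer you would then write something like:
#   config.storage = storage_for(Rails.env)
p storage_for('production')  # => :aws
p storage_for('development') # => :file
```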

6. To serve app assets from AWS CloudFront (for much improved performance and less load on your poor web server...), you can set an asset CDN host in your config/environments/production.rb. This is what I have:
  if ENV['APP_CDN_HOST'].present?  
   config.action_controller.asset_host = ENV['APP_CDN_HOST']  
  end  

7. To serve those app assets from CloudFront, you will need to create a distribution pointing to your domain as the origin. To do that, go to your AWS console, select the CloudFront service, and click "Create Distribution" at the top. In the following screen, click "Get Started" under the "Web" section ("RTMP" is used for streaming media). In the Origin Domain Name field, type in your domain (where your application is served from). If you want to support both HTTP and HTTPS requests, you should pick "Match Viewer".

Once the distribution is created, set the APP_CDN_HOST environment variable to the CloudFront distribution URL.

8. To place CloudFront in front of your user-uploaded files, you need to add the following to your carrier_wave configuration file:
   config.asset_host = ENV['CDN_HOST'] || 'http://YOUR-DISTRIBUTION-ID.cloudfront.net'
I of course recommend storing the distribution URL in an environment variable, but sometimes it is convenient to have a fallback as above for your development environment. You will also need to create a CloudFront distribution whose origin is the bucket you use for CarrierWave (as defined above in the carrier_wave.rb configuration file).

9. Now everything is smooth and performant, but there is one small issue (which may or may not be relevant to you): anyone with a link to a user-uploaded file can share that link, and others can then access the file. If you care about the privacy and security of those files, you may want to protect them with CloudFront signed URLs. Signed URLs are signed with a trusted private key and can encode an expiration time for the link. This means that anyone trying to access a CloudFront signed URL after the expiration time will get an access denied response. To add this extra security layer, follow these steps...
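For orientation, a signed URL with a canned policy is just the original URL plus three extra query parameters: an epoch expiry, a base64 signature, and the key pair ID. A sketch (the domain and signature values below are made up; the real signature comes from signing with your CloudFront private key, which step 12 delegates to a gem):

```ruby
# Illustration only: the rough shape of a CloudFront signed URL.
expires = Time.now.to_i + 600 # link valid for 10 minutes
base    = "https://d111111abcdef8.cloudfront.net/uploads/user/avatar/1/photo.jpg"
signed  = "#{base}?Expires=#{expires}&Signature=FAKE_BASE64_SIG&Key-Pair-Id=APKAEXAMPLEKEY"
puts signed
```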

10. Create an IAM user identity that will be used as a trusted signer. This can be the user creating the bucket, or another user. I recommend NOT using your root identity for either of those operations. What I did was create a single user for creating both the bucket and the CloudFront distribution, and give that user the trusted signer role. I automated all those actions in a Ruby script (assuming aws_boto is an admin user whose credentials I have):


 require 'aws-sdk'
 # get admin credentials from environment variables:
 abk = ENV['aws_boto_key'].to_s.strip
 abs = ENV['aws_boto_secret'].to_s.strip
 Aws.config.update(
   {
     region: 'eu-west-1',
     credentials: Aws::Credentials.new(abk, abs),
   }
 )
 # create the bucket if it does not exist
 s3 = Aws::S3::Resource.new
 sd = ENV['subdomain'].to_s.strip
 bucket_name = "#{sd}-mydomain"  
 bucket = s3.bucket(bucket_name)  
 if bucket.exists?  
  puts "bucket already exists, no need to worry..."  
 else  
  puts "no such bucket, create it now"  
  bucket.create  
  # create IAM user and get the access key and secret  
  iam = Aws::IAM::Resource.new  
  user_name = "#{sd}-mydomain"  
  user = iam.user(user_name)  
  if user.exists?  
   puts "user already exists, no need to worry..."  
  else  
   puts "no such user, create it now"  
   user.create  
   accesskeypair = user.create_access_key_pair  
   File.open("userkey.cfg", 'w') { |file| file.write("#{accesskeypair.access_key_id}") }  
   File.open("usersecret.cfg", 'w') { |file| file.write("#{accesskeypair.secret}") }  
    # build a custom policy (a heredoc keeps the JSON readable; interpolation
    # inserts the bucket name into the resource ARN)
    policy_doc = <<-POLICY
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Sid": "Stmt1444035127000",
            "Effect": "Allow",
            "Action": [
              "s3:Delete*",
              "s3:Get*",
              "s3:List*",
              "s3:Put*"
            ],
            "Resource": [
              "arn:aws:s3:::#{bucket_name}/*"
            ]
          }
        ]
      }
    POLICY
   user.create_policy({  
               policy_name: "S3_MyDomain_Access", # required  
               policy_document: policy_doc, # required  
             })  
  end  
  # create cloud front distribution for app assets (origin = app root directory)  
  cloudfront = Aws::CloudFront::Client.new()  
  app_dist = cloudfront.create_distribution(  
    {  
      distribution_config: {  
        # required  
        caller_reference: "app-dist-caller-#{Time.now.to_i}", # required - unique string for the request  
        aliases: {  
          quantity: 0  
        },  
        default_root_object: "",  
        origins: {  
          # required  
          quantity: 1, # required  
          items: [  
            {  
              id: "1", # required - unique within distribution  
              domain_name: "#{sd}.mydomain.com", # required  
              origin_path: "",  
              custom_origin_config: {# use only for custom origin, not for bucket  
                          http_port: 80, # required  
                          https_port: 443, # required  
                          origin_protocol_policy: "match-viewer", # required, accepts http-only, match-viewer  
              },  
            },  
          ],  
        },  
        default_cache_behavior: {  
          # required  
          target_origin_id: "1", # required  
          forwarded_values: {# required  
                    query_string: true, # required  
                    cookies: {# required  
                         forward: "none", # required, accepts none, whitelist, all  
                         whitelisted_names: {  
                           quantity: 0  
                         },  
                    },  
                    headers: {  
                      quantity: 0  
                    },  
          },  
          trusted_signers: {# required  
                   enabled: false, # required  
                   quantity: 0  
          },  
          viewer_protocol_policy: "allow-all", # required, accepts allow-all, https-only, redirect-to-https  
          min_ttl: 600, # required  
          allowed_methods: {  
            quantity: 2, # required  
            items: ["GET", "HEAD"], # required, accepts GET, HEAD, POST, PUT, PATCH, OPTIONS, DELETE  
            cached_methods: {  
              quantity: 2, # required  
              items: ["GET", "HEAD"], # required, accepts GET, HEAD, POST, PUT, PATCH, OPTIONS, DELETE  
            },  
          },  
          smooth_streaming: false,  
          default_ttl: 86400,  
          max_ttl: 2592000,  
        },  
        cache_behaviors: {  
          quantity: 0  
        },  
        custom_error_responses: {  
          quantity: 0  
        },  
        comment: "created automatically by SroolTheKnife", # required  
        logging: {  
          enabled: false, # required  
          include_cookies: false, # required  
          bucket: "", # required  
          prefix: "", # required  
        },  
        price_class: "PriceClass_100", # accepts PriceClass_100 (US and Europe), PriceClass_200, PriceClass_All  
        enabled: true, # required  
        viewer_certificate: {  
          cloud_front_default_certificate: true,  
          minimum_protocol_version: "TLSv1", # accepts SSLv3, TLSv1  
        },  
        restrictions: {  
          geo_restriction: {# required  
                   restriction_type: "none", # required, accepts blacklist, whitelist, none  
                   quantity: 0  
          },  
        },  
      },  
    })  
  File.open("appdist.cfg", 'w') { |file| file.write("#{app_dist.distribution.domain_name}") }  
  # create cloud front distribution for uploaded assets (origin = above bucket)  
  cloudfront = Aws::CloudFront::Client.new()  
  bucket_dist = cloudfront.create_distribution(  
    {  
      distribution_config: {  
        # required  
        caller_reference: "bucket-dist-caller-#{Time.now.to_i}", # required - unique string for the request  
        aliases: {  
          quantity: 0  
        },  
        default_root_object: "",  
        origins: {  
          # required  
          quantity: 1, # required  
          items: [  
            {  
              id: "1", # required - unique within distribution  
              domain_name: "#{bucket_name}.s3.amazonaws.com", # required  
              origin_path: "",  
              s3_origin_config: {# use only for bucket  
                        origin_access_identity: "", # required  
              },  
            },  
          ],  
        },  
        default_cache_behavior: {  
          # required  
          target_origin_id: "1", # required  
          forwarded_values: {# required  
                    query_string: true, # required  
                    cookies: {# required  
                         forward: "none", # required, accepts none, whitelist, all  
                         whitelisted_names: {  
                           quantity: 0  
                         },  
                    },  
                    headers: {  
                      quantity: 0  
                    },  
          },  
          trusted_signers: {  
            # required  
            enabled: true, # required  
            quantity: 1,  
            items: ['self'] # the same user creating the bucket is the trusted signer
          },
          viewer_protocol_policy: "allow-all", # required, accepts allow-all, https-only, redirect-to-https  
          min_ttl: 600, # required  
          allowed_methods: {  
            quantity: 2, # required  
            items: ["GET", "HEAD"], # required, accepts GET, HEAD, POST, PUT, PATCH, OPTIONS, DELETE  
            cached_methods: {  
              quantity: 2, # required  
              items: ["GET", "HEAD"], # required, accepts GET, HEAD, POST, PUT, PATCH, OPTIONS, DELETE  
            },  
          },  
          smooth_streaming: false,  
          default_ttl: 86400,  
          max_ttl: 2592000,  
        },  
        cache_behaviors: {  
          quantity: 0  
        },  
        custom_error_responses: {  
          quantity: 0  
        },  
        comment: "created automatically by SroolTheKnife", # required  
        logging: {  
          enabled: false, # required  
          include_cookies: false, # required  
          bucket: "", # required  
          prefix: "", # required  
        },  
        price_class: "PriceClass_100", # accepts PriceClass_100 (US and Europe), PriceClass_200, PriceClass_All  
        enabled: true, # required  
        viewer_certificate: {  
          cloud_front_default_certificate: true,  
          minimum_protocol_version: "TLSv1", # accepts SSLv3, TLSv1  
        },  
        restrictions: {  
          geo_restriction: {# required  
                   restriction_type: "none", # required, accepts blacklist, whitelist, none  
                   quantity: 0  
          },  
        },  
      },  
    })  
  File.open("bucketdist.cfg", 'w') { |file| file.write("#{bucket_dist.distribution.domain_name}") }  
 end  
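The script saves the distribution domain names into small .cfg files; turning a saved domain back into the URL the app expects is trivial. A sketch (the `cdn_url_from` helper name is mine, not part of the script):

```ruby
# Hypothetical helper: the script above writes the bare distribution domain
# to a .cfg file; this wraps it into the URL form used by asset_host.
def cdn_url_from(domain)
  "http://#{domain.strip}"
end

# e.g. cdn_url_from(File.read('bucketdist.cfg')) for CDN_HOST
p cdn_url_from("d111111abcdef8.cloudfront.net\n") # => "http://d111111abcdef8.cloudfront.net"
```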


11. Once you have that, you can add the following to the carrier_wave initializer to enable URL signing:
   config.aws_signer = -> (unsigned_url, options) { Aws::CF::Signer.sign_url unsigned_url, options }  

12. You should add the cloudfront-signer gem, and create initializers/cloudfront-signer.rb with the following code:
 Aws::CF::Signer.configure do |config|  
  config.key = ENV["CLOUDFRONT_KEY"] || '-----BEGIN RSA PRIVATE KEY-----  
 paste your private key here if you like...  
 -----END RSA PRIVATE KEY-----'  
  config.key_pair_id = ENV["CLOUDFRONT_KEY_PAIR_ID"] || 'paste key pair'  
  config.default_expires = 600  
 end  

That key and key pair ID are created in the AWS console, as explained in the CloudFront documentation under "Specifying Trusted Signers".

One last issue caused me a lot of confusion and a few wasted hours: if you find that some of the uploaded assets "disappear" - for example, an uploaded image shows as a broken image - the reason might be (and was in my case) the Rails cache kicking in and serving a cached fragment (including the image URL) beyond the signed URL's expiration time. You had better not cache such fragments, or else they will break once the signed URL expires.



And a message from our advertisers:


Toptal provides remote engineers and designers of high quality. I recommend them. Follow this link (full disclosure: this is my affiliate link):
https://www.toptal.com/#engage-honest-computer-engineers-today




Thursday, January 28, 2016

Using Froala Editor with Rails

I am using the Froala Text Editor (www.froala.com) as a rich text editor.
In order to use its image upload (including support for pasting images), you need to implement a server-side component. The Froala site documentation has an example for PHP, but I thought I would write a little post on how to achieve this with Rails.

So my main model is called Article, and I wanted to edit the content of the articles using Froala. To support uploading images, I created another model called ArticleAttachment and implemented it with a mounted uploader using CarrierWave:

class ArticleAttachment < ActiveRecord::Base
  belongs_to :article
  mount_uploader :attachment, ArticleImageUploader
end


In the Article model, I added this has_many line:

has_many :article_attachments, dependent: :destroy

In the article controller, I implemented two methods: one for adding an image and one for deleting an image:

  def attach
    attachment = @article.article_attachments.create(attachment: params[:attachment])
    if attachment.save
      render :json => {link: attachment.attachment.url}
    else
      render :json => {error: 'failed to save attachment'}, status: 500
    end
  end

  def detach
    src = params[:src]
    uri = URI.parse(src)
    fname = File.basename(uri.path)
    attachment = @article.article_attachments.find_by(attachment: fname)
    if attachment.present? && attachment.destroy
      render :json => {success: 'deleted'}
    else
      render :json => {error: attachment.attachment.url}, status: 500
    end
  end
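For completeness, these two actions need routes; a hypothetical sketch (action names mirror the controller above, adjust to your routing conventions):

```ruby
# config/routes.rb -- sketch
resources :articles do
  member do
    post   :attach
    delete :detach
  end
end
```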

For that to work, you need to assign a unique file name in the uploader, as explained here:
https://github.com/carrierwaveuploader/carrierwave/wiki/How-to:-Create-random-and-unique-filenames-for-all-versioned-files
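The core of that wiki recipe is generating a random name while keeping the original extension. A minimal standalone sketch (the real uploader also caches the generated name, e.g. on the model, so all versions of one upload share it):

```ruby
require 'securerandom'

# Generate a collision-resistant filename, preserving the original extension.
def unique_filename(original_filename)
  "#{SecureRandom.uuid}#{File.extname(original_filename)}"
end

p unique_filename('photo.jpg') # e.g. "3f2a90c4-...-8b1d.jpg"
```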







Saturday, January 3, 2015

Developing with the latest OpenSSL on Mac OSX Yosemite and XCode 6.1.1


I had to write some encryption/decryption code for Mac OSX, and since I intend to use it on multiple platforms, I prefer to use openssl rather than Apple's code for all key generation and enc/dec functions.
Trying to use some sample code with openssl, I discovered that Apple stopped supporting and updating OpenSSL a few years ago. If you try to compile code that includes its headers, you will get a warning saying it was deprecated as of OSX Lion.
So I searched for how to update openssl on OSX and found a few useful links; this is a summary of what I needed to do:

1. Check what openssl version is installed in your system:

$ openssl version

with the latest version as of this writing you should get this:

OpenSSL 1.0.1j 15 Oct 2014

If you have a fresh OSX Yosemite (mine is 10.10.1), you most likely will get this response:

OpenSSL 0.9.8za 5 Jun 2014

This version does not contain some of the vulnerability fixes introduced in later versions (including the fix for the well-known Heartbleed bug).
A quick search on updating openssl will bring you the answer: install it via Homebrew.
On my machine, I had to install brew from scratch (using the instructions at brew.sh) and then run the following:

$ brew update
$ brew install openssl
$ brew link --force openssl

I then had to rename old openssl and create a symbolic link to the brew installed one using:

$ sudo mv /usr/bin/openssl /usr/bin/openssl_OLD
$ sudo ln -s /usr/local/Cellar/openssl/1.0.1j_1/bin/openssl /usr/bin/openssl

Note: the 1.0.1j_1 path is correct at the time of writing; it will change as new openssl versions are introduced.

Now, this updates openssl to the latest version for command line use. We also need to make sure the include directories are updated. To check the version you have on your machine, look into /usr/include/openssl/opensslv.h - most likely it will contain the 0.9.8za version (or whatever was there before) - so we need to link that directory to the brew-installed one:

$ sudo mv /usr/include/openssl /usr/include/openssl_OLD
$ sudo ln -s /usr/local/Cellar/openssl/1.0.1j_1/include/openssl /usr/include/openssl

Now, if you try to create an xcode console app with this code:

#include <stdio.h>
#include <openssl/opensslv.h>

int main(int argc, const char * argv[]) {
    // insert code here...
    printf("Hello, World!\n");
    printf("OpenSSL version is: %s\n", OPENSSL_VERSION_TEXT);
    return 0;
}


You will be disappointed to see that it still, stubbornly, keeps referring to the old openssl...
To find out what is happening, hold the command key and click on the #include to open that header file, and you will see that Xcode is actually looking for it under the OSX SDK (on my machine it is: /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/openssl).

So to fix that, I created a symbolic link inside the SDK folder to the brew-installed version as well:

$ sudo mv /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/openssl /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/openssl_OLD
$ sudo ln -s /usr/local/Cellar/openssl/1.0.1j_1/include/openssl /Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.10.sdk/usr/include/openssl

Now this should work, which you can test by building the above console app (you may need to clean the project first to remove any cached headers).
Note: Xcode may contain multiple SDKs (on my machine there is also MacOSX10.9.sdk) - so make sure you update all the ones you use.



