Amazon S3 File Upload Api Crofton

Is there any way to set the file permissions at the time of uploading files through the Amazon S3 API? In my current solution I am able to upload the file to my bucket, but I cannot view the file through the URL mentioned in the file's properties section.

Jason Flammang
I have a need to upload binary-stream PDF files to Amazon S3. I've seen the sample code available to use the REST API with the POST operation on a Visualforce page; however, I need to upload the file via Apex without user involvement, as I'm retrieving the files from another database via their SOAP API.
I'm trying to do this using the PUT operation, but I don't think I'm doing the authentication correctly, as I'm getting a 403 Forbidden response.
Any ideas?
  • June 12, 2015
Jason Flammang
Well, I guess I should have waited a bit longer to post this question... haha.
It turns out my signing string didn't need to be UTF-8 encoded (see old code: String encodedStringToSign = EncodingUtil.urlEncode(stringToSign, 'UTF-8');).
Amazon's documentation mentions 'The string to sign (verb, headers, resource) must be UTF-8 encoded'; however, I removed this piece and just ran my createSignature class using stringToSign (not UTF-8 encoded), and it worked!!
Also, it turns out that you can decode the binary stream and use the decoded Blob as the body of the request. Otherwise, Amazon just displays the binary stream text on screen.
Here is my final code.
Hope this helps someone else!
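A minimal Apex sketch of a signed PUT along these lines (not the original post's code; the method name, the application/pdf content type, and the hmacSHA1 helper are assumptions):

public static void uploadPdf(String base64Content, String bucket, String filename,
                             String accessKey, String secret) {
    // Decode the binary stream first; sending the raw base64 text makes S3
    // serve the base64 characters instead of the PDF.
    Blob body = EncodingUtil.base64Decode(base64Content);
    String dateStr = Datetime.now().formatGmt('EEE, dd MMM yyyy HH:mm:ss') + ' GMT';
    // Sign the raw string: no urlEncode/UTF-8 step, per the discovery above.
    String stringToSign = 'PUT\n\napplication/pdf\n' + dateStr + '\n/' + bucket + '/' + filename;
    String signature = EncodingUtil.base64Encode(
        Crypto.generateMac('hmacSHA1', Blob.valueOf(stringToSign), Blob.valueOf(secret)));

    HttpRequest req = new HttpRequest();
    req.setMethod('PUT');
    req.setEndpoint('https://' + bucket + '.s3.amazonaws.com/' + filename);
    req.setHeader('Date', dateStr);
    req.setHeader('Content-Type', 'application/pdf');
    // Signature Version 2 authorization header: AWS AccessKeyId:Signature
    req.setHeader('Authorization', 'AWS ' + accessKey + ':' + signature);
    req.setBodyAsBlob(body);
    HttpResponse res = new Http().send(req);
    System.debug(res.getStatus() + ' ' + res.getStatusCode());
}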
  • June 12, 2015
Jason Flammang
Here is some updated sample code that I'm currently using.
You'll get the 403 (Forbidden) status code when there is something wrong with your signature. For me it was the dateTime value; I had to use my local time zone.
Hope this helps.
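A hedged illustration of the Date handling (not the poster's exact code): S3 verifies the signature against the Date header you send and also rejects timestamps too far from its own clock (roughly 15 minutes, the RequestTimeTooSkewed error), so formatting the timestamp in GMT is the least error-prone form:

// Prone to mismatches if the zone abbreviation doesn't parse as expected:
String localDate = Datetime.now().format('EEE, dd MMM yyyy HH:mm:ss z', 'America/Denver');
// Safer: sign and send the timestamp in GMT
String gmtDate = Datetime.now().formatGmt('EEE, dd MMM yyyy HH:mm:ss') + ' GMT';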
  • October 19, 2015
Anil Meghnathi 6
Hi Jason,
I have one issue with the 'x-amz-acl' header. Along with the new file, I also want to set the ACL for that new file. I have set this extra header for that, but it's not working. Do you have any idea?
Here is the code I am trying to use:
public static void PutFile(String fileContent, String filekey, String bucketName, String contentType, String region, String key, String secret) {
    String formattedDateString = Datetime.now().format('EEE, dd MMM yyyy HH:mm:ss z', 'America/Denver');
    String filename = filekey;
    HttpRequest req = new HttpRequest();
    Http http = new Http();
    req.setHeader('Content-Type', contentType);
    req.setMethod('PUT');
    req.setHeader('x-amz-acl', 'public-read-write');
    req.setHeader('Host', 's3' + region + '.amazonaws.com');
    req.setEndpoint('https://s3' + region + '.amazonaws.com' + '/' + bucketName + '/' + filename);
    req.setHeader('Date', formattedDateString);
    String stringToSign = 'PUT\n\n' + contentType + '\nx-amz-acl:public-read-write\n' + formattedDateString + '\n/' + bucketName + '/' + filename;
    req.setHeader('Authorization', createAuthHeader(stringToSign, key, secret));
    if (fileContent != null && fileContent != '') {
        Blob pdfBlob = EncodingUtil.base64Decode(fileContent);
        req.setBodyAsBlob(pdfBlob);
        req.setHeader('Content-Length', String.valueOf(fileContent.length()));
        // Execute web service call
        try {
            HTTPResponse res = http.send(req);
            System.debug('***RESPONSE STRING: ' + res.toString());
            System.debug('***RESPONSE STATUS: ' + res.getStatus());
            System.debug('***STATUS CODE: ' + res.getStatusCode());
        } catch (System.CalloutException e) {
            System.debug('***ERROR: ' + e.getMessage());
        }
    }
}
  • December 13, 2015
Umer Farooq 12
Hi Jason,
I have done this successfully. I want to upload multiple attachments to an S3 bucket; in one go I need to transfer 25 to 30 attachments. How can I do this? Currently I hit my Apex through a scheduler and a SOAP connection with Salesforce, get the attachments from SF, and upload them to S3. In one run it uploads only 4 or 5 attachments. How can I upload multiple attachments to S3 at a time?
Your early response will be appreciated.
Thanks,
Umer
  • August 18, 2016
Jason Flammang
I don't know your specific example, but my best guess would be for you to check out Batch Apex: https://developer.salesforce.com/docs/atlas.en-us.apexcode.meta/apexcode/apex_batch_interface.htm
I know there are limits as to the number of callouts that can be made during a batch process, so you'll want to make sure your batch size keeps you within those limits.
You'll also want to make sure you specify Database.AllowsCallouts when setting up your Batch Apex class, otherwise you'll get an error (see the sketch below).
I currently use a Batch Apex class to make callouts to a 3rd-party SOAP API that only allows one record to be retrieved at a time. Using batch processing I can update thousands of records a day. Hope this helps.
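A minimal sketch of such a Batch Apex class (an assumption, not the poster's code; the query and the uploadPdf helper are placeholders):

global class S3AttachmentUploadBatch implements Database.Batchable<SObject>, Database.AllowsCallouts {
    global Database.QueryLocator start(Database.BatchableContext bc) {
        // Select only Ids up front; fetch each heavy Body field inside execute()
        return Database.getQueryLocator([SELECT Id FROM Attachment]);
    }
    global void execute(Database.BatchableContext bc, List<Attachment> scope) {
        for (Attachment stub : scope) {
            Attachment a = [SELECT Id, Name, Body, ContentType FROM Attachment WHERE Id = :stub.Id];
            // One callout per attachment: the batch size must stay within the
            // per-transaction callout limit.
            // uploadPdf(EncodingUtil.base64Encode(a.Body), ...); // hypothetical helper
        }
    }
    global void finish(Database.BatchableContext bc) {}
}
// A small scope keeps each execute() within callout limits:
// Database.executeBatch(new S3AttachmentUploadBatch(), 10);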
  • August 18, 2016
Kyle Dunsire 3
This was massively helpful. It works a treat. Thank you!
  • October 4, 2016
Neilon Team
S3-Link is a FREE Salesforce - Amazon connector app. It's also available on AppExchange.
Attach files related to any Salesforce object on Amazon.
5 GB free storage for one year.
Multiple file upload.
No file size limit for upload.
File access control capability.
Track file downloads by users.
File explorer capability.
https://appexchange.salesforce.com/listingDetail?listingId=a0N3000000CW1OXEA1
Here is our email address. Let us know if you have any queries.
support@neiloncloud.com
Thanks.
  • April 22, 2017
Kenny Jacobson - Datawin Consulting


Anil,
I think you just have to re-arrange the stringToSign so that the 'x-amz-acl' header comes AFTER the formattedDateString.
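Since Signature Version 2 puts the canonicalized x-amz-* headers after the Date, the corrected line would look roughly like this, reusing Anil's variable names (a sketch, not Kenny's exact snippet):

// Yours (amz header signed ahead of the date):
// String stringToSign = 'PUT\n\n' + contentType + '\nx-amz-acl:public-read-write\n'
//         + formattedDateString + '\n/' + bucketName + '/' + filename;

// Corrected (verb, MD5, content type, date, canonicalized amz headers, resource):
String stringToSign = 'PUT\n\n' + contentType + '\n' + formattedDateString
        + '\nx-amz-acl:public-read-write\n/' + bucketName + '/' + filename;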
  • June 15, 2017
Kenny Jacobson - Datawin Consulting
Jason, thank you for posting this!!! It helped save me a lot of time.
And just for anyone who cares: if you want your files to be stored in the cheaper S3 storage class (Infrequent Access) and with public read-only access, you would add an x-amz-storage-class header, change the ACL to public-read, and include both headers in the string passed to the createAuthHeader method. (Refactor as desired.)
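A hedged sketch of the change being described (the exact header set is an assumption based on the standard x-amz-storage-class header; STANDARD_IA is the Infrequent Access storage class):

// Request headers for Infrequent Access storage with public read-only access:
req.setHeader('x-amz-acl', 'public-read');
req.setHeader('x-amz-storage-class', 'STANDARD_IA');
// ...and the matching canonicalized headers in the string to sign,
// in alphabetical order after the Date:
String stringToSign = 'PUT\n\n' + contentType + '\n' + formattedDateString
        + '\nx-amz-acl:public-read'
        + '\nx-amz-storage-class:STANDARD_IA'
        + '\n/' + bucketName + '/' + filename;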
  • June 15, 2017
Manohar kumar

Hi Jason,
I have to do a GET request. I copied your code and made some changes. I need to get all the files from a bucket.
I am getting Status=Bad Request, StatusCode=400. Please help me with this. I am posting my code below.

Thanks,
Manohar

  • August 18, 2017

Active 4 months ago

I'm implementing a direct file upload from the client machine to Amazon S3 via the REST API using only JavaScript, without any server-side code. All works fine, but one thing is worrying me...

When I send a request to the Amazon S3 REST API, I need to sign the request and put a signature into the Authorization header. To create a signature, I must use my secret key. But everything happens on the client side, so the secret key can be easily revealed from the page source (even if I obfuscate/encrypt my sources).

How can I handle this? And is it a problem at all? Maybe I can limit specific private key usage only to REST API calls from a specific CORS origin and to only PUT and POST methods, or maybe link the key to only S3 and a specific bucket? Maybe there are other authentication methods?

A 'serverless' solution is ideal, but I can consider involving some server-side processing, excluding uploading the file to my server and then sending it to S3.

Olegas

9 Answers

I think what you want is Browser-Based Uploads Using POST.

Basically, you do need server-side code, but all it does is generate signed policies. Once the client-side code has the signed policy, it can upload using POST directly to S3 without the data going through your server.

Here's the official doc links:

Diagram: http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingHTTPPOST.html

Example code: http://docs.aws.amazon.com/AmazonS3/latest/dev/HTTPPOSTExamples.html

The signed policy would go in your HTML in a form like this:
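A hedged reconstruction modeled on the AWS browser-based POST documentation (the bucket name, key prefix, and redirect URL are placeholders):

<form action="https://examplebucket.s3.amazonaws.com/" method="post" enctype="multipart/form-data">
  <input type="hidden" name="key" value="uploads/${filename}">
  <input type="hidden" name="AWSAccessKeyId" value="YOUR_ACCESS_KEY_ID">
  <input type="hidden" name="acl" value="private">
  <input type="hidden" name="success_action_redirect" value="https://example.com/success">
  <input type="hidden" name="policy" value="BASE64_ENCODED_POLICY">
  <input type="hidden" name="signature" value="POLICY_SIGNATURE">
  <!-- The file field must come last; S3 ignores form fields after it -->
  <input type="file" name="file">
  <input type="submit" value="Upload to S3">
</form>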

Notice the FORM action is sending the file directly to S3 - not via your server.

Every time one of your users wants to upload a file, you would create the POLICY and SIGNATURE on your server. You return the page to the user's browser. The user can then upload a file directly to S3 without going through your server.

When you sign the policy, you typically make the policy expire after a few minutes. This forces your users to talk to your server before uploading. This lets you monitor and limit uploads if you desire.

The only data going to or from your server is the signed URLs. Your secret keys stay secret on the server.

secretmike

You can do this with AWS S3 Cognito; try this link here:

Also try this code.

Just change the Region, IdentityPoolId and your bucket name.

For more details, please check GitHub.

Joomler

You're saying you want a 'serverless' solution. But that means you have no ability to put any of 'your' code in the loop. (NOTE: Once you give your code to a client, it's 'their' code now.) Locking down CORS is not going to help: People can easily write a non-web-based tool (or a web-based proxy) that adds the correct CORS header to abuse your system.

The big problem is that you can't differentiate between the different users. You can't allow one user to list/access his files, but prevent others from doing so. If you detect abuse, there is nothing you can do about it except change the key. (Which the attacker can presumably just get again.)

Your best bet is to create an 'IAM user' with a key for your JavaScript client. Only give it write access to just one bucket. (But ideally, do not enable the ListBucket operation; that would make it more attractive to attackers.)

If you had a server (even a simple micro instance at $20/month), you could sign the keys on your server while monitoring/preventing abuse in realtime. Without a server, the best you can do is periodically monitor for abuse after-the-fact. Here's what I would do:

1) periodically rotate the keys for that IAM user: Every night, generate a new key for that IAM user, and replace the oldest key. Since there are 2 keys, each key will be valid for 2 days.

2) enable S3 logging, and download the logs every hour. Set alerts on 'too many uploads' and 'too many downloads'. You will want to check both total file size and number of files uploaded. And you will want to monitor both the global totals, and also the per-IP address totals (with a lower threshold).

These checks can be done 'serverless' because you can run them on your desktop. (i.e., S3 does all the work; these processes are just there to alert you to abuse of your S3 bucket so you don't get a giant AWS bill at the end of the month.)

BraveNewCurrency

Adding more info to the accepted answer, you can refer to my blog to see a running version of the code, using AWS Signature version 4.

Will summarize here:

As soon as the user selects a file to be uploaded, do the following:

  1. Make a call to the web server to initiate a service to generate the required params.

  2. In this service, make a call to the AWS IAM service to get a temporary cred.

  3. Once you have the cred, create a bucket policy (base64-encoded string). Then sign the bucket policy with the temporary secret access key to generate the final signature.

  4. Send the necessary parameters back to the UI.

  5. Once this is received, create an HTML form object, set the required params, and POST it.

For detailed info, please refer to https://wordpress1763.wordpress.com/2016/10/03/browser-based-upload-aws-signature-version-4/

RajeevJ

To create a signature, I must use my secret key. But all things happens on a client side, so, the secret key can be easily revealed from page source (even if I obfuscate/encrypt my sources).

This is where you have misunderstood. The very reason digital signatures are used is so that you can verify something as correct without revealing your secret key. In this case the digital signature is used to prevent the user from modifying the policy you set for the form post.

Digital signatures such as the one here are used for security all around the web. If someone (NSA?) really were able to break them, they would have much bigger targets than your S3 bucket :)

OlliM


I have given simple code to upload files from a JavaScript browser client to AWS S3 and list all the files in the S3 bucket.

Steps:

  1. To learn how to create an IdentityPoolId, see http://docs.aws.amazon.com/cognito/latest/developerguide/identity-pools.html

  2. Go to S3's console page, open the CORS configuration from the bucket properties, and write the following XML code into it (see the sketch after these steps).

  3. Create an HTML file containing the following code, change the credentials, open the file in a browser, and enjoy.
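A typical CORS configuration of the kind step 2 refers to (a hedged example; the wildcard AllowedOrigin is an assumption you may want to narrow):

<CORSConfiguration>
  <CORSRule>
    <!-- Restrict AllowedOrigin to your site's origin in production -->
    <AllowedOrigin>*</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedMethod>POST</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
  </CORSRule>
</CORSConfiguration>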

Nilesh Pawar

If you don't have any server-side code, your security depends on the security of access to your JavaScript code on the client side (i.e., everybody who has the code could upload something).

So I would recommend simply creating a special S3 bucket which is publicly writable (but not readable), so you don't need any signed components on the client side.

The bucket name (a GUID, e.g.) will be your only defense against malicious uploads (but a potential attacker could not use your bucket to transfer data, because it is write-only to him; see the sketch below).
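A bucket policy of that shape might look roughly like this (a sketch; the GUID-style bucket name is a placeholder, and granting anonymous s3:PutObject is exactly the trade-off described above):

{
  "Version": "2012-10-17",
  "Statement": [{
    "Sid": "AllowAnonymousWriteOnly",
    "Effect": "Allow",
    "Principal": "*",
    "Action": "s3:PutObject",
    "Resource": "arn:aws:s3:::6f2b42e1-example-guid-bucket/*"
  }]
}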

Ruediger Jungbeck

Here is how you generate a policy document using Node and Serverless.

The configuration object used is stored in SSM Parameter Store and looks like this:

Samir Patel

If you are willing to use a 3rd-party service, auth0.com supports this integration. The auth0 service exchanges a 3rd-party SSO service authentication for an AWS temporary session token with limited permissions.

See: https://github.com/auth0-samples/auth0-s3-sample/ and the auth0 documentation.

Jason

