
play-s3's Introduction

This repository is no longer maintained

Just create a fork; if you want, I can list it here.


Amazon Simple Storage Service (S3) module for Play 2.6

A minimal S3 API wrapper. Allows you to list, get, add and remove items from a bucket.

It has some extra features that help with direct uploads and authenticated URL generation.

Note: this version uses the new AWS version 4 signer, which requires you to set the region correctly.

Important changes

10.0.0

  • Upgraded to Play 2.7

9.0.0

  • Upgraded to Play 2.6
  • Upgraded to Scala 2.12

8.0.0

  • Upgraded to Play 2.5

7.0.0

  • Organisation has been changed to 'net.kaliber'
  • Resolver (maven repository) has been moved
  • fromConfig and fromConfiguration methods have been renamed to fromApplication. Added fromConfiguration methods that can be used without access to an application (useful for application loaders introduced in Play 2.4)

Installation

  val appDependencies = Seq(
    "net.kaliber" %% "play-s3" % "9.0.0"

    // use the following version for play 2.5
    //"net.kaliber" %% "play-s3" % "8.0.0"
    // use the following version for play 2.4
    //"net.kaliber" %% "play-s3" % "7.0.2"
    // use the following version for play 2.3
    //"nl.rhinofly" %% "play-s3" % "6.0.0"
    // use the following version for play 2.2
    //"nl.rhinofly" %% "play-s3" % "4.0.0"
    // use the following version for play 2.1
    //"nl.rhinofly" %% "play-s3" % "3.1.1"
  )

  // use the following for play 2.5 and 2.4
  resolvers += "Kaliber Internal Repository" at "https://jars.kaliber.io/artifactory/libs-release-local"

  // use the following for play 2.3 and below
  resolvers += "Rhinofly Internal Repository" at "http://maven-repository.rhinofly.net:8081/artifactory/libs-release-local"

Configuration

application.conf should contain the following information:

aws.accessKeyId=AmazonAccessKeyId
aws.secretKey=AmazonSecretKey

If you are hosting in a specific region, that can be specified. If you are using another S3 implementation (like riakCS), you can customize the domain name and HTTPS usage with these values:

#default is us-east-1
s3.region="eu-west-1"
#default is determined by the region, see: http://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
s3.host="your.domain.name"
#default is true
s3.https=false
#default is true
#required in case dots are present in the bucket name and https is enabled
s3.pathStyleAccess=false

Usage

Getting an S3 instance:

val s3 = S3.fromApplication(playApplication)
// or
val s3 = S3.fromConfiguration(wsClient, playConfiguration)
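In Play 2.6 the WSClient and Configuration are typically obtained through dependency injection. A minimal sketch, assuming the fromConfiguration signature shown above (the component class name is illustrative):

import javax.inject.Inject
import play.api.Configuration
import play.api.libs.ws.WSClient
import fly.play.s3.S3

// a minimal sketch: wiring play-s3 through Play's dependency injection
class S3Components @Inject() (ws: WSClient, configuration: Configuration) {
  val s3 = S3.fromConfiguration(ws, configuration)
}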

Getting a bucket:

val bucket = s3.getBucket("bucketName")

Adding a file:

// note that acl and headers are optional; the default value for acl is PUBLIC_READ.

val result = bucket + BucketFile(fileName, mimeType, byteArray, acl, headers)
//or
val result = bucket add BucketFile(fileName, mimeType, byteArray, acl, headers)

result
  .map { unit =>
    Logger.info("Saved the file")
  }
  .recover {
    case S3Exception(status, code, message, originalXml) => Logger.info("Error: " + message)
  }

Removing a file:

val result = bucket - fileName
//or
val result = bucket remove fileName

Retrieving a file:

val result = bucket get "fileName"

result.map {
    case BucketFile(name, contentType, content, acl, headers) => //...
}
//or, blocking (requires imports of scala.concurrent.Await and scala.concurrent.duration._)
val file = Await.result(result, 10 seconds)
val BucketFile(name, contentType, content, acl, headers) = file

Listing the contents of a bucket:

val result = bucket.list

result.map { items =>
  items.map {
    case BucketItem(name, isVirtual) => //...
  }
}

//or using a prefix
val result = bucket list "prefix"

Retrieving a private URL:

val url = bucket.url("fileName", expirationFromNowInSeconds)
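For example, a link that stays valid for one hour (a sketch; the file name is illustrative):

// a sketch: generate a signed link that expires after one hour
val oneHourInSeconds = 60 * 60
val temporaryUrl = bucket.url("fileName", oneHourInSeconds)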

Renaming a file:

val result = bucket rename("oldFileName", "newFileName", ACL)

Multipart file upload:

// Retrieve an upload ticket
val result:Future[BucketFileUploadTicket] =
  bucket initiateMultipartUpload BucketFile(fileName, mimeType)

// Upload the parts and save the tickets
val result:Future[BucketFilePartUploadTicket] =
  bucket uploadPart (uploadTicket, BucketFilePart(partNumber, content))

// Complete the upload using both the upload ticket and the part upload tickets
val result:Future[Unit] =
  bucket completeMultipartUpload (uploadTicket, partUploadTickets)
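Putting the three steps together, a complete upload could look like the following sketch. It assumes that bucket, fileName, mimeType and a fileContent: Array[Byte] are in scope; the 5 MB part size reflects S3's minimum for every part except the last:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import fly.play.s3.{ BucketFile, BucketFilePart }

val partSize = 5 * 1024 * 1024 // S3 minimum for every part except the last

// a sketch: chunk the content, upload every part, then complete the upload
val upload: Future[Unit] =
  for {
    uploadTicket <- bucket initiateMultipartUpload BucketFile(fileName, mimeType)
    parts = fileContent.grouped(partSize).zipWithIndex.toSeq
    partTickets <- Future.traverse(parts) { case (content, index) =>
      // S3 part numbers start at 1
      bucket uploadPart (uploadTicket, BucketFilePart(index + 1, content))
    }
    _ <- bucket completeMultipartUpload (uploadTicket, partTickets)
  } yield ()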

Updating the ACL of a file:

val result:Future[Unit] = bucket updateACL ("fileName", ACL)

Retrieving the ACL of a file:

val result = testBucket.getAcl("private2README.txt")

for {
 aclList <- result
 grant <- aclList
} yield
  grant match {
    case Grant(FULL_CONTROL, CanonicalUser(id, displayName)) => //...
    case Grant(READ, Group(uri)) => //...
  }

Browser upload helpers:

val `1 minute from now` = System.currentTimeMillis + (1 * 60 * 1000)

// import condition builders
import fly.play.s3.upload.Condition._

// create a policy and set the conditions
val policy =
  testBucket.uploadPolicy(expiration = new Date(`1 minute from now`))
    .withConditions(
      key startsWith "test/",
      acl eq PUBLIC_READ,
      successActionRedirect eq expectedRedirectUrl,
      header(CONTENT_TYPE) startsWith "text/",
      meta("tag").any)
    .toPolicy

// import Form helper
import fly.play.s3.upload.Form

val formFieldsFromPolicy = Form(policy).fields

// convert the form fields from the policy to an actual form
formFieldsFromPolicy
  .map {
    case FormElement(name, value, true) =>
      s"""<input type="text" name="$name" value="$value" />"""
    case FormElement(name, value, false) =>
      s"""<input type="hidden" name="$name" value="$value" />"""
  }

// make sure you add the file form field last
val allFormFields =
  formFieldsFromPolicy.mkString("\n") +
  """<input type="text" name="file" />"""

More examples can be found in the S3Spec in the test folder. In order to run the tests you need an application.conf file in test/conf that looks like this:

aws.accessKeyId="..."
aws.secretKey="..."

s3.region="eu-west-1"

testBucketName=s3playlibrary.rhinofly.net

play-s3's People

Contributors

dhruvbhatia, edgecaseberg, eecolor, goldv, hayena, jorkzijlstra, klaasman, lucianenache, mbknor, mellster2012, th0br0, waxzce


play-s3's Issues

Incompatibility with Play 2.5

Play 2.5 changed the way it makes REST calls, and that has broken play-s3 compatibility. Here is the error I get:

java.lang.ClassNotFoundException: com.ning.http.client.Response
at java.net.URLClassLoader$1.run(URLClassLoader.java:372)
at java.net.URLClassLoader$1.run(URLClassLoader.java:361)
at java.security.AccessController.doPrivileged(Native Method)
at java.net.URLClassLoader.findClass(URLClassLoader.java:360)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at fly.play.s3.Bucket.fly$play$s3$Bucket$$extractHeaders(Bucket.scala:226)
at fly.play.s3.Bucket$$anonfun$get$1.apply(Bucket.scala:67)
at fly.play.s3.Bucket$$anonfun$get$1.apply(Bucket.scala:66)
at fly.play.aws.AwsResponse$.apply(AwsResponse.scala:8)

Please let me know if you intend to continue supporting this library as Play evolves. They make a lot of changes even with minor versions, so I know it can be a lot to maintain.

Thanks.

Multiple S3 buckets

My content is split across two different S3 buckets: one holds images and the other holds videos. How do I access these two buckets using the plugin?
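A bucket handle is lightweight, so one approach (a minimal sketch; the bucket names are placeholders) is simply to create two of them from the same S3 instance:

// a minimal sketch: one S3 instance can hand out any number of bucket handles
val s3 = S3.fromApplication(playApplication)
val imageBucket = s3.getBucket("my-image-bucket") // placeholder name
val videoBucket = s3.getBucket("my-video-bucket") // placeholder name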

Split Signer into separate project

I'd like to request that this library be split into two parts: one for AWS request signing and one for S3, where the S3 part depends on the signing part. It is essentially just a split right down the package line that already exists.

The reason I'd like to see this is because we'd like to build a Play DynamoDB impl that doesn't require the use of the AWS SDK since it is blocking. We have already proven this works great when using the signer portion of this library - we'd just like to see it split into two parts and published to sonatype.

Thoughts?

api-s3_2.1.0 not found

I added "nl.rhinofly" %% "api-s3" % "3.1.0" to my Play 2.1 (sbt 0.12.3, Scala 2.10.1) dependencies and resolvers += "Rhinofly Internal Repository" at "http://maven-repository.rhinofly.net:8081/artifactory/libs-release-local" to my resolvers. I get the following error when I run the application:

[warn] ==== Typesafe Releases Repository: tried
[warn]   http://repo.typesafe.com/typesafe/releases/nl/rhinofly/api-s3_2.10/3.1.0/api-s3_2.10-3.1.0.pom
[warn] ==== Typesafe Snapshots Repository: tried
[warn]   http://repo.typesafe.com/typesafe/snapshots/nl/rhinofly/api-s3_2.10/3.1.0/api-s3_2.10-3.1.0.pom
[warn] ==== Rhinofly Internal Repository: tried
[warn]   http://maven-repository.rhinofly.net:8081/artifactory/libs-release-local/nl/rhinofly/api-s3_2.10/3.1.0/api-s3_2.10-3.1.0.pom
[warn] ==== public: tried
[warn]   http://repo1.maven.org/maven2/nl/rhinofly/api-s3_2.10/3.1.0/api-s3_2.10-3.1.0.pom
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::                       
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: nl.rhinofly#api-s3_2.10;3.1.0: not found
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
sbt.ResolveException: unresolved dependency: nl.rhinofly#api-s3_2.10;3.1.0: not found
    at sbt.IvyActions$.sbt$IvyActions$$resolve(IvyActions.scala:214)
    at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:122)
    at sbt.IvyActions$$anonfun$update$1.apply(IvyActions.scala:121)
    at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:114)
    at sbt.IvySbt$Module$$anonfun$withModule$1.apply(Ivy.scala:114)
    at sbt.IvySbt$$anonfun$withIvy$1.apply(Ivy.scala:102)
    at sbt.IvySbt.liftedTree1$1(Ivy.scala:49)
    at sbt.IvySbt.action$1(Ivy.scala:49)
    at sbt.IvySbt$$anon$3.call(Ivy.scala:58)
    at xsbt.boot.Locks$GlobalLock.withChannel$1(Locks.scala:75)
    at xsbt.boot.Locks$GlobalLock.withChannelRetries$1(Locks.scala:58)
    at xsbt.boot.Locks$GlobalLock$$anonfun$withFileLock$1.apply(Locks.scala:79)
    at xsbt.boot.Using$.withResource(Using.scala:11)
    at xsbt.boot.Using$.apply(Using.scala:10)
    at xsbt.boot.Locks$GlobalLock.liftedTree1$1(Locks.scala:51)
    at xsbt.boot.Locks$GlobalLock.withLock(Locks.scala:51)
    at xsbt.boot.Locks$.apply0(Locks.scala:30)
    at xsbt.boot.Locks$.apply(Locks.scala:27)
    at sbt.IvySbt.withDefaultLogger(Ivy.scala:58)
    at sbt.IvySbt.withIvy(Ivy.scala:99)
    at sbt.IvySbt.withIvy(Ivy.scala:95)
    at sbt.IvySbt$Module.withModule(Ivy.scala:114)
    at sbt.IvyActions$.update(IvyActions.scala:121)
    at sbt.Classpaths$$anonfun$work$1$1.apply(Defaults.scala:951)
    at sbt.Classpaths$$anonfun$work$1$1.apply(Defaults.scala:949)
    at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$54.apply(Defaults.scala:972)
    at sbt.Classpaths$$anonfun$doWork$1$1$$anonfun$54.apply(Defaults.scala:970)
    at sbt.Tracked$$anonfun$lastOutput$1.apply(Tracked.scala:35)
    at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:974)
    at sbt.Classpaths$$anonfun$doWork$1$1.apply(Defaults.scala:969)
    at sbt.Tracked$$anonfun$inputChanged$1.apply(Tracked.scala:45)
    at sbt.Classpaths$.cachedUpdate(Defaults.scala:977)
    at sbt.Classpaths$$anonfun$45.apply(Defaults.scala:856)
    at sbt.Classpaths$$anonfun$45.apply(Defaults.scala:853)
    at sbt.Scoped$$anonfun$hf10$1.apply(Structure.scala:586)
    at sbt.Scoped$$anonfun$hf10$1.apply(Structure.scala:586)
    at scala.Function1$$anonfun$compose$1.apply(Function1.scala:49)
    at sbt.Scoped$Reduced$$anonfun$combine$1$$anonfun$apply$12.apply(Structure.scala:311)
    at sbt.Scoped$Reduced$$anonfun$combine$1$$anonfun$apply$12.apply(Structure.scala:311)
    at sbt.$tilde$greater$$anonfun$$u2219$1.apply(TypeFunctions.scala:41)
    at sbt.std.Transform$$anon$5.work(System.scala:71)
    at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:232)
    at sbt.Execute$$anonfun$submit$1$$anonfun$apply$1.apply(Execute.scala:232)
    at sbt.ErrorHandling$.wideConvert(ErrorHandling.scala:18)
    at sbt.Execute.work(Execute.scala:238)
    at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:232)
    at sbt.Execute$$anonfun$submit$1.apply(Execute.scala:232)
    at sbt.ConcurrentRestrictions$$anon$4$$anonfun$1.apply(ConcurrentRestrictions.scala:160)
    at sbt.CompletionService$$anon$2.call(CompletionService.scala:30)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:334)
    at java.util.concurrent.FutureTask.run(FutureTask.java:166)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
    at java.lang.Thread.run(Thread.java:722)

Possible Bug On Malformed URL

When I do the following:

bucket.get("some.bad.url")

I get the following immediately:

java.lang.IllegalArgumentException: Illegal character in path at index 174
at java.net.URI.create(URI.java:859)
at fly.play.aws.auth.Request.uri$lzycompute(Request.scala:13)
...

The exception is fine of course, but shouldn't it be wrapped in a Failure within the Future, to be handled with a recover, rather than thrown synchronously? Or am I just doing something dumb?

Thanks.
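Until that changes in the library, a client-side workaround is to lift the synchronous throw into a failed Future yourself. A sketch (safeGet is a hypothetical helper name):

import scala.concurrent.Future
import scala.util.{ Failure, Success, Try }
import fly.play.s3.{ Bucket, BucketFile }

// a sketch: catch the synchronous exception and turn it into a failed
// Future, so it can be handled with recover like any other error
def safeGet(bucket: Bucket, name: String): Future[BucketFile] =
  Try(bucket.get(name)) match {
    case Success(future)    => future
    case Failure(exception) => Future.failed(exception)
  }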

Multipart Fileupload

We are trying to implement an asynchronous action with Play 2.2.2 and Scala, and want to enable our application to receive images in the form of a multipart/form-data request. We are then trying to forward the received file parts directly to the S3 server, but the Usage part of the documentation doesn't help us with this. Does anyone have an example controller or an idea of how to handle the direct upload?

Paths with special characters and spaces return SignatureDoesNotMatch

I'm not sure whether or not it is intended for paths to be encoded prior to passing to fly.s3 functions, but I'm having trouble getting this to work:

bucket add BucketFile("sample/test file.txt", "text/plain", content, Some(PRIVATE)) // will not work without URL encoding

bucket add BucketFile("sample/test&;-file.txt", "text/plain", content, Some(PRIVATE)) // works without URL encoding

bucket add BucketFile("sample/test & file.txt", "text/plain", content, Some(PRIVATE)) // will not work either way, URL encoding the path first will return SignatureDoesNotMatch

bucket add BucketFile("sample/test+&+file.txt", "text/plain", content, Some(PRIVATE)) // this will work (changing only the spaces), but is a hack

Null pointer exception while reading file from s3 bucket in play 2.4.4

After upgrading Play from 2.2.x to 2.4.4, we tried to read a file from an S3 bucket, but we get a NullPointerException. It works fine on the old version. Following are the details:
build.sbt:
"net.kaliber" %% "play-s3" % "7.0.2"

Application.conf:
s3.region="us-west-2"
aws.accessKeyId="xxxxxxxxxxx"
aws.secretKey="xxxxxxxxx"
aws.bucket="xxxxxxxx"

Code for accessing the s3 file:
import play.api.Play.{configuration, current}
private lazy val LOGOBUCKET: String = configuration.getString("aws.bucket").get
val logobucket = S3(LOGOBUCKET)
val result = logobucket get "xxx/xxx/xxx/logo.png"

Stacktrace for the same:
play.api.http.HttpErrorHandlerExceptions$$anon$1: Execution exception[[NullPointerException: originalUrl]]
at play.api.http.HttpErrorHandlerExceptions$.throwableToUsefulException(HttpErrorHandler.scala:265) ~[play_2.11-2.4.4.jar:2.4.4]
at play.api.http.DefaultHttpErrorHandler.onServerError(HttpErrorHandler.scala:191) ~[play_2.11-2.4.4.jar:2.4.4]
at play.api.GlobalSettings$class.onError(GlobalSettings.scala:179) [play_2.11-2.4.4.jar:2.4.4]
at play.api.mvc.WithFilters.onError(Filters.scala:93) [play_2.11-2.4.4.jar:2.4.4]
at play.api.http.GlobalSettingsHttpErrorHandler.onServerError(HttpErrorHandler.scala:94) [play_2.11-2.4.4.jar:2.4.4]
Caused by: java.lang.NullPointerException: originalUrl
at com.ning.http.client.uri.UriParser.parse(UriParser.java:323) ~[async-http-client-1.9.21.jar:na]
at com.ning.http.client.uri.Uri.create(Uri.java:30) ~[async-http-client-1.9.21.jar:na]
at com.ning.http.client.providers.netty.handler.Protocol.exitAfterHandlingRedirect(Protocol.java:124) ~[async-http-client-1.9.21.jar:na]
at com.ning.http.client.providers.netty.handler.HttpProtocol.handleHttpResponse(HttpProtocol.java:423) ~[async-http-client-1.9.21.jar:na]
at com.ning.http.client.providers.netty.handler.HttpProtocol.handle(HttpProtocol.java:470) ~[async-http-client-1.9.21.jar:na]

Can't stream S3 video/Image while downloading

I can't stream an S3 video/image while it is downloading. I am able to read the private bucket using my aws.accessKeyId and aws.secretKey. I have tried the approaches below.

Code snippet:
In HTML:

<!DOCTYPE html>
<html>
    <body>
        <img width="240" height="320" src="@routes.Application.loadS3File("user.png")" />
        <video controls="controls" preload controls width="320" height="240" name="Video Name">
            <source src="@routes.Application.loadS3File("user.mov")" type='video/mp4; codecs="avc1.42E01E, mp4a.40.2"' />
        </video>
    </body>
</html>

In Controller:

**method-1:**
def loadS3File(filename: String) = Action { implicit request =>
  try {
    val bucket = S3("user")
    val result = bucket.get(filename)
    val file = Await.result(result, 60 seconds)
    val BucketFile(name, contentType, content, acl, headers) = file
    Ok.chunked(Enumerator(content)).as(contentType)
  } catch {
    case e: Exception =>
      BadRequest("Error: " + e.getMessage)
  }
}

**method-2:**

def loadS3File(filename: String) = Action.async {
  val bucket = S3("user")
  bucket.get(filename).map {
    case BucketFile(name, contentType, content, acl, headers) =>
      val byteString: ByteString = ByteString.fromArray(content)
      Ok.chunked(Enumerator(byteString)).as(contentType)
  }
}

It waits to read the whole file content from S3, but then doesn't return the video to the HTML. If I try to show an S3 image, it reads the whole image content from S3 and returns it to the HTML successfully.
Either way, I can't stream the video/image while the download is in progress.
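One workaround, grounded in the bucket.url helper documented above (the bucket name comes from the code above; the one-hour expiry is illustrative): instead of proxying the bytes through Play, redirect the browser to a temporary signed URL and let it stream directly from S3:

// a sketch: let the browser stream directly from S3 via a signed URL
def loadS3File(filename: String) = Action { implicit request =>
  val bucket = S3("user")
  Redirect(bucket.url(filename, 60 * 60)) // signed URL, valid for one hour
}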

Can't specify timeout

Hi

It seems the only way to set a different timeout is by changing the global Play WS config (ws.timeout). It should be possible to configure a specific timeout for uploads, since uploads usually take longer than other requests.
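With the fromConfiguration factory introduced in 7.0.0, one possible workaround (a sketch, assuming Play 2.5-style standalone AhcWSClient construction; configuration stands for your play.api.Configuration instance) is to build a dedicated WSClient with a longer request timeout and hand only that one to play-s3:

import akka.actor.ActorSystem
import akka.stream.ActorMaterializer
import play.api.libs.ws.WSClientConfig
import play.api.libs.ws.ahc.{ AhcWSClient, AhcWSClientConfig }
import scala.concurrent.duration._
import fly.play.s3.S3

implicit val system = ActorSystem()
implicit val materializer = ActorMaterializer()

// a dedicated WS client whose longer request timeout applies only to uploads
val uploadWsClient = AhcWSClient(
  AhcWSClientConfig(wsClientConfig = WSClientConfig(requestTimeout = 10.minutes)))

val s3ForUploads = S3.fromConfiguration(uploadWsClient, configuration)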

Can only upload text files.

Using Play 2.4.1 and play-s3 version 7.0.2

I am able to upload text files just fine, but when I try anything more complex (images, .ods, .jar files) it fails with this error: "The provided 'x-amz-content-sha256' header does not match what was computed."

If I swap "avatar.jpeg" for "text.txt" and "image/jpeg" for "plain/text" in the following code it works. The text file gets uploaded to S3. But if I try and upload any thing else it fails saying the header doesn't match. I've tried png, jpg, jpeg, ods, and jar files.

val file_path = "/path/to/file/avatar.jpeg"
val bucket = S3("path_to_bucket")
val byte_array = Files.readAllBytes(Paths.get(file_path))

val result = bucket + BucketFile("avatar.jpeg", "image/jpeg", byte_array)
result
  .map { unit =>
    Logger.info("Saved the file")
  }
  .recover {
    case S3Exception(status, code, message, originalXml) =>
      Logger.info("Error: " + message)
      Logger.info("originalXml: " + originalXml)
  }

Is there any support for creating signed urls?

Hi, I'm building a play app that delivers some paid content, stored in S3, to end users. Does this module have any support for the signed url feature? If not is there something close that I could maybe fork and build upon?

Buckets with '.' in their name don't work with https

I'm using a bucket that has multiple '.' characters in its name and trying to access it with HTTPS. This is failing, and I believe it is because the certificate that Amazon uses is a single-level wildcard certificate, e.g. for "*.s3-eu-west-1.amazonaws.com". The URL for the bucket is being constructed by prepending the bucket name to the base domain name (e.g. "foo.bar.s3-eu-west-1.amazonaws.com"), but this fails to match the certificate because the '*' only allows for a single level of subdomain.

There is an alternative form of URL which will work, namely "https://s3-eu-west-1.amazonaws.com/foo.bar/{path}". However, the httpUrl function in the S3 class is private, so I am unable to override it to change the way the URL is constructed.

It would be great if we could configure the choice of which url format to use, or failing that, at least make the httpUrl function public or protected so that we can override it.
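For reference, the s3.pathStyleAccess setting documented in the Configuration section above enables exactly this alternative URL form:

# use path-style URLs (https://s3-eu-west-1.amazonaws.com/foo.bar/{path});
# required when the bucket name contains dots and https is enabled
s3.pathStyleAccess=true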

Listing more than 1000 items in a bucket

Is there a way to configure this to list more than 1000 items?
Example:

val bucket = S3("my bucket")
bucket list "foo/" // only returns a max of 1000 items

I know this is an Amazon limitation of returning 1000 items per request. I looked at the source in S3.scala and didn't see any special logic handling the case of having more than 1000 items. You have to keep track of this by putting a marker parameter in the request to tell Amazon where you left off.
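The library doesn't expose this today, but the pagination loop would look something like the following sketch. listWithMarker is a hypothetical function standing in for a list call that forwards S3's marker query parameter and reports whether the listing was truncated:

import scala.concurrent.Future
import scala.concurrent.ExecutionContext.Implicits.global
import fly.play.s3.BucketItem

// hypothetical result type: one page of a listing plus truncation info
case class Page(items: Seq[BucketItem], truncated: Boolean)

// a sketch: keep requesting pages, passing the last key as the marker,
// until S3 reports the listing is complete (listWithMarker is hypothetical)
def listAll(prefix: String,
            listWithMarker: (String, Option[String]) => Future[Page],
            acc: Seq[BucketItem] = Seq.empty,
            marker: Option[String] = None): Future[Seq[BucketItem]] =
  listWithMarker(prefix, marker) flatMap { page =>
    val all = acc ++ page.items
    if (page.truncated) listAll(prefix, listWithMarker, all, Some(page.items.last.name))
    else Future.successful(all)
  }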

Library does not appear to find implicit AwsCredentials anymore

Since upgrading to 4.0.0 code like S3(bucket) now complains about not being able to find implicit credentials. This was not an issue with 3.3.0.

I was forced to create my own implicit credentials:

implicit val awsCredentials = SimpleAwsCredentials(PlayConfiguration("aws.accessKeyId"), PlayConfiguration("aws.secretKey"))

Multipart upload needs to allow sub-5MB part for last part

play-s3 has a check in S3.uploadPart that makes multipart uploads impossible unless they are exactly divisible by S3.MINIMAL_PART_SIZE:

require(bucketFilePart.content.size >= S3.MINIMAL_PART_SIZE, "A part must be at least 5MB in size")

If you look at the example at http://docs.aws.amazon.com/AmazonS3/latest/dev/llJavaUploadFile.html
you'll see that the last part can be less than 5MB.

Simply removing that last check will fix the problem.

In the meantime, you can work around the bug with the code in this gist: https://gist.github.com/cdmckay/7530967.

// Replace...
bucket.uploadPart(uploadTicket, bucketFilePart)
// ...with:
uploadPart(bucket, uploadTicket, bucketFilePart)

Getting signing failures from one account but not another

I have an application that is using rhinofly to upload files to S3. I have been developing and testing it against one AWS account with no problems. However, when I change the AWS credentials and other details to refer to a different account I get "SignatureDoesNotMatch" on all requests to S3.

I have verified that the credentials are correct by using them to access the same S3 bucket via a different route and the region is correctly set. Any thoughts as to why I might be seeing this?
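Even though the region is reported as correctly set, it is worth double-checking, given the note at the top of this README: the AWS version 4 signer includes the region in the signature, so s3.region must match the region the second account's bucket actually lives in, which may differ from the first account's (the value below is only an example):

# must match the region of the bucket in the other account
s3.region="us-west-2"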

Multipart upload support

As far as I understand from browsing the code, there is no support for uploading files in chunks. This is especially important for apps that upload big files. The current API forces you to put the whole file in memory in a byte array, which can easily leave an app out of memory with a single upload. Here are a few questions...

  1. Are there any plans to support the S3 API's such as http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/s3/AmazonS3.html#initiateMultipartUpload(com.amazonaws.services.s3.model.InitiateMultipartUploadRequest)
    where a file may be chunked and uploaded in parts?
  2. Is there a way to pipe a file upload through the server to S3, so the stream is uploaded as it is received, using Play's Iteratees API?
  3. Are there any plans to support something similar in the near future, and would you consider a pull request that adds support for chunked uploads?

And finally...

Thanks for such a great and simple lib! It has proven very useful to our projects that use S3 :)

Cannot load SBT dependencies

I added the configuration from your readme to my build.sbt and sbt could not find the library. Is anything outdated?

I am using Play 2.2

  val appDependencies = Seq(
    "nl.rhinofly" %% "play-s3" % "3.3.3"
    // use the following version for play 2.1
    //"nl.rhinofly" %% "play-s3" % "3.1.1"
  )

  val main = PlayProject(appName, appVersion, appDependencies, mainLang = SCALA).settings(
    resolvers += "Rhinofly Internal Repository" at "http://maven-repository.rhinofly.net:8081/artifactory/libs-release-local"
  )

MalformedXML?

Hi, all of a sudden I'm getting an S3Exception when trying to upload to a bucket:

Problem accessing S3. Status 400, code MalformedXML, message 'MalformedXML'

<Error><Code>MalformedXML</Code><Message>The XML you provided was not well-formed or did not validate against our published schema</Message><RequestId>49A76E41B1D348D1</RequestId><HostId>k0BCqe9rd+cIL2+6v3tgzCtDp6iVy1+rlsLy/Bq3tjdq+dhcsLQn08rLBK3c7+PNq9PRdfYj1GM=</HostId></Error>

It happens with both 7.0.0 and 7.0.2. Any hint? Thanks for your time and attention!

Updating Header Information

Is it possible to update the header information for a key without removing the key and replacing it with the same content but new headers?

Thanks.

Delimiter in s3 request is showing up URL encoded

T 2014/04/04 08:26:11.320512 192.168.1.15:63084 -> 176.32.98.226:80 [AP]
GET /?delimiter=%2F HTTP/1.1.
Host: MyTestBucket.s3.amazonaws.com.
Date: Fri, 04 Apr 2014 12:26:11 UTC.
Authorization: AWS FOO:BAR=.
Connection: keep-alive.
Accept: /.
User-Agent: NING/1.0.

This is causing the request to fail. Can you not URL-encode the request params, or at least make it configurable? (Right now it's just a string in the case class.)

Can't add the play-s3 dependency

I am using Play 2.5.3 and I am trying to add the "play-s3" plugin in my build.sbt, but I get the error below.

Error: Error while importing SBT project:
...
[info] Resolving com.typesafe.play#play-docs_2.11;2.5.3 ...
[info] Resolving com.typesafe.play#play-doc_2.11;1.2.2 ...
[info] Resolving org.pegdown#pegdown;1.4.0 ...
[info] Resolving org.parboiled#parboiled-java;1.1.5 ...
[info] Resolving org.parboiled#parboiled-core;1.1.5 ...
[info] Resolving org.ow2.asm#asm;4.1 ...
[info] Resolving org.ow2.asm#asm-tree;4.1 ...
[info] Resolving org.ow2.asm#asm-analysis;4.1 ...
[info] Resolving org.ow2.asm#asm-util;4.1 ...
[info] Resolving jline#jline;2.12.1 ...
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: net.kaliber#play-s3_2.11;8.0.0: not found
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[trace] Stack trace suppressed: run 'last *:update' for the full output.
[trace] Stack trace suppressed: run 'last *:ssExtractDependencies' for the full output.
[error] (*:update) sbt.ResolveException: unresolved dependency: net.kaliber#play-s3_2.11;8.0.0: not found
[error] (*:ssExtractDependencies) sbt.ResolveException: unresolved dependency: net.kaliber#play-s3_2.11;8.0.0: not found

play-s3-4.0.0 not found

Hi,

I'm getting the following error with SBT not finding the correct version of the play-s3 plugin. I'm using Play 2.2.5 and Scala 2.10.4. I have the following in my plugins.sbt file:

// S3 Module
resolvers += "Rhinofly Internal Repository" at "http://maven-repository.rhinofly.net:8081/artifactory/libs-release-local"

// S3
addSbtPlugin("nl.rhinofly" %% "play-s3" % "4.0.0")

and am getting the following error:

[warn] ==== Rhinofly Internal Repository: tried
[warn]   http://maven-repository.rhinofly.net:8081/artifactory/libs-release-local/nl/rhinofly/play-s3_2.10_0.13/4.0.0/play-s3-4.0.0.pom
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  ::          UNRESOLVED DEPENDENCIES         ::
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]  :: nl.rhinofly#play-s3;4.0.0: not found
[warn]  ::::::::::::::::::::::::::::::::::::::::::::::
[warn]
[warn]  Note: Some unresolved dependencies have extra attributes.  Check that these dependencies exist with the requested attributes.
[warn]      nl.rhinofly:play-s3:4.0.0 (sbtVersion=0.13, scalaVersion=2.10)
[warn]
sbt.ResolveException: unresolved dependency: nl.rhinofly#play-s3;4.0.0: not found

I saw this similar issue for 2.1.0 and was wondering if this might be related.

Any help would be greatly appreciated. Thanks!
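For what it's worth, the extra attributes in the warning (sbtVersion=0.13, scalaVersion=2.10) suggest the dependency was declared with addSbtPlugin, which is meant for sbt plugins. Declaring it as a regular library dependency in build.sbt should resolve against the normal Scala 2.10 artifact; a sketch:

// in build.sbt, as a library dependency rather than an sbt plugin
libraryDependencies += "nl.rhinofly" %% "play-s3" % "4.0.0"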

Does S3Signer support IAM roles?

Hey,

For my local test machine I'm using an IAM user with an accessKey and secretKey, but on EC2 we're using IAM roles.
Does the S3 object, or the S3Signer, support IAM roles?

Bad symbolic reference

I am getting the following error:

bad symbolic reference to fly.play.aws encountered in class file 'S3.class'.
Cannot access term aws in package fly.play. The current classpath may be
missing a definition for fly.play.aws, or S3.class may have been compiled against a version that's
incompatible with the one found on the current classpath.

I am just trying to get a bucket:

val bucket = S3("bucketA")

Any idea why this might be happening?

My conf file has my access key and my private key.

Not working with Play 2.4.0

play-s3 5.0.2 may not be compatible with Play 2.4.0

I got the following warning messages while building a Play 2.4.0 project:

[warn] There may be incompatibilities among your library dependencies.
[warn] Here are some of the libraries that were evicted:
[warn] * com.typesafe.play:play-ws_2.11:(2.3.0, 2.3.4) -> 2.4.0 (caller: XXXXplay:XXXXplay_2.11:1.0-SNAPSHOT, nl.rhinofly:play-aws-utils_2.11:4.1.0, nl.rhinofly:play-s3_2.11:5.0.2)

And got this runtime error:

[ERROR] [06/04/2015 17:58:49.841] [application-multimedia-proc-23] [ActorSystem(application)] Uncaught fatal error from thread [application-multimedia-proc-23] shutting down ActorSystem [application]
java.lang.NoSuchMethodError: play.api.libs.ws.WS$.url(Ljava/lang/String;Lplay/api/Application;)Lplay/api/libs/ws/WSRequestHolder;
at fly.play.aws.Aws$AwsRequestBuilder.url(Aws.scala:23)
at fly.play.s3.S3.put(S3.scala:113)
at fly.play.s3.Bucket.add(Bucket.scala:102)
at controllers.NewnalWebImage$$anonfun$uploadFileToS3$1.apply(NewnalWebImage.scala:189)
at controllers.NewnalWebImage$$anonfun$uploadFileToS3$1.apply(NewnalWebImage.scala:187)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:251)
at scala.concurrent.Future$$anonfun$flatMap$1.apply(Future.scala:249)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)
at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Please support Play 2.4.x :D

Plugin throws exceptions even though the signature is a Future

  def get(itemName: String): Future[BucketFile] =
    s3.get(name, Some(itemName), None, None) map S3Response { (status, response) =>
      val headers = extractHeaders(response)

      BucketFile(itemName,
        headers("Content-Type"),
        response.ahcResponse.getResponseBodyAsBytes,
        None,
        Some(headers))
    }

The code above is unsafe because of the mapping through S3Response, shown below:

object S3Response {
  def apply[T](converter: (Int, Response) => T)(response: Response): T =
    AwsResponse(converter)(response) match {
      case Left(awsError) => throw S3Exception(awsError)
      case Right(t) => t
    }
}

If the S3 response is an error, it will throw an exception instead of returning a failed Future, leading to harder error recovery in the clients.

This could be fixed by using flatMap instead of map:

    s3.get(name, Some(itemName), None, None) flatMap S3Response { (status, response) =>
      // ... build the BucketFile as before ...
    }

object S3Response {
  def apply[T](converter: (Int, Response) => T)(response: Response): Future[T] =
    AwsResponse(converter)(response) match {
      case Left(awsError) => Future.failed(S3Exception(awsError))
      case Right(t) => Future.successful(t)
    }
}

(I haven't tried or compiled this code but you understand my point right?)

Don't map results through `unitResponse` to allow logging of errors

Every response is mapped through unitResponse. Whilst it simplifies the return types, it prevents any form of error management.

There are several solutions:

  • Let the response through
  • Allow the users to pass a continuation as an optional argument (default to unitResponse)
  • Others?

Add Controller utilities to make multipart uploading easier

In #2 a suggestion for a multipart upload fix was added that would make it easier for end users to do multipart uploads.

This involves handling the upload asynchronously while streaming the content using a custom body parser. An actor is probably needed to perform this type of action.

Any suggestions or pull requests are welcome

Scala 2.11 and Play 2.3.0

I am relatively new to Scala and SBT versioning, and I am using Scala 2.11. Naturally, when I used the method described in the documentation to import play-s3:

"nl.rhinofly" %% "play-s3" % "4.0.0"

I got an error indicating there was no play-s3_2.11 jar, which makes sense after I looked at the repo myself.

But then when I hardcoded the proper version like this

"nl.rhinofly" % "play-s3_2.10" % "5.0.0"

I get this error:

[error] Conflicting cross-version suffixes in: com.typesafe.play:play-functional, com.typesafe.akka:akka-actor, com.typesafe.play:play-json, com.typesafe.play:play, com.typesafe.play:play-iteratees, com.typesafe.play:twirl-api, com.typesafe.akka:akka-slf4j, org.scala-stm:scala-stm, com.typesafe.play:play-datacommons

Any tips on how to get around this are appreciated.

Thanks.

support to list buckets

I did not see any method which would list all buckets for an account. Am I missing something?

XAmzContentSHA256Mismatch (2.4.x branch)

I noticed there was a play_2.4.x branch. I'm not sure what the status of that branch is, but when I try to use the 6.0.0 build for Play 2.4 with some existing code based on the 5.0.2 release and Play 2.3, I get:

[info] A fully executed contract should
[error]   ! execute events in sequence
[error]    fly.play.s3.S3Exception: Problem accessing S3. Status 400, code XAmzContentSHA256Mismatch, message 'XAmzContentSHA256Mismatch'
[error]    Original xml:
[error]    Some(<Error><Code>XAmzContentSHA256Mismatch</Code><Message>The provided 'x-amz-content-sha256' header does not match what was computed.</Message><ClientComputedContentSHA256>598a590771d788600f9b7c6573ff1d32e0c18c54fc6c1350d3fff8596e51cf5d</ClientComputedContentSHA256><S3ComputedContentSHA256>876b5050d2b8cf5f7158144f1d8cdd20f1106a56be9735d7581843b4aab71f82</S3ComputedContentSHA256><RequestId>72793E0590C92FAF</RequestId><HostId>v0H7O7Kwp9fSNRhAJIt3SrCuV5td5CABMzpEpqYTXRxr4IoCf+70Yct0ASmAsN/tYlhvHXr7NKI=</HostId></Error>) (S3Exception.scala:15)
[error] fly.play.s3.S3Exception$.apply(S3Exception.scala:15)
[error] fly.play.s3.S3Response$.apply(S3Response.scala:9)
[error] fly.play.s3.Bucket$$anonfun$unitResponse$2.apply(Bucket.scala:260)
[error] fly.play.s3.Bucket$$anonfun$unitResponse$2.apply(Bucket.scala:260)
[error] akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
[error] akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
[error] akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
[error] akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
[error] akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
[error] akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
[error] akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:397)

Any ideas? Sorry if this is a premature bug report.

rename fails with a NoSuchKey error if more than one "directory" in the destination path does not exist

Hi again

I have found another problem using your rename.

I expect the cause is actually a weakness in the underlying AWS put copy using their X-Amz-copy-source header.

If I try to rename an object from say: d1/o1 to d2/o1, then the operation succeeds, even if there is nothing else with a d2 path.

However, if I try to rename an object from d1/o1 to d3/d4/o1, where no d3 path already exists, then it fails, reporting NoSuchKey.

A spec for this would be something like:

"be able to rename a file to another path" inApp {

  val result = testBucket rename("privateREADME.txt", "d1/private2README.txt", AUTHENTICATED_READ)
  noException(result)
}

"be able to rename a file to another path with two non-existent directory parts" inApp {

  val result = testBucket rename("privateREADME.txt", "d2/d3/private2README.txt", AUTHENTICATED_READ)
  noException(result)
}

Where the first spec will pass but I expect the second spec will currently fail.

In my present application, I can work around this by always moving from d1/o1 to d3/o1 and then from d3/o1 to d3/d4/o1. I can do that because I know d1/o1 will never otherwise exist. Were it not for that, I would be unsure just what to do, not seeing any other obvious way to create the path without a terminal object. Perhaps this is because I do not properly understand AWS' path & object abstraction.

All the best

Is it possible to create a bucket that doesn't already exist?

Hello,

I just started using the library, and I need the ability to check whether a bucket exists and create it if it does not. I'm happy to submit a pull request, but first I wanted to be sure that

  1. This isn't already possible, and somehow, I'm just missing it
  2. You agree that this is a good feature for this library

I'm thinking of something along the lines of

S3.bucketExists("myBucket") // true or false
S3.createBucket("myBucket") // returns the newly created Bucket

What do you think?

Build on scala 2.11.1

Awesome utility! Thanks for making it available. Could you build on 2.11.1 also?
I think all you need to do is add this to your build.sbt:

crossScalaVersions := Seq("2.10.4", "2.11.1")

Pre-Signed URLs for non-GET Actions

Originally posted in #20.

I need the ability to generate pre-signed URLs for HTTP actions other than GET. I'm writing a RESTful Play server, but I don't want the clients to send the Play server the files to upload to S3 - I want the clients to be able to upload them directly to S3 using PUT requests, without giving the clients real credentials for AWS.

Ideally I'd have something along the lines of an endpoint in the play server that just says the equivalent of:

You want to upload hello-world.jpg?
PUT it here: https://...bucket-url.../hello-world.jpg?aws-required-auth-info-etc

Basically offloading the effort of doing the upload back to the client (a mobile app which has to do the upload at least once anyway).

Unfortunately it seems all the URL signing methods are hard-coded to only do "GET".
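A hypothetical extension of the existing url helper might look like the line below; note that the method parameter does not exist in the current play-s3 API, it only sketches the requested feature:

// hypothetical: sign for PUT instead of GET; the `method` parameter does
// NOT exist in the current play-s3 API, it sketches the requested feature
val uploadUrl = bucket.url("hello-world.jpg", 3600, method = "PUT")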

rename effectively removes server side encryption!

Hi

I see that rename is implemented as copy then delete.

But the copy made does not mirror any server-side encryption of the source, and there is no way to specify encryption.

It would be very desirable to be able to specify the encryption of the destination.

It appears that this could be done by making use of server-side-encryption-specific request headers such as x-amz-server-side-encryption in fly.play.s3.S3.putCopy.

http://docs.aws.amazon.com/AmazonS3/latest/API/RESTObjectCOPY.html

But it also needs to be made clear that the present, innocuously named rename can be this dangerous.
