This component connects to the Amazon Simple Storage Service (Amazon S3) object storage service on the AVA platform. It is based on version 2.1132.0 of the AWS S3 SDK. The triggers and actions it provides are described below.
Name | Mandatory | Description | Values
---|---|---|---
ATTACHMENT_MAX_SIZE | false | For elastic.io attachments configuration. Maximum possible attachment size in bytes. Set to 104857600 by default; due to platform limitations it CAN'T be bigger than that. | Up to 104857600 bytes (100 MB)
ACCESS_KEY_ID | false | Required for integration tests |
ACCESS_KEY_SECRET | false | Required for integration tests |
REGION | false | Required for integration tests |
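The `ATTACHMENT_MAX_SIZE` rule (configurable, but capped at the platform maximum) can be sketched as a small check; the helper name below is illustrative, not part of the component:

```javascript
// Default platform limit for elastic.io attachments, in bytes (100 MB).
const DEFAULT_MAX = 104857600;

// Returns true if a payload of `sizeBytes` fits the configured limit.
// `envValue` stands in for process.env.ATTACHMENT_MAX_SIZE; the limit can
// never exceed the platform maximum, even if the variable is set higher.
function withinAttachmentLimit(sizeBytes, envValue) {
  const limit = Math.min(Number(envValue) || DEFAULT_MAX, DEFAULT_MAX);
  return sizeBytes <= limit;
}
```

For example, a 5 MB payload fits the default limit, while anything over 104857600 bytes is rejected regardless of the variable's value.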
Please Note: Since platform version 20.51 the `LOG_LEVEL` environment variable is deprecated. You can now control the logging level per each step of the flow.
The technical notes page gives technical details about the AWS S3 component, such as the changelog and completeness matrix.
Access keys consist of three parts: an access key ID, a secret access key, and a region. Like a user name and password, you must use the access key ID and secret access key together to authenticate your requests. According to the AWS documentation, the region is required in the credentials for buckets created in Regions launched after March 20, 2019.

- An access key ID (for example, `AKIAIOSFODNN7EXAMPLE`).
- A secret access key (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
- A region (for example, `ca-central-1`).
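With the AWS SDK for JavaScript v2 (which this component is based on), the three credential parts map onto the S3 client configuration roughly as shown below; the key values are the AWS documentation placeholders, not real credentials:

```javascript
// const AWS = require('aws-sdk'); // aws-sdk v2, as used by the component

// All three parts of the credentials go into the client configuration.
const s3Config = {
  accessKeyId: 'AKIAIOSFODNN7EXAMPLE',                          // access key ID
  secretAccessKey: 'wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY',  // secret access key
  region: 'ca-central-1',                                       // required for newer Regions
};

// const s3 = new AWS.S3(s3Config);
```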
Triggers to get all new and updated S3 objects since the last polling. `Emit Individually` emits each object in a separate message; `Fetch All` emits all objects as an array in one object under the key `results`. The supported date range is `-271821-04-20T00:00:00.000Z` to `+275760-09-13T00:00:00.000Z`.
Please Note: If Emit Behaviour is set to `Emit Individually`, each object is emitted in a separate message with the schema below. If it is set to `Fetch All`, all objects are emitted as an array in one object under the key `results`, where each item follows the schema below. `attachmentUrl` appears only if `Enable File Attachments` is selected.
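The two emit behaviours can be sketched as a pure transformation; the helper and the behaviour names below are illustrative, not the component's actual code:

```javascript
// Turns a list of S3 objects into outgoing messages according to Emit Behaviour.
function toMessages(objects, behaviour) {
  if (behaviour === 'fetchAll') {
    // One message carrying every object under the `results` key.
    return [{ body: { results: objects } }];
  }
  // 'emitIndividually': one message per object.
  return objects.map((obj) => ({ body: obj }));
}
```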
```json
{
  "type": "object",
  "properties": {
    "attachmentUrl": {
      "type": "string",
      "required": true
    },
    "Key": {
      "type": "string",
      "required": true
    },
    "LastModified": {
      "type": "string",
      "required": true
    },
    "ETag": {
      "type": "string",
      "required": true
    },
    "Size": {
      "type": "number",
      "required": true
    },
    "StorageClass": {
      "type": "string",
      "required": true
    },
    "Owner": {
      "type": "object",
      "properties": {
        "ID": {
          "type": "string",
          "required": true
        }
      }
    }
  }
}
```
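For illustration, a message matching this schema could look like the following; all values are made up:

```json
{
  "attachmentUrl": "https://steward.example/files/abc123",
  "Key": "inbound/report.csv",
  "LastModified": "2021-04-20T10:15:30.000Z",
  "ETag": "\"9b2cf535f27731c974343645a3985328\"",
  "Size": 2048,
  "StorageClass": "STANDARD",
  "Owner": {
    "ID": "52b113e7a2f25102679df27bb0ae12b3f85be6f290b936e2093e9abd0752f543"
  }
}
```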
Given a filename and a URL to an attachment stored in the platform, transfers the contents of the attachment to AWS S3. The component returns a summary of the written file. AWS S3 always overwrites the contents of the file if it already exists.
You can use `/` characters in the filename to create folders; the file is written with the `standard` storage class.

Read file from S3 bucket. This action reads a file from an S3 bucket by the provided name. The result is stored in the output body (for JSON or XML files) or in the output attachment (for other types). The file type is resolved by its extension. The name of the attachment will be the same as the filename.
If `bucketName` is not provided in metadata, the `Default Bucket Name and folder` configuration is used; when `Default Bucket Name and folder` is provided, the field is optional.

Input metadata:

```json
{
  "type": "object",
  "properties": {
    "filename": {
      "type": "string",
      "required": true
    },
    "bucketName": {
      "type": "string",
      "required": false
    }
  }
}
```
Output metadata:

```json
{
  "type": "object",
  "properties": {
    "filename": {
      "type": "string",
      "required": true
    },
    "attachmentUrl": {
      "type": "string",
      "required": true
    },
    "size": {
      "type": "number",
      "required": true
    }
  }
}
```
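The "body for JSON or XML, attachment otherwise" rule for Read File resolves on the filename extension; a minimal sketch of that decision, with an illustrative helper name:

```javascript
// Decides where the Read File result goes, based on the filename extension:
// 'body' for JSON/XML files, 'attachment' for everything else.
function outputTarget(filename) {
  const ext = filename.split('.').pop().toLowerCase();
  return (ext === 'json' || ext === 'xml') ? 'body' : 'attachment';
}
```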
Emit individually all filenames from an S3 bucket. This action gets the names of all files stored in the S3 bucket with the provided name. The filenames are emitted individually.
Please Note: If you provide a bucket and folder (for example, `eio-dev/inbound`), not only the names of all files are returned, but the name of the root folder (`inbound/`) as well.
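Because the root folder key (ending in `/`) is returned alongside the filenames, downstream steps that only want files may need to filter it out; a sketch, with an illustrative helper name:

```javascript
// Keeps only real file keys, dropping S3 "folder" placeholder keys like "inbound/".
function filesOnly(keys) {
  return keys.filter((key) => !key.endsWith('/'));
}
```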
If `bucketName` is not provided in metadata, the `Default Bucket Name and folder` configuration is used; when `Default Bucket Name and folder` is provided, the field is optional.

Input metadata:

```json
{
  "type": "object",
  "properties": {
    "bucketName": {
      "type": "string",
      "required": false
    }
  }
}
```
Output metadata:

```json
{
  "type": "object",
  "properties": {
    "ETag": {
      "type": "string",
      "required": true
    },
    "Location": {
      "type": "string",
      "required": false
    },
    "Key": {
      "type": "string",
      "required": true
    },
    "Bucket": {
      "type": "string",
      "required": true
    }
  }
}
```
A maximum of 1000 filenames can be retrieved.
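The 1000-name cap matches the page size of S3's ListObjectsV2 call; listing more would require following continuation tokens across pages. A sketch of that loop against a stand-in pager (the real call would be `s3.listObjectsV2` from aws-sdk v2, which this helper only imitates):

```javascript
// Collects all keys by following continuation tokens.
// `listPage(token)` stands in for an S3 ListObjectsV2 call and must return
// an object shaped like { Contents: [{ Key }], NextContinuationToken? }.
function listAllKeys(listPage) {
  const keys = [];
  let token;
  do {
    const page = listPage(token);
    for (const obj of page.Contents) keys.push(obj.Key);
    token = page.NextContinuationToken;
  } while (token);
  return keys;
}
```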
Delete file from S3 bucket. This action removes a file with the provided name from the selected bucket. The action emits the single filename of the removed file.
If `bucketName` is not provided, the `Default Bucket Name and folder` configuration is used; when `Default Bucket Name and folder` is provided, the field is optional.

Input metadata:

```json
{
  "type": "object",
  "properties": {
    "filename": {
      "type": "string",
      "required": true
    },
    "bucketName": {
      "type": "string",
      "required": false
    }
  }
}
```
Output metadata:

```json
{
  "type": "object",
  "properties": {
    "filename": {
      "type": "string",
      "required": true
    }
  }
}
```
Rename file in S3 bucket and folder. This action renames a file with the provided name in the selected bucket and folder. The action emits the properties of the renamed file.
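S3 itself has no native rename operation; a rename like this is typically implemented as a copy to the new key followed by a delete of the old one (in aws-sdk v2 terms, `copyObject` then `deleteObject`). A sketch against a stand-in client, not the component's actual code:

```javascript
// Renames a key by copying it to the new name, then deleting the old one.
// `client` stands in for an S3 client; its copy/remove methods are hypothetical
// placeholders for copyObject/deleteObject.
function renameObject(client, bucket, oldKey, newKey) {
  client.copy({ Bucket: bucket, CopySource: `${bucket}/${oldKey}`, Key: newKey });
  client.remove({ Bucket: bucket, Key: oldKey });
  return { Key: newKey };
}
```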
Input metadata:

```json
{
  "type": "object",
  "properties": {
    "bucketName": {
      "title": "Bucket Name and folder",
      "type": "string",
      "required": true
    },
    "folder": {
      "type": "string",
      "required": false
    },
    "oldFileName": {
      "type": "string",
      "required": true
    },
    "newFileName": {
      "type": "string",
      "required": true
    }
  }
}
```
Output metadata:

```json
{
  "type": "object",
  "properties": {
    "Key": {
      "type": "string",
      "required": true
    },
    "LastModified": {
      "type": "string",
      "required": true
    },
    "ETag": {
      "type": "string",
      "required": true
    },
    "Size": {
      "type": "number",
      "required": true
    },
    "StorageClass": {
      "type": "string",
      "required": true
    },
    "Owner": {
      "type": "object",
      "required": true,
      "properties": {
        "ID": {
          "type": "string",
          "required": true
        }
      }
    }
  }
}
```
This action is deprecated. Please use Write File to S3 From a Provided Attachment instead. Puts a stream as a file into an S3 bucket. This action creates or overwrites a file on S3 with the content passed as an input attachment. The name of the file will be the same as the attachment name. Be careful: this action can process only one attachment; if there is more than one, or none at all, the execution will fail with an exception.
If `bucketName` is not provided in metadata, the `Default Bucket Name and folder` configuration is used; when `Default Bucket Name and folder` is provided, the field is optional.

Input metadata:

```json
{
  "type": "object",
  "properties": {
    "filename": {
      "type": "string",
      "required": false
    },
    "bucketName": {
      "type": "string",
      "required": false
    }
  }
}
```
Output metadata:

```json
{
  "type": "object",
  "properties": {
    "ETag": {
      "type": "string",
      "required": true
    },
    "Location": {
      "type": "string",
      "required": false
    },
    "Key": {
      "type": "string",
      "required": true
    },
    "Bucket": {
      "type": "string",
      "required": true
    }
  }
}
```
This action is deprecated. Use the CSV or Batch component to create a CSV file first, then write that file to S3.