How to set up AWS-SDK connectivity for Oracle Object Storage: A step-by-step guide

Recently, I had a situation where I had to upload files to Oracle Object Storage. In the past, I had used the aws-sdk for AWS S3. At first, I thought I would have to write new code from scratch. However, after searching for some time, I found that Oracle Object Storage provides connectivity through the aws-sdk.

This meant I could reuse the code I had previously written for AWS S3, which saved me a lot of time and effort and allowed me to quickly and easily connect to Oracle Object Storage.

If you find yourself in a similar situation, then this blog is for you.

What is Oracle Object Storage?

The Oracle Cloud Infrastructure Object Storage service is an internet-scale, high-performance storage platform that offers reliable and cost-efficient data durability. The Object Storage service can store an unlimited amount of unstructured data of any content type, including analytic data and rich content, like images and videos.

In this blog post, I will be using Node.js to connect with Oracle Cloud Infrastructure Object Storage. To get started, ensure that you have an Oracle account set up and have already created a bucket in your Object Storage service.

If you haven’t done this yet, you can sign up for a free Oracle Cloud account by visiting the Oracle Cloud website. Once you’ve set up your account and created a new bucket in Object Storage, you’ll need an access key, a secret key, and the storage endpoint of your Object Storage service.

var AWS = require('aws-sdk');
var oosClient = new AWS.S3({
    region: region,
    accessKeyId: accessKeyId,
    secretAccessKey: secretAccessKey,
    endpoint: storageEndpoint,
    s3ForcePathStyle: true,
    signatureVersion: 'v4'
});

region: The region where your bucket is located

accessKeyId: The access key ID of the account

secretAccessKey: The secret access key of the account

endpoint: The endpoint URL of the Oracle Object Storage service. This is used when connecting to non-AWS, S3-compatible services.

s3ForcePathStyle: A Boolean value that specifies whether to use path-style URLs instead of virtual-hosted-style URLs when making requests to a bucket. This is used when connecting to non-AWS S3-compatible services like Oracle Object Storage

signatureVersion: The version of the signature algorithm to use when making requests. By default, it is ‘v4’.
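For Oracle, the storageEndpoint is not an AWS URL. The Amazon S3 Compatibility API endpoint follows the pattern https://{namespace}.compat.objectstorage.{region}.oraclecloud.com. A small sketch of building it, where the namespace, region, and helper name are placeholder values of my own:

```javascript
// Sketch: building the Oracle S3-compatibility endpoint from your
// tenancy's Object Storage namespace and region. Both values below
// are placeholders -- substitute your own.
function buildStorageEndpoint(namespace, region) {
    return 'https://' + namespace + '.compat.objectstorage.' + region + '.oraclecloud.com';
}

var storageEndpoint = buildStorageEndpoint('mynamespace', 'us-ashburn-1');
console.log(storageEndpoint);
// https://mynamespace.compat.objectstorage.us-ashburn-1.oraclecloud.com
```

You can find your namespace on the bucket details page in the Oracle Cloud console.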

API to upload file

const fs = require('fs');

const fileContent = fs.readFileSync('local_path_of_file_you_want_to_upload');
const oosInfo = {
    Bucket: 'your_bucket_name',
    Key: 'path',
    Body: fileContent
};
oosClient.upload(oosInfo, function(err, data) {
    if (err) throw err;
    console.log("File uploaded successfully at " + data.Location);
});

When uploading files to storage, you can use an upload API to easily transfer the files. One important consideration when uploading large files is how to efficiently read the file data from the source location.

In Node.js, you can use the fs module to read files from a local directory. To read a file synchronously, you can use the readFileSync method.

However, if the file is large, it may not be efficient to read it into memory all at once. In this case, you can use the createReadStream method to read the file as a stream. By using a stream, you can read the file in smaller, more manageable chunks. This can help prevent memory issues and improve the overall performance of your application

API to list files

var bucketParams = {
    Bucket: "your_bucket_name",
    Prefix: "name/", // the folder whose contents you want to list
};
oosClient.listObjects(bucketParams, function (err, data) {
    if (err) {
        console.log("Error", err);
        throw err;
    }
    var contents = data.Contents;
    console.log(contents);
});

In this example, we define an object bucketParams that specifies the bucket we want to list objects from, with Prefix naming the folder to list. We then call the listObjects method, passing in bucketParams and a callback function that will be invoked with the results of the operation.
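One thing the example above glosses over: listObjects returns at most 1,000 keys per call. When the response has IsTruncated set, you pass the last key back as Marker to fetch the next page. A sketch of paging through a larger bucket (the helper name listAllObjects is my own; `client` is the S3 client configured earlier):

```javascript
// Sketch: collecting every key under a prefix by following the
// IsTruncated / Marker pagination of listObjects.
function listAllObjects(client, bucket, prefix, callback, marker, results) {
    results = results || [];
    client.listObjects({ Bucket: bucket, Prefix: prefix, Marker: marker }, function (err, data) {
        if (err) return callback(err);
        results = results.concat(data.Contents);
        if (data.IsTruncated) {
            // resume the listing from the last key of this page
            var lastKey = data.Contents[data.Contents.length - 1].Key;
            listAllObjects(client, bucket, prefix, callback, lastKey, results);
        } else {
            callback(null, results);
        }
    });
}
```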

API to download file

const paramsdownload = {
    Bucket: "your_bucket_name",
    Key: "path_of_file_you_want_to_download_from_object_storage",
};
const readStream = oosClient.getObject(paramsdownload).createReadStream();
const writableStream = fs.createWriteStream("local_path_of_file");
readStream.on("error", (e) => {
    console.log("Some error occurred: " + e);
});
writableStream.once('finish', () => {
    console.log("The file download is complete");
});
readStream.pipe(writableStream);

Here we are creating a read stream from the object using the getObject method and calling createReadStream(). We are also creating a write stream using the fs module's createWriteStream method and specifying the name of the local file we want to create.

Finally, we pipe the read stream into the write stream using the pipe method. The finish event is emitted when the write stream has been fully written to, at which point we log a message indicating that the file download is complete.

By using a read stream and a write stream in combination with the pipe method, we can download large files efficiently and without having to read the entire file into memory.

API to delete the file

var deleteparams = {
    Bucket: "your_bucket_name",
    Key: "path_of_file_you_want_to_delete"
};
oosClient.deleteObject(deleteparams, function (err, data) {
    if (err) {
        console.log("Some error occurred: " + err);
        throw err;
    } else {
        console.log("The file was deleted successfully");
    }
});

The delete API removes the specified object from storage.

References

Amazon S3 Compatibility API (docs.oracle.com)

https://aws.amazon.com/sdk-for-javascript/