
Amazon Web Service S3 Buckets confusing or not accurate

I added an "Amazon Web Service" monitor, and I followed the instructions to add credentials so that Site24x7 would have read-only access to my AWS information.  I use S3, CloudFront, CloudWatch, and IAM, but the Site24x7 monitor only checks EC2, DynamoDB, Load Balancer, and S3.  So at least I can monitor S3.

But I don't understand the reported metrics, and I believe they are incorrect.

  • The "Amazon Web Service" monitor says I have a "bucket size" of 9.  What does this mean?  Looking at my sole S3 bucket, there aren't nine of any one type of object.
  • The monitor says "number of objects" is 2,000.  What does this mean?  I would've expected this to match the "NumberOfObjects" I can fetch with the S3 API, but when I use CloudWatch or awscli, this metric is 2,995,827, not 2,000.
  • The monitor says "virtual folders" is 1.  What is a virtual folder in this context?  If you mean buckets, say "buckets"; otherwise I'll think you mean the actual virtual folders inside each bucket (that is to say, the distinct pathname parts of S3 bucket object names), and I have hundreds of those.
  • The most important statistic, the one that determines how much I'm billed each month, the bucket size in bytes, is completely missing.

Here's an example of fetching the info(*) I expect your monitor to fetch, using exactly the same credentials I gave Site24x7, so I know it's not that you're locked out.  (*: averaged over a day, but I could refine this to match polling periods.)

/usr/local/bin/aws cloudwatch get-metric-statistics --namespace AWS/S3 --start-time $(date -d 'now - 1 day' +%s) --end-time $(date +%s) --period 86400 --statistics Average --metric-name NumberOfObjects --dimensions Name=BucketName,Value=${BUCKETNAME} Name=StorageType,Value=AllStorageTypes

{ "Datapoints": [ { "Timestamp": "2017-04-20T18:54:00Z", "Average": 2995827.0, "Unit": "Count" } ], "Label": "NumberOfObjects" }


/usr/local/bin/aws cloudwatch get-metric-statistics --namespace AWS/S3 --start-time $(date -d 'now - 1 day' +%s) --end-time $(date +%s) --period 86400 --statistics Average --metric-name BucketSizeBytes --dimensions Name=BucketName,Value=${BUCKETNAME} Name=StorageType,Value=StandardStorage

{ "Datapoints": [ { "Timestamp": "2017-04-20T18:52:00Z", "Average": 3859502995442.0, "Unit": "Bytes" } ], "Label": "BucketSizeBytes" }

 

Could someone tell me what the S3 Buckets metrics mean, and how I can configure the Amazon Web Services monitor to give me the metrics I require?

Replies (3)

Hi Moses,

 

First off, thank you for contacting us.

With respect to your queries:

The "Amazon Web Service" monitor says I have a "bucket size" of 9.  What does this mean?  Looking at my sole S3 bucket, there aren't nine of any one type of object.


The bucket size shown in our dashboard is in MB, so the value 9 denotes a bucket size of 9 MB.

The monitor says "number of objects" is 2,000.  What does this mean?  I would've expected this to match the "NumberOfObjects" I can fetch with the S3 API, but when I use CloudWatch or awscli, this metric is 2,995,827, not 2,000.


We acknowledge this as a bug on our side, and we will fix it as soon as possible.

The monitor says "virtual folders" is 1.  What is a virtual folder in this context?  If you mean buckets, say "buckets"; otherwise I'll think you mean the actual virtual folders inside each bucket (that is to say, the distinct pathname parts of S3 bucket object names), and I have hundreds of those.


This is linked to your previous question and will be fixed along with it. We will look into this.

The most important statistic, the one that determines how much I'm billed each month, the bucket size in bytes, is completely missing.


As you can see, the "bucket size" value denotes this metric. Since there is a bug with respect to the number of objects, it may still be wrong; once that is fixed, you will get the correct numbers, in MB rather than bytes. Would you like to have it in bytes?

 

We are currently discussing offering S3 buckets as a separate monitor, as we see its importance to customers.

We would be delighted to support detailed monitoring of CloudFront as well as S3 buckets by understanding your requirements better.

Kindly let us know your requirements, so that we can work something out quickly.

 

Regards,

 

Ananthkumar K S

 


Gigabytes (GB) would probably be best, since that's what Amazon uses in its billing statements:

  • $0.005 per 1,000 PUT, COPY, POST, or LIST requests
  • $0.004 per 10,000 GET and all other requests
  • $0.0225234872 per GB / month of storage used (blended price)

Maybe round it to the nearest 0.1 GB, since a half-gigabyte of storage is worth about $0.01 per month.
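The arithmetic above can be sketched in shell, using the BucketSizeBytes average returned by the CloudWatch query earlier in this thread and the blended per-GB price quoted above. The assumption that 1 GB = 2^30 bytes is mine; AWS's billing definition may differ:

```shell
# Convert a BucketSizeBytes reading into GB rounded to 0.1, and estimate
# the monthly storage charge at the blended rate quoted in this thread.
bytes=3859502995442            # Average from the BucketSizeBytes query above
rate_per_gb=0.0225234872       # blended $/GB/month from the billing statement

# Assumes 1 GB = 2^30 bytes; awk is used for the floating-point math.
gb=$(awk -v b="$bytes" 'BEGIN { printf "%.1f", b / (1024 * 1024 * 1024) }')
cost=$(awk -v g="$gb" -v r="$rate_per_gb" 'BEGIN { printf "%.2f", g * r }')

echo "Bucket size: ${gb} GB, estimated monthly storage cost: \$${cost}"
```

That works out to roughly 3594.4 GB, so a 0.1 GB rounding step changes the estimate by well under a cent.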

I'd guess alert states would be:

  • if the BucketSizeBytes exceeds a user-defined amount (because the monthly expense may exceed a constrained budget), or 
  • if NumberOfObjects shrinks by a large fraction over a single day (looks like someone is deleting too much data)
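Those two checks can be sketched in shell against made-up stand-in values; the budget ceiling and the 50% shrink cutoff are hypothetical, user-chosen thresholds, not anything Site24x7 provides:

```shell
# Sketch of the two alert checks suggested above, with stand-in values.
MAX_BUCKET_BYTES=4000000000000   # hypothetical user-defined budget ceiling
bucket_bytes=3859502995442       # BucketSizeBytes figure from this thread
objects_yesterday=2995827        # NumberOfObjects figure from this thread
objects_today=1200000            # hypothetical count after a mass delete

# Alert if the bucket size exceeds the user's budget ceiling.
if [ "$bucket_bytes" -gt "$MAX_BUCKET_BYTES" ]; then
    echo "ALERT: bucket size exceeds budget ceiling"
fi

# Alert if the object count shrank by more than half in a single day.
if [ "$objects_today" -lt $(( objects_yesterday / 2 )) ]; then
    echo "ALERT: object count dropped by more than 50% in one day"
fi
```

With these stand-in numbers only the second check fires, which matches the "someone is deleting too much data" scenario.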

But I wouldn't install these alerts in everyone's monitoring, because they would depend too much on each user's needs.  Maybe I'd offer them as examples for making custom alerts in the documentation.  And maybe there are other disaster scenarios I haven't thought of.

For CloudFront, I'm not sure what the alert states would be, other than "not responding", but that's already covered by Web Monitors.  Available metrics are:

  • BytesUploaded
  • Requests
  • BytesDownloaded
  • 4xxErrorRate
  • 5xxErrorRate
  • TotalErrorRate

Maybe alert if "TotalErrorRate" is more than 10% of "Requests" over the last 2 hours?  Actually, can Site24x7 compare two metrics to make a third metric, or is that beyond the scope of Site24x7's design?  (It's okay if it is, that's not a small thing.)
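The threshold check above can be sketched in shell with a made-up stand-in value; the distribution ID is hypothetical, and note that per the CloudWatch docs, TotalErrorRate is already expressed as a percentage of requests, so it can be compared to a cutoff directly rather than divided by Requests:

```shell
# Sketch: flag when CloudFront's TotalErrorRate exceeds 10% over a window.
# The real value would come from something like (distribution ID hypothetical):
#   aws cloudwatch get-metric-statistics --namespace AWS/CloudFront \
#     --metric-name TotalErrorRate --statistics Average --period 7200 \
#     --dimensions Name=DistributionId,Value=EXXXXXXXXXXXXX Name=Region,Value=Global \
#     --start-time ... --end-time ...
error_rate=12.5    # stand-in for the fetched TotalErrorRate average (%)

# awk handles the floating-point comparison.
over=$(awk -v r="$error_rate" 'BEGIN { if (r > 10) print 1; else print 0 }')
if [ "$over" -eq 1 ]; then
    echo "ALERT: TotalErrorRate ${error_rate}% exceeded 10% over the window"
fi
```

Deriving a genuinely new metric from two others (say, error count = TotalErrorRate x Requests) would need Site24x7 to combine both series, which is the open question above.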


Hi Moses,

 

Thanks for sending your requirements.

We will have a look into the suggestions you have posted.

 

Regards,

 

Ananthkumar K S
