Thursday, September 27, 2018

AWS Support FAQs


General

Q. What is Amazon Web Services Support (AWS Support)?

AWS Support gives customers help on technical issues and additional guidance to operate their infrastructures in the cloud. Customers can choose a tier that meets their specific requirements, continuing the AWS tradition of providing the building blocks of success without bundling or long term commitments.

AWS Support is one-on-one, fast-response support from experienced technical support engineers. The service helps customers use AWS's products and features. With pay-by-the-month pricing and unlimited support cases, customers are freed from long-term commitments. Customers with operational issues or technical questions can contact a team of support engineers and receive predictable response times and personalized support.

Q: How are the enhanced AWS Support tiers different from Basic Support?

AWS Basic Support offers all AWS customers access to our Resource Center, Service Health Dashboard, Product FAQs, Discussion Forums, and Support for Health Checks – at no additional charge. Customers who desire a deeper level of support can subscribe to AWS Support at the Developer, Business, or Enterprise level.

Customers who choose AWS Support gain one-on-one, fast-response support from AWS engineers. The service helps customers use AWS's products and features. With pay-by-the-month pricing and unlimited support cases, customers are freed from long-term commitments. Customers with operational issues or technical questions can contact a team of support engineers and receive predictable response times and personalized support.

Q: What types of issues are supported?

Your AWS Support covers development and production issues for AWS products and services, along with other key stack components.

"How to" questions about AWS service and features
Best practices to help you successfully integrate, deploy, and manage applications in the cloud
Troubleshooting API and AWS SDK issues
Troubleshooting operational or systemic problems with AWS resources
Issues with our Management Console or other AWS tools
Problems detected by Health Checks
A number of third-party applications such as operating systems, web servers, email, databases, and storage configuration
AWS Support does not include:

Code development

Q: What level of architecture support is provided by Support?

The level of architecture support provided varies by support level. Higher service levels provide progressively more support for the customer use case and application specifics.
Developer: Building Blocks
Guidance on how to use all AWS products, features, and services together. Includes guidance on best practices and generalized architectural advice.
Business: Use Case Guidance
Guidance on what AWS products, features, and services to use to best support your specific use cases. Includes guidance on optimizing AWS products and configuration to meet your specific needs.
Enterprise: Application Architecture
Consultative partnership supporting specific use cases and applications. Includes design reviews and architectural guidance. The Enterprise-level support team includes a dedicated Technical Account Manager and access to an AWS Solutions Architect.

Q: I only use one or two services. Can I purchase support for just the one(s) I'm using?

No. Our Support offering covers the entire AWS service portfolio. As many of our customers are using multiple infrastructure web services together within the same application, we’ve designed AWS Support with this in mind. We’ve found that the majority of support issues, among users of multiple services, relate to how multiple services are being used together. Our goal is to support your application as seamlessly as possible.

Q: How many support cases can I initiate with AWS Support?

As many as you need. Basic Support plan customers are restricted to customer support and service limit increase cases.

Q: How many users can open technical support cases?

The Business and Enterprise support plans allow an unlimited number of users to open technical support cases (supported by AWS Identity and Access Management (IAM)). The Developer Support plan allows one user to open technical support cases. Customers with the Basic Support plan cannot open technical support cases.

Q: How quickly will you fix my issue?

That depends on your issue. The problems that application or service developers encounter vary widely, making it difficult to predict issue resolution times. We can say, however, that we'll work closely with you to resolve your issue as quickly as possible.

Q: How do I contact you?

If you have a paid Support plan, you can open a web support case from Support Center. If you have Business or Enterprise-level Support, you can request that AWS contact you at any convenient phone number or start a conversation with one of our engineers via chat.

You can also see your options for contacting Support on the Contact Us page.

Q: I'm not in the US. Can I sign up for AWS Support?

Yes, AWS Support is a global organization. Any AWS customer may sign up for and use AWS Support.

Q: Do you speak my language?

AWS Support is available in English and Japanese.

Q: How do I access Japanese Support?

To access Japanese Support, subscribers should select Japanese as their language preference from the dropdown at the top right of any AWS web page. Once your language preference is set to Japanese, all Support inquiries will be sent to our Japanese Support team.

Q: Who should use AWS Support?

We recommend all AWS customers use AWS Support to ensure a seamless experience leveraging AWS infrastructure services. We have created multiple tiers to fit your unique technical needs and budget.

Q: How do I offer support for my end customers' AWS-related issues?

If an issue is related to your AWS account, we'll be happy to help you. For problems with a resource provisioned under their own accounts, your customers will need to contact us directly. Due to security and privacy concerns we can only discuss specific details with the account holder of the resource in question. You may also inquire about becoming an AWS Partner, which offers different end-customer support options. For more information, see AWS Partner Network.

Q: I use an application someone else built on Amazon Web Services. Can I use AWS Support?

If the application uses resources provisioned under your AWS account, you can use AWS Support. First, we'll help you to determine whether the issue lies with an AWS resource or with the third-party application. Depending on that outcome, we'll either work to resolve your issue or let you know to contact the application developer for continued troubleshooting.

Q: How can I get started with AWS Support?

You can add AWS Support during the sign up process for any AWS product. Or simply select an AWS Support Plan.

Q: How much does AWS Support cost?

AWS Support offers differing levels of service to align with your needs and budget, including our Developer, Business, and Enterprise Support plans. See our pricing table for more details. 

Q: Why does my AWS Support bill spike when I purchase EC2 and RDS Reserved Instances and ElastiCache Reserved Cache Nodes?

When you prepay for compute needs with Amazon Elastic Compute Cloud (Amazon EC2) Reserved Instances, Amazon Relational Database Service (Amazon RDS) Reserved Instances, Amazon Redshift Reserved Instances, or Amazon ElastiCache Reserved Cache Nodes and are enrolled in a paid AWS Support plan, the one-time (upfront) charges for the prepaid resources are included in the calculation of your AWS Support charges in the month you purchase the resources. In addition, any hourly usage charges for reserved resources are included in the calculation of your AWS Support charges each month.

If you have existing reserved resources when you sign up for a Support plan, the one-time (upfront) charges for the reserved resources, prorated over the term of the reservation, are included in the price calculation for the first month of AWS Support. For example, if you purchase a three-year reserved instance on January 1 and sign up for the Business support plan on October 1 of the same year, 75% of the upfront fee you paid in January is included in the calculation of Support costs for October.

Q: How will I be charged and billed for my use of AWS Support?

Upon signup, you will be billed a minimum monthly service charge for the first calendar month (prorated).

In subsequent months, if your usage-based charges exceed the minimum monthly service charge, you'll be billed for the difference at the end of the month. End-of-month bills are dated on the first of the following month and reflect the prior month's usage-based charges.

Reserved resource customers (EC2 and RDS Reserved Instances and ElastiCache Reserved Cache Nodes) should expect their prepaid amounts to be included in the usage-based component during the month they are purchased.

Q: How do I cancel my AWS Support subscription?

To cancel a paid Support plan, switch to the Basic support plan:

Sign in to your AWS account with your root account credentials
Go to https://console.aws.amazon.com/support/plans/home
On the Support plans page, choose Change plan
On the Change support plan page, select the Basic plan, and then choose Change plan

Q: Can I sign up for AWS Support, receive assistance, and then cancel the subscription? If so, will I be charged a prorated amount?

You are obligated to pay for a minimum of one month of support each time you register to receive the service. While you may see a prorated refund when you cancel the service, your account will be charged again at the end of the month to account for the minimum subscription fee. We reserve the right to refuse to provide AWS Support to any customer that frequently registers for and terminates the service.

Q: What is Infrastructure Event Management (IEM)?

AWS Infrastructure Event Management is a short term engagement with AWS Support, available as part of the Enterprise-level Support product offering, and available for additional purchase for Business-level Support subscribers. AWS Infrastructure Event Management will partner with your technical and project resources to gain a deep understanding of your use case and provide architectural and scaling guidance for an event. Common use case examples for AWS Event Management include advertising launches, new product launches, and infrastructure migrations to AWS.

Q: How does Chat support work?

Chat is just another way, in addition to phone or email, to gain access to Technical Support engineers. By choosing the chat support icon in the Support Center, a chat session will be initiated through your browser. This provides a real-time, one-on-one interaction with our support engineers and allows additional information and links to be shared for faster issue resolution.

Offering support via chat is new for AWS, but not for Amazon.com. We have taken the same chat capabilities currently in use by retail customers and made it available for AWS technical support. Business and Enterprise-level customers can access chat capabilities within the Support Center. Support users can also launch a chat session from an individual case or the Contact Us section of the AWS website.

Q: What are the best practices for fault tolerance?

Customers frequently ask us if there is anything they should be doing to prepare for a major event that could affect a single Availability Zone. Our response to this question is that customers should follow general best practices related to managing highly available deployments (e.g., having a backup strategy, distributing resources across Availability Zones). The following links provide a good starting point:

Are you taking advantage of distributing compute resources across multiple AZs?
How to build fault tolerant applications on AWS?
What can I do to improve my monitoring and watch for unexpected behavior or errors?
What are the storage best practices in EC2?
What should I do if my instance is no longer responding?
How do I troubleshoot instances with Failed Status Checks?
AWS Trusted Advisor - Scan your AWS environment for optimization recommendations

Q: How do I configure Identity and Access Management (IAM) for support?

For details on how you can configure your IAM users to allow/deny access to AWS Support resources, see Accessing AWS Support.
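As a minimal sketch, assuming you want to grant an existing IAM user access to AWS Support resources, you could attach the AWS managed AWSSupportAccess policy with boto3 (the user name below is hypothetical):

```python
import boto3

iam = boto3.client("iam")

# Attach the AWS managed policy that grants access to AWS Support
# resources. "support-user" is a hypothetical IAM user name.
iam.attach_user_policy(
    UserName="support-user",
    PolicyArn="arn:aws:iam::aws:policy/AWSSupportAccess",
)
```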

Q: How long is case history retained?

Case history information is available for 12 months after creation.

Q. Can I get a history of AWS Support API calls made on my account for security analysis and operational troubleshooting purposes?

Yes. To receive a history of AWS Support API calls made on your account, you simply turn on CloudTrail in the AWS Management Console.

Note: The following calls to the AWS Support API are not recorded or delivered:

All Trusted Advisor operations: DescribeTrustedAdvisorCheckRefreshStatuses, DescribeTrustedAdvisorCheckResult, DescribeTrustedAdvisorChecks, DescribeTrustedAdvisorCheckSummaries, RefreshTrustedAdvisorCheck
For more information, see Logging AWS Support API Calls with AWS CloudTrail.
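For example, once CloudTrail is enabled, a sketch like the following could retrieve recent Support API activity with boto3 by filtering on the support.amazonaws.com event source:

```python
import boto3

cloudtrail = boto3.client("cloudtrail")

# Look up recent management events recorded for the AWS Support API.
response = cloudtrail.lookup_events(
    LookupAttributes=[{
        "AttributeKey": "EventSource",
        "AttributeValue": "support.amazonaws.com",
    }],
    MaxResults=50,
)
for event in response["Events"]:
    print(event["EventTime"], event["EventName"])
```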

Support for Health Checks

Q: What is Support for Health Checks?

Support for Health Checks monitors some of the status checks that are displayed in the Amazon EC2 console. When one of these checks does not pass, all customers have the option to open a high-severity Technical Support case. Support for Health Checks covers certain checks for Amazon Elastic Compute Cloud (Amazon EC2) and Amazon Elastic Block Store (Amazon EBS).
Q: Which AWS services provide access to support through Support for Health Checks?

Support for Health Checks currently covers three health check scenarios: EC2 system status, EBS disabled I/O, and EBS stuck in attaching.

Q: How can I get support if an EC2 instance fails the system status check?

If an EC2 system status check fails for more than 20 minutes, a button appears that allows any AWS customer to open a case. Most of the details about your case are auto-populated, such as instance name, region, and customer information, but you can add additional context with a free-form text description.

Note: Support for Health Checks covers only the EC2 system status check, not the EC2 instance status check. For troubleshooting ideas, see Troubleshooting Instances with Failed Status Checks.
Q: How can I get support if an EBS volume is stuck in attaching or has disabled I/O?

An EBS volume that has a health status of disabled I/O or is stuck in attaching displays a Troubleshoot Now button. You are presented with a number of self-remediation options that could potentially fix the problem without the need to contact support. If the EBS volume is still failing the health check after you have followed all applicable steps, choose Contact Support to open a case.
Q: What is the response time for my Support for Health Checks support case?

A Support for Health Checks case opened through the console is a high-severity case.

Q: How do I check the status of my case after it has been opened?

After you submit a case, the button changes from Contact Support to View Case. To view the case status, choose View Case.

Q: Do I have to open a case for each instance that is unresponsive?

You can, but you don’t need to. You can include additional context and instance names in the text description submitted with your initial case.

Q: Why must an EC2 instance fail the system status check for 20 minutes? Why not just allow customers to open a case immediately?

Most system status issues are resolved by automated processes in less than 20 minutes and do not require any action on the part of the customer. If the instance is still failing the check after 20 minutes, then opening a case brings the situation to the attention of our technical support team for assistance.

Q: Can any of my Identity and Access Management (IAM) users open a case?

Any customer can create and manage a Support for Health Checks case using their root account credentials. IAM users associated with accounts that have a Business or Enterprise Support plan can also create and manage a Support for Health Checks case.

AWS Trusted Advisor

Q: What is AWS Trusted Advisor?

AWS Trusted Advisor is an application that draws upon best practices learned from AWS’s aggregated operational history of serving hundreds of thousands of AWS customers. Trusted Advisor inspects your AWS environment and makes recommendations for saving money, improving system performance, or closing security gaps.

Q: How do I access Trusted Advisor?

Trusted Advisor is available in the AWS Management Console. All AWS users have access to the data for seven checks. Users with Business or Enterprise-level Support can access all checks. You can access the Trusted Advisor console directly at https://console.aws.amazon.com/trustedadvisor/.
Q: What made you choose the current checks/recommendations over others?

Every check was vetted for accuracy, consistency, and usefulness to our customers. We gather data and research to ensure we are making the right recommendations based on best practices and historical values. We have identified many possible checks for future implementation, and we will continue to add them over time.

Q: Does Trusted Advisor monitor my usage? Can Amazon see what I’m doing with AWS?

Trusted Advisor respects your privacy just as all Amazon Web Services do. We will never have access to your data or the software running on your account without your consent.

Q: What does Trusted Advisor check?

Trusted Advisor includes an ever-expanding list of checks in the following four categories:

Cost Optimization–Recommendations that can potentially save you money by highlighting unused resources and opportunities to reduce your bill.
Security–Identification of security settings that could make your AWS solution less secure.
Fault Tolerance–Recommendations that help increase the resiliency of your AWS solution by highlighting redundancy shortfalls, current service limits, and overutilized resources.
Performance–Recommendations that can help to improve the speed and responsiveness of your applications.

Q: How does the Trusted Advisor notification feature work?

The Trusted Advisor notification feature helps you stay up-to-date with your AWS resource deployment. You will be notified by weekly email when you opt in for this service, and it is totally free.

What is in the notification? The notification email includes the summary of saving estimates and your check status, especially highlighting changes of check status.
How do I sign up for the notification? This is an opt-in service, so do make sure to set up the notification in your dashboard. You can choose which contacts receive notification on the Preferences pane of the Trusted Advisor console.
Who can get this notification? You can indicate up to three recipients for the weekly status updates and savings estimates.
What language will the notification be in? The notification is available in English and Japanese.
How often will I get notified, and when? Currently you will receive a weekly notification email, typically on Thursday or Friday, and it will reflect your resource configuration over the past week (7 days). It is in our roadmap to provide an event-triggered mailer and more flexibility.
Can I unsubscribe from the notifications if I do not want to receive the email anymore? Yes. You can change the setting in your dashboard by clearing all the check boxes and then selecting "Save Preferences". Also, help us make this feature more relevant and better for you by using the "Feedback" button on the dashboard.
How much does it cost? It is totally free. Get started today!

Q: How does the "Recent Changes" feature work?

Trusted Advisor tracks the recent changes to your resource status on the console dashboard. The most recent changes over the past 30 days appear at the top to bring them to your attention. The system will track seven updates per page, and you can go to different pages to view all recent changes by clicking the forward or the backward arrow displayed on the top-right corner of the "Recent Changes" area.
Q: How does the "Exclude Items" function work?

If you don’t want to be notified about the status of a particular resource, you can choose to exclude (suppress) the reporting for that resource. You would normally do this after you have inspected the results of a check and decide not to make any changes to the AWS resource or setting that Trusted Advisor is flagging.

To exclude items, check the box to the left of the resource items, and then choose "Exclude". Excluded items appear in a separate view. You can restore (include) them at any time by selecting the items in the excluded items list and then choosing "Include".

The "Exclude Items" function is available only at the resource level, not at the check level. We recommend that you examine each resource alert before excluding it to make sure that you can still see the overall status of your deployment without overlooking a certain area. For an example, see AWS Trusted Advisor for Everyone in the AWS Blog.

Q: What is an action link?

Most items in a Trusted Advisor report have hyperlinks to the AWS Management Console, where you can take action on the Trusted Advisor recommendations. Action links are included for all services that support them.

For example, the Amazon EBS Snapshots check lists Amazon EBS volumes whose snapshots are missing or more than seven days old. In each row of the report, the volume ID is a hyperlink to that volume in the Amazon EC2 console, where you can take action to create a snapshot with just a couple of clicks.

Q: How do I manage the access to the Trusted Advisor console? What is the IAM policy?

For the Trusted Advisor console, access is controlled by IAM policies that use the trustedadvisor namespace, and access options include viewing and refreshing individual checks or categories of checks. For more information, see Controlling Access to the Trusted Advisor Console.
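As an illustration, a read-only policy scoped to the trustedadvisor namespace might be created as follows with boto3; the wildcard action shown is an assumption, so consult the linked documentation for the exact action names:

```python
import boto3
import json

iam = boto3.client("iam")

# A sketch of a read-only Trusted Advisor console policy. The
# "trustedadvisor:Describe*" wildcard is illustrative; see the
# documentation for the full list of supported actions.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["trustedadvisor:Describe*"],
        "Resource": "*",
    }],
}
iam.create_policy(
    PolicyName="TrustedAdvisorReadOnly",
    PolicyDocument=json.dumps(policy_document),
)
```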

Q: How do I access AWS Trusted Advisor via API?

You can retrieve and refresh Trusted Advisor results programmatically. For more information, see About the AWS Support API.
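For instance, a minimal sketch using the boto3 Support client (available to Business and Enterprise Support subscribers; the Support API is served from the us-east-1 endpoint) could list checks and their current status:

```python
import boto3

# The AWS Support API requires a Business or Enterprise Support plan
# and is served from the us-east-1 endpoint.
support = boto3.client("support", region_name="us-east-1")

checks = support.describe_trusted_advisor_checks(language="en")["checks"]
for check in checks:
    result = support.describe_trusted_advisor_check_result(checkId=check["id"])
    print(check["name"], "->", result["result"]["status"])

# Individual checks can also be refreshed programmatically:
# support.refresh_trusted_advisor_check(checkId=checks[0]["id"])
```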

Q: How often can I refresh my Trusted Advisor result?

You can refresh a check 5 minutes after it was last refreshed. You can refresh individual checks or refresh all the checks at once by choosing "Refresh All" in the top-right corner of the summary dashboard.

When you visit the Trusted Advisor dashboard, any checks that have not been refreshed in the last 24 hours are automatically refreshed; this can take a few minutes. The date and time of the last refresh is displayed to the right of the check title.

In addition, for customers with Business or Enterprise Support plans, the Trusted Advisor data is automatically refreshed weekly.
What does "The check could not be completed in the allotted time" mean? How can I fix this?

We set a limit on the amount of time a Trusted Advisor check can take to complete. You can try refreshing the check again later, in case the failure was transient.

Q: How do Trusted Advisor activities affect my Amazon CloudTrail logs?

Each customer action in Trusted Advisor triggers an API call that is documented in your Amazon CloudTrail logs. For example, when you refresh a Trusted Advisor check, you will see a call to the relevant resources with invokedBy and userAgent values of "support.amazonaws.com". This logging incurs minimal charges (a few cents per month). Automatic refreshes also appear in CloudTrail logs.

Q: Which Trusted Advisor checks and features are available to all AWS customers?

These seven Trusted Advisor checks are available to all customers at no cost: Service Limits (Performance category; details at Service Limits Check Questions); Security Groups - Specific Ports Unrestricted, IAM Use, MFA on Root Account, EBS Public Snapshots, and RDS Public Snapshots (Security category); and S3 Bucket Permissions, which identifies S3 buckets that are publicly accessible due to ACLs or policies that allow read/write access for any user. Customers can access the remaining checks by upgrading to a Business or Enterprise Support plan.

You also have access to some Trusted Advisor features, including the weekly Trusted Advisor notification, the Action Links, the Exclude Items, and the Access Management features.

AWS Personal Health Dashboard

Q: How is AWS Personal Health Dashboard different from the AWS Service Health Dashboard?

The Service Health Dashboard is a good way to view the overall status of each AWS service, but provides little in terms of how the health of those services is impacting your resources. AWS Personal Health Dashboard provides a personalized view of the health of the specific services that are powering your workloads and applications. What’s more, Personal Health Dashboard proactively notifies you when AWS experiences any events that may affect you, helping provide quick visibility and guidance to help you minimize the impact of events in progress, and plan for any scheduled changes, such as AWS hardware maintenance.

Q: What actions should I take based on the status of AWS Personal Health Dashboard?

You will be able to view details about the event that is impacting your environment. AWS Personal Health Dashboard will continue to update the event regularly until it ends, and provides remediation guidance along the way.

Q: What language will the notification be in?

All notifications will be available only in English. We will add support for other languages over time.


Q: What notification channels are available?

AWS Personal Health Dashboard supports API, email, and CloudWatch Events (SQS, SNS, Lambda, Kinesis). Personal Health Dashboard also supports showing alerts in the AWS Management Console navigation bar.

Q: How do I sign up for notifications?

You can navigate to the CloudWatch Events console and write custom rules to filter events of interest. These rules can be wired to targets such as SNS, SQS, Lambda, or Kinesis that will be invoked when your rule pattern matches AWS Personal Health Dashboard events on the CloudWatch Events bus.

Q: Can I customize AWS Personal Health Dashboard?

Yes. You can customize Personal Health Dashboard through setting up notification preferences for the various types of events. You can also create custom remediation actions that are triggered in response to events. Set this up by visiting the CloudWatch Events console.
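As a sketch, assuming an existing SNS topic (the ARN below is hypothetical), a CloudWatch Events rule matching AWS Health events could be wired up with boto3 like this:

```python
import boto3

events = boto3.client("events")

# Match all AWS Health events; narrow the pattern to specific
# services or event type codes as needed.
events.put_rule(
    Name="health-events-to-sns",
    EventPattern='{"source": ["aws.health"]}',
)

# Deliver matching events to an SNS topic (hypothetical ARN).
events.put_targets(
    Rule="health-events-to-sns",
    Targets=[{
        "Id": "notify-ops",
        "Arn": "arn:aws:sns:us-east-1:123456789012:ops-alerts",
    }],
)
```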

Q: Can AWS Personal Health Dashboard automate any actions I take today to recover from known events?

AWS Personal Health Dashboard will not take any actions on your AWS environment on your behalf. It provides the tooling required to wire up custom actions defined by you. Personal Health Dashboard events are published on the CloudWatch Events channel; you can write rules to capture these events and wire them to Lambda functions. AWS Personal Health Dashboard also provides best practices and 'how-to' guides that help you define your automated run-books.

Q: Can I create custom actions with Lambda?

Yes, you can define custom actions in Lambda, and use CloudWatch Events to trigger Lambda actions in response to events.
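A minimal sketch of such a Lambda function, assuming it is triggered by a CloudWatch Events rule for AWS Health events, might look like this (the remediation step is a placeholder):

```python
# Minimal Lambda handler for AWS Health events delivered via
# CloudWatch Events. The remediation logic is a placeholder; replace
# it with your own run-book steps.
def lambda_handler(event, context):
    detail = event.get("detail", {})
    event_type = detail.get("eventTypeCode", "UNKNOWN")
    affected = event.get("resources", [])
    print(f"Health event {event_type} affecting {affected}")
    # Placeholder: e.g., replace affected instances or page on-call.
    return {"handled": event_type}
```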

Q: Can I run diagnostics in AWS Personal Health Dashboard?

No. At this time running diagnostics directly inside AWS Personal Health Dashboard is not available. However, you could attach a diagnostics automation script that will be executed by Lambda when an event occurs if wired appropriately.

Q: Will customers have API access to events on AWS Personal Health Dashboard?

Yes. The AWS Personal Health Dashboard event repository will be accessible with the Health API to customers who are on Business and Enterprise Support plans. Learn more about the AWS Health API.
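For example, a sketch using the boto3 Health client (also served from the us-east-1 endpoint) could list open and upcoming events:

```python
import boto3

# The AWS Health API requires a Business or Enterprise Support plan
# and is served from the us-east-1 endpoint.
health = boto3.client("health", region_name="us-east-1")

response = health.describe_events(
    filter={"eventStatusCodes": ["open", "upcoming"]}
)
for event in response["events"]:
    print(event["eventTypeCode"], event["region"], event["statusCode"])
```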

Q: How does AWS Personal Health Dashboard work with Amazon CloudWatch?

CloudWatch and AWS Personal Health Dashboard can coexist to provide additional value beyond what either service can provide by itself. While CloudWatch lets you create metrics and set alarms for the services available within the console, the Personal Health Dashboard provides notifications and information regarding issues that impact the underlying AWS infrastructure.

CloudWatch alarms typically provide information related to a symptom without providing insight into the root cause. For example, for EBS you might create metrics to monitor VolumeReadBytes, VolumeWriteBytes, VolumeReadOps, VolumeWriteOps, VolumeQueueLength, etc. When a metric breaches an alarm threshold, you are notified but receive little to help determine whether the problem lies with your application or with the AWS infrastructure, such as a lost EBS volume.

With AWS Personal Health Dashboard, you receive a notification with detailed information regarding the underlying issue, as well as remediation guidance that allows you to identify the root of the issue and quickly troubleshoot to resolve.

AWS Personal Health Dashboard complements CloudWatch by providing insights into operational issues caused by AWS. You could overlay Personal Health Dashboard events alongside CloudWatch metrics to draw more insight into what actually triggered the alarms. Additionally, AWS Personal Health Dashboard can inform you when CloudWatch itself is facing service issues.

Well-Architected Reviews

Q: How many Well-Architected Reviews are Enterprise Support customers entitled to?

Every Enterprise Support customer is entitled to one Well-Architected Review. Additional reviews may be available based on customer need.

Q: How do I get started with my Well-Architected Review?

Enterprise Support customers can contact their Technical Account Manager to initiate a Well-Architected Review.

Operations Support

Q: How many Cloud Operations Reviews are Enterprise Support customers entitled to?

Every Enterprise Support customer is entitled to one Cloud Operations review. Additional reviews may be available based on customer need.

Q: How do I get started with my Cloud Operations Review?

Enterprise Support customers can contact their Technical Account Manager to initiate a Cloud Operations Review.

Training

Q: Do the training credits provided with the Enterprise Support plan expire?

Each year, Enterprise Support customers are entitled to receive 500 qwikLABS credits. Unused credits expire one year after the day on which they are applied to the administrator account and cannot be rolled over to the next year. Unused credits that were purchased at the 30% discount also expire one year after they have been applied to the administrator account.

Third-Party Software

Q: What third-party software is supported?

AWS Support Business and Enterprise levels include Beta support for common operating systems and common application stack components. AWS Support engineers can assist with the setup, configuration, and troubleshooting of the following third-party platforms and applications:

Operating systems on EC2 instances:

Ubuntu Linux
Red Hat Enterprise Linux and Fedora
SUSE Linux (SLES and openSUSE)
CentOS Linux
Microsoft Windows Server 2008
Microsoft Windows Server 2008 R2
Microsoft Windows Server 2012
Microsoft Windows Server 2012 R2
Microsoft Windows Server 2016
Infrastructure components:

Sendmail and Postfix MTAs
OpenVPN and RRAS
SSH, SFTP, and FTP
LVM and Software RAID
Web servers:

Apache
IIS
Nginx
Databases:

MySQL
Microsoft SQL Server

Q: What if you can’t resolve my third-party software issue?

If we are not able to resolve your issue, we will collaborate with, or refer you to, the appropriate vendor support for that product. In some cases, you may need to have a support relationship with the vendor to receive support from them.

Q: What are some of the most common reasons a customer might require third-party software support?

AWS Support can assist with installation, configuration, and troubleshooting of third-party software on the supported list. For more advanced topics such as performance tuning, troubleshooting of custom code or scripts, and security questions, it may be necessary to contact the third-party software provider directly. While AWS Support will make every effort to assist, any assistance beyond installation, configuration, and basic troubleshooting of supported third-party software will be on a best-effort basis only.

AWS Account Closure

Q: How do I close my AWS Account?
Before closing your account, be sure to back up any applications and data that you need to retain. AWS may not be able to retrieve your account data after your account is closed. After completing your backup, visit your Account Settings page and choose "Close Account". This will close your AWS account and unsubscribe you from all AWS services. You will not be able to access AWS services or launch new resources when your account is closed.
If you manage multiple AWS accounts under the same company name that you would like to close, you can contact your account representative or open an account and billing support case for assistance. If you have additional questions or requirements associated with closing your AWS accounts, your account representative or AWS Customer Service can help.

Q: I received an error message when I tried to close my AWS account. What do I need to do?

If you receive an error message when trying to close your account, you can contact your account representative or open an account and billing support case for assistance.
Q: Will I be billed after I close my account?

Usage and billing stops accruing when your account is closed. You will be billed for any usage that has accrued up until the time you closed your account, and your final charges will be billed at the beginning of the following month.

Common AWS Concierge Customer Questions
Account Management

Q. How do I securely control access to my AWS services and resources?

We recommend that you use AWS Identity and Access Management (IAM), which enables you to securely control access to AWS services and resources for your users. Using IAM, you can create and manage AWS users and groups and use permissions to allow and deny access to AWS resources. IAM enables security best practices by allowing you to grant unique security credentials to users and groups to specify which AWS service APIs and resources they can access.
IAM access can be revoked if a user leaves the company or project, helping to ensure that root credentials are not exposed or compromised. For security and team functionality, IAM users are essential to using the full potential of your AWS account. For more information on IAM and controlling access to your billing information, see AWS Identity and Access Management (IAM) and Controlling Access to Your Billing Information.
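As a minimal sketch (the user, group, and policy choices here are hypothetical), creating an IAM user and granting group-level permissions with boto3 could look like this:

```python
import boto3

iam = boto3.client("iam")

# Create a user and a group; the names are hypothetical.
iam.create_user(UserName="alice")
iam.create_group(GroupName="developers")
iam.add_user_to_group(GroupName="developers", UserName="alice")

# Grant the group read-only access using an AWS managed policy.
iam.attach_group_policy(
    GroupName="developers",
    PolicyArn="arn:aws:iam::aws:policy/ReadOnlyAccess",
)
```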

Consolidated Billing

Q. What is Consolidated Billing?

Consolidated Billing is a feature that allows you to consolidate payment for multiple AWS accounts in your organization by designating one of them to be the payer account.

With Consolidated Billing, you can see a combined view of AWS charges incurred by all accounts, as well as get a detailed cost report for each individual AWS account associated with your payer account. Consolidated Billing is offered at no additional charge and allows for all accounts under the consolidated group to be treated as one account, which assists with achieving volume discounts more rapidly.

See Pay Bills for Multiple Accounts with Consolidated Billing in the AWS Billing and Cost Management User Guide.

Q. How can I use my AWS bill to evaluate costs?

AWS provides a number of different ways to explore your AWS monthly bill and to allocate costs by account, resource ID, or customer-defined tags.
To get a snapshot of your billing data, you can use the Billing and Cost Management console, the monthly cost allocation report, and the monthly billing report.

Q. What are blended rates?

For billing purposes, AWS treats all the accounts in a Consolidated Billing family as if they're one account. Blended rates appear on your bill as an average price for variable usage across an account family. This allows you to take advantage of two features that are designed to ensure that you pay the lowest available price for AWS products and services:
Pricing tiers. Pricing tiers reward higher usage with lower unit prices for services.
Capacity reservations. Rates are discounted when you purchase some services in advance for a specific period of time.
You can always see the precise account usage, along with the unblended rates, in the detailed billing reports.
For more information, see Understanding Blended Rates in the AWS Billing and Cost Management User Guide.

Q. Why don't I see the same figures in the Billing and Cost Management console as I see in the detailed billing report?

The Billing and Cost Management console and the detailed billing report provide different information based on blended and unblended rates. For more information, see Understanding Blended Rates or contact us through the AWS Support Center.

Q. How do I tell which accounts benefited from Reserved Instance pricing?

The Detailed Billing Report shows the linked accounts that benefited from a Reserved Instance on your consolidated bill. The costs of the Reserved Instances can be unblended to show how the discount was distributed. Reserved Instance utilization reports also show the total cost savings (including upfront Reserved Instance costs) across a Consolidated Bill.

Q. How are credits applied across a Consolidated Bill?

Promotional and service credits are shared across the entire consolidated bill. Credits are used first by the account that the credit was applied to; if additional credit remains, it is distributed based on the service usage on the other linked accounts. This means that the Billing and Cost Management console shows any linked accounts that have benefited from a credit that was applied to another linked account. Unlike credits, refunds (credit memos) are viewable only by the payer account.
For more information, see AWS Credits.
Reporting

Q. How do I use the AWS Billing and Cost Management console?

The AWS Billing and Cost Management Console is a service that you use to pay your AWS bill, monitor costs, and visualize your AWS spend. There are many ways to use this tool for your account.
For more information, see What Is AWS Billing and Cost Management?

Q. How do I use Cost Explorer?

You can use Cost Explorer to visualize patterns in your spending on AWS resources over time. You can quickly identify areas that need further inquiry, and you can see trends that you can use to understand spend and to predict future costs.
For more information, see Manage Your Spend Data with Cost Explorer.

Q. How do I use the Amazon EC2 instance usage reports?

You can use the instance usage reports to view your instance usage and cost trends. You can see your usage data in either instance hours or cost. You can choose to see hourly, daily, and monthly aggregates of your usage data. You can filter or group the report by region, Availability Zone, instance type, AWS account, platform, tenancy, purchase option, or tag. After you configure a report, you can bookmark it so that it's easy to get back to later.
For more information, see Instance Usage Report.

Q. How do I use the Reserved Instance utilization report?

The Reserved Instance utilization report describes the utilization over time of each group (or bucket) of Amazon EC2 Reserved Instances that you own. Each bucket has a unique combination of region, Availability Zone, instance type, tenancy, offering type, and platform. You can specify the time range that the report covers, from a single day to weeks, months, a year, or three years.
For more information, see Reserved Instance Utilization Reports.

Reserved Instances

Q. How do I tell if my Reserved Instances are being used?

Three tools are available to determine Reserved Instance utilization:
Detailed billing report. This report shows hourly detail of all charges for an account or consolidated bill. Near the bottom of the report are line items that explain Reserved Instance utilization in an aggregated format (xxx hours purchased; xxx hours used). To configure your account for this report, see Getting Set Up for Usage Reports.
Reserved Instance utilization report. This report is accessible from the Billing and Cost Management console and shows a high-level overview of utilization. For more information, see Reserved Instance Utilization Reports.
Billing and Cost Management console. The "Bills" pane of the console shows Reserved Instance utilization at the highest level. This view provides the least detailed view of Reserved Instance utilization.

Q. Can I restrict a Reserved Instance benefit to a single account/instance?

Unfortunately, this is not supported. For billing purposes, Consolidated Billing treats all the accounts on the Consolidated Bill as one account. All accounts on a Consolidated Bill receive the hourly cost benefit of Reserved Instances purchased by any other account.

Q. How do I see how Reserved Instances are applied across my entire Consolidated Bill?

The detailed billing report shows the hourly detail of all charges on an account or consolidated bill. Near the bottom of the report, line items explain Reserved Instance utilization in an aggregated format (xxx hours purchased; xxx hours used). To configure your account for this report, see Getting Set Up for Usage Reports.

Q. How do I tell if and why a Reserved Instance is underutilized?

In addition to the three tools listed in How do I tell if my Reserved Instances are being used, AWS Trusted Advisor provides best practices (or checks) in four categories: Cost Optimization, Security, Fault Tolerance, and Performance. The Cost Optimization section includes a check for Amazon EC2 Reserved Instances Optimization. For more information about the Trusted Advisor check, see Reserved Instance Optimization Check Questions.
General Billing

Q. Which accounts are charged sales tax and why?

Tax is normally calculated at the linked account level. Each account must add its own tax exemption. For more information on US sales taxes and VAT taxes, see the following:

VAT customers
US Tax customers
AWS Tax Help
AWS Billing FAQ (additional tax information)
Limit Increases

Q. How do I submit an urgent limit increase request?

Submit limit increase requests in the AWS Support Center. Choose "Create case", select "Service Limit Increase", and then select an item from the "Limit Type" list.
We aim for a quick turnaround time with all limit increases. If your request is urgent, complete the details of the request and then select the "Phone" contact method for 24/7 service. Provide the agent with the support case ID, and we will follow up immediately with the relevant teams.
Resellers

Q. How do we bill our end customers based on the detailed billing report?

AWS does not support the billing of reseller end customers because each reseller uses unique pricing and billing structures. We do recommend that resellers not use blended rates for billing—these figures are averages and are not meant to reflect actual billed rates. The detailed billing report can show unblended costs for each account on a consolidated bill, which is more helpful for the purpose of billing end customers.

Wednesday, September 26, 2018

AWS FAQs


Q: What is Amazon S3?

Amazon S3 is object storage built to store and retrieve any amount of data from anywhere on the Internet. It’s a simple storage service that offers an extremely durable, highly available, and infinitely scalable data storage infrastructure at very low costs.


Q: What can I do with Amazon S3?

Amazon S3 provides a simple web service interface that you can use to store and retrieve any amount of data, at any time, from anywhere on the web. Using this web service, you can easily build applications that make use of Internet storage. Since Amazon S3 is highly scalable and you only pay for what you use, you can start small and grow your application as you wish, with no compromise on performance or reliability.

Amazon S3 is also designed to be highly flexible. Store any type and amount of data that you want; read the same piece of data a million times or only for emergency disaster recovery; build a simple FTP application, or a sophisticated web application such as the Amazon.com retail web site. Amazon S3 frees developers to focus on innovation instead of figuring out how to store their data.


Q: How can I get started using Amazon S3?

To sign up for Amazon S3, click this link. You must have an Amazon Web Services account to access this service; if you do not already have one, you will be prompted to create one when you begin the Amazon S3 sign-up process. After signing up, please refer to the Amazon S3 documentation and sample code in the Resource Center to begin using Amazon S3.


Q: What can developers do with Amazon S3 that they could not do with an on-premises solution?

Amazon S3 enables any developer to leverage Amazon’s own benefits of massive scale with no up-front investment or performance compromises. Developers are now free to innovate knowing that no matter how successful their businesses become, it will be inexpensive and simple to ensure their data is quickly accessible, always available, and secure.


Q: What kind of data can I store in Amazon S3?

You can store virtually any kind of data in any format. Please refer to the Amazon Web Services Licensing Agreement for details.


Q: How much data can I store in Amazon S3?

The total volume of data and number of objects you can store are unlimited. Individual Amazon S3 objects can range in size from a minimum of 0 bytes to a maximum of 5 terabytes. The largest object that can be uploaded in a single PUT is 5 gigabytes. For objects larger than 100 megabytes, customers should consider using the Multipart Upload capability.
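As a sketch of the multipart recommendation above (bucket and file names are hypothetical), boto3's transfer manager switches to multipart uploads automatically once a file crosses the configured threshold:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")

# Use multipart uploads for objects above 100 MB, per the guidance above.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024)

s3.upload_file(
    "backup.tar.gz",           # local file (hypothetical)
    "my-example-bucket",       # bucket name (hypothetical)
    "backups/backup.tar.gz",   # object key
    Config=config,
)
```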


Q: What storage classes does Amazon S3 offer?

Amazon S3 offers a range of storage classes designed for different use cases. There are four highly durable storage classes including Amazon S3 Standard for general purpose storage of frequently accessed data, Amazon S3 Standard-Infrequent Access or Amazon S3 One Zone-Infrequent Access for long-lived, but less frequently accessed data, and Amazon Glacier for long-term archive. You can learn more about these storage classes on the Amazon S3 Storage Classes page.


Q: What does Amazon do with my data in Amazon S3?

Amazon will store your data and track its associated usage for billing purposes. Amazon will not otherwise access your data for any purpose outside of the Amazon S3 offering, except when required to do so by law. Please refer to the Amazon Web Services Licensing Agreement for details.


Q: Does Amazon store its own data in Amazon S3?

Yes. Developers within Amazon use Amazon S3 for a wide variety of projects. Many of these projects use Amazon S3 as their authoritative data store and rely on it for business-critical operations.


Q:  How is Amazon S3 data organized?

Amazon S3 is a simple key-based object store. When you store data, you assign a unique object key that can later be used to retrieve the data. Keys can be any string, and they can be constructed to mimic hierarchical attributes. Alternatively, you can use S3 Object Tagging to organize your data across all of your S3 buckets and/or prefixes.
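To make the key-based model concrete, here is a minimal boto3 sketch (the bucket name and key are hypothetical) that stores, retrieves, and tags an object:

```python
import boto3

s3 = boto3.client("s3")

# Store an object under a key that mimics a folder hierarchy.
s3.put_object(
    Bucket="my-example-bucket",
    Key="logs/2018/09/app.log",
    Body=b"example log line\n",
)

# Retrieve it later using the same key.
obj = s3.get_object(Bucket="my-example-bucket", Key="logs/2018/09/app.log")
data = obj["Body"].read()

# Tag the object so it can be organized across buckets and prefixes.
s3.put_object_tagging(
    Bucket="my-example-bucket",
    Key="logs/2018/09/app.log",
    Tagging={"TagSet": [{"Key": "project", "Value": "analytics"}]},
)
```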

Q: How do I interface with Amazon S3?

Amazon S3 provides a simple, standards-based REST web services interface that is designed to work with any Internet-development toolkit. The operations are intentionally made simple to make it easy to add new distribution protocols and functional layers.

Q: How reliable is Amazon S3?

Amazon S3 gives any developer access to the same highly scalable, highly available, fast, inexpensive data storage infrastructure that Amazon uses to run its own global network of web sites. The S3 Standard storage class is designed for 99.99% availability, the S3 Standard-IA storage class is designed for 99.9% availability, and the S3 One Zone-IA storage class is designed for 99.5% availability. All of these storage classes are backed by the Amazon S3 Service Level Agreement.


Q:    How will Amazon S3 perform if traffic from my application suddenly spikes?

Amazon S3 was designed from the ground up to handle traffic for any Internet application. Pay-as-you-go pricing and unlimited capacity ensures that your incremental costs don’t change and that your service is not interrupted. Amazon S3’s massive scale enables us to spread load evenly, so that no individual application is affected by traffic spikes.


Q:    Does Amazon S3 offer a Service Level Agreement (SLA)?

Yes. The Amazon S3 SLA provides for a service credit if a customer's monthly uptime percentage is below our service commitment in any billing cycle.

AWS Regions
Q:  Where is my data stored?

You specify an AWS Region when you create your Amazon S3 bucket. For S3 Standard, S3 Standard-IA, and Amazon Glacier storage classes, your objects are automatically stored across multiple devices spanning a minimum of three Availability Zones, each separated by miles across an AWS Region. Objects stored in the S3 One Zone-IA storage class are stored redundantly within a single Availability Zone in the AWS Region you select. Please refer to Regional Products and Services for details of Amazon S3 service availability by AWS Region.
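For example, a minimal boto3 sketch (the bucket name is hypothetical) that pins a new bucket to a chosen Region:

```python
import boto3

# The bucket's Region is fixed at creation time. Outside us-east-1,
# it must be passed as a LocationConstraint.
s3 = boto3.client("s3", region_name="eu-west-1")
s3.create_bucket(
    Bucket="my-example-bucket",
    CreateBucketConfiguration={"LocationConstraint": "eu-west-1"},
)
```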

Q:  What is an AWS Region?

An AWS Region is a geographic location where AWS provides multiple, physically separated and isolated Availability Zones which are connected with low latency, high throughput, and highly redundant networking.


Q:  What is an AWS Availability Zone (AZ)?

An AWS Availability Zone is an isolated location within an AWS Region. Within each AWS Region, S3 operates in a minimum of three AZs, each separated by miles to protect against local events like fires, floods, etc.

Amazon S3 Standard, S3 Standard-Infrequent Access, and Amazon Glacier storage classes replicate data across a minimum of three AZs to protect against the loss of one entire AZ. This remains true in Regions where fewer than three AZs are publicly available. Objects stored in these storage classes are available for access from all of the AZs in an AWS Region.

The Amazon S3 One Zone-IA storage class replicates data within a single AZ. Data stored in this storage class is susceptible to loss in an AZ destruction event.


Q:  How do I decide which AWS Region to store my data in?

There are several factors to consider based on your specific application. You may want to store your data in a Region that…

...is near to your customers, your data centers, or your other AWS resources in order to reduce data access latencies.
...is remote from your other operations for geographic redundancy and disaster recovery purposes.
...enables you to address specific legal and regulatory requirements.
...allows you to reduce storage costs. You can choose a lower priced region to save money. For S3 pricing information, please visit the S3 pricing page.

Q:  In which parts of the world is Amazon S3 available?

Amazon S3 is available in AWS Regions worldwide, and you can use Amazon S3 regardless of your location. You just have to decide which AWS Region(s) you want to store your Amazon S3 data. See the AWS Regional Availability Table for a list of AWS Regions in which S3 is available today.


Billing
Q:  How much does Amazon S3 cost?

With Amazon S3, you pay only for what you use. There is no minimum fee. You can estimate your monthly bill using the AWS Simple Monthly Calculator.

We charge less where our costs are less. Some prices vary across Amazon S3 Regions. Billing prices are based on the location of your bucket. There is no Data Transfer charge for data transferred within an Amazon S3 Region via a COPY request. Data transferred via a COPY request between AWS Regions is charged at rates specified in the pricing section of the Amazon S3 detail page. There is no Data Transfer charge for data transferred between Amazon EC2 and Amazon S3 within the same region or for data transferred between the Amazon EC2 Northern Virginia Region and the Amazon S3 US East (Northern Virginia) Region. Data transferred between Amazon EC2 and Amazon S3 across all other regions (for example, between the Amazon EC2 Northern California Region and the Amazon S3 US East (Northern Virginia) Region) is charged at rates specified on the Amazon S3 pricing page.


Q:  How will I be charged and billed for my use of Amazon S3?

There are no set-up fees or commitments to begin using the service. At the end of the month, your credit card will automatically be charged for that month’s usage. You can view your charges for the current billing period at any time on the Amazon Web Services web site, by logging into your Amazon Web Services account, and clicking “Account Activity” under “Your Web Services Account”.

With the AWS Free Usage Tier*, you can get started with Amazon S3 for free in all regions except the AWS GovCloud Region. Upon sign-up, new AWS customers receive 5 GB of Amazon S3 Standard storage, 20,000 Get Requests, 2,000 Put Requests, 15GB of data transfer in, and 15GB of data transfer out each month for one year.

Amazon S3 charges you for the following types of usage. Note that the calculations below assume there is no AWS Free Tier in place.

Storage Used:

Amazon S3 storage pricing is summarized on the Amazon S3 Pricing page.

The volume of storage billed in a month is based on the average storage used throughout the month. This includes all object data and metadata stored in buckets that you created under your AWS account. We measure your storage usage in “TimedStorage-ByteHrs,” which are added up at the end of the month to generate your monthly charges.

Storage Example:

Assume you store 100GB (107,374,182,400 bytes) of data in Amazon S3 Standard in your bucket for 15 days in March, and 100TB (109,951,162,777,600 bytes) of data in Amazon S3 Standard for the final 16 days in March.

At the end of March, you would have the following usage in Byte-Hours: Total Byte-Hour usage = [107,374,182,400 bytes x 15 days x (24 hours / day)] + [109,951,162,777,600 bytes x 16 days x (24 hours / day)] = 42,259,901,212,262,400 Byte-Hours.

Let's convert this to GB-Months: 42,259,901,212,262,400 Byte-Hours / 1,073,741,824 bytes per GB / 744 hours per month = 52,900 GB-Months

This usage volume crosses two different volume tiers. The monthly storage price is calculated below assuming the data is stored in the US East (Northern Virginia) Region:
First 50 TB Tier: 51,200 GB x $0.023 = $1,177.60
50 TB to 450 TB Tier: 1,700 GB x $0.022 = $37.40

Total Storage Fee = $1,177.60 + $37.40 = $1,215.00
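The arithmetic above can be reproduced in a few lines of Python, which may help when adapting the example to your own usage (the prices are the US East (Northern Virginia) rates quoted above):

```python
# Reproduce the FAQ's storage example.
GB = 1024 ** 3

byte_hours = (100 * GB * 15 * 24) + (100 * 1024 * GB * 16 * 24)
gb_months = byte_hours / GB / 744           # 52,900 GB-Months

# First 50 TB at $0.023/GB, remainder at $0.022/GB.
fee = 51_200 * 0.023 + (gb_months - 51_200) * 0.022
print(round(gb_months), round(fee, 2))      # 52900 1215.0
```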

Network Data Transferred In:

Amazon S3 Data Transfer In pricing is summarized on the Amazon S3 Pricing page. This represents the amount of data sent to your Amazon S3 buckets.

Network Data Transferred Out:

Amazon S3 Data Transfer Out pricing is summarized on the Amazon S3 Pricing page. For Amazon S3, this charge applies whenever data is read from any of your buckets from a location outside of the given Amazon S3 Region.

Data Transfer Out pricing rate tiers take into account your aggregate Data Transfer Out from a given region to the Internet across Amazon EC2, Amazon S3, Amazon RDS, Amazon SimpleDB, Amazon SQS, Amazon SNS and Amazon VPC. These tiers do not apply to Data Transfer Out from Amazon S3 in one AWS Region to another AWS Region.

Data Transfer Out Example:
Assume you transfer 1TB of data out of Amazon S3 from the US East (Northern Virginia) Region to the Internet every day for a given 31-day month. Assume you also transfer 1TB of data out of an Amazon EC2 instance from the same region to the Internet over the same 31-day month.

Your aggregate Data Transfer would be 62 TB (31 TB from Amazon S3 and 31 TB from Amazon EC2). This equates to 63,488 GB (62 TB * 1024 GB/TB).

This usage volume crosses three different volume tiers. The monthly Data Transfer Out fee is calculated below assuming the Data Transfer occurs in the US East (Northern Virginia) Region:
10 TB Tier: 10,239 GB (10 x 1024 GB/TB - 1 GB free) x $0.09 = $921.51
10 TB to 50 TB Tier: 40,960 GB (40 x 1024 GB/TB) x $0.085 = $3,481.60
50 TB to 150 TB Tier: 12,288 GB (remainder) x $0.070 = $860.16

Total Data Transfer Out Fee = $921.51 + $3,481.60 + $860.16 = $5,263.27

Data Requests:

Amazon S3 Request pricing is summarized on the Amazon S3 Pricing Chart.

Request Example:
Assume you transfer 10,000 files into Amazon S3 and transfer 20,000 files out of Amazon S3 each day during the month of March. Then, you delete 5,000 files on March 31st.
Total PUT requests = 10,000 requests x 31 days = 310,000 requests
Total GET requests = 20,000 requests x 31 days = 620,000 requests
Total DELETE requests = 5,000 requests x 1 day = 5,000 requests

Assuming your bucket is in the US East (Northern Virginia) Region, the Request fees are calculated below:
310,000 PUT Requests: 310,000 requests x $0.005/1,000 = $1.55
620,000 GET Requests: 620,000 requests x $0.004/10,000 = $0.25
5,000 DELETE requests = 5,000 requests x $0.00 (no charge) = $0.00

Data Retrieval:

Amazon S3 data retrieval pricing applies for the S3 Standard-Infrequent Access (S3 Standard-IA) and S3 One Zone-IA storage classes and is summarized on the Amazon S3 Pricing page.

Data Retrieval Example:
Assume in one month you retrieve 300GB of S3 Standard-IA, with 100GB going out to the Internet, 100GB going to EC2 in the same AWS region, and 100GB going to CloudFront in the same AWS Region.

Your data retrieval fees for the month would be calculated as 300GB x $0.01/GB = $3.00. Note that you would also pay network data transfer fees for the portion that went out to the Internet.

Please see here for details on billing of objects archived to Amazon Glacier.

* Your usage for the free tier is calculated each month across all regions except the AWS GovCloud Region and automatically applied to your bill – unused monthly usage will not roll over. Restrictions apply; see the offer terms for more details.


Q:  Why do prices vary depending on which Amazon S3 Region I choose?

We charge less where our costs are less. For example, our costs are lower in the US East (Northern Virginia) Region than in the US West (Northern California) Region.


Q:  How am I charged for accessing Amazon S3 through the AWS Management Console?

Normal Amazon S3 pricing applies when accessing the service through the AWS Management Console. To provide an optimized experience, the AWS Management Console may proactively execute requests. Also, some interactive operations result in more than one request to the service.


Q:  How am I charged if my Amazon S3 buckets are accessed from another AWS account?

Normal Amazon S3 pricing applies when your storage is accessed by another AWS Account. Alternatively, you may choose to configure your bucket as a Requester Pays bucket, in which case the requester will pay the cost of requests and downloads of your Amazon S3 data.

You can find more information on Requester Pays bucket configurations in the Amazon S3 Documentation.


Q:  Do your prices include taxes?

Except as otherwise noted, our prices are exclusive of applicable taxes and duties, including VAT and applicable sales tax. For customers with a Japanese billing address, use of AWS services is subject to Japanese Consumption Tax.

Learn more about taxes on AWS services »


Security
Q:  How secure is my data in Amazon S3?   

Amazon S3 is secure by default. Upon creation, only the resource owners have access to Amazon S3 resources they create. Amazon S3 supports user authentication to control access to data. You can use access control mechanisms such as bucket policies and Access Control Lists (ACLs) to selectively grant permissions to users and groups of users. The Amazon S3 console highlights your publicly accessible buckets, indicates the source of public accessibility, and also warns you if changes to your bucket policies or bucket ACLs would make your bucket publicly accessible.

You can securely upload/download your data to Amazon S3 via SSL endpoints using the HTTPS protocol. If you need extra security you can use the Server-Side Encryption (SSE) option to encrypt data stored at rest. You can configure your Amazon S3 buckets to automatically encrypt objects before storing them if the incoming storage requests do not have any encryption information. Alternatively, you can use your own encryption libraries to encrypt data before storing it in Amazon S3.


Q:  How can I control access to my data stored on Amazon S3?

Customers may use four mechanisms for controlling access to Amazon S3 resources: Identity and Access Management (IAM) policies, bucket policies, Access Control Lists (ACLs), and Query String Authentication.

IAM enables organizations with multiple employees to create and manage multiple users under a single AWS account. With IAM policies, customers can grant IAM users fine-grained control over their Amazon S3 buckets or objects while retaining full control over everything the users do.

With bucket policies, customers can define rules that apply broadly across all requests to their Amazon S3 resources, such as granting write privileges to a subset of Amazon S3 resources. Customers can also restrict access based on aspects of the request, such as the HTTP referrer or IP address.

With ACLs, customers can grant specific permissions (i.e. READ, WRITE, FULL_CONTROL) to specific users for an individual bucket or object.

With Query String Authentication, customers can create a URL to an Amazon S3 object that is only valid for a limited time. For more information on the various access control policies available in Amazon S3, please refer to the Access Control topic in the Amazon S3 Developer Guide.
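
As an illustration of Query String Authentication, the following sketch uses the AWS SDK for Python (boto3) to create a time-limited URL; the bucket and key names are placeholders.

    import boto3

    s3 = boto3.client("s3")
    # Anyone holding this URL can GET the object until it expires.
    url = s3.generate_presigned_url(
        "get_object",
        Params={"Bucket": "my-bucket", "Key": "confidential/report.csv"},
        ExpiresIn=3600,  # seconds; the URL stops working after one hour
    )
    print(url)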


Q:  Does Amazon S3 support data access auditing?

Yes, customers can optionally configure an Amazon S3 bucket to create access log records for all requests made against it. Alternatively, customers who need to capture IAM/user identity information in their logs can configure AWS CloudTrail Data Events.

These access log records can be used for audit purposes and contain details about the request, such as the request type, the resources specified in the request, and the time and date the request was processed.


Q:  What options do I have for encrypting data stored on Amazon S3?

You can choose to encrypt data using SSE-S3, SSE-C, SSE-KMS, or a client library such as the Amazon S3 Encryption Client. All four enable you to store sensitive data encrypted at rest in Amazon S3.

SSE-S3 provides an integrated solution where Amazon handles key management and key protection using multiple layers of security. You should choose SSE-S3 if you prefer to have Amazon manage your keys.

SSE-C enables you to leverage Amazon S3 to perform the encryption and decryption of your objects while retaining control of the keys used to encrypt objects. With SSE-C, you don’t need to implement or use a client-side library to perform the encryption and decryption of objects you store in Amazon S3, but you do need to manage the keys that you send to Amazon S3 to encrypt and decrypt objects. Use SSE-C if you want to maintain your own encryption keys, but don’t want to implement or leverage a client-side encryption library.

SSE-KMS enables you to use AWS Key Management Service (AWS KMS) to manage your encryption keys. Using AWS KMS to manage your keys provides several additional benefits. With AWS KMS, there are separate permissions for the use of the master key, providing an additional layer of control as well as protection against unauthorized access to your objects stored in Amazon S3. AWS KMS provides an audit trail so you can see who used your key to access which object and when, as well as view failed attempts to access data from users without permission to decrypt the data. Also, AWS KMS provides additional security controls to support customer efforts to comply with PCI-DSS, HIPAA/HITECH, and FedRAMP industry requirements.

Using an encryption client library, such as the Amazon S3 Encryption Client, you retain control of the keys and complete the encryption and decryption of objects client-side using an encryption library of your choice. Some customers prefer full end-to-end control of the encryption and decryption of objects; that way, only encrypted objects are transmitted over the Internet to Amazon S3. Use a client-side library if you want to maintain control of your encryption keys, are able to implement or use a client-side encryption library, and need to have your objects encrypted before they are sent to Amazon S3 for storage.

For more information on using Amazon S3 SSE-S3, SSE-C, or SSE-KMS, please refer to the topic on Using Encryption in the Amazon S3 Developer Guide.
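
As a rough boto3 sketch of how these options are selected per request (the bucket, key, and KMS key alias are placeholders):

    import boto3

    s3 = boto3.client("s3")

    # SSE-S3: Amazon manages and protects the keys
    s3.put_object(Bucket="my-bucket", Key="doc.txt", Body=b"data",
                  ServerSideEncryption="AES256")

    # SSE-KMS: keys are managed in AWS KMS (key alias is a placeholder)
    s3.put_object(Bucket="my-bucket", Key="doc.txt", Body=b"data",
                  ServerSideEncryption="aws:kms",
                  SSEKMSKeyId="alias/my-key")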


Q:  Can I comply with EU data privacy regulations using Amazon S3?

Customers can choose to store all data in the EU by using the EU (Frankfurt), EU (Ireland), EU (London), or EU (Paris) region. It is your responsibility to ensure that you comply with EU privacy laws. Please see the AWS GDPR Center for more information.


Q:  Where can I find more information about security on AWS?

For more information on security on AWS please refer to our Amazon Web Services: Overview of Security Processes document.


Q:  What is an Amazon VPC Endpoint for Amazon S3?

An Amazon VPC Endpoint for Amazon S3 is a logical entity within a VPC that allows connectivity only to S3. The VPC Endpoint routes requests to S3 and routes responses back to the VPC. For more information about VPC Endpoints, read Using VPC Endpoints.


Q:  Can I allow a specific Amazon VPC Endpoint access to my Amazon S3 bucket?

You can limit access to your bucket from a specific Amazon VPC Endpoint or a set of endpoints using Amazon S3 bucket policies. S3 bucket policies now support a condition, aws:sourceVpce, that you can use to restrict access. For more details and example policies, read Using VPC Endpoints.
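
For example, a bucket policy like the following sketch denies requests that do not arrive through a specific VPC Endpoint; the bucket name and endpoint ID are placeholders, so adapt them before use.

    import json
    import boto3

    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Sid": "DenyAccessExceptFromVpce",
            "Effect": "Deny",
            "Principal": "*",
            "Action": "s3:*",
            "Resource": ["arn:aws:s3:::my-bucket",
                         "arn:aws:s3:::my-bucket/*"],
            "Condition": {"StringNotEquals": {"aws:sourceVpce": "vpce-1a2b3c4d"}},
        }],
    }
    boto3.client("s3").put_bucket_policy(Bucket="my-bucket",
                                         Policy=json.dumps(policy))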


Q: What is Amazon Macie?

Amazon Macie is an AI-powered security service that helps you prevent data loss by automatically discovering, classifying, and protecting sensitive data stored in Amazon S3. Amazon Macie uses machine learning to recognize sensitive data such as personally identifiable information (PII) or intellectual property, assigns a business value, and provides visibility into where this data is stored and how it is being used in your organization. Amazon Macie continuously monitors data access activity for anomalies, and delivers alerts when it detects risk of unauthorized access or inadvertent data leaks.


Q:  What can I do with Amazon Macie?

You can use Amazon Macie to protect against security threats by continuously monitoring your data and account credentials. Amazon Macie gives you an automated and low touch way to discover and classify your business data. It provides controls via templated Lambda functions to revoke access or trigger password reset policies upon the discovery of suspicious behavior or unauthorized data access to entities or third-party applications. When alerts are generated, you can use Amazon Macie for incident response, using Amazon CloudWatch Events to swiftly take action to protect your data.


Q:  How does Amazon Macie secure your data?

As part of the data classification process, Amazon Macie identifies customers’ objects in their S3 buckets, and streams the object contents into memory for analysis. When deeper analysis is required for complex file formats, Amazon Macie will download a full copy of the object, only keeping it for the short time it takes to fully analyze the object. Immediately after Amazon Macie has analyzed the file content for data classification, it deletes the stored content and only retains the metadata required for future analysis. At any time, customers can revoke Amazon Macie access to data in the Amazon S3 bucket. For more information, go to the Amazon Macie User Guide.

Durability & Data Protection
Q:  How durable is Amazon S3?

Amazon S3 Standard, S3 Standard–IA, S3 One Zone-IA, and Amazon Glacier are all designed to provide 99.999999999% durability of objects over a given year. This durability level corresponds to an average annual expected loss of 0.000000001% of objects. For example, if you store 10,000,000 objects with Amazon S3, you can on average expect to incur a loss of a single object once every 10,000 years. In addition, Amazon S3 Standard, S3 Standard-IA, and Amazon Glacier are all designed to sustain data in the event of an entire S3 Availability Zone loss.

As with any environment, the best practice is to have a backup and to put in place safeguards against malicious or accidental deletion. For S3 data, that best practice includes secure access permissions, Cross-Region Replication, versioning, and a functioning, regularly tested backup.


Q:  How are Amazon S3 and Amazon Glacier designed to achieve 99.999999999% durability?

Amazon S3 Standard, S3 Standard-IA, and Amazon Glacier storage classes redundantly store your objects on multiple devices across a minimum of three Availability Zones (AZs) in an Amazon S3 Region before returning SUCCESS. The S3 One Zone-IA storage class stores data redundantly across multiple devices within a single AZ. These services are designed to sustain concurrent device failures by quickly detecting and repairing any lost redundancy, and they also regularly verify the integrity of your data using checksums.


Q:  What checksums does Amazon S3 employ to detect data corruption?

Amazon S3 uses a combination of Content-MD5 checksums and cyclic redundancy checks (CRCs) to detect data corruption. Amazon S3 performs these checksums on data at rest and repairs any corruption using redundant data. In addition, the service calculates checksums on all network traffic to detect corruption of data packets when storing or retrieving data.


Q:  What is Versioning?

Versioning allows you to preserve, retrieve, and restore every version of every object stored in an Amazon S3 bucket. Once you enable Versioning for a bucket, Amazon S3 preserves existing objects anytime you perform a PUT, POST, COPY, or DELETE operation on them. By default, GET requests will retrieve the most recently written version. Older versions of an overwritten or deleted object can be retrieved by specifying a version in the request.


Q:  Why should I use Versioning?

Amazon S3 provides customers with a highly durable storage infrastructure. Versioning offers an additional level of protection by providing a means of recovery when customers accidentally overwrite or delete objects. This allows you to easily recover from unintended user actions and application failures. You can also use Versioning for data retention and archiving.


Q:  How do I start using Versioning?

You can start using Versioning by enabling a setting on your Amazon S3 bucket. For more information on how to enable Versioning, please refer to the Amazon S3 Technical Documentation.
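
For instance, with boto3 the setting is a single call (the bucket name is a placeholder):

    import boto3

    boto3.client("s3").put_bucket_versioning(
        Bucket="my-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )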

Q:  How does Versioning protect me from accidental deletion of my objects?

When a user performs a DELETE operation on an object, subsequent simple (un-versioned) requests will no longer retrieve the object. However, all versions of that object will continue to be preserved in your Amazon S3 bucket and can be retrieved or restored. Only the owner of an Amazon S3 bucket can permanently delete a version. You can set Lifecycle rules to manage the lifetime and the cost of storing multiple versions of your objects.


Q:  Can I setup a trash, recycle bin, or rollback window on my Amazon S3 objects to recover from deletes and overwrites?

You can use Lifecycle rules along with Versioning to implement a rollback window for your Amazon S3 objects. For example, with your versioning-enabled bucket, you can set up a rule that archives all of your previous versions to the lower-cost Glacier storage class and deletes them after 100 days, giving you a 100-day window to roll back any changes on your data while lowering your storage costs.
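
One possible boto3 sketch of such a rollback window, assuming a versioning-enabled bucket: it archives noncurrent (previous) versions to Glacier after 30 days and permanently deletes them 100 days after they become noncurrent. The rule ID and timings are illustrative.

    import boto3

    boto3.client("s3").put_bucket_lifecycle_configuration(
        Bucket="my-bucket",
        LifecycleConfiguration={"Rules": [{
            "ID": "rollback-window",
            "Filter": {"Prefix": ""},  # empty prefix = whole bucket
            "Status": "Enabled",
            "NoncurrentVersionTransitions": [
                {"NoncurrentDays": 30, "StorageClass": "GLACIER"},
            ],
            "NoncurrentVersionExpiration": {"NoncurrentDays": 100},
        }]},
    )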


Q:  How can I ensure maximum protection of my preserved versions?

Versioning’s Multi-Factor Authentication (MFA) Delete capability can be used to provide an additional layer of security. By default, all requests to your Amazon S3 bucket require your AWS account credentials. If you enable Versioning with MFA Delete on your Amazon S3 bucket, two forms of authentication are required to permanently delete a version of an object: your AWS account credentials and a valid six-digit code and serial number from an authentication device in your physical possession. To learn more about enabling Versioning with MFA Delete, including how to purchase and activate an authentication device, please refer to the Amazon S3 Technical Documentation.


Q:  How am I charged for using Versioning?

Normal Amazon S3 rates apply for every version of an object stored or requested. For example, let’s look at the following scenario to illustrate storage costs when utilizing Versioning (let’s assume the current month is 31 days long):

1) Day 1 of the month: You perform a PUT of 4 GB (4,294,967,296 bytes) on your bucket.
2) Day 16 of the month: You perform a PUT of 5 GB (5,368,709,120 bytes) within the same bucket using the same key as the original PUT on Day 1.

When analyzing the storage costs of the above operations, please note that the 4 GB object from Day 1 is not deleted from the bucket when the 5 GB object is written on Day 16. Instead, the 4 GB object is preserved as an older version and the 5 GB object becomes the most recently written version of the object within your bucket. At the end of the month:

Total Byte-Hour usage
[4,294,967,296 bytes x 31 days x (24 hours / day)] + [5,368,709,120 bytes x 16 days x (24 hours / day)] = 5,257,039,970,304 Byte-Hours.

Conversion to Total GB-Months
5,257,039,970,304 Byte-Hours x (1 GB / 1,073,741,824 bytes) x (1 month / 744 hours) = 6.581 GB-Month

The fee is calculated based on the current rates for your region on the Amazon S3 Pricing Page.


S3 Standard-Infrequent Access (S3 Standard-IA)
Q:  What is S3 Standard-Infrequent Access?

Amazon S3 Standard-Infrequent Access (S3 Standard-IA) is an Amazon S3 storage class for data that is accessed less frequently but requires rapid access when needed. S3 Standard-IA offers the high durability, throughput, and low latency of the Amazon S3 Standard storage class, with a low per-GB storage price and per-GB retrieval fee. This combination of low cost and high performance makes S3 Standard-IA ideal for long-term storage, backups, and as a data store for disaster recovery. The S3 Standard-IA storage class is set at the object level and can exist in the same bucket as the S3 Standard or S3 One Zone-IA storage classes, allowing you to use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.


Q:  Why would I choose to use S3 Standard-IA?

S3 Standard-IA is ideal for data that is accessed less frequently, but requires rapid access when needed. S3 Standard-IA is ideally suited for long-term file storage, older sync and share storage, and other aging data.


Q:  What performance does S3 Standard-IA offer?

S3 Standard-IA provides the same performance as the S3 Standard and S3 One Zone-IA storage classes.



Q:  How durable and available is S3 Standard-IA?

S3 Standard-IA is designed for the same 99.999999999% durability as the S3 Standard and Amazon Glacier storage classes. S3 Standard-IA is designed for 99.9% availability, and carries a service level agreement providing service credits if availability is less than our service commitment in any billing cycle.


Q:  How do I get my data into S3 Standard-IA?

There are two ways to get data into S3 Standard-IA. You can directly PUT into S3 Standard-IA by specifying STANDARD_IA in the x-amz-storage-class header. You can also set Lifecycle policies to transition objects from the S3 Standard to the S3 Standard-IA storage class.
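
A minimal boto3 example of a direct PUT into S3 Standard-IA (the names are placeholders); the StorageClass parameter is what sets the x-amz-storage-class header:

    import boto3

    boto3.client("s3").put_object(
        Bucket="my-bucket",
        Key="backups/2018-09.tar.gz",
        Body=open("2018-09.tar.gz", "rb"),
        StorageClass="STANDARD_IA",  # sent as the x-amz-storage-class header
    )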


Q:  Are my S3 Standard-IA objects backed by the Amazon S3 Service Level Agreement?

Yes, S3 Standard-IA is backed with the Amazon S3 Service Level Agreement, and customers are eligible for service credits if availability is less than our service commitment in any billing cycle.


Q:  How will my latency and throughput performance be impacted as a result of using S3 Standard-IA?

You should expect the same latency and throughput performance as the S3 Standard storage class when using S3 Standard-IA.


Q:  How am I charged for using S3 Standard-IA?

Please see the Amazon S3 pricing page for general information about S3 Standard-IA pricing.


Q:  What charges will I incur if I change the storage class of an object from S3 Standard-IA to S3 Standard with a COPY request?

You will incur charges for an S3 Standard-IA COPY request and an S3 Standard-IA data retrieval.


Q:  Is there a minimum storage duration charge for S3 Standard-IA?

S3 Standard-IA is designed for long-lived but infrequently accessed data that is retained for months or years. Data that is deleted from S3 Standard-IA within 30 days will be charged for a full 30 days. Please see the Amazon S3 pricing page for information about S3 Standard-IA pricing.


Q:  Is there a minimum object storage charge for S3 Standard-IA?

S3 Standard-IA is designed for larger objects and has a minimum billable object size of 128KB. Objects smaller than 128KB in size will incur storage charges as if the object were 128KB. For example, a 6KB object in S3 Standard-IA will incur S3 Standard-IA storage charges for 6KB and an additional minimum object size fee equivalent to 122KB at the S3 Standard-IA storage price. Please see the Amazon S3 pricing page for information about S3 Standard-IA pricing.


Q:  Can I tier objects from S3 Standard-IA to S3 One Zone-IA or Amazon Glacier?

Yes. In addition to using Lifecycle policies to migrate objects from S3 Standard to S3 Standard-IA, you can also set up Lifecycle policies to tier objects from S3 Standard-IA to S3 One Zone-IA or Amazon Glacier.

S3 One Zone-Infrequent Access (S3 One Zone-IA)
Q:  What is S3 One Zone-IA storage class?

The S3 One Zone-IA storage class is an Amazon S3 storage class that lets customers store objects in a single Availability Zone. S3 One Zone-IA storage redundantly stores data within that single Availability Zone to deliver storage at 20% less cost than geographically redundant S3 Standard-IA storage, which stores data redundantly across multiple geographically separate Availability Zones.

S3 One Zone-IA offers a 99% availability SLA and is also designed for eleven 9’s of durability within the Availability Zone. However, unlike the S3 Standard and S3 Standard-IA storage classes, data stored in the S3 One Zone-IA storage class will be lost in the event of Availability Zone destruction.

S3 One Zone-IA storage offers the same Amazon S3 features as S3 Standard and S3 Standard-IA and is used through the Amazon S3 API, CLI and console. S3 One Zone-IA storage class is set at the object level and can exist in the same bucket as S3 Standard and S3 Standard-IA storage classes. You can use S3 Lifecycle policies to automatically transition objects between storage classes without any application changes.


Q:  What use cases are best suited for S3 One Zone-IA storage class?

Customers can use S3 One Zone-IA for infrequently-accessed storage, like backup copies, disaster recovery copies, or other easily re-creatable data.


Q:  What performance does S3 One Zone-IA storage offer?

S3 One Zone-IA storage class offers similar performance to S3 Standard and S3 Standard-Infrequent Access storage.


Q:  How durable is the S3 One Zone-IA storage class?

S3 One Zone-IA storage class is designed for 99.999999999% durability within an Availability Zone. However, S3 One Zone-IA storage is not designed to withstand the loss of availability or total destruction of an Availability Zone, in which case data stored in S3 One Zone-IA will be lost. In contrast, S3 Standard, S3 Standard-Infrequent Access, and Amazon Glacier storage are designed to withstand loss of availability or the destruction of an Availability Zone. S3 One Zone-IA can deliver the same or better durability and availability than most modern, physical data centers, while providing the added benefit of elasticity of storage and the Amazon S3 feature set.


Q:  What is the availability SLA for S3 One Zone-IA storage class?

S3 One Zone-IA offers a 99% availability SLA. For comparison, S3 Standard offers a 99.9% availability SLA and S3 Standard-Infrequent Access offers a 99% availability SLA. As with all S3 storage classes, S3 One Zone-IA storage class carries a service level agreement providing service credits if availability is less than our service commitment in any billing cycle. See the Amazon S3 Service Level Agreement.


Q:  How will using S3 One Zone-IA storage affect my latency and throughput?

You should expect similar latency and throughput in S3 One Zone-IA storage class to Amazon S3 Standard and S3 Standard-IA storage classes.


Q:  How am I charged for using S3 One Zone-IA storage class?

Like S3 Standard-IA, S3 One Zone-IA charges for the amount of storage per month, bandwidth, requests, early delete and small object fees, and a data retrieval fee. Amazon S3 One Zone-IA storage is 20% cheaper than Amazon S3 Standard-IA for storage by month, and shares the same pricing for bandwidth, requests, early delete and small object fees, and the data retrieval fee.

As with S3 Standard-Infrequent Access, if you delete an S3 One Zone-IA object within 30 days of creating it, you will incur an early delete charge. For example, if you PUT an object and then delete it 10 days later, you are still charged for 30 days of storage.

Like S3 Standard-IA, the S3 One Zone-IA storage class has a minimum billable object size of 128KB. Objects smaller than 128KB in size will incur storage charges as if the object were 128KB. For example, a 6KB object in S3 One Zone-IA will incur storage charges for 6KB and an additional minimum object size fee equivalent to 122KB at the S3 One Zone-IA storage price. Please see the pricing page for information about S3 One Zone-IA pricing.


Q:  Is an S3 One Zone-IA “Zone” the same thing as an AWS Availability Zone?

Yes. Each AWS Region is a separate geographic area. Each region has multiple, isolated locations known as Availability Zones. The Amazon S3 One Zone-IA storage class uses an individual AWS Availability Zone within the region.


Q:  Are there differences between how Amazon EC2 and Amazon S3 work with Availability Zone-specific resources?

Yes. Amazon EC2 provides you the ability to pick the AZ to place resources, such as compute instances, within a region. When you use S3 One Zone-IA, S3 One Zone-IA assigns an AWS Availability Zone in the region according to available capacity.


Q:  Can I have a bucket that has different objects in different storage classes and Availability Zones?

Yes, you can have a bucket that has different objects stored in S3 Standard, S3 Standard-IA and S3 One Zone-IA.


Q:  Is S3 One Zone-IA available in all AWS Regions in which S3 operates?

Yes.


Q:  How much disaster recovery protection do I forgo by using S3 One Zone-IA?

Each Availability Zone uses redundant power and networking. Within an AWS Region, Availability Zones are on different flood plains and earthquake fault zones, and are geographically separated for fire protection. S3 Standard and S3 Standard-IA storage classes offer protection against these sorts of disasters by storing your data redundantly in multiple Availability Zones. S3 One Zone-IA offers protection against equipment failure within an Availability Zone, but it does not protect against the loss of the Availability Zone, in which case, data stored in S3 One Zone-IA would be lost. Using S3 One Zone-IA, S3 Standard, and S3 Standard-IA options, you can choose the storage class that best fits the durability and availability needs of your storage.


Amazon Glacier
Q:  Does Amazon S3 provide capabilities for archiving objects to lower cost storage options?

Yes, Amazon S3 enables you to utilize Amazon Glacier’s extremely low-cost storage service for data archival. Amazon Glacier stores data for as little as $0.004 per gigabyte per month. To keep costs low yet suitable for varying retrieval needs, Amazon Glacier provides three options for access to archives, ranging from a few minutes to several hours. Some examples of archive use cases include digital media archives, financial and healthcare records, raw genomic sequence data, long-term database backups, and data that must be retained for regulatory compliance.

Q:  How can I store my data using the Amazon Glacier option?

You can use Lifecycle rules to automatically archive sets of Amazon S3 objects to Amazon Glacier based on object age. Use the Amazon S3 Management Console, the AWS SDKs, or the Amazon S3 APIs to define rules for archival. Rules specify a prefix and time period. The prefix (e.g. “logs/”) identifies the object(s) subject to the rule. The time period specifies either the number of days from object creation date (e.g. 180 days) or the specified date after which the object(s) should be archived. Any S3 Standard, S3 Standard-IA, or S3 One Zone-IA objects which have names beginning with the specified prefix and which have aged past the specified time period are archived to Amazon Glacier. To retrieve Amazon S3 data stored in Amazon Glacier, initiate a retrieval job via the Amazon S3 APIs or Management Console. Once the retrieval job is complete, you can access your data through an Amazon S3 GET object request.

For more information on using Lifecycle rules for archival to Amazon Glacier, please refer to the Object Archival topic in the Amazon S3 Developer Guide.


Q:  Can I use the Amazon S3 APIs or Management Console to list objects that I’ve archived to Amazon Glacier?

Yes, like Amazon S3’s other storage classes (S3 Standard, S3 Standard-IA, and S3 One Zone-IA), Amazon Glacier objects stored using Amazon S3’s APIs or Management Console have an associated user-defined name. You can get a real-time list of all of your Amazon S3 object names, including those stored using the Amazon Glacier storage class, using the S3 LIST API or the S3 Inventory report.


Q:  Can I use Amazon Glacier APIs to access objects that I’ve archived to Amazon Glacier?

Because Amazon S3 maintains the mapping between your user-defined object name and Amazon Glacier’s system-defined identifier, Amazon S3 objects that are stored using the Amazon Glacier storage class are only accessible through the Amazon S3 APIs or the Amazon S3 Management Console.


Q:  How can I retrieve my objects that are archived in Amazon Glacier?

To retrieve Amazon S3 data stored in Amazon Glacier, initiate a retrieval request using the Amazon S3 APIs or the Amazon S3 Management Console. The retrieval request creates a temporary copy of your data in the S3 RRS or S3 Standard-IA storage class while leaving the archived data intact in Amazon Glacier. You can specify the amount of time in days for which the temporary copy is stored in S3. You can then access your temporary copy from S3 through an Amazon S3 GET request on the archived object.
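
A hedged boto3 sketch of such a retrieval request (the bucket and key are placeholders):

    import boto3

    boto3.client("s3").restore_object(
        Bucket="my-bucket",
        Key="archive/2015-logs.tar.gz",
        RestoreRequest={
            "Days": 7,  # how long S3 keeps the temporary copy
            "GlacierJobParameters": {"Tier": "Standard"},  # or "Expedited"/"Bulk"
        },
    )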


Q:  How long will it take to restore my objects archived in Amazon Glacier?

When processing a retrieval job, Amazon S3 first retrieves the requested data from Amazon Glacier, and then creates a temporary copy of the requested data in S3 (which typically takes a few minutes). The access time of your request depends on the retrieval option you choose: Expedited, Standard, or Bulk retrievals. For all but the largest objects (250MB+), data accessed using Expedited retrievals is typically made available within 1-5 minutes. Objects retrieved using Standard retrievals typically complete within 3-5 hours. Bulk retrievals typically complete within 5-12 hours. For more information about Glacier retrieval options, please refer to the Glacier FAQ.


Q:  What am I charged for archiving objects in Amazon Glacier?

Amazon Glacier storage is priced based on monthly storage capacity and the number of Lifecycle transition requests into Amazon Glacier. Objects that are archived to Amazon Glacier have a minimum storage duration of 90 days, and objects deleted before 90 days incur a pro-rated charge equal to the storage charge for the remaining days. See the Amazon S3 pricing page for current pricing.


Q:  How is my storage charge calculated for Amazon S3 objects archived to Amazon Glacier?

The volume of storage billed in a month is based on average storage used throughout the month, measured in gigabyte-months (GB-Months). Amazon S3 calculates the object size as the amount of data you stored plus an additional 32KB of Amazon Glacier data plus an additional 8KB of S3 Standard storage class data. Amazon Glacier requires an additional 32KB of data per object for Glacier’s index and metadata so you can identify and retrieve your data. Amazon S3 requires 8KB to store and maintain the user-defined name and metadata for objects archived to Amazon Glacier. This enables you to get a real-time list of all of your Amazon S3 objects, including those stored using the Amazon Glacier storage class, using the Amazon S3 LIST API or the S3 Inventory report. For example, if you have archived 100,000 objects that are 1GB each, your billable storage would be:

1.000032 gigabytes for each object x 100,000 objects = 100,003.2 gigabytes of Amazon Glacier storage.
0.000008 gigabytes for each object x 100,000 objects = 0.8 gigabytes of Amazon S3 Standard storage.

The fee is calculated based on the current rates for your AWS Region on the Amazon S3 Pricing Page.


Q:  How much data can I retrieve from Amazon Glacier for free?

You can retrieve 10GB of your Amazon Glacier data per month for free with the AWS free tier. The free tier allowance can be used at any time during the month and applies to Amazon Glacier Standard retrievals.


Q:  How am I charged for deleting objects from Amazon Glacier that are less than 90 days old?

Amazon Glacier is designed for use cases where data is retained for months, years, or decades. Deleting data that is archived to Amazon Glacier is free if the objects being deleted have been archived in Amazon Glacier for 90 days or longer. If an object archived in Amazon Glacier is deleted or overwritten within 90 days of being archived, there will be an early deletion fee. This fee is prorated. If you delete 1GB of data 30 days after uploading it, you will be charged an early deletion fee for 60 days of Amazon Glacier storage. If you delete 1 GB of data after 60 days, you will be charged for 30 days of Amazon Glacier storage.

Q:  How much does it cost to retrieve data from Amazon Glacier?

There are three ways to restore data from Amazon Glacier – Expedited, Standard, and Bulk Retrievals - and each has a different per-GB retrieval fee and per-archive request fee (i.e. requesting one archive counts as one request). For detailed Glacier pricing by AWS Region, please visit the Amazon Glacier pricing page.

Query in Place
Q:  What is "Query in Place" functionality?

Amazon S3 allows customers to run sophisticated queries against data in place, without the need to move it into a separate analytics platform. The ability to query this data in place on Amazon S3 can significantly increase performance and reduce cost for analytics solutions leveraging S3 as a data lake. S3 offers multiple query in place options, including S3 Select, Amazon Athena, and Amazon Redshift Spectrum, allowing you to choose one that best fits your use case. You can even use Amazon S3 Select with AWS Lambda to build serverless apps that can take advantage of the in-place processing capabilities provided by S3 Select.


Q:  What is S3 Select?

S3 Select is an Amazon S3 feature that makes it easy to retrieve specific data from the contents of an object using simple SQL expressions without having to retrieve the entire object. You can use S3 Select to retrieve a subset of data using SQL clauses, like SELECT and WHERE, from objects stored in CSV, JSON, or Apache Parquet format. It also works with objects that are compressed with GZIP or BZIP2 (for CSV and JSON objects only), and server-side encrypted objects.


Q:  What can I do with S3 Select?

You can use S3 Select to retrieve a smaller, targeted data set from an object using simple SQL statements. You can use S3 Select with AWS Lambda to build serverless applications that use S3 Select to efficiently and easily retrieve data from Amazon S3 instead of retrieving and processing the entire object. You can also use S3 Select with Big Data frameworks, such as Presto, Apache Hive, and Apache Spark, to scan and filter the data in Amazon S3.
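
As a sketch, the boto3 call below pulls matching rows from a CSV object; the bucket, key, and column names are assumptions for illustration.

    import boto3

    s3 = boto3.client("s3")
    resp = s3.select_object_content(
        Bucket="my-bucket",
        Key="logs/access.csv",
        ExpressionType="SQL",
        Expression="SELECT s.request_uri FROM S3Object s WHERE s.status = '404'",
        InputSerialization={"CSV": {"FileHeaderInfo": "USE"}},
        OutputSerialization={"CSV": {}},
    )
    for event in resp["Payload"]:  # results stream back as events
        if "Records" in event:
            print(event["Records"]["Payload"].decode())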

Q:  Why should I use S3 Select?

S3 Select provides a new way to retrieve specific data using SQL statements from the contents of an object stored in Amazon S3 without having to retrieve the entire object. S3 Select simplifies and improves the performance of scanning and filtering the contents of objects into a smaller, targeted dataset by up to 400%. With S3 Select, you can also perform operational investigations on log files in Amazon S3 without the need to operate or manage a compute cluster.


Q:  What is Amazon Athena?

Amazon Athena is an interactive query service that makes it easy to analyze data in Amazon S3 using standard SQL queries. Athena is serverless, so there is no infrastructure to set up or manage, and you can start analyzing data immediately. You don’t even need to load your data into Athena; it works directly with data stored in any S3 storage class. To get started, just log into the Athena Management Console, define your schema, and start querying. Amazon Athena uses Presto with full standard SQL support and works with a variety of standard data formats, including CSV, JSON, ORC, Apache Parquet and Avro. While Athena is ideal for quick, ad-hoc querying and integrates with Amazon QuickSight for easy visualization, it can also handle complex analysis, including large joins, window functions, and arrays.

Q:  What is Amazon Redshift Spectrum?

Amazon Redshift Spectrum is a feature of Amazon Redshift that enables you to run queries against exabytes of unstructured data in Amazon S3 with no loading or ETL required. When you issue a query, it goes to the Amazon Redshift SQL endpoint, which generates and optimizes a query plan. Amazon Redshift determines what data is local and what is in Amazon S3, generates a plan to minimize the amount of Amazon S3 data that needs to be read, and requests Redshift Spectrum workers out of a shared resource pool to read and process the data from Amazon S3.

Redshift Spectrum scales out to thousands of instances if needed, so queries run quickly regardless of data size. And, you can use the exact same SQL for Amazon S3 data as you do for your Amazon Redshift queries today and connect to the same Amazon Redshift endpoint using the same BI tools. Redshift Spectrum lets you separate storage and compute, allowing you to scale each independently. You can set up as many Amazon Redshift clusters as you need to query your Amazon S3 data lake, providing high availability and limitless concurrency. Redshift Spectrum gives you the freedom to store your data where you want, in the format you want, and have it available for processing when you need it.


Event Notification
Q: What are Amazon S3 Event Notifications?

Amazon S3 event notifications can be sent in response to actions in Amazon S3 like PUTs, POSTs, COPYs, or DELETEs. Notification messages can be sent through Amazon SNS or Amazon SQS, or delivered directly to AWS Lambda.

Q:  What can I do with Amazon S3 event notifications?

Amazon S3 event notifications enable you to run workflows, send alerts, or perform other actions in response to changes in your objects stored in S3. You can use S3 event notifications to set up triggers to perform actions including transcoding media files when they are uploaded, processing data files when they become available, and synchronizing S3 objects with other data stores. You can also set up event notifications based on object name prefixes and suffixes. For example, you can choose to receive notifications on object names that start with “images/."
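
For example, a notification configuration like this boto3 sketch invokes a Lambda function for new objects under “images/”; the function ARN is a placeholder, and it assumes S3 is already permitted to invoke the function.

    import boto3

    boto3.client("s3").put_bucket_notification_configuration(
        Bucket="my-bucket",
        NotificationConfiguration={"LambdaFunctionConfigurations": [{
            "LambdaFunctionArn":
                "arn:aws:lambda:us-east-1:123456789012:function:process-image",
            "Events": ["s3:ObjectCreated:*"],
            "Filter": {"Key": {"FilterRules": [
                {"Name": "prefix", "Value": "images/"},
            ]}},
        }]},
    )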

Q:  What is included in an Amazon S3 event notification?

For a detailed description of the information included in Amazon S3 event notification messages, please refer to the Configuring Amazon S3 Event Notifications topic in the Amazon S3 Developer Guide.

Q: How do I set up Amazon S3 event notifications?

For a detailed description of how to configure event notifications, please refer to the Configuring Amazon S3 event notifications topic in the Amazon S3 Developer Guide. You can learn more about AWS messaging services in the Amazon SNS Documentation and the Amazon SQS Documentation.

Q:  What does it cost to use Amazon S3 event notifications?

There are no additional charges for using Amazon S3 for event notifications. You pay only for use of Amazon SNS or Amazon SQS to deliver event notifications, or for the cost of running an AWS Lambda function. Visit the Amazon SNS, Amazon SQS, or AWS Lambda pricing pages to view the pricing details for these services.


Amazon S3 Transfer Acceleration
Q:  What is S3 Transfer Acceleration?

Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your client and your Amazon S3 bucket. S3 Transfer Acceleration leverages Amazon CloudFront’s globally distributed AWS Edge Locations. As data arrives at an AWS Edge Location, data is routed to your Amazon S3 bucket over an optimized network path.


Q:   How do I get started with S3 Transfer Acceleration?

To get started with S3 Transfer Acceleration, enable S3 Transfer Acceleration on an S3 bucket using the Amazon S3 console, the Amazon S3 API, or the AWS CLI. After S3 Transfer Acceleration is enabled, you can point your Amazon S3 PUT and GET requests to the s3-accelerate endpoint domain name. Your data transfer application must use one of the following two types of endpoints to access the bucket for faster data transfer: <bucketname>.s3-accelerate.amazonaws.com or <bucketname>.s3-accelerate.dualstack.amazonaws.com for the “dual-stack” endpoint. If you want to use standard data transfer, you can continue to use the regular endpoints.

There are certain restrictions on which buckets will support S3 Transfer Acceleration. For details, please refer to the Amazon S3 Developer Guide.
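
As a boto3 sketch (bucket and file names are placeholders), you can enable acceleration and then route transfers through the accelerate endpoint via client configuration rather than hand-building the domain name:

    import boto3
    from botocore.config import Config

    s3 = boto3.client("s3")
    s3.put_bucket_accelerate_configuration(
        Bucket="my-bucket",
        AccelerateConfiguration={"Status": "Enabled"},
    )

    # This client sends its requests to the s3-accelerate endpoint.
    fast_s3 = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
    fast_s3.upload_file("big-video.mp4", "my-bucket", "videos/big-video.mp4")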


Q:   How fast is S3 Transfer Acceleration?

S3 Transfer Acceleration helps you fully utilize your bandwidth, minimize the effect of distance on throughput, and is designed to ensure consistently fast data transfer to Amazon S3 regardless of your client’s location. The amount of acceleration primarily depends on your available bandwidth, the distance between the source and destination, and packet loss rates on the network path. Generally, you will see more acceleration when the source is farther from the destination, when there is more available bandwidth, and/or when the object size is bigger.

One customer measured a 50% reduction in their average time to ingest 300 MB files from a global user base spread across the US, Europe, and parts of Asia to a bucket in the Asia Pacific (Sydney) region. Another customer observed cases where performance improved in excess of 500% for users in South East Asia and Australia uploading 250 MB files (in parts of 50MB) to an S3 bucket in the US East (N. Virginia) region.

Try the speed comparison tool to get a preview of the performance benefit from your location!


Q: Who should use S3 Transfer Acceleration?

S3 Transfer Acceleration is designed to optimize transfer speeds from across the world into S3 buckets. If you are uploading to a centralized bucket from geographically dispersed locations, or if you regularly transfer GBs or TBs of data across continents, you may save hours or days of data transfer time with S3 Transfer Acceleration.


Q:   How secure is S3 Transfer Acceleration?

S3 Transfer Acceleration provides the same security as regular transfers to Amazon S3. All Amazon S3 security features, such as access restriction based on a client’s IP address, are supported as well. S3 Transfer Acceleration communicates with clients over standard TCP and does not require firewall changes. No data is ever saved at AWS Edge Locations.


Q: What if S3 Transfer Acceleration is not faster than a regular Amazon S3 transfer?

Each time you use S3 Transfer Acceleration to upload an object, we will check whether S3 Transfer Acceleration is likely to be faster than a regular Amazon S3 transfer. If we determine that S3 Transfer Acceleration is not likely to be faster than a regular Amazon S3 transfer of the same object to the same destination AWS Region, we will not charge for the use of S3 Transfer Acceleration for that transfer, and we may bypass the S3 Transfer Acceleration system for that upload.


Q: Can I use S3 Transfer Acceleration with multipart uploads?

Yes, S3 Transfer Acceleration supports all bucket-level features, including multipart uploads.

Q:  How should I choose between S3 Transfer Acceleration and Amazon CloudFront’s PUT/POST?

S3 Transfer Acceleration optimizes the TCP protocol and adds additional intelligence between the client and the S3 bucket, making S3 Transfer Acceleration a better choice if a higher throughput is desired. If you have objects that are smaller than 1GB or if the data set is less than 1GB in size, you should consider using Amazon CloudFront's PUT/POST commands for optimal performance.


Q:  How should I choose between S3 Transfer Acceleration and AWS Snow Family (Snowball, Snowball Edge, and Snowmobile)?

The AWS Snow Family is ideal for customers moving large batches of data at once. The AWS Snowball has a typical 5-7 day turnaround time. As a rule of thumb, S3 Transfer Acceleration over a fully-utilized 1 Gbps line can transfer up to 75 TB in the same time period. In general, if it will take more than a week to transfer over the Internet, or there are recurring transfer jobs and there is more than 25Mbps of available bandwidth, S3 Transfer Acceleration is a good option. Another option is to use both: perform initial heavy-lift moves with an AWS Snowball (or a series of AWS Snowballs) and then transfer incremental ongoing changes with S3 Transfer Acceleration.


Q:  Can S3 Transfer Acceleration complement AWS Direct Connect?

AWS Direct Connect is a good choice for customers who have a private networking requirement or who have access to AWS Direct Connect exchanges. S3 Transfer Acceleration is best for submitting data from distributed client locations over the public Internet, or where variable network conditions make throughput poor. Some AWS Direct Connect customers use S3 Transfer Acceleration to help with remote office transfers, where they may suffer from poor Internet performance.


Q:  Can S3 Transfer Acceleration complement the AWS Storage Gateway or a 3rd party gateway?

If you can configure the bucket destination in your 3rd party gateway to use an S3 Transfer Acceleration endpoint domain name, you will see the benefit.

Visit the File section of the Storage Gateway FAQ to learn more about the AWS implementation.

Q: Can S3 Transfer Acceleration complement 3rd party integrated software?

Yes. Software packages that connect directly into Amazon S3 can take advantage of S3 Transfer Acceleration when they send their jobs to Amazon S3.

Learn more about Storage Partner Solutions »


Q:  Is S3 Transfer Acceleration HIPAA eligible?

Yes, AWS has expanded its HIPAA compliance program to include Amazon S3 Transfer Acceleration as a HIPAA eligible service. If you have an executed Business Associate Agreement (BAA) with AWS, you can use Amazon S3 Transfer Acceleration to enable fast, easy, and secure transfers of files including protected health information (PHI) over long distances between your client and your Amazon S3 bucket.

Learn more about HIPAA Compliance »


Storage Management
S3 Object Tagging
Q:  What are S3 object tags?

S3 object tags are key-value pairs applied to S3 objects which can be created, updated or deleted at any time during the lifetime of the object. With these, you’ll have the ability to create Identity and Access Management (IAM) policies, setup S3 Lifecycle policies, and customize storage metrics. These object-level tags can then manage transitions between storage classes and expire objects in the background.


Q:  How do I apply object tags to my objects?

You can add tags to new objects when you upload them or you can add them to existing objects. Up to ten tags can be added to each S3 object and you can use either the AWS Management Console, the REST API, the AWS CLI, or the AWS SDKs to add object tags.
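
A small boto3 example of tagging an existing object (the names and tags are placeholders). Note that PUT replaces the object’s entire tag set, which is why updates must resend any tags you want to keep, as the update question below explains.

    import boto3

    # This call replaces the object's full tag set.
    boto3.client("s3").put_object_tagging(
        Bucket="my-bucket",
        Key="reports/q3.pdf",
        Tagging={"TagSet": [
            {"Key": "project", "Value": "alpha"},
            {"Key": "classification", "Value": "confidential"},
        ]},
    )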


Q:  Why should I use object tags?

Object tags are a tool you can use to enable simple management of your S3 storage. With the ability to create, update, and delete tags at any time during the lifetime of your object, your storage can adapt to the needs of your business. These tags allow you to control access to objects tagged with specific key-value pairs, allowing you to further secure confidential data for only a select group or user. Object tags can also be used to label objects that belong to a specific project or business unit, which could be used in conjunction with S3 Lifecycle policies to manage transitions to other storage classes (S3 Standard-IA, S3 One Zone-IA, and Amazon Glacier) or with S3 Cross-Region Replication to selectively replicate data between AWS Regions.

Q:  How can I update the object tags on my objects?

Object tags can be changed at any time during the lifetime of your S3 object, and you can use either the AWS Management Console, the REST API, the AWS CLI, or the AWS SDKs to change your object tags. Note that all changes to tags outside of the AWS Management Console are made to the full tag set. If you have five tags attached to a particular object and want to add a sixth, you need to include the original five tags in that request.


Q:  Will my object tags be replicated if I use Cross-Region Replication?

Object tags can be replicated across AWS Regions using Cross-Region Replication. For customers with Cross-Region Replication already enabled, new permissions are required in order for tags to replicate. For more information about setting up Cross-Region Replication, please visit How to Set Up Cross-Region Replication in the Amazon S3 Developer Guide.


Q:  How much do object tags cost?

Object tags are priced based on the quantity of tags and a request cost for adding tags. The requests associated with adding and updating Object Tags are priced the same as existing request prices. Please see the Amazon S3 pricing page for more information.

Storage Class Analysis
Q:  What is Storage Class Analysis?

With Storage Class Analysis, you can analyze storage access patterns and transition the right data to the right storage class. This new S3 feature automatically identifies infrequent access patterns to help you transition storage to S3 Standard-IA. You can configure a Storage Class Analysis policy to monitor an entire bucket, prefix, or object tag. Once an infrequent access pattern is observed, you can easily create a new S3 Lifecycle age policy based on the results. Storage Class Analysis also provides daily visualizations of your storage usage on the AWS Management Console that you can export to an S3 bucket to analyze using business intelligence tools of your choice such as Amazon QuickSight.


Q:   How do I get started with Storage Class Analysis?

You can use the AWS Management Console or the S3 PUT Bucket Analytics API to configure a Storage Class Analysis policy to identify infrequently accessed storage that can be transitioned to the S3 Standard-IA or S3 One Zone-IA storage class or archived to the Amazon Glacier storage class. You can navigate to the “Management” tab in the S3 Console to manage Storage Class Analysis, S3 Inventory, and S3 CloudWatch metrics.


Q:   How am I charged for using Storage Class Analysis?

Please see the Amazon S3 pricing page for general information about Storage Class Analysis pricing.


Q:   How often is the Storage Class Analysis updated?

Storage Class Analysis is updated on a daily basis in the S3 Management Console. Additionally, you can configure Storage Class Analysis to export your report to an S3 bucket of your choice.

S3 Inventory
Q:  What is S3 Inventory?

The S3 Inventory report provides a scheduled alternative to Amazon S3’s synchronous List API. You can configure S3 Inventory to provide a CSV or ORC file output of your objects and their corresponding metadata on a daily or weekly basis for an S3 bucket or prefix. You can simplify and speed up business workflows and big data jobs with S3 Inventory. You can also use S3 Inventory to verify encryption and replication status of your objects to meet business, compliance, and regulatory needs.


Q:  How do I get started with S3 Inventory?

You can use the AWS Management Console or the PUT Bucket Inventory API to configure a daily or weekly inventory report for all the objects within your S3 bucket or a subset of the objects under a shared prefix. As part of the configuration, you can specify a destination S3 bucket for your S3 Inventory report, the output file format (CSV or ORC), and specific object metadata necessary for your business application, such as object name, size, last modified date, storage class, version ID, delete marker, noncurrent version flag, multipart upload flag, replication status, or encryption status.
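
One possible boto3 configuration, with placeholder bucket names and an illustrative list of optional fields:

    import boto3

    boto3.client("s3").put_bucket_inventory_configuration(
        Bucket="my-bucket",
        Id="weekly-inventory",
        InventoryConfiguration={
            "Id": "weekly-inventory",
            "IsEnabled": True,
            "IncludedObjectVersions": "Current",
            "Schedule": {"Frequency": "Weekly"},
            "Destination": {"S3BucketDestination": {
                "Bucket": "arn:aws:s3:::my-inventory-bucket",  # destination ARN
                "Format": "CSV",
                "Prefix": "inventory/",
            }},
            "OptionalFields": ["Size", "LastModifiedDate",
                               "StorageClass", "EncryptionStatus"],
        },
    )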


Q:  Can S3 Inventory report files be encrypted?

Yes, you can configure all files written by S3 Inventory to be encrypted with SSE-S3 or SSE-KMS. For more information, refer to the user guide.

Q:  How do I use S3 Inventory?

You can use S3 Inventory as a direct input into your application workflows or Big Data jobs. You can also query S3 Inventory using standard SQL with Amazon Athena, Amazon Redshift Spectrum, and other tools such as Presto, Hive, and Spark.

Learn more about querying S3 Inventory with Athena »

Q:  How am I charged for using S3 Inventory?

Please see the Amazon S3 pricing page for S3 Inventory pricing. If you configure encryption using SSE-KMS, you will incur KMS charges for encryption; refer to the AWS KMS pricing page for details.


S3 CloudWatch Metrics
Q:  How do I get started with S3 CloudWatch Metrics?

You can use the AWS Management Console to enable the generation of 1-minute CloudWatch request metrics for your S3 bucket or configure filters for the metrics using a prefix or object tag. Alternatively, you can call the S3 PUT Bucket Metrics API to enable and configure publication of S3 storage metrics. CloudWatch Request Metrics will be available in CloudWatch within 15 minutes after they are enabled. CloudWatch Storage Metrics are enabled by default for all buckets, and reported once per day.


Q:  Can I align S3 CloudWatch request metrics to my applications or business organizations?

Yes, you can configure S3 CloudWatch request metrics to generate metrics for your S3 bucket or configure filters for the metrics using a prefix or object tag.

Q:  What alarms can I set on my storage metrics?

You can use CloudWatch to set thresholds on any of the storage metrics counts, timers, or rates and trigger an action when the threshold is breached. For example, you can set a threshold on the number of 4xx Error Responses and trigger a CloudWatch alarm to alert a DevOps engineer when at least 3 data points are above the threshold.
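
As a sketch of that 4xx example with boto3, assuming request metrics are enabled with a filter named “EntireBucket” and an existing SNS topic for notifications (both placeholders); this alarm uses a raw error count rather than a percentage.

    import boto3

    boto3.client("cloudwatch").put_metric_alarm(
        AlarmName="s3-4xx-errors",
        Namespace="AWS/S3",
        MetricName="4xxErrors",
        Dimensions=[
            {"Name": "BucketName", "Value": "my-bucket"},
            {"Name": "FilterId", "Value": "EntireBucket"},
        ],
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=3,  # alarm when 3 consecutive data points breach
        Threshold=100.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],
    )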


Q:  How am I charged for using S3 CloudWatch Metrics?

CloudWatch storage metrics are provided free. CloudWatch request metrics are priced as custom metrics for Amazon CloudWatch. Please see the Amazon CloudWatch pricing page for general information about S3 CloudWatch metrics pricing.


S3 Lifecycle Management
Q:  What is S3 Lifecycle management?

S3 Lifecycle management provides the ability to define the lifecycle of your objects with a predefined policy and reduce your cost of storage. You can set a lifecycle transition policy to automatically migrate objects stored in the S3 Standard storage class to the S3 Standard-IA, S3 One Zone-IA, and/or Amazon Glacier storage classes based on the age of the data. You can also set lifecycle expiration policies to automatically remove objects based on the age of the object, and a policy for multipart upload expiration, which expires incomplete multipart uploads based on the age of the upload.


Q:  How do I set up an S3 Lifecycle management policy?

You can set up and manage Lifecycle policies in the AWS Management Console, S3 REST API, AWS SDKs, or AWS Command Line Interface (CLI). You can specify the policy at the prefix or at the bucket level.

Q:  How much does it cost to use S3 Lifecycle management?

There is no additional cost to set up and apply Lifecycle policies. A transition request is charged per object when an object becomes eligible for transition according to the Lifecycle rule. Refer to the S3 Pricing page for pricing information.

Q:  What can I do with Lifecycle management policies?

As data matures, it can become less critical, less valuable, and/or subject to compliance requirements. Amazon S3 Lifecycle policies let you automate the migration of data between storage classes. For example, you can set infrequently accessed objects to move into lower-cost storage classes (like S3 Standard-IA or S3 One Zone-IA) after a period of time. After another period, those objects can be moved into Amazon Glacier for archive and compliance. If policy allows, you can also specify a lifecycle rule for object deletion. These rules can lower storage costs and simplify management behind the scenes, and they encourage good stewardship by removing objects and attributes that are no longer needed, helping you manage cost and optimize performance.


Q:  How can I use Amazon S3 Lifecycle management to help lower my Amazon S3 storage costs?

With Amazon S3 Lifecycle policies, you can configure your objects to be migrated from the S3 Standard storage class to S3 Standard-IA or S3 One Zone-IA and/or archived to Amazon Glacier. You can also specify an S3 Lifecycle policy to delete objects after a specific period of time. You can use this policy-driven automation to quickly and easily reduce storage costs as well as save time. In each rule you can specify a prefix, a time period, a transition to S3 Standard-IA, S3 One Zone-IA, or Amazon Glacier, and/or an expiration.

For example, you could create a rule that archives all objects with the common prefix “logs/” into Amazon Glacier 30 days from creation and expires them 365 days from creation. You could also create a separate rule that only expires all objects with the prefix “backups/” 90 days from creation. S3 Lifecycle policies apply to both existing and new S3 objects, helping you optimize storage and maximize cost savings for all current data and any new data placed in S3 without time-consuming manual data review and migration.

Within a lifecycle rule, the prefix field identifies the objects subject to the rule. To apply the rule to an individual object, specify the key name. To apply the rule to a set of objects, specify their common prefix (e.g., “logs/”). You can specify a transition action to have your objects archived and an expiration action to have your objects removed. For the time period, provide the creation date (e.g., January 31, 2015) or the number of days from the creation date (e.g., 30 days) after which you want your objects to be archived or removed. You may create multiple rules for different prefixes.
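
A minimal boto3 sketch of the “logs/”/“backups/” example above (the bucket name is hypothetical; note that this call replaces any lifecycle configuration already on the bucket):

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-source-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    # Archive "logs/" to Glacier after 30 days, expire after 365.
                    "ID": "archive-logs",
                    "Filter": {"Prefix": "logs/"},
                    "Status": "Enabled",
                    "Transitions": [{"Days": 30, "StorageClass": "GLACIER"}],
                    "Expiration": {"Days": 365},
                },
                {
                    # Expire "backups/" after 90 days, with no transition.
                    "ID": "expire-backups",
                    "Filter": {"Prefix": "backups/"},
                    "Status": "Enabled",
                    "Expiration": {"Days": 90},
                },
            ]
        },
    )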


Q:   How can I configure my objects to be deleted after a specific time period?

You can set an S3 Lifecycle expiration policy to remove objects from your buckets after a specified number of days. You can define the expiration rules for a set of objects in your bucket through the Lifecycle configuration policy that you apply to the bucket.

Learn more about S3 Lifecycle expiration policies »


Q: Why would I use an S3 Lifecycle policy to expire incomplete multipart uploads?

The S3 Lifecycle policy that expires incomplete multipart uploads allows you to save on costs by limiting the time incomplete multipart uploads are stored. For example, if your application uploads several multipart object parts but never commits them, you will still be charged for that storage. This policy can lower your S3 storage bill by automatically removing incomplete multipart uploads and the associated storage after a predefined number of days.

Learn more about using S3 Lifecycle to expire incomplete multipart uploads »
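
As a hedged sketch (the bucket name and the 7-day window are illustrative), such a rule can be applied with boto3; remember that the call replaces any existing lifecycle rules, so in practice include the rules you already rely on:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_lifecycle_configuration(
        Bucket="my-source-bucket",
        LifecycleConfiguration={
            "Rules": [
                {
                    # Remove parts of multipart uploads not completed within 7 days.
                    "ID": "abort-stale-uploads",
                    "Filter": {"Prefix": ""},  # empty prefix = whole bucket
                    "Status": "Enabled",
                    "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
                }
            ]
        },
    )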


Cross-Region Replication
Q:  What is Amazon S3 Cross-Region Replication (CRR)?

CRR is an Amazon S3 feature that automatically replicates data between AWS Regions. With CRR, you can set up replication at a bucket level, a shared prefix level, or an object level using S3 object tags. You can use CRR to provide lower-latency data access in different geographic regions. CRR can also help if you have a compliance requirement to store copies of data hundreds of miles apart.


Q:  How do I enable CRR?

CRR is configured at the S3 bucket level. You enable a CRR configuration on your source bucket by specifying a destination bucket in a different Region for replication. You can use the AWS Management Console, the REST API, the AWS CLI, or the AWS SDKs to enable CRR. Versioning must be enabled on both the source and destination buckets to enable CRR. To learn more, please visit How to Set Up Cross-Region Replication in the Amazon S3 Developer Guide.
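
As an illustrative boto3 sketch (the account ID, IAM role, and bucket names are hypothetical, and the destination bucket must already exist in another Region with versioning enabled):

    import boto3

    s3 = boto3.client("s3")

    # Versioning must be enabled on the source bucket (and the destination).
    s3.put_bucket_versioning(
        Bucket="my-source-bucket",
        VersioningConfiguration={"Status": "Enabled"},
    )

    # Replicate the whole bucket to a destination bucket in another Region.
    s3.put_bucket_replication(
        Bucket="my-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/crr-replication-role",
            "Rules": [
                {
                    "ID": "replicate-all",
                    "Prefix": "",  # empty prefix = all objects
                    "Status": "Enabled",
                    "Destination": {
                        "Bucket": "arn:aws:s3:::my-destination-bucket",
                    },
                }
            ],
        },
    )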


Q:   Can I use CRR with S3 Lifecycle rules?

Yes, you can configure separate S3 Lifecycle rules on the source and destination buckets. For example, you can configure a lifecycle rule to migrate data from the S3 Standard storage class to the S3 Standard-IA or S3 One Zone-IA storage class or archive data to Amazon Glacier on the destination bucket.


Q:   Can I use CRR with objects encrypted by AWS Key Management Service (KMS)?

Yes, you can replicate KMS-encrypted objects by providing a destination KMS key in your replication configuration.

Learn more about replicating KMS-encrypted objects »
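
A hedged sketch of such a configuration (the role, bucket names, and KMS key ARN are hypothetical); the rule opts SSE-KMS objects into replication and names a key in the destination Region:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="my-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/crr-replication-role",
            "Rules": [
                {
                    "ID": "replicate-kms-objects",
                    "Prefix": "",
                    "Status": "Enabled",
                    # Opt SSE-KMS-encrypted objects into replication.
                    "SourceSelectionCriteria": {
                        "SseKmsEncryptedObjects": {"Status": "Enabled"}
                    },
                    "Destination": {
                        "Bucket": "arn:aws:s3:::my-destination-bucket",
                        # Re-encrypt replicas with a destination-Region key.
                        "EncryptionConfiguration": {
                            "ReplicaKmsKeyID": (
                                "arn:aws:kms:us-west-2:111122223333:"
                                "key/11112222-3333-4444-5555-666677778888"
                            )
                        },
                    },
                }
            ],
        },
    )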


Q:   Are objects securely transferred and encrypted throughout the replication process?

Yes, objects remain encrypted throughout the CRR process. The encrypted objects are transmitted securely via SSL from the source Region to the destination Region.


Q: Can I use CRR across AWS accounts to protect against malicious or accidental deletion?

Yes, you can set up CRR across AWS accounts to store your replicated data in a different account in the target Region. You can use CRR Ownership Overwrite in your replication configuration to maintain a distinct ownership stack between source and destination, and grant the destination account ownership of the replicated storage.
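
For illustration (account IDs, role, and bucket names are hypothetical), the owner override is expressed in the rule's Destination block:

    import boto3

    s3 = boto3.client("s3")

    s3.put_bucket_replication(
        Bucket="my-source-bucket",
        ReplicationConfiguration={
            "Role": "arn:aws:iam::111122223333:role/crr-replication-role",
            "Rules": [
                {
                    "ID": "replicate-cross-account",
                    "Prefix": "",
                    "Status": "Enabled",
                    "Destination": {
                        "Bucket": "arn:aws:s3:::other-account-bucket",
                        "Account": "444455556666",  # destination account
                        # Make the destination account the owner of replicas.
                        "AccessControlTranslation": {"Owner": "Destination"},
                    },
                }
            ],
        },
    )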


Q: What is the pricing for S3 Cross-Region Replication?

You pay the Amazon S3 charges for storage (in the S3 storage class you select), COPY requests, and inter-Region data transfer for the replicated copy of data. COPY requests and inter-Region data transfer are charged based on the source Region. Storage for replicated data is charged based on the target Region. For more information, please visit the S3 pricing page.

If the source object is uploaded using the multipart upload feature, then it is replicated using the same number of parts and part size. For example, a 100 GB object uploaded using the multipart upload feature (800 parts of 128 MB each) will incur the request cost associated with 802 requests (800 Upload Part requests + 1 Initiate Multipart Upload request + 1 Complete Multipart Upload request) when replicated. You will incur a request charge of $0.00401 (802 requests x $0.005 per 1,000 requests) and a charge of $2.00 ($0.020 per GB transferred x 100 GB) for inter-Region data transfer. After replication, the 100 GB will incur storage charges based on the destination Region.


Amazon S3 and IPv6
Q:  What is IPv6?

Every server and device connected to the Internet must have a unique address. Internet Protocol Version 4 (IPv4) was the original 32-bit addressing scheme, but the continued growth of the Internet means that the pool of available IPv4 addresses will be exhausted over time. Internet Protocol Version 6 (IPv6) is the newer addressing mechanism designed to overcome the global address limitation of IPv4.


Q:   What can I do with IPv6?

Using IPv6 support for Amazon S3, applications can connect to Amazon S3 without the need for any IPv6 to IPv4 translation software or systems. You can meet compliance requirements, more easily integrate with existing IPv6-based on-premises applications, and remove the need for expensive networking equipment to handle the address translation. You can also now utilize the existing source address filtering features in IAM policies and bucket policies with IPv6 addresses, expanding your options to secure applications interacting with Amazon S3.
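
For example (the bucket name is hypothetical and the CIDR blocks are standard documentation ranges), a bucket policy can mix IPv4 and IPv6 source addresses in a single condition:

    import boto3
    import json

    s3 = boto3.client("s3")

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "AllowReadsFromCorpNetwork",
                "Effect": "Allow",
                "Principal": "*",
                "Action": "s3:GetObject",
                "Resource": "arn:aws:s3:::my-source-bucket/*",
                "Condition": {
                    # IPv4 and IPv6 ranges side by side.
                    "IpAddress": {
                        "aws:SourceIp": ["203.0.113.0/24", "2001:db8::/32"]
                    }
                },
            }
        ],
    }

    s3.put_bucket_policy(Bucket="my-source-bucket", Policy=json.dumps(policy))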

Q:   How do I get started with IPv6 on Amazon S3?

You can get started by pointing your application to Amazon S3’s new “dual-stack” endpoint, which supports access over both IPv4 and IPv6. In most cases, no further configuration is required for access over IPv6, because most network clients prefer IPv6 addresses by default.
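
A minimal boto3 sketch (the Region and bucket name are hypothetical); botocore can be told to use the dual-stack endpoint, which is equivalent to pointing at https://s3.dualstack.<region>.amazonaws.com:

    import boto3
    from botocore.config import Config

    # Route requests through S3's dual-stack (IPv4 + IPv6) endpoint.
    s3 = boto3.client(
        "s3",
        region_name="us-east-1",
        config=Config(s3={"use_dualstack_endpoint": True}),
    )

    # Requests now resolve via the dual-stack endpoint for the Region.
    print(s3.list_objects_v2(Bucket="my-source-bucket").get("KeyCount"))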

Q:  Should I expect a change in Amazon S3 performance when using IPv6?

No, you will see the same performance when using either IPv4 or IPv6 with Amazon S3.

Q:   What can I do if my clients are impacted by policy, network, or other restrictions in using IPv6 for Amazon S3?

Applications that are impacted by using IPv6 can switch back to the standard IPv4-only endpoints at any time.


Q: Can I use IPv6 with all Amazon S3 features?

No. IPv6 support is not currently available when using Website Hosting or access via BitTorrent. All other features should work as expected when accessing Amazon S3 using IPv6.


Q: Is IPv6 supported in all AWS Regions?

You can use IPv6 with Amazon S3 in all commercial AWS Regions except China (Beijing) and China (Ningxia). You can also use IPv6 in the AWS GovCloud (US) Region.