Tuesday, November 14, 2023

AWS Solutions Architect Interview Questions and Answers (Part 2)



Q1. How are terminating and stopping an instance different processes?

A1. When an instance is stopped, it performs a regular shutdown and then moves into the stopped state. Because all of the attached EBS volumes remain present, the instance can be started again at any time. The best part is that while the instance remains in the stopped state, you do not pay for that particular time.

Upon termination, the instance performs a regular shutdown and the attached Amazon EBS volumes are then deleted, unless you set their “Delete on Termination” attribute to false. Because the instance itself is deleted, it is not possible to run it again in the future.
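
For illustration, here is a minimal boto3 (Python) sketch of stopping an instance, keeping its root volume on termination, and terminating it; the instance ID and root device name are hypothetical placeholders.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # assumed region
instance_id = "i-0123456789abcdef0"  # hypothetical instance ID

# Stop: EBS volumes are kept, so the instance can be started again later.
ec2.stop_instances(InstanceIds=[instance_id])

# Keep the root EBS volume after termination by disabling DeleteOnTermination.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    BlockDeviceMappings=[{"DeviceName": "/dev/xvda",  # assumed root device name
                          "Ebs": {"DeleteOnTermination": False}}],
)

# Terminate: the instance itself is gone for good after this call.
ec2.terminate_instances(InstanceIds=[instance_id])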


Q2. What value should the instance's tenancy attribute be set to for running it on single-tenant hardware?

A2. The tenancy attribute should be set to “dedicated” for the instance to run smoothly on single-tenant hardware. Other values are not valid for this operation.
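
A minimal boto3 sketch of launching such an instance; the AMI ID and instance type below are placeholders.

import boto3

ec2 = boto3.client("ec2")
# Tenancy "dedicated" places the instance on single-tenant hardware.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    Placement={"Tenancy": "dedicated"},
)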


Q3. When do you incur costs with an EIP?

A3. EIP stands for Elastic IP address. Costs are incurred when an EIP is allocated but associated with a stopped instance, or not associated with any instance at all. As long as you have one Elastic IP attached to a running instance, you are not charged for it. However, if the IP is attached to a stopped instance or is not attached to any instance, you need to pay for it.
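
As a hedged boto3 sketch, the allocate/associate/release cycle looks roughly like this; the instance ID is a placeholder. Releasing unused addresses is what avoids the idle charge.

import boto3

ec2 = boto3.client("ec2")

# Allocate a new Elastic IP in the VPC scope.
alloc = ec2.allocate_address(Domain="vpc")

# While associated with a running instance, the address is free of charge.
assoc = ec2.associate_address(InstanceId="i-0123456789abcdef0",  # hypothetical instance
                              AllocationId=alloc["AllocationId"])

# When it is no longer needed, disassociate and release it so it does not
# sit idle and accrue an hourly charge.
ec2.disassociate_address(AssociationId=assoc["AssociationId"])
ec2.release_address(AllocationId=alloc["AllocationId"])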


Q4. What is the difference between an On-demand instance and a Spot Instance?

A4. Both Spot and On-demand Instances are pricing models, and neither requires a commitment to an exact duration from the user. A Spot Instance works like bidding, and the bid price is known as the Spot price. A Spot Instance can be used without any upfront payment, while an On-demand Instance is paid for at the published hourly rate, which is higher than the typical Spot price.
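
A minimal boto3 sketch of requesting Spot capacity through run_instances; the AMI ID, instance type, and maximum price are placeholders.

import boto3

ec2 = boto3.client("ec2")
# Request a Spot Instance; MaxPrice is optional and defaults to the On-demand price.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="m5.large",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {"MaxPrice": "0.05"},  # placeholder bid in USD per hour
    },
)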


Q5. Name the instance types for which Multi-AZ deployments are available?

A5. Multi-AZ deployments are available for all instances, irrespective of their type and use.


Q6. When instances are launched in a cluster placement group, what network performance parameters can be expected?

A6. It depends largely on the instance type, as well as on the network performance specification of that type. When instances are launched in a cluster placement group, you can expect the following parameters:

20 Gbps for full-duplex or multi-flow traffic

Up to 10 Gbps for single-flow traffic

Traffic outside the placement group is limited to 5 Gbps.
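
As a hedged boto3 sketch, creating a cluster placement group and launching into it looks roughly like this; the group name, AMI ID, and instance type are placeholders, and a network-optimized instance type is assumed.

import boto3

ec2 = boto3.client("ec2")

# Cluster placement groups pack instances close together for low latency
# and high per-flow throughput.
ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="c5.18xlarge",       # assumed network-optimized type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
)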


Q7. Which instance can be used for deploying a 4-node Hadoop cluster in Amazon Web Services?

A7. You can use an i2.xlarge or c4.8xlarge instance for this, though the c4.8xlarge calls for a higher-end configuration. Alternatively, you can simply launch EMR, which configures the servers for you automatically: you put the data into S3, EMR picks it up from there, processes it, and loads the results back into S3.


Q8. What do you know about an AMI?

A8. An AMI (Amazon Machine Image) is generally considered the template for a virtual machine. When starting an instance, it is possible to select a pre-baked AMI that already has common software in it, although not all AMIs are available free of cost. It is also possible to build a customized AMI, and the most common reason to do so is to save space on Amazon Web Services: if a group of software packages is not required, the AMI can simply be trimmed down in that situation.
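
A hedged boto3 sketch of baking a customized AMI from an already-configured instance; the instance ID and image name are placeholders.

import boto3

ec2 = boto3.client("ec2")
# Create a reusable image from an instance that already has the required software.
resp = ec2.create_image(
    InstanceId="i-0123456789abcdef0",  # hypothetical source instance
    Name="my-trimmed-app-ami",         # placeholder image name
    Description="Custom AMI without unneeded packages",
)
print(resp["ImageId"])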


Q9. What parameters should you consider while selecting an Availability Zone?

A9. There are several parameters that should be kept in mind. Some of them are performance, pricing, latency, and response time.


Q10. What do you know about the private and the public address?

A10. The private address is directly associated with the instance and is returned to EC2 only when the instance is terminated. The public address, in contrast, is associated with the instance only until it is stopped or terminated. If a user wants an address that stays with the instance, the public address can be replaced with an Elastic IP.


Q11. Is it possible to run multiple websites on an EC2 server with one Elastic IP address?

A11. No, it’s not possible. We need more than one elastic IP in such a case.


Q12. Name the practices available when it comes to securing Amazon EC2?

A12. This can be done through several practices. Review the rules in your security groups regularly and ensure that the principle of least privilege is applied there. Use AWS Identity and Access Management (IAM) to control and secure access. Restrict access to trusted hosts and networks, and open only the permissions that are actually required. It is also good practice to disable password-based logins for the instances.
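
As an illustration of least privilege, here is a boto3 sketch that opens SSH only to a trusted network range; the security group ID and CIDR block are placeholders.

import boto3

ec2 = boto3.client("ec2")
# Allow SSH only from a trusted corporate range instead of 0.0.0.0/0.
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",  # hypothetical security group
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24",  # placeholder trusted network
                      "Description": "office network only"}],
    }],
)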


Q13. What are the states available in Processor State Control?

A13. It contains two states and they are:


P-state: It has different levels from P0 to P15, where P0 represents the highest frequency and P15 the lowest.

C-state: Its levels range from C0 to C6, where C6 is the deepest idle state for the processor. It is possible to customize these states on a few EC2 instance types, which enables users to tune the processor as per their needs.

Q14. Name the approach that restricts the access of third-party software in the Simple Storage Service to the S3 bucket named “Company Backup”?

A14. A custom IAM user policy that limits the S3 API calls to that bucket.
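
A sketch of such a policy attached as an inline IAM user policy with boto3. The user name is hypothetical, and since S3 bucket names cannot contain spaces, a bucket called "company-backup" is assumed to stand in for “Company Backup”.

import json
import boto3

iam = boto3.client("iam")

# Allow only S3 calls against the designated backup bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::company-backup",    # assumed bucket name
            "arn:aws:s3:::company-backup/*",
        ],
    }],
}

iam.put_user_policy(
    UserName="third-party-backup-tool",  # hypothetical IAM user
    PolicyName="CompanyBackupOnly",
    PolicyDocument=json.dumps(policy),
)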


Q15. Is it possible to use S3 with EC2 instances? How?

A15. Yes, it is possible for instances whose root devices are backed by instance storage. Amazon uses a very reliable, scalable, fast, and inexpensive network to host all of its own websites, and with the help of S3 developers get access to that same infrastructure. There are tools available in AMIs that users can consider when executing systems in EC2, and files can simply be moved between EC2 and S3.
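
A minimal boto3 sketch of moving files between an EC2 instance and S3; the bucket and file names are placeholders.

import boto3

s3 = boto3.client("s3")

# Upload a local file from the instance to S3 ...
s3.upload_file("/tmp/report.csv", "my-example-bucket", "reports/report.csv")

# ... and pull it back down later, possibly on a different instance.
s3.download_file("my-example-bucket", "reports/report.csv", "/tmp/report-copy.csv")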


Q16. Is it possible to speed up data transfer in Snowball? How?

A16. Yes, it's possible, and there are several methods. The first is simply copying from different hosts to the same Snowball in parallel. Another method is batching smaller files into larger archives, which is helpful because it cuts down the per-file encryption overhead. Data transfer can also be enhanced by running several copy operations at the same time, provided the workstation is capable of bearing the load.


Q17. Name the method that you would use for moving data over a very long distance?

A17. Amazon S3 Transfer Acceleration is a good option. There are other options such as Snowball, but Snowball does not help with data transfer over very long distances, such as between continents. Transfer Acceleration is the best option here because it routes the data over optimized network channels and assures very fast data transfer speeds.
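
A hedged boto3 sketch of enabling Transfer Acceleration on a bucket and uploading through the accelerated endpoint; the bucket and file names are placeholders.

import boto3
from botocore.config import Config

s3 = boto3.client("s3")

# Turn on Transfer Acceleration for the bucket.
s3.put_bucket_accelerate_configuration(
    Bucket="my-example-bucket",
    AccelerateConfiguration={"Status": "Enabled"},
)

# Use the accelerated endpoint for the actual transfer.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("big-dataset.tar.gz", "my-example-bucket", "big-dataset.tar.gz")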


Q18. What will happen if you launch instances in Amazon VPC?

A18. This is the common approach for launching EC2 instances. Each instance launched in an Amazon VPC gets a default private IP address from the subnet's range. This approach is also used when you need to connect cloud resources with your own data centers.


Q19.  Is it possible to establish a connection between the Amazon cloud and a corporate data center? How?

A19. Yes, it's possible. First, a Virtual Private Network (VPN) connection is established between the Virtual Private Cloud and the organization's network. After this, the connection can simply be used and data can be accessed reliably.


Q20. Why is it not possible to change or modify the private IP address of an EC2 instance while it is running?

A20. Because the primary private IP remains with the instance throughout its life cycle, it cannot be changed or modified. However, it is possible to change the secondary private addresses.


Q21. Why do subnets need to be created?

A21. They are needed to utilize a network with a large number of hosts in a reliable manner; managing them all at once is a daunting task. By dividing the network into smaller subnets, management becomes simpler and the chances of errors or data loss are reduced to a great extent.
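
A sketch of splitting a VPC's address space into smaller subnets with boto3; the VPC ID, CIDR blocks, and Availability Zones are assumptions.

import boto3

ec2 = boto3.client("ec2")
vpc_id = "vpc-0123456789abcdef0"  # hypothetical VPC with CIDR 10.0.0.0/16

# Carve the /16 into smaller, easier-to-manage /24 subnets, one per AZ.
for cidr, az in [("10.0.1.0/24", "us-east-1a"), ("10.0.2.0/24", "us-east-1b")]:
    ec2.create_subnet(VpcId=vpc_id, CidrBlock=cidr, AvailabilityZone=az)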


Q22. Is it possible to attach multiple subnets to a route table?

A22. Yes, it's possible. Route tables are generally used to route network packets. If a subnet had several route tables, there would be confusion about where its packets should be sent, which is why each subnet is associated with only one route table. A route table, however, can hold many routes, and therefore it is possible to attach multiple subnets to a single route table.
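
A boto3 sketch of associating several subnets with a single route table; all IDs are placeholders.

import boto3

ec2 = boto3.client("ec2")
route_table_id = "rtb-0123456789abcdef0"  # hypothetical route table

# One route table can serve many subnets; each subnet, however,
# is associated with exactly one route table at a time.
for subnet_id in ["subnet-0aaaaaaaaaaaaaaaa", "subnet-0bbbbbbbbbbbbbbbb"]:
    ec2.associate_route_table(RouteTableId=route_table_id, SubnetId=subnet_id)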


Q23. What happens if the AWS Direct Connect fails to perform its function?

A23. It is recommended to back up the Direct Connect link, because in the case of a failure you can lose all connectivity. Enabling BFD, i.e. Bidirectional Forwarding Detection, helps detect such failures quickly and fail over. If no backup (such as a VPN connection) is in place, VPC traffic would be dropped and you would need to set the connection up again from scratch.


Q24. What will happen if content is absent in CloudFront and a request is made for it?

A24. CloudFront fetches the content from the origin server and delivers it to the requester while storing a copy in the cache at the edge location. As it is a content delivery network, it tries to cut down latency, and that is why it behaves this way. If the request is made a second time, the data is served directly from the edge cache.


Q25. Is it possible to use direct connect for transferring the objects from the data centers?

A25. Yes, it is possible. CloudFront supports custom origins, including servers in your own data center, so this task can be performed. However, you need to pay for it depending on the data transfer rates.


Q26. When is there a need to consider Provisioned IOPS over Standard RDS storage in AWS?

A26. When you have workloads that are batch-oriented, because Provisioned IOPS is known to provide faster and more consistent I/O rates. However, it is a bit more expensive when compared to the other options. Hosts doing batch processing don't need manual intervention from the users, which is another reason Provisioned IOPS is preferred for them.


Q28. Is it possible to run multiple databases on Amazon RDS free of cost?

A28. Yes, it's possible. However, there is a strict upper limit of 750 instance hours per month, beyond which usage is billed at the standard RDS prices. If you exceed the limit, you are charged only for the extra hours beyond 750.


Q29. Name the services which can be used for collecting and processing e-commerce data?

A29. Amazon Redshift and Amazon DynamoDB are the best options. Data from e-commerce websites is generally unstructured, and as both of these services handle such data well, they can be used for collection and analysis.


Q30. What is the significance of Connection Draining?

A30. There are stages when instances need to be taken out of rotation, for example to be re-verified for bugs or unwanted files that raise security concerns. Connection Draining helps by re-routing new traffic away from the instances that are queued to be updated, while the requests they are already serving are allowed to complete.


Q31. What is auto-scaling?

A31. Auto Scaling is a feature of AWS that allows you to configure and automatically provision and spin up new instances to match demand, without the need for your intervention.
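
A hedged boto3 sketch of creating an Auto Scaling group with a target-tracking scaling policy; the launch template, subnets, group name, and thresholds are all placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Keep between 2 and 10 instances running across two subnets.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",                        # placeholder group name
    LaunchTemplate={"LaunchTemplateName": "web-template",  # assumed launch template
                    "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-0aaaaaaaaaaaaaaaa,subnet-0bbbbbbbbbbbbbbbb",
)

# Scale out and in automatically to hold average CPU near 50%.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-tracking",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
        "TargetValue": 50.0,
    },
)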


Q32. What are the different types of cloud services?

A32. Different types of cloud services are:

Software as a Service (SaaS)

Data as a Service (DaaS)

Platform as a Service (PaaS)

Infrastructure as a Service (IaaS)


Q33. What type of architecture is it when half of the workload is on the public cloud while the other half is on local storage?

A33. Hybrid cloud architecture.


Q34. Can I vertically scale an Amazon instance? How do you do it?

A34. Yes. Spin up a new instance that is larger than the one you are running, then stop that new instance and detach its root EBS volume, which you discard. After that, stop the live instance and detach its root volume. Note the unique device name, attach that root volume to the new server under the same device name, and start it again. This way you will have scaled vertically.
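
For EBS-backed instances there is also a simpler path, not described above: stop the instance, change its instance type, and start it again. A hedged boto3 sketch, with the instance ID and target size as placeholders:

import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # hypothetical instance

# Vertical scaling for an EBS-backed instance: stop, resize, start.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(InstanceId=instance_id,
                              InstanceType={"Value": "m5.2xlarge"})  # assumed larger size

ec2.start_instances(InstanceIds=[instance_id])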


Q35. How can you send a request to Amazon S3?

A35. You can send requests by using the REST API or the AWS SDK wrapper libraries that wrap the underlying Amazon S3 REST API.
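
A small boto3 sketch of sending requests to S3 through the SDK, which wraps the REST API underneath; the bucket and key are placeholders, and the presigned URL shows how the same request can be handed to any plain HTTPS client.

import boto3

s3 = boto3.client("s3")

# PUT and GET an object via the SDK wrapper around the S3 REST API.
s3.put_object(Bucket="my-example-bucket", Key="notes.txt", Body=b"hello from the SDK")
obj = s3.get_object(Bucket="my-example-bucket", Key="notes.txt")
print(obj["Body"].read())

# A presigned URL lets any HTTP client issue the same REST request for a limited time.
url = s3.generate_presigned_url("get_object",
                                Params={"Bucket": "my-example-bucket", "Key": "notes.txt"},
                                ExpiresIn=3600)
print(url)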


Q36. Should encryption be used for S3?

A36. Encryption should be considered for sensitive data as S3 is a proprietary technology.


Q37. What are the various AMI design options?

A37. Fully Baked AMI, JeOS (just enough operating system) AMI, and Hybrid AMI.


Q38. Explain what is a T2 instance?

A38.  T2 instances are designed to provide moderate baseline performance and the capability to burst to higher performance as required by workload.


Q39. What is a Serverless application in AWS?

A39. The AWS Serverless Application Model (AWS SAM) extends AWS CloudFormation to provide a simplified way of defining the Amazon API Gateway APIs, AWS Lambda functions, and Amazon DynamoDB tables needed by your serverless application.


Q40. What is the use of Amazon ElastiCache?

A40. Amazon ElastiCache is a web service that makes it easy to deploy, operate, and scale an in-memory data store or cache in the cloud.


Q41. Explain how the buffer is used in Amazon web services?

A41. A buffer is used to make the system more robust in managing traffic or load by synchronizing the different components.


Q42. Differentiate between stopping and terminating an instance?

A42. When an instance is stopped, the instance performs a normal shutdown and then transitions to a stopped state. When an instance is terminated, the instance performs a normal shutdown, then the attached Amazon EBS volumes are deleted unless the volume’s deleteOnTermination attribute is set to false.


Q43. Is it possible to change the private IP addresses of an EC2 instance while it is running/stopped in a VPC?

A43. The primary private IP address cannot be changed. Secondary private addresses can be unassigned, assigned, or moved between interfaces or instances at any point.


Q44. Give one instance where you would prefer Provisioned IOPS over Standard RDS storage?

A44. When you have batch-oriented workloads.


Q45. What is the boot time for an instance store backed instance?

A45. The boot time for an Amazon instance store-backed AMI is less than 5 minutes.


Q46. Will you use encryption for S3?

A46. Yes, I will, as it is a proprietary technology. It’s always a good idea to consider encryption for sensitive data on S3.


Q47. What is Identity Access Management and how is it used?

A47. It is a web service, which is used to securely control access to AWS services. Identity Access Management allows you to manage users, security credentials, and resource permissions.


Q48. Explain the advantages of AWS’s Disaster Recovery (DR) solution.

A48. Following are the advantages of AWS’s Disaster Recovery (DR) solution:


AWS offers a cost-effective backup, storage, and DR solution, helping the companies to reduce their capital expenses

Fast setup time and greater productivity gains

AWS helps companies to scale up even during seasonal fluctuations

It seamlessly replicates on-premises data to the cloud

Ensures fast retrieval of files

Q49. What is DynamoDB?

A49. DynamoDB is a fully managed proprietary NoSQL database service, supporting key-value and document data structures. It can be used when a fast and flexible NoSQL database with a flexible data model and reliable performance is required.
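
A minimal boto3 sketch of writing and reading an item with the DynamoDB resource API; the table name and key schema are assumptions.

import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("Orders")  # hypothetical table with partition key "order_id"

# Key-value/document style writes and reads.
table.put_item(Item={"order_id": "1001", "customer": "alice", "total": 42})
resp = table.get_item(Key={"order_id": "1001"})
print(resp.get("Item"))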


Q50. Which data centers are deployed for cloud computing?

A50. There are two types of data centers used in cloud computing: Containerized Data Centers and Low-Density Data Centers.


Q51. Which AWS services will you use to collect and process e-commerce data for near real-time analysis?

A51. The following AWS services can be used to collect and process e-commerce data for near real-time analysis:

Amazon DynamoDB

Amazon ElastiCache

Amazon Elastic MapReduce

Amazon Redshift


Q52. What is SQS?

A52. Simple Queue Service (SQS) is a distributed message queuing service that acts as a mediator between two components, the producer and the consumer. It is a pay-per-use web service.
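
A boto3 sketch of the basic send/receive/delete cycle with SQS; the queue URL is a placeholder.

import boto3

sqs = boto3.client("sqs")
queue_url = "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue"  # placeholder

# Producer side: enqueue a message.
sqs.send_message(QueueUrl=queue_url, MessageBody="order-1001-created")

# Consumer side: long-poll for messages, process them, then delete them.
msgs = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for m in msgs.get("Messages", []):
    print(m["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=m["ReceiptHandle"])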


Q53. What are the popular DevOps tools?

A53. The popular DevOps tools are –

Chef, Puppet, Ansible, and SaltStack – Deployment and Configuration Management Tools

Docker – Containerization Tool

Git – Version Control System Tool

Jenkins – Continuous Integration Tool

Nagios – Continuous Monitoring Tool

Selenium – Continuous Testing Tool


Q54. What is Hybrid cloud architecture?

A54. It is a type of architecture in which the workload is divided into two halves, one on the public cloud and the other on local storage or a private cloud. It is a mix of on-premises or private cloud resources and third-party public cloud services, operated across the two platforms.


Q55. What Is Configuration Management?

A55. Configuration management is used to manage the configuration of systems and the services that they provide entirely through code. This is a repetitive and consistent process that is achieved through –

Intuitive command-line interface

Lightweight and easily readable domain-specific language (DSL)

Comprehensive REST-based API


Q56. What are the features of Amazon cloud search?

A56. Amazon CloudSearch features:

Autocomplete suggestions

Boolean searches

Full-text search

Faceting

Term boosting

Highlighting

Prefix searches

Range searches


Q57. How do you access the data on EBS in AWS?

A57. Data on EBS cannot be accessed directly through a graphical interface in AWS. The process involves attaching the EBS volume to an EC2 instance; once the volume is connected to an instance, whether it runs Windows or Unix, you can write to and read from it. You can also take snapshots of the volumes that hold data and build new volumes from those snapshots. Note that an EBS volume can be attached to only a single instance at a time.
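
A hedged boto3 sketch of attaching a volume, snapshotting it, and building a new volume from the snapshot; all IDs, device names, and the Availability Zone are placeholders.

import boto3

ec2 = boto3.client("ec2")

# Attach an existing EBS volume to an instance so the OS can read and write it.
ec2.attach_volume(VolumeId="vol-0123456789abcdef0",   # hypothetical volume
                  InstanceId="i-0123456789abcdef0",   # hypothetical instance
                  Device="/dev/sdf")

# Snapshot the volume, then create a fresh volume from that snapshot.
snap = ec2.create_snapshot(VolumeId="vol-0123456789abcdef0", Description="data backup")
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
ec2.create_volume(SnapshotId=snap["SnapshotId"], AvailabilityZone="us-east-1a")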


Q58. What is the difference between Amazon RDS, Redshift, and DynamoDB?

A58. The main differences between Amazon RDS, Redshift, and DynamoDB:

Primary usage: Amazon RDS is for conventional relational databases, Redshift is a data warehouse, and DynamoDB is a database for dynamically modified data.

Database engine: RDS supports MySQL, Oracle DB, SQL Server, Amazon Aurora, and PostgreSQL; Redshift uses its own Redshift engine; DynamoDB is NoSQL.

Computing resources: RDS offers instances with up to 64 vCPUs and 244 GB RAM; Redshift offers nodes with vCPUs and 244 GB RAM; for DynamoDB the resources are not specified, as it is delivered as a managed service.

Multi-AZ replication: an additional service for RDS, manual for Redshift, and built in for DynamoDB.

Maintenance window: 30 minutes every week for RDS and for Redshift; no impact for DynamoDB.

 

Q59. If you hold half of the workload on the public cloud while the other half is on local storage, what type of architecture can be used in such a case?

A59. In such cases, the hybrid cloud architecture can be used.


Q60. Mention the possible connection issues you may encounter when connecting to an EC2 instance?

A60. The following connection issues may be encountered when connecting to an EC2 instance:

Server refused key

Connection timed out

Host key not found

Permission denied

Unprotected private key file

No supported authentication method available


Q61. What are lifecycle hooks in AWS autoscaling?

A61. Lifecycle hooks can be added to an Auto Scaling group. They enable you to perform custom actions by pausing instances as the Auto Scaling group launches or terminates them. An Auto Scaling group can have multiple lifecycle hooks.
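
A sketch of adding a termination lifecycle hook and completing it with boto3; the group name, hook name, and instance ID are placeholders.

import boto3

autoscaling = boto3.client("autoscaling")

# Pause instances entering the Terminating state so custom cleanup can run.
autoscaling.put_lifecycle_hook(
    AutoScalingGroupName="web-asg",             # hypothetical group
    LifecycleHookName="drain-before-terminate",
    LifecycleTransition="autoscaling:EC2_INSTANCE_TERMINATING",
    HeartbeatTimeout=300,
    DefaultResult="CONTINUE",
)

# After the custom action finishes, let the termination proceed.
autoscaling.complete_lifecycle_action(
    AutoScalingGroupName="web-asg",
    LifecycleHookName="drain-before-terminate",
    LifecycleActionResult="CONTINUE",
    InstanceId="i-0123456789abcdef0",           # hypothetical instance
)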


Q62. What is a Hypervisor?

A62. A hypervisor is a type of software used to create and run virtual machines. It partitions physical hardware resources into a platform that is distributed virtually to each user. Examples of hypervisors include Oracle VirtualBox, Oracle VM for x86, VMware Fusion, VMware Workstation, and Solaris Zones.


Q63. Explain the use of Route Table?

A63. A route table is used to control network traffic, and each subnet of a VPC is associated with a route table. A route table can hold a large number of routes, and connecting multiple subnets to one route table is also feasible.


Q64. What is the use of Connection Draining?

A64. Connection Draining is a process used to support the load balancer. It keeps track of all the instances; if an instance fails or is being taken out of service, Connection Draining lets the requests it is already serving finish while re-routing new traffic to the active instances.


Q65. Explain the use of Amazon Transfer Acceleration Service?

A65. Amazon S3 Transfer Acceleration is used to boost your data transfers by routing them over optimized network paths. It transfers files quickly and securely between your client and an S3 bucket.


Q66. How do you update the AMI tools at boot time on Linux?

A66. To update the AMI tools at boot time on Linux:

# Update to Amazon EC2 AMI tools
echo " + Updating EC2 AMI tools"
yum update -y aws-amitools-ec2
echo " + Updated EC2 AMI tools"


Q67. How is encryption done in S3?

A67. Encryption in S3 is done using:

In transit: SSL/TLS

At rest:

Server-Side Encryption

S3 Managed Keys (SSE-S3)

AWS Key Management Service Managed Keys (SSE-KMS)

Server-Side Encryption with Customer-Provided Keys (SSE-C)

Client-Side Encryption
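
A small boto3 sketch showing default bucket encryption (SSE-S3) plus a per-object SSE-KMS upload; the bucket name and KMS key ARN are placeholders.

import boto3

s3 = boto3.client("s3")

# Default server-side encryption for every new object in the bucket (SSE-S3).
s3.put_bucket_encryption(
    Bucket="my-example-bucket",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)

# Per-object SSE-KMS with a specific customer-managed key.
s3.put_object(
    Bucket="my-example-bucket",
    Key="secret.txt",
    Body=b"sensitive data",
    ServerSideEncryption="aws:kms",
    SSEKMSKeyId="arn:aws:kms:us-east-1:123456789012:key/placeholder-key-id",  # placeholder
)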

 

Q68. What are the pricing models for EC2 instances?

A68. Following are the different pricing models for EC2 instances:

Dedicated

Reserved

On-demand

Scheduled

Spot


Q69. What are the parameters for S3 pricing?

A69. Following are the parameters for S3 pricing:

Transfer acceleration

Number of requests you make

Storage management

Data transfer

Storage used

