== CODE IMPLEMENTATION ==
AWS Command Line Interface (CLI) & Software Development Kit (SDK)
- both protected by access keys, generated from AWS Management Console
- CLI gives direct access to the public APIs of AWS services
- to use MFA with the CLI, create a temporary session
- run the STS GetSessionToken API call
aws sts get-session-token --serial-number arn-of-the-mfa-device --token-code code-from-token --duration-seconds 3600
- CLI Credentials Provider Chain
- Command line options – --region, --output, and --profile
- Environment variables – AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, and AWS_SESSION_TOKEN
- CLI credentials file – set via aws configure, stored at ~/.aws/credentials
- CLI configuration file – set via aws configure, stored at ~/.aws/config
- Container credentials – for ECS tasks
- Instance profile credentials – for EC2 Instance Profiles
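The two files in the chain above are plain INI files with named profiles; a minimal illustrative sketch (profile names and key values are placeholders, not real credentials):

```ini
# ~/.aws/credentials
[default]
aws_access_key_id = AKIA-EXAMPLE-ID
aws_secret_access_key = EXAMPLE-SECRET-KEY

[dev-profile]
aws_access_key_id = AKIA-EXAMPLE-ID-2
aws_secret_access_key = EXAMPLE-SECRET-KEY-2

# ~/.aws/config
[profile dev-profile]
region = us-west-2
output = json
```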
- SDK is a set of libraries for programmatic access, as language-specific APIs
- the AWS CLI uses the Python SDK (boto3)
- “us-east-1” is chosen by default if no region is specified
- SDK Credential Provider Chain
- Java system properties – aws.accessKeyId and aws.secretKey
- Environment variables – AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY
- The default credential profiles file – e.g. at ~/.aws/credentials, shared by many SDKs
- Amazon ECS container credentials – for ECS containers
- Instance profile credentials– used on EC2 instances
- pagination parameters in AWS CLI commands can help avoid time-out issues when processing a large number of resource items
- --page-size
- The CLI still retrieves the entire list, but it makes a greater number of service API calls in the background and retrieves fewer items with each request. This increases the probability that individual calls will succeed without timing out.
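The chunking described above can be sketched in plain Python; `fake_api` is a stand-in for a paginated AWS list API, not a real client (real AWS APIs use a NextToken in the same way):

```python
def fake_api(items, page_size, token=0):
    """Stand-in for a paginated AWS list API: one page plus a NextToken."""
    page = items[token:token + page_size]
    next_token = token + page_size if token + page_size < len(items) else None
    return {"Items": page, "NextToken": next_token}

def list_all(items, page_size):
    """What --page-size does: fetch the whole list in smaller per-call chunks."""
    results, token = [], 0
    while token is not None:
        resp = fake_api(items, page_size, token)
        results.extend(resp["Items"])
        token = resp["NextToken"]
    return results

all_items = list(range(25))
# same full list in the end, just spread over 3 smaller API calls
assert list_all(all_items, page_size=10) == all_items
```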
Cloud Development Kit (CDK)
- CloudFormation uses JSON/YAML, while CDK uses JavaScript/TypeScript, Python, Java, or .NET
- contains higher-level components called constructs
- a construct encapsulates everything needed for the final CloudFormation stack creation
- AWS Construct Library or Construct Hub
- Layer 1 (L1): CloudFormation (CFN) resources, prefixed with “Cfn”; all resource properties must be explicitly configured
- Layer 2 (L2): intent-based API resources, with defaults and boilerplate, also can use methods
- Layer 3 (L3): aka Patterns, represents as multiple related resources (for example, API Gateway + Lambda, or Fargate cluster + Application Load Balancer)
- the code is compiled to a CloudFormation template
- benefit for Lambda & ECS/EKS: infrastructure and application runtime code are implemented together
- SAM focuses on serverless (good for Lambda) but only supports JSON/YAML
- Bootstrapping: the process of provisioning resources before deploying into an AWS environment (Account + Region)
- CDKToolkit (CloudFormation stack), with S3 Bucket – store files and IAM Roles
- Error: “Policy contains a statement with one or more invalid principal”, caused by a new environment that has not been bootstrapped with its own IAM Roles
- unit testing, using the CDK Assertions Module with Jest (JavaScript) or Pytest (Python)
- Fine-grained Assertions (common): check a certain property of a certain resource
- Snapshot Test: test against baseline template
AWS CloudFormation
- provision infrastructure using text-based (JSON/YAML) templates (uploaded to S3) that describe exactly what resources are provisioned and their settings
- manages the template history similar to how code is managed in source control
- deleting the stack also removes each individual artifact
- Components
- AWSTemplateFormatVersion
- Description
- Resources (mandatory) – aws resources
- Parameters – dynamic inputs (AllowedValues/NoEcho) !Ref
- Mappings – static variables
- Outputs – reference about what has been created !ImportValue
- Conditions
- (Reference / Functions) as Helper
- Transform – used for serverless services, especially AWS SAM
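A minimal illustrative template tying these components together (resource names, AMI ID, and instance types are invented for the example, not from the source):

```yaml
AWSTemplateFormatVersion: "2010-09-09"
Description: Minimal illustrative template
Parameters:
  EnvType:
    Type: String
    AllowedValues: [dev, prod]
    Default: dev
Mappings:
  EnvToInstanceType:
    dev:  { Type: t2.micro }
    prod: { Type: t3.large }
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: !FindInMap [EnvToInstanceType, !Ref EnvType, Type]
      ImageId: ami-12345678   # placeholder AMI ID
Outputs:
  InstanceId:
    Value: !Ref MyInstance    # !Ref on a resource returns its physical ID
    Export:
      Name: MyInstanceId      # consumable elsewhere via !ImportValue
```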
- 2 methods of updating a stack
- direct update – CloudFormation immediately deploys your changes
- change sets – preview your changes first, then decide if you want to deploy
- Intrinsic Functions
- Fn::Ref (!Ref): for an AWS resource, returns its physical ID
- Fn::GetAtt (!GetAtt)
- Fn::FindInMap (!FindInMap) [MapName, TopLevelKey, SecondLevelKey]
- Fn::ImportValue (!ImportValue)
- Fn::Base64 (!Base64): convert string to Base64; heavily used in UserData
- Conditions (Fn::If, Fn::Not, Fn::Equals, Fn::And, Fn::Or, etc.)
- Capabilities
- CAPABILITY_IAM and CAPABILITY_NAMED_IAM – required when the template creates or updates IAM resources (NAMED_IAM when those resources have custom names)
- CAPABILITY_AUTO_EXPAND – for macros and nested stacks
- InsufficientCapabilitiesException – thrown when a required capability is missing
- DeletionPolicy
- Delete (won't work on an S3 bucket that is not empty)
- Retain
- Snapshot: create a final snapshot before deletion
- Stack Policy is a JSON document specifying which resources are protected (not touched) during updates; an explicit ALLOW is then needed for the resources you do want to update
- Termination Protection is to prevent accidental delete
- Custom Resources
- custom functions running via Lambda, for example, empty a S3 bucket
- AWS::CloudFormation::CustomResource or Custom::MyCustomResourceTypeName
- Properties with a Service Token (Lambda function or SNS topic, in the same region) and optional input data
- StackSets are used for cross-account/cross-region stack management, with a single CloudFormation template
- A stack instance is simply a reference to a stack in a target account within a region.
- (Python) Helper scripts
- cfn-init – Use to retrieve and interpret resource metadata, install packages, create files, and start services.
- cfn-signal – Use to signal with a CreationPolicy or WaitCondition, so you can synchronize other resources in the stack when the prerequisite resource or application is ready.
- cfn-get-metadata – Use to retrieve metadata for a resource or path to a specific key.
- cfn-hup – Use to check for updates to metadata and execute custom hooks when changes are detected.
- Local artifacts declared in CodeUri property
- The aws cloudformation package command packages the local artifacts (local paths) that your AWS CloudFormation template references.
- After you package your template’s artifacts, run the aws cloudformation deploy command to deploy the returned template.
AWS Serverless Application Model (SAM)
- configured via JSON/YAML, compiled to a CloudFormation stack
- uses CodeDeploy for Lambda function deployments
- Traffic Shifting (from OLD ver to New ver)
- Linear: grow traffic every N minutes until 100%
- Canary: try X percent then 100%
- AllAtOnce: immediate
- PreTraffic and PostTraffic hooks for testing during traffic shifting
- rollback by AWS CloudWatch Alarm
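The three shifting types can be modeled as a simple schedule. This is a conceptual sketch only (it assumes each linear increment lands after its interval elapses, which approximates rather than reproduces CodeDeploy's exact timing):

```python
def linear_traffic(step_percent, interval_min, elapsed_min):
    """Linear: grow traffic on the new version by step_percent every interval."""
    return min(100, (elapsed_min // interval_min) * step_percent)

def canary_traffic(canary_percent, wait_min, elapsed_min):
    """Canary: X percent first, then 100% after the wait (two shifts total)."""
    return canary_percent if elapsed_min < wait_min else 100

def all_at_once_traffic(elapsed_min):
    """AllAtOnce: immediate full shift."""
    return 100

# e.g. Linear10PercentEvery1Minute reaches 100% after 10 minutes
assert linear_traffic(10, 1, 10) == 100
# e.g. Canary10Percent5Minutes holds 10% until minute 5
assert canary_traffic(10, 5, 3) == 10
```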
- AppSpec.yml
- Name
- Alias
- CurrentVersion
- TargetVersion
- run Lambda, API Gateway, DynamoDB locally
- Lambda start/invoke
- API Gateway
- AWS Events (sample payloads for event resources)
- SAM Recipe
- Transform header – declares the template as a SAM template
- Write Code
- Package and Deploy – into S3 Bucket
- SAM commands
- sam init – creating a new SAM project
- sam build – resolve dependencies and construct deployment artifacts for all functions and layers in the SAM template.
- sam package – prepares the serverless application for deployment by zipping artifacts, uploading them to S3, and generating a CloudFormation template with references to the uploaded artifacts in S3. But, it doesn’t deploy the application.
- sam deploy – zips your code artifacts, uploads them to Amazon S3, and produces a packaged AWS SAM template file that it uses to deploy your application
- for nested applications, need the “CAPABILITY_AUTO_EXPAND” option
- compared with “aws cloudformation deploy”, which deploys a CloudFormation stack but expects that your artifacts are already packaged and uploaded to S3
- sam publish – publishes an AWS SAM application to the AWS Serverless Application Repository
- sam sync – syncs local changes to an already-deployed SAM application
- part of SAM Accelerate; reduces deployment latency for rapid development testing
- the “--code” option updates code without updating infrastructure (uses service APIs directly, bypassing CloudFormation)
- sam local
- can specify a named profile from your AWS CLI configuration using the --profile parameter with the sam local invoke command
- run aws configure with the --profile option to set the credentials for a named profile
- AWS SAM template file
- AWS::Serverless::Application – for nested application
- AWS::Serverless::Function – configuration information for creating a Lambda function
- AWS::Serverless::LayerVersion – creates a Lambda layer version (LayerVersion) that contains library or runtime code that’s needed by a Lambda function
- AWS::Serverless::Api – describes an API Gateway resource. It’s useful for advanced use cases where you want full control and flexibility when you configure your APIs. Mostly as part of event sources of “AWS::Serverless::Function”
- SAM Policy Templates
- apply permissions to Lambda Functions
- SAM Multiple Environments, using “samconfig.toml”
AWS Elastic Beanstalk
- deploy and scale web applications; Elastic Beanstalk provisions and manages the underlying resources:
- Amazon EC2 Instance
- Amazon CloudWatch
- ELB & ASG
- AWS S3
- RDS, DynamoDB
- Amazon SNS
- compiled to a CloudFormation stack
- Components
- Application
- Application Version
- Environment
- Web Server Tier and Worker Tier
- Deployment Method
- All at once, has downtime
- Rolling: runs under capacity during deployment, no additional cost
- Rolling with additional batches: compared to Rolling, keeps the application at full capacity (temporarily creates extra instances)
- Immutable: create new instances in a new ASG, then swap; zero downtime
- (Blue Green: new environment then swap, using Route53 with weighted policies)
- Traffic Splitting: canary testing

- Lifecycle
- at most 1,000 application versions; use a Lifecycle Policy to phase out old ones, based on time or space
- has option to retain source bundles on S3
- EB Extensions
- YAML/JSON files with a “.config” extension
- update defaults with “option_settings”
- place under the “.ebextensions/” folder under root of source code
- resources managed by .ebextensions are deleted if the environment goes away
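An illustrative `.ebextensions/options.config` updating defaults via option_settings (the namespaces shown are common ones; the concrete values are made-up examples):

```yaml
# .ebextensions/options.config
option_settings:
  aws:elasticbeanstalk:application:environment:
    APP_ENV: production          # environment variable for the app
  aws:autoscaling:asg:
    MinSize: 2                   # example ASG sizing defaults
    MaxSize: 6
```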
- EB Clone helps set up an environment with the exact same configuration
- Load Balancer type and configure
- RDS configure, but no data
- Environment variables
- EB Migration
- Once EB created, the Elastic Load Balancer (ELB) type cannot change
- Create another environment with new ELB, then using Route53 update or CNAME swap
- Decouple RDS with EB, for PROD
- extra configuration files can be added to the source bundle
- cron.yaml – schedule tasks
- env.yaml – configure the environment name, solution stack, and environment links
- Dockerrun.aws.json – multi-container Docker environments that are hosted in Elastic Beanstalk
- Elastic Beanstalk worker environments simplify the process by managing the Amazon SQS queue (with support of DLQ) and running a daemon process on each instance that reads from the queue

AWS Amplify
- create mobile and web applications (aka Elastic Beanstalk for mobile and web applications)
- Authentication (Cognito) + Data Store (AppSync + DynamoDB)
- End-to-End (E2E) test, using Cypress
AWS AppConfig
- deploy dynamic configuration change without code deployment; validate with JSON Schema or Lambda Function
- provides the functionality to manage feature flags, a powerful technique that allows developers to test and control new features in live environments
AWS AppSync
- extends Cognito Sync (user data, like app preferences or game state), also allowing multiple users to synchronize and collaborate in real time on shared data
- managed GraphQL service, combining multiple data sources
- retrieve data in real time with WebSocket or MQTT over WebSocket
- for mobile apps: local data access and data sync
- Security: API_KEY, AWS_IAM, OPENID_CONNECT, AMAZON_COGNITO_USER_POOLS

AWS Systems Manager
- focused on management and operations of AWS resources (EC2 and On-Premise), such as automation, patching, and configuration
- with SSM Agent installed on nodes
== CONTAINERS ==
Amazon Elastic Container Service (ECS)
- Container management service for Docker containers (ECS Task)
- Highly scalable / high performance, lets you run applications on an EC2 cluster
- Amazon Elastic Container Registry (ECR) is private repository for Docker images, the public version is Amazon ECR Public Gallery; backed by Amazon S3, access controlled through IAM
- ECS Launch Types
- Fargate Launch Type is serverless, managed by AWS
- EC2 Launch Type gives you direct access to the instances, but you have to manage them, with ECS Agent
- ECS Agent would use EC2 Instance Profile
- ECS Tasks each use an individual ECS Task Role, defined in the task definition
- mount EFS for ECS tasks to ensure all tasks in any AZ share the same data; in comparison, S3 cannot be mounted as a file system
- ECS Task definition is metadata in JSON, up to 10 containers in one file
- Image name
- Port Binding for Container and Host
- on EC2 Launch Type, if only the container port is defined, the ALB uses Dynamic Host Port Mapping; the EC2 instance's Security Group must then allow any port from the ALB's Security Group
- each task has its unique private IP on Fargate Launch, so only define the container port
- Memory and CPU required
- Environment variables (hardcoded, SSM Parameter Store, Secrets Manager, or files stored in S3)
- Networking
- IAM Role (One IAM Role per Task Definition)
- Logging configuration (CloudWatch)
- Data Volume to share data among multiple containers (Applications and Metrics/Logs, aka sidecar)
- EC2 Launch Type – using EC2 instance storage
- Fargate Launch Type – using ephemeral storage (20–200 GB); data is deleted when the container is destroyed
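A minimal task definition sketch covering the fields above (account ID, image URI, and all names are placeholders):

```json
{
  "family": "web-app",
  "taskRoleArn": "arn:aws:iam::123456789012:role/MyEcsTaskRole",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/web:latest",
      "portMappings": [{ "containerPort": 80 }],
      "memory": 512,
      "cpu": 256,
      "environment": [{ "name": "APP_ENV", "value": "prod" }],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-group": "/ecs/web-app",
          "awslogs-region": "us-east-1",
          "awslogs-stream-prefix": "web"
        }
      }
    }
  ]
}
```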
- ECS Task Placement strategy & Task Placement constraints – Only for EC2 Launch Type
- find instances that meet CPU/Memory/Port requirements
- find those that satisfy the task placement constraints
- distinctInstance – place each task on a different container instance
- memberOf – using Cluster Query Language, placing on certain instances (like t2.*)
- find those that satisfy the task placement strategies
- Binpack – cost-saving: place tasks on the instance with the least available CPU or memory, minimizing the number of instances in use
- Random
- Spread (can be AZ or instance ID)
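A toy sketch of the binpack idea on memory (the fleet and all numbers are invented): among instances that can fit the task, choose the one with the least remaining memory, so tasks pack tightly and fewer instances stay in use:

```python
def binpack(instances, task_mem):
    """binpack on memory: pick the instance with the LEAST remaining memory
    that still fits the task."""
    candidates = [i for i in instances if i["free_mem"] >= task_mem]
    if not candidates:
        return None  # no instance can host the task
    chosen = min(candidates, key=lambda i: i["free_mem"])
    chosen["free_mem"] -= task_mem
    return chosen["id"]

fleet = [{"id": "i-a", "free_mem": 4096}, {"id": "i-b", "free_mem": 1024}]
assert binpack(fleet, 512) == "i-b"   # tightest fit wins
assert binpack(fleet, 512) == "i-b"   # i-b still the tightest fit
assert binpack(fleet, 1024) == "i-a"  # only i-a can fit this task now
```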
- ECS does not use EC2 Auto Scaling; instead, it uses AWS Application Auto Scaling based on
- Average CPU Utilization
- Average Memory Utilization – Scale on RAM
- ALB Request Count Per Target
- AWS Application Auto Scaling policy can be
- Target Tracking – scale based on a target value for a specific CloudWatch metric
- Step Scaling – based on a specified CloudWatch Alarm
- Scheduled Scaling
- under EC2 Launch Type, EC2 instances can be auto-scaled by
- Auto Scaling Group Scaling – use EC2 ASG to check instance loadings (CPU, Memory, etc.)
- ECS Cluster Capacity Provider, paired with ASG
- AWS Copilot is a CLI tool for running apps on App Runner, ECS, and Fargate; integrates with CodePipeline for deployment
Amazon Elastic Kubernetes Service (EKS)
- EC2 Launch Type to deploy worker nodes; Fargate for serverless
- Kubernetes is cloud-agnostic
- Node Types
- Managed Node Groups
- AWS handles EC2 instances with ASG managed by EKS
- On-Demand or Spot instances
- Self-Managed Nodes
- you create and manage the EC2 instances with a self-defined ASG
- On-Demand or Spot instances
- AWS Fargate
- fully managed; no nodes to maintain
- Can specify StorageClass manifest on EKS cluster, leverage a Container Storage Interface (CSI) compliant driver
- Amazon EBS (EC2)
- Amazon EFS (EC2, Fargate)
- Amazon FSx for Lustre (EC2)
- Amazon FSx for NetApp ONTAP (EC2)
== MONITORING ==
AWS CloudWatch
- Metrics: collect and track key metrics for every AWS service
- namespace (specify a namespace for each data point, as new metric)
- a dimension is an attribute (instance id, environment, …)
- timestamps
- for EC2 memory
- CloudWatch does not monitor the memory, swap, and disk space utilization of your instances. If you need to track these metrics, you can install a CloudWatch agent in your EC2 instances.
- (EC2) Memory usage is a custom metric, using API PutMetricData
- for Lambda function
- The ConcurrentExecutions metric in Amazon CloudWatch explicitly measures the number of instances of a Lambda function that are running at the same time.
- StorageResolution can be 1 min (Standard) or 1/5/10/30 sec (High Resolution)
- custom metric data points can be timestamped up to 2 weeks in the past and 2 hours in the future
- Detailed Monitoring simply shortens the period to 1 minute; no extra metrics are added

- Logs: Collect, monitor, analyze and store log files
- Log Group – per application (to encrypt with KMS keys, you must use the CloudWatch Logs API)
- Log Stream – per instance / log file / container
- export
- Amazon S3: may take up to 12 hours, with the CreateExportTask API
- use Logs Subscriptions with a Subscription Filter to export real-time events to Kinesis Data Streams, Kinesis Data Firehose, or AWS Lambda
- Cross-Account Subscription (Subscription Filter -> Subscription Destination)
- Live Tail – watch logs in real time
- by default, no logs are sent from EC2 instances to CloudWatch
- CloudWatch Logs Agent – only push logs
- CloudWatch Unified Agent – push logs + collect metrics (extra RAM, Process, Swap) + centralized by SSM Parameter Store
- Metric Filters can trigger alarms; they apply only to data ingested after the filter is created (no historical backfill)
- With “aws logs associate-kms-key“, enable (AWS KMS) encryption for an existing log group, eliminating the need to recreate the log group or manually encrypt logs before submission
- Logs Insights
- facilitate in-depth analysis of log data
- enables users to run queries on log data collected from various AWS services and applications in real-time
- Alarms: react in real time to metrics / events
- based on a single metric; Composite Alarms monitor multiple other alarms
- Targets
- EC2
- EC2 ASG
- Amazon SNS
- Settings
- Period is the length of time to evaluate the metric or expression to create each individual data point for an alarm. It is expressed in seconds. If you choose one minute as the period, there is one datapoint every minute.
- Evaluation Period is the number of the most recent periods, or data points, to evaluate when determining alarm state.
- Datapoints to Alarm is the number of data points within the evaluation period that must be breaching to cause the alarm to go to the ALARM state. The breaching data points do not have to be consecutive, they just must all be within the last number of data points equal to Evaluation Period.
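The Period / Evaluation Periods / Datapoints-to-Alarm relationship above is an M-out-of-N check, sketched here in pure Python (threshold and sample values are invented):

```python
def alarm_state(datapoints, threshold, evaluation_periods, datapoints_to_alarm):
    """M-out-of-N evaluation: ALARM when at least `datapoints_to_alarm` of the
    last `evaluation_periods` datapoints breach the threshold; breaching
    points need not be consecutive."""
    recent = datapoints[-evaluation_periods:]
    breaching = sum(1 for v in recent if v > threshold)
    return "ALARM" if breaching >= datapoints_to_alarm else "OK"

# 3-of-5 alarm on CPU > 80: last five points are 90, 85, 20, 95, 88 -> 4 breaches
assert alarm_state([10, 90, 85, 20, 95, 88], 80, 5, 3) == "ALARM"
# only 2 breaches in the last five points -> stays OK
assert alarm_state([10, 20, 90, 30, 40, 85], 80, 5, 3) == "OK"
```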

- Synthetics Canary: monitor your APIs, URLs, Websites, …
- Events, now called Amazon EventBridge
- Schedule – cron job
- Event Pattern – rules to react/trigger services
- Event Bus, a router that receives events and delivers them to zero or more destinations (targets)
- (AWS) default, Partner, Custom
- Schema – the structure template for event (json)
- CloudWatch Evidently
- validate/serve new features to specified % of users only
- Launches (= feature flags) and Experiments (= A/B testing), and Overrides (specific variants assigned to specific user-id)
- evaluation events stored in CloudWatch Logs or S3
AWS X-Ray
- troubleshoot (rather than monitor) application performance and errors with a centralized service map visualization
- Request tracking across distributed systems
- Focus on Latency, Errors and Fault analysis
- Compatible
- AWS Lambda
- Elastic Beanstalk
- ECS
- ELB
- API Gateway
- EC2 Instances or any application server (even on premise)
- but X-Ray cannot track the memory and swap usage of the instance; only the CloudWatch Agent can
- Enable by
- AWS X-Ray SDK (on applications)
- install the X-Ray daemon (a low-level UDP packet listener on the OS) (on EC2 or ECS)
- a software application that listens for traffic on UDP port 2000, gathers raw segment data, and relays it to the AWS X-Ray API.
- for EC2, X-Ray daemon can be installed via user-data script
- for ECS, create a Docker image that runs the X-Ray daemon, upload it to a Docker image repository, and then deploy it to your Amazon ECS cluster
- Lambda runs the daemon automatically any time a function is invoked for a sampled request
- Enable X-Ray AWS Integration (IAM Role with proper permission) (on AWS services)
- for ElasticBeanstalk: to enable the X-Ray daemon by including the xray-daemon.config configuration file in the .ebextensions directory of your source code.
- Instrumentation means measuring a product's performance, diagnosing errors, and writing trace information
- AWS X-Ray receives data from services as segments. X-Ray then groups segments that have a common request into traces. X-Ray processes the traces to generate a service graph that provides a visual representation of your application.
- segments/subsegments -> traces -> service graph
- Segments: each application / service will send them
- Subsegments: if you need more details in your segment, especially for DynamoDB
- Trace: segments collected together to form an end-to-end trace
- A trace segment is just a JSON representation of a request that your application serves.
- Sampling: decrease the amount of requests sent to X-Ray, reduce cost
- (default) 1st request each second (aka reservoir: 1), and then 5% of additional requests (aka rate: 0.05)
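The default rule (reservoir 1, rate 0.05) can be modeled per one-second window. This is a conceptual sketch, not the SDK's actual sampler:

```python
import random

def make_sampler(reservoir=1, rate=0.05):
    """Model of the default X-Ray rule, per one-second window:
    always record the first `reservoir` requests, then `rate` of the rest."""
    state = {"second": None, "taken": 0}

    def sample(now_second):
        if state["second"] != now_second:          # new window: reset reservoir
            state["second"], state["taken"] = now_second, 0
        if state["taken"] < reservoir:             # reservoir: always sampled
            state["taken"] += 1
            return True
        return random.random() < rate              # remainder: sampled at `rate`

    return sample

sampler = make_sampler()
assert sampler(0) is True                          # first request of the second
rest = [sampler(0) for _ in range(10000)]
assert 0.02 < sum(rest) / len(rest) < 0.08         # remainder at roughly 5%
```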
- Annotations: Key Value pairs used to index traces (for search) and use with filters
- Metadata: “EXTRA” Key Value pairs, not indexed, not used for searching
- A subset of segment fields are indexed by X-Ray for use with filter expressions. You can search for segments associated with specific information in the X-Ray console or by using the GetTraceSummaries API.



- X-Ray APIs Policy
- AWSXrayWriteOnlyAccess
- PutTraceSegments
- PutTelemetryRecords
- GetSamplingRules
- GetSamplingTargets
- GetSamplingStatisticSummaries
- AWSXrayReadOnlyAccess – grant console access
- GetServiceGraph
- BatchGetTraces
- GetTraceSummaries
- GetTraceGraph
- AWSXRayDaemonWriteAccess
- AWSXrayFullAccess – Read + Write + configure encryption key settings and sampling rules
- APIs
- GetTraceSummaries – trace summaries, as a list of trace IDs of the application (also with annotations)
- BatchGetTraces – full traces, retrieve the list of traces (ie activity events)
- GetGroup – retrieves the group resource details.
- GetServiceGraph – shows which services process the incoming requests, including the downstream services that they call as a result.
- If a load balancer or other intermediary forwards a request to your application, X-Ray takes the client IP from the X-Forwarded-For header in the request instead of from the source IP in the IP packet.
Amazon Managed Grafana
- a fully managed and secure data visualization service that you can use to instantly query, correlate, and visualize operational metrics, logs, and traces

Use case | What is it optimized for? | Monitoring and observability services |
---|---|---|
Monitoring and alerting | These services are optimized to provide real-time visibility, proactive issue detection, resource optimization, and efficient incident response, contributing to overall application and infrastructure health. | – Amazon CloudWatch – Amazon CloudWatch Logs – Amazon EventBridge |
Application performance monitoring | These services provide comprehensive insights into application behavior, offer tools for identifying and resolving performance bottlenecks, aid in efficient troubleshooting, and contribute to delivering modern user experiences across distributed and web applications. | – Amazon CloudWatch Application Signals – Amazon Managed Service for Prometheus – AWS X-Ray – Amazon CloudWatch Synthetics |
Infrastructure observability | These services provide a holistic view of your cloud resources, helping you make more informed decisions about resource utilization, performance optimization, and cost-efficiency. | – Amazon CloudWatch Metrics – Amazon CloudWatch Container Insights |
Logging and analysis | These services help you efficiently manage and analyze log data, troubleshoot, detect anomalies, support security, meet compliance requirements, and get actionable insights into your applications and infrastructure. | – Amazon CloudWatch Logs Insights – Amazon CloudWatch Logs Anomaly Detection – Amazon Managed Grafana – Amazon OpenSearch Service – Amazon Kinesis Data Streams |
Security and compliance monitoring | Optimized to provide a robust security framework, enabling proactive threat detection, continuous monitoring, compliance tracking, and audit capabilities to help safeguard your AWS resources and maintain a secure and compliant environment. | – Amazon GuardDuty – AWS Config – AWS CloudTrail |
Network monitoring | These services provide visibility into network traffic, enhance security by detecting and preventing threats, enable efficient network traffic management, and support incident response activities. | – Amazon CloudWatch – Network Monitor – Amazon CloudWatch Internet Monitor – Amazon VPC Flow Logs – AWS Network Firewall |
Distributed tracing | These services provide a comprehensive view of the interactions and dependencies within your distributed applications. They enable you to diagnose performance bottlenecks, optimize application performance, and support the smooth functioning of complex systems by offering insights into how different parts of your application communicate and interact. | – AWS Distro for OpenTelemetry – AWS X-Ray – Amazon CloudWatch Application Signals (Preview) |
Hybrid and multicloud observability | Maintain reliable operations, provide modern digital experiences for your customers, and get help to meet service level objectives and performance commitments. | – Amazon CloudWatch (hybrid and multicloud support) |
== CICD ==
AWS CodePipeline
- [Machine Learning] Automate the entire process of training, testing, and deploying the models
- automating pipeline from code to deployments, as visual workflow
- Consists of stages:
- Each stage can have sequential actions and/or parallel actions
- Manual approval can be defined at any stage
- each stage can create artifacts, stored in S3 bucket
- use Amazon EventBridge for troubleshooting; CloudTrail to audit AWS API calls
AWS CodeBuild
- Compile the code, execute unit tests, and build the necessary artifacts
- build instructions in buildspec.yml, stored at the root of the code repository
- env
- variables – plaintext variables
- parameter-store – variables stored in SSM Parameter Store
- secrets-manager – variables stored in AWS Secrets Manager
- phases
- install – install dependencies you may need for your build
- pre_build – commands to execute before the build
- build – actual build commands
- post_build – finishing touches (e.g., zip output)
- artifacts – what to upload to S3 (encrypted with KMS)
- cache – files to cache (usually dependencies) to S3 for future build speedup
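An illustrative buildspec.yml exercising the sections above (the runtime, SSM parameter name, and commands are made-up examples):

```yaml
# buildspec.yml, at the root of the repository
version: 0.2
env:
  parameter-store:
    DB_HOST: /myapp/db/host        # hypothetical SSM parameter name
phases:
  install:
    runtime-versions:
      nodejs: 18
  pre_build:
    commands:
      - npm ci                     # install dependencies before the build
  build:
    commands:
      - npm run build
      - npm test
  post_build:
    commands:
      - zip -r app.zip dist/       # finishing touch: zip the output
artifacts:
  files:
    - app.zip                      # uploaded to S3 (KMS-encrypted)
cache:
  paths:
    - node_modules/**/*            # cached to S3 to speed up future builds
```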
AWS CodeDeploy
- [Machine Learning] Automatically release new versions of the models to various environments while avoiding downtime and handling the complexity of updating them
- deploy new application versions
- EC2 Instances, On-premises servers
- needs the CodeDeploy Agent on the target instance, with permission to access S3
- In-place Deployment (compatible with existing ASG)
- AllAtOnce: most downtime
- HalfAtATime: reduced capacity by 50%
- OneAtATime: slowest, lowest availability impact (health-check on every new instance deployed)
- Custom: define your %
- or Blue/Green Deployment (new ASG created) (also as one deployment attempt)
- must be using an ELB
- Lambda functions (integrated into SAM)
- with only Blue/Green deployment
- traffic would be redirected by
- Linear
- Canary (two deployment attempts only)
- AllAtOnce (one deployment attempt)
- ECS Services
- with only Blue/Green deployment
- needs Application Load Balancer (ALB) to control traffic
- Linear
- Canary (two deployment attempts only)
- AllAtOnce (one deployment attempt)
- not Elastic Beanstalk
- Automated Rollback capability
- Use appspec.yml to define actions (manage each application deployment as a series of lifecycle event hooks in CodeDeploy)
- Rollback = redeploy the last known good revision as a new deployment (with a new deployment ID)

AWS CodeArtifact
- store, publish, and share software packages (aka code dependencies)
- CodeArtifact seamlessly integrates with Amazon EventBridge, a service that automates actions responding to specific events, including any activity within a CodeArtifact repository. This integration allows you to establish rules that dictate the actions to be taken when a particular event occurs.

AWS CodeGuru
- a developer tool that provides intelligent recommendations to improve the quality of your codebase and for identifying an application’s most “expensive” lines of code in terms of resource intensiveness, CPU performance, and code efficiency
- uses Machine Learning for automated code reviews and application performance recommendations
- Supports Java and Python
- CodeGuru Reviewer – identify critical issues, security vulnerabilities
- CodeGuru Profiler
- CodeGuru Agent
- MaxStackDepth – the max depth of the method call chain
- MemoryUsageLimitPercent
- MinimumTimeForReportingInMilliseconds
- ReportingIntervalInMilliseconds
- SamplingIntervalInMilliseconds