
Automated Incident Response and Forensics Framework

Use Case

This use case was developed for a customer in the automotive industry operating a large set of accounts. Their problems were:

  • Incident Response and Forensics was a manual process prone to mistakes
  • Time-consuming process with many steps
  • Difficult for untrained personnel to perform

To address this we created the Automated Incident Response and Forensics framework. The framework aims to facilitate automated steps for incident response and forensics based on the AWS Incident Response White Paper.

Goals

The goal is to provide a set of processes, enabled by Lambda functions, in order to:

  • Provide an easy way to trigger the IR process with minimum knowledge
  • Provide automated, repeatable processes aligned with the AWS IR White Paper
  • Provide segregation of accounts to operate the automation steps, store artifacts and create forensic environments

Limitations

Note that this framework is not intended to generate artifacts that can be considered electronic evidence admissible in court.

Overview of the Framework

Automated Incident Response and Forensics follows a standard digital forensic process (or phases) consisting of:

  • Containment
  • Acquisition
  • Examination
  • Analysis

Investigations can be performed on static data (e.g. acquired memory or disk images) as well as dynamic, “live” but segregated systems.

By using this environment, a Security Operations Centre team can improve their security incident response process through:

  • Having the ability to perform forensics in a segregated environment, avoiding accidental compromise of production resources
  • Having a standardized, repeatable, automated process for containment and analysis
  • Allowing any environment owner to trigger the IR process; the only knowledge required is how to apply tags
  • Having a standardized, clean environment to perform incident analysis and forensics without the noise of a larger environment
  • Having the ability to create multiple analysis environments in parallel
  • Focusing SOC resources on incident response rather than on maintaining and documenting a cloud forensics environment
  • Moving away from a manual, hands-on process towards an automated one to achieve scalability
  • Using scripts for consistency and to avoid repetitive tasks

Additionally, customers avoid maintaining persistent infrastructure and pay for resources only when they need them.

Architecture

The environment consists of 2 main accounts: a Security account and a Forensics account. The reason for having 2 accounts is to separate them from any other customer accounts, which reduces the blast radius in case of a failed forensic analysis, ensures the isolation and protection of the integrity of the artifacts being analyzed, and keeps the investigation confidential. Separate accounts also avoid situations where threat actors might have exhausted all the resources immediately available to your compromised AWS account by hitting service quotas, preventing you from instantiating an Amazon EC2 instance to perform investigations. Having separate Security and Forensics accounts also allows for creating separate roles: a Responder for acquiring evidence and an Investigator for analyzing it. Each role has access only to its own account.

The Security account is where the 2 main Step Functions state machines for memory and disk image acquisition are created. Once running, they reach into the Member account (the account with the EC2 instances involved in an incident) and trigger a set of Lambda functions that gather either a memory or a disk dump. Those artifacts are then stored in the Forensics account.

The Forensics account holds the artifacts gathered by the state machines in the “Analysis artifacts” S3 bucket. The Forensics account also has an EC2 Image Builder pipeline that builds the AMI of a Forensics instance; currently it is based on the SANS SIFT Workstation (https://www.sans.org/tools/sift-workstation/). The build process uses the Maintenance VPC, which has connectivity to the Internet. The resulting AMI can later be used to spin up EC2 instances for analysis of the gathered artifacts in the Analysis VPC. The Analysis VPC does not have Internet connectivity. By default, 3 analysis subnets are created. More subnets can be created (up to 200, which is the quota for subnets per VPC), but those subnets need to be added to the VPC endpoints for SSM Session Manager to work in them.

[Architecture diagram]

Workflow

[Workflow diagram]

Pre-requisites

  1. 2 accounts
    • Security account - can be an existing account, preferably fresh
    • Forensic account - preferably fresh
  2. AWS Organizations set up
  3. In Member accounts:
    • The EC2 instance role needs access to S3 (Get/List) and must be accessible by SSM. It’s suggested to use the AWS managed policy:
      • AmazonSSMManagedInstanceCore
      • Note that this role will automatically be attached to the instance when IR is triggered; once the response has finished, all IAM rights to the instance are removed
    • VPC endpoints need to be added to the VPC and subnets in which the target EC2 instances reside. Those endpoints are: S3 (gateway), EC2Messages, SSM and SSMMessages (see the sketch after this list)
  4. If the EC2 instances don’t have the AWS CLI installed, Internet access will be required for disk snapshot and memory acquisition to work. In this case the scripts will reach out to the Internet to download the AWS CLI installation files and install them on the instance in scope.
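The VPC endpoints required in step 3 can be created with a few API calls. The following is a minimal boto3 sketch, with placeholder region, VPC, subnet and route table IDs that you would replace with your own.

import boto3

REGION = "eu-west-1"                          # placeholder
VPC_ID = "vpc-0123456789abcdef0"              # placeholder
SUBNET_IDS = ["subnet-0123456789abcdef0"]     # placeholder
ROUTE_TABLE_IDS = ["rtb-0123456789abcdef0"]   # placeholder

ec2 = boto3.client("ec2", region_name=REGION)

# S3 gateway endpoint (used for uploading acquired artifacts)
ec2.create_vpc_endpoint(
    VpcEndpointType="Gateway",
    VpcId=VPC_ID,
    ServiceName=f"com.amazonaws.{REGION}.s3",
    RouteTableIds=ROUTE_TABLE_IDS,
)

# Interface endpoints needed for SSM Session Manager / Run Command
for service in ("ssm", "ssmmessages", "ec2messages"):
    ec2.create_vpc_endpoint(
        VpcEndpointType="Interface",
        VpcId=VPC_ID,
        ServiceName=f"com.amazonaws.{REGION}.{service}",
        SubnetIds=SUBNET_IDS,
        PrivateDnsEnabled=True,
    )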

Installing the CloudFormation scripts

The CloudFormation templates are numbered, with the first word of each template name indicating the account in which it needs to be deployed. Note that the order of launching the CFN templates is important (a deployment sketch follows the list below).

  • 1-forensic-AnalysisVPCnS3Buckets.yaml: Deployed in the Forensics account; creates the S3 buckets and VPCs and enables CloudTrail
  • 2-forensic-MaintenanceVPCnEC2ImageBuilderPipeline.yaml: Deploys the maintenance VPC and image builder pipeline based on SANS SIFT
  • 3-security_IR-Disk_Mem_automation.yaml: Deploys the functions in the security account which enable disk and memory acquisition.
  • 4-security_LiME_Volatility_Factory.yaml: Triggers a build function to start creating the memory modules based on the given AMI IDs. Note that AMI IDs differ across regions. Whenever you need new memory modules, you can simply re-run this script with the new AMI IDs. You could consider integrating this with your golden image AMI builder pipelines (if used in your environment)
  • 5-member-IR-automation.yaml: Creates the member IR automation function which triggers the IR process. It allows for sharing EBS volumes across accounts, automated posting to Slack channels during the IR process, triggering the forensics process and isolating the instances after the process finishes.
  • 6-forensic-artifact-s3-policies.yaml: After all the scripts have been deployed this script fixes the permissions required for all the cross-account interactions.
  • 7-security-IR-vpc.yaml: Configures a VPC used for IR volume processing
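As a rough illustration of launching the templates in order, the sketch below deploys the first one with boto3; the stack name and region are assumptions, and in practice the console or the AWS CLI works just as well.

import boto3

cfn = boto3.client("cloudformation", region_name="eu-west-1")  # placeholder region

with open("1-forensic-AnalysisVPCnS3Buckets.yaml") as f:
    template_body = f.read()

# Stack name is an assumption; the templates create IAM resources, hence the capability
cfn.create_stack(
    StackName="forensic-analysis-vpc-s3",
    TemplateBody=template_body,
    Capabilities=["CAPABILITY_NAMED_IAM"],
)

# Wait for this stack to finish before launching the next template, since order matters
cfn.get_waiter("stack_create_complete").wait(StackName="forensic-analysis-vpc-s3")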

Operating the Incident Response Framework

The incident response framework is triggered by creating a tag with key SecurityIncidentStatus and value Analyze on a given EC2 instance. This triggers the member Lambda function, which automatically starts isolation and memory/disk acquisition. At the end of the process (or on failure) it re-tags the asset with Contain, which triggers containment: the instance is fully isolated with a security group that allows no inbound or outbound traffic and an IAM role that disallows all access.

When an EC2 instance is compromised or suspected to be compromised, a tag must be attached to it with the key SecurityIncidentStatus and the value "Analyze" or "Contain" (note that these values are case sensitive).

This can be done from the console:

[Console screenshot]

Or use the AWS CLI:

aws ec2 create-tags --resources <instance-id> --tags Key=SecurityIncidentStatus,Value=Contain

The tag change event will be forwarded to the Lambda function, which will evaluate the tag value. If the tag value matches the analyze or containment value, it will proceed with executing the corresponding actions (a code sketch follows the list below):

  • Enable Termination Protection
  • For "Analyze":
    • Attach the correct IAM profiles required for SSM and S3
    • Send an SNS message to the Security Incident response SNS topic in the security account to trigger the automated flow
  • For "Contain":
    • Attach an IAM profile which disables any access to the AWS API
    • Create and attach a security group that prevents any in- and outbound traffic
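The containment actions above can be pictured roughly with boto3. This is a minimal sketch of the idea, not the actual member Lambda function; the isolation security group and deny-all instance profile are assumptions you would create beforehand.

import boto3

ec2 = boto3.client("ec2")

def contain_instance(instance_id, isolation_sg_id, deny_all_profile_arn):
    # Enable termination protection so the instance cannot be destroyed mid-investigation
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        DisableApiTermination={"Value": True},
    )
    # Replace all security groups with one that allows no inbound or outbound traffic
    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        Groups=[isolation_sg_id],
    )
    # Attach an instance profile whose policy denies all AWS API access
    # (assumes no profile is currently attached; otherwise it would have to be replaced)
    ec2.associate_iam_instance_profile(
        IamInstanceProfile={"Arn": deny_all_profile_arn},
        InstanceId=instance_id,
    )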

The flow can also be triggered directly from the Security account through the Step Functions state machine: [Console screenshot]

In the pop-up form, fill out the required details. The “Name” field of the execution is optional but helps to identify previous executions (as seen on the screen above in point 4). [Console screenshot] The “Input” field, although described as optional, is in fact required by the Memory and Disk Acquisition state machines: it provides the parameters the underlying scripts need to run successfully. The format used is JSON. The following parameters are used:

  1. CaseId – String for case ID. It can be anything that helps you map it to your security incident management system. The fully automated system generates this automatically.
  2. InstanceId – the ID of the instance on which you intend to perform the memory or disk acquisition
  3. Account – the Member account number where the instance is running
  4. Region – the region name where the instance is running
  5. RetainArtefacts – whether the volume snapshots should also be retained in the Security account after they have been processed and stored in the IR-Artifacts bucket in the Forensics account

An example execution JSON:

{
  "CaseId": "INC456789",
  "InstanceId": "i-3f182ce62a7962a53",
  "Account": "234567890123",
  "Region": "us-east-2",
  "RetainArtefacts": "true"
}
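The console is not the only way to supply this input; the same execution could be started programmatically. A minimal boto3 sketch follows, assuming a disk acquisition state machine in the Security account (the ARN shown is a placeholder).

import json
import boto3

sfn = boto3.client("stepfunctions", region_name="us-east-2")

execution_input = {
    "CaseId": "INC456789",
    "InstanceId": "i-3f182ce62a7962a53",
    "Account": "234567890123",
    "Region": "us-east-2",
    "RetainArtefacts": "true",
}

sfn.start_execution(
    # Placeholder ARN - look up the actual state machine in the Security account
    stateMachineArn="arn:aws:states:us-east-2:111111111111:stateMachine:DiskAcquisition",
    name="INC456789-disk",  # optional, mirrors the "Name" field in the console
    input=json.dumps(execution_input),
)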
  1. After the necessary parameters have been provided, press the “Start execution” button on the bottom right.

  2. As the state machine execution progresses, you can check the details of each step. Click on a completed step (marked in green) and then click the link to see the CloudWatch Logs. This is useful when troubleshooting. [Console screenshot]

  3. After a successful execution of the state machine, the artifacts can be found in the Forensics account in the “-ir-artifacts” S3 bucket, inside the folder named after the CaseId parameter: [Console screenshot]
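To check the results without the console, the CaseId prefix can be listed directly. A small sketch follows, assuming an artifacts bucket named example-ir-artifacts (the real bucket name carries an account-specific prefix).

import boto3

s3 = boto3.client("s3")
case_id = "INC456789"

# List everything stored under the case folder in the artifacts bucket
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="example-ir-artifacts", Prefix=f"{case_id}/"):
    for obj in page.get("Contents", []):
        print(obj["Key"], obj["Size"])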

SecurityHub Actions

If you want to create a custom action so that you can use the dropdown box from Security Hub, you can deploy the CFN template found under “modules/Security Hub Custom Actions”. Subsequently you will need to modify the “IRAutomation” role in each of the member accounts to allow the Lambda function that executes the action to assume it.

SecurityHub Action

To do this, go to each member account and find the “IRAutomation” IAM role: [Console screenshot] Then add “arn:aws:iam::9999999:role/SecurityHubCustomActionLambdaRole” (replace 9999999 with your Security account ID) so that the policy looks like this:

{
   "Version":"2012-10-17",
   "Statement":[
      {
         "Effect":"Allow",
         "Principal":{
            "AWS":[
               "arn:aws:iam::9999999:role/MemoryAutomationLambdaRole",
               "arn:aws:iam::9999999:role/SnapshotAutomationLambdaRole",
               "arn:aws:iam::9999999:role/SecurityHubCustomActionLambdaRole"
            ]
         },
         "Action":"sts:AssumeRole"
      }
   ]
}
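If you would rather script this change across many member accounts, the trust policy can be amended programmatically. A boto3 sketch follows, assuming credentials for the member account and your Security account ID in place of 9999999.

import json
import boto3

iam = boto3.client("iam")
role_name = "IRAutomation"
# Replace 9999999 with your Security account ID
new_principal = "arn:aws:iam::9999999:role/SecurityHubCustomActionLambdaRole"

# Fetch the current trust policy and add the SecurityHub custom action role to it
policy = iam.get_role(RoleName=role_name)["Role"]["AssumeRolePolicyDocument"]
for statement in policy["Statement"]:
    principals = statement["Principal"].get("AWS", [])
    if isinstance(principals, str):
        principals = [principals]
    if new_principal not in principals:
        principals.append(new_principal)
    statement["Principal"]["AWS"] = principals

iam.update_assume_role_policy(RoleName=role_name, PolicyDocument=json.dumps(policy))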

Once you have done that, you can trigger responses on EC2 events by using the actions dropdown in Security Hub as shown above.

Future Roadmap

  • Support for Windows
  • Support for ARM based AMIs
  • Better way of generating memory modules automatically without having to recreate the CloudFormation stack
