AWS Web Stacks

Easily create AWS managed resources in an isolated VPC for hosting web applications.

https://circleci.com/gh/caktus/aws-web-stacks.svg?style=svg

AWS Web Stacks is a library of CloudFormation templates that dramatically simplify hosting web applications on AWS. The library supports using Elastic Beanstalk, ECS, EKS, EC2 instances (via an AMI you specify), or Dokku for the application server(s) and provides auxiliary managed services such as an RDS instance, an ElastiCache instance, an Elasticsearch instance, a (free) SSL certificate via AWS Certificate Manager, an S3 bucket for static assets, an ECR repository for hosting Docker images, etc. All resources (that support VPCs) are created in a self-contained VPC, which may use a NAT gateway (if you want to pay for that) or not, and resources that require API authentication (such as S3 or Elasticsearch) are granted permissions via the IAM instance role and profile assigned to the application servers created in the stack.

The CloudFormation templates are written in troposphere, which allows for some validation at build time and simplifies the management of several related templates.
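
As a rough illustration of the troposphere pattern (a minimal, hypothetical sketch, not code taken from this repository), resources are declared as Python objects and the CloudFormation template is rendered from them:

# Minimal troposphere sketch (illustrative only; not from aws-web-stacks itself).
from troposphere import Template
from troposphere.s3 import Bucket

t = Template(Description="Example template: a single S3 bucket")
t.add_resource(Bucket("AssetsBucket"))  # unknown or mistyped properties raise errors when the template is built
print(t.to_json())  # emit the CloudFormation template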

If a NAT gateway is not used, it's possible to create a fully-managed, self-contained hosting environment for your application entirely within the free tier on AWS (albeit not with all stacks; EKS, for example, has no free tier). To try it out, select one of the following:

                      Elastic Beanstalk   ECS          EKS          EC2 Instances   Dokku
Without NAT Gateway   EB-No-NAT           ECS-No-NAT   EKS-No-NAT   EC2-No-NAT      Dokku-No-NAT
With NAT Gateway      EB-NAT              ECS-NAT      EKS-NAT      EC2-NAT         n/a

If you'd like to review the CloudFormation template first, or update an existing stack, you may also wish to use the YAML template directly:

                      Elastic Beanstalk   ECS               EKS               EC2 Instances     Dokku
Without NAT Gateway   eb-no-nat.yaml      ecs-no-nat.yaml   eks-no-nat.yaml   ec2-no-nat.yaml   dokku-no-nat.yaml
With NAT Gateway      eb-nat.yaml         ecs-nat.yaml      eks-nat.yaml      ec2-nat.yaml      n/a

Documentation

In addition to this README, further documentation is available at http://aws-web-stacks.readthedocs.io/

Elastic Beanstalk, Elastic Container Service, EC2, Dokku, or EKS?

Elastic Beanstalk is the recommended starting point. Elastic Beanstalk provides a preconfigured autoscaling setup, supports automated, managed updates to the underlying servers, allows environment variables to be changed without recreating the underlying service, and includes its own command line tool for managing deployments. The Elastic Beanstalk environment uses the Multicontainer Docker platform to maximize flexibility in terms of the application(s) and container(s) deployed to the stack.

Elastic Container Service (ECS) or Elastic Kubernetes Service (EKS) might be useful if complex container service definitions are required.

If you prefer to configure application servers manually using Ansible, Salt, Chef, Puppet, or another such tool, choose the EC2 option. Be aware that the instances created are managed by an Auto Scaling group; after the initial instances are created, suspend the scaling processes on that group if you don't want it to bring up new (unprovisioned) instances or potentially terminate one of your instances should it appear unhealthy, e.g.:

aws autoscaling suspend-processes --auto-scaling-group-name <your-ag-name>

For very simple, Heroku-like deploys, choose the Dokku option. This will give you a single EC2 instance based on Ubuntu 16.04 LTS with Dokku pre-installed and global environment variables configured that will allow your app to find the RDS, ElastiCache, and Elasticsearch nodes created with this stack.

NAT Gateways

NAT Gateways have the added benefit of preventing inbound network connections from the public internet to the EC2 instances within the VPC, but come at an added cost (and have no free tier).

If a NAT Gateway stack is selected, you'll have the option of creating a bastion host or VPN server in the stack, using an AMI and instance type of your choice. The bastion type selected will determine which ports are opened by default for this host. If SSH, only SSH traffic will be allowed from the IP address or subnet configured by the AdministratorIPAddress parameter. If OpenVPN, HTTPS and SSH traffic will be allowed from the AdministratorIPAddress, and OpenVPN UDP traffic from any address. Additional ports will need to be opened manually via the AWS console or API.
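
If you later need to open an additional port to the bastion from the API, a hedged boto3 sketch follows; the security group ID, port, and CIDR below are placeholders:

import boto3

ec2 = boto3.client("ec2")
ec2.authorize_security_group_ingress(
    GroupId="sg-0123456789abcdef0",   # the bastion host's security group (placeholder)
    IpProtocol="tcp",
    FromPort=8080,                    # the additional port to open (placeholder)
    ToPort=8080,
    CidrIp="203.0.113.0/24",          # the network that should be allowed in (placeholder)
)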

Stack Creation Process

Creating a stack takes approximately 30-35 minutes. The CloudFront distribution and RDS instance typically take the longest to finish, and the EB environment or ECS service creation will not begin until all of its dependencies, including the CloudFront distribution and RDS instance, have been created.

SSL Certificate

For the Elastic Beanstalk, Elastic Container Service, and EC2 (non-GovCloud) options, an automatically-generated SSL certificate is included. The certificate requires approval from the domain owner before it can be issued, and your stack creation will not finish until you approve the request. Be on the lookout for an email from Amazon to the domain owner (as seen in a whois query) and follow the link to approve the certificate. If you're using a .io domain, be aware that prior steps may be necessary to receive email for .io domains, because domain owner emails cannot be discovered via whois.

Manual ACM Certificates

You also have the option to not create a certificate as part of the stack provisioning process. If you do this, an HTTPS listener (and corresponding certificate) can be manually attached to the load balancer after stack creation via the AWS Console, or with awscli using the steps below.

To request a new certificate using DNS validation, run the following command with --domain-name matching your desired domain:

aws acm request-certificate --domain-name [DOMAIN NAME] --validation-method DNS

You can look up the CNAME record name and value needed for DNS validation using describe-certificate:

aws acm list-certificates
aws acm describe-certificate --certificate-arn=YOUR-CertificateArn

Add the listed CNAME to your DNS provider to complete the verification process.
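
If you'd rather script this lookup, a hedged boto3 sketch that prints the validation record for a certificate (the certificate ARN is a placeholder):

import boto3

acm = boto3.client("acm")
cert = acm.describe_certificate(
    CertificateArn="arn:aws:acm:us-east-1:111122223333:certificate/EXAMPLE"  # placeholder
)["Certificate"]
for option in cert["DomainValidationOptions"]:
    record = option.get("ResourceRecord", {})
    print(record.get("Name"), record.get("Type"), record.get("Value"))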

Once verified, add an HTTPS listener to the environment's ELB:

aws elb describe-load-balancers --query "LoadBalancerDescriptions[*].LoadBalancerName"
aws elb create-load-balancer-listeners --load-balancer-name [LB NAME] \
                                       --listeners "SSLCertificateId=[CERTIFICATE-ARN],Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80"

Encryption (using AWS Key Management Service)

Server-side encryption support is available, via the UseAES256Encryption parameter, on the following AWS resources (a hedged example of supplying this parameter follows the list):

  • EC2 EBS (for application EC2 instances and bastion host)
  • ElastiCache Redis (ReplicationGroup)
  • RDS
  • S3
  • EKS Envelope Encryption (via EnableEksEncryptionConfig)
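
For example, the parameter can be supplied when creating the stack programmatically. This is a hedged boto3 sketch: the template URL is a placeholder, and the parameter value format and required capabilities are assumptions, not confirmed by this README.

import boto3

cfn = boto3.client("cloudformation")
cfn.create_stack(
    StackName="my-web-stack",
    TemplateURL="https://example-bucket.s3.amazonaws.com/eb-nat.yaml",  # placeholder URL
    Parameters=[
        {"ParameterKey": "UseAES256Encryption", "ParameterValue": "true"},
        # ...plus the other parameters required by the template you chose
    ],
    Capabilities=["CAPABILITY_IAM"],  # assumed: the stack creates IAM roles/profiles
)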

By default, when enabled, an AWS managed CMK (customer master key) will be created the first time you try to create an encrypted resource within that service. AWS will manage the policies associated with AWS managed CMKs on your behalf. You can track AWS managed keys in your account and all usage is logged in AWS CloudTrail, but you have no direct control over the keys themselves. These keys will be shared across all resources utilizing default encryption within your AWS account.

Customer Managed CMK

The CustomerManagedCmkArn parameter allows your stack to be encrypted with a Customer Managed CMK. You have full control over these CMKs, including establishing and maintaining their key policies, IAM policies, and grants, enabling and disabling them, rotating their cryptographic material, adding tags, creating aliases that refer to the CMK, and scheduling the CMKs for deletion.

Required CMK Key Policy for Use with Encrypted Volumes

Important: If you specify a customer managed CMK, several steps are required to support Amazon EBS encryption within Amazon EC2 Auto Scaling.

1. You (or your account administrator) must give the appropriate service-linked role access to the CMK, so that Amazon EC2 Auto Scaling can launch instances on your behalf. To do this, you must modify the CMK's key policy; if omitted, auto scaling will fail to launch instances. See Required CMK Key Policy for Use with Encrypted Volumes for more information, and the illustrative sketch after this list.

2. You must encrypt the AMI specified in the AMI parameter with your customer managed CMK. Existing AMIs can easily be copied and encrypted with your key from within the AWS Console. Follow the steps in Copying an AMI and use your customer managed CMK ARN when prompted for a Master Key. Once copied, use the new AMI for your stack AMI parameter.
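
For step 1 above, a hedged boto3 sketch of the kind of policy statements AWS documents for the EC2 Auto Scaling service-linked role; the account ID and key ID are placeholders, and the linked AWS documentation remains the authoritative source:

import json

import boto3

# Placeholder ARN for the EC2 Auto Scaling service-linked role in your account.
ROLE_ARN = ("arn:aws:iam::111122223333:role/aws-service-role/"
            "autoscaling.amazonaws.com/AWSServiceRoleForAutoScaling")

statements = [
    {   # allow the service-linked role to use the CMK for EBS encryption
        "Sid": "AllowServiceLinkedRoleUseOfTheCMK",
        "Effect": "Allow",
        "Principal": {"AWS": ROLE_ARN},
        "Action": ["kms:Encrypt", "kms:Decrypt", "kms:ReEncrypt*",
                   "kms:GenerateDataKey*", "kms:DescribeKey"],
        "Resource": "*",
    },
    {   # allow the role to create grants for AWS resources (encrypted volumes)
        "Sid": "AllowAttachmentOfPersistentResources",
        "Effect": "Allow",
        "Principal": {"AWS": ROLE_ARN},
        "Action": "kms:CreateGrant",
        "Resource": "*",
        "Condition": {"Bool": {"kms:GrantIsForAWSResource": "true"}},
    },
]

kms = boto3.client("kms")
key_id = "your-cmk-key-id"  # placeholder
policy = json.loads(kms.get_key_policy(KeyId=key_id, PolicyName="default")["Policy"])
policy["Statement"].extend(statements)
kms.put_key_policy(KeyId=key_id, PolicyName="default", Policy=json.dumps(policy))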

Resources Created

The following is a partial list of resources created by this stack, when Elastic Beanstalk is used:

  • ApplicationRepository (AWS::ECR::Repository): A Docker image repository that your EB environment or ECS cluster will have access to pull images from.
  • AssetsBucket (AWS::S3::Bucket): An S3 bucket for storing application-related static assets. Permissions are set up automatically so your application can put new assets via the S3 API.
  • AssetsDistribution (AWS::CloudFront::Distribution): A CloudFront distribution corresponding to the above S3 bucket.
  • Certificate (AWS::CertificateManager::Certificate): An SSL certificate tied to the Domain Name specified during setup. Note that the "Approve" link in the automated email sent to the domain owner as part of certificate creation must be clicked before stack creation will finish.
  • EBApplication (AWS::ElasticBeanstalk::Application): The Elastic Beanstalk application.
  • EBEnvironment (AWS::ElasticBeanstalk::Environment): The Elastic Beanstalk environment, which will be pre-configured with the environment variables specified below.
  • Elasticsearch (AWS::Elasticsearch::Domain): An Elasticsearch instance, which your application may use for full-text search, logging, etc.
  • PostgreSQL (AWS::RDS::DBInstance): The RDS instance for your application. Includes a security group to allow access only from your EB or ECS instances in this stack. Note: this CloudFormation resource is named "PostgreSQL" for backwards-compatibility reasons, but the RDS instance can be configured with any database engine supported by RDS.
  • Redis (AWS::ElastiCache::CacheCluster): The Redis ElastiCache instance for your application. Includes a cache security group to allow access only from your EB or ECS instances in this stack.
  • Vpc (AWS::EC2::VPC): The VPC that contains all relevant stack-related resources (such as the EB or ECS EC2 instances, the RDS instance, and ElastiCache instance). The VPC is created with two subnets in different availability zones so that, for MultiAZ RDS instances or EB/ECS clusters with multiple EC2 instances, resources will be spread across multiple availability zones automatically.

GovCloud Support

AWS GovCloud does not support Elastic Beanstalk, Elastic Container Service, Certificate Manager, CloudFront, or Elasticsearch. You can still create a reduced stack in GovCloud by downloading one of the following templates and uploading it to CloudFormation via the AWS Management Console:

Without NAT Gateway   gc-no-nat.yaml
With NAT Gateway      gc-nat.yaml

This template will create:

  • a VPC and the associated subnets,
  • an RDS instance,
  • a Redis instance,
  • an Elastic Load Balancer (ELB),
  • an Auto Scaling Group and associated Launch Configuration, and
  • the number of EC2 instances you specify during stack creation (using the specified AMI).

There is no way to manage environment variables when using straight EC2 instances like this, so you are responsible for selecting the appropriate AMI and configuring it to serve your application on the specified port, with all of the necessary secrets and environment variables. Note that the Elastic Load Balancer will not direct traffic to your instances until the health check you specify during stack creation returns a successful response.

Environment Variables within your server instances

Once your environment is created, you'll have an Elastic Beanstalk (EB) or Elastic Container Service (ECS) environment with the environment variables you need to run a containerized web application. These environment variables are listed below (a short usage sketch follows the list):

  • AWS_REGION: The AWS region in which your stack was created.
  • AWS_STORAGE_BUCKET_NAME: The name of the S3 bucket in which your application should store static assets.
  • AWS_PRIVATE_STORAGE_BUCKET_NAME: The name of the S3 bucket in which your application should store private/uploaded files or media. Make sure you configure your storage backend to require authentication to read objects and encrypt them at rest, if needed.
  • CDN_DOMAIN_NAME: The domain name of the CloudFront distribution connected to the above S3 bucket; you should use this (or the S3 bucket URL directly) to refer to static assets in your HTML.
  • ELASTICSEARCH_ENDPOINT: The domain name of the Elasticsearch instance. If (none) is selected for the ElasticsearchInstanceType during stack creation, the value of this variable will be an empty string ('').
  • ELASTICSEARCH_PORT: The recommended port for connecting to Elasticsearch (defaults to 443).
  • ELASTICSEARCH_USE_SSL: Whether or not to use SSL (defaults to 'on').
  • ELASTICSEARCH_VERIFY_CERTS: Whether or not to verify Elasticsearch SSL certificates. This should work fine with AWS Elasticsearch (the instance provides a valid certificate), so this defaults to 'on' as well.
  • DOMAIN_NAME: The domain name you specified when creating the stack, which will be associated with the automatically-generated SSL certificate and as an allowed origin in the CORS configuration for the S3 buckets.
  • ALTERNATE_DOMAIN_NAMES: A comma-separated list of alternate domain names provided to the stack. These domains, if any, will also be included in the automatically-generated SSL certificate and S3 CORS configuration.
  • SECRET_KEY: The secret key you specified when creating this stack.
  • DATABASE_URL: The URL to the RDS instance created as part of this stack. If (none) is selected for the DatabaseClass during stack creation, the value of this variable will be an empty string ('').
  • DATABASE_REPLICA_URL: The URL to the RDS database replica instance. This is an empty string if there's no replica database.
  • CACHE_URL: The URL to the Redis or Memcached instance created as part of this stack (may be used as a cache or session storage, e.g.). If using Redis, note that it supports multiple databases and no database ID is included as part of the URL, so you should append a forward slash and the integer index of the database, if needed, e.g., /0. If (none) is selected for the CacheNodeType during stack creation, the value of this variable will be an empty string ('').
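
As a hedged illustration (not part of the stack itself), an application might read a few of these variables as follows; the handling of empty values mirrors the notes above:

import os

# Values are injected by the stack; several may be empty strings if the
# corresponding resource was not created (see the notes above).
AWS_STORAGE_BUCKET_NAME = os.environ.get("AWS_STORAGE_BUCKET_NAME", "")
DATABASE_URL = os.environ.get("DATABASE_URL", "")
CACHE_URL = os.environ.get("CACHE_URL", "")
if CACHE_URL:
    CACHE_URL = CACHE_URL.rstrip("/") + "/0"  # pick a Redis database index, as noted above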

When running an EB stack, you can view and edit the keys and values for all environment variables on the fly via the Elastic Beanstalk console or command line tools.

Elasticsearch Authentication

Since AWS Elasticsearch does not support VPCs, the Elasticsearch instance in this stack cannot rely on network isolation and does not accept requests from arbitrary clients. The default policy associated with the instance requires HTTP(S) requests to be signed using AWS Signature Version 4. The instance role associated with the EC2 instances created in this stack (whether using Elastic Beanstalk, Elastic Container Service, or EC2 directly) is authorized to make requests to the Elasticsearch instance; those credentials may be obtained from the EC2 instance metadata.

If you're using Python, credentials may be obtained automatically using Boto and requests signed using the aws-requests-auth package.
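
For example, a hedged sketch using the aws-requests-auth package together with the ELASTICSEARCH_ENDPOINT and AWS_REGION variables described above:

import os

import requests
from aws_requests_auth.boto_utils import BotoAWSRequestsAuth

host = os.environ["ELASTICSEARCH_ENDPOINT"]
auth = BotoAWSRequestsAuth(
    aws_host=host,
    aws_region=os.environ["AWS_REGION"],
    aws_service="es",  # sign requests for the Elasticsearch service
)
response = requests.get(f"https://{host}/_cluster/health", auth=auth)
print(response.json())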

Deployment to Elastic Beanstalk

You can deploy your application to an Elastic Beanstalk stack created with this template as follows.

First, build and push your docker image to the ECR repository created by this stack (you can also see these commands with the appropriate variables filled in by clicking the "View Push Commands" button on the Amazon ECS Repository detail page in the AWS console):

$(aws ecr get-login --region <region>)  # $(..) will execute the output of the inner command
docker build -t <stack-name> .
docker tag <stack-name>:latest <account-id>.dkr.ecr.<region>.amazonaws.com/<stack-name>:latest
docker push <account-id>.dkr.ecr.<region>.amazonaws.com/<stack-name>:latest

Once working, you might choose to execute these commands from the appropriate point in your CI/CD pipeline.

Next, create a Dockerrun.aws.json file in your project directory, pointing it to the image you just pushed:

{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "my-app",
      "image": "<account-id>.dkr.ecr.<region>.amazonaws.com/<stack-name>:latest",
      "essential": true,
      "memory": 512,
      "portMappings": [
        {
          "hostPort": 80,
          "containerPort": 8000
        }
      ],
      "logConfiguration": {
        "logDriver": "awslogs",
        "options": {
          "awslogs-region": "<region>",
          "awslogs-group": "<log group>",
          "awslogs-stream-prefix": "my-app"
        }
      }
    }
  ]
}

You can add and link other container definitions, such as an Nginx proxy or background task workers, if desired.

A single CloudWatch Logs group will be created for you. You can find its name by navigating to the AWS CloudWatch Logs console (after stack creation has finished). If you prefer to create your own log group, you can do so with the aws command line tool:

pip install -U awscli
aws logs create-log-group --log-group-name <log-group-name> --region <region>

Finally, you'll need to install the AWS and EB command line tools, commit or stage for commit the Dockerrun.aws.json file, and deploy the application:

pip install -U awscli awsebcli
git add Dockerrun.aws.json
eb init  # select the existing EB application and environment, when prompted
eb deploy --staged  # or just `eb deploy` if you've committed Dockerrun.aws.json

Once complete, the EB environment should be running a copy of your container. To troubleshoot any issues with the deployment, review events and logs via the Elastic Beanstalk section of the AWS console.

Dokku

When creating a Dokku stack, you may find it advantageous to upload your normal SSH public key to AWS, rather than using one that AWS generates. This way, you'll already be set up to deploy to your Dokku instance without needing to keep track of an extra SSH private key.

The CloudFormation stack creation should not finish until Dokku is fully installed; cfn-signal is used in the template to signal CloudFormation once the installation is complete.

DNS

After the stack is created, you'll want to inspect the Outputs for the PublicIP of the instance and create a DNS A record (possibly including a wildcard record, if you're using vhost-based apps) for your chosen domain.

For help creating a DNS record, please refer to the Dokku DNS documentation.

Environment Variables

The environment variables for the other resources created in this stack will be passed to Dokku as global environment variables.

If metadata associated with the Dokku EC2 instance changes, updates to environment variables, if any, will be passed to the live server via cfn-hup. Depending on the nature of the update, this may or may not result in the instance being stopped and restarted. Inspect the stack update confirmation page carefully to avoid any unexpected instance recreations.

Deployment

You can create a new app on the remote server like so, using the same SSH key that you specified during the stack creation process (if you didn't use your shell's default SSH key, you'll need to add -i /path/to/private_key to this command):

ssh dokku@<your domain or IP> apps:create python-sample

and then deploy Heroku's Python sample to that app:

git clone https://github.com/heroku/python-sample.git
cd python-sample
git remote add dokku dokku@<your domain or IP>:python-sample
git push dokku master

You should be able to watch the build complete in the output from the git push command. If the deploy completes successfully, you should be able to see "Hello world!" at http://python-sample.your.domain/

For additional help deploying to your new instance, please refer to the Dokku documentation.

Let's Encrypt

The Dokku stack does not create a load balancer and hence does not include a free SSL certificate via AWS Certificate Manager, so let's create one with the Let's Encrypt plugin and add a cron job to automatically renew the cert as needed:

ssh ubuntu@<your domain or IP> sudo dokku plugin:install https://github.com/dokku/dokku-letsencrypt.git
ssh dokku@<your domain or IP> config:set --no-restart python-sample DOKKU_LETSENCRYPT_EMAIL=you@example.com
ssh dokku@<your domain or IP> letsencrypt python-sample
ssh dokku@<your domain or IP> letsencrypt:cron-job --add python-sample

The Python sample app should now be accessible over HTTPS at https://python-sample.your.domain/

Creating or updating templates

Templates built from the latest release of aws-web-stacks will be available in S3 (see links near the top of this file). They're built with generic defaults.

Templates are built by setting some environment variables with your preferences and then running python -c 'import stack' (see the Makefile). The template file is output to standard output. It's easy to do this on one line:

USE_EC2=on python -c 'import stack' >my_ec2_stack_template.yaml
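
If you prefer to drive this from a Python script rather than the shell, a hedged sketch follows; it relies on the behavior described above, namely that the template is written to standard output when the stack module is imported:

import contextlib
import io
import os

os.environ["USE_EC2"] = "on"  # choose the stack variant before importing

buffer = io.StringIO()
with contextlib.redirect_stdout(buffer):
    import stack  # noqa: F401  -- the template is emitted at import time

with open("my_ec2_stack_template.yaml", "w") as f:
    f.write(buffer.getvalue())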

Here are the environment variables that control the template creation.

USE_EC2=on
Create EC2 instances directly.
USE_GOVCLOUD=on
Create EC2 instances directly, but disable AWS services that aren't available in GovCloud, such as AWS Certificate Manager and Elasticsearch.
USE_EB=on
Create an Elastic Beanstalk application.
USE_ECS=on
Create an Elastic Container Service (ECS) cluster.
USE_EKS=on
Create an AWS EKS (Kubernetes) cluster.
USE_DOKKU=on
Create an EC2 instance containing a Dokku server.

I believe those environment variables are mutually exclusive. The remaining ones can be used in combination with each other or one of the above.

USE_NAT_GATEWAY=on
Don't put the services inside your VPC onto the public internet, and add a NAT gateway to the stack so the services can make outbound connections.
DEFAULTS_FILE=<path to JSON file>
Change the default values for parameters. The JSON file should just be a dictionary mapping parameter names to default values, e.g.:

{
    "AMI": "ami-078c57a94e9bdc6e0",
    "AssetsUseCloudFront": "false"
}

One more example, creating EC2 instances without a NAT gateway and overriding the parameter defaults:

USE_EC2=on DEFAULTS_FILE=stack_defaults.json python -c 'import stack' >stack.yaml

Contributing

Please read the contributing guidelines in the repository.

Good luck and have fun!

Copyright 2017, 2018 Jean-Phillipe Serafin, Tobias McNulty.
