# GCP Config Connector
Config Connector is a Kubernetes add-on that allows customers to manage GCP resources, such as Cloud Spanner or Cloud Storage, through their cluster's API.

With Config Connector, you can describe GCP resources declaratively using Kubernetes-style configuration. Config Connector will create any new GCP resources, update any existing ones to the state specified by your configuration, and continuously ensure GCP stays in sync. The same resource model is the basis of Istio, Knative, Kubernetes, and the Google Cloud Services Platform.

As a result, developers can manage their whole application, including both its Kubernetes components and its GCP dependencies, with the same configuration and, more importantly, the same tooling. For example, the same customization or templating tool can be used to manage test vs. production versions of an application across both Kubernetes and GCP.
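For a flavor of what this looks like, here is a minimal sketch that declares a Cloud Storage bucket through the Kubernetes API. The bucket name is a placeholder and the field set is trimmed; see the Resource reference below for the authoritative schema:

```shell
# Declare a Cloud Storage bucket as a Kubernetes resource; Config Connector
# reconciles it into a real GCS bucket and keeps the two in sync.
# NOTE: bucket names are globally unique, so replace the placeholder.
kubectl apply -f - <<EOF
apiVersion: storage.cnrm.cloud.google.com/v1beta1
kind: StorageBucket
metadata:
  name: my-config-connector-bucket
spec:
  uniformBucketLevelAccess: true
EOF
```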
This repository contains the full Config Connector source code. This includes the controllers, CRDs, install bundles, and sample resource configurations.
## Usage
See https://cloud.google.com/config-connector/docs/overview.
For simple starter examples, see the Resource reference and Cloud Foundation Toolkit Config Connector Solutions.
## Building Config Connector
### Recommended Operating System
- Ubuntu (18.04/20.04)
- Debian (9/10/11)
### Software requirements

The environment-setup scripts below install everything needed to build and test Config Connector. If you set up your environment manually, you will need at least git, make and a C toolchain (e.g. build-essential), Go, Docker, and the gcloud and kubectl command-line tools.
### Set up your environment
#### Option 1: Set up an environment in a fresh VM (recommended)
1. Create an Ubuntu 20.04 VM on Google Cloud.

2. Open an SSH connection to the VM.

3. Create a new directory for GoogleCloudPlatform open source projects if it does not exist:

   ```shell
   mkdir -p ~/go/src/github.com/GoogleCloudPlatform
   ```

4. Update apt and install build-essential:

   ```shell
   sudo apt-get update
   sudo apt install build-essential
   ```

5. Clone the source code:

   ```shell
   cd ~/go/src/github.com/GoogleCloudPlatform
   git clone https://github.com/GoogleCloudPlatform/k8s-config-connector
   ```
6. Change to the environment-setup directory:

   ```shell
   cd ~/go/src/github.com/GoogleCloudPlatform/k8s-config-connector/scripts/environment-setup
   ```

7. Set up sudoless Docker:

   ```shell
   ./docker-setup.sh
   ```

8. Exit your current session, then SSH back into the VM. Run the following to verify that sudoless Docker is set up correctly:

   ```shell
   docker run hello-world
   ```

9. Install Golang:

   ```shell
   cd ~/go/src/github.com/GoogleCloudPlatform/k8s-config-connector/scripts/environment-setup
   ./golang-setup.sh
   source ~/.profile
   ```

10. Install other build dependencies:

    ```shell
    ./repo-setup.sh
    source ~/.profile
    ```
11. Set up a GKE cluster for testing purposes. The `gcp-setup.sh` script also deploys Config Connector CRDs and workloads, including the controller manager and webhooks, into the cluster.

    NOTE: `gcp-setup.sh` assumes there is a GKE cluster named "cnrm-dev" in your default GCP project (as configured through gcloud), and creates one if it doesn't exist. If you prefer to use an existing GKE cluster, you can modify `CLUSTER_NAME` in the script and use the existing cluster name instead. Make sure the existing GKE cluster has workload identity enabled; a quick way to check is sketched after this step.

    ```shell
    ./gcp-setup.sh
    ```
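If you are reusing an existing cluster, one way to confirm workload identity is enabled is to check whether the cluster reports a workload pool. A minimal sketch with gcloud, assuming a zonal cluster (adjust the cluster name and `--zone`/`--region` to match yours):

```shell
# Prints the workload pool (e.g. PROJECT_ID.svc.id.goog) when workload
# identity is enabled; empty output means it is not. The cluster name and
# zone below are placeholders.
gcloud container clusters describe cnrm-dev \
    --zone us-central1-a \
    --format "value(workloadIdentityConfig.workloadPool)"
```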
#### Option 2: Set up an environment manually yourself

1. Install all required dependencies (see the software requirements above).

2. Add all required dependencies to your `$PATH`.

3. Set up a GOPATH.

4. Add `$GOPATH/bin` to your `$PATH` (a sketch of steps 3 and 4 follows this list).

5. Clone the repository:

   ```shell
   cd $GOPATH/src/github.com/GoogleCloudPlatform
   git clone https://github.com/GoogleCloudPlatform/k8s-config-connector
   ```
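For reference, a minimal sketch of steps 3 and 4 in a POSIX shell; the `$HOME/go` location is the conventional Go default, not a requirement:

```shell
# Append these to ~/.profile (or your shell's rc file) so they persist
# across sessions.
export GOPATH="$HOME/go"
export PATH="$GOPATH/bin:$PATH"

# Create the directory the clone step above expects.
mkdir -p "$GOPATH/src/github.com/GoogleCloudPlatform"
```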
### Build the source code

1. Enter the source code directory:

   ```shell
   cd $GOPATH/src/github.com/GoogleCloudPlatform/k8s-config-connector
   ```

2. Build the controller:

   ```shell
   make manager
   ```

3. Build the CRDs:

   ```shell
   make manifests
   ```

4. Build the config-connector CLI tool:

   ```shell
   make config-connector
   ```
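As a quick smoke test, you can ask the freshly built CLI for its help text. A sketch, assuming the Makefile writes the binary to `bin/` (check the `config-connector` target for the actual output path):

```shell
# Prints the CLI's usage and available subcommands if the build succeeded.
# NOTE: the bin/ output location is an assumption.
./bin/config-connector --help
```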
## Create a Resource

1. Enable Artifact Registry for your project:

   ```shell
   gcloud services enable artifactregistry.googleapis.com
   ```

2. Create a Docker repository. You may need to wait ~10-15 minutes for your cluster to finish setting up after running `make deploy`.

   ```shell
   cd $GOPATH/src/github.com/GoogleCloudPlatform/k8s-config-connector
   kubectl apply -f config/samples/resources/artifactregistryrepository/artifactregistry_v1beta1_artifactregistryrepository.yaml
   ```

3. Wait a few minutes and then make sure your repository exists in GCP:

   ```shell
   gcloud artifacts repositories list
   ```

   If you see a repository, then your cluster is functioning properly and actuating K8s resources onto GCP.
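You can also verify the resource from the Kubernetes side. Config Connector surfaces reconciliation status through a `Ready` condition on each resource; a short sketch (the resource name is a placeholder, so use the `metadata.name` from the sample YAML):

```shell
# List the sample repository and its readiness as seen by the cluster.
kubectl get artifactregistryrepository

# Optionally block until Config Connector marks the resource Ready.
# "artifactregistryrepository-sample" is a placeholder name.
kubectl wait --for=condition=Ready \
    artifactregistryrepository/artifactregistryrepository-sample \
    --timeout=10m
```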
## Make a Code Change
At this point, your cluster is running a CNRM Controller Manager image built on your system. Let's make a code change to verify that you are ready to start development.

Edit `cmd/manager/main.go` in your local repository. Insert the `log.Printf(...)` statement below on the first line of the `main()` function (add `"log"` to the file's imports if it is not already there). Note that the file is in package `main`:

```go
package main

func main() {
    log.Printf("I have finished the getting started guide.")
    ...
}
```
To apply the change, you can either deploy the container image into the GKE Cluster, or run the Controller Manager directly as a local executable.
### Build and Deploy the Controller Manager into the GKE Cluster

Build and deploy your change, then force a pull of the container image by deleting the existing pods:

```shell
make deploy-controller && kubectl delete pods --namespace cnrm-system --all
```

Verify that your new log statement is on the first line of the logs for the CNRM Controller Manager pod:

```shell
kubectl --namespace cnrm-system logs cnrm-controller-manager-0
```
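If the pod has been running for a while and the line has scrolled past, filtering the log stream is a quick check:

```shell
# The statement is emitted once at startup, so search for it directly.
kubectl --namespace cnrm-system logs cnrm-controller-manager-0 | grep "getting started guide"
```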
### Build and Run the Controller Manager locally

If you don't want to deploy the controller manager into your dev cluster, you can run it locally on your dev machine with the steps below.

1. Run the following and scale down the replicas to 0 (or use `kubectl scale`; see the sketch after this list):

   ```shell
   kubectl edit statefulset cnrm-controller-manager -n cnrm-system
   ```

2. Run the following and inspect the output logs:

   ```shell
   make run
   ```
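As an alternative to editing the StatefulSet by hand, `kubectl scale` sets the replica count in one command; this is equivalent to the edit in step 1:

```shell
# Scale the in-cluster controller manager down to zero so it does not
# compete with the locally run copy.
kubectl scale statefulset cnrm-controller-manager --namespace cnrm-system --replicas=0
```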
## Contributing to Config Connector
Please refer to our contribution guide for more details.