
Repository Details

CDP Public Cloud is an integrated analytics and data management platform deployed on cloud services. It offers broad data analytics and artificial intelligence functionality along with secure user access and data governance features.

Cloudbreak


Local Development Setup

For now, this document focuses on setting up your development environment on macOS. You'll need Homebrew to install certain components if you don't have them already. To get Homebrew, please follow the installation instructions on the Homebrew homepage: https://brew.sh

As a prerequisite, you need to have Java 17 installed. You can choose from many options, including the Oracle JDK, Oracle OpenJDK, or an OpenJDK from any of several providers. For help in choosing your JDK, consult Java is Still Free.
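
For example, if you go with an OpenJDK build from Homebrew (one possible choice; any Java 17 distribution works, and the path shown assumes an Apple Silicon Homebrew prefix), the install and version check could look like this:

# One possible way to get Java 17; any JDK 17 distribution is fine
brew install openjdk@17
# The formula is keg-only, so check the installed binary directly
/opt/homebrew/opt/openjdk@17/bin/java -version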

You'll need Docker. For Mac, use Docker Desktop for Mac. Please allocate at least 6 CPUs and 12 GB of memory to it. (How much you actually need depends on how many services you run in IntelliJ and in Docker containers.)
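
To double-check what Docker actually received, you can print the CPU count and memory it reports (a generic Docker command, not specific to Cloudbreak):

# Print the CPUs and memory available to the Docker engine
docker info --format 'CPUs: {{.NCPU}}, Memory: {{.MemTotal}} bytes'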

Cloudbreak Deployer

The simplest way to set up a working environment in which you can start Cloudbreak on your local machine is to use the Cloudbreak Deployer.

First you need to create a sandbox directory which will store the necessary configuration files and dependencies of Cloudbreak Deployer. This directory must be created outside the cloned Cloudbreak git repository:

mkdir cbd-local
cd cbd-local

The next step is to download the latest cloudbreak-deployer onto your machine:

curl -s https://raw.githubusercontent.com/hortonworks/cloudbreak-deployer/master/install-dev | sh && cbd --version

Add the following to a file named Profile under the cbd-local directory you have just created. Please note that whenever you execute a cbd command, you should do so from the deployment directory that contains your Profile file (cbd-local in our example). The CB_SCHEMA_SCRIPTS_LOCATION environment variable configures the location of the SQL scripts found in the core/src/main/resources/schema directory of the cloned Cloudbreak git repository. In a similar fashion, the other *_SCHEMA_SCRIPTS_LOCATION environment variables configure the locations of the SQL scripts associated with their respective services.

Please note that full paths need to be configured; environment variables like $USER cannot be used. You also have to set a password for your local Cloudbreak in UAA_DEFAULT_USER_PW:

export CB_LOCAL_DEV_LIST=
export UAA_DEFAULT_SECRET=cbsecret2015
export CB_SCHEMA_SCRIPTS_LOCATION=/Users/YOUR_USERNAME/YOUR_PROJECT_DIR/cloudbreak/core/src/main/resources/schema
export CONSUMPTION_SCHEMA_SCRIPTS_LOCATION=/Users/YOUR_USERNAME/YOUR_PROJECT_DIR/cloudbreak/cloud-consumption/src/main/resources/schema
export DATALAKE_SCHEMA_SCRIPTS_LOCATION=/Users/YOUR_USERNAME/YOUR_PROJECT_DIR/cloudbreak/datalake/src/main/resources/schema
export ENVIRONMENT_SCHEMA_SCRIPTS_LOCATION=/Users/YOUR_USERNAME/YOUR_PROJECT_DIR/cloudbreak/environment/src/main/resources/schema
export FREEIPA_SCHEMA_SCRIPTS_LOCATION=/Users/YOUR_USERNAME/YOUR_PROJECT_DIR/cloudbreak/freeipa/src/main/resources/schema
export PERISCOPE_SCHEMA_SCRIPTS_LOCATION=/Users/YOUR_USERNAME/YOUR_PROJECT_DIR/cloudbreak/autoscale/src/main/resources/schema
export REDBEAMS_SCHEMA_SCRIPTS_LOCATION=/Users/YOUR_USERNAME/YOUR_PROJECT_DIR/cloudbreak/redbeams/src/main/resources/schema
export ULU_SUBSCRIBE_TO_NOTIFICATIONS=true
export CB_INSTANCE_UUID=$(uuidgen | tr '[:upper:]' '[:lower:]')
export CB_INSTANCE_NODE_ID=5743e6ed-3409-420b-b08b-f688f2fc5db1
export PUBLIC_IP=localhost
export VAULT_AUTO_UNSEAL=true
export DPS_VERSION=2.0.0.0-142
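
Once the Profile is in place, it can help to regenerate the deployer configuration and run its self-check before starting anything, which catches path typos early (this assumes the generate and doctor subcommands are available in your cbd version):

# Run from the cbd-local directory that contains the Profile
cbd generate   # regenerate the deployer configuration (assumed subcommand)
cbd doctor     # run the deployer's sanity checks (assumed subcommand)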

If you want to use the mock-infrastructure, add the following to the Profile:

export MOCK_INFRASTRUCTURE_HOST=localhost

If you want to save some memory, one or more services can be skipped in local runs, for example (see the complete list of supported service names below):

export CB_LOCAL_DEV_LIST=periscope,distrox-api,environments2-api,datalake-api

If you are using AWS (commercial regions), then also add the following lines, substituting your control plane AWS account ID and the AWS credentials that you have created for the CB role.

export CB_AWS_ACCOUNT_ID="YOUR_AWS_ACCOUNT_ID"
export AWS_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"

Furthermore, in order to use AWS GovCloud, also add the following lines, again substituting your control plane AWS GovCloud account ID and the AWS GovCloud credentials that you have created for the CB role.

export CB_AWS_GOV_ACCOUNT_ID="YOUR_AWS_GOVCLOUD_ACCOUNT_ID"
export AWS_GOV_ACCESS_KEY_ID="YOUR_ACCESS_KEY"
export AWS_GOV_SECRET_ACCESS_KEY="YOUR_SECRET_KEY"

Initially, you should start every service from cbd to check that your cloud environment is set up correctly.

export CB_LOCAL_DEV_LIST=

When this setup works, you can remove services from cbd, and run them locally. For example, in order to run Cloudbreak, Periscope, Datalake, FreeIPA, Redbeams, Environment, Thunderhead Mock (and Thunderhead API), IDBroker Mapping Management, and Environments2 API services locally (from IDEA or the command line), put this into your Profile:

export CB_LOCAL_DEV_LIST=cloudbreak,periscope,datalake,freeipa,redbeams,environment,thunderhead-mock,thunderhead-api,idbmms,environments2-api

Containers for these applications won't be started and Uluwatu (or the cdp & dp CLI tools) will connect to Java processes running on your host. You don't have to put all the applications into local-dev mode; the value of the variable could be any combination. The following service names are supported in CB_LOCAL_DEV_LIST:

  • audit
  • audit-api
  • cadence
  • cloudbreak
  • cluster-proxy
  • consumption
  • core-gateway
  • datalake
  • datalake-api
  • datalake-dr
  • distrox-api
  • environment
  • environments2-api
  • freeipa
  • idbmms
  • mock-infrastructure
  • periscope
  • redbeams
  • thunderhead-api
  • thunderhead-mock
  • workloadiam

You need to log in to DockerHub:

docker login

And then provide your username and password.

Then run these commands:

cbd start
cbd logs cloudbreak

In case you see org.apache.ibatis.migration.MigrationException at the end of the logs, run these commands to fix the DB and then re-run the previous section (cbd start and logs):

cbd migrate cbdb up
cbd migrate cbdb pending
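
After the migration commands complete, you can confirm that nothing is left pending before re-running cbd start:

# Check the Cloudbreak database migration status
cbd migrate cbdb status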

If, for some reason, you encounter a similar problem with Periscope, Datalake, FreeIPA, Redbeams, Environment, or Consumption, then run the following commands and restart the Cloudbreak Deployer:

cbd migrate periscopedb up
cbd migrate periscopedb pending

cbd migrate datalakedb up
cbd migrate datalakedb pending

cbd migrate freeipadb up
cbd migrate freeipadb pending

cbd migrate redbeamsdb up
cbd migrate redbeamsdb pending

cbd migrate environmentdb up
cbd migrate environmentdb pending

cbd migrate consumptiondb up
cbd migrate consumptiondb pending

You can track any other application's logs to check the results by executing the following command:

cbd logs periscope # or any other service name supported in CB_LOCAL_DEV_LIST

If everything went well, Cloudbreak will be available at https://localhost. For more details and config parameters, please check the documentation of Cloudbreak Deployer.

The deployer has generated a certs directory under the cbd-local directory, which will be needed later on to set up IDEA properly.

If it is not already present, create an etc directory under the cbd-local directory and place your Cloudera Manager license file license.txt there. This is essential for Thunderhead Mock to start successfully. (Request a license from us.)
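
For example, assuming you already have the license file at hand, from the directory that contains cbd-local:

# Create the etc directory and copy the Cloudera Manager license into it
mkdir -p cbd-local/etc
cp /path/to/your/license.txt cbd-local/etc/license.txt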

Cloudbreak Service Ports

When cloudbreak is started in a container, the port it listens on is 8080. If cloudbreak is added to the CB_LOCAL_DEV_LIST variable, all services expect the cloudbreak port to be 9091.
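
A quick, generic way to check whether your locally started Cloudbreak is actually listening on 9091:

# List the process listening on the local-dev Cloudbreak port
lsof -nP -iTCP:9091 -sTCP:LISTEN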

Linux Difference

Cloudbreak Deployer is unable to determine the IP address on a Linux machine. Therefore, you must add the public IP address to your Profile manually.

export PUBLIC_IP=127.0.0.1

Enable gitconfig, githooks

We have some quality-of-life configurations and scripts that help you submit a pull request in the proper format.

Running the following command will enable them for you:

make enable-gitconfig

This will update the .git/config file to apply configs from the .gitconfig file in the root of the project.
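
For illustration only, the usual way such a setup is wired is a local include entry; the exact entries that make enable-gitconfig writes may differ:

# Illustrative only: add an include for the repo-level .gitconfig to .git/config
git config --local include.path ../.gitconfig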

IDEA

Check Out the Cloudbreak Repository

Go to https://github.com/hortonworks/cloudbreak and either clone or download the repository. Use SSH, as described here: https://help.github.com/articles/connecting-to-github-with-ssh/

Important: update the ~/.gradle/gradle.properties file with the following two properties in order to download artifacts from the internal repository (see the example below). You can find the details on our Wiki page.

  • defaultCmPrivateRepoUser
  • defaultCmPrivateRepoPassword
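
For illustration, setting the two properties from the shell could look like this (placeholder values; take the real credentials from the Wiki page):

# Append the internal repository credentials to your Gradle user properties
cat >> ~/.gradle/gradle.properties <<'EOF'
defaultCmPrivateRepoUser=YOUR_USERNAME
defaultCmPrivateRepoPassword=YOUR_PASSWORD
EOF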

Project Settings in IDEA

In IDEA set your SDK to your Java version under:

Set project SDK

File -> Project Structure -> Project Settings -> Project -> Project SDK -> 17

Set project Language level

File -> Project Structure -> Project Settings -> Project -> Project Language Level -> 17

Set Gradle JVM

IntelliJ IDEA -> Preferences -> Build, Execution, Deployment -> Gradle -> Gradle JVM -> 17

Set Import Order

Import static all other imports
<blank line>
import java.*
<blank line>
import javax.*
<blank line>
import org.*
<blank line>
import com.*
<blank line>
import all other imports

Import Project

Cloudbreak can be imported into IDEA as a Gradle project by specifying the cloudbreak repo root under Import Project. Once it is done, you need to import the proper code formatter by using the File -> Import Settings... menu and selecting the idea_settings.jar located in the config/idea directory in the Cloudbreak git repository.

Also, you need to import the inspection settings called inspections.xml located in config/idea:

IntelliJ IDEA -> Preferences -> Editor -> Inspections -> Settings icon -> Import Profile

Cloudbreak integrates with gRPC components. This results in generated files with large file sizes inside the project. By default, IDEA ignores anything larger than 8 MB, resulting in unknown classes inside the IDEA context. To circumvent this, you need to add this property to your IDEA properties.

Go to Help -> Edit Custom Properties..., then insert

#parse files up until 15MB
idea.max.intellisense.filesize=15000

Restart IDEA, and Rebuild.

Activating Cloudbreak Code Styles

After importing, be sure to navigate to:

IntelliJ IDEA -> Preferences -> Editor -> Code Style -> Java -> Scheme

Then select the new scheme, Default (1).

Otherwise, IntelliJ will constantly reorder your imports differently from CB conventions.

PKIX SSL Error - Import the Mock-Infrastructure certificate into your Java trust store before launching FreeIPA or Cloudbreak (core) locally

We needed to eliminate the vulnerable TrustEveryThingTrustStore implementation from our code base. As a consequence, the certificate of the Mock-Infrastructure service needs to be added to the Java trust store if you run the FreeIPA and/or Cloudbreak (core) services locally and would like to create deployments with the mock provider, or simply run the integration tests locally. Otherwise, the image catalog cannot be downloaded from that service due to SSL handshake issues, for example:

{"message":"Creation of FreeIPA failed: Failed to get image catalog: javax.net.ssl.SSLHandshakeException: PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target from https://localhost:10090/mock-image-catalog?catalog-name=freeipa-catalog&cb-version=2.68.0-b64&runtime=7.2.15&mock-server-address=localhost:10090","payload":null}

Example import commands (do not forget to update the path of the certificate to match your Cloudbreak repository location):

# How to import on Linux
sudo keytool -import -alias mock-infra -noprompt -file "/home/${USER}/prj/cloudbreak/mock-infrastructure/src/main/resources/keystore/infrastructure-mock.cer" -keystore /etc/ssl/certs/java/cacerts -storepass changeit

# How to import on MacOS
keytool -import -alias mock-infra -noprompt -file ~/prj/hortonworks/cloudbreak/mock-infrastructure/src/main/resources/keystore/infrastructure-mock.cer -keystore /opt/homebrew/opt/openjdk@11/libexec/openjdk.jdk/Contents/Home/lib/security/cacerts -storepass changeit
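
To verify that the import succeeded, you can list the alias afterwards, using the same cacerts path as in the import command:

# Confirm that the mock-infra certificate is present in the trust store
keytool -list -alias mock-infra -keystore /opt/homebrew/opt/openjdk@11/libexec/openjdk.jdk/Contents/Home/lib/security/cacerts -storepass changeit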

If you still get the same error, then specify the trust store for Cloudbreak (core) and/or FreeIPA:

# Trust store JVM option on Linux
-Djavax.net.ssl.trustStore=/etc/ssl/certs/java/cacerts

# Trust store JVM option on macOS
-Djavax.net.ssl.trustStore=/opt/homebrew/opt/openjdk@11/libexec/openjdk.jdk/Contents/Home/lib/security/cacerts

Note:

  • The path of the Java trust store may be different in your development environment; if so, please update the path in the commands to the right location
  • This workaround is only needed until CB-18493 has been resolved.

Running Cloudbreak in IDEA

To launch the Cloudbreak application execute the com.sequenceiq.cloudbreak.CloudbreakApplication class (set Use classpath of module to cloudbreak.core.main) with the following JVM options:

-Dcb.db.port.5432.tcp.addr=localhost
-Dcb.db.port.5432.tcp.port=5432
-Dserver.port=9091
-Daltus.ums.host=localhost
-Dvault.addr=localhost
-Dvault.root.token=<VAULT_ROOT_TOKEN>
-Dinstance.node.id=<NODE_ID>

Replace <VAULT_ROOT_TOKEN> with the value of VAULT_ROOT_TOKEN from the Profile file and <NODE_ID> with some value, e.g. CB-1.

Note that if you're upgrading from 2.16 (or earlier) to master, you may also have to set this value in the database to ensure the flow restart functionality for in-progress clusters.

You can set this by executing the following SQL on the cbdb database:

UPDATE flowlog 
SET cloudbreaknodeid = 'YOUR_NODE_ID_VALUE';

Here, YOUR_NODE_ID_VALUE must be the same value that you provide in the Cloudbreak run configuration mentioned above.
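
If you want to verify the stored value afterwards, one option is to query it through psql; the host and port match the JVM options above, while the user name here is only an assumption, so adjust it to your local setup:

# Assumed connection details; list the node ids currently stored in flowlog
psql -h localhost -p 5432 -U postgres -d cbdb -c "SELECT DISTINCT cloudbreaknodeid FROM flowlog;"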

Afterwards, add these entries to the environment variables (the same values that you set in Profile):

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
CB_AWS_ACCOUNT_ID=
AWS_GOV_ACCESS_KEY_ID=
AWS_GOV_SECRET_ACCESS_KEY=
CB_AWS_GOV_ACCOUNT_ID=

The database migration scripts are run automatically by Cloudbreak, but this migration can be turned off with the -Dcb.schema.migration.auto=false JVM option.

Configure Before launch task

In order to be able to determine the local Cloudbreak and FreeIPA version automatically, a Before launch task has to be configured for the project in IntelliJ IDEA. The required steps are the following:

  1. Open Run/Debug Configurations for the project
  2. Select your project's application
  3. Click on Add in the Before launch panel
  4. Select Run Gradle Task with the following parameters
    1. Gradle project: cloudbreak:core or cloudbreak:freeipa depending on the service
    2. Tasks: buildInfo
  5. Confirm and restart the application
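
If you prefer, the same task can also be run manually from the repository root before launching the service:

# Generate build info for Cloudbreak core (use :freeipa:buildInfo for FreeIPA)
./gradlew :core:buildInfo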

Running Periscope in IDEA

After importing the cloudbreak repo root, launch the Periscope application by executing the com.sequenceiq.periscope.PeriscopeApplication class (set Use classpath of module to cloudbreak.autoscale.main) with the following JVM options:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the periscope.cloudbreak.url should be http://localhost:9091
-Dperiscope.db.port.5432.tcp.addr=localhost
-Dperiscope.db.port.5432.tcp.port=5432
-Dperiscope.cloudbreak.url=http://localhost:8080
-Dserver.port=8085
-Daltus.ums.host=localhost
-Dvault.root.token=<VAULT_ROOT_TOKEN>
-Dinstance.node.id=<NODE_ID>
--add-opens java.base/java.util.concurrent=ALL-UNNAMED

Replace <VAULT_ROOT_TOKEN> and <NODE_ID> with the value of VAULT_ROOT_TOKEN and CB_INSTANCE_NODE_ID respectively from the Profile file.

Running Datalake in IDEA

After importing the cloudbreak repo root, launch the Datalake application by executing the com.sequenceiq.datalake.DatalakeApplication class (set Use classpath of module to cloudbreak.datalake.main) with the following JVM options:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the datalake.cloudbreak.url should be http://localhost:9091
-Dserver.port=8086
-Dcb.enabledplatforms=AWS,AZURE,MOCK
-Ddatalake.cloudbreak.url=http://localhost:8080
-Dvault.root.token=<VAULT_ROOT_TOKEN>
-Dvault.addr=localhost
-Dinstance.node.id=<NODE_ID>

Replace <VAULT_ROOT_TOKEN> and <NODE_ID> with the value of VAULT_ROOT_TOKEN and CB_INSTANCE_NODE_ID respectively from the Profile file.

Running FreeIPA in IDEA

After importing the cloudbreak repo root, launch the FreeIPA application by executing the com.sequenceiq.freeipa.FreeIpaApplication class (set Use classpath of module to cloudbreak.freeipa.main) with the following JVM options:

-Dfreeipa.db.addr=localhost
-Dserver.port=8090
-Dvault.root.token=<VAULT_ROOT_TOKEN>
-Dinstance.node.id=<NODE_ID>

Replace <VAULT_ROOT_TOKEN> and <NODE_ID> with the value of VAULT_ROOT_TOKEN and CB_INSTANCE_NODE_ID respectively from the Profile file.

Then add these entries to the environment variables (the same values that you set in Profile):

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
CB_AWS_ACCOUNT_ID=
AWS_GOV_ACCESS_KEY_ID=
AWS_GOV_SECRET_ACCESS_KEY=
CB_AWS_GOV_ACCOUNT_ID=

Running Redbeams in IDEA

After importing the cloudbreak repo root, launch the Redbeams application by executing the com.sequenceiq.redbeams.RedbeamsApplication class (set Use classpath of module to cloudbreak.redbeams.main) with the following JVM options:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the redbeams.cloudbreak.url should be http://localhost:9091
-Dredbeams.db.port.5432.tcp.addr=localhost
-Dredbeams.db.port.5432.tcp.port=5432
-Dredbeams.cloudbreak.url=http://localhost:8080
-Dserver.port=8087
-Daltus.ums.host=localhost
-Dvault.root.token=<VAULT_ROOT_TOKEN>
-Dcb.enabledplatforms=AWS,AZURE,MOCK
-Dinstance.node.id=<NODE_ID>

Replace <VAULT_ROOT_TOKEN> and <NODE_ID> with the value of VAULT_ROOT_TOKEN and CB_INSTANCE_NODE_ID respectively from the Profile file.

Then add these entries to the environment variables (the same values that you set in Profile):

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
CB_AWS_ACCOUNT_ID=
AWS_GOV_ACCESS_KEY_ID=
AWS_GOV_SECRET_ACCESS_KEY=
CB_AWS_GOV_ACCOUNT_ID=

Running the Environment Service in IDEA

After importing the cloudbreak repo root, launch the Environment application by executing the com.sequenceiq.environment.EnvironmentApplication class (set Use classpath of module to cloudbreak.environment.main) with the following JVM options:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the environment.cloudbreak.url should be http://localhost:9091
-Dvault.root.token=<VAULT_ROOT_TOKEN>
-Denvironment.cloudbreak.url=http://localhost:8080
-Denvironment.enabledplatforms="YARN,YCLOUD,AWS,AZURE,MOCK"
-Dinstance.node.id=<NODE_ID>

Replace <VAULT_ROOT_TOKEN> and <NODE_ID> with the value of VAULT_ROOT_TOKEN and CB_INSTANCE_NODE_ID respectively from the Profile file.

Then add these entries to the environment variables (the same values that you set in Profile):

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
CB_AWS_ACCOUNT_ID=
AWS_GOV_ACCESS_KEY_ID=
AWS_GOV_SECRET_ACCESS_KEY=
CB_AWS_GOV_ACCOUNT_ID=

Running the Consumption Service in IDEA

After importing the cloudbreak repo root, launch the Consumption application by executing the com.sequenceiq.consumption.ConsumptionApplication class (set Use classpath of module to cloudbreak.cloud-consumption.main) with the following JVM options:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the consumption.cloudbreak.url should be http://localhost:9091
-Dvault.root.token=<VAULT_ROOT_TOKEN>
-Dserver.port=8099
-Dconsumption.cloudbreak.url=http://localhost:8080
-Dinstance.node.id=<NODE_ID>

Replace <VAULT_ROOT_TOKEN> and <NODE_ID> with the value of VAULT_ROOT_TOKEN and CB_INSTANCE_NODE_ID respectively from the Profile file.

Then add these entries to the environment variables (the same values that you set in Profile):

AWS_ACCESS_KEY_ID=
AWS_SECRET_ACCESS_KEY=
CB_AWS_ACCOUNT_ID=
AWS_GOV_ACCESS_KEY_ID=
AWS_GOV_SECRET_ACCESS_KEY=
CB_AWS_GOV_ACCOUNT_ID=

Running Thunderhead Mock in IDEA

After importing the cloudbreak repo root, launch the Thunderhead Mock application by executing the com.sequenceiq.thunderhead.MockThunderheadApplication class (set Use classpath of module to cloudbreak.mock-thunderhead.main) with the following JVM options:

-Dauth.config.dir=<CBD_LOCAL_ETC>

Replace <CBD_LOCAL_ETC> with the full path of your cbd-local/etc directory, which should already contain the Cloudera Manager license file license.txt.
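
For example, if your deployment directory sits directly under your home directory, the option might look like this (hypothetical path; point it at your own cbd-local/etc):

# Hypothetical example; use the absolute path of your own cbd-local/etc directory
-Dauth.config.dir=/Users/YOUR_USERNAME/cbd-local/etc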

Please make sure that thunderhead-api has also been added to the CB_LOCAL_DEV_LIST in the Profile file of cbd (besides thunderhead-mock).

Running Mock-Infrastructure in IDEA

After importing the cloudbreak repo root, launch the mock-infrastructure application by executing the com.sequenceiq.mock.MockInfrastructureApplication class (set Use classpath of module to cloudbreak.mock-infrastructure.main) with the following JVM options:

--add-opens java.base/java.util=ALL-UNNAMED

Please make sure that mock-infrastructure has been added to the CB_LOCAL_DEV_LIST in the Profile file of cbd.

In the Profile file make sure to also add:

export MOCK_INFRASTRUCTURE_HOST=localhost

Command Line

Running Cloudbreak from the Command Line

To run Cloudbreak from the command line first set the AWS environment variables (use the same values as in Profile)

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export CB_AWS_ACCOUNT_ID=...
export AWS_GOV_ACCESS_KEY_ID=...
export AWS_GOV_SECRET_ACCESS_KEY=...
export CB_AWS_GOV_ACCOUNT_ID=...

Set the CM repository credentials in order to download artifacts from the internal repository. Ask us for details in the #eng_cb_dev_internal Slack channel.

export CM_PRIVATE_REPO_USER=
export CM_PRIVATE_REPO_PASSWORD=

then run the following Gradle command:

./gradlew :core:buildInfo :core:bootRun --no-daemon -PjvmArgs="-Dcb.db.port.5432.tcp.addr=localhost \
-Dcb.db.port.5432.tcp.port=5432 \
-Dcb.schema.scripts.location=$(pwd)/core/src/main/resources/schema \
-Dserver.port=9091 \
-Daltus.ums.host=localhost \
-Dvault.root.token=<VAULT_ROOT_TOKEN> \
-Dspring.config.location=$(pwd)/core/src/main/resources/application.yml,$(pwd)/core/src/main/resources/application-dev.yml,$(pwd)/core/build/resources/main/application.properties"

Replace <VAULT_ROOT_TOKEN> with the value of VAULT_ROOT_TOKEN from the Profile file.

The database migration scripts are run automatically by Cloudbreak, but this migration can be turned off with the -Dcb.schema.migration.auto=false JVM option.

Running Periscope from the Command Line

To run Periscope from the command line, run the following Gradle command:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the periscope.cloudbreak.url should be http://localhost:9091
./gradlew :autoscale:bootRun -PjvmArgs="-Dperiscope.db.port.5432.tcp.addr=localhost \
-Dperiscope.db.port.5432.tcp.port=5432 \
-Dperiscope.cloudbreak.url=http://localhost:8080 \
-Dperiscope.schema.scripts.location=$(pwd)/autoscale/src/main/resources/schema \
-Dserver.port=8085 \
-Daltus.ums.host=localhost \
-Dvault.root.token=<VAULT_ROOT_TOKEN> \
-Dspring.config.location=$(pwd)/autoscale/src/main/resources/application.yml,$(pwd)/autoscale/build/resources/main/application.properties"

Replace <VAULT_ROOT_TOKEN> with the value of VAULT_ROOT_TOKEN from the Profile file.

Running Datalake from the Command Line

To run Datalake from the command line, run the following Gradle command:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the datalake.cloudbreak.url should be http://localhost:9091
./gradlew :datalake:bootRun -PjvmArgs="-Dvault.root.token=<VAULT_ROOT_TOKEN> \
-Dserver.port=8086 \
-Ddatalake.cloudbreak.url=http://localhost:8080 \
-Dspring.config.location=$(pwd)/datalake/src/main/resources/application.yml,$(pwd)/datalake/build/resources/main/application.properties"

Replace <VAULT_ROOT_TOKEN> with the value of VAULT_ROOT_TOKEN from the Profile file.

Running FreeIPA from the Command Line

To run the FreeIPA management service from the command line first set the AWS environment variables (use the same values as in Profile)

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export CB_AWS_ACCOUNT_ID=...
export AWS_GOV_ACCESS_KEY_ID=...
export AWS_GOV_SECRET_ACCESS_KEY=...
export CB_AWS_GOV_ACCOUNT_ID=...

then run the following Gradle command:

./gradlew :freeipa:bootRun --no-daemon -PjvmArgs="-Dfreeipa.db.addr=localhost \
-Dserver.port=8090 \
-Dvault.root.token=<VAULT_ROOT_TOKEN> \
-Dspring.config.location=$(pwd)/freeipa/src/main/resources/application.yml,$(pwd)/freeipa/src/main/resources/application-dev.yml,$(pwd)/freeipa/build/resources/main/application.properties"

Replace <VAULT_ROOT_TOKEN> with the value of VAULT_ROOT_TOKEN from the Profile file.

Running Redbeams from the Command Line

To run Redbeams from the command line, first set the AWS environment variables (use the same values as in Profile)

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export CB_AWS_ACCOUNT_ID=...
export AWS_GOV_ACCESS_KEY_ID=...
export AWS_GOV_SECRET_ACCESS_KEY=...
export CB_AWS_GOV_ACCOUNT_ID=...

then run the following Gradle command:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the redbeams.cloudbreak.url should be http://localhost:9091
./gradlew :redbeams:bootRun --no-daemon -PjvmArgs="-Dredbeams.db.port.5432.tcp.addr=localhost \
-Dredbeams.db.port.5432.tcp.port=5432 \
-Dredbeams.cloudbreak.url=http://localhost:8080 \
-Dredbeams.schema.scripts.location=$(pwd)/redbeams/src/main/resources/schema \
-Dserver.port=8087 \
-Daltus.ums.host=localhost \
-Dvault.root.token=<VAULT_ROOT_TOKEN> \
-Dspring.config.location=$(pwd)/redbeams/src/main/resources/application.yml,$(pwd)/redbeams/build/resources/main/application.properties"

Replace <VAULT_ROOT_TOKEN> with the value of VAULT_ROOT_TOKEN from the Profile file.

Running the Environment Service from the Command Line

To run the Environment service from the command line first set the AWS environment variables (use the same values as in Profile)

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export CB_AWS_ACCOUNT_ID=...
export AWS_GOV_ACCESS_KEY_ID=...
export AWS_GOV_SECRET_ACCESS_KEY=...
export CB_AWS_GOV_ACCOUNT_ID=...

then run the following Gradle command:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the environment.cloudbreak.url should be http://localhost:9091
./gradlew :environment:bootRun -PjvmArgs="\
-Denvironment.cloudbreak.url=http://localhost:8080 \
-Dvault.root.token=<VAULT_ROOT_TOKEN> \
-Dspring.config.location=$(pwd)/environment/src/main/resources/application.yml,$(pwd)/environment/build/resources/main/application.properties"

Replace <VAULT_ROOT_TOKEN> with the value of VAULT_ROOT_TOKEN from the Profile file.

Running the Consumption Service from the Command Line

To run the Consumption service from the command line first set the AWS environment variables (use the same values as in Profile)

export AWS_ACCESS_KEY_ID=...
export AWS_SECRET_ACCESS_KEY=...
export CB_AWS_ACCOUNT_ID=...
export AWS_GOV_ACCESS_KEY_ID=...
export AWS_GOV_SECRET_ACCESS_KEY=...
export CB_AWS_GOV_ACCOUNT_ID=...

then run the following Gradle command:

  • Note: If cloudbreak is in the CB_LOCAL_DEV_LIST variable, the consumption.cloudbreak.url should be http://localhost:9091
./gradlew :cloud-consumption:bootRun -PjvmArgs="\
-Dserver.port=8099 \
-Dconsumption.cloudbreak.url=http://localhost:8080 \
-Dvault.root.token=<VAULT_ROOT_TOKEN> \
-Dspring.config.location=$(pwd)/cloud-consumption/src/main/resources/application.yml,$(pwd)/cloud-consumption/build/resources/main/application.properties"

Replace <VAULT_ROOT_TOKEN> with the value of VAULT_ROOT_TOKEN from the Profile file.

Running Thunderhead Mock from the Command Line

To run Thunderhead Mock from the command line, run the following Gradle command:

./gradlew :mock-thunderhead:bootRun -PjvmArgs="\
-Dserver.port=10080 \
-Dauth.config.dir=<CBD_LOCAL_ETC> \
-Dspring.config.location=$(pwd)/mock-thunderhead/src/main/resources/application.yml"

Replace <CBD_LOCAL_ETC> with the full path of your cbd-local/etc directory, which should already contain the Cloudera Manager license file license.txt.

Please make sure that thunderhead-api has also been added to the CB_LOCAL_DEV_LIST in the Profile file of cbd (besides thunderhead-mock).

Running Mock-Infrastructure from the Command Line

To run Mock-Infrastructure from the command line first set the following environment variable

export MOCK_INFRASTRUCTURE_HOST=localhost

then run the following Gradle command:

./gradlew :mock-infrastructure:bootRun -PjvmArgs="\
--add-opens java.base/java.util=ALL-UNNAMED \
-Dspring.config.location=$(pwd)/mock-infrastructure/src/main/resources/application.yml"

Please make sure that mock-infrastructure has been added to the CB_LOCAL_DEV_LIST in the Profile file of cbd.

Database Development

If any schema change is required in the Cloudbreak services' databases (cbdb / periscopedb / datalakedb / redbeamsdb / environmentdb / freeipadb / consumptiondb), the developer needs to write SQL scripts to migrate the database accordingly. The schema migration is managed by MyBatis Migrations in Cloudbreak, and the cbd tool provides an easy-to-use wrapper for it. The syntax for the migration commands is cbd migrate <database name> <command> [parameters], e.g. cbd migrate cbdb status. Create a SQL template for schema changes:

cbd migrate cbdb new "CLOUD-123 schema change for new feature"

As a result of the above command, an SQL file template is generated under the path specified in the CB_SCHEMA_SCRIPTS_LOCATION environment variable, which is defined in the Profile. The structure of the generated SQL template looks like the following:

-- // CLOUD-123 schema change for new feature
-- Migration SQL that makes the change goes here.



-- //@UNDO
-- SQL to undo the change goes here.

Once you have implemented your SQL scripts, you can execute them with:

cbd migrate <database-name> up

Make sure pending SQL scripts are run as well:

cbd migrate <database-name> pending

If you would like to roll back the last SQL file, then just use the down command:

cbd migrate <database-name> down

To check the status of the database:

cbd migrate <database-name> status

Every script that has not been executed will be marked as ...pending... in the output of the status command:

------------------------------------------------------------------------
-- MyBatis Migrations - status
------------------------------------------------------------------------
ID             Applied At          Description
================================================================================
20150421140021 2015-07-08 10:04:28 create changelog
20150421150000 2015-07-08 10:04:28 CLOUD-607 create baseline schema
20150507121756 2015-07-08 10:04:28 CLOUD-576 change instancegrouptype hostgroup to core
20151008090632    ...pending...    CLOUD-123 schema change for new feature

------------------------------------------------------------------------

Building

Gradle is used for build and dependency management. The Gradle wrapper is added to the Cloudbreak git repository, so building can be done with:

./gradlew clean build

Before running the above command, however, be sure to make the changes mentioned in Check Out the Cloudbreak Repository to your ~/.gradle/gradle.properties.

How to Reach CM UI Directly (Not Through Knox)

With the current design, there is an NGINX on the cluster's gateway node that is responsible for routing requests through Knox by default. There are, however, cases when the CM UI needs to be reached directly. This is possible on the same port, through the same NGINX, on the clouderamanager/ path of the provisioned cluster. Please note that the trailing slash is significant for the routing to work.

For example: https://tb-nt-local.tb-local.xcu2-8y8x.workload-dev.cloudera.com/clouderamanager/

Be aware that this routing mechanism is based on cookies, so if you have problems reaching the CM UI directly, especially if you previously reached a service through Knox, deleting your cookies may solve the issue.

How to Contribute

First of all, a warm welcome if you would like to contribute to our project; from the moment you contribute, its goals are yours as well.

We're happy to have your help in making this project greater than ever, but first we'd like to introduce some of the guidelines you should follow for a successful contribution.

When you would like to make a contribution, you can do so by opening a pull request against the desired version, following a few strongly suggested guidelines, not just for the sake of understandability but also to produce a properly composed request.

Appearance

First, let's start with the appearance. At the time of this writing, we don't enforce any formal requirements on the pull request message with any kind of tool, but we have the following strongly recommended guidelines:

  • if your commit message/Jira description fits into a Twitter message, it is probably too short, and the intention behind it might not be clear
  • if it contains words like fix or handle, you should probably consider some rewording, although sometimes that is of course acceptable
  • if your commit fixes something obvious, e.g. a compile error, then of course you don't need to write a long description about why fixing a compile error is a good idea
  • compared to a 200-1000 line code change (the size of our average commit), adding a few more lines to the commit message/Jira description is a tiny effort but makes a huge difference

We have covered what to avoid, so let's look at a few good examples, which help the reviewer understand the purpose of the commit:

https://github.com/hortonworks/cloudbreak/commit/56fdde5c6f48f48a378b505a170b3e3d83225c85

https://github.com/hortonworks/cloudbreak/commit/d09b0074c45af209ccf34855dcf4c1f34c3ccebb

https://github.com/hortonworks/cloudbreak/commit/c93b91fd6a08de7516ab763098f2dcd3abc149f0

https://github.com/hortonworks/cloudbreak/commit/f50f5c8f38941db958eac27c663ae00ecba7b0f5

Coding Guidelines

  • If you introduce a new cloud SDK or API for a feature, please ensure that the newly introduced API calls are supported in every region; if they are not, search for an alternative solution. Cloud providers often introduce their new services gradually.

Catching Up

When you're working on a change on your branch and your branch falls behind the desired/initial branch against which you would like to open your future pull request, our way of catching up is rebasing.

If you experience this often, it is good practice to fetch and rebase onto the initial branch multiple times a day, because there are periods when dozens of changes land on different branches. Admittedly, continuously rebasing your branch (especially while working on a huge change) can really be a pain, but this practice ensures that we submit our commits in the proper order and form.

In addition, please do not merge branches together if you can solve your problem with rebasing. And even if you think that your change would have no impact on the codebase or the current set of functionality, if you're not from our team or don't have written permission from one of our team members, please never push anything directly to the master branch, and especially never by force.
