• Stars: 112
  • Rank: 306,119 (Top 7%)
  • Language: C
  • Created: over 12 years ago
  • Updated: almost 5 years ago

Repository Details

The Encog project for C/C++

Encog Machine Learning Framework

Encog C/C++ v1.0 (experimental)

This is a quick and experimental port of Encog that I did for C/C++. I am not currently developing this port, but I am putting it on GitHub in case it is useful to someone. The primary purpose of this port was to experiment with CUDA; however, it will work either with or without CUDA. The CPU version makes use of OpenMP for efficient processing.
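
As a rough sketch of how OpenMP is typically used for this kind of CPU-side work (generic illustration code, not taken from the Encog sources), a training error calculation can be spread across all available cores with a single parallel-for reduction:

#include <omp.h>
#include <stdio.h>

/* Sum squared errors across all training records, splitting the loop
 * over the available CPU cores. Generic illustration only. */
double sum_squared_error(const double *actual, const double *ideal, int n)
{
    double total = 0.0;
    int i;

#pragma omp parallel for reduction(+:total)
    for (i = 0; i < n; i++) {
        double diff = actual[i] - ideal[i];
        total += diff * diff;
    }
    return total;
}

int main(void)
{
    double actual[4] = {0.1, 0.9, 0.2, 0.8};
    double ideal[4]  = {0.0, 1.0, 0.0, 1.0};
    printf("SSE: %f (threads available: %d)\n",
           sum_squared_error(actual, ideal, 4), omp_get_max_threads());
    return 0;
}

A file like this is compiled with the -fopenmp flag (for example, gcc -fopenmp example.c); without that flag the pragma is ignored and the loop simply runs on one core.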

This repository includes the complete source code for Encog for C. The header files are designed so that Encog can also be used from C++. The sections below explain how to compile and run Encog for C.
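
For reference, the usual way a C header is made usable from C++ is the extern "C" guard pattern sketched below. The file and function names here are hypothetical and only illustrate the convention; the real Encog headers declare their own API.

/* example.h - hypothetical header illustrating the C/C++ guard pattern */
#ifndef EXAMPLE_H
#define EXAMPLE_H

#ifdef __cplusplus
extern "C" {             /* give the declarations C linkage when compiled as C++ */
#endif

/* A hypothetical API function; real Encog headers declare their own functions here. */
double example_train_iteration(const double *input, int count);

#ifdef __cplusplus
}                         /* end extern "C" */
#endif

#endif /* EXAMPLE_H */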

Visual C++

Just open the encog-c.sln file and compile as you would any Visual Studio project.

UNIX

Simply execute the make command in the directory that contains the Encog makefile. The makefile has been tested on Linux, Mac, and the Raspberry Pi's Debian 7 release.

There are several options you can use.

To force a 32-bit or 64-bit compile:

make ARCH=32
make ARCH=64

To compile with CUDA (for GPU).

make CUDA=1

You can also combine:

make ARCH=64 CUDA=1

Clear previous builds:

make clean

Raspberry Pi

The gcc that comes with the Raspberry Pi seems to have trouble with the -m32 option. The following command will compile Encog for the Raspberry Pi.

make ARCH=RPI

Encog CUDA Support

Encog for C can make use of an nVidia CUDA-enabled GPU for increased performance. Even if you do not plan to program in C, you can use the Encog for C command line tool to train neural networks. Encog for C makes use of the same EG files and EGB files used by other Encog platforms, such as the Encog Workbench.

CUDA is a very specialized architecture and will not provide a performance boost for all operations. Currently, CUDA can only be used with the PSO training method. It is unlikely that RPROP will be extended to CUDA, because the CUDA architecture is not particularly well suited to RPROP. RPROP, due to its "backward propagation" nature, requires the activations of all neurons to be kept. Memory access is one of the most cycle-intensive aspects of GPU programming. CUDA can achieve great speeds when only a SMALL amount of memory must be kept during training. CUDA also works well if a small amount of memory is kept temporarily and then overwritten as training progresses. This is the case with PSO.
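
To make the memory argument concrete, here is a minimal, generic PSO update for a single particle (an illustrative sketch, not Encog's actual implementation): the particle's position and velocity are small buffers that are overwritten on every iteration, whereas RPROP would need to retain the activations of every neuron.

/* Generic PSO velocity/position update for one particle.
 * Each particle needs only its position, velocity, and personal best;
 * these small buffers are overwritten every iteration. */
#include <stdio.h>
#include <stdlib.h>

static void pso_update(double *pos, double *vel,
                       const double *pbest, const double *gbest,
                       int dim, double w, double c1, double c2)
{
    for (int i = 0; i < dim; i++) {
        double r1 = (double)rand() / RAND_MAX;
        double r2 = (double)rand() / RAND_MAX;
        vel[i] = w * vel[i]
               + c1 * r1 * (pbest[i] - pos[i])
               + c2 * r2 * (gbest[i] - pos[i]);
        pos[i] += vel[i];   /* updated in place; no history is kept */
    }
}

int main(void)
{
    double pos[2] = {0.5, -0.5}, vel[2] = {0.0, 0.0};
    double pbest[2] = {0.2, 0.1}, gbest[2] = {0.0, 0.0};
    pso_update(pos, vel, pbest, gbest, 2, 0.7, 1.4, 1.4);
    printf("new position: %f %f\n", pos[0], pos[1]);
    return 0;
}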

Using CUDA with Encog for C

When Encog for C is compiled, CUDA support must be specified. The command to compile Encog with CUDA is given here.

make CUDA=1 ARCH=64

The above command will compile Encog for CUDA and a 64-bit CPU. This is the most advanced build of Encog for C. I provide CUDA binaries for both Mac and Windows. To find out whether your version of Encog for C supports CUDA, issue the following command.

encog-cmd CUDA

This will perform a simple test of the CUDA system. If you are using a CUDA-enabled Encog build, the version will be reported like this:

* * Encog C/C++ (64 bit, CUDA) Command Line v1.0 * *

If you are using a CUDA build, but your system does not have CUDA drivers or a CUDA GPU, you will receive a system dependent error message. For more information, see the troubleshooting section of Encog for C.

The CUDA build of Encog will always use the GPU if the training method supports it. To disable the GPU, use the option /gpu:0. You can also specify /gpu:1 to enable the GPU; however, this is redundant, given that the default operation is to use the GPU. The GPU will only be used with PSO training.

A Simple Benchmark

The Encog command line utility contains a simple benchmark. This benchmark can be used to compare training performance between GPU plus CPU and CPU only. When the GPU is enabled, Encog still makes full use of your CPU cores; the GPU is simply brought in to assist with certain calculations. The following shows the output from a simple benchmark run. The benchmark trains on 10,000 data items, each with 10 inputs and one output, for 100 iterations of PSO. The following time was achieved using both the GPU and the CPU.

heaton:encog-c jheaton$ ./encog benchmark /gpu:1

* * Encog C/C++ (64 bit, CUDA) Command Line v1.0 * *
Processor/Core Count: 8
Basic Data Type: double (64 bits)
GPU: enabled
Input Count: 10
Ideal Count: 1
Records: 10000
Iterations: 100

Performing benchmark...please wait
Benchmark time(seconds): 4.2172
Benchmark time includes only training time.

Encog Finished.  Run time 00:00:04.4040
heaton:encog-c jheaton$

As you can see from above, the benchmark completed in about 4.2 seconds. Now we will run the same benchmark with the GPU disabled.

heaton:encog-c jheaton$ ./encog benchmark /gpu:0

* * Encog C/C++ (64 bit, CUDA) Command Line v1.0 * *
Processor/Core Count: 8
Basic Data Type: double (64 bits)
GPU: disabled
Input Count: 10
Ideal Count: 1
Records: 10000
Iterations: 100

Performing benchmark...please wait
Benchmark time(seconds): 5.3727
Benchmark time includes only training time.

Encog Finished.  Run time 00:00:05.3749
heaton:encog-c jheaton$ 

As you can see, disabling the GPU made the benchmark take roughly one second longer (about 5.4 seconds versus 4.2 seconds with the GPU). As you increase the amount of training data, the gap tends to increase. On small training sets, the overhead of involving the GPU may actually slow training. You would not want to use the GPU for a simple XOR training run.

The above benchmark was performed on a MacBook Pro with an Intel i7 CPU and an nVidia 650M GPU. For more information on the computer, see the article on Jeff's Computers. Results will be better with more advanced GPUs. The M on the 650 also means that this is a "mobile" edition of the GPU; mobile GPUs tend to perform worse than desktop GPUs.

More Repositories

1. t81_558_deep_learning - T81-558: Keras - Applications of Deep Neural Networks @Washington University in St. Louis (Jupyter Notebook, 5,671 stars)
2. aifh - Artificial Intelligence for Humans (Java, 909 stars)
3. encog-java-core (Java, 744 stars)
4. encog-dotnet-core (C#, 430 stars)
5. app_deep_learning - T81-558: PyTorch - Applications of Deep Neural Networks @Washington University in St. Louis (Jupyter Notebook, 291 stars)
6. jh-kaggle-util - Jeff Heaton's Kaggle Utilities (Python, 279 stars)
7. encog-javascript - Encog for Javascript (JavaScript, 197 stars)
8. present - Code from Jeff Heaton's YouTube videos, articles, and conference presentations (Assembly, 173 stars)
9. encog-java-examples (Java, 163 stars)
10. pyimgdata (Python, 107 stars)
11. jeffheaton-book-code - Source code from my older books (pre-Artificial Intelligence for Humans); I am no longer updating these older editions (Java, 88 stars)
12. mergelife - Evolve complex cellular automata with a genetic algorithm (Python, 73 stars)
13. encog-java-workbench (Java, 60 stars)
14. pretrained-gan-70s-scifi - Pretrained model, 1024x1024, trained on 1970s sci-fi art (Jupyter Notebook, 51 stars)
15. pretrained-gan-minecraft - Minecraft GAN (Jupyter Notebook, 44 stars)
16. encog-dotnet-more-examples - Additional examples for Encog, beyond the console examples provided with Encog; these are primarily WinForms GUI applications and may use third-party libraries other than Encog (C#, 43 stars)
17. docker-stylegan2-ada - My Docker image for running StyleGAN2 ADA with GPU (Dockerfile, 39 stars)
18. papers - Source code related to academic papers I've published (Python, 30 stars)
19. encog-sample-csharp - A sample application for Encog C# (C#, 28 stars)
20. app_generative_ai - T81-559: Applications of Generative Artificial Intelligence (Jupyter Notebook, 27 stars)
21. pretrained-gan-fish - Pretrained model for fish, 256x256 (Jupyter Notebook, 24 stars)
22. article-code (Python, 23 stars)
23. stylegan2-toys - Various projects that I've worked on for special effects in StyleGAN2 (Jupyter Notebook, 18 stars)
24. encog-sample-java - Sample stand-alone Encog project (Java, 15 stars)
25. libsvm-java - A repository to hold the latest libsvm Java and track changes; this is not my own project, see the homepage URL for the source (14 stars)
26. pretrained-merry-gan-mas (Jupyter Notebook, 13 stars)
27. docker-jupyter-python-r - Jupyter notebook with Python and R (Jupyter Notebook, 12 stars)
28. ios_video_classify - A simple iOS application that uses MobileNet to classify 1000 different images from an iOS device's video camera (Swift, 11 stars)
29. phd-dissertation - Dissertation (Jeff Heaton) (Java, 10 stars)
30. apollo64 - Apollo64 BBS, an old Commodore 64 based BBS that I created back in the late 80's; BASIC/6510 assembler (Assembly, 10 stars)
31. jeffheaton.github.io - The AIFH website (HTML, 10 stars)
32. proben1 - A copy of the datasets for PROBEN1 from the paper "Proben1: A Set of Neural Network Benchmark Problems and Benchmarking Rules", Lutz Prechelt (Perl, 10 stars)
33. mergelife-experiments - Data for several runs of MergeLife (9 stars)
34. pretrained-gan-tech - Technology GAN (Jupyter Notebook, 9 stars)
35. jeffheaton (8 stars)
36. pysamppackage - A sample package for Python to learn how to structure a package (8 stars)
37. tf-intro (Jupyter Notebook, 7 stars)
38. ga-csharp - A simple C# genetic algorithm (C#, 7 stars)
39. encog-silverlight-core - Encog for Silverlight is no longer supported or maintained; version 3.0 was the last released version (6 stars)
40. jlatexmath-example - A simple example of using JLatexMath (Java, 6 stars)
41. stylegan2-cloud - Utilities to run StyleGAN2 ADA PyTorch in the cloud (5 stars)
42. docker-stylegan3 - Docker image to run StyleGAN3 with GPU (Dockerfile, 3 stars)
43. jna-example - Simple Java JNA example with a Maven build script (Java, 3 stars)
44. jeffheaton-bookcode - Source code from books published by Jeff Heaton (3 stars)
45. data-mirror - A mirror of some of the files at data.heatonresearch.com (HTML, 2 stars)
46. data - Some common datasets with headers added and properly set up for Pandas/others (2 stars)
47. jheaton_images - Images that I use for various sites, like Kaggle (2 stars)
48. baseenv-jupyter - Basic Docker environment for Jupyter and Python (Dockerfile, 2 stars)
49. docker-jupyterhub - My JupyterHub Docker image (2 stars)
50. cabi_genai_automation - Introduction to Automation with LangChain, Generative AI, and Python (Jupyter Notebook, 2 stars)
51. kaggle-otto-group - Jeff Heaton's entry for the Kaggle Otto Group Product Classification Challenge (1 star)
52. jheaton-ds2 - Some of my datasets (public) (1 star)
53. docker-mergelife - Docker image for MergeLife (Dockerfile, 1 star)