DesignPatterns
Solutions to some of the previous university exams and homework assignments from the course on Object-Oriented Programming. Some lecture notes are to be uploaded.
GeometryPlotting
Manipulations with basic geometrical primitives, such as the point, line, plane, sphere, and triangle in 3D space: translation and rotation operations, distance calculation, intersections, orthogonal projections of one object onto another, etc. The objects can be defined in the global or in one of the local coordinate systems and converted from one coordinate system into another. The library was built to be as simple and intuitive as possible. Users do not have to remember the reference coordinate system of each object: the objects store the coordinate system they are defined in, and all transformations are carried out implicitly when necessary.
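For illustration, here is a minimal sketch in Python of that idea: every object remembers the frame it was defined in and converts implicitly when two objects interact. The class and method names are hypothetical, not the library's actual API.

```python
# Hypothetical sketch (not the library's actual API): objects remember their
# coordinate system and convert implicitly when they interact.
import numpy as np

class CoordinateSystem:
    """A local frame defined by an origin and a 3x3 rotation matrix."""
    def __init__(self, origin=(0.0, 0.0, 0.0), rotation=None):
        self.origin = np.asarray(origin, dtype=float)
        self.rotation = np.eye(3) if rotation is None else np.asarray(rotation, dtype=float)

    def to_global(self, local_xyz):
        # Rotate, then translate local coordinates into the global frame.
        return self.rotation @ np.asarray(local_xyz, dtype=float) + self.origin

GLOBAL = CoordinateSystem()

class Point:
    """A point that stores the frame it was defined in."""
    def __init__(self, xyz, system=GLOBAL):
        self.xyz = np.asarray(xyz, dtype=float)
        self.system = system

    def global_xyz(self):
        return self.system.to_global(self.xyz)

    def distance_to(self, other):
        # Both points are converted to the global frame implicitly.
        return float(np.linalg.norm(self.global_xyz() - other.global_xyz()))

# Usage: the user never converts frames by hand.
local = CoordinateSystem(origin=(1.0, 0.0, 0.0))
p = Point((0.0, 0.0, 0.0), system=local)   # defined in a local frame
q = Point((0.0, 0.0, 0.0))                 # defined in the global frame
print(p.distance_to(q))                    # -> 1.0
```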
MachineLearning
My solutions to the complete Machine Learning course by Andrew Ng on Coursera.
AttentionUNet_LungSegmentation
DesignPatternsJava
Implementation of the Gang of Four design patterns in Java, as well as some useful books on the topic.
CodeAI
Our solution for predicting soil functional properties at unsampled locations using infrared spectroscopy, georeferencing of soil samples, and Earth remote sensing data. Won first place and a chance to visit the We Are Developers conference in Berlin.
roland-river
Implementation of ROLAND (Graph Learning Framework for Dynamic Graphs), suitable for integration into River.
GibbsSampler
My implementation of the Gibbs sampler on the NVIDIA Jetson Nano.
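As a general illustration of the technique (not the Jetson Nano code itself), a Gibbs sampler draws each variable in turn from its conditional distribution given the others. The toy example below, with illustrative parameter values, samples a correlated bivariate normal:

```python
# Toy Gibbs sampler: draw from a standard bivariate normal with correlation
# rho by alternating the exact conditionals x|y and y|x.
import numpy as np

rng = np.random.default_rng(0)
rho, n_samples = 0.8, 10_000
x, y = 0.0, 0.0
samples = np.empty((n_samples, 2))

for i in range(n_samples):
    # Conditionals of a standard bivariate normal: N(rho * other, 1 - rho^2)
    x = rng.normal(rho * y, np.sqrt(1.0 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1.0 - rho**2))
    samples[i] = (x, y)

print(np.corrcoef(samples[1000:].T)[0, 1])  # close to rho after burn-in
```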
MarkovCluster
Basic implementation presented at Code Camp Macedonia. MCL is an unsupervised graph clustering algorithm based on the simulation of stochastic flow in graphs.
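A minimal sketch of the MCL iteration: expansion (matrix powers spread flow) alternates with inflation (element-wise powers strengthen already strong flow) on a column-stochastic matrix. The parameter values and the tiny example graph are illustrative, not taken from the repository.

```python
# Minimal sketch of the Markov Cluster (MCL) idea.
import numpy as np

def mcl(adjacency, expansion=2, inflation=2.0, iterations=50):
    M = adjacency + np.eye(len(adjacency))        # add self-loops
    M = M / M.sum(axis=0)                         # column-normalize (stochastic flow)
    for _ in range(iterations):
        M = np.linalg.matrix_power(M, expansion)  # expansion: flow spreads out
        M = M ** inflation                        # inflation: strong flow is strengthened
        M = M / M.sum(axis=0)                     # renormalize columns
    # Nodes with mass on the diagonal are "attractors"; the non-zero entries of
    # an attractor's row are the nodes it attracts, i.e. one cluster.
    clusters = {frozenset(np.nonzero(row > 1e-6)[0].tolist())
                for i, row in enumerate(M) if M[i, i] > 1e-6}
    return [sorted(c) for c in clusters]

# Two triangles joined by a single edge -> two clusters expected.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
print(mcl(A))  # e.g. [[0, 1, 2], [3, 4, 5]]
```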
DeepLearningPurePython3
Pure Python 3 code for the Deep Learning Coursera repositories.
MLPlayground
Implementations of some basic machine learning algorithms in F#, usually applied to famous public datasets (Iris, Titanic, MNIST), depending on the problem.
QLearning_MATLAB
Some basic exercises and algorithms of reinforcement learning, including feed-forward networks, backpropagation, gradient descent, etc.
DeepLearningSpecialization
My solutions to the Deep Learning Specialization on Coursera.
Mathematica11.3_KeyGen
A simple script to generate working product keys for Mathematica 11.3 based on your MathID and a product key for a trial period. A full description of keygens for Mathematica will soon be available on my website.
Autoencoder
Simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally, the encoder and decoder are built with neural networks and then trained on example data. In another file, a convolutional layer is added for better results.
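A minimal sketch of that encoder/decoder pipeline in PyTorch; the layer sizes are illustrative, and random tensors stand in for MNIST so the snippet stays self-contained.

```python
# Minimal fully connected autoencoder sketch (illustrative layer sizes;
# random tensors stand in for flattened 28x28 MNIST images).
import torch
from torch import nn

encoder = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 32))
decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 784), nn.Sigmoid())
model = nn.Sequential(encoder, decoder)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.MSELoss()

images = torch.rand(64, 784)          # a fake batch of flattened images
for _ in range(5):                    # a few training steps
    reconstruction = model(images)    # compress to 32 dims, then reconstruct
    loss = criterion(reconstruction, images)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
print(loss.item())
```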
CUDNNSamples
Samples of NVIDIA's cuDNN. The NVIDIA CUDA® Deep Neural Network library (cuDNN) is a GPU-accelerated library of primitives for deep neural networks. cuDNN provides highly tuned implementations for standard routines such as forward and backward convolution, pooling, normalization, and activation layers.
DCGAN
DCGAN (Deep Convolutional Generative Adversarial Network) is one of the popular and successful network designs for GANs. It is mainly composed of convolution layers, without max pooling or fully connected layers, and uses strided convolutions for downsampling and transposed convolutions for upsampling. The figure below shows the network design for the generator.
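The figure itself is not reproduced in this listing; as a sketch of the same design, here is a DCGAN-style generator in PyTorch, assuming the common 64x64 RGB configuration (the exact channel sizes in the repository may differ): latent noise is upsampled purely with strided transposed convolutions and batch normalization, ending in a Tanh.

```python
# DCGAN-style generator sketch: transposed convolutions only, no fully
# connected or max-pooling layers (channel sizes are the common 64x64 setup).
import torch
from torch import nn

nz = 100  # size of the latent vector
generator = nn.Sequential(
    nn.ConvTranspose2d(nz, 512, kernel_size=4, stride=1, padding=0, bias=False),
    nn.BatchNorm2d(512), nn.ReLU(True),                 # -> 4x4
    nn.ConvTranspose2d(512, 256, 4, 2, 1, bias=False),
    nn.BatchNorm2d(256), nn.ReLU(True),                 # -> 8x8
    nn.ConvTranspose2d(256, 128, 4, 2, 1, bias=False),
    nn.BatchNorm2d(128), nn.ReLU(True),                 # -> 16x16
    nn.ConvTranspose2d(128, 64, 4, 2, 1, bias=False),
    nn.BatchNorm2d(64), nn.ReLU(True),                  # -> 32x32
    nn.ConvTranspose2d(64, 3, 4, 2, 1, bias=False),
    nn.Tanh(),                                          # -> 64x64 RGB image in [-1, 1]
)

noise = torch.randn(16, nz, 1, 1)
print(generator(noise).shape)  # torch.Size([16, 3, 64, 64])
```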
DatathonIBM
Credit risk prediction model for a large logistics & distribution company. The idea behind the solution was to use the Altman Z credit score to turn the unsupervised problem into a semi-supervised one, and then apply self-training methods.
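An illustrative sketch of that pipeline, not the competition code: the feature column names, the Z-score variant, and the confidence thresholds below are all assumptions. Companies in the classical "safe" and "distress" zones of the original Altman Z-score receive pseudo-labels, and a classifier is then self-trained on its own confident predictions for the remaining companies.

```python
# Sketch of Altman-Z pseudo-labeling followed by self-training.
# Column names, Z-score variant, and thresholds are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def altman_z(df):
    # Classical (1968) manufacturing formula; other Z variants weight differently.
    return (1.2 * df["working_capital"] / df["total_assets"]
            + 1.4 * df["retained_earnings"] / df["total_assets"]
            + 3.3 * df["ebit"] / df["total_assets"]
            + 0.6 * df["market_value_equity"] / df["total_liabilities"]
            + 1.0 * df["sales"] / df["total_assets"])

def self_train(features, z_scores, rounds=5, confidence=0.9):
    labels = np.full(len(features), -1)           # -1 = unlabeled
    labels[z_scores > 2.99] = 0                   # classical "safe" zone -> low risk
    labels[z_scores < 1.81] = 1                   # classical "distress" zone -> high risk
    clf = LogisticRegression(max_iter=1000)
    for _ in range(rounds):
        known = labels != -1
        clf.fit(features[known], labels[known])
        proba = clf.predict_proba(features)[:, 1]
        # Promote confident predictions on unlabeled rows to pseudo-labels.
        labels[(labels == -1) & (proba > confidence)] = 1
        labels[(labels == -1) & (proba < 1 - confidence)] = 0
    return clf, labels
```

The 2.99 and 1.81 cut-offs are the textbook zones for the original Z-score; the actual solution may have used a different variant or different thresholds.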
DataStructures
Demos of implementations of the basic data structures, both linear and non-linear, in C/C++.
MarkovNetworkOCR
A Markov network, or undirected graphical model, is a set of random variables having a Markov property described by an undirected graph. In other words, a random field is said to be a Markov random field if it satisfies the Markov properties. A Markov network or MRF is similar to a Bayesian network in its representation of dependencies; the differences are that Bayesian networks are directed and acyclic, whereas Markov networks are undirected and may be cyclic. Thus, a Markov network can represent certain dependencies that a Bayesian network cannot (such as cyclic dependencies); on the other hand, it cannot represent certain dependencies that a Bayesian network can (such as induced dependencies). The underlying graph of a Markov random field may be finite or infinite.
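A toy example of such an undirected model (not the repository's OCR model): three character variables with unary potentials for local evidence and pairwise potentials for neighbor compatibility, where the joint distribution is the normalized product of all factors. The alphabet and potential values are made up for illustration.

```python
# Toy pairwise Markov network over three character positions.
import itertools
import numpy as np

alphabet = ["a", "b"]
unary = {0: {"a": 2.0, "b": 1.0},     # local evidence for each position
         1: {"a": 1.0, "b": 3.0},
         2: {"a": 1.0, "b": 1.0}}
pairwise = {("a", "a"): 2.0, ("a", "b"): 1.0,   # compatibility of adjacent characters
            ("b", "a"): 1.0, ("b", "b"): 2.0}

def score(word):
    # Unnormalized potential: product of unary and pairwise factors.
    s = np.prod([unary[i][c] for i, c in enumerate(word)])
    s *= np.prod([pairwise[(word[i], word[i + 1])] for i in range(len(word) - 1)])
    return s

words = ["".join(w) for w in itertools.product(alphabet, repeat=3)]
Z = sum(score(w) for w in words)                     # partition function
for w in sorted(words, key=score, reverse=True)[:3]:
    print(w, round(score(w) / Z, 3))                 # top 3 joint probabilities
```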