Underwater-Image-Enhancement-by-Wavelength-Compensation-and-Dehazing
Acquiring clear images in underwater environments is an important issue in ocean engineering. The quality of underwater images plays a pivotal role in scientific missions such as monitoring sea life, taking census of populations, and assessing geological or biological environments. Capturing images underwater is challenging, mostly due to haze caused by light that is reflected from a surface and is deflected and scattered by water particles, and colour change due to varying degrees of light attenuation for different wavelengths. Light scattering and colour change result in contrast loss and colour deviation in images acquired underwater.
Mapping-and-Localization-of-TurtleBot-Using-ROS
The goal of the project is to gain experience in implementing different robotic algorithms using the ROS framework. The first task is to build a map of the environment and navigate to a desired location in the map. Next, we have to detect the location of a marker (e.g. an AR marker or colour marker) in the map where a pick-and-place task takes place, and autonomously localise and navigate to that marker location. After reaching the marker, we have to move precisely towards the specified location using visual servoing. At the desired location, a robotic arm picks up an object (e.g. a small cube) and places it on our TurtleBot (the pick-and-place task). After the pick-and-place task, the robot has to find another marker, which specifies the final target location, and autonomously localise and navigate to it, which completes the project.
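A minimal sketch of how one navigation leg could be issued through the standard move_base action interface; it assumes a ROS 1 setup with move_base running and a "map" frame, and the node name and goal coordinates are purely illustrative.

```python
# Minimal sketch: send one navigation goal to move_base and wait for it.
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

def navigate_to(x, y):
    """Send a single navigation goal to move_base and wait for the result."""
    client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
    client.wait_for_server()
    goal = MoveBaseGoal()
    goal.target_pose.header.frame_id = 'map'
    goal.target_pose.header.stamp = rospy.Time.now()
    goal.target_pose.pose.position.x = x
    goal.target_pose.pose.position.y = y
    goal.target_pose.pose.orientation.w = 1.0  # face along the map x-axis
    client.send_goal(goal)
    client.wait_for_result()
    return client.get_state()

if __name__ == '__main__':
    rospy.init_node('marker_navigation')
    navigate_to(1.5, 0.5)  # hypothetical marker location in the map frame
```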
Human-Activity-Recognition-from-Videos-Using-Machine-Learning
Video-based human action detection is currently a very active topic; it has been demonstrated to be useful in a wide range of applications, including video surveillance, tele-monitoring of patients and senior people, medical diagnosis and training, video content analysis and search, and intelligent human-computer interaction [1]. As video camera sensors become less expensive, this approach is increasingly attractive, since it is low cost and can be adapted to different video scenarios.
Visual-Tracking-Using-MeanShift
Mean-Shift (MS) is widely known as one of the most basic yet powerful tracking algorithms. Mean-Shift treats the feature space as an empirical probability density function (pdf). If the input is a set of points, MS considers them as samples drawn from the underlying pdf. If dense regions (or clusters) are present in the feature space, they correspond to local maxima of the pdf; MS associates each data point with the nearby peak of the pdf. As an example, see the car sequence in the file "Mean_Shift_Tracking.m". We want to track the car in this sequence. We first define the initial patch of the car in the first frame of the sequence; the moving car patch is then estimated using the Bhattacharyya coefficient and the weights of the neighbouring patches. This is explained in detail in the report.
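For reference, here is a minimal OpenCV sketch of the same back-projection idea; the repository's own implementation is the MATLAB file "Mean_Shift_Tracking.m", and the video filename and initial window below are placeholders.

```python
# Mean-shift tracking via colour-histogram back-projection (OpenCV).
import cv2
import numpy as np

cap = cv2.VideoCapture('car.avi')       # placeholder video file
ok, frame = cap.read()

# Initial patch of the car in the first frame: (x, y, w, h), chosen by hand.
x, y, w, h = 200, 150, 80, 60
track_window = (x, y, w, h)

# Hue histogram of the initial patch serves as the target model.
roi = frame[y:y + h, x:x + w]
hsv_roi = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
roi_hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
cv2.normalize(roi_hist, roi_hist, 0, 255, cv2.NORM_MINMAX)

# Stop after 10 iterations or when the window moves less than 1 pixel.
term_crit = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    back_proj = cv2.calcBackProject([hsv], [0], roi_hist, [0, 180], 1)
    # Mean-shift climbs the back-projection (an empirical pdf) to a local peak.
    _, track_window = cv2.meanShift(back_proj, track_window, term_crit)
```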
Install-TensorFlow-and-Keras-on-Jetson-TX2-with-JetPack-3.1
A very easy way to install Keras and TensorFlow on the Jetson TX2. This repository is just to help people struggling to install TensorFlow and Keras on the TX2.
Implemented-8-Point-Algorithm-
The eight-point algorithm is an algorithm used in computer vision to estimate the essential matrix or the fundamental matrix related to a stereo camera pair from a set of corresponding image points.
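As an illustration, a compact numpy sketch of the (unnormalised) eight-point algorithm for the fundamental matrix; pts1 and pts2 are assumed to be Nx2 arrays (N >= 8) of corresponding pixel coordinates, and Hartley normalisation of the points should be added for real data.

```python
import numpy as np

def eight_point(pts1, pts2):
    # Each correspondence (x, y) <-> (x', y') gives one row of the linear
    # system A f = 0, where f stacks the nine entries of F.
    x1, y1 = pts1[:, 0], pts1[:, 1]
    x2, y2 = pts2[:, 0], pts2[:, 1]
    A = np.column_stack([x2 * x1, x2 * y1, x2,
                         y2 * x1, y2 * y1, y2,
                         x1, y1, np.ones(len(x1))])
    # Least-squares solution: right singular vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    # Enforce rank 2, the defining constraint of a fundamental matrix.
    U, S, Vt = np.linalg.svd(F)
    S[2] = 0.0
    return U @ np.diag(S) @ Vt
```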
3D-Model-Based-Tracking-using-VISP
We implemented 3D model-based object detection, tracking, and pose computation using the ViSP library. We found this library very advantageous: tracking and detection can be run with a single line of code, the data structures for holding data are well designed, and the classes and methods are well documented with many examples.
Projective-Reconstruction
From several images of a scene and the coordinates of corresponding points identified in the different images, it is possible to construct a three-dimensional point-cloud model of the scene and compute the camera locations. From uncalibrated images the model can be reconstructed up to an unknown projective transformation, which can be upgraded to a Euclidean model by adding or computing calibration information.
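The core numerical step of such a reconstruction is triangulation. A minimal linear (DLT) sketch, assuming two 3x4 projection matrices and one pixel correspondence per view, could look like the following; with uncalibrated cameras the recovered points carry exactly the projective ambiguity mentioned above.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear triangulation: P1, P2 are 3x4 projection matrices and
    x1, x2 the matching (x, y) pixel coordinates in the two views."""
    # Each view contributes two rows of A X = 0 for the homogeneous point X.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenise
```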
Wavelet-Transformation-on-Images
The wavelet transform is similar to the Fourier transform (or, more closely, to the windowed Fourier transform) but with a completely different merit function. The main difference is this: the Fourier transform decomposes the signal into sines and cosines, i.e. functions localized only in Fourier space; by contrast, the wavelet transform uses functions that are localized in both real and Fourier space.
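A short sketch of a single-level 2-D decomposition using the PyWavelets package (an assumption; any wavelet library would do), with the Haar wavelet and a random "image" as examples:

```python
import numpy as np
import pywt  # PyWavelets

image = np.random.rand(256, 256)  # placeholder for a grayscale image
# cA: approximation; cH/cV/cD: horizontal/vertical/diagonal detail bands,
# each localised in both space and frequency, unlike a Fourier basis.
cA, (cH, cV, cD) = pywt.dwt2(image, 'haar')
```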
Intensity-Based-Visual-Servoing-using-VISP
We implemented intensity-based visual servoing, computing the robot velocities with the control law provided by the ViSP library. We found this library very advantageous: tracking and detection can be run with a single line of code, the data structures for holding data are well designed, and the classes and methods are well documented with many examples. We found it easy to use this library for visual servoing in the context of our tasks.
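The underlying control law is the classic v = -lambda * L^+ * e. A generic numpy sketch (ViSP's vpServo class computes this internally; L, e and the gain below are illustrative stand-ins):

```python
import numpy as np

def control_law(L, e, gain=0.5):
    """Camera velocity screw (vx, vy, vz, wx, wy, wz) from the interaction
    matrix L and the visual error e; for intensity-based servoing, e is the
    difference between current and desired pixel intensities."""
    return -gain * np.linalg.pinv(L) @ e
```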
8-Point-Algorithm-Vs-5-Point-Algorithm
The goal of this practical is to compare the classical linear 8-point algorithm and the linear 5-point algorithm knowing the vertical direction of the camera. Consider 50 points randomly distributed in a cube of size [-300, 300] x [-300, 300] x [-300, 300] in the world frame (Ow; Xw, Yw, Zw). Let (Oc1; Xc1, Yc1, Zc1) and (Oc2; Xc2, Yc2, Zc2) denote the two camera frames. We suppose a calibrated camera posed at a rotation Ri and translation Ti with respect to the world frame (Xw = Ri Xci + Ti).
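As an illustration, a minimal numpy sketch of this simulation setup; the camera pose (R, T) below is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
Xw = rng.uniform(-300, 300, size=(3, 50))   # 50 random world points

R = np.eye(3)                               # illustrative camera rotation
T = np.array([[0.0], [0.0], [-1000.0]])     # camera placed outside the cube

# World -> camera, inverted from the convention Xw = R Xc + T.
Xc = R.T @ (Xw - T)
x = Xc[:2] / Xc[2]   # normalised (calibrated) image coordinates
```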
Calibrated-SfM
Incremental SfM is the standard approach that adds one image at a time to grow the reconstruction. While this method is robust, it is not scalable because it requires repeated runs of expensive bundle adjustment. Global SfM differs from incremental SfM in that it considers the entire view graph at once instead of incrementally adding more and more images to the reconstruction.
2D-Filtering-using-VHDL
The goal is to process the input data flow (corresponding to the Lena image) using a 2D filter. Two main tasks are expected: the design and validation of a customizable 2D filter (filter IP), and the implementation of the 2D filter on a Nexys4 evaluation board. The filter IP implementation should be included in a reference design (provided by the teacher) to ease the integration. The filter IP can be split into two main parts: the cache memory, which temporarily stores the data flow before filtering, and the processing part. The cache memory is designed for simultaneous pixel accesses, so a 3x3 pixel neighbourhood is accessible in one clock cycle. The structure is based on flip-flop registers and First-In-First-Out (FIFO) memory.
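The behaviour of that cache can be sketched in Python for clarity; this is a behavioural model only, not the VHDL, and the line width is illustrative.

```python
# Buffering the last 2*width + 3 pixels of the raster-scan stream makes a full
# 3x3 neighbourhood available on every new pixel, which is what the two line
# FIFOs plus flip-flop registers achieve in hardware.
from collections import deque

def stream_windows(pixels, width=512):
    buf = deque(maxlen=2 * width + 3)
    for p in pixels:
        buf.append(p)
        if len(buf) == 2 * width + 3:
            b = list(buf)
            # Window rows: line two back, previous line, current line.
            # (Windows straddling a line boundary would be masked in the real IP.)
            yield [b[0:3], b[width:width + 3], b[2 * width:2 * width + 3]]

def box_filter(pixels, width=512):
    """Example processing part: a 3x3 box filter over the streamed windows."""
    for win in stream_windows(pixels, width):
        yield sum(sum(row) for row in win) // 9
```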
Implementing-Horn-Schunck-and-Lucas-Kanade-Optical-Flow-methods
The goal of this work is to familiarize yourself with the optical flow problem. The Horn-Schunck and Lucas-Kanade methods will be applied to the image stabilization problem.
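As a starting point, a brief Lucas-Kanade sketch using OpenCV's pyramidal implementation; the frame filenames are placeholders, and Horn-Schunck, being a global variational method, has no direct OpenCV counterpart, so only the LK side is illustrated.

```python
import cv2
import numpy as np

prev = cv2.imread('frame0.png', cv2.IMREAD_GRAYSCALE)
curr = cv2.imread('frame1.png', cv2.IMREAD_GRAYSCALE)

# Points to track: strong corners, where the LK linear system is well conditioned.
p0 = cv2.goodFeaturesToTrack(prev, maxCorners=200, qualityLevel=0.01,
                             minDistance=7)
p1, status, err = cv2.calcOpticalFlowPyrLK(prev, curr, p0, None)

# For stabilization, the median displacement gives a robust global shift.
flow = (p1 - p0)[status.ravel() == 1]
dx, dy = np.median(flow.reshape(-1, 2), axis=0)
```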
Wall-Follower-E-Puck
The aim of this project was to move the robot forward, stop at 2 cm from the wall, and then start following the wall at a distance of approximately 1 cm.
4-Point-Algorithm-Vs-2-Point-Algorithm
The goal of this practical is to compare the classical linear 4-point algorithm and the linear 2-point algorithm knowing the vertical direction of the camera.
Image-Processing-Toolbox-using-Matlab
This application embeds functions of Matlab 2016b for users. It consists of one GUI for displaying input and output images in Matlab. It accepts one image at a time and stores it as the original image. Each modification is applied to the output image consecutively. By default, all operations are disabled until you select an image.
Perform-Odometry-functions-on-E-Puck
Odometry is the use of data from motion sensors to estimate change in position over time. It is used in robotics by some legged or wheeled robots to estimate their position relative to a starting location. This method is sensitive to errors due to the integration of velocity measurements over time to give position estimates. Rapid and accurate data collection, instrument calibration, and processing are required in most cases for odometry to be used effectively.
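For a two-wheel differential robot such as the e-puck, the pose update from encoder increments can be sketched as follows; the wheel radius and axle length below are nominal e-puck values and would need calibration in practice.

```python
import math

R_WHEEL = 0.0205   # wheel radius [m], nominal e-puck value
AXLE = 0.053       # distance between the wheels [m], nominal e-puck value

def update_pose(x, y, theta, d_left_rad, d_right_rad):
    """Integrate encoder increments (wheel rotations in radians) into pose."""
    dl = d_left_rad * R_WHEEL          # left wheel arc length
    dr = d_right_rad * R_WHEEL         # right wheel arc length
    dc = (dl + dr) / 2.0               # distance travelled by the centre
    dtheta = (dr - dl) / AXLE          # change of heading
    # First-order integration: errors accumulate over time, as noted above.
    x += dc * math.cos(theta + dtheta / 2.0)
    y += dc * math.sin(theta + dtheta / 2.0)
    theta += dtheta
    return x, y, theta
```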
Visual-Tracking-using-Background-Subtraction
The goal of this Visual Tracking module is to learn about, and, more importantly, to learn how to use basic tracking algorithms and evaluate their performance. We start with a very simple and effective technique called background subtraction, which can be used to initialize the tracker, i.e. to find the target's position in the first frame of a sequence, or to track the target through the entire sequence. Background subtraction (BS) is widely used in surveillance and security applications and serves as a first step in detecting objects or people in videos. BS is based on a model of the scene background, i.e. the static part of the scene. Each pixel is analyzed, and a deviation from the model is used to classify pixels as background or foreground.
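A compact OpenCV sketch using the MOG2 background model; the video filename is a placeholder, and the module's own background model may differ.

```python
import cv2

cap = cv2.VideoCapture('sequence.avi')
bs = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16,
                                        detectShadows=True)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    # Pixels deviating from the learned background model become foreground.
    mask = bs.apply(frame)
    # Clean-up and target localisation, e.g. to initialise a tracker.
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN,
                            cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
```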
Bug-0-Algo-Implementation-on-E-Puck
We set a goal anywhere in the plane; there may or may not be obstacles in the path. The robot first has to identify the goal relative to its own location, then start moving towards it, turning by the rotation angle required to face the goal.
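The "turn towards the goal" step can be sketched as follows; a minimal illustration, assuming the robot pose (x, y, theta) is known.

```python
import math

def heading_error(x, y, theta, gx, gy):
    """Signed angle the robot must rotate to face the goal (gx, gy)."""
    desired = math.atan2(gy - y, gx - x)
    err = desired - theta
    return math.atan2(math.sin(err), math.cos(err))  # wrap to [-pi, pi]
```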
Image-Registration-using-Matlab
Image registration is the procedure of aligning an unregistered image (also called the moving image) onto a template image (also called the fixed image) via a geometric transformation. This problem is usually addressed as presented in Fig. 1: an iterative procedure infers the geometric transformation (parametric or non-parametric) via an optimizer, which maximizes the similarity between the two images.
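A short sketch of this loop using OpenCV's ECC optimiser as the similarity maximiser; an illustration rather than the repository's MATLAB code, with placeholder filenames and an affine transformation assumed.

```python
import cv2
import numpy as np

fixed = cv2.imread('fixed.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)
moving = cv2.imread('moving.png', cv2.IMREAD_GRAYSCALE).astype(np.float32)

warp = np.eye(2, 3, dtype=np.float32)  # parametric transform, identity init
criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 100, 1e-6)

# The optimiser iteratively updates `warp` to maximise the ECC similarity.
cc, warp = cv2.findTransformECC(fixed, moving, warp, cv2.MOTION_AFFINE,
                                criteria, None, 5)
registered = cv2.warpAffine(moving, warp, fixed.shape[::-1],
                            flags=cv2.INTER_LINEAR + cv2.WARP_INVERSE_MAP)
```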
Stereo-Vision-Camera-Calibration-using-OpenCV
I have made some functions in Qt for stereo vision camera calibration, so if you want stereo camera calibration, you just need to upload the images captured from the right camera and the images captured from the left camera.
Load Images: this button loads all the images captured by both the left and right cameras.
Camera Parameters: this button computes the stereo vision parameters, i.e. the internal and external parameters of both cameras as well as the projection matrices of both cameras. After computing, it saves the parameters in two text files (one for each camera) and places those files at your desired location.
Load Camera Parameters: this button loads all the parameters saved in both text files, which can be used for further purposes such as computing a disparity map.
3D Map: this button uses the parameters loaded by the "Load Camera Parameters" button and two photos captured by the cameras to compute a 3D view of those pictures, which we can use to extract the 3D location of the tip.
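A condensed sketch of the OpenCV calls that the Camera Parameters button could wrap; the chessboard size and folder layout are assumptions.

```python
import glob
import cv2
import numpy as np

PATTERN = (9, 6)  # inner chessboard corners; an assumption about the target
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2)

objpoints, img_l, img_r = [], [], []
for fl, fr in zip(sorted(glob.glob('left/*.png')),
                  sorted(glob.glob('right/*.png'))):
    gl = cv2.imread(fl, cv2.IMREAD_GRAYSCALE)
    gr = cv2.imread(fr, cv2.IMREAD_GRAYSCALE)
    okl, cl = cv2.findChessboardCorners(gl, PATTERN)
    okr, cr = cv2.findChessboardCorners(gr, PATTERN)
    if okl and okr:
        objpoints.append(objp)
        img_l.append(cl)
        img_r.append(cr)

size = gl.shape[::-1]
# Internal parameters of each camera, then the external (stereo) ones.
_, K1, d1, _, _ = cv2.calibrateCamera(objpoints, img_l, size, None, None)
_, K2, d2, _, _ = cv2.calibrateCamera(objpoints, img_r, size, None, None)
_, K1, d1, K2, d2, R, T, E, F = cv2.stereoCalibrate(
    objpoints, img_l, img_r, K1, d1, K2, d2, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
# Rectification yields the projection matrices used for the 3D map / disparity.
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(K1, d1, K2, d2, size, R, T)
```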