DynamicPathFollowingRobot
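The description below mentions steering the robot by correcting its orientation through visual feedback from an overhead camera. Purely as a hedged sketch (the function, gains, and pose inputs here are hypothetical, not from the project), a minimal version of such a correction loop is a proportional controller on the heading error that biases the two wheel speeds of a differential drive:

```python
import math

def wheel_speeds(x, y, theta, target_x, target_y, base_speed=0.2, k_p=1.5):
    """Proportional steering toward a waypoint for a differential drive.

    (x, y, theta) would come from camera pose feedback; returns
    (left, right) wheel speeds: heading error scaled by k_p is
    subtracted from / added to a constant forward speed.
    """
    desired = math.atan2(target_y - y, target_x - x)
    # wrap the error to [-pi, pi] so the robot turns the short way round
    error = math.atan2(math.sin(desired - theta), math.cos(desired - theta))
    turn = k_p * error
    return base_speed - turn, base_speed + turn
```

With zero heading error both wheels run at `base_speed`; a target to the robot's left yields a faster right wheel, turning the robot left.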
This project explores making a wireless automatic guided robot that requires no intrusive modifications to the environment beyond installing an overhead camera. Environments that use automatic guided vehicles (AGVs) generally have to plan the robots' paths before installing tracks, such as magnetic strips or metal rails, which is an investment made before the robots are even used; any later change to the paths incurs further cost. In this paper, a four-wheeled differential-drive robot is controlled wirelessly to follow paths drawn on a graphical user interface within a workspace of 1.8 m by 1.4 m. The robot is controlled by correcting its orientation through visual feedback from the camera. Error analysis was performed to investigate how closely the robot followed the drawn path; the estimated error is within a few centimeters of the path and can be reduced by tuning various thresholds.

Temporal_Difference_Learning_Path_Planning
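The abstract below compares off-policy (Q-Learning) and on-policy (Sarsa) temporal difference learning. As a rough illustration only (the corridor world, rewards, and parameters here are invented, not taken from the thesis), the tabular Q-Learning update it refers to can be sketched as:

```python
import random

# Hypothetical 1-D corridor world: states 0..4, goal at state 4.
N_STATES, GOAL = 5, 4
ACTIONS = (-1, +1)  # move left / move right

def step(state, action):
    nxt = min(max(state + action, 0), N_STATES - 1)
    reward = 10.0 if nxt == GOAL else -1.0  # step cost favors short paths
    return nxt, reward, nxt == GOAL

def q_learning(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(N_STATES)]  # q[state][action]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            # epsilon-greedy: explore occasionally, otherwise act greedily
            if rng.random() < epsilon:
                a = rng.randrange(len(ACTIONS))
            else:
                a = max(range(len(ACTIONS)), key=lambda i: q[s][i])
            s2, r, done = step(s, ACTIONS[a])
            # Off-policy TD target: bootstrap from the best next action.
            # Sarsa (on-policy) would bootstrap from the action actually taken.
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q
```

The single commented line is the whole off-policy/on-policy distinction the abstract tests: Q-Learning maximizes over next actions regardless of what the exploring policy does next, while Sarsa uses the next action it actually selects.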
When born, animals and humans are thrown into an unknown world, forced to rely on their sensory inputs for survival. As they develop their senses, they learn to navigate and interact with their environment. The process by which we learn to do this is called reinforcement learning: the idea that learning comes from trial and error, where every action carries a reward or a punishment. The brain naturally logs these events as experiences and chooses new actions based on past experience; an action resulting in a reward is then favored over an action resulting in a punishment. Using this concept, autonomous systems such as robots can learn about their environment in the same way. Using simulated sensory data from ultrasonic, moisture, shock, pressure, and steepness sensors and from encoders, a robotic system can decide how to navigate through its environment to reach a goal, without knowing the source of the data or the terrain it is navigating. Given a map of an open environment simulating an area after a natural disaster, the robot uses model-free temporal difference learning with exploration to find the best path to a goal in terms of distance, safety, and terrain navigation. Two forms of temporal difference learning are tested: off-policy (Q-Learning) and on-policy (Sarsa). Through experimentation with several world map sizes, the off-policy algorithm, Q-Learning, is found to be the most reliable and efficient at navigating a known map with unequal states.

RIT_Thesis
RIT Thesis Document

GreekFamilyTree
An application to create visual graphics of a Brother/Sister tree.

PuzzleSolver
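The write-up below doesn't say which algorithm the solver used; a common approach for small-state-space puzzles like these is breadth-first search over puzzle configurations, which finds a shortest move sequence. A hedged sketch (the generic `solve` interface and the clock example are hypothetical):

```python
from collections import deque

def solve(start, goal, neighbors):
    """Breadth-first search returning a shortest sequence of configurations
    from start to goal, or None if the puzzle is unsolvable."""
    parent = {start: None}
    frontier = deque([start])
    while frontier:
        config = frontier.popleft()
        if config == goal:
            path = []
            while config is not None:  # walk parents back to the start
                path.append(config)
                config = parent[config]
            return path[::-1]
        for nxt in neighbors(config):
            if nxt not in parent:  # each configuration visited once
                parent[nxt] = config
                frontier.append(nxt)
    return None

# Example: a 12-hour clock where each move turns the hand +/-1 hour
def next_hours(h):
    return ((h % 12) + 1, ((h - 2) % 12) + 1)

print(solve(10, 2, next_hours))  # → [10, 11, 12, 1, 2]
```

Each puzzle then only needs its own `neighbors` function; the search itself is shared across Clock, Water, and Chess Solitaire.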
My CS3 project: a solver for three unique puzzles (Clock, Water, and Chess Solitaire).

PersonalRoboticProject
Delivery-Time-and-Distance-Service
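The service described below calls the Google Maps Distance Matrix API from VBScript. As an illustrative offline sketch in Python (it only builds the request URL and parses the documented response shape; the key and addresses are placeholders and no network call is made):

```python
from urllib.parse import urlencode

BASE = "https://maps.googleapis.com/maps/api/distancematrix/json"

def request_url(origin, destination, api_key):
    """Build a Distance Matrix request for one origin/destination pair."""
    params = {"origins": origin, "destinations": destination,
              "units": "metric", "key": api_key}
    return BASE + "?" + urlencode(params)

def parse_estimate(payload):
    """Pull distance (meters) and duration (seconds) from the first
    origin/destination element of a Distance Matrix JSON response."""
    element = payload["rows"][0]["elements"][0]
    if element["status"] != "OK":
        raise ValueError(element["status"])
    return element["distance"]["value"], element["duration"]["value"]
```

The same two steps — compose the query string, then index into `rows[0].elements[0]` of the returned JSON — are what the VBScript version would perform with an HTTP request object.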
A time and distance estimator between two addresses, using VBScript and the Google Maps Distance Matrix API.

ArduinoProjects
A collection of several of my Arduino projects.