Recently, a group of aerospace engineering students collaborated with Autoponics project participants to write a proposal for the 2013 Exploration Habitat (X-Hab) Academic Innovation Challenge led by NASA and the National Space Grant Foundation. The team was selected as one of five across the country to receive the award, and will be pursuing the topic of Remote Plant Food Production Capability.
The students leading the effort are Heather Hava and Christine Fanchiang from aerospace engineering, and Daniel Zukowski from computer science. Joe Tanner is the faculty lead, and faculty support is provided by David Klauss and Nikolaus Correll.
The work that has come out of the Autoponics project laid the groundwork for this collaboration with NASA, and we’re very excited to see the project evolve and take on a whole new dimension as we push toward a fully remote-controlled robotic plant production system. The X-Hab Challenge presents fascinating problems because remote operation and deployment to other planetary surfaces heavily shape the design considerations. While the system will be built for testing here on Earth in an analogue habitat, the thought that a similar system might someday be deployed to the Moon or Mars gets us thinking in an exciting new direction.
After upgrading to the Asus Xtion Pro, we ran some tests and found that it doesn’t meet our needs with regard to depth. We can still capture RGB with it, and the quality might be sufficient, so we have not ruled it out yet. Currently, a laser scanner is in the works for higher-quality readings at a shallower depth of field. We have also mounted an IPEVO Point 2 View camera on a tilt bracket. Using the mounted camera and the laser scanner, we could build a Kinect-like sensor for our purposes. The biggest problem with building a custom vision system is that the hardware is entirely specific to our implementation, so development might be slower because there are fewer eyes on the code.
The tilt bracket allows us to adjust the camera up and down; we have not implemented the panning functionality yet. The pan/tilt bracket lets us capture more angles on the plants, which means more data, and the more data we can aggregate about the plants, the better we will be able to judge their current state.
The laser scanner will be on the scene soon. We had planned to use a line laser that was donated to us; however, we have recently been given a high-quality laser scanner. Perhaps both can be used in the final product.
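Whichever laser we end up with, the core software problem is the same: find the laser line in each camera frame and triangulate depth from its displacement. Here is a minimal sketch of that idea in Python with OpenCV and NumPy, assuming a red line laser; the baseline, focal length, and thresholds are placeholder values, not measurements from our rig:

```python
import cv2
import numpy as np

def extract_laser_line(frame_bgr, min_strength=30.0):
    """Return the row index of the laser line in each image column (-1 if absent)."""
    b, g, r = cv2.split(frame_bgr.astype(np.float32))
    # Emphasize red so bright white regions don't masquerade as the laser.
    redness = r - 0.5 * (g + b)
    rows = np.argmax(redness, axis=0)
    strength = redness[rows, np.arange(redness.shape[1])]
    return np.where(strength > min_strength, rows, -1)

def triangulate(rows, baseline_m=0.10, focal_px=600.0, center_row=240):
    """Toy triangulation: depth from the line's vertical displacement.

    All three parameters are placeholders; a real build would calibrate
    them for the actual camera/laser geometry.
    """
    disparity = rows - center_row
    valid = (rows >= 0) & (disparity > 0)
    depth = np.full(rows.shape, np.nan)
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth  # meters, NaN where the line was not detected
```

Sweeping the camera/laser pair past the plants (or the plants past the sensor) and stacking these per-frame profiles is what would give us the Kinect-like depth map.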
We got our new structured-light 3D sensor, the Xtion Pro Live by Asus! Just like the Kinect, but with better specs.
When Microsoft’s Xbox Kinect was adopted by the hobbyist community, companies took notice. Microsoft released their Kinect for Windows platform, which includes an impressive SDK that exposes high-level object and body recognition to developers. Asus has now jumped on the bandwagon, updating their Xtion Pro depth sensor with the Live edition, a nearly identical sensor to the Kinect. In fact, Asus licensed the technology from PrimeSense, the developers of the structured light analysis and skeleton recognition software behind Microsoft’s product.
The two sensors are almost the same, but here are some important differences.
                           Microsoft Kinect             Asus Xtion Pro Live
Horizontal field of view   57°                          58°
Vertical field of view     43°                          45°
Depth range                1.2m – 3.5m                  0.8m – 3.5m
Depth image                320×240 16-bit @ 30 fps      640×480 @ 30 fps, 320×240 @ 60 fps
RGB image                  640×480 32-bit @ 30 fps      1280×1024 @ 30 fps
Power                      12V DC + 5V USB connection   5V USB connection
Also, the Asus is about half the size and weight of the Kinect. Just look at this side-by-side comparison.
All of these differences are significant for our research and led us to switch. The Xtion Pro Live’s higher resolution is the most important upgrade, because it means higher-quality data with more detail. We’ll also be able to capture from closer range, at 0.8 meters rather than 1.2.
So there you have it. Stay tuned for images captured by the Asus, they’re on their way!
Much progress to report since our last post.
First of all, the aeroponics system is up and running! We are pumping water through the system on a cycle of 15 minutes on, 15 minutes off. The far end of the top level is seeing about 12 PSI, which, although lower than we had planned, still provides enough pressure for the spray line to create a mist inside of the grow tubes.
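The timing logic itself is trivial. A minimal sketch of the 15-on/15-off cycle in Python, where switch_pump() is a hypothetical stand-in for whatever relay or GPIO call actually drives the pump:

```python
import time

CYCLE_SECONDS = 15 * 60  # 15 minutes on, then 15 minutes off

def switch_pump(on):
    # Hypothetical stand-in: replace with the real relay/GPIO call.
    print("pump", "ON" if on else "OFF")

pump_on = True
while True:
    switch_pump(pump_on)
    time.sleep(CYCLE_SECONDS)
    pump_on = not pump_on
```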
The system is seeded with 15 different plant species; for each species, we planted one per level. For this first run, we chose a wide variety of plants, including large ones like zucchini and corn, which we will only be able to grow for a few weeks before they get too big and need to be removed. During this trial run we’ll see how the system performs on different plants, and we can also collect preliminary point cloud and RGB data on early-stage growth in a wide variety of plants. After this round, we will plant the system with 6 species/cultivars in groups of 8; this configuration works well since the system has 6 lamps, each of which covers 8 grow sites. The varieties planted include:
Anasazi Flour Corn
Fox Cherry Tomato
Long Standing Spinach
Jericho Romaine Lettuce
Dwarf Blue Vates Kale
We expect to see germination and the emergence of our first seedlings within days. Once we have a few sprouts we’ll begin adding nutrients to the reservoir and closely monitoring nutrient levels, pH, temperature, and lamp height.
Now that we’re pumping water and have an active system, we need some capability for remote monitoring to check that the system is working and we aren’t flooding the lab. We’ll start with a basic webcam feeding images every few seconds to our website so you too can check in on the project.
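That first pass can be very simple. Here is a minimal Python loop using OpenCV that grabs a frame every few seconds and overwrites a single image file for the website to serve (the device index and output path are assumptions):

```python
import time
import cv2

INTERVAL_SECONDS = 5          # "every few seconds"
OUTPUT_PATH = "latest.jpg"    # hypothetical file the web page serves

cap = cv2.VideoCapture(0)     # assumes the webcam is device 0
if not cap.isOpened():
    raise RuntimeError("could not open webcam")
while True:
    ok, frame = cap.read()
    if ok:
        # Overwrite the same file so the page always shows the newest frame.
        cv2.imwrite(OUTPUT_PATH, frame)
    time.sleep(INTERVAL_SECONDS)
```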
We decided to use ROS over other tools because it best integrates all the parts of our system. We have integrated the Kinect into ROS and can now record Kinect data into files that ROS can read (.bag files). We are using rviz to visualize the captured data.
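As a rough sketch of what a node on our side looks like, here is a minimal rospy subscriber for the Kinect point cloud (the topic name /camera/depth/points is the openni driver’s usual default and may differ in other configurations):

```python
#!/usr/bin/env python
import rospy
from sensor_msgs.msg import PointCloud2

def on_cloud(msg):
    # Each PointCloud2 carries width x height points from the Kinect.
    rospy.loginfo("received cloud: %d x %d points", msg.width, msg.height)

if __name__ == "__main__":
    rospy.init_node("cloud_listener")
    rospy.Subscriber("/camera/depth/points", PointCloud2, on_cloud)
    rospy.spin()
```

Capturing the same topic to a .bag file is then just a matter of running rosbag record on it, and rviz can visualize either the live stream or the playback.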
We moved away from running ROS in a virtual machine because it was difficult to stream the data through the host operating system into the VM without the host claiming the device along the way. We are now running ROS on a native installation of Ubuntu 11.10.
Future work will be focused on better examination and analysis of recorded data. We would also like better integration of the systems so that we can control them all easily in ROS.
The rig is driven by a NEMA 17 stepper motor with a 0.9° step angle (400 steps per revolution).
The second video below shows the motor performing successive full revolutions clockwise, then counterclockwise.
The wiring is currently not long enough to run the mount along the entire rail, so the next step is to add a coiled cable to the motor so it can travel the full 2.5-meter length. We will then run the mount back and forth over the full distance to test for slip.
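Converting rail travel to step counts is straightforward once the drive geometry is known. A quick sketch in Python, assuming a hypothetical GT2 belt with a 20-tooth pulley (the actual drive hardware on the rig may differ):

```python
STEPS_PER_REV = 400      # 0.9 degrees per full step
BELT_PITCH_MM = 2.0      # assumed GT2 belt pitch
PULLEY_TEETH = 20        # hypothetical pulley; measure the real one
RAIL_LENGTH_MM = 2500.0  # the 2.5 m rail

mm_per_rev = BELT_PITCH_MM * PULLEY_TEETH    # 40 mm of travel per revolution
steps_per_mm = STEPS_PER_REV / mm_per_rev    # 10 steps per mm
full_traverse = int(RAIL_LENGTH_MM * steps_per_mm)

print("steps per mm:", steps_per_mm)         # 10.0
print("full traverse:", full_traverse)       # 25000 steps
```

Comparing the commanded step count against the measured end position after repeated traverses would give a direct estimate of slip.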
We are using a Kinect to gather data for analysis because it provides three-dimensional point clouds. The Kinect’s camera captures a 640×480 RGB image, and its IR sensor captures depth data at the same resolution. The data is sent to a computer, where it is used to build a three-dimensional point cloud of whatever the Kinect was viewing. This point cloud will be used to gather important information about the plants in our autoponics system.
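Reconstructing a point cloud from a depth image is just the pinhole camera model applied per pixel. A minimal NumPy sketch, using nominal Kinect intrinsics (the focal length and principal point below are typical published figures, not a calibration of our unit):

```python
import numpy as np

# Nominal Kinect depth-camera intrinsics (typical published values).
FX = FY = 580.0        # focal length in pixels
CX, CY = 320.0, 240.0  # principal point for a 640x480 image

def depth_to_points(depth_m):
    """Convert a 640x480 depth image in meters to an Nx3 point cloud."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    x = (u - CX) * depth_m / FX
    y = (v - CY) * depth_m / FY
    points = np.dstack((x, y, depth_m)).reshape(-1, 3)
    return points[points[:, 2] > 0]  # drop pixels with no depth reading
```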
The software we are using to capture and analyze the data is RGBDSlam running under ROS on Ubuntu 10.04 LTS. We have been able to get the Kinect up and running within a virtual machine hosted by OS X; however, we are still working on getting the data to stream into RGBDSlam correctly. Drivers and an SDK for Windows are due to be released on February 1st, so perhaps those will help us in our efforts.