When dealing with unmanned agricultural vehicles (remotely controlled vehicles, robots), vision systems
are a key factor in implementing field solutions that interact directly with crops.
Among all the information a vision system can provide, the point-by-point estimation of canopy
volume is a particularly valuable parameter: it is related to the crop's vegetative status and is therefore fundamental
for properly performing and configuring important field operations (e.g., pruning/thinning,
spraying). A system able to estimate the canopy volume can provide either the input signals for
implementing a robotic real-time site-specific farming system or relevant information for proper crop
management. However, field implementation of such a system faces many practical difficulties:
complex canopy shapes, varying colours and textures, and illumination conditions with projected shadows.
Terrestrial/aerial vision systems working at visible-light wavelengths and/or on 2D images of crops,
although capable of excellent performance, require computationally heavy post-processing; they are
therefore unsuitable for implementing low-cost, real-time, servo-actuated cropping systems (e.g., robotised
sprayers). Instead, a vision system composed of two vertically aligned LiDAR sensors scanning the same
targets can provide a form of stereoscopic vision, here named "lateral-linear-stereoscopic vision".
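As a rough illustration of how two vertically offset LiDAR profiles might be combined into a canopy volume estimate, consider the minimal Python sketch below. It is not the processing pipeline of this study: the geometry, function names, and the bounding-envelope approximation of the cross-section are all assumptions.

```python
import numpy as np

def polar_to_cartesian(angles_rad, ranges_m):
    """Convert one 2D LiDAR scan (angle, range) into planar coordinates."""
    x = ranges_m * np.cos(angles_rad)   # distance towards the canopy
    z = ranges_m * np.sin(angles_rad)   # height within the scan plane
    return np.column_stack((x, z))

def slice_volume(scan_low, scan_high, sensor_gap_m, travel_per_scan_m,
                 max_range_m=10.0):
    """Rough canopy-volume contribution of one pair of simultaneous scans.

    scan_low / scan_high : (angles_rad, ranges_m) arrays from the lower
                           and upper LiDAR (hypothetical interface).
    sensor_gap_m         : vertical baseline between the two sensors
                           (rigid mount assumed).
    travel_per_scan_m    : forward distance covered between scan pairs.
    """
    points = []
    for (angles, ranges), z_offset in ((scan_low, 0.0),
                                       (scan_high, sensor_gap_m)):
        valid = ranges < max_range_m               # discard no-return readings
        p = polar_to_cartesian(angles[valid], ranges[valid])
        p[:, 1] += z_offset                        # shift by mounting height
        points.append(p)
    points = np.vstack(points)
    if points.shape[0] < 3:
        return 0.0                                 # too few hits for a cross-section
    # Bounding-envelope approximation of the canopy cross-section in this
    # slice, extruded along the direction of travel into a volume element.
    width = points[:, 0].max() - points[:, 0].min()
    height = points[:, 1].max() - points[:, 1].min()
    return width * height * travel_per_scan_m
```

Summing such slice contributions over all scan pairs along a pass would yield a running canopy-volume estimate; a convex hull or occupancy grid per slice would give a finer cross-section than the simple envelope used here.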
The aim of this study is to assess the feasibility of using such a system on an automatic or autonomous/
robotised implement through preliminary tests in a controlled environment. The
resulting system is independent of lighting conditions (it also works in the dark), is highly reliable
(no projected shadows), and its data processing is very fast. Although further studies are required to address
the issues that could arise in a future field implementation, this system is well suited to
being embedded in an automated monitoring system.