Session:Photogrammetry
| Description | Hands-on photogrammetry processing pipelines. |
|---|---|
| Website(s) | |
| Type | Hands-On |
| Kids session | No |
| Keyword(s) | hardware, software |
| Person organizing | User:Polto |
| Language | en - English |
| Starts at | 2015/08/14 17:00 |
|---|---|
| Ends at | 2015/08/14 19:00 |
| Duration | 120 minutes |
| Location | Room:Hackcenter 2 |

| Starts at | 2015/08/16 17:00 |
|---|---|
| Ends at | 2015/08/16 19:00 |
| Duration | 120 minutes |
| Location | Room:Hackcenter 2 |
Hands-on Free Software photogrammetry processing pipelines:
- openMVG
- MVE
Installing OpenMVG
The complete instructions are available at https://github.com/openMVG/openMVG/blob/master/BUILD
For this workshop we will use the development version of openMVG on GNU/Linux, but installation is also possible on Mac OS or MS Windows.
Build instructions
Required tools
- CMake
- Git
- C/C++ compiler (GCC, Clang, or Visual Studio)
Set up the required external libraries:
sudo apt-get install libpng-dev libjpeg-dev libtiff-dev libxxf86vm1 libxxf86vm-dev libxi-dev libxrandr-dev
If you want to see the view-graph SVG logs:
sudo apt-get install graphviz
Getting the sources
git clone --recursive https://github.com/openMVG/openMVG.git
Switch to the develop branch:
cd openMVG
git checkout develop
cd ..
Build openMVG
mkdir openMVG_Build
cd openMVG_Build
cmake -DCMAKE_BUILD_TYPE=RELEASE ../openMVG/src/
make   # or make -jN, where N is the number of CPU cores
Using the GlobalSfM pipeline
Run the following from the directory containing your images:
../openMVG_Build/software/SfM/openMVG_main_SfMInit_ImageListing -i . -d ../openMVG/src/openMVG/exif/sensor_width_database/sensor_width_camera_database.txt -o .
../openMVG_Build/software/SfM/openMVG_main_ComputeFeatures -o ./ -i ./sfm_data.json -m SIFT --describerPreset ULTRA
../openMVG_Build/software/SfM/openMVG_main_ComputeMatches -i ./sfm_data.json -o ./ -r 0.8 -g e
../openMVG_Build/software/SfM/openMVG_main_GlobalSfM -i ./sfm_data.json -m . -o .
../openMVG_Build/software/SfM/openMVG_main_ComputeSfM_DataColor -i ./sfm_data.json -o color.ply
../openMVG_Build/software/SfM/openMVG_main_ComputeStructureFromKnownPoses -i ./sfm_data.json -m . -o robust.json
../openMVG_Build/software/SfM/openMVG_main_ComputeSfM_DataColor -i ./robust.json -o robust_color.ply
../openMVG_Build/software/SfM/openMVG_main_openMVG2PMVS -i ./robust.json -o .
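The chain of commands can also be wrapped in a small throwaway script so a dataset can be re-run in one go. This is only a sketch: the script name, `set -e`, and the variable names are our additions, not part of openMVG.

```shell
# write a wrapper (run_global_sfm.sh is a name we made up) around the SfM chain
cat > run_global_sfm.sh <<'EOF'
#!/bin/sh
set -e   # stop at the first failing step
SFM_BIN=../openMVG_Build/software/SfM
DB=../openMVG/src/openMVG/exif/sensor_width_database/sensor_width_camera_database.txt
"$SFM_BIN/openMVG_main_SfMInit_ImageListing" -i . -d "$DB" -o .
"$SFM_BIN/openMVG_main_ComputeFeatures" -o ./ -i ./sfm_data.json -m SIFT --describerPreset ULTRA
"$SFM_BIN/openMVG_main_ComputeMatches" -i ./sfm_data.json -o ./ -r 0.8 -g e
"$SFM_BIN/openMVG_main_GlobalSfM" -i ./sfm_data.json -m . -o .
"$SFM_BIN/openMVG_main_ComputeSfM_DataColor" -i ./sfm_data.json -o color.ply
EOF
chmod +x run_global_sfm.sh
sh -n run_global_sfm.sh && echo "syntax OK"
```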
Dense point-cloud using PMVS
pmvs2 ./PMVS/ pmvs_options.txt
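A quick way to see how dense the result is is to read the vertex count from the PLY header. The helper below and the demo file are illustrative only; for a real run, point it at the model pmvs2 writes (typically under PMVS/models/).

```shell
# print the vertex (point) count declared in a PLY header
ply_points() {
  grep -m1 "element vertex" "$1" | awk '{print $3}'
}

# demo on a minimal hand-written header; a real check would target the
# pmvs2 output under PMVS/models/ instead
printf 'ply\nformat ascii 1.0\nelement vertex 12345\nend_header\n' > /tmp/demo.ply
ply_points /tmp/demo.ply   # prints 12345
```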
CMPMVS
Viewers
Desktop viewer
You can use CloudCompare or MeshLab to view point clouds on the desktop.
Web viewer
Potree is available along with its converter.
MicMac
Get micmac (working version)
*** UPDATE ***: a mistake was made while packaging the archive; get the new one (permalink soon) or delete $micmac-root/lib/ and build/ and recompile.
wget http://doxel.org/download/micmac.tgz   # for this session
tar xzvf micmac.tgz
Alternatively, clone the official Mercurial repository (which did not compile due to code errors the last time I tried):
hg clone https://culture3d:culture3d@geoportail.forge.ign.fr/hg/culture3d micmac
Build MicMac
cd micmac
mkdir build/
cd build/
cmake ../
make && make install
MicMac dependencies
MicMac relies on these packages (Debian names):
sudo apt-get install build-essential cmake qt5-default imagemagick exiv2
Camera database
MicMac reads sensor information from the file micmac/include/XML_Users/DicoCamera.xml and from the EXIF metadata of each image (FocalLength tag); make sure this information is present for your sensor.
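To check quickly whether a given sensor is already listed, you can grep the database for the camera's EXIF model string. The helper name and the toy entry below are our own; a real check would target the DicoCamera.xml file mentioned above.

```shell
# return success if the given model string appears in a camera database file
camera_known() {
  grep -qi "$2" "$1"
}

# demo against a stand-in file (a real check would use
# micmac/include/XML_Users/DicoCamera.xml)
printf '<CameraEntry><Name>Canon EOS 5D</Name></CameraEntry>\n' > /tmp/dico.xml
camera_known /tmp/dico.xml "EOS 5D" && echo "sensor listed"
```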
Reconstruction pipelines
An example can be found in micmac-raw-upstream/datasets/ccc-statue.
reconstruct.sh is a basic reconstruction pipeline with MicMac; instructions on how to use it can be found in the command-line file.
Basic reconstruction pipeline (hopefully explained)
#!/bin/sh
BIN=$1
WD=$2
MASTER=$3
OUTPUT=$4
# feature computation
"${BIN}Tapioca" All "${WD}.+.jpg" 1000
# initial sensor calibration step, which produces calibration data in Ori-calibration/
"${BIN}Tapas" RadialExtended "${WD}.+.jpg" Out=calibration
# compute camera orientations using the data from the previous step (which can be computed on a separate calibration dataset)
"${BIN}Tapas" AutoCal "${WD}.+.jpg" InCal=calibration Out=orientation
# produce a sparse point cloud with a visualisation of the camera positions
"${BIN}AperiCloud" "${WD}.+.jpg" orientation "Out=pos_cam.ply"
# compute the depth map using $MASTER as the master image
"${BIN}Malt" GeomImage "${WD}.+.jpg" orientation "Master=${MASTER}" "DirMEC=results/" ZoomF=4 ZoomI=32 'UseGpu=false'
# produce a dense point cloud of the scene, projecting the colors of $MASTER onto it
"${BIN}Nuage2Ply" "${WD}results/NuageImProf_STD-MALT_Etape_6.xml" "Out=${OUTPUT}" "Attr=${MASTER}"
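A hypothetical invocation of reconstruct.sh, with a small argument check in front of it; every path below is an example (the binary directory in particular depends on where MicMac was installed), not from the session notes.

```shell
# reject calls that do not supply the four expected arguments
usage_ok() {
  [ "$#" -eq 4 ] || { echo "usage: reconstruct.sh BIN_DIR WORK_DIR MASTER_IMG OUT_PLY" >&2; return 1; }
}

# example arguments: MicMac binary prefix (the trailing slash matters, since the
# script concatenates it with the tool name), photo dir, master image, output cloud
usage_ok /opt/micmac/bin/ ./photos/ IMG_0042.jpg dense.ply && echo "args OK"
# ./reconstruct.sh /opt/micmac/bin/ ./photos/ IMG_0042.jpg dense.ply
```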