2 changes: 2 additions & 0 deletions .flake8
@@ -0,0 +1,2 @@
[flake8]
max_line_length = 180
32 changes: 26 additions & 6 deletions .gitignore
@@ -1,10 +1,30 @@
# Byte-compiled / optimized / DLL files
__pycache__/
*.py[cod]
*$py.class
*.npy

# Environments
.env
.venv
env/
venv/
ENV/
env.bak/
venv.bak/

# Installer logs
pip-log.txt

# IDE project files
.idea
.vscode

# Distribution / packaging
*_FSCT_output/
*FSCT_output/

# Examples
*.las
!example.las
model/training_history.csv
scripts/__pycache__
.vscode
venv
.idea
docker
*FSCT_output
10 changes: 10 additions & 0 deletions CHANGELOG.md
@@ -0,0 +1,10 @@
Main changes:
- Added a main entry file to invoke the code from the project root path

- Removed unused imports

- Added the argparse module (makes it easy to write user-friendly command-line interfaces)

- Moved the main FSCT configuration to a dedicated configuration file (config.ini)

- Dockerized the code, with an entrypoint to invoke runs from the host
23 changes: 23 additions & 0 deletions Dockerfile
@@ -0,0 +1,23 @@
FROM python:3.9

RUN mkdir /app
RUN mkdir /datasets
WORKDIR /app

ENV VIRTUAL_ENV=/opt/venv
RUN python3 -m venv $VIRTUAL_ENV
ENV PATH="$VIRTUAL_ENV/bin:$PATH"

COPY requirements.txt .
RUN pip install -r requirements.txt

COPY data data
COPY model model
COPY scripts scripts
COPY tools tools
COPY config.ini .
COPY main.py .
COPY multiple_plot_centres_file_config.py .
COPY entrypoint.sh .

ENTRYPOINT ["./entrypoint.sh"]
35 changes: 34 additions & 1 deletion README.md
@@ -40,7 +40,7 @@ it works (or doesn't), please let me know!
If you have any difficulties or find any bugs, please get in touch and I will try to help you get it going.
Suggestions for improvements are greatly appreciated.

If you do not have an Nvidia GPU, please set the ```use_CPU_only``` setting in ```run.py``` to True.
If you do not have an Nvidia GPU, please set the ```use_cpu_only``` setting in ```run.py``` to True.

## How to use

@@ -54,6 +54,39 @@ this will contain the following outputs.
Start with small plots containing at least some trees. The tree measurement code will currently cause an error if it
finds no trees in the point cloud.

## Docker
1. Create a Docker volume to share data with the host
```
docker volume create datasets
```
2. Build the Docker image
```
docker build --rm -t fsct-image .
```
### Run the image (rebuild the Docker image first if anything has changed)
```
docker run -i --rm \
-v "$(pwd)/datasets:/datasets" \
--name fsct \
fsct-image [--additional --parameters --here]
```
* Run the Docker container and open a shell inside it
```
docker run --rm -v ~/datasets:/datasets --name fsct -it fsct-image /bin/bash
```

* Run Docker FSCT with arguments

#### Example:
> docker run --rm -v ~/datasets:/datasets --name fsct fsct-image -f /datasets/mydataset/model.laz

### Run Docker FSCT with GPU support
You need to install [Nvidia Docker](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html#docker)
#### Example:
> docker run --rm -v $work_dir:/datasets --gpus all --name fsct fsct-image -f /datasets/$dataset/model.laz



## FSCT Outputs

```Plot_Report.html``` and ```Plot_Report.md```
106 changes: 106 additions & 0 deletions config.ini
@@ -0,0 +1,106 @@
[FSCT_main_parameters]

# [X, Y] Coordinates of the plot centre (metres). If "None", plot_centre is computed based on the point cloud bounding box.
plot_centre=None

# Circular Plot options - Leave at 0 if not using.
# If 0 m, the plot is not cropped. Otherwise, the plot is cylindrically cropped from the plot centre with plot_radius + plot_radius_buffer.
plot_radius=0
# See README. If non-zero, this is used for "Tree Aware Plot Cropping Mode".
plot_radius_buffer=0

# Set these appropriately for your hardware.
# You will get CUDA errors if this is too high, as you will run out of VRAM. This won't be an issue if running on CPU only. Must be >= 2.
batch_size=8
# Number of CPU cores you want to use. If you run out of RAM, lower this. 0 means ALL cores.
num_cpu_cores=0
# Set to True if you do not have an Nvidia GPU, or if you don't have enough VRAM.
use_cpu_only=True

# Optional settings - Generally leave as they are.
# If your point cloud resolution is a bit low (and only if the stem segmentation is still reasonably accurate), try increasing this to 0.2.
slice_thickness=0.15
# If your point cloud is really dense, you may get away with 0.1.
# The smaller this is, the better your results will be, however, this increases the run time.
slice_increment=0.05

# If you don't need the sorted stem points, turning this off speeds things up.
# Veg sorting is required for tree height measurement, but stem sorting isn't necessary for standard use.
sort_stems=1

# If the data contains noise above the canopy, you may wish to set this to the 98th percentile of height, otherwise leave it at 100.
height_percentile=100
# A tree must have a cylinder measurement below this height above the DTM to be kept. This filters unsorted branches from being called individual trees.
tree_base_cutoff_height=5
# Turn on if you would like a semantic and instance segmented point cloud. This mode will override the "sort_stems" setting if on.
generate_output_point_cloud=1

# If you activate "tree aware plot cropping mode", this function will use it.
# Any vegetation points below this height are considered to be understory and are not assigned to individual trees.
ground_veg_cutoff_height=3
# Vegetation points can be, at most, this far away from a cylinder horizontally to be matched to a particular tree.
veg_sorting_range=1.5
# Stem points can be, at most, this far away from a cylinder in 3D to be matched to a particular tree.
stem_sorting_range=1
# Lowest height to measure diameter for taper output.
taper_measurement_height_min=0
# Highest height to measure diameter for taper output.
taper_measurement_height_max=30
# diameter measurement increment.
taper_measurement_height_increment=0.2
# Cylinder measurements within +/- 0.5*taper_slice_thickness are used for taper measurement at a given height. The largest diameter is used.
taper_slice_thickness=0.4
# Generally leave this on. Deletes the files used for segmentation after segmentation is finished.
# You may wish to turn it off if you want to re-run/modify the segmentation code so you don't need to run pre-processing every time.
delete_working_directory=True
# Will delete a number of non-essential outputs to reduce storage use.
minimise_output_size_mode=0

[FSCT_other_parameters]
# Don't change these unless you really understand what they do or are learning how the code works.
# These have been tuned to work on most high-resolution forest point clouds without changing them, but you may be able
# to tune them better for your particular data. Almost everything here is a trade-off between different situations, so
# optimisation is not straightforward.

model_filename="model.pth"
# Dimensions of the sliding box used for semantic segmentation.
box_dimensions=[6, 6, 6]
# Overlap of the sliding box used for semantic segmentation.
box_overlap=[0.5, 0.5, 0.5]
# Minimum number of points for input to the model. Too few points and it becomes near impossible to accurately label them (though assuming vegetation class is the safest bet here).
min_points_per_box=1000
# Maximum number of points for input to the model. The model may tolerate higher numbers if you decrease the batch size accordingly (to fit on the GPU), but this is not tested.
max_points_per_box=20000
# Don't change
noise_class=0
# Don't change
terrain_class=1
# Don't change
vegetation_class=2
# Don't change
cwd_class=3
# Don't change
stem_class=4
# Resolution of the DTM.
grid_resolution=0.5
vegetation_coverage_resolution=0.2
num_neighbours=5
sorting_search_angle=20
sorting_search_radius=1
sorting_angle_tolerance=90
max_search_radius=3
max_search_angle=30
# Used for HDBSCAN clustering step. Recommend not changing for general use.
min_cluster_size=30
# During cleaning, this w
cleaned_measurement_radius=0.2
# Generally leave this on, but you can turn off subsampling.
subsample=0
# The point cloud will be subsampled such that the closest any 2 points can be is 0.01 m.
subsampling_min_spacing=0.01
# Minimum valid Circumferential Completeness Index (CCI) for non-interpolated circle/cylinder fitting. Any measurements with CCI below this are deleted.
minimum_cci=0.3
# Deletes any trees with fewer than 10 cylinders (before the cylinder interpolation step).
min_tree_cyls=10
# Very ugly hack that can sometimes be useful on point clouds which are on the borderline of having not enough points to be functional with FSCT. Set to a positive integer. Point cloud will be copied this many times (with noise added) to artificially increase point density giving the segmentation model more points.
low_resolution_point_cloud_hack_mode=0
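As a sketch of how a config.ini like the one above might be consumed with Python's standard configparser (the helper name and the choice of coerced keys are assumptions for illustration, not FSCT code):

```python
import configparser

def load_fsct_parameters(ini_text):
    # Parse the FSCT config and coerce a few representative values to Python types.
    cfg = configparser.ConfigParser()
    cfg.read_string(ini_text)
    main = cfg["FSCT_main_parameters"]
    return {
        "batch_size": main.getint("batch_size"),
        "use_cpu_only": main.getboolean("use_cpu_only"),
        "slice_thickness": main.getfloat("slice_thickness"),
        # "None" is a sentinel: compute the plot centre from the point cloud bounding box.
        "plot_centre": None if main.get("plot_centre") == "None" else main.get("plot_centre"),
    }

example = """
[FSCT_main_parameters]
plot_centre=None
batch_size=8
use_cpu_only=True
slice_thickness=0.15
"""
params = load_fsct_parameters(example)
```

Values arrive as strings, so each numeric or boolean setting needs an explicit `getint`/`getboolean`/`getfloat` coercion.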
3 changes: 3 additions & 0 deletions entrypoint.sh
@@ -0,0 +1,3 @@
#!/bin/bash
echo Executing FSCT with arguments: "$@"
python main.py "$@"
37 changes: 37 additions & 0 deletions main.py
@@ -0,0 +1,37 @@
import sys
from scripts import run
from scripts import run_with_multiple_plot_centres
import multiple_plot_centres_file_config

if __name__ == '__main__':
    """
    Choose one of the following or modify as needed.
    Directory mode will find all .las files within a directory and its subdirectories, but will ignore any .las files
    in folders with "FSCT_output" in their names.
    File mode will allow you to select multiple .las files within a directory.
    Alternatively, you can just list the point cloud file paths.
    If you have multiple point clouds and wish to enter plot coords for each, have a look at "run_with_multiple_plot_centres.py"
    """
    opts = [opt for opt in sys.argv[1:] if opt.startswith("-")]
    args = [arg for arg in sys.argv[1:] if not arg.startswith("-")]
    mode = "-a"
    runner = run

    # Choose single or multiple plot-centre processing.
    if "-m" in opts:
        runner = run_with_multiple_plot_centres
        # Fall back to the example configuration when no file arguments are given.
        if not args:
            args = multiple_plot_centres_file_config.multiple_plot_centres_file_config
    # directory mode
    if "-d" in opts:
        mode = "-d"
    # file mode
    elif "-f" in opts:
        mode = "-f"
    # attended user file mode
    elif "-a" in opts:
        mode = "-a"
    else:
        raise SystemExit(f"Usage: {sys.argv[0]} [-m] (-d | -f | -a) <arguments>...")
    runner.exec(mode, args)
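The changelog mentions argparse; a hedged sketch of how the manual sys.argv handling above could be expressed with it (the flag names come from main.py, everything else is an assumption):

```python
import argparse

def build_parser():
    # Mirrors main.py's hand-rolled option handling using argparse.
    parser = argparse.ArgumentParser(description="FSCT entry point (sketch)")
    mode = parser.add_mutually_exclusive_group()
    mode.add_argument("-d", dest="mode", action="store_const", const="-d",
                      help="directory mode: process all .las files under a directory")
    mode.add_argument("-f", dest="mode", action="store_const", const="-f",
                      help="file mode: process the listed .las files")
    mode.add_argument("-a", dest="mode", action="store_const", const="-a",
                      help="attended user file mode (default)")
    parser.add_argument("-m", dest="multiple", action="store_true",
                        help="use run_with_multiple_plot_centres")
    parser.add_argument("paths", nargs="*", help="point cloud file paths")
    parser.set_defaults(mode="-a")
    return parser
```

The mutually exclusive group also buys free error reporting: passing `-d` and `-f` together exits with a usage message instead of silently picking one.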
9 changes: 9 additions & 0 deletions multiple_plot_centres_file_config.py
@@ -0,0 +1,9 @@
'''
This script is an example of how to provide multiple different plot centres with your input point clouds.
@args
[[*.las, [your_plot_centre_X_coord, your_plot_centre_Y_coord], your_plot_radius], [*.las,[X,Y], radius], [...], [...]]
'''
multiple_plot_centres_file_config = [
['your_point_cloud1.las', [0, 0], 100],
['your_point_cloud2.las', [300, 200], 50],
]
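A minimal sketch of iterating such a configuration list (the loop and variable names are illustrative, not FSCT code):

```python
# Example configuration in the same shape as multiple_plot_centres_file_config:
# each entry is [point cloud path, [X, Y] plot centre in metres, plot radius in metres].
plots = [
    ['your_point_cloud1.las', [0, 0], 100],
    ['your_point_cloud2.las', [300, 200], 50],
]

summaries = []
for path, (centre_x, centre_y), radius in plots:
    # Unpack each entry into its path, centre coordinates, and radius.
    summaries.append(f"{path}: centre=({centre_x}, {centre_y}), radius={radius} m")
```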
1 change: 1 addition & 0 deletions requirements.txt
@@ -17,6 +17,7 @@ Jinja2==3.1.2
joblib==1.1.0
kiwisolver==1.4.2
laspy==2.1.2
lazrs==0.4.1
Markdown==3.3.7
MarkupSafe==2.1.1
matplotlib==3.5.2
Empty file added scripts/__init__.py
2 changes: 1 addition & 1 deletion scripts/combine_multiple_output_CSVs.py
@@ -1,6 +1,6 @@
import pandas as pd

from run_tools import FSCT, directory_mode, file_mode
from scripts.run_tools import FSCT, directory_mode, file_mode


def combine_multiple_output_CSVs(point_clouds_to_process, csv_file_to_combine):
20 changes: 8 additions & 12 deletions scripts/inference.py
@@ -1,18 +1,15 @@
import os
from abc import ABC
import torch
import torch_geometric
from torch_geometric.data import Dataset, DataLoader, Data
import numpy as np
import glob
import pandas as pd
from preprocessing import Preprocessing
from model import Net
from scripts.model import Net
from sklearn.neighbors import NearestNeighbors
from scipy import spatial
import os
import time
from tools import get_fsct_path
from tools import load_file, save_file
from scripts.tools import get_fsct_path
from scripts.tools import load_file, save_file
import shutil
import sys

@@ -69,10 +66,9 @@ class SemanticSegmentation:
def __init__(self, parameters):
self.sem_seg_start_time = time.time()
self.parameters = parameters

if not self.parameters["use_CPU_only"]:
print("Is CUDA available?", torch.cuda.is_available())
self.device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
if not self.parameters['use_cpu_only']:
print('Is CUDA available?', torch.cuda.is_available())
self.device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
else:
self.device = torch.device("cpu")

@@ -98,7 +94,7 @@ def inference(self):
test_loader = DataLoader(test_dataset, batch_size=self.parameters["batch_size"], shuffle=False, num_workers=0)

model = Net(num_classes=4).to(self.device)
if self.parameters["use_CPU_only"]:
if self.parameters["use_cpu_only"]:
model.load_state_dict(
torch.load(
get_fsct_path("model") + "/" + self.parameters["model_filename"],