
boku-ilen / landscapelab

Geodata-driven Landscape Visualization built with Godot and Geodot

68 stars
13 forks
29 issues
GDScript · GDShader · GLSL

AI Architecture Analysis

This repository is indexed by RepoMind. By analyzing boku-ilen/landscapelab in our AI interface, you can instantly generate complete architecture diagrams, visualize control flows, and perform automated security audits across the entire codebase.

Our Agentic Context Augmented Generation (Agentic CAG) engine loads full source files into context on-demand, avoiding the fragmentation of traditional RAG systems. Ask questions about the architecture, dependencies, or specific features to see it in action.

Source files are only loaded when you start an analysis to optimize performance.
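The on-demand loading described above can be sketched roughly as follows. This is a hypothetical illustration only; RepoMind's actual implementation is not public, and the class and method names here are invented. The core idea is that full files enter the analysis context lazily, on first request, instead of being pre-chunked the way a traditional RAG pipeline would.

```python
from pathlib import Path

class LazyRepoContext:
    """Illustrative sketch (invented names): load complete source files
    into an analysis context only when first requested, rather than
    pre-fragmenting them into retrieval chunks."""

    def __init__(self, repo_root):
        self.repo_root = Path(repo_root)
        self._cache = {}  # relative path -> full file text

    def get_file(self, relative_path):
        # Read (and cache) the whole file on first access only.
        if relative_path not in self._cache:
            self._cache[relative_path] = (self.repo_root / relative_path).read_text()
        return self._cache[relative_path]

    def loaded_files(self):
        # Which files have actually been pulled into context so far.
        return sorted(self._cache)
```

Until an analysis touches a file, `loaded_files()` stays empty, which is the performance property the paragraph above describes.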

Embed this Badge

Showcase RepoMind's analysis directly in your repository's README.

[![Analyzed by RepoMind](https://img.shields.io/badge/Analyzed%20by-RepoMind-4F46E5?style=for-the-badge)](https://repomind.in/repo/boku-ilen/landscapelab)

Repository Overview (README excerpt)


LandscapeLab

The LandscapeLab (LL) is an immersive 3D visualization tool based entirely on geospatial data. Instead of streaming data in a proprietary format, as most modern 3D digital twins do, we use our custom GDNative plugin Geodot for performant loading of local data in common geospatial formats. Because the required data quickly grows to extensive amounts for large areas, we usually cut it to a certain extent (~60 GB for ~80x80 km extents). In theory, however, you could use the LL to visualize the entire world.

We mostly use the tool for "participatory planning" in workshop environments. We therefore additionally provide a 2D map interface and geodata-driven game logic, which may be based on the same data as the 3D visualization. Changes, adjustments and new plans made in the 2D interface are directly reflected in the 3D tool or game logic, and vice versa.

Project Structure and Philosophy

Our philosophy was to make the visualization concepts as reusable as possible. Instead of creating new rendering logic for each visualization part and type of asset, we abstracted many concepts into the most fundamental building block of the LL: LayerCompositions (see LayerComposition.gd [1]). Once loaded (deserialized) from a configuration, each LayerComposition instantiates a 3D renderer and UI elements. Most importantly, specific LayerCompositions need to be used to get a bare LandscapeLab. Additionally, buildings, trees, roads, power lines, fences and any other 3D assets based on point data may be added.

Which Data, which Renderer?

Most people using this tool will be familiar with this distinction: on a high level, GIS separates raster data from vector data, i.e. continuous grids of pixels vs. vertices and paths in a coordinate system.

For raster data, the LL mainly uses:

• Digital Terrain Model (DTM) → serves as the z-coordinate for terrain, assets, ...
• normalized Digital Surface Model (nDSM; the difference DSM - DTM) → used to find the height of whatever is on top of the bare terrain
• Landuse → defines terrain textures and vegetation at each location

Vector data is further divided into one of the following:

• Point data → usually used to place assets at the desired location, e.g. trees, wind turbines, ...
• Line data → place objects along lines (repeating objects, e.g. fences), "stretch" objects along a line (line objects, e.g. streets), or define connected objects (e.g. power lines, where each vertex is a pole and the poles are subsequently connected with cables)
• Polygon data → buildings

Summarized in a table:

| data type | how the data is used inside the renderers |
| --- | --- |
| **DTM** (ground elevation) | Forms the base terrain mesh, supplies z-values for draping vectors, and lets every object, plant and building "snap" to the correct ground level. |
| **nDSM / surface height** | Provides above-ground relief: realistic terrain blends it with the DTM for a true-to-life surface, while vegetation renderers (which typically receive the canopy-height or nDSM raster) convert it to individual tree/plant heights. |
| **Land-use / land-cover raster** | Guides texture selection (grass, soil, water, etc.) in the terrain shader and determines which vegetation type should grow in each cell. |
| **Point features** | Each point spawns one scene or multimesh instance (trees, turbines, furniture, etc.). Intersections become junction markers in the road network. |
| **Line features** | Rendered as extruded road meshes, hanging cables (e.g. power lines), or regularly spaced objects tiled along the polyline. |
| **Polygon features** | Footprints are extruded to 3D buildings or filled with instanced objects; polygon objects distribute assets on a lattice inside each polygon. |

[1]: https://raw.githubusercontent.com/boku-ilen/landscapelab/master/Layers/LayerComposition.gd "raw.githubusercontent.com"

Planning Game

One key element of the LL is what we usually refer to as the "Table" (see the UI). As mentioned, the LL is used in participatory planning workshops. To support discussions in larger groups, we created an input interface that is not bottlenecked by a single person operating traditional input methods (i.e. mouse and keyboard). Leveraging the landscapelab-table software, a camera watches a 2D map interface (usually projected onto a table) and detects 3 colors and 2 sizes of toy bricks, each combination defining some form of input. For instance, we could define a setup like:

• blue brick: teleport the player position to this point
• green brick: place a wind turbine
• red brick: place a building

Setup

Currently, setting up the LandscapeLab is a cumbersome process that may require internal knowledge. For small projects that should be able to load geodata in real time, we recommend having a look at our more user-friendly GDNative plugin Geodot, which is also used by the LL. If, however, you really want to set up the LandscapeLab, feel free to contact us in the Geodot Discord server, where we frequently communicate with users and collaborators.
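The interplay of the two height rasters above can be sketched in a few lines of plain Python. This is an illustrative, language-agnostic sketch, not the LL's actual GDScript renderer code, and names like `place_asset` are invented: the nDSM is the per-cell difference DSM - DTM, the DTM supplies the ground z, and the nDSM gives the height of whatever stands on top.

```python
def ndsm(dsm, dtm):
    """normalized DSM: per-cell difference of surface and bare-terrain height."""
    return [[s - t for s, t in zip(srow, trow)] for srow, trow in zip(dsm, dtm)]

def place_asset(col, row, dtm, ndsm_grid):
    """Snap an asset to the ground (z from the DTM) and size it by the
    above-ground relief at that cell (from the nDSM)."""
    return {"z": dtm[row][col], "height": ndsm_grid[row][col]}

dtm = [[100.0, 101.0],
       [102.0, 103.0]]   # bare-earth elevation
dsm = [[112.0, 101.0],
       [102.0, 118.5]]   # surface incl. trees/buildings

canopy = ndsm(dsm, dtm)            # [[12.0, 0.0], [0.0, 15.5]]
tree = place_asset(1, 1, dtm, canopy)
# tree == {"z": 103.0, "height": 15.5}
```

Cells where DSM equals DTM (open ground) get an nDSM of zero, so no vegetation height is assigned there.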
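The "repeating objects" use of line data (e.g. fence posts along a fence line) boils down to walking a polyline at a fixed spacing. A minimal sketch in Python of that idea, not the LL's actual renderer implementation:

```python
import math

def points_along(polyline, spacing):
    """Walk a 2D polyline and emit evenly spaced points, the basic idea
    behind repeating objects (e.g. fence posts) on line features."""
    points = [polyline[0]]
    carry = 0.0  # distance walked since the last emitted point
    for (x0, y0), (x1, y1) in zip(polyline, polyline[1:]):
        seg = math.hypot(x1 - x0, y1 - y0)
        d = spacing - carry  # distance to the next point on this segment
        while d <= seg:
            t = d / seg
            points.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
            d += spacing
        carry = (carry + seg) % spacing
    return points

posts = points_along([(0, 0), (10, 0)], 2.5)
# posts == [(0, 0), (2.5, 0.0), (5.0, 0.0), (7.5, 0.0), (10.0, 0.0)]
```

Each resulting point would then be assigned a ground z from the DTM and used to instance a post scene; "line objects" like streets instead stretch a mesh between consecutive vertices.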
We intend to make this tool more accessible (e.g. by providing more sophisticated documentation and a wiki), but currently lack the time and workforce. Also, we c…