# How They Work Programmatically
This page explains how CoRE Stack turns analytical logic into runnable, inspectable, and publishable computations.
## The Main Programmatic Surfaces

- `computing/urls.py`
- `computing/api.py`
- `computing/utils.py`
- `utilities/gee_utils.py`
- pipeline modules under `computing/`
## The Usual Code Path

```mermaid
flowchart TD
    A[Request or local call] --> B[computing/urls.py]
    B --> C[computing/api.py]
    C --> D[Celery task or direct function]
    D --> E[pipeline module under computing/]
    E --> F[computing/utils.py]
    E --> G[utilities/gee_utils.py]
    F --> H[GeoServer and DB metadata]
    G --> I[GEE asset paths, exports, sync]
```
## The Thin-Handler Principle
The route layer should stay thin and should not become the science layer.
- Normalize request fields early so downstream naming and asset lookup stay predictable.
- Hand heavy work off quickly to a function or task boundary.
- Return a fast acknowledgement instead of turning the API layer into a long-running worker.
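The three rules above can be sketched as one small handler. This is an illustrative stand-in, not the project's actual API: the field names and the `enqueue` function are hypothetical placeholders for a real Celery `.delay()` call.

```python
# Sketch of a thin handler: normalize early, dispatch fast, acknowledge.
# Field names and enqueue() are illustrative stand-ins, not the real API.

def normalize_fields(payload: dict) -> dict:
    """Lowercase and underscore naming fields so downstream asset lookup stays predictable."""
    return {
        k: v.strip().lower().replace(" ", "_") if isinstance(v, str) else v
        for k, v in payload.items()
    }

def enqueue(task_name: str, params: dict) -> str:
    """Stand-in for a Celery task.delay(); returns a placeholder task id."""
    return f"queued:{task_name}"

def handle_request(payload: dict) -> dict:
    params = normalize_fields(payload)          # 1. normalize request fields early
    task_id = enqueue("run_pipeline", params)   # 2. hand heavy work off quickly
    return {"status": "accepted", "task_id": task_id}  # 3. fast acknowledgement
```

Note that the handler itself does no science: everything after normalization happens behind the task boundary.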
## The Reusable Layers
### 1. Request and route layer

`computing/urls.py` and `computing/api.py`

This layer exposes route names and maps requests to handlers.
### 2. Task or callable boundary
- direct function calls for local experimentation
- Celery tasks for async production-style execution
This is the best place to separate:
- pure computation
- retry behavior
- auth or entry-point concerns
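One way to make that separation concrete is to keep the computation pure and attach retry behavior at the boundary. The sketch below uses a plain decorator in place of Celery's `autoretry_for`; all names are hypothetical.

```python
import time

# Pure computation: no auth, no retries, trivially callable from a shell.
def compute_stat(values: list) -> float:
    if not values:
        raise ValueError("empty input")
    return sum(values) / len(values)

# Retry behavior lives at the boundary, not inside the science function.
def with_retries(fn, attempts: int = 3, delay: float = 0.0):
    def wrapper(*args, **kwargs):
        last_exc = None
        for _ in range(attempts):
            try:
                return fn(*args, **kwargs)
            except Exception as exc:
                last_exc = exc
                time.sleep(delay)
        raise last_exc
    return wrapper

# In production the same boundary would be a Celery task, e.g.
# @shared_task(autoretry_for=(SomeTransientError,), max_retries=3)
run_compute = with_retries(compute_stat)
```

Because `compute_stat` knows nothing about retries or Celery, it stays directly callable for local experimentation.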
### 3. Processing layer

- workflow modules under `computing/`

This layer contains the actual geospatial logic.
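The processing layer benefits from the same purity. As an illustrative example (not taken from the actual modules), a band-math helper such as NDVI can be written and tested with no Django, Celery, or GEE in sight:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red).

    Guard against a zero denominator for fully dark pixels.
    """
    denom = nir + red
    if denom == 0:
        return 0.0
    return (nir - red) / denom
```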
### 4. Publication and metadata layer

`computing/utils.py` and `public_api/views.py`

This is where outputs become discoverable datasets, layers, and public metadata.
### 5. Integration helper layer

`utilities/gee_utils.py`
This layer standardizes asset naming, initialization, exports, and sync helpers.
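Standardized asset naming can be as small as one pure function. This sketch assumes a hypothetical project root and naming scheme, not the real conventions in `gee_utils.py`:

```python
import re

# Hypothetical GEE project root, not the project's real one.
GEE_ROOT = "projects/example-project/assets"

def asset_path(*parts: str) -> str:
    """Build a GEE asset path from name parts, normalized to a safe charset.

    Lowercases each part and replaces anything outside [A-Za-z0-9_-] with "_",
    so the same logical name always maps to the same asset id.
    """
    safe = [re.sub(r"[^A-Za-z0-9_-]", "_", p.strip().lower()) for p in parts]
    return "/".join([GEE_ROOT, *safe])
```

Centralizing this in one helper is what lets the API layer, the pipelines, and the sync code all agree on where an asset lives.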
## Why Local-First Matters Programmatically
The safest sequence is:
- make the analytical function work locally
- make it callable from shell or a small script
- expose it through the Django Computing API
- add Celery, auth, retries, and publication integration
That is why the developer flow begins with Local Pipeline First, not with Celery or admin wiring.
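The first two steps of that sequence amount to nothing more than importing the function in a Django shell or a throwaway driver script. The function below is a stand-in; the real one would live under `computing/`:

```python
# Local-first in practice: the pipeline function is plain Python, so a tiny
# driver script (or `python manage.py shell`) is all the harness you need.
# run_pipeline is a stand-in for a real function under computing/.

def run_pipeline(block_name: str) -> dict:
    """Stand-in analytical function."""
    return {"block": block_name, "status": "ok"}

if __name__ == "__main__":
    # Call it directly, inspect the result, iterate; no Celery or routing involved.
    result = run_pipeline("demo_block")
    assert result["status"] == "ok"
```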
## Where Public Data Re-enters

Once the computation is published, downstream public surfaces (for example, the public API in `public_api/views.py`) begin to depend on it. That is the bridge from internal computation to public usefulness.