Troubleshooting Guide

Common issues when installing, validating, or starting the current CoRE Stack backend.

The current install flow assumes installation/install.sh, nrm_app/.env, and the built-in internal_api_initialisation_test.py check.


Install Or Rerun Problems

conda: command not found

The current installer default is:

  • MINICONDA_DIR=$HOME/miniconda3
  • CONDA_ENV_NAME=corestackenv

Try:

source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda activate corestackenv

If you changed MINICONDA_DIR or CONDA_ENV_NAME in install.sh, use those values instead.
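The same activation can be sketched with overridable variables, so one snippet covers both the defaults and a customized install.sh. The variable names match the installer; the existence guard is an addition so the snippet fails loudly instead of erroring mid-source:

```shell
# Activate the installer's conda env, honoring custom MINICONDA_DIR /
# CONDA_ENV_NAME overrides. Defaults match the current install.sh values.
MINICONDA_DIR="${MINICONDA_DIR:-$HOME/miniconda3}"
CONDA_ENV_NAME="${CONDA_ENV_NAME:-corestackenv}"
if [ -f "$MINICONDA_DIR/etc/profile.d/conda.sh" ]; then
  . "$MINICONDA_DIR/etc/profile.d/conda.sh"
  conda activate "$CONDA_ENV_NAME"
else
  echo "conda.sh not found under $MINICONDA_DIR" >&2
fi
```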

You are not sure which installer step to rerun

Print the current step names first (these rerun examples assume you are in the backend repo root):

bash installation/install.sh --list-steps

Then rerun only what you need:

# Continue from a later point
bash installation/install.sh --from gee_configuration

# Rerun only the validation step
bash installation/install.sh --only initialisation_check

# Skip the large admin-boundary download on a partial rerun
bash installation/install.sh --from env_file --skip admin_boundary_data

PostgreSQL connection errors

Check the service first:

sudo systemctl status postgresql
sudo systemctl start postgresql
sudo systemctl enable postgresql

The current installer defaults are:

  • database: corestack_db
  • user: corestack_admin

Test that pair directly:

psql -h localhost -U corestack_admin -d corestack_db

If you changed those defaults in install.sh, test the values you actually configured.

RabbitMQ is missing or Celery cannot connect

sudo apt install -y rabbitmq-server
sudo systemctl start rabbitmq-server
sudo systemctl enable rabbitmq-server
sudo systemctl status rabbitmq-server

nrm_app/.env is missing or you still have a repo-root .env

The current runtime env file is:

nrm_app/.env

install.sh can migrate a legacy repo-root .env into that location, but the active backend should now read from nrm_app/.env.

Rerun just the env step if needed:

bash installation/install.sh --only env_file
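If you want to confirm the layout yourself before rerunning the step, a minimal check can be sketched as below. The function name and messages are hypothetical; only the two paths (nrm_app/.env and a legacy repo-root .env) come from this guide:

```shell
# Confirm the runtime env file exists and flag a leftover legacy repo-root
# .env that install.sh would normally migrate.
check_env_layout() {
  repo_root="$1"
  if [ -f "$repo_root/nrm_app/.env" ]; then
    echo "ok: nrm_app/.env present"
  else
    echo "missing: nrm_app/.env"
  fi
  if [ -f "$repo_root/.env" ]; then
    echo "warning: legacy repo-root .env still present"
  fi
}

# Demo against a throwaway directory tree
demo="$(mktemp -d)"
mkdir -p "$demo/nrm_app"
printf 'GEOSERVER_URL=https://example/geoserver\n' > "$demo/nrm_app/.env"
check_env_layout "$demo"
rm -rf "$demo"
```

Run it from the backend repo root with `check_env_layout .`.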

Admin-boundary download or extraction fails

The installer downloads a large archive and expects the final data under:

data/admin-boundary/input/
data/admin-boundary/output/

Useful checks:

find data/admin-boundary -maxdepth 3 -type f | head

The installer can normalize an older nested extraction layout automatically, but if the directory is incomplete, rerun:

bash installation/install.sh --only admin_boundary_data

If you already have the dataset and do not want to download it again on later reruns, use --skip admin_boundary_data.
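A quick directory check along these lines can tell you whether a rerun is needed at all. Only the two expected paths are taken from this guide; the function name and messages are illustrative:

```shell
# Verify the directory layout the installer expects for the admin-boundary
# dataset. Filenames inside the directories vary by state, so only the
# directories themselves are checked here.
check_admin_boundary() {
  base="$1"
  status=0
  for d in input output; do
    if [ -d "$base/data/admin-boundary/$d" ]; then
      echo "ok: data/admin-boundary/$d"
    else
      echo "missing: data/admin-boundary/$d"
      status=1
    fi
  done
  return $status
}

# Demo against a throwaway tree with only input/ present
demo="$(mktemp -d)"
mkdir -p "$demo/data/admin-boundary/input"
check_admin_boundary "$demo" || echo "rerun: bash installation/install.sh --only admin_boundary_data"
rm -rf "$demo"
```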


Built-In Validation Failures

jwt-auth fails

The initialization test creates a Bearer token automatically from an active Django user.

The installer normally handles this by creating or refreshing a test superuser like:

  • username: test_user_####
  • password: test_change_me

If that step was skipped or the user was removed, rerun:

bash installation/install.sh --only superuser,initialisation_check

Or create your own user:

source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda activate corestackenv
cd /path/to/core-stack-backend
python manage.py createsuperuser

GEE_DEFAULT_ACCOUNT_ID is blank or gee-probe fails

The easiest fix is to import a GEE JSON through the installer:

bash installation/install.sh --from gee_configuration --gee-json /full/path/to/service-account.json

You can also provide it with the generic input form:

bash installation/install.sh --from gee_configuration --input gee_json=/full/path/to/service-account.json

Then rerun the strict GEE validation:

source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda activate corestackenv
cd /path/to/core-stack-backend
python computing/misc/internal_api_initialisation_test.py --require-gee

gcs-upload-probe fails

This usually means the configured GEE service account can authenticate with Earth Engine, but it cannot write to the Google Cloud Storage bucket used by the first computing API.

Grant bucket write access to that same service account. The current test output explicitly calls out storage.objects.create; cleanup is smoother if the account also has storage.objects.delete.

geoserver-probe warns or fails

The current initialization check probes:

${GEOSERVER_URL}/rest/about/version.json

Confirm that nrm_app/.env has all three values:

  • GEOSERVER_URL
  • GEOSERVER_USERNAME
  • GEOSERVER_PASSWORD

You can pass them directly to the installer:

bash installation/install.sh \
  --from gee_configuration \
  --input geoserver_url=https://host/geoserver \
  --input geoserver_username=admin \
  --input geoserver_password=your-password
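To see at a glance whether the three keys are set and exactly which URL the probe will hit, something like this sketch works. The probe path comes from this guide; the function name and the `KEY=value` env-file format are assumptions:

```shell
# Check the three GeoServer keys in an env file and print the version-probe
# URL the initialization check hits. A trailing slash on GEOSERVER_URL is
# stripped so the path is not doubled.
geoserver_probe_url() {
  env_file="$1"
  for key in GEOSERVER_URL GEOSERVER_USERNAME GEOSERVER_PASSWORD; do
    grep -q "^${key}=" "$env_file" || { echo "missing: $key" >&2; return 1; }
  done
  url="$(sed -n 's/^GEOSERVER_URL=//p' "$env_file")"
  echo "${url%/}/rest/about/version.json"
}

# Demo with a sample env file
tmp="$(mktemp)"
cat > "$tmp" <<'EOF'
GEOSERVER_URL=https://host/geoserver
GEOSERVER_USERNAME=admin
GEOSERVER_PASSWORD=your-password
EOF
geoserver_probe_url "$tmp"
rm -f "$tmp"
```

Against the real file, call it as `geoserver_probe_url nrm_app/.env` and then curl the printed URL with your credentials.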

admin-boundary-compute fails

The validation script needs:

  • data/admin-boundary/input/soi_tehsil.geojson
  • state-wise district GeoJSON files under data/admin-boundary/input/<state>/

If those are missing or corrupted, rerun the admin-boundary step, then rerun the initialization check.

first-computing-api is skipped or fails

That check only runs fully when all of these are ready:

  • an active Django user
  • GEE_DEFAULT_ACCOUNT_ID
  • working GEE authentication
  • working GCS upload access
  • reachable GeoServer REST credentials
  • usable admin-boundary sample data

When it fails, the most important line is usually setup-next-step in the initialization output. Follow that guidance before trying manual curl tests.
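Before rerunning, a pre-flight sketch like this can confirm the env-file side of that readiness list. Only keys named in this guide are checked (Django user, GEE auth, and GCS access cannot be verified from the env file alone); the function name and output are illustrative:

```shell
# Pre-flight: confirm nrm_app/.env carries non-blank values for the keys
# the first-computing-api check depends on.
preflight_env() {
  env_file="$1"
  missing=0
  for key in GEE_DEFAULT_ACCOUNT_ID GEOSERVER_URL GEOSERVER_USERNAME GEOSERVER_PASSWORD; do
    if grep -q "^${key}=." "$env_file"; then
      echo "ok: $key"
    else
      echo "blank or missing: $key"
      missing=1
    fi
  done
  return $missing
}

# Demo with a sample env file that only has the GEE account set
tmp="$(mktemp)"
printf 'GEE_DEFAULT_ACCOUNT_ID=example-account\n' > "$tmp"
preflight_env "$tmp" || echo "fix the blank keys, then rerun the initialization check"
rm -f "$tmp"
```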


Public API Smoke-Test Problems

The public API smoke test is skipped

The current installer skips public_api_check when PUBLIC_API_X_API_KEY is blank.

Provide it up front:

bash installation/install.sh \
  --only public_api_check \
  --input public_api_key=your-public-api-key \
  --input public_api_base_url=https://geoserver.core-stack.org/api/v1

Or add the same values to nrm_app/.env.
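Since a blank PUBLIC_API_X_API_KEY is exactly what triggers the skip, you can check the env file first. The key name and skip behavior come from this guide; the function name and messages are illustrative:

```shell
# Report whether public_api_check will run, based on the env file alone.
public_api_key_status() {
  if grep -q '^PUBLIC_API_X_API_KEY=.' "$1" 2>/dev/null; then
    echo "public_api_check will run"
  else
    echo "skipped: PUBLIC_API_X_API_KEY blank or missing"
  fi
}

public_api_key_status nrm_app/.env
```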

You want to debug the helper directly

The backend ships a standard-library helper at installation/public_api_client.py.

Common checks:

# Minimal smoke test
python installation/public_api_client.py smoke-test

# Resolve names against the active hierarchy
python installation/public_api_client.py resolve \
  --state bihar \
  --district jamu \
  --tehsil jami

# Download one tehsil package
python installation/public_api_client.py download \
  --state assam \
  --district cachar \
  --tehsil lakhipur

Auth And API Usage Errors

401 Unauthorized on public-data APIs

The public_api routes expect X-API-Key, not a Bearer token.

Use:

curl \
  -H "X-API-Key: YOUR_API_KEY" \
  "http://127.0.0.1:8000/api/v1/get_generated_layer_urls/?state=karnataka&district=raichur&tehsil=devadurga"

JWT route confusion

If a route expects JWT auth, use:

Authorization: Bearer YOUR_JWT_TOKEN

The installer validation generates that token automatically for its own checks, but manual API testing still needs a valid JWT for JWT-protected routes.


Manual Runtime Problems

Django server will not start

source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda activate corestackenv
cd /path/to/core-stack-backend
python manage.py check
python manage.py runserver 127.0.0.1:8000

Celery worker will not start

source "$HOME/miniconda3/etc/profile.d/conda.sh"
conda activate corestackenv
cd /path/to/core-stack-backend
celery -A nrm_app worker -l info -Q nrm

Swagger is not loading

Confirm the server is actually up, then reload the Swagger page in your browser.

You are calling stale route names

The current auth-free geography routes are:

  • GET /api/v1/get_states/
  • GET /api/v1/get_districts/<state_id>/
  • GET /api/v1/get_blocks/<district_id>/
  • GET /api/v1/proposed_blocks/

Generated layer URLs return nothing

That usually means one of these is missing:

  • the location is not activated
  • the layer metadata was never written
  • the layer was never published or the public-facing record is incomplete

The public API helper and the installer smoke test are both useful here because they validate the public surface with the current route names and active-location data.
