Query data lakes, streams, and graphs in ArcGIS — no ETL.

ArcGIS Enterprise Custom Data Feeds powered by DuckDB let you serve Parquet/GeoParquet files, S3/Blob object stores, Iceberg/Delta tables, Kafka streams, and graph data directly as Feature Services, with no copies and no pipelines.
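Under the hood, a Custom Data Feeds provider follows the Koop-style model contract: a getData method that hands GeoJSON back to the runtime. Below is a minimal sketch of that pattern with DuckDB as the query engine; the file path, columns, and class shape are illustrative placeholders, not this repo's actual provider code.

```ts
// Minimal Koop-style provider model sketch. Path, columns, and names
// are illustrative placeholders.
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

export class Model {
  getData(
    req: unknown,
    callback: (err: Error | null, geojson?: unknown) => void
  ): void {
    // The spatial extension supplies ST_AsGeoJSON for the geometry column.
    db.exec("INSTALL spatial; LOAD spatial;", (err) => {
      if (err) return callback(err);
      // DuckDB queries the GeoParquet file in place; nothing is imported.
      db.all(
        `SELECT id, name, ST_AsGeoJSON(geom) AS geometry
         FROM read_parquet('data/parcels.parquet')`,
        (err, rows) => {
          if (err) return callback(err);
          // The CDF runtime turns this FeatureCollection into a
          // Feature Service response.
          callback(null, {
            type: "FeatureCollection",
            features: rows.map((r) => ({
              type: "Feature",
              properties: { id: r.id, name: r.name },
              geometry: JSON.parse(r.geometry),
            })),
          });
        }
      );
    });
  }
}
```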

High-performance

DuckDB’s in-process, vectorized engine delivers interactive map queries with low latency.

No-ETL data virtualization

Read Parquet, Arrow, CSV, and cloud objects in-place — eliminate brittle pipelines.
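As a sketch of what in-place reading looks like from Node (the URL and columns are placeholders): DuckDB's httpfs extension range-reads remote files, so a query fetches only the columns and row groups it touches.

```ts
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

// httpfs lets DuckDB range-read remote objects over HTTP(S)/S3.
db.exec("INSTALL httpfs; LOAD httpfs;", (err) => {
  if (err) throw err;
  db.all(
    `SELECT station_id, avg(temp_c) AS avg_temp
     FROM read_parquet('https://example.com/data/readings.parquet')
     GROUP BY station_id`,
    (err, rows) => {
      if (err) throw err;
      console.log(rows); // served straight from the remote file, no ETL
    }
  );
});
```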

Real-time + Graph

Stream from Kafka and analyze networks with SQL/PGQ via DuckPGQ.
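For graphs, DuckPGQ is a DuckDB community extension that adds SQL/PGQ pattern matching over plain tables; Kafka topics come in through the Tributary extension listed below. Here is a sketch of the SQL/PGQ side with an illustrative airports/flights schema (exact DDL details may vary across DuckPGQ versions).

```ts
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

// DuckPGQ adds SQL/PGQ property-graph matching over ordinary tables.
// The airports/flights schema is illustrative.
db.exec(
  `INSTALL duckpgq FROM community; LOAD duckpgq;
   CREATE TABLE airports (id INTEGER, code TEXT);
   CREATE TABLE flights (from_id INTEGER, to_id INTEGER);
   CREATE PROPERTY GRAPH flight_graph
     VERTEX TABLES (airports)
     EDGE TABLES (flights
       SOURCE KEY (from_id) REFERENCES airports (id)
       DESTINATION KEY (to_id) REFERENCES airports (id));`,
  (err) => {
    if (err) throw err;
    // One-hop pattern match: every direct connection between airports.
    db.all(
      `SELECT * FROM GRAPH_TABLE (flight_graph
         MATCH (a:airports)-[f:flights]->(b:airports)
         COLUMNS (a.code AS origin, b.code AS destination))`,
      (err, rows) => {
        if (err) throw err;
        console.log(rows);
      }
    );
  }
);
```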

Get started in 3 steps

  1. Clone the repo and install dependencies: npm install
  2. Configure sources in config/config.json (S3, Blob, Iceberg, Kafka, etc.); see the sketch after this list
  3. Build the provider and deploy it to ArcGIS Server: npm run build, then export it with the cdf utility and register the generated package in ArcGIS Server Manager
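The exact schema of config/config.json is defined by this repo; the shape below is purely hypothetical, written as a TypeScript type only to illustrate the kind of information each source needs.

```ts
// Hypothetical shape of a config/config.json entry; consult the
// repo's actual schema. All keys are placeholders.
interface DataSource {
  name: string;                     // service name exposed through ArcGIS
  type: "parquet" | "s3" | "iceberg" | "delta" | "kafka";
  uri: string;                      // file path, object URL, or broker list
  options?: Record<string, string>; // e.g. region, topic, secret reference
}

const exampleConfig: { sources: DataSource[] } = {
  sources: [
    { name: "parcels", type: "parquet", uri: "data/parcels.parquet" },
    {
      name: "sensor_stream",
      type: "kafka",
      uri: "broker1:9092",
      options: { topic: "sensor-readings" },
    },
  ],
};
```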

See Installation for commands.

Supported sources

  • Files: Parquet, GeoParquet, Arrow, CSV, Shapefiles
  • Cloud: AWS S3, Azure Blob (via httpfs/aws/azure)
  • Lakehouse: Apache Iceberg, Delta Lake
  • Streaming: Apache Kafka (Tributary)
  • Events: WebSocket, Redis Pub/Sub (Radio)
  • Graph: SQL/PGQ (DuckPGQ)
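Most of these map one-to-one onto DuckDB extensions, which the provider loads at startup. A sketch (the paths and table locations are placeholders; httpfs, iceberg, delta, and spatial are real DuckDB extensions):

```ts
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

// One DuckDB extension per source family; paths are placeholders.
db.exec(
  `INSTALL httpfs; LOAD httpfs;   -- S3 / HTTPS objects
   INSTALL iceberg; LOAD iceberg; -- Apache Iceberg tables
   INSTALL delta; LOAD delta;     -- Delta Lake tables
   INSTALL spatial; LOAD spatial; -- GeoParquet, shapefiles`,
  (err) => {
    if (err) throw err;
    // Lakehouse tables are scanned where they live; no copy is made.
    db.all(
      `SELECT count(*) AS trips
       FROM iceberg_scan('s3://lake/warehouse/trips')`,
      (err, rows) => {
        if (err) throw err;
        console.log(rows);
      }
    );
  }
);
```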

Security & operations

  • Use environment variables for credentials, never hard-coded values (see the sketch after this list)
  • TLS for network endpoints
  • Input sanitization and least-privilege access
  • Keep the provider's Node.js and dependency versions aligned with the ArcGIS Server CDF runtime
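For the first point, DuckDB secrets can pick up credentials from the environment rather than from code or config files. A sketch using the aws extension's credential_chain provider, which resolves the standard AWS environment variables, profiles, and instance roles:

```ts
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

// Credentials stay in the environment; nothing is hard-coded.
// credential_chain resolves AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY,
// AWS_REGION, profiles, and instance metadata in the usual order.
db.exec(
  `INSTALL httpfs; LOAD httpfs;
   INSTALL aws; LOAD aws;
   CREATE SECRET s3_env (TYPE S3, PROVIDER credential_chain);`,
  (err) => {
    if (err) throw err;
    // From here, queries against s3:// URLs authenticate via the secret.
  }
);
```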

Performance levers

  • Vectorized execution, parallel scans
  • Columnar I/O for Parquet/Arrow
  • Query caching and predicate pushdown (see the sketch after this list)
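To see the pushdown at work, EXPLAIN a filtered Parquet query: the filter should appear inside the scan operator rather than as a separate filter above it, and row groups whose statistics rule out a match are skipped. File and columns are placeholders.

```ts
import duckdb from "duckdb";

const db = new duckdb.Database(":memory:");

// Only station_id and temp_c are read (columnar I/O), and the
// temp_c > 35 filter is pushed into the Parquet scan, where row-group
// statistics let DuckDB skip non-matching data entirely.
db.all(
  `EXPLAIN
   SELECT station_id, temp_c
   FROM read_parquet('data/readings.parquet')
   WHERE temp_c > 35`,
  (err, rows) => {
    if (err) throw err;
    console.log(rows); // the filter shows up inside the scan operator
  }
);
```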

Resources