Guides
Step-by-step guides. Each guide walks through a specific task.
- Create a Lib Function: build a lib function for API ingestion or outbound sync. Step-by-step from API dict to working fetch/push function, covering pagination, incremental loads, OAuth, and error handling.
- Set Up OAuth Authentication: authenticate with OAuth providers for API ingestion and outbound sync. One-time setup with automatic token refresh.
- Sync Data to External APIs: push pipeline data to external APIs using @sink, with change detection, batching, rate limiting, and per-row tracking.
- Collect Events: accept HTTP POST with durable buffering; events are materialized into DuckLake during pipeline runs.
- Serve Data via OData: serve pipeline results to Power BI, Excel, Grafana, or any OData-compatible BI tool via OData v4.
- Preview Changes: run the full pipeline against a temporary catalog copy in OndatraSQL's sandbox mode before committing to production.
- Schedule Pipeline Runs: install an OS-native scheduler for automated runs; one command sets up systemd on Linux or launchd on macOS.
- Maintain DuckLake Storage: compact files, expire snapshots, and clean up DuckLake storage.
- Mask Sensitive Columns: apply mask, hash, or redact tags to columns and the pipeline handles the rest, protecting sensitive data during materialization.
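Change detection for outbound sync is commonly implemented by fingerprinting each row and comparing the fingerprint against the last synced state, so only changed rows are pushed. A minimal sketch of that general technique — this is an illustration of the concept, not OndatraSQL's actual mechanism, and every name here is hypothetical:

```python
import hashlib
import json

def row_fingerprint(row: dict) -> str:
    # Canonical JSON (sorted keys, no whitespace) so field order
    # never changes the hash of identical row content.
    canonical = json.dumps(row, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()

def changed_rows(rows, last_synced: dict):
    """Yield (id, fingerprint, row) for rows whose content differs
    from the fingerprint recorded at the previous sync."""
    for row in rows:
        fp = row_fingerprint(row)
        if last_synced.get(row["id"]) != fp:
            yield row["id"], fp, row

# Usage: only the new/modified row is emitted for pushing.
last_synced = {1: row_fingerprint({"id": 1, "amount": 10})}
rows = [{"id": 1, "amount": 10}, {"id": 2, "amount": 25}]
to_push = [rid for rid, _, _ in changed_rows(rows, last_synced)]
```

After a successful push, the emitted fingerprints would be written back as the new synced state, which is what makes the comparison incremental across runs.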
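The OData guide targets OData v4, whose standard system query options (`$select`, `$filter`, `$top`, and so on) are what clients like Power BI and Grafana send. As an illustration of the request shape — the service root URL below is hypothetical, not OndatraSQL's documented endpoint — a minimal query-URL builder:

```python
from urllib.parse import quote

def odata_query(base: str, entity: str, select=None, filter_=None, top=None) -> str:
    """Build an OData v4 query URL from standard system query options."""
    parts = []
    if select:
        parts.append("$select=" + ",".join(select))
    if filter_:
        # Percent-encode the filter expression (spaces, quotes, etc.)
        parts.append("$filter=" + quote(filter_))
    if top is not None:
        parts.append(f"$top={top}")
    return f"{base}/{entity}" + ("?" + "&".join(parts) if parts else "")

url = odata_query(
    "http://localhost:8080/odata",   # hypothetical service root
    "orders",
    select=["id", "total"],
    filter_="total gt 100",
    top=50,
)
```

Any OData v4-compliant consumer can issue such requests directly, which is why a single endpoint serves Power BI, Excel, and Grafana alike.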
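The masking guide names three tag types: mask, hash, and redact. As a generic sketch of what each transformation conventionally means — illustrative Python only, with made-up function names, not OndatraSQL's implementation:

```python
import hashlib

def mask(value: str, keep: int = 4) -> str:
    # Hide all but the last `keep` characters, e.g. a card number.
    if len(value) <= keep:
        return "*" * len(value)
    return "*" * (len(value) - keep) + value[-keep:]

def hash_value(value: str) -> str:
    # Deterministic one-way hash: equal inputs give equal outputs,
    # so joins and group-bys on the hashed column still work.
    return hashlib.sha256(value.encode()).hexdigest()

def redact(value: str) -> str:
    # Replace the value entirely; nothing is recoverable.
    return "[REDACTED]"
```

The practical difference: mask keeps the value partially readable, hash destroys readability but preserves equality, and redact preserves nothing — which is why the three exist as separate tags.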