Claro is designed for continuous catalogue work, not one-off runs. Three layers handle automation:
  1. Schedules — recurring runs of any operation.
  2. Chained pipelines — output of one operation triggers the next.
  3. Connectors — inbound and outbound integrations with your existing systems.

Schedules

Every operation can run on a schedule. From the catalogue’s Operation tab, choose a recurrence — hourly, daily, weekly, or a cron expression — and pick a filter so only the relevant records are processed each run. Use schedules for:
  • Daily price and competitor monitoring.
  • Weekly data quality reports (Analyse).
  • Hourly supplier portal pickups when suppliers update frequently.
  • Recurring exports to BigQuery or S3.
Each scheduled run shows up in the operation’s run history alongside on-demand runs.
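Schedules are configured in the UI, but if you manage them as code, the shape is roughly the sketch below. The base URL, endpoint path, field names, and auth scheme are assumptions for illustration, not a documented Claro API; the documented path is the catalogue's Operation tab described above.

```python
import requests

# Hypothetical sketch: endpoint, payload fields, and auth scheme are
# assumptions, not Claro's documented API.
API = "https://api.getclaro.ai/v1"  # assumed base URL
headers = {"Authorization": "Bearer YOUR_API_KEY"}

schedule = {
    "operation_id": "op_price_monitor",   # assumed identifier
    "recurrence": "0 6 * * *",            # cron expression: daily at 06:00
    # Filter so only relevant records are processed each run.
    "filter": {"field": "status", "op": "eq", "value": "active"},
}

resp = requests.post(f"{API}/schedules", json=schedule, headers=headers)
resp.raise_for_status()
print(resp.json())
```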

Chained pipelines

Operations can be chained — completion of one triggers the next, optionally gated by filters or review checkpoints. A typical onboarding pipeline:
Data Source Mapping → Validate Data → Normalize Data → Bulk Enrichment → Push & Sync
Pipelines are configured on the catalogue’s Operation tab. You can:
  • Branch on outcome (e.g. route low-confidence enrichment to a different reviewer).
  • Halt on validation failure to prevent bad data downstream.
  • Re-run a single stage without re-running the whole pipeline.
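As a sketch of how the branching and halt rules above might compose, here is a hypothetical pipeline definition. The stage names mirror the onboarding pipeline; the keys (stages, on_fail, branch) are illustrative, not a documented Claro schema.

```python
# Hypothetical pipeline definition; key names are assumptions for illustration.
pipeline = {
    "name": "onboarding",
    "stages": [
        {"operation": "data_source_mapping"},
        # Halt on validation failure to prevent bad data downstream.
        {"operation": "validate_data", "on_fail": "halt"},
        {"operation": "normalize_data"},
        {
            "operation": "bulk_enrichment",
            # Branch on outcome: route low-confidence results to a reviewer.
            "branch": {"confidence_below": 0.8, "route_to": "review_queue"},
        },
        {"operation": "push_and_sync"},
    ],
}
```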

Inbound connectors

For ingesting data, see Data Import & Ingestion. The full set:
  • File upload (CSV, XLSX)
  • Google Drive
  • S3
  • HTTPS scraping
  • Scheduled HTTP pulls
  • Supabase, BigQuery, Postgres
  • Supplier Portal
  • Email-as-source
A write API also accepts changes from upstream systems with full schema validation.
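A minimal sketch of submitting changes through the write API, assuming a REST endpoint and bearer auth (both assumptions; only the schema-validated write behaviour comes from this page):

```python
import requests

# Hypothetical sketch: URL, payload shape, and error format are assumptions.
API = "https://api.getclaro.ai/v1"  # assumed base URL
headers = {"Authorization": "Bearer YOUR_API_KEY"}

records = [{"sku": "SKU-123", "price": 19.99, "currency": "EUR"}]
resp = requests.post(f"{API}/catalogues/cat_42/records",
                     json=records, headers=headers)

if resp.status_code == 422:
    # Assumed: schema validation failures are reported per record.
    print("Validation errors:", resp.json())
else:
    resp.raise_for_status()
```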

Outbound connectors

For pushing data, see Distribute. The full set:
  • Shopify: Products, variants, metafields, images.
  • Amazon: Listings via SP-API.
  • BigQuery: Append/replace tables, with partitioning.
  • Google Sheets: Bi-directional sync with named ranges.
  • Webhooks: Per-change, batched, or scheduled; signed payloads.
  • S3: Periodic dumps.
  • Supabase / Postgres: Insert/update via SQL or REST.
A read API is available for downstream systems that prefer to pull.
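For pull-based consumers, a paginated read might look like the following sketch; the endpoint path and cursor parameter names are assumptions.

```python
import requests

# Hypothetical paginated pull from the read API.
API = "https://api.getclaro.ai/v1"  # assumed base URL
headers = {"Authorization": "Bearer YOUR_API_KEY"}

records, cursor = [], None
while True:
    params = {"cursor": cursor} if cursor else {}
    page = requests.get(f"{API}/catalogues/cat_42/records",
                        headers=headers, params=params).json()
    records.extend(page["items"])          # assumed response keys
    cursor = page.get("next_cursor")
    if not cursor:
        break
```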

Notifications and alerts

Slack

Real-time alerts for operation completions, failures, and Monitor drifts.
  • Per-channel routing — different channels for different operations or catalogues.
  • Slash commands to trigger operations on demand.
  • Configurable verbosity (summary vs. per-record).
Setup: Settings → Integrations → Slack, then authorize the workspace and pick channels.

Email

Per-user or per-team email digests for review queues and monitoring alerts. Configurable cadence (real-time, hourly, daily).

Webhooks

Subscribe an endpoint to operation events: run.started, run.completed, run.failed, change.applied, change.queued_for_review, change.approved, change.rejected. Payloads are signed for verification.
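Because payloads are signed, receivers should verify the signature before trusting an event. A minimal sketch, assuming an HMAC-SHA256 scheme and a hypothetical X-Claro-Signature header (check your webhook settings for the actual signing details):

```python
import hmac
import hashlib

def verify(secret: str, raw_body: bytes, signature_header: str) -> bool:
    """Return True if the payload signature matches the shared secret."""
    expected = hmac.new(secret.encode(), raw_body, hashlib.sha256).hexdigest()
    # Constant-time comparison to avoid timing attacks.
    return hmac.compare_digest(expected, signature_header)
```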

Custom integrations (Dedicated plan)

For systems not covered by the built-in connectors:
  • Custom REST endpoints for proprietary systems.
  • Database connectors for direct pipeline integration.
  • ETL integration with Airflow, Prefect, dbt.
  • Dedicated technical support during implementation, with sandbox access.

Integration requests

Want a specific integration? We prioritize based on user demand. Most-requested integrations are fast-tracked in our quarterly review.