Intelligence · Data Pipeline

Your data in. Your loops running. In under an hour.

Data Pipeline handles import, validation, and load for core entities with schema mapping, connector support, and a concrete OSS-to-cloud migration path.

Products import · 1,247 rows · PROCESSING (84%)

1,239 valid ✓ · 8 errors · 0 warnings

Inventory sync · 4,821 records · Completed 14m ago

Supplier list · 89 records · Completed 2h ago

PO history · 312 records · Completed yesterday

6 entities

Products, inventory, suppliers, POs, orders, and locations

Field mapping

Map source columns to Better Data schema with transform rules

OSS path

CLI export from self-hosted OSS imports directly to cloud

Import any entity. Map any field.

Source files load as-is, with no reformatting required up front. Field-mapper controls keep schema translation explicit and repeatable across recurring import jobs.

  • Products with SKU, GTIN, category, and supplier mapping
  • Inventory with location, lot, expiry, and quantity context
  • Supplier records with lead time and terms
  • Purchase and sales orders with line detail
Your columns → Better Data field
product_code → SKU
description → name
qty_on_hand → on_hand_qty (int)
warehouse → location_id
exp_date → expiry_date
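As a sketch, the mapping table above could be expressed as a small transform table applied per row. The column and field names come from the example; the `FIELD_MAP` structure and `map_row` helper are illustrative assumptions, not the Data Pipeline API.

```python
# Illustrative only: a dict-based version of the mapping table above.
# Each entry pairs a target field with a transform rule (here, a type cast).
FIELD_MAP = {
    "product_code": ("SKU", str),
    "description":  ("name", str),
    "qty_on_hand":  ("on_hand_qty", int),  # cast to integer, per the (int) rule
    "warehouse":    ("location_id", str),
    "exp_date":     ("expiry_date", str),
}

def map_row(source_row: dict) -> dict:
    """Translate one source row into Better Data schema fields."""
    return {
        target: transform(source_row[src])
        for src, (target, transform) in FIELD_MAP.items()
        if src in source_row
    }

print(map_row({"product_code": "LMB-BRS-001", "qty_on_hand": "42"}))
# {'SKU': 'LMB-BRS-001', 'on_hand_qty': 42}
```

Keeping the mapping as data rather than code is what makes recurring jobs repeatable: the same table runs unchanged on every import.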

Validation report

1,239 valid ✓ · 8 errors ✗ · 0 warnings

Row · Issue · Type
42 · SKU missing · Required
87 · GTIN checksum fail · Format
203 · Duplicate SKU LMB-BRS-001 · Conflict

Validation before import, not after

Data integrity checks run before write. Teams get row-level diagnostics for required fields, formats, and conflicts, then choose exactly what proceeds to load.

  • Required field checks before commit
  • Format validation for numeric, date, and GTIN constraints
  • Duplicate detection against existing records
  • Downloadable row-level error reports with reason codes
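A minimal sketch of the row-level checks above, producing the same (row, issue, type) diagnostics shown in the report. The GTIN check is the standard GS1 mod-10 check digit; the `validate_row` function and its field names are assumptions for illustration, not the product's implementation.

```python
def gtin_checksum_ok(gtin: str) -> bool:
    """Standard GS1 mod-10 check digit (GTIN-8/12/13/14)."""
    if not gtin.isdigit() or len(gtin) not in (8, 12, 13, 14):
        return False
    digits = [int(d) for d in gtin]
    # Weights alternate 3,1,3,... starting from the digit left of the check digit.
    total = sum(d * (3 if i % 2 == 0 else 1)
                for i, d in enumerate(reversed(digits[:-1])))
    return (10 - total % 10) % 10 == digits[-1]

def validate_row(row_num: int, row: dict, seen_skus: set) -> list:
    """Row-level diagnostics as (row, issue, type) tuples; empty list = valid."""
    issues = []
    sku = row.get("SKU", "").strip()
    if not sku:
        issues.append((row_num, "SKU missing", "Required"))
    elif sku in seen_skus:
        issues.append((row_num, f"Duplicate SKU {sku}", "Conflict"))
    else:
        seen_skus.add(sku)
    gtin = row.get("GTIN")
    if gtin and not gtin_checksum_ok(gtin):
        issues.append((row_num, "GTIN checksum fail", "Format"))
    return issues

seen = set()
print(validate_row(42, {"SKU": ""}, seen))
# [(42, 'SKU missing', 'Required')]
```

Because every check runs before any write, a failed row produces a diagnostic rather than a partial load, and the caller decides what proceeds to commit.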

OSS to SaaS migration that actually works

OSS export archives can be loaded directly through the SaaS import processor without manual remapping. Migration remains traceable and repeatable from command line to final job close.

  • CLI export from OSS produces import-ready archive
  • SaaS processor validates and loads in one job
  • Tenant mappings preserved across environments
  • Full audit trail for migration events
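The two-step flow above can be sketched as a small driver: run the documented CLI export, then hand the archive to the import processor. The CLI invocation matches the command shown in this section; the `upload` callable, the archive filename, and the injectable `runner` are hypothetical scaffolding so the flow can be exercised without the binary installed.

```python
import subprocess

def export_tenant(tenant: str, runner=subprocess.run) -> str:
    """Step 1: the documented CLI export. `runner` is injectable so the
    flow can be tested without the betterdata binary present."""
    runner(["betterdata", "export", "--tenant", tenant, "--format", "zip"],
           check=True)
    return f"{tenant}-export.zip"   # archive name here is illustrative

def migrate(tenant: str, upload, runner=subprocess.run) -> str:
    """Step 2: hand the archive to the SaaS import processor.
    `upload` is a hypothetical callable (e.g. an HTTP POST wrapper)
    that returns the final job status."""
    archive = export_tenant(tenant, runner=runner)
    return upload(archive)          # a clean run ends in 'COMPLETED'
```

Keeping both steps in one script is what makes the migration repeatable: the same command sequence can be re-run per tenant, and each run maps to one auditable import job.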

OSS Self-Hosted

betterdata export --tenant lumebonde --format zip

SaaS Import Processor

lumebonde-export-2026-03-06.zip

6 entity types · 4,821 records

Status: COMPLETED ✓

Built for operators and developers

Data Pipeline is the operational entry point for new tenants, migrations, and recurring sync jobs. Every load is validated and auditable before loops consume the data.

API and integration

  • REST API for import job creation, status polling, and error reports
  • Connector support for scheduled syncs into Better Data entities
  • OSS CLI export command produces Data Pipeline-compatible archives
View import docs →
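Since jobs are created and then polled over the REST API, a client typically loops on status until the job leaves PROCESSING. The loop below is a generic sketch; the `fetch_status` callable stands in for whatever GET request the API exposes, and the status strings mirror the states shown on this page.

```python
import time

def wait_for_job(job_id: str, fetch_status, interval: float = 2.0,
                 max_polls: int = 100) -> str:
    """Poll an import job until it leaves PROCESSING.
    `fetch_status` is a hypothetical callable wrapping the status
    endpoint; it returns the current status string for the job."""
    for _ in range(max_polls):
        status = fetch_status(job_id)
        if status != "PROCESSING":
            return status  # e.g. COMPLETED, or FAILED with an error report
        time.sleep(interval)
    raise TimeoutError(f"job {job_id} still PROCESSING after {max_polls} polls")
```

Bounding the loop with `max_polls` keeps a stuck job from blocking a scheduled sync forever; on FAILED, the error-report endpoint would be the next call.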

Related modules

Catalog
Product and supplier records imported through pipeline jobs.
Inventory
Inventory positions bootstrapped from validated imports.
Analytics
Loop event exports for external analysis use cases.
Forecasting
Historical demand imports seed baseline scenarios.

Your data in Better Data. Loops live same day.

Book a 30-minute demo and bring your product file. We will run a live import together.

Book a demo