Bulk import and export

CSV, Benchling and plate-reader files in; CSV, JSON or Parquet out for downstream analysis.

Most labs already have data — in CSVs from instruments, in Benchling, in legacy spreadsheets, in plate-reader outputs. This page covers how to land that data in Dalea in bulk, and how to get data back out for downstream analysis in Python or R.

What you can import

| Format | Source | Lands in |
| --- | --- | --- |
| CSV | Plate readers, qPCR cyclers, flow cytometers, manual exports | Result tables (with sample mapping) or entity tables (objects) |
| Benchling export (.zip / .json) | Benchling ELN | Auto-mapped to environments and inventory |
| Excel (.xlsx) | Legacy lab spreadsheets | Entity or result tables, with column-mapping prompts |
| JSON | Custom pipelines, instrument software | Any table with a matching schema |

Imports are atomic: either the whole batch lands or nothing does. Validation errors flag specific rows, the rest of the batch waits, and you decide whether to fix-and-retry or split.

CSV import — bulk register objects

  1. Open the target table

    Workspace → Data → environment → the entity table (e.g. Animals).

  2. Click Bulk import → CSV

    Drag the file in or pick from your machine. Dalea reads the first row as headers by default — toggle if your file doesn't have a header row.

  3. Map columns

    Each CSV column maps to a column in the target table. Dalea pre-suggests mappings by name match. Set unmapped columns to "Ignore" or "Add as new column" if you want to extend the schema in flight.

  4. Resolve references

    Reference columns (e.g. study_group → study groups table) need a value-matching rule: by display ID, by exact name, or by another unique column. Dalea previews how many rows match.

  5. Preview & confirm

    The preview shows the first 50 rows post-validation, with green ticks and red flags. Add an audit reason ("Bulk register 24 animals for DLA-7 study, from receiving sheet") and commit.

A 24-animal CSV typically lands in under a second.
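Reference resolution (step 4) happens server-side, but you can rehearse the match locally before uploading. A minimal sketch using only the standard library; the column names, display IDs and group names here are hypothetical, and "match by exact name" is the rule being rehearsed:

```python
import csv
import io

# The CSV about to be uploaded; study_group is a reference column.
# (All names here are made-up examples.)
upload = io.StringIO(
    "display_id,sex,study_group\n"
    "AN-101,F,DLA-7 control\n"
    "AN-102,F,DLA-7 low dose\n"
    "AN-103,M,DLA-7 hgih dose\n"  # typo: will not resolve
)

# Values exported from the target study-groups table.
known_groups = {"DLA-7 control", "DLA-7 low dose", "DLA-7 high dose"}

rows = list(csv.DictReader(upload))
unresolved = [r["display_id"] for r in rows
              if r["study_group"] not in known_groups]

print(f"{len(rows) - len(unresolved)}/{len(rows)} rows resolve")
print("fix before upload:", unresolved)  # → ['AN-103']
```

Catching the typo locally means the import preview shows all green ticks on the first pass.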

CSV import — record results into a batch

The same flow applies to result tables, with two extra steps:

  • The import goes into a result batch (a recording session). Either pick an open batch or create a new one as part of the import.
  • For plate-reader exports, Dalea offers plate-aware mapping: drop in a 96-well CSV with OD readings and your plate map, and Dalea joins them by well position into per-sample rows automatically.

Realistic scenario: a Tecan plate reader exports a CSV with 96 OD₄₅₀ values. You already have a plate map block in the protocol document that says A1 = standard 4000 pg/mL, A2 = standard 4000 pg/mL replicate, … H12 = sample M-12 t=24h replicate B. Dalea matches well to sample, fits the standard curve, and records back-calculated concentrations into the cytokines result batch — all from one upload.
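Conceptually, the plate-aware mapping is a join on well position followed by a standard-curve fit. A simplified sketch with a linear curve and made-up wells, OD values and sample names (real immunoassay curves are usually four-parameter logistic, and Dalea's internals may differ):

```python
# Plate map: well -> (kind, known concentration or sample name).
# Layout, readings and names are illustrative only.
plate_map = {"A1": ("standard", 4000.0), "B1": ("standard", 2000.0),
             "C1": ("standard", 1000.0), "D1": ("standard", 500.0),
             "E1": ("sample", "M-12 t=24h")}
od = {"A1": 2.10, "B1": 1.12, "C1": 0.58, "D1": 0.31, "E1": 0.90}

# Fit conc = a*OD + b by least squares over the standard wells.
pts = [(od[w], conc) for w, (kind, conc) in plate_map.items()
       if kind == "standard"]
n = len(pts)
sx = sum(x for x, _ in pts)
sy = sum(y for _, y in pts)
sxx = sum(x * x for x, _ in pts)
sxy = sum(x * y for x, y in pts)
a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b = (sy - a * sx) / n

# Back-calculate concentrations for the sample wells.
results = {name: round(a * od[w] + b, 1)
           for w, (kind, name) in plate_map.items() if kind == "sample"}
print(results)
```

The upload does the equivalent of this in one step: join by well, fit the standards, write back-calculated values into per-sample result rows.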

Benchling import

If you have a Benchling subscription and an export, Dalea recognises the format and auto-maps:

  • Benchling projects → Dalea projects
  • Benchling entries → Dalea documents
  • Benchling registries → Dalea data environments (one entity table per registry)
  • Benchling assay results → Dalea result tables
  • Benchling inventory → Dalea inventory items

The import runs as a background job; for big migrations (10k+ objects with cross-references) it can take several minutes. You'll see live progress and a summary report at the end. The objects keep their Benchling display IDs as aliases, so existing references in legacy documents resolve.

Conflict resolution

Imports surface a conflict report when:

  • A required column is missing in the CSV.
  • An enum value isn't in the target column's allow-list.
  • A reference doesn't resolve.
  • A row would violate a uniqueness constraint.

For each conflict, Dalea offers four resolutions:

  • Skip row — ignore this row, continue with the rest.
  • Add value — extend the enum allow-list to include the new value.
  • Edit value — open an inline editor to fix the typo.
  • Map differently — re-run the column-mapping step.

Resolutions are batched: pick once for "all rows with this issue" and Dalea applies it to every matching row.
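Batched resolution amounts to keying conflicts by issue and choosing once per key. A small sketch; the conflict shapes and resolution labels are illustrative, not Dalea's internal representation:

```python
# Each conflict carries the row it affects and an issue key.
conflicts = [
    {"row": 3, "issue": ("enum", "sex", "f")},
    {"row": 8, "issue": ("enum", "sex", "f")},
    {"row": 12, "issue": ("ref", "study_group", "DLA7-ctrl")},
]

# One decision per issue key, applied to every matching row.
resolutions = {
    ("enum", "sex", "f"): "edit_value:F",       # fix the typo everywhere
    ("ref", "study_group", "DLA7-ctrl"): "skip_row",
}

plan = [(c["row"], resolutions[c["issue"]]) for c in conflicts]
print(plan)  # → [(3, 'edit_value:F'), (8, 'edit_value:F'), (12, 'skip_row')]
```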

Export

Every queryable surface in Dalea has an export button. Three formats:

| Format | What you get |
| --- | --- |
| CSV | One row per record. Display IDs for references; ISO dates; numeric units in column headers. Best for Excel, Pandas, R. |
| JSON | Full structured object, including the dimension/measurement split for result rows and any reference metadata. Best for programmatic re-import or downstream pipelines. |
| Parquet | Columnar binary; large datasets only. Best for Spark or DuckDB. |

Exports respect the current filters on the table or saved query. So if you filter to "study DLA-7, last 30 days, dose ≥ 10 mg/kg" and click export, you get exactly that slice.
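On the consuming side, the CSV conventions (ISO dates, units in headers) parse with the standard library alone. A sketch with hypothetical column names mimicking an exported slice:

```python
import csv
import io
from datetime import datetime

# A filtered slice as it would land in the exported CSV.
# Header names and values are made-up examples of the conventions.
export = io.StringIO(
    "sample,measured_at,concentration (pg/mL)\n"
    "M-12,2025-03-04T09:15:00,1625.4\n"
    "M-13,2025-03-04T09:20:00,1480.2\n"
)

rows = list(csv.DictReader(export))
for r in rows:
    r["measured_at"] = datetime.fromisoformat(r["measured_at"])  # ISO dates
    r["concentration (pg/mL)"] = float(r["concentration (pg/mL)"])

print(rows[0]["measured_at"].isoformat(), rows[0]["concentration (pg/mL)"])
```

Keeping the unit in the header means the column is self-describing all the way into Pandas or R.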

Bioinformatician pattern: round-trip analysis

A common pattern:

  1. Run the experiment in Dalea, capture results in a result batch.
  2. Export the batch as CSV.
  3. Run downstream analysis in Python (PK modelling, NCA, dose-response fits) or R (DESeq2, mixed-effects models).
  4. Re-import the analysis output as a new result batch in a separate result table (e.g. PK parameters, with columns auc_0_24, cmax, tmax, cl_per_kg).
  5. A study-summary document can now embed both raw concentrations and derived parameters side by side.

The audit trail records the export, the analysis script (if you upload it as a file in the result-batch metadata), and the re-import.
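The analysis step of the round trip can be as small as a trapezoidal AUC over the exported concentrations, reshaped into a CSV ready for re-import. A sketch; the file contents and column names are illustrative, and real NCA would typically use a dedicated package:

```python
import csv
import io

# Exported concentration-time data for one subject (illustrative values).
export = io.StringIO(
    "sample,time_h,conc_ng_ml\n"
    "M-12,0,0.0\n"
    "M-12,1,12.0\n"
    "M-12,4,8.0\n"
    "M-12,24,1.0\n"
)

rows = sorted(csv.DictReader(export), key=lambda r: float(r["time_h"]))
t = [float(r["time_h"]) for r in rows]
c = [float(r["conc_ng_ml"]) for r in rows]

# AUC(0-24) by the linear trapezoidal rule.
auc = sum((t[i + 1] - t[i]) * (c[i] + c[i + 1]) / 2 for i in range(len(t) - 1))

# Shape the derived parameter for re-import into a PK-parameters table.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["sample", "auc_0_24"])
writer.writeheader()
writer.writerow({"sample": "M-12", "auc_0_24": round(auc, 1)})
print(out.getvalue())
```

The re-import of `out` is an ordinary CSV import into the derived-parameters result table, audit reason and all.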

Tips

Imports run an audit trail

Every bulk import creates an audit event with the file name, the row count, the operator, the timestamp and the audit reason. The original file is also attached to the result batch (or to the registration event) so you can re-derive everything later.

Pre-validate big files

For imports over 10 000 rows, click Validate without committing. Dalea runs the full validation pass and produces a conflict report without writing anything. Fix the source file, then run for real.
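The same three checks Validate runs server-side — required columns, enum allow-lists, uniqueness — are cheap to dry-run locally on a big file first. A sketch with hypothetical rules and contents:

```python
import csv
import io

# Hypothetical validation rules for the target table.
required = {"display_id", "sex"}
enums = {"sex": {"F", "M"}}

# The file to pre-validate (two deliberate problems).
upload = io.StringIO(
    "display_id,sex\n"
    "AN-101,F\n"
    "AN-102,X\n"   # not in the enum allow-list
    "AN-101,M\n"   # duplicate display_id
)

reader = csv.DictReader(upload)
conflicts = []
missing = required - set(reader.fieldnames)
if missing:
    conflicts.append(("missing_column", sorted(missing)))

seen = set()
for line, row in enumerate(reader, start=2):  # line 1 is the header
    for col, allowed in enums.items():
        if row[col] not in allowed:
            conflicts.append(("enum", line, col, row[col]))
    if row["display_id"] in seen:
        conflicts.append(("duplicate", line, row["display_id"]))
    seen.add(row["display_id"])

print(conflicts)
```

Fixing these two rows in the source file first means the real Validate pass comes back clean.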

What's next