# Dalea Wiki — Full Corpus > Concatenated markdown for every page in the Dalea Wiki. Generated at build time. # Welcome What Dalea is, who it is for, and how the four sites fit together. --- ## What is Dalea? Path: /welcome/what-is-dalea Summary: A unified research orchestration platform for life science. **Dalea is a research orchestration platform for life science.** It unifies the four software stacks every modern lab needs — an electronic lab notebook, a structured data warehouse, an inventory and sample tracker, and an AI assistant — into a single collaborative workspace with one identity, one permission model, and one audit trail. The platform is built on the assumption that **every result must be reproducible**. Each protocol has explicit reagents, hazards and timing. Each measurement is anchored to the animal, plate well, or sample it came from. Each schema change carries an audit reason. Data is exportable in open formats and the platform is EU Data Act compliant and EU-hosted. ## Why this exists European BioPharma alone spends roughly **€23 B per year on research that cannot be replicated**. Critical experimental context — buffer recipes, assay timings, animal weights — lives in lab notebooks that nobody else can read, in spreadsheets on personal computers, or in tools whose vendors went out of business. Dalea fixes this by giving every result a structured home from day one, then lets your team work in it like a notebook. ## What you can do in a Dalea workspace ## The single mental model Everything in Dalea sits in one of these layers:
A **workspace** is the unit of collaboration and the data boundary. You can belong to many. Inside a workspace, work is organised into **projects** (a study, a campaign), which group documents, files and structured data. Tables, environments and inventory all live at the workspace level so they can be referenced from any document. A typical pre-clinical IND-enabling team might run one Dalea workspace per programme:

- **Workspace** "IND-128 Discovery Team"
- **Projects** inside it: Lead optimisation, Mouse PK, Tox screen, IND submission
- **Environments**: a shared In-vivo PK schema reused across studies
- **Inventory**: freezer L-204, plus carousel-fed compound stock
- **Templates**: an IACUC submission form locked to your institutional language

The same researcher may also belong to a "Method dev" workspace and a "Manuscripts" workspace — switching is one click in the sidebar.

## What's next

---

## The four domains

Path: /welcome/the-four-domains
Summary: How dalea.app, dalea.market, dalea.wiki and dalea.tech fit together.

Dalea is delivered through four sibling sites. They share a single design language and a single identity model, but each has a distinct purpose.

- **dalea.app** — The product. Sign in, create documents and data, run studies. The app you live in day-to-day.
- **dalea.market** — The community marketplace. Browse, publish and review reusable templates, environments, blocks and bundles.
- **dalea.wiki** — This site. Documentation, tutorials, concepts, and a machine-readable corpus for LLMs.
- **dalea.tech** — The marketing site. Pricing, the company, blog posts, demos.

## How they connect

```
┌────────────┐
│ dalea.tech │  Discover Dalea
└──────┬─────┘
       ↓
┌───────────────────────┐
│       dalea.app       │  Your workspace, documents, data
└─────┬───────────┬─────┘
      │           │
      │ Publish   │ Look things up
      ↓           ↓
┌──────────────┐  ┌────────────┐
│ dalea.market │  │ dalea.wiki │
└──────────────┘  └────────────┘
```

Templates and packages flow **out** of dalea.app into dalea.market when you publish. Documentation and machine-readable context flow **into** dalea.app from dalea.wiki via in-product help links and via the AI assistant's system prompt.

## Identity is shared

Your Dalea login is a single account that works everywhere. You sign in once at [dalea.app](https://dalea.app); the marketplace and wiki recognise you through the same OAuth issuer. For enterprise customers with a dedicated tenant, the same identity bridges across your own subdomain too — so a researcher with `@acme.com` mail signs in once and is recognised on the wiki, the marketplace and the app.

## Where to go from here

---

## Who Dalea is for

Path: /welcome/who-its-for
Summary: From individual researchers to regulated enterprise labs.

Dalea is built for **anyone whose work depends on reproducible biology** — from a graduate student analysing flow cytometry data, to an IND-enabling team coordinating a GLP tox study, to a CRO that needs to deliver structured datasets to its sponsor.

## By role

## By stage

## What Dalea is not

It is not a LIMS in the sense of a fixed schema sold per assay. The schema is yours, designed in the app, and reusable across studies. It is also not a protocol-only tool; your protocols stay co-located with the data they generated. If you're a single bench scientist who just wants a smarter Word document, Dalea will feel like overkill at first — but the moment your second result lands and you want to ask "what was the IC₅₀ for batch 4?", you will be glad it was structured from day one.
The fastest way to know whether Dalea fits is to run the [Your first PK/PD study](/tutorials/your-first-pk-study) tutorial. It walks through a realistic mouse PK study end to end and only takes about 15 minutes. --- # Get started From sign-up to your first reproducible result in under fifteen minutes. --- ## Create your account Path: /get-started/create-your-account Summary: Sign up by email, OAuth or passkey. You can sign up for Dalea in under a minute. There is no credit card and no email allow-list; the free tier is intended for individuals and small teams to use indefinitely.

Open dalea.app/register. Enterprise customers with a dedicated tenant should use the URL provided by their administrator.

Three options, all valid:

  • Email + password — universal, requires email verification.
  • OAuth — Google, GitHub, or Microsoft. Fastest. Click a provider, approve.
  • Passkey — WebAuthn. Pair your device on first sign-in; subsequent sign-ins are biometric only.

You'll get a one-time link by email. Click it to land back in the app, signed in.

The first time you sign in, Dalea automatically creates:

  • your Personal organisation (free tier)
  • a default workspace inside it (named "Personal" — you can rename)
  • a welcome document nudging you towards the Learn hub
Passkeys eliminate password phishing entirely and they sync via your platform keychain. On macOS that's iCloud Keychain; on Android, Google Password Manager; on Windows, Windows Hello. Add a passkey from Settings → Security.

## What if your team uses SSO?

If your organisation has been provisioned with an academic or enterprise tier, your admin will share a **provisioning code**. Enter it during sign-up to land directly in the org's first workspace with the right role pre-assigned. SSO via SAML/OIDC follows the same flow but skips the password step.

## Two-factor authentication

Optional but recommended on shared lab accounts. Configure TOTP from Settings → Security → Two-factor authentication. Recovery codes are single-use and shown only once — store them in a password manager.

## What's next

You're signed in. Next, set up your first workspace and invite your team:

---

## Your first workspace

Path: /get-started/your-first-workspace
Summary: What is auto-created, naming and inviting your team.

A **workspace** is the unit of collaboration in Dalea. It contains your documents, your data schemas, and your inventory. Members of a workspace see and can operate on everything in it according to their role.

## Rename the default workspace

When you sign up, Dalea creates a workspace called "Personal". Most teams immediately rename it to a study or programme code: click the workspace name in the sidebar header → Settings. Pick something specific. "IND-128 Discovery" beats "My workspace". The icon and colour appear in the sidebar workspace switcher and in the breadcrumb bar — visual differentiation matters once you join three or four workspaces. A one-line description ("Lead optimisation for the EGFR programme; PK and tox leads.") tells future members who belongs in this workspace and why.
## Invite your team

Workspaces support custom roles, but five built-ins cover most labs:

You can only invite someone to a workspace if they are already a member of the parent organisation (or get invited to the org at the same time). For internal teams this is automatic via SSO; for external collaborators, your admin first invites them to the org, then to specific workspaces.

## Pre-flight checklist

Before you start filling the workspace with content, take 90 seconds to:

- **Decide who has each role.** Owner is for one or two people; Editor for everyday scientists; Data Engineer for whoever is allowed to change the schema.
- **Decide what your projects will be.** Projects (which we'll create in a moment) group documents and files. Keep them coarse — usually one per study or campaign.
- **Pick a naming convention for studies.** Something like `TARGET-INITIALS-NNN` (e.g. EGFR-AB-007) helps you find things in three years.

## What's next

---

## Your first document

Path: /get-started/your-first-document
Summary: Open the editor, type, embed a chart.

Documents in Dalea look like a notebook page but they are made of **structured blocks**. A paragraph is a block. A protocol step is a block. A 96-well plate map is a block. This means everything you write is automatically searchable, referenceable from other places, and — when you publish a template — reusable.

## Create the document

From any page, click your workspace name. You land on the workspace home with quick-access cards for Documents, Data, Inventory, Templates. Pick a project to drop it into (or "Unsorted"). Give it a title — something operational like `Plasma collection — IFN-γ kinetics, mouse #M-12` is more useful than "Notes". The editor is plain — type Markdown-style or just write. `#` at the start of a line becomes a heading. Type `/` to open the slash menu. Try inserting a protocol step block. Fill in the duration, the reagents, the hazard level. Add another step.
These will live in the document but also be queryable by Dalea's AI assistant. Here is what a small protocol looks like once you've added a few protocol-step blocks. Click each step to "run" it — Dalea timestamps the action and logs the operator:
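Conceptually, a protocol-step block pairs fixed protocol fields with a run log. The sketch below is illustrative only — the field names (`body`, `duration_min`, `hazard`, `run_log`) are assumptions for the example, not Dalea's actual block schema:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Illustrative sketch — field names are assumptions, not Dalea's block schema.
@dataclass
class ProtocolStep:
    body: str                       # the instruction text
    duration_min: int               # expected duration of the step
    reagents: list[str]             # reagents the step references
    hazard: str = "none"            # e.g. "none", "biohazard", "sharp"
    run_log: list[dict] = field(default_factory=list)

    def run(self, operator: str) -> None:
        # "Running" a step timestamps the action and records the operator.
        self.run_log.append({
            "operator": operator,
            "at": datetime.now(timezone.utc).isoformat(),
        })

step = ProtocolStep(
    body="Collect plasma via tail vein into EDTA tube",
    duration_min=5,
    reagents=["EDTA tube", "heparinised capillary"],
    hazard="sharp",
)
step.run(operator="anya")
print(len(step.run_log))  # → 1
```

The point is the shape: because the step's definition is data rather than free text, an assistant can query it, and each run appends an operator-stamped entry instead of overwriting anything.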
## Embed something other than text Protocols are just one of two dozen block types. Some you'll reach for first: ## Multiplayer is on by default Anyone in the workspace with edit permission can open the document at the same time. You'll see their cursor and selection in real time. There is no "save" button — every keystroke is synced live and added to the version history. To revert, click Version history in the document header. Use it liberally; restoring a version is reversible. ## What's next --- ## The five-minute tour Path: /get-started/the-five-minute-tour Summary: Sidebar, command palette, chat, notifications. A whirlwind through the surfaces you'll touch every day. None of these are mandatory — the app works fine if you only use the sidebar — but they each save real time. ## The sidebar The left rail. From top to bottom: - **Workspace switcher** — your current workspace, click to change. Workspaces are grouped by organisation in the dropdown. - **Workspace tabs** — Home, Documents, Templates, Data, Inventory, Search. Each is workspace-scoped. - **Recent documents** — quick links to what you opened last. - **Global links** — Learn (the in-product tutorial hub), Marketplace, Bug report. Collapse it with the chevron at the bottom; on mobile it becomes a hamburger menu. ## The command palette Hit ⌘K (Mac) or Ctrl K (Linux/Windows) anywhere in the app. The command palette is bimodal: - **Plain keywords** become a search across documents, data records, templates and files in the current workspace. - **Action-oriented queries** ("create a new document", "show all templates") open the AI assistant. Pressing Shift Enter from the palette always opens the chat with the current query as the first message. ## The AI chat panel A side panel toggled by ⌘/ or the sparkle icon. The assistant has the context of your current workspace and document. Ask it things like: - "Summarise the latest run of the IFN-γ ELISA assay and flag any outliers." 
- "Find all mice in study DLA-7 with a baseline weight under 22 g." - "Open the protocol for plasma collection." Destructive actions (creating, updating, deleting) always pop a confirmation card before the assistant proceeds. You can pre-approve specific action types. ## Notifications The bell in the page header. You'll see workspace invitations, mentions (when someone @-tags you in a document or comment), and platform notices. A red dot indicates unread. ## The chat *with* a document Drag any document onto the chat panel to "dock" it side-by-side. You can edit the document and ask the assistant questions about it at the same time, without the panel covering your work. ## What's next You've now seen the four most-used surfaces — sidebar, palette, chat, and the notification bell. Time to build something real: --- # Concepts The mental model behind every screen in Dalea. --- ## The Dalea hierarchy Path: /concepts/the-hierarchy Summary: User → Org → Workspace → Project → everything else. Almost everything you'll do in Dalea is anchored to a particular workspace, and almost every page in the product follows the same hierarchy. Understanding it once removes a lot of "where did that go?" later.
## The four hierarchy layers ## The artefacts inside a workspace ## Two practical consequences If you wouldn't be comfortable having a person see *everything* in a workspace, they don't belong in it. Spin up a separate workspace instead. Workspaces are cheap. Putting a document in a project does not hide it from other workspace members. Projects are an organisational tool, not a permission boundary. If you need real isolation, use roles or a separate workspace. ## What's next --- ## Workspaces Path: /concepts/workspaces Summary: The collaboration unit and the data boundary. A **workspace** is the most important concept in Dalea. It is simultaneously: - a **collaboration unit** — the set of people who can see and operate on the work, - a **data boundary** — schemas, records and inventory in one workspace are invisible to another, - a **storage quota carrier** — quota and feature flags are issued to the workspace via its parent organisation's tier. ## Anatomy of a workspace Inside a workspace you'll find these sub-pages, each accessible from the sidebar: ## When to make a new workspace Make a new workspace when **either**: - the people change (an external CRO needs limited access; a manuscript review group shouldn't see in-progress data); **or** - the data boundary changes (a new programme starts and you don't want it polluting the search index of an older one). A workspace is cheap to create — give it a name and it exists. Don't be precious about keeping the workspace count low. ## When to keep one big workspace Keep work in **one** workspace when team and data are the same and you want to: - reference the same inventory of reagents across studies, - reuse the same data environment (e.g. a single "In-vivo PK" schema across multiple programmes), - search across all running studies in one query. A common pattern in biopharma: **one workspace per programme**, with projects inside the workspace for individual studies. 
Compound libraries and reagent freezers often live in their own workspace and are referenced from study workspaces by ID.

## Switching workspaces

Click your workspace name in the sidebar header. The dropdown groups workspaces by organisation: your Personal org, then any orgs you belong to. The currently open workspace is highlighted; the most recently visited workspaces appear at the top. The keyboard shortcut ⌘⇧W opens the same dropdown.

## Workspace settings

`Settings → Workspaces → (workspace name)` exposes:

- **Details** — name, icon, colour, description
- **Members** — invite, change role, remove
- **Roles** — create or edit custom roles (in addition to the five built-in roles)
- **OAuth clients** — for connecting external apps via OAuth (e.g. Claude Desktop)
- **Audit trail** — workspace-scoped audit log review (if your role includes the permission)

## What's next

---

## Environments and data

Path: /concepts/environments-and-data
Summary: Tables, columns, objects, results, naming schemes.

Dalea has two ways to store information: **documents** (free-form, rich-text, collaborative) and **environments** (structured, schema-validated, queryable). Most labs use both. This page explains the structured side — environments and the five primitives that live in them.

## Why structure matters

A spreadsheet is structured. An ELN page is unstructured. The first lets you ask "what was the average plasma concentration at 4 h across all 3 mg/kg dose groups?". The second forces you to read. In Dalea you can have both: write up the experiment in a document, *and* have the underlying data be queryable in real time because it lives in an environment.

## The five primitives

## The lifecycle

Designing a schema in Dalea always follows the same six steps. Watch them auto-cycle or click any to inspect:
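The same lifecycle can also be read as plain code. This is an illustrative sketch under assumed names — not Dalea's API, and the step grouping is approximate:

```python
# Illustrative sketch — not Dalea's API; names and the step breakdown are assumptions.

# 1. An environment groups related tables.
environment = {"name": "In-vivo PK", "tables": {}}

# 2–3. A result table splits its columns into dimensions (query axes)
#      and measurements (the recorded values).
environment["tables"]["plasma_conc"] = {
    "kind": "result",
    "dimensions": ["animal", "timepoint_h", "dose_group"],
    "measurements": ["concentration_ug_ml"],
    "rows": [],
}

# 4. A naming scheme generates display IDs, e.g. ANM-{N} → ANM-001, ANM-002, …
def next_id(prefix: str, counters: dict) -> str:
    counters[prefix] = counters.get(prefix, 0) + 1
    return f"{prefix}-{counters[prefix]:03d}"

counters: dict = {}

# 5. Record results anchored to the objects they came from.
table = environment["tables"]["plasma_conc"]
table["rows"].append({
    "animal": next_id("ANM", counters),
    "timepoint_h": 4, "dose_group": "3 mg/kg",
    "concentration_ug_ml": 1.82,
})
table["rows"].append({
    "animal": next_id("ANM", counters),
    "timepoint_h": 4, "dose_group": "3 mg/kg",
    "concentration_ug_ml": 2.10,
})

# 6. Because dimensions are explicit, aggregate queries are one-liners.
rows = [r for r in table["rows"]
        if r["timepoint_h"] == 4 and r["dose_group"] == "3 mg/kg"]
mean_conc = sum(r["concentration_ug_ml"] for r in rows) / len(rows)
print(round(mean_conc, 2))  # → 1.96
```

The dimension/measurement split in step 2–3 is what makes the query in step 6 trivial: the axes you group by are declared up front, not inferred from column names later.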
## Entity vs result tables This is the only schema decision that occasionally trips people up. - **Entity tables** describe *things* — animals, plasma samples, plates. One row = one thing. They're optimised for lookup and reference. - **Result tables** describe *measurements* — concentrations, viability values, expression levels. They're optimised for analytical queries (group, aggregate, filter on dimensions). Result tables explicitly split columns into: - **Dimensions** — the axes you'll query and group by (animal, timepoint, treatment). - **Measurements** — the numeric values you actually recorded (concentration_ug_ml, body_weight_g, ct_value). This split is what lets you ask "mean concentration at 4 h, grouped by dose group" in one click. ## Naming schemes Most labs hate manually typing IDs. Naming schemes generate them for you. A scheme is a pattern with placeholders: - `ANM-{N}` → ANM-001, ANM-002, … - `SMP-{YYYY}-{N}` → SMP-2026-001 - `[Sex]{N:000}` → F001, F002 … Schemes can reset per table, per workspace, or never. They run as soon as you create an object — you almost never assign a display ID by hand. ## What's next --- ## Templates and locking Path: /concepts/templates-and-locking Summary: How a templated document differs from a free one. A **template** is a document blueprint that you (or someone else) reuses to start new documents. A template can be free — meaning the new document is just a duplicate that you can edit freely — or it can be **locked**, restricting which parts of the document the user is allowed to edit. Locking matters in two situations: 1. **Standard operating procedures** where the steps must run in a fixed order, with fixed reagents, but the operator fills in observations as they go. 2. **Regulated forms** (IACUC submissions, GxP procedure runs) where the language is institutionally approved and must not be changed without re-approval. ## The three lock states Try each — the demo below is fully interactive:
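A minimal sketch of the idea behind the lock states — the `lock` and `editable` fields here are illustrative stand-ins, not the stored lock format:

```python
# Illustrative sketch — the real lock configuration format is not shown here.
# A partially locked block whitelists the fields that stay editable.

def can_edit(block: dict, field: str) -> bool:
    state = block.get("lock", "free")          # "free" | "partial" | "locked"
    if state == "free":
        return True
    if state == "locked":
        return False
    return field in block.get("editable", [])  # partial: only whitelisted fields

step = {
    "type": "protocol_step",
    "lock": "partial",
    "body": "Centrifuge 10 min at 2,000 g",    # fixed, institutionally approved
    "editable": ["observed_value"],            # the operator fills this in
}

print(can_edit(step, "observed_value"))  # → True
print(can_edit(step, "body"))            # → False
```

This is the form-style pattern: the procedure text is frozen while the observation fields stay open.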
Behind the scenes, locking has three independent layers, all stored in the document's real-time state: `partial` is what makes form-style templates possible. A protocol-step block can have `body` locked but `observed_value` editable; a registration-table block can have the schema locked but allow appending rows. ## How a templated document differs from a free document When you create a document from a template, the lock configuration travels with it. Three things happen: - The editor's slash menu hides forbidden insertions when structure is locked. - The document's outline and block IDs are pre-populated; lock state travels with the document and is preserved through every edit. - The server independently re-validates every save against the lock config — client-side enforcement is for UX, server-side enforcement is for trust. ## Versioning Templates are versioned. Editing a template creates a new version (v1, v2, …) with an optional changelog. Documents track which template they came from and which version, so you can prompt users when a newer version is available. If you publish the template to dalea.market, each release is immutable; yanking marks a release as unsafe but doesn't delete it. ## What's next --- ## Roles and permissions Path: /concepts/roles-and-permissions Summary: The five built-in workspace roles plus custom ones. Dalea has two scopes of permission: **organisation roles** (control billing, member directory, workspace creation) and **workspace roles** (control day-to-day work inside a workspace). A user always has exactly one role per organisation and exactly one role per workspace they belong to. ## Organisation roles ## Workspace roles Five built-in roles cover most labs. You can also define custom roles with arbitrary combinations of permissions. ## How permissions actually work Internally, every action a user can take has a permission name (e.g. `MANAGE_INVENTORY_STRUCTURE`, `EDIT_DOCUMENTS`). 
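As an illustration, hierarchy-aware permission expansion could be sketched like this — the `IMPLIES` map uses permission names from this page, but the logic is an assumed sketch, not Dalea's implementation:

```python
# Illustrative sketch — permission names from this page; expansion logic is an assumption.
IMPLIES = {
    "MANAGE_INVENTORY_STRUCTURE": {"EDIT_INVENTORY"},
    "EDIT_INVENTORY": {"VIEW_INVENTORY"},
}

def expand(perms: set[str]) -> set[str]:
    """Granting a stronger permission implicitly grants its weaker neighbours."""
    out: set[str] = set()
    stack = list(perms)
    while stack:
        p = stack.pop()
        if p not in out:
            out.add(p)
            stack.extend(IMPLIES.get(p, ()))
    return out

role = {"MANAGE_INVENTORY_STRUCTURE"}  # a role is a set of granted permissions
print(sorted(expand(role)))
# → ['EDIT_INVENTORY', 'MANAGE_INVENTORY_STRUCTURE', 'VIEW_INVENTORY']
```

Because checks always run against the expanded set, a custom role that grants "edit inventory" automatically carries "view inventory" with it.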
Roles are bags of permissions, and the role hierarchy expands so granting a stronger permission implicitly grants its weaker neighbours: `MANAGE_INVENTORY_STRUCTURE → EDIT_INVENTORY → VIEW_INVENTORY`. This means custom roles can never accidentally elevate above what they grant — you can't grant "edit inventory" without also granting "view inventory". ## OAuth client roles When you create an OAuth client (e.g. for Claude Desktop) you assign it a workspace role. The actual effective permissions when the app makes a call are the **intersection** of the user's permissions and the app's role: if you are a workspace Owner but you connected Claude with the Viewer role, Claude can only read. ## What about platform admins? Platform-level administration (cross-org visibility, audit health monitoring) is handled exclusively by Dalea's own operations team. Customers — including enterprise tenants — never see or manage other organisations. ## What's next --- ## Glossary Path: /concepts/glossary Summary: Every Dalea-specific term, A to Z. Every Dalea-specific term in one place. Use the in-page search (⌘F). ## A — F ## I — N ## O — R ## S — Z --- # Daily use Search, shortcuts, notifications and the muscle-memory of using Dalea every day. --- ## Keyboard shortcuts Path: /daily-use/keyboard-shortcuts Summary: A complete reference for macOS, Windows and Linux. A complete reference for everything you can do without reaching for the mouse. Keys are written in macOS form (`⌘`); on Windows and Linux substitute `Ctrl`. 
## Global

| Action | Shortcut |
|---|---|
| Open command palette / search | `⌘K` |
| Open the AI chat panel | `⌘/` |
| Switch workspace | `⌘⇧W` |
| Create a new document | `⌘⇧Enter` |
| Create a new project | `⌘⇧P` |
| Toggle the sidebar | `⌘B` |
| Open keyboard-shortcut overlay | `?` (when not editing text) |

## Inside the editor

| Action | Shortcut |
|---|---|
| Open the slash menu | `/` at the start of a line |
| Open the @-mention menu | `@` anywhere in text |
| Bold / italic / underline | `⌘B` / `⌘I` / `⌘U` |
| Inline code | `⌘E` |
| Link | `⌘K` *(when text is selected — overrides the global palette)* |
| Heading 1 / 2 / 3 | `⌘⌥1` / `⌘⌥2` / `⌘⌥3` |
| Bullet / numbered list | `⌘⇧8` / `⌘⇧7` |
| Indent / outdent list | `Tab` / `⇧Tab` |
| Toggle block as quote | `⌘⇧Q` |
| Insert horizontal rule | `---` then `Enter` |
| Insert math equation (inline) | `⌘⇧E` |
| Toggle fullscreen on the focused block | `⌘⇧F` |
| Add a comment to the focused block | `⌘⌥M` |
| Open version history | `⌘⌥H` |

## In the command palette

| Action | Shortcut |
|---|---|
| Cycle through results | `↑` / `↓` |
| Open in current tab | `Enter` |
| Open in new tab | `⌘Enter` |
| Switch to AI mode | `Tab` |
| Send the typed query to the AI chat | `⇧Enter` |
| Close palette | `Esc` |

## In the AI chat

| Action | Shortcut |
|---|---|
| New line in your message | `⇧Enter` |
| Send message | `Enter` |
| Cancel the in-progress assistant reply | `Esc` |
| Approve a pending tool call | `⌘Enter` |
| Reject a pending tool call | `⌘Backspace` |
| Dock the current document onto the chat | drag the document tab onto the chat panel |

## In tables and the data table block

| Action | Shortcut |
|---|---|
| Move between cells | Arrow keys |
| Edit the focused cell | `Enter` or just start typing |
| Commit the edit and move down | `Enter` |
| Commit the edit and move right | `Tab` |
| Cancel the edit | `Esc` |
| Insert a row above / below | `⌘⇧↑` / `⌘⇧↓` |
| Insert a column left / right | `⌘⇧←` / `⌘⇧→` |
| Delete the focused row | `⌘Backspace` |

The `?` overlay shows a context-aware shortcut card. If your cursor is in the editor it shows editor shortcuts; if it's in a data table it shows table shortcuts; with no focus it shows global shortcuts.

## What's next

---

## Command palette

Path: /daily-use/command-palette
Summary: How search and AI mode share the same Cmd+K box.

The command palette is the single keyboard surface for getting around Dalea. It's a small box you summon with `⌘K`, and it's intentionally **bimodal** — the same box runs both full-text search and the AI assistant, and it picks between them based on what you type.

## How the modes work

When you type something the palette tries to detect what you want:

- **Search keywords** like `IFN-γ` or `DLA-7` or `M-12 plasma` — the palette runs a workspace search and shows ranked results.
- **Action-style queries** like *"create a new document"*, *"summarise the latest ELISA"*, *"open the protocol from yesterday"* — the palette switches to AI mode and routes to the in-product assistant.

The detection is a heuristic on the leading verb ("create", "open", "find", "compare", "summarise", "draft"…) plus question marks and length. When you don't agree with what it picked, you can override:

| Override | What happens |
|---|---|
| Press `Tab` | Force the palette into AI mode. |
| Press `⇧Enter` | Send the typed query to the chat panel as a new message. |
| Press `Enter` on a search result | Open it. |
| Press `⌘Enter` on a search result | Open it in a new tab. |

## What search returns

Workspace-scoped, full-text, and ranked.
You'll see results from: - **Documents** — by title, body text and block content - **Data records** — by display ID (`ANM-001`) or any text-typed column - **Inventory items** — by SKU, name, lot - **Templates** — by name and description - **Files** — by filename and uploader Each result shows its type icon, the workspace it belongs to (when you have several open), the breadcrumb (Project → Folder → name), and a relative timestamp. ## Filters The palette has a small inline filter row above the results. Click a chip to scope: - **Type** — documents, data records, inventory, files, templates - **Project** — limit to one project - **Environment** — limit to one data environment (only shows when relevant) - **Date** — Anytime, last 7 days, last 30 days, last 90 days Filters compose. For richer filtering, see the dedicated [Search syntax and filters](/daily-use/search-syntax-and-filters) page. ## What AI mode does When the palette detects an action query (or you press `Tab`), it switches to AI mode in-place. You'll see the box turn purple and the result list become a conversational reply. In AI mode: - Hit `Enter` to send the query and open the chat panel with the response. - Hit `Esc` to bail out and stay on the page you were on. - Hit `⇧Enter` to send the query AND keep the palette open for chained questions. If the assistant proposes a destructive action (creating, updating, deleting) it asks for confirmation in the chat panel — the palette does not run destructive actions by itself. ## Tips For most users, the palette beats the sidebar. Type the first few letters of any document, project, environment or template and you're there in two keystrokes. The palette only searches the workspace you're in. To find something in another workspace, switch first with ⌘⇧W, then search. ## What's next --- ## Search syntax and filters Path: /daily-use/search-syntax-and-filters Summary: Full-text plus filters by type, date, project and environment. 
Workspace search is full-text, ranked by recency and relevance, and supports a small set of filter chips and operators. This page is the reference for both.

## Where search lives

Three places, same engine:

- **Command palette** (`⌘K`) — quickest, best for "I know it's named X"
- **Workspace → Search** — dedicated page, good for browsing results with all filters open
- **In-product chat** — the AI uses search as one of its tools when answering questions like "find all documents about IFN-γ"

## Filter chips

| Chip | What it does |
|---|---|
| **Type** | Documents, data records, inventory, files, templates |
| **Project** | Limit to one project inside the workspace |
| **Environment** | Limit to one data environment (only shows when relevant) |
| **Date** | Anytime, last 7 days, last 30 days, last 90 days, custom range |
| **Author** | Limit to documents/records created by one workspace member |

Multiple chips compose with AND. So `Type=Documents` + `Project=DLA-7 PK` + `Date=last 7 days` returns only DLA-7 PK study documents touched this week.

## Operators in the query string

| Operator | Example | Effect |
|---|---|---|
| Quoted phrase | `"IND-enabling tox"` | Exact-phrase match. |
| `-term` | `kinase -competitive` | Excludes results containing the term. |
| `field:value` | `lot:24-119` | Searches a specific structured field. Available fields: `lot`, `id`, `author`, `project`, `tag`. |
| `before:date` | `before:2026-04-01` | Only results created or modified before that date (ISO format). |
| `after:date` | `after:2026-03-15` | Inverse of `before:`. |
| `tag:value` | `tag:elisa` | Match a tag (templates and packages support tags). |

Operators are case-insensitive and combine with AND. Add a leading `OR` before a clause to widen: `IFN-γ OR IL-6`.

## Realistic searches

```text
"baseline weight" Project=DLA-7 PK Date=last 30 days
```

> Find protocol mentions of "baseline weight" inside the DLA-7 PK study, modified
> in the last 30 days.
```text
lot:24-119
```

> Every record, document and inventory item that references antibody lot 24-119.
> Useful for tracing assay drift.

```text
EC50 -unsuccessful Type=Documents
```

> Documents mentioning EC₅₀ but not the word "unsuccessful". Useful when you have
> dozens of dose-response analyses and want to skip the failed ones.

```text
author:dr.anya Type=Templates Date=last 90 days
```

> Templates Dr. Anya created or updated in the last quarter.

## Result ranking

Results are ranked by a combination of:

1. **Relevance** — exact match in title beats body text; quoted phrases beat loose matches.
2. **Recency** — newer documents float up.
3. **Authorship** — your own work and your direct collaborators' work get a small ranking boost.

There's no global keyword stop-list — Greek letters, special characters and unit abbreviations all index.

## Tips

Most filters can also be set by clicking. After you run a search, click the result-type icon in any row to add that type as a filter chip.

Search is workspace-scoped. To search across workspaces, the AI assistant can help — ask it to "find all documents mentioning DLA-7 in any workspace I belong to" and it will iterate.

## What's next

---

## Docking documents with chat

Path: /daily-use/docking-documents-with-chat
Summary: Edit a document and ask the assistant about it side-by-side.

The AI chat panel is normally a side panel that you open with `⌘/`. By default it overlays whatever you were looking at, which is fine for quick questions but awkward when you actually want the assistant to help with the document you're editing.

**Docking** solves this. You drag a document tab onto the chat panel and it sits side-by-side: editor on the left, chat on the right. You can edit and ask in parallel, and the assistant always has full context of what's open.

## How to dock

The chat slides in from the right. At the top of any open document there's a small tab pill with the document title. A blue drop-zone appears across the panel.
Drag the pill onto the panel and release. Editor on the left, chat on the right. Both have full functionality.

## Why this matters

Three things that aren't possible without docking:

- **Edit while the assistant analyses.** Add a paragraph, then ask "is this consistent with the data table above?". The assistant sees the change.
- **Cite straight from the document.** Ask the assistant to refer to "the protocol step about plasma collection" and it knows exactly which block you mean.
- **Keep the chat history visible.** Long analytical sessions stay in view while you work, instead of vanishing behind a closed panel.

## Common patterns

## Undocking

Drag the chat tab off the panel, or press `⌘/` to collapse the panel back to its overlay form. Your conversation history is preserved.

## Tips

On large monitors the docked layout works at any width. On 13-inch laptops you get the best experience by collapsing the sidebar with `⌘B` first.

You can dock more than one document — they stack as tabs above the editor pane. The assistant will always assume the focused tab is the "current" document unless you reference another by name.

## What's next

---

# Editor & blocks

Authoring documents: the slash menu, every block type, multiplayer.

---

## Editor overview

Path: /editor/overview
Summary: What the editor produces and how it differs from Notion or Word.

The Dalea editor is a real-time collaborative document editor built specifically for life-science work. It produces documents made of **structured blocks** — paragraphs, protocol steps, plate maps, charts, code blocks, callouts. Every block is a typed, schema-validated unit, which means everything you write is also queryable, embeddable elsewhere, and amenable to AI tools.

If you've used Notion or Word, the operating model is familiar but the toolset is domain-specific.

## How Dalea differs

## The slash menu — your single entry point

Inside any document, type `/` at the start of a line. The slash menu opens.
Categories you'll see: - **Text** — heading, paragraph, list, quote, divider, section - **Data** — registration table, lookup table, data form - **Compute** — local spreadsheet, Python code, code block, chart - **Lab** — protocol step, protocol group, 96-well plate, reagent, equipment - **Media** — file embed, image, equation, callout ## Multiplayer and conflict-free sync Every keystroke syncs to anyone else with the document open. You'll see their cursors and selections in real time. When two people edit the same paragraph at the same time, both edits land — there is no "merge conflict" notion to interrupt your flow. There is no save button. There is also no auto-save delay; it's instant. ## Comments Hover any block; click the speech bubble. Comments live in the right margin, anchored to the block. They support threaded replies. Use them for review feedback, questions, or tagging colleagues with `@`. ## Version history Every document has an automatic version log. Snapshots are created on disconnect, and you can also create named manual snapshots ("Submission draft v3"). The diff viewer shows added, removed and modified blocks side-by-side. Restoring a version is non-destructive — it creates a new version on top. ## Templates A document can be saved as a template, optionally with a lock configuration that restricts what subsequent users may edit. See [Templates and locking](/concepts/templates-and-locking). ## What's next --- ## The slash menu Path: /editor/slash-menu Summary: Inserting any block, searching the catalog, the recents list. The slash menu is the single entry point for inserting any block in Dalea. Type `/` at the start of an empty line — or anywhere on a fresh line — and the menu opens. Type a few letters to filter, or browse by category. ## How to open it | Where | Trigger | |---|---| | Empty line | Just type `/`. | | Inside a paragraph | Press `Enter` to make a new line, then `/`. | | Inside a list item | Press `Enter` to break out, then `/`. 
| | Inside a section block | Same as a paragraph. | | Inside a data table or local spreadsheet | Slash is treated as text — the menu doesn't open. Insert blocks above or below the table instead. | ## Categories The menu groups blocks the way scientists actually think about them: ## Keyboard navigation | Key | Action | |---|---| | `↑` `↓` | Move between block types | | `→` | Drill into a category to see its blocks | | `←` | Back to category list | | `Enter` | Insert the focused block | | `Esc` | Close menu without inserting | | Any letter | Filter results — search matches block name and aliases | ## Filtering by keyword The menu's filter is fuzzy and matches both block names and well-known aliases: - Type `/elisa` → suggests **96-well plate** and **chart (dose-response)**. - Type `/code` → suggests **Python code** (executable) and **code block** (display only). - Type `/img` → suggests **image** and **file embed**. - Type `/proto` → suggests **protocol step** and **protocol group**. - Type `/eq` → suggests **equation** (LaTeX) and **equipment** reference. Aliases are workspace-shared. Workspace owners can add custom aliases for templates published in the workspace. ## Inserting from a template If your workspace has saved templates that are tagged "block-level", they appear under their own category in the slash menu. Inserting one drops the block (or group of blocks) at the cursor and respects any lock configuration the template came with — see [Templates and locking](/concepts/templates-and-locking). A common pattern: lab heads create a "Standard Protocol Step" template (with hazard, PPE and timing pre-filled to your institutional defaults) and the team inserts it via the slash menu instead of building each step from scratch. ## Tips The Recents section at the top of the menu remembers what you used last across your whole account, not per document. 
If you've spent the last hour adding chart blocks, the next document you open will surface chart at the top of the menu too. The slash menu is hidden inside templated documents whose structure is locked — otherwise users could insert disallowed blocks. Inside a structure-locked template, only the original block IDs are editable. ## What's next --- ## Block catalog Path: /editor/block-catalog Summary: All block types at a glance with examples. A high-level map of every block in Dalea, grouped by what kind of work it supports. Click any block name to jump to its detailed page (some pages are still being written in v1 — those are linked but flagged). ## Text and structure | Block | Purpose | |---|---| | Paragraph | Body text. Bold, italic, links, sub/superscript, alignment. | | Heading (H1–H6) | Section titles with anchor IDs for deep-linking. | | Bullet / numbered list | Procedures, ingredient lists, findings. | | Blockquote | Cited text or pull-quotes. | | Divider | Horizontal rule, optionally labelled (`Methods / Results`). | | Section | Collapsible container that groups blocks under one title. Supports nested sections. | ## Data and tables | Block | Purpose | |---|---| | **[Data table](/editor/blocks/data-table)** | Inline spreadsheet with typed columns, formulas, validation, frozen headers. | | Registration table | Spreadsheet that writes back to a data environment — register samples or animals from inside a document. | | Lookup table | Read-only view that queries a data environment and renders the result inline. | ## Visualisation and compute | Block | Purpose | |---|---| | **[Chart](/editor/blocks/chart)** | Line, scatter, bar, dot plot, histogram, box plot, heatmap, Kaplan-Meier survival, dose-response. Driven by inline data, a table in the same document, or a saved query. | | Local spreadsheet | Lightweight cell grid for ad-hoc calculation. Formulas; export to CSV. | | Python code | Run Python directly in the browser. Outputs as text, image, table or HTML. 
| | Code block | Syntax-highlighted code. No execution. Languages: Python, R, Julia, Bash, SQL, JS, MD, JSON, YAML. | ## Life science | Block | Purpose | |---|---| | **[Protocol step](/editor/blocks/protocol-step)** | A single step with duration, status, hazards, PPE, linked reagents/equipment. | | Protocol group | A container of protocol steps with a title and version. | | Reagent | Reference to a chemical/biological material. CAS, supplier, concentration. | | Equipment | Reference to lab instrumentation with current settings. | | 96-well plate | Plate designer. Assign samples; overlay result heatmaps. See the [PlateMap](#) example below. |
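The 96-well plate block manages, at heart, a fixed grid of well IDs with sample assignments. A minimal sketch in plain Python (names and IDs invented for illustration, not Dalea's API):

```python
from itertools import product

def make_plate_96():
    """Build an empty 96-well plate: rows A-H crossed with columns 1-12."""
    return {f"{row}{col}": None for row, col in product("ABCDEFGH", range(1, 13))}

plate = make_plate_96()

# Assign a duplicate pair down column 1 (hypothetical sample IDs).
plate["A1"] = "SMP-0001"
plate["B1"] = "SMP-0001"
```

A result heatmap overlay is then just a second mapping from the same well IDs to measured values.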
## Media and utilities | Block | Purpose | |---|---| | File embed / Image | Upload or link. Inline, block, or card layout. Recognises FASTA / FASTQ / GenBank / PDB / MOL / SDF / FCS / TIFF / ND2 / CZI. | | Equation | LaTeX, inline or display mode. | | Callout | Note / tip / warning / caution box. Optional title, collapsible. | | Mention | `@user` reference; sends a notification on insert. | ## What's next --- ## Protocol step block Path: /editor/blocks/protocol-step Summary: Build a lab procedure step with timing, hazards and reagents. The protocol step block is the building block of every lab procedure in Dalea. Each step carries enough metadata to be both human-readable and machine-queryable: how long it takes, what hazards it presents, what reagents and equipment it depends on, and whether someone has run it. ## What a step looks like Click any step's number to "run" it. Dalea logs the operator, the timestamp and any notes added during the run, producing an auditable execution record:
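Conceptually, each run produces a small structured log entry. A sketch with invented field names (not Dalea's actual schema):

```python
from datetime import datetime, timezone

def run_step(step, operator, notes=""):
    """Stamp the operator and timestamp, move the step to in-progress,
    and return the execution record for the audit trail."""
    step["status"] = "in-progress"
    return {
        "step_title": step["title"],
        "operator": operator,
        "started_at": datetime.now(timezone.utc).isoformat(),
        "notes": notes,
    }

step = {"title": "Coat plate", "status": "pending"}
record = run_step(step, operator="dr.anya", notes="Lot 24-119 used")
```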
## Anatomy of a step ## Linking reagents Inside the body of a step, type `@` to insert a reference. Dalea auto-completes from: - Reagents declared earlier in the same protocol group - Inventory items in the workspace (with their current lot and expiration) - Objects in any data environment (e.g. a specific antibody clone) References stay live: if the reagent's lot changes, the step's reference updates. ## Running a protocol When someone opens a document and clicks a step's "run" button, Dalea: 1. Stamps the operator (your user ID) and the current timestamp. 2. Optionally prompts for an audit reason in regulated tiers. 3. Updates the step's status from `pending` to `in-progress`. 4. Starts a wall-clock timer if the step has a duration. 5. On completion, records the actual elapsed time alongside the planned duration. Subsequent edits to a *completed* step are tracked: who modified what, when, and the optional reason. This is what makes Dalea suitable for GLP and 21 CFR Part 11 work. ## Example: full ELISA capture protocol The protocol used in the demo above is intentionally realistic. Here is the relevant fragment of the PK/PD study you'll build in the [main tutorial](/tutorials/your-first-pk-study): 1. **Coat plate.** 100 µL anti-IFN-γ at 2 µg/mL in PBS, overnight 4 °C. Hazard: low. 2. **Block.** Wash 3× PBS-T. 200 µL 1% BSA, 1 h RT. Hazard: none. 3. **Standards + samples.** 100 µL each, in duplicate, 2 h RT. Hazard: low. 4. **Develop.** TMB 100 µL, stop with 50 µL 2 N H₂SO₄. Read OD₄₅₀. Hazard: medium (acid). ## Authoring tip Use **protocol groups** to wrap your steps. The group block has a title and a version, and rolls up totals (cumulative duration, count by hazard level). It also collapses, which is invaluable on long protocols. ## What's next --- ## Chart block Path: /editor/blocks/chart Summary: Fifteen chart types with realistic assay data. 
The chart block renders any of fifteen chart types from inline data, a table in the same document, or a saved query against a data environment. It re-renders automatically when its source updates — handy for live study-status documents. ## A realistic example: PK time-course Three dose groups (3, 10, 30 mg/kg), plasma concentration over 24 hours, modelled with first-order absorption and elimination. Hover any legend chip to highlight a series:
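The curves in that demo follow the textbook one-compartment model with first-order absorption and elimination (the Bateman function). A sketch with illustrative parameter values, not the study's fitted ones:

```python
import math

def conc(t_h, dose_mg_per_kg, ka=1.5, ke=0.25, v_l_per_kg=2.0, f=0.8):
    """Plasma concentration (mg/L, i.e. ug/mL) at time t_h for oral dosing:
    first-order absorption rate ka, elimination rate ke (both per hour),
    volume of distribution v, bioavailability f."""
    pref = f * dose_mg_per_kg * ka / (v_l_per_kg * (ka - ke))
    return pref * (math.exp(-ke * t_h) - math.exp(-ka * t_h))

# One point per sampling timepoint for the 10 mg/kg group.
curve = {t: conc(t, 10) for t in (0.25, 1, 4, 24)}
```

Concentration scales linearly with dose in this model, which is why the three dose-group curves share a shape.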
## ELISA standard curve The chart block fits a 4-parameter logistic curve to your standards. Click any sample to project its OD reading onto the curve and read off concentration:
## Chart types ## Data sources A chart can be powered by any of: - **Inline data** — a small static array typed into the block. - **A table block** earlier in the same document. - **A saved query** against a data environment in the workspace. The query result rebuilds when the underlying records change, so the chart stays current. ## Configuration Common knobs: - Axes: linear / log; explicit limits or auto. - Series colour: per-series override or palette. - Error bars: column for `±value`, or auto from grouped data. - Legend position; gridlines; title. - Reference lines (e.g. LLOQ on an immunoassay). ## Fullscreen mode Click the expand icon to take any chart fullscreen — useful during meetings. Charts that have group/colour controls keep their controls in fullscreen. ## What's next --- ## Data table block Path: /editor/blocks/data-table Summary: Spreadsheet with formulas, units and validation. The data table block is an inline, typed spreadsheet. Unlike a free Markdown table, it has explicit column types, validation, formulas, and frozen headers — all the things you'd expect from a small Excel or Airtable view, but in your document. ## When to reach for it Use a data table when you want to capture or compute values **inside a document** (rather than persisting them to a workspace-level data environment). Typical uses: - Buffer recipes with concentration / volume / final-concentration columns - Daily body-weight readings for a small mouse cohort - Reagent-prep checklists with computed totals - Quick PK parameter calculations alongside narrative If the data needs to be queryable across documents, use a [Registration table](/editor/block-catalog#data-and-tables) (writes to an environment) or a [Lookup table](/editor/block-catalog#data-and-tables) (reads from one) instead. ## Example: reagent prep for an ELISA The component below is the **actual spreadsheet engine** Dalea uses inside the data-table block. 
Edit any cell — change a stock concentration, retarget a final volume — and the formulas in the **Vol stock (µL)** column will recompute live:
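Under the hood the Vol stock formula is the familiar C₁V₁ = C₂V₂ dilution relation. A plain-Python sketch (the function name is invented for illustration):

```python
def vol_stock_ul(stock_conc, final_conc, final_vol_ul):
    """Volume of stock to pipette so the target concentration is reached
    in final_vol_ul. Stock and final concentration share one unit."""
    return final_conc * final_vol_ul / stock_conc

# e.g. a 1000 ug/mL stock diluted to 2 ug/mL in 10 mL (10 000 uL):
v = vol_stock_ul(stock_conc=1000, final_conc=2, final_vol_ul=10_000)  # 20.0 uL
```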
Formulas in the data-table block follow the same conventions you'd expect from a spreadsheet: `=ROUND(...)`, `=SUM(...)`, cell references like `=B2*D2/A2`. Formulas can also reference other tables in the same document (`=Samples!B4`). ## Column types ## Validation Column-level validation runs as the user types. Invalid cells get an amber flag and a tooltip. The document remains saveable but the chart blocks downstream will warn that they're computing on partial data. ## Importing CSV Drag a CSV file onto the table or use the slash menu's "Import CSV". Dalea will auto-detect column types and ask you to confirm. The first row is assumed to be a header unless you toggle otherwise. For larger imports (≥ 10 k rows) use the [Bulk import](/data/designing-an-environment) flow into a data environment instead — inline data tables are sized for tens to a few hundred rows. ## What's next --- ## Registration table Path: /editor/blocks/registration-table Summary: Write structured data straight into a workspace environment from a document. A registration table is a spreadsheet **inside a document** that writes the rows you fill into a workspace data environment. It's the cleanest way to register samples, animals, or compounds without leaving your protocol notebook. ## When to use it Use a registration table when: - You're authoring a protocol or study document and the document itself is the natural place to capture the entities (animals, samples, compounds). - You want the row IDs (e.g. `ANM-001`) to land in the workspace's animals table immediately, ready to be referenced from a result batch later. - You want the audit trail to show the document as the source of registration. Don't use it when: - The data is one-off scratch work — use a [Data table](/editor/blocks/data-table) instead. - The data already exists in the environment and you just want to display it — use a [Lookup table](/editor/blocks/lookup-table). 
## How it differs from a data table A data table is **document-local**: rows live only inside the document. A registration table is a **write-through view**: every committed row creates an object in the underlying data environment table. The columns shown are exactly the columns of the target table, including reference and enum fields. | | Data table | Registration table | |---|---|---| | Row storage | In the document | In a workspace environment table | | Schema | Defined inline | Inherited from the target table | | Validation | Row-level | Inherited (column types, validation rules, naming schemes) | | Survives document delete | No | Yes — registered objects stay | | Query-able from elsewhere | No | Yes — queries, charts, the AI assistant all see them | ## Authoring one Type / in an empty line, pick Registration table from the Data category. A small picker appears. Select the environment (e.g. In-vivo PK) and the table (e.g. Animals). The block instantiates with the table's column definitions. Optional. By default all columns appear. You can hide columns you don't want your team to fill in this document — they'll stay nullable in the underlying table. Type values cell by cell. Display IDs (e.g. ANM-001) auto-generate when you commit a row, following the table's naming scheme. Enum and reference columns get drop-downs. Each row is registered when its required fields are valid. A green checkmark appears on the row; a red flag means a validation error (typically out-of-range values or required fields blank). ## A realistic example In a study protocol document for **DLA-7**, a registration table targets the `Animals` table. 
The user fills in 24 rows during pre-study briefing: | sex | strain | baseline_weight_g | study_group | |---|---|---|---| | F | C57BL/6 | 23.4 | GRP-1 (vehicle) | | F | C57BL/6 | 22.8 | GRP-1 (vehicle) | | F | C57BL/6 | 24.1 | GRP-2 (3 mg/kg) | | … | … | … | … | Dalea generates `ANM-001` through `ANM-024`, validates each weight against the 15–35 g range, and checks that every animal references an existing study group. The 24 animals are now queryable from any chart, lookup table, or saved query in the workspace. ## Locked registration tables A registration table inside a **locked template** typically pins: - The target table — so users can't redirect registrations elsewhere. - The visible columns — so the data shape is consistent across runs. - The validation rules — inherited from the target table, never bypassed. But leaves **row entry editable**, so each instantiation of the template captures fresh data. This pattern is the bread and butter of regulatory study templates: the structure of registration is fixed, the contents are filled in during the experiment. ## Tips A locked IACUC submission template can include a registration table for animals. Reviewers see the exact same form every submission; investigators fill it in and the entities flow into the institutional animal-tracking environment automatically. Deleting a row in the registration block soft-deletes the underlying object (it's archived, not destroyed). Result records that reference the archived object stay intact, with the reference flagged. Restoring the row also restores the references. ## What's next --- ## Lookup table Path: /editor/blocks/lookup-table Summary: Embed a live read-only view of a query in any document. A lookup table is a **read-only view of a query** embedded in a document. Unlike a data table (document-local) or a registration table (writes back), a lookup table just shows results from a saved query or an inline filter, and re-runs whenever the underlying data changes. 
## When to use it Use a lookup table when you want to: - Reference data that lives in an environment without copying it into the doc. - Show a live snapshot of "all animals in DLA-7 with baseline weight > 24 g" inside the protocol document so the team can see them while running the study. - Build a study summary that always reflects the freshest results. - Embed the output of a saved query inside any document. Don't use it when you need users to *enter* data — use a [Registration table](/editor/blocks/registration-table) instead. ## What you see Each row is a live link to the underlying object. Click any row to open the object's detail page in a side panel. Each column displays: - For reference columns, the display ID (`ANM-001`) plus a hover preview. - For enum columns, the enum label with the configured colour. - For numeric columns, the value with its unit (e.g. `23.4 g`). - For date columns, locale-aware formatting. The table re-renders within a second of any change to the underlying data. ## Three ways to populate it Best for reuse. Pick a saved query in the workspace; the block displays its columns. Updating the saved query updates every document that uses it. For one-off views. Pick an environment and table, then add filter chips (sex = F, dose_mg_per_kg = 3, etc.). The filter is stored in the block. Click any object's display ID anywhere in the document and choose "Show related objects". Dalea generates a lookup table of objects that reference (or are referenced by) the source object. ## A realistic example In a DLA-7 PK/PD study document: > **Animals receiving 30 mg/kg with baseline weight > 24 g** > > | animal_id | sex | strain | baseline_weight_g | study_group | > |---|---|---|---|---| > | ANM-019 | F | C57BL/6 | 24.1 | GRP-4 | > | ANM-022 | F | C57BL/6 | 25.6 | GRP-4 | > | ANM-024 | F | C57BL/6 | 24.8 | GRP-4 | This lookup table sits alongside the protocol so the bench scientist can see at a glance which animals match the inclusion criterion. 
If a fourth animal's weight is updated tomorrow, the table updates everywhere it's embedded. ## Configuration | Option | What it does | |---|---| | **Source** | Saved query, inline filter, or reference. | | **Visible columns** | Subset; hidden columns aren't loaded, so wide tables stay snappy. | | **Sort** | One or more columns, ascending or descending. | | **Row limit** | Cap to 10, 100, 1000 or all. Useful when the underlying table is huge. | | **Refresh policy** | Live (every change), manual (a refresh button), or pinned to a specific result-batch close event. | | **Empty-state text** | Custom message when zero rows match — useful in templated documents. | ## Locked lookup tables in templates When a template includes a locked lookup table, the **source** is typically pinned (the query, filter and visible columns can't be changed by users) but the **row data** is naturally fresh because it's read from the live environment. This pattern shows up in IND-enabling tox study templates: a locked lookup table titled "Animals on study" appears in every document spawned from the template, and always shows the current animals — no manual sync required. ## Tips A lookup table feeding a chart is the simplest way to put a live timecourse or group comparison into a document. Both blocks can share a saved query, so they update in lockstep. The lookup table records which version of the saved query produced its current view. If the query changes later, prior versions of the document still have a note in their version history saying which query state they were rendered against. ## What's next --- ## Real-time collaboration Path: /editor/real-time-collaboration Summary: Cursors, presence, comments, version history. Every Dalea document is multiplayer by default. There is nothing to enable, no "share" toggle — anyone in the workspace with the right role can open the same document and edit alongside you in real time. 
## What you'll see - **Cursors** — coloured carets with your collaborators' names hover where they are typing. Each user gets a stable colour for the session. - **Selections** — when someone selects text, you'll see the highlighted region in their colour. - **Presence indicator** — small avatar stack at the top right of the document shows who else is here right now. - **Connection status** — a tiny pill in the bottom right says "Synced", "Reconnecting…", or "Offline". Edits made while offline are queued and replay when you reconnect. ## How it works Conceptually, every edit is an atomic operation that commutes with every other edit. Two people can simultaneously bold the same word while a third deletes it, and all three operations land in a sane final state — no merge prompt, no overwrite, no lost work. Edits are broadcast to peers in real time and persisted to durable storage continuously. There is no save button and no debounce delay; synchronisation is instant. ## Comments Hover any block. The right margin shows a small "+" button. Click to add a comment; optionally `@`-mention a collaborator. The comment is anchored to the block — if the block moves, the comment moves with it. Comments support threaded replies and resolve/unresolve states. Resolved comments disappear from the margin but remain searchable. ## Version history Open `…` → `Version history` from the document header. - **Manual snapshots** — give a version a name like "Submission draft v3" or "Pre-IACUC review". They're lightweight and free; create them liberally. - **Auto snapshots** — Dalea creates one whenever the last collaborator disconnects, preserving "what we left off with". - **Diff viewer** — pick two versions; see added, removed and modified blocks side-by-side. Cell-level diffs for tables. - **Restore** — non-destructive. Restoring v3 onto v7 creates v8 = v3, preserving v7 in the history. 
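The restore semantics above can be modelled in a few lines of Python (a conceptual sketch, not Dalea's storage model):

```python
class VersionHistory:
    """Non-destructive restore: restoring v3 onto v7 appends v8 with
    v3's content; v4-v7 stay in the log untouched."""

    def __init__(self):
        self.versions = []  # list of snapshots, oldest first

    def snapshot(self, content, name=None):
        self.versions.append({"content": content, "name": name})

    def restore(self, index):
        old = self.versions[index]
        self.snapshot(old["content"], name=f"restore of v{index + 1}")

h = VersionHistory()
for i in range(7):
    h.snapshot(f"draft {i + 1}")
h.restore(2)  # restore v3 -> creates v8
```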
## Permissions and the read-only mode If your workspace role doesn't grant edit permission on documents, the editor opens in read-only mode: blocks render normally, but typing does nothing and the slash menu is hidden. Comments are still available to Commenters and above. If a document is created from a locked template, the lock configuration further restricts what even Editors can change. See [Templates and locking](/concepts/templates-and-locking). ## What's next --- # Data & inventory Designing schemas, recording results, tracking physical samples. --- ## Designing an environment Path: /data/designing-an-environment Summary: Walk through a real PK/PD study schema in mice. Designing the schema for a study is the highest-leverage hour you'll spend in Dalea. A schema that captures the right entities and relationships will let you ask arbitrary analytical questions for the next decade. A schema that doesn't will leave you exporting CSVs and joining things in Pandas forever. This page walks through the schema for a realistic mouse PK/PD study end to end. ## The study A 24-mouse single-dose PK study of a small-molecule kinase inhibitor (test article **DLA-7**) in C57BL/6 females. Three dose groups (3, 10, 30 mg/kg PO) plus vehicle. Plasma collected at 15 min, 1 h, 4 h, 24 h. Analyte is parent compound by LC-MS/MS. ## The schema Four entity tables and one result table:
## Step-by-step

**Create the environment.** Name it In-vivo PK, pick an icon, add an audit reason like "Initial schema for kinase-inhibitor PK studies." Audit reasons are mandatory in regulated tiers and recommended everywhere.

**Test articles.** The most upstream entity. Columns:
- `article_id` (text, primary key, generated by naming scheme `TA-{N}`)
- `name` (text)
- `modality` (enum: small-molecule, mAb, ASO, peptide, mRNA…)
- `lot` (text)
**Study groups.** Bridges test article to dose level. Columns:
- `group_id` (text, scheme `GRP-{N}`)
- `name` (text — "Vehicle", "DLA-7 3 mg/kg", …)
- `dose_mg_per_kg` (number)
- `route` (enum: PO, IV, IP, SC)
- `test_article` (reference → test articles)
**Animals.** The actual subjects. Columns:
- `animal_id` (text, scheme `ANM-{N:000}` → ANM-001, ANM-002…)
- `sex` (enum: M, F)
- `strain` (enum: C57BL/6, BALB/c, NSG…)
- `baseline_weight_g` (number, validation: 15–35)
- `study_group` (reference → study groups)
Note the validation: weights outside 15–35 g are flagged during entry — an out-of-range value is almost certainly a typo for an adult mouse.
**Samples.** One row per timepoint per animal. Columns:
- `sample_id` (text, scheme `SMP-{N:0000}`)
- `animal` (reference → animals)
- `timepoint_h` (number, allowed values: 0.25, 1, 4, 24)
- `collected_at` (datetime)
**PK results.** The shape is different: a result table splits into dimensions and measurements.
- Dimensions: `animal` (ref), `timepoint_h` (number)
- Measurements: `concentration_ug_ml` (number), `auc_0_24` (number), `cmax` (number), `tmax` (number)
Dimensions are what you'll group/filter by in queries. Measurements are the values you'll aggregate.
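That split can be made concrete with a toy aggregation over plain dicts (helper name and numbers invented, purely illustrative):

```python
from collections import defaultdict
from statistics import mean

# Toy result records: dimensions (animal, timepoint_h, dose) plus one measurement.
records = [
    {"animal": "ANM-001", "timepoint_h": 4, "dose": 3,  "conc": 0.8},
    {"animal": "ANM-002", "timepoint_h": 4, "dose": 3,  "conc": 1.0},
    {"animal": "ANM-003", "timepoint_h": 4, "dose": 30, "conc": 6.2},
    {"animal": "ANM-004", "timepoint_h": 1, "dose": 30, "conc": 9.1},
]

def mean_by(records, dimension, measurement, **filters):
    """Group on one dimension, average one measurement, after filtering
    on any other dimensions."""
    groups = defaultdict(list)
    for r in records:
        if all(r[k] == v for k, v in filters.items()):
            groups[r[dimension]].append(r[measurement])
    return {key: mean(values) for key, values in groups.items()}

# "Mean concentration at 4 h grouped by dose level":
by_dose_at_4h = mean_by(records, "dose", "conc", timepoint_h=4)
```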
## Why a result table — couldn't this be one big entity table? It could. But result tables get two things for free: - **Batched recording.** All four timepoints from one animal-day fit in one result batch with a single timestamp, operator and audit reason. - **Analytical query mode.** You can ask "mean concentration at 4 h grouped by dose level" without writing SQL. The dimension/measurement split is what makes that possible. ## Naming schemes recap | Table | Scheme | Generates | |---|---|---| | Test articles | `TA-{N}` | TA-1, TA-2, … | | Study groups | `GRP-{N}` | GRP-1, GRP-2, … | | Animals | `ANM-{N:000}` | ANM-001, ANM-002, … | | Samples | `SMP-{N:0000}` | SMP-0001, SMP-0002, … | Schemes can be more elaborate (`{YYYY}-{study}-{N}`) when traceability matters more than brevity. They're configurable per table; the counter resets per workspace by default but can be made global. ## What's next --- ## Recording results Path: /data/recording-results Summary: Cytokine ELISA: standard curve, replicates, dimensions. Once you have a schema, recording results is a small ritual: open a **result batch**, fill it in, and seal it. This page walks through that ritual using a realistic cytokine ELISA on plasma samples from the PK study. ## The example We are quantifying mouse IFN-γ in plasma collected from study DLA-7. A standard 8-point curve plus blanks and QC pools, samples in duplicate, OD₄₅₀ readout, fitted 4-parameter logistic. The plate layout looks like this:
## The standard curve After reading the plate, Dalea fits a 4-parameter logistic to the standards automatically. Each unknown is then back-calculated from its OD reading. Click a sample below to project its OD onto the curve:
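The fit-and-back-calculate step uses a standard 4-parameter logistic; here is a generic sketch of the curve and its inverse (parameter values illustrative, not Dalea's fitting code):

```python
def four_pl(x, a, d, c, b):
    """4-parameter logistic: response at concentration x.
    a = response at zero concentration, d = response at infinite
    concentration, c = inflection point (EC50), b = Hill slope."""
    return d + (a - d) / (1 + (x / c) ** b)

def back_calc(od, a, d, c, b):
    """Invert the 4PL to recover a concentration from an OD reading."""
    return c * ((a - d) / (od - d) - 1) ** (1 / b)

params = dict(a=0.05, d=3.0, c=250.0, b=1.2)  # illustrative fit
od = four_pl(100.0, **params)
conc_back = back_calc(od, **params)  # recovers ~100.0
```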
## Recording in Dalea

**Open a batch.** A batch is a recording session. It carries an operator, a timestamp, an optional audit reason and a status (open / closed / superseded).

**Pick the result table** (e.g. plasma cytokines) and confirm the dimensions you'll group by. For this assay: `animal`, `timepoint_h`, `analyte`, `replicate`.

**Enter the data.** Three options:
- **Inline grid.** Type values straight into a local spreadsheet grid.
- **Paste from the plate-reader CSV.** Dalea maps wells to sample IDs using your 96-well plate block.
- **Upload a Benchling export.** Dalea auto-detects the format and registers all linked objects in one operation.
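For the CSV route, the well-to-sample join can be sketched like this (file shape and IDs invented for illustration):

```python
import csv
import io

# Hypothetical plate-reader export plus a well-to-sample map as a
# 96-well plate block might provide it.
reader_csv = "well,od\nA1,1.92\nA2,1.88\nB1,0.41\n"
well_to_sample = {"A1": "SMP-0001", "A2": "SMP-0001", "B1": "SMP-0002"}

def map_wells(csv_text, well_to_sample):
    """Join reader rows to sample IDs; wells without a mapping are skipped."""
    out = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        sample = well_to_sample.get(row["well"])
        if sample:
            out.append({"sample": sample, "od": float(row["od"])})
    return out

mapped = map_wells(reader_csv, well_to_sample)
```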
Closing seals the batch — further edits create a successor batch and mark the old one superseded. This is how Dalea preserves data integrity without making typos painful: you correct by re-recording, never by overwriting.
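The seal-and-supersede lifecycle can be modelled in a few lines (a sketch of the concept, not Dalea's implementation):

```python
import itertools

class ResultBatch:
    """A closed batch is immutable; a correction is a new batch that
    marks its predecessor superseded rather than editing it."""

    _ids = itertools.count(1)

    def __init__(self, rows):
        self.id = next(self._ids)
        self.rows = rows
        self.status = "open"

    def close(self):
        self.status = "closed"

    def correct(self, new_rows):
        if self.status != "closed":
            raise ValueError("only closed batches are corrected by supersession")
        self.status = "superseded"
        return ResultBatch(new_rows)

b1 = ResultBatch([{"sample": "SMP-0001", "conc": 1.9}])
b1.close()
b2 = b1.correct([{"sample": "SMP-0001", "conc": 1.2}])  # typo fixed by re-recording
```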
## Querying results Once recorded, results are queryable from anywhere a chart, a lookup table, or the AI assistant lives: - Time-course per dose group: mean ± SD of concentration grouped by timepoint_h and the animal's study_group. - Outlier detection: samples whose duplicate-CV exceeds 20%. - Per-animal AUC: trapezoidal integration over timepoints, grouped by animal — drives the PK summary table. These all work without writing SQL. You configure them in the query builder, save them, and embed inside any document. ## Replicate handling Replicates are an extra dimension column. Recording duplicate samples means two records sharing all dimensions except `replicate = A | B`. Dalea's chart and aggregation engine collapses replicates by default but you can switch to "show all points" in any chart. ## Auditability Every result batch records: - the operator (your user) - the timestamp at create / update / close - the audit reason (mandatory in regulated tiers) - the source — manual / CSV import / Benchling / API - a hash of the source file when imported, for traceability Closing a batch is conceptually a digital signature. In 21-CFR-Part-11 mode it also asks for password re-authentication and records an explicit signing event. ## What's next --- ## Saved queries Path: /data/saved-queries Summary: Build, name and reuse cross-table queries without writing SQL. A **saved query** is a named, reusable question about your data — written once in the visual query builder, runnable everywhere. Lookup tables, charts and the AI assistant all consume saved queries, which means a single query update propagates across every place it's used. ## Why bother saving them Three reasons: 1. **Reproducibility.** "Mean concentration at 4 h grouped by dose group" is a specific question. Saving it under that name means everyone in the workspace can re-ask it later, exactly the same way. 2. 
**Surface area.** A saved query becomes available as the data source for chart blocks, lookup tables, and the AI assistant. Without saving, the same logic has to be re-built in each block. 3. **Update once.** Change the saved query — for example, add an outlier filter — and every chart and lookup table that uses it updates next time it loads. ## Two query modes Dalea distinguishes two analytical shapes: You pick the mode when you create the query. Result-mode queries are the ones that drive most science charts; traversal queries are how you populate lookup tables. ## Building a query The query builder is visual — no SQL required. The four editing panels are: Pick the environment and the starting table. For traversal, follow reference columns to join in related tables. For result mode, choose which columns are dimensions (group/filter axes) and which are measurements (aggregates). Add filter chips. Filter values can be literal (dose_mg_per_kg = 30) or parametric (dose_mg_per_kg = $dose) — parametric queries accept inputs at run time. Result mode: pick the aggregate per measurement (mean, median, SD, SEM, count, min, max, AUC). Add sort order. Optionally pin a row limit. The builder shows a live preview of the first 50 rows as you edit, so you can sanity-check before saving. 
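The AUC aggregate in the list above is trapezoidal integration over timepoints. If you want to sanity-check a chart offline, the rule is small enough to reproduce — a sketch, not Dalea's implementation:

```python
def trapezoidal_auc(timepoints, concentrations):
    """Area under the concentration-time curve by the trapezoid rule."""
    pairs = sorted(zip(timepoints, concentrations))
    auc = 0.0
    for (t0, c0), (t1, c1) in zip(pairs, pairs[1:]):
        # Each segment contributes its width times the mean of its endpoints.
        auc += (t1 - t0) * (c0 + c1) / 2
    return auc
```

For a 0/1/2/4 h series with concentrations 0, 10, 8 and 2 µg/mL this gives an AUC of 24 — computed per animal, this is the number behind the PK summary table.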
## A realistic example For the DLA-7 PK study, a typical saved query is: > **DLA-7 PK timecourse by dose group** > > Source: `In-vivo PK` environment, `PK results` table > Mode: result > Dimensions: `timepoint_h`, `study_group` (joined via the `animal` reference) > Measurements: `concentration_ug_ml` → mean, SEM > Filters: `study_group.test_article = $article` (parametric) > Sort: `timepoint_h` ascending This single query drives: - the timecourse chart block in the protocol document - a heatmap of concentration × dose × time embedded in the programme summary - the lookup table the bench scientist watches during a fresh ELISA run - whatever the AI assistant needs when you ask "summarise this study's PK" If you later add a `replicate` filter (`replicate is not null`) to exclude QC failures, every consumer picks up the change. ## Visibility scopes Saved queries are workspace-scoped by default but support three visibility levels: | Scope | Where it shows up | |---|---| | **Private** | Only you can see and run it. Useful for in-progress queries. | | **Workspace** | Visible to all workspace members. The default for shared queries. | | **Template** | Bundled with a template; spawning a new document from the template clones the query into the new workspace if needed. | Promoting a query from private to workspace doesn't change the query — it just makes it discoverable. ## Parametric queries A query becomes parametric when its filters reference variables (`$animal`, `$dose`, `$batch_id`). When run, the query asks for the variable values via: - A small input row at the top of the chart or lookup table - A direct prompt in the AI chat ("which study?") - An explicit value supplied by a parent document filter The same query, parametrised by `$study`, can power separate views per study without duplication. ## Tips "Mean concentration at 4 h, by dose group" is a useful name. "PK select join" is not. 
The name is what shows up in result lists and in the AI assistant's tool surface — make it self-explanatory. Ask the in-product chat: "Draft a saved query that returns mean ± SD body weight at each weekly timepoint, grouped by treatment group, in environment In-vivo PK." It will propose a complete query you can save with one click. ## What's next --- ## Bulk import and export Path: /data/bulk-import-and-export Summary: CSV, Benchling and plate-reader files in; CSV out for downstream analysis. Most labs already have data — in CSVs from instruments, in Benchling, in legacy spreadsheets, in plate-reader outputs. This page covers how to land that data in Dalea in bulk, and how to get data back out for downstream analysis in Python or R. ## What you can import | Format | Source | Lands in | |---|---|---| | CSV | Plate readers, qPCR cyclers, flow cytometers, manual exports | Result tables (with sample mapping) or entity tables (objects) | | Benchling export (`.zip` / `.json`) | Benchling ELN | Auto-mapped to environments and inventory | | Excel (`.xlsx`) | Legacy lab spreadsheets | Entity or result tables, with column-mapping prompts | | JSON | Custom pipelines, instrument software | Any table with a matching schema | Imports are atomic: either the whole batch lands or nothing does. Validation errors flag specific rows, the rest of the batch waits, and you decide whether to fix-and-retry or split. ## CSV import — bulk register objects Workspace → Data → environment → the entity table (e.g. Animals). Drag the file in or pick from your machine. Dalea reads the first row as headers by default — toggle if your file doesn't have a header row. Each CSV column maps to a column in the target table. Dalea pre-suggests mappings by name match. Set unmapped columns to "Ignore" or "Add as new column" if you want to extend the schema in flight. Reference columns (e.g. 
study_group → study groups table) need a value-matching rule: by display ID, by exact name, or by another unique column. Dalea previews how many rows match. The preview shows the first 50 rows post-validation, with green ticks and red flags. Add an audit reason ("Bulk register 24 animals for DLA-7 study, from receiving sheet") and commit. A 24-animal CSV typically lands in under a second. ## CSV import — record results into a batch The same flow applies to result tables, with two extra steps: - The import goes into a **result batch** (a recording session). Either pick an open batch or create a new one as part of the import. - For plate-reader exports, Dalea offers **plate-aware mapping**: drop in a 96-well CSV with OD readings and your plate map, and Dalea joins them by well position into per-sample rows automatically. Realistic scenario: a Tecan plate reader exports a CSV with 96 OD₄₅₀ values. You already have a `plate map` block in the protocol document that says A1 = standard 4000 pg/mL, A2 = standard 4000 pg/mL replicate, … H12 = sample M-12 t=24h replicate B. Dalea matches well to sample, fits the standard curve, and records back-calculated concentrations into the cytokines result batch — all from one upload. ## Benchling import If you have a Benchling subscription and an export, Dalea recognises the format and auto-maps: - Benchling **projects** → Dalea projects - Benchling **entries** → Dalea documents - Benchling **registries** → Dalea data environments (one entity table per registry) - Benchling **assay results** → Dalea result tables - Benchling **inventory** → Dalea inventory items The import runs as a background job; for big migrations (10k+ objects with cross-references) it can take several minutes. You'll see live progress and a summary report at the end. The objects keep their Benchling display IDs as aliases, so existing references in legacy documents resolve. 
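The plate-aware mapping described above is, at its core, a join on well position. A minimal sketch of that join — data shapes are assumed for illustration:

```python
def join_plate(od_by_well, plate_map):
    """Join instrument OD readings to sample identities by well position.

    Wells present in the export but missing from the plate map are
    returned separately so the import can flag them instead of silently
    dropping readings.
    """
    rows, unmapped = [], []
    for well, od in od_by_well.items():
        sample = plate_map.get(well)
        if sample is None:
            unmapped.append(well)
        else:
            rows.append({"well": well, "sample": sample, "od": od})
    return rows, unmapped
```

Surfacing the unmapped wells, rather than skipping them, mirrors how imports surface conflicts for explicit resolution.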
## Conflict resolution Imports surface a conflict report when: - A required column is missing in the CSV. - An enum value isn't in the target column's allow-list. - A reference doesn't resolve. - A row would violate a uniqueness constraint. For each conflict, Dalea offers four resolutions: - **Skip row** — ignore this row, continue with the rest. - **Add value** — extend the enum allow-list to include the new value. - **Edit value** — open an inline editor to fix the typo. - **Map differently** — re-run the column-mapping step. Resolutions are batched: pick once for "all rows with this issue" and Dalea applies it to every matching row. ## Export Every queryable surface in Dalea has an export button. Three formats: | Format | What you get | |---|---| | **CSV** | One row per record. Display IDs for references; ISO dates; numeric units in column headers. Best for Excel, Pandas, R. | | **JSON** | Full structured object including dimension/measurement split for result rows and any reference metadata. Best for programmatic re-import or downstream pipelines. | | **Parquet** | Columnar binary; large datasets only. Best for Spark or DuckDB. | Exports respect the current filters on the table or saved query. So if you filter to "study DLA-7, last 30 days, dose ≥ 10 mg/kg" and click export, you get exactly that slice. ## Bioinformatician pattern: round-trip analysis A common pattern: 1. Run the experiment in Dalea, capture results in a result batch. 2. Export the batch as CSV. 3. Run downstream analysis in Python (PK modelling, NCA, dose-response fits) or R (DESeq2, mixed-effects models). 4. Re-import the analysis output as a new result batch in a separate result table (e.g. `PK parameters` — `auc_0_24`, `cmax`, `tmax`, `cl_per_kg`). 5. A study-summary document can now embed both raw concentrations and derived parameters side by side. The audit trail records the export, the analysis script (if you upload it as a file in the result-batch metadata), and the re-import. 
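Step 3 of the round-trip can be as little as a few lines of Python over the exported CSV rows. A sketch deriving two of the parameters named above (`cmax`, `tmax`); the row shape assumes the export's column names:

```python
def pk_summary(rows):
    """Derive Cmax and Tmax per animal from exported concentration rows.

    `rows` are dicts shaped like the CSV export, e.g.
    {"animal": "M-12", "timepoint_h": 4, "concentration_ug_ml": 1.8}.
    """
    best = {}
    for r in rows:
        animal = r["animal"]
        c, t = r["concentration_ug_ml"], r["timepoint_h"]
        # Keep the highest observed concentration and its timepoint.
        if animal not in best or c > best[animal]["cmax"]:
            best[animal] = {"cmax": c, "tmax": t}
    return best
```

The output of an analysis like this is exactly what you would re-import as a new result batch in a `PK parameters` table.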
## Tips Every bulk import creates an audit event with the file name, the row count, the operator, the timestamp and the audit reason. The original file is also attached to the result batch (or to the registration event) so you can re-derive everything later. For imports over 10 000 rows, click Validate without committing. Dalea runs the full validation pass and produces a conflict report without writing anything. Fix the source file, then run for real. ## What's next --- ## Inventory fundamentals Path: /data/inventory-fundamentals Summary: Containers, items, lots, the lifecycle. Inventory in Dalea tracks everything physical: vials, plates, tubes, frozen aliquots, kits, columns. It is intentionally separate from the data system — your *records* live in environments, your *things* live in inventory — but the two connect through references, so a result row can point to "the specific aliquot of anti-IFN-γ used". ## Five concepts ## The lifecycle An item flows through these states. Each transition is logged with operator, timestamp and optional notes:

```
Created    →  Staged      →  Placed      →  Checked-out  →  Consumed    →  Discarded
(intake)      (labelled,     (in a          (in use,        (depleted)     (final)
              not yet        container)     not in
              placed)                       container)
```

`Staged` is the period after you receive a shipment but before you've found a home for it. `Checked-out` is what makes the item invisible to "what's available" queries without losing the audit trail. ## Container types: why position formats matter A container type's **position format** controls how items inside it are addressed: - **None** — bare list. "Reagent shelf L-204-A". Items have no positional info. - **Numeric grid** — 96-well plates (A1–H12), 9×9 cryoboxes (A1–I9), drawer slots. - **Custom** — arbitrary string positions ("rotor slot 3", "carousel cell X14"). Numeric grids let Dalea visualise containers as a heatmap and detect collisions ("you tried to place two items at A3").
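The lifecycle is a strict state machine. An illustrative transition table — the Checked-out → Placed return edge is an assumption (an item going back into a container); the rest mirrors the documented states:

```python
# Hypothetical encoding of the item lifecycle; not Dalea internals.
TRANSITIONS = {
    "Created": {"Staged"},
    "Staged": {"Placed"},
    "Placed": {"Checked-out"},
    "Checked-out": {"Consumed", "Placed"},  # return-to-container is assumed
    "Consumed": {"Discarded"},
    "Discarded": set(),                     # final state
}

def advance(state, new_state):
    """Move an item to a new state, rejecting illegal jumps."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state
```

Rejecting illegal jumps is what makes each logged transition meaningful in the audit trail.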
## SKU patterns Each item type has an optional SKU pattern, much like data naming schemes: - `AB-{lot}-{N}` → AB-24119-001 - `RX-{YYYY}-{N:0000}` → RX-2026-0007 The SKU is what gets printed on the label and read back at consumption time. ## Barcodes and labels Item types can opt into label printing. Dalea generates Code-128 or GS1-DataMatrix barcodes and prints to a configured network printer. On scan, Dalea opens the item's detail page directly — handy for a "scan to use" workflow. ## Low-stock alerts Item types can declare a low-stock threshold and a "watch" set of users. When the total available quantity (across all items of that type) falls below threshold, those users get a notification. Useful for centrally-managed reagents. ## Linking to records Inventory items can be **referenced** from data records. The reference column on a result row pointing to an item's lot means you can query "show me all results generated from antibody lot 24-119" — invaluable for tracing assay drift. ## Receiving and consuming in bulk Two batched flows exist: - **Receiving session** — open one when a shipment arrives, scan or paste in a list, Dalea creates items in the `Staged` state in one transaction. - **Consumption session** — for an experiment that uses many items at once. Open the session, scan items as they go onto the bench, close it when done. Dalea records check-out and quantity decrement against your audit log. ## What's next --- ## Receiving and consuming inventory Path: /data/inventory-sessions Summary: Bulk receive a shipment, bulk consume during an experiment, with full audit. A **session** is a bounded operation on inventory — receiving a shipment, consuming reagents during an experiment, transferring items between freezers. Sessions group dozens or hundreds of item-level changes under one transaction with one operator, one timestamp and one audit reason. If you're handling a single item, sessions are overkill — just edit the item directly. 
The moment you have ten or more, sessions save time and keep the audit trail honest. ## Three kinds of sessions ## Receiving session — a realistic example Sigma delivers 10 vials of an anti-IFN-γ capture antibody, lot 24-119. Pick the item type (antibody aliquot) and add the lot number once at session level — every item in the session inherits it. Add the purchase order or invoice number for traceability. Two ways:
- Scan — if the supplier ships labels with barcodes, scan each one. Dalea creates an item with auto-generated SKU.
- Paste a CSV — useful for bulk shipments. Columns: quantity, unit, expiration_date.
The session pane shows a running list; reorder, edit, or remove rows before committing.
All items go to the same starting container by default (e.g. cryobox B-12 in freezer L-204). Override per-item if you're spreading the shipment across boxes. Dalea checks for position collisions on numeric-grid container types and warns before you commit. Optional. Dalea generates Code-128 or GS1-DataMatrix labels for every item in one click. If your workspace has a configured label printer, they print directly; otherwise you get a PDF. "Sigma shipment 2026-04-26, PO #1234, anti-IFN-γ capture mAb 1 mg/mL, lot 24-119." Click commit. All 10 items land in the Staged state, all referencing lot 24-119, all in box B-12.
Total elapsed time: 1–2 minutes. Without a session this would be 10 separate item-creation events with the same metadata typed 10 times. ## Consumption session — running an experiment You're prepping the IFN-γ ELISA. You'll consume aliquots of the capture mAb, the detection biotin-mAb, the streptavidin-HRP, and the TMB substrate. Pick "for an experiment" and link the session to the protocol document or result batch (one or more — sessions can fan out). For each item, either scan its label or click pick from the freezer view. Dalea pre-fills the quantity to consume from the item type's typical-use setting; override per row. Each item's quantity decrements. If a quantity reaches zero, the item transitions to Consumed. Items with low-stock thresholds may fire alerts (see low stock). Now every result row in the linked batch can be queried by the lots used. "Show me all results that used antibody lot 24-119" returns the right rows in one click — invaluable when your assay drifts and you need to trace the suspect reagent. ## Transfer session — reorganising a freezer Less glamorous but the same shape. Pick the items (or a whole container of items, e.g. "everything in box B-11"), pick the destination (box B-12), confirm positions, commit. The audit log shows one event with N item moves and your optional reason ("freezer reorganisation, B-11 was full"). ## Low-stock alerts Item types declare a low-stock threshold (a count or a total quantity). When a consumption session pushes a type below threshold, Dalea: - Posts a notification to the watchers configured on the item type. - Optionally sends an email summary to a "core facility" group address. - Surfaces the type in the workspace home as "needs reordering". The threshold lives on the type, not the item, so it spans across lots and locations. ## Tips Linking a consumption session to a result batch is the cleanest way to keep "which reagent went into which result" auditable. 
The link is bidirectional: opening a result batch shows you the linked session, and vice versa. A receiving session is all-or-nothing. If validation fails on one row (a duplicate barcode, an invalid container position) the whole session waits for you to resolve. This is intentional — partial inventory states are how labs lose track of what they actually have. ## What's next --- # Templates & marketplace Author reusable templates and packages, publish to dalea.market, install from the community. --- ## Creating your first template Path: /templates-marketplace/creating-your-first-template Summary: Save a document as a reusable template; pick what gets locked. A template is a reusable document blueprint. Save a study protocol, an SOP, an IACUC submission form, or a registration sheet as a template, and your team spawns new instances from it without rebuilding the structure each time — optionally with parts of the document **locked** so the structure stays consistent across runs. This page walks the basic flow. For the deeper concept — what locking is and the three lock states — see [Templates and locking](/concepts/templates-and-locking). ## When to make a template Make a template when you find yourself copy-pasting an existing document to start a new one. Common cases in biopharma: - Standard study protocols (IACUC submissions, GxP procedure runs, SOP forms). - Result-recording forms with a fixed structure (a 96-well plate map plus a result table). - Programme kick-off documents with placeholder sections that every project fills in. - Registration intake forms (animal, sample, cell-line, antibody) that should use the same fields company-wide. Don't make a template for one-off documents or for things you'd be happy to keep editing freely. ## The flow Build the document the way you want every spawned instance to start. Add block-level placeholders ("Sponsor:", "Study code:") and any registration / lookup tables you want included. 
Don't add data that is specific to one run — the template will carry it forward. Document menu → Save as template. Pick a name, a category (Protocol, Form, SOP, Registration, Other), and a short description. The template lands in the workspace's Templates tab. Open the new template and click Lock configuration. You have three controls:
- Lock structure. Block order is frozen; users can't add or delete blocks but can edit their contents.
- Default text lock. Paragraphs and headings are read-only by default; users can only fill fields you mark editable.
- Per-block override. Mark individual blocks as editable, partial (some attributes editable) or locked.
For an IACUC submission you'd typically lock structure plus default text, then mark only the form fields as editable. For a study protocol you'd lock structure but leave text editable so investigators can add notes.
Click Preview as instance. The template opens with locks applied so you can confirm what your team will be able to (and not) edit. Adjust the lock config until it feels right. Click Publish. The template is now visible in the workspace template gallery and the slash menu.
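How the three controls combine per block can be sketched as a small resolution function — the names here are hypothetical; the real lock model lives in the template configuration, not in code you write:

```python
def effective_lock(default_text_lock, per_block_override=None):
    """Resolve a block's effective edit state in a spawned instance.

    A per-block override ('editable' | 'partial' | 'locked') wins;
    otherwise the template-wide default-text lock applies.
    """
    if per_block_override is not None:
        return per_block_override
    return "locked" if default_text_lock else "editable"
```

For the IACUC case above: default text lock on, with a per-block "editable" override on just the form fields.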
## Spawning a document from a template Anyone with edit permission on the workspace can: - Click Templates in the sidebar, pick the template, click Use template. A new document opens in the project of your choice. - Or in the / slash menu inside any document, expand the Templates category and pick one to insert as a sub-tree. The new document carries the lock configuration with it. Editing the template later does **not** retroactively change instances; instances are independent. ## Versioning Editing a published template creates a new **version**. Every version has a number (v1, v2, v3) and an optional changelog. Documents track which version they spawned from, so you can prompt users when a newer version exists ("v3 is available — review changes"). You can roll back: open the template's version history, pick an older version, click Restore. This creates a new version equal to the older one, preserving the audit chain. ## Sharing across workspaces Templates that are workspace-only stay inside one workspace. To share with others — a sister workspace, your whole org, or the public dalea.market — see [Publishing to dalea.market](/templates-marketplace/publishing-to-dalea-market). ## Tips First versions of templates should be loose (structure unlocked, no per-block locks). Watch how people use it for a sprint. Then tighten the lock config in v2 around the bits that need it. Locking pre-emptively is how you end up with templates nobody uses. A template's name shows up in the slash menu, in workspace template lists, and in the AI assistant's tool calls. IACUC submission v3 — Acme is a useful name. Form 7 is not. ## What's next --- ## Publishing to dalea.market Path: /templates-marketplace/publishing-to-dalea-market Summary: From workspace template to public package, with versioning. Once a template is stable in your workspace, you can publish it as a **package** on [dalea.market](https://dalea.market) so other teams (in your org or in the wider community) can install it. 
Templates are one of four package types you can publish: | Type | What it is | |---|---| | **Template** | A reusable document blueprint, with optional locking. | | **Block** | A single block (e.g. a configured 96-well plate, a complex chart) reusable across documents. | | **Environment** | A data schema — tables, columns, naming schemes — without rows. Shared as a starting point for studies that need the same shape. | | **Bundle** | Multiple of the above, shipped together. Common pattern: an environment plus the templates that write into it. | ## Scopes — your namespace on dalea.market Every package on dalea.market lives under a **scope**, written `@scope/name`. Scopes are like npm namespaces: Org scopes are the right choice for shared institutional templates (your IACUC template, your standard ELISA SOP, your cell-line registration form). Personal scopes are right for individual contributions and experiments. ## Permissions To publish under an org scope you need the **publish** permission on the org. Owners and admins have it by default; org admins can also assign a custom "publisher" role to specific members. To publish under a personal scope you need only your own account. ## The publish flow Templates → pick the one to publish. A dialog opens. Choose between your personal scope (@your-handle) and any org scopes you can publish under. If your org isn't listed, ask its admin to grant you publish rights.
- Package name — kebab-case, unique within the scope (iacuc-submission-acme).
- Display name — what users see (IACUC submission — Acme institutional).
- Description (short and long) — short shows in search results; long is the full package page.
- Tags — things like iacuc, protocol, in-vivo, mouse. Up to 10.
- License — pick from a dropdown (MIT, Apache-2.0, CC-BY, Proprietary).
- Homepage / repo — optional URLs for documentation or source.
Default is 1.0.0. Use semantic versioning: bump the third number (patch) for fixes, the second (minor) for non-breaking additions, the first (major) for changes that break documents that already use the package. A short summary of what's new. Markdown allowed. Dalea uploads the package and registers it on dalea.market. Within a few seconds it's discoverable on the public catalogue and via in-product package search.
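The semver rule is mechanical; sketched as a helper (illustrative — the publish dialog applies the same convention for you):

```python
def bump(version, kind):
    """Semantic-version bump: 'patch' for fixes, 'minor' for
    non-breaking additions, 'major' for breaking changes."""
    major, minor, patch = (int(p) for p in version.split("."))
    if kind == "major":
        return f"{major + 1}.0.0"
    if kind == "minor":
        return f"{major}.{minor + 1}.0"
    if kind == "patch":
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown bump kind: {kind}")
```

So a fix to 1.4.2 publishes as 1.4.3, a new optional section as 1.5.0, and a restructuring that breaks existing documents as 2.0.0.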
## After publishing Each published version is **immutable**. To "edit" a package you publish a new version (1.0.1, 1.1.0, 2.0.0…). Old versions stay listed and remain installable until you yank them. Maintenance you'll do over time: - **Updates** — publish new versions as your template evolves. Update the changelog so consumers know what changed. - **Reviews and discussions** — community members can leave 1–5 star reviews and start discussions on your package page. Respond promptly; high-quality packages have engaged maintainers. - **Stars** — track which versions are popular. Useful signal for what to invest in. - **Deprecation** — flag a package as deprecated with a message ("Use the newer @your-org/iacuc-submission-acme-v2 instead"). Existing installs keep working but new discovery is gated. - **Yank** — pull a specific version if it has a serious bug. Existing installs of that version are warned; the version stays in history but no longer installs. ## A realistic example Your institution wants every research group to use the same IACUC template: ``` Scope: @acme Name: iacuc-submission-mouse Display name: IACUC submission — mouse studies (Acme institutional) Tags: iacuc, protocol, in-vivo, mouse, regulatory License: Proprietary Homepage: https://wiki.acme.com/iacuc Description: Acme's institutional IACUC submission template for mouse studies. Pre-filled with our reviewer language; structure and default text are locked. Editable form fields cover study justification, animal counts, route-of-administration. ``` Publishing this once means every new programme inside Acme starts the same way. When the institutional language changes, you publish v2 — every team gets a prompt to migrate. ## Tips A bundle is the right shape when a template only makes sense alongside its environment. Example: the cell-line registry bundle ships an environment (Cell lines table with STR, authentication, freeze locations) plus the registration template that writes into it. 
Installing the bundle gets users both pieces in one click. Anything you publish under your personal scope is visible on dalea.market. For internal-only sharing between workspaces in the same org, use **workspace-to-workspace template duplication** instead — see your org admin. ## What's next --- ## Browsing and installing packages Path: /templates-marketplace/browsing-and-installing-packages Summary: Discover community templates, environments and bundles; install with one click. [dalea.market](https://dalea.market) is the community catalogue of templates, blocks, environments, and bundles. This page covers how to find a useful package and install it into your workspace. ## What's on the marketplace Packages broadly fall into four buckets: - **Institutional templates** — IACUC submissions, GxP SOP forms, IND procedures shared by universities, research institutes, and CROs. - **Assay environments** — pre-configured schemas for ELISA, qPCR, flow, cell-line registries, antibody libraries. - **Reusable blocks** — complex 96-well plate layouts, specific chart configurations, calculator widgets. - **Bundles** — environment + template combinations for whole workflows. Packages are scoped (`@scope/name`) and versioned (semver). Both scientists and bioinformaticians publish; institutional scopes (`@your-university`) are common. ## Browsing There are two surfaces: | Surface | Best for | |---|---| | **dalea.market website** | Open-ended browsing. Filter by type, tags, license, scope. Sort by stars, downloads, or recency. Read full README, reviews, discussions. | | **In-app marketplace** | Quick install when you know what you want. Sidebar → Marketplace. Same catalogue, narrower view. | The website also shows full package profiles — README, changelog, version history, reviews, discussions, contributor list, related packages. ## What to look at on a package page Before installing, skim: ## Installing From the website, you'll be redirected to dalea.app's import flow. 
From in-app marketplace, the install dialog opens directly. Packages install into a specific workspace. If you have several, pick the one where the template will be used. Latest is the default. Pin to an older version if you have a reason (e.g. you've evaluated 1.4.2 and don't want to pull untested updates). Templates create a workspace template. Environments create an empty schema (no data). Bundles create everything in the bundle. Dalea shows a preview before committing. Adds entries to your workspace. The new template / environment / blocks appear immediately, marked with their @scope/name@version so you remember where they came from. ## Updates When the maintainer publishes a new version, you'll see an update prompt next time you open the template or anywhere the package surfaces. Updating is opt-in: - **Update** — bumps your local install to the new version. Existing documents stay on the version they spawned from; only future spawns use the new version. - **Skip this update** — pins your install to the current version. - **Pin and notify on major updates only** — you'll see updates only when the major number bumps (1.x → 2.x). ## Forking Sometimes a package is 90% what you want. To customise: - **Fork to workspace** — copies the package into your workspace as a workspace-scoped template. From there it's yours; edits don't affect the source. Updates from the source can be merged manually if you want them. - **Fork to a new package** — make your customised version a new package under your scope. Useful if you've made institutional adjustments other members of your org want. Forking respects the source license. Always credit the original (link to it in your description). ## Stars, reviews, discussions These are the levers users have to give back to authors: - **Star** the packages you use regularly. It boosts their search rank and signals to other consumers that the package is valuable. - **Review** when you've used a package for a while. 
Be specific — "lock config too restrictive for our institution" is more useful than "didn't work for us". - **Discuss** when you have a question, find a bug, or want a feature. Most authors respond. In aggregate this is what makes dalea.market a marketplace and not a dump of files. Skipping the social loop is fine, but participating is how the catalogue improves over time. ## Tips 1. Org scope from a known institution. 2. Recent maintenance. 3. Reviews that read like real users (specific scenarios, not just "great"). 4. Star count. Stars alone can be gamed — the other three are harder to fake. Importing an environment package creates an empty schema. If you then bulk import data into that schema, you're locked into the column shapes the package author chose. Spend a minute reviewing the columns, validation rules and naming schemes before pouring real data in. ## What's next --- # AI assistant The in-product chat plus connecting Claude, Cursor or ChatGPT. --- ## AI overview Path: /ai/overview Summary: Two AI surfaces: in-app chat and external MCP clients. Dalea exposes AI capabilities through **two surfaces**, with different audiences and trust models. Knowing which one you're using matters: they have different security properties, different action sets, and different user experiences. ## The in-app chat Open it with ⌘/ or the sparkle icon. The panel is multi-turn, streaming, tool-using, and aware of: - the workspace you're in - the document you're currently viewing (if any) - your selection inside that document - your role and permissions It can do everything you can do — find documents, read data records, create new documents, append blocks, register objects, run saved queries, and compose follow-up questions about the results. Because it acts as your user, it can also do destructive things, which is why every destructive call is confirmed by a card in the chat before execution.
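That confirm-before-execute loop can be sketched as a gate in front of the tool dispatcher — names and shapes here are illustrative, not Dalea's internals:

```python
# Destructive tool names are taken from the approval-card examples
# in this page; the dispatcher itself is a hypothetical sketch.
DESTRUCTIVE = {
    "manage_documents:create",
    "data_objects:bulk_create",
    "placements:check_out",
}

def dispatch(tool, args, approved_types, ask_user):
    """Run a tool call; destructive tools require approval first.

    `approved_types` models the per-workspace "always approve this
    action type" grant; `ask_user` renders the approval card and
    returns True (approve) or False (reject).
    """
    if tool in DESTRUCTIVE and tool not in approved_types:
        if not ask_user(tool, args):
            return {"status": "rejected"}
    return {"status": "executed", "tool": tool}
```

Non-destructive reads pass straight through; only the destructive set ever blocks on you.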
### Approval cards When the assistant proposes to run a destructive tool (`manage_documents:create`, `data_objects:bulk_create`, `placements:check_out`, …), it pauses and emits an approval card. You see: - the tool and action - the arguments (resource name, payload summary) - buttons: **Approve once**, **Reject**, **Always approve this action type** The "always" toggle is per-workspace; you can revoke from `Settings → AI Approvals`. ### Models and providers Workspace admins configure which model providers are enabled: OpenAI, Anthropic, Google. Per user, you can pick a default model from `Settings → AI models`. There is also an on-device option (Gemma via WebGPU) which runs in your browser with zero network egress. ### Frontend tools A small set of tools execute in the browser without server confirmation: `navigate`, `open_search`, `show_document`. These can't change anything; they just steer your UI in response to a request like "open the IFN-γ protocol". ## The external MCP server Dalea's MCP server exposes your workspace as a set of tools that any MCP-compatible LLM can call. The flow is OAuth 2.1; the connection is HTTP-streaming. See [Connect Claude Desktop](/ai/connect-claude-desktop) for the step-by-step setup. External clients see roughly **70 tools** grouped by domain — documents, blocks, data, queries, inventory, files, search. Destructive actions (delete, archive, supersede) are deliberately *not* exposed via MCP; if Claude Desktop wants to delete a document, it has to ask you to do it in the in-app chat (which then confirms via an approval card). External clients are bound by: - **Workspace scope** — each token authorises exactly one workspace, enforced server-side on every call. - **Role intersection** — the OAuth client is created with a role; effective permissions are the intersection of your role and the client's role. - **Rate limits** — per-user, per-workspace token budgets. 
- **Audit** — every tool call is logged with operator, timestamp, args and a hash of the response. ## Privacy Tool inputs and outputs go to the LLM provider you configured. Dalea never sends data to a model unless you opted into a chat session that uses it. On-device models keep everything in your browser. ## What's next --- ## Connect Claude Desktop Path: /ai/connect-claude-desktop Summary: OAuth into your workspace and ask Claude about your data. Connecting Claude Desktop (or Cursor, ChatGPT-with-MCP, or any other MCP-compatible client) lets you ask an external LLM about your Dalea workspace using its full context window and tool-use loop. The setup takes about three minutes. The flow at a glance: 1. **Add server** in your client config: `claude mcp add --transport http dalea https://dalea.app/mcp` 2. **OAuth challenge** — your browser opens Dalea's authorisation page. 3. **Pick workspace** — select which workspace the client should see. 4. **Token issued** — short-lived bearer token with `mcp:read` and (optionally) `mcp:write`. 5. **Tool calls** start flowing — every call is rate-limited and audited. ## Prerequisites - A Dalea account with at least Viewer role on the workspace you want to expose. - Claude Desktop (≥ v1.5) or any other MCP client with HTTP transport support. - Your workspace URL — for cloud users this is `https://dalea.app/mcp`. Enterprise customers should use the URL provided by their administrator. ## Step-by-step Open Claude Desktop → Settings → Developer → MCP. Paste:
```bash
claude mcp add --transport http dalea https://dalea.app/mcp
```
Or edit your `~/.claude/claude_desktop_config.json` manually:
```json
{
  "mcpServers": {
    "dalea": {
      "transport": "http",
      "url": "https://dalea.app/mcp"
    }
  }
}
```
Restart Claude Desktop. Ask Claude something that requires a Dalea tool ("list my Dalea workspaces"). A browser window opens. Sign in to Dalea if you aren't already. The consent screen lists the scopes Claude is asking for: `mcp:read`, `mcp:write`. Pick the workspace you want Claude to see, and the role you want Claude to act as (default: your own role; you can downgrade to Viewer for a "look but don't touch" experience). Dalea issues a short-lived access token plus a refresh token; Claude stores both. From here on, every tool call carries this token; the server independently re-checks workspace scope and role on every call. Back in Claude Desktop, ask:
  • "List my Dalea workspaces."
  • "Find all animals in study DLA-7 with baseline weight under 22 g."
  • "Summarise the latest result batch in the cytokines table."
  • "Open the protocol for plasma collection."
Claude will invoke MCP tools (`data_objects.search`, `document_blocks.outline`, etc.) and stream tool results into its response.
## Tool catalogue (selected) Claude sees roughly 70 tools across these groups: | Group | Examples | |---|---| | Documents | `manage_documents.list`, `.get`, `.search`, `.export`, `document_blocks.outline` | | Data | `data_objects.search`, `data_queries.run_saved`, `result_data.query` | | Inventory | `containers.search`, `placements.list`, `items.get_info` | | Files | `manage_files.list`, `.get_info`, `.upload_from_url` | | Search | `search.unified` | | Marketplace | `packages.search`, `packages.get` | **Destructive actions are deliberately absent.** Creating a new document or deleting an item must happen in the in-app chat or in the UI, where you approve them. ## Security model ## Troubleshooting - **"Authentication required" loops.** Likely clock skew between Claude and your Dalea tenant. Check NTP. - **Tools not appearing.** Some clients cache the tool list. Quit and reopen the client. - **Slow first call.** A cold start of the MCP server can take ~1.5 s on small instances; subsequent calls are ~80 ms. ## What's next --- # Tutorials Long-form walk-throughs. For UI click-alongs, use the in-app Learn hub. --- ## Your first PK/PD study Path: /tutorials/your-first-pk-study Summary: End-to-end: schema → animals → samples → analysis. This tutorial walks an end-to-end Dalea workflow using a realistic mouse pharmacokinetics study: schema design → in-vivo phase → bioanalysis → reporting. Plan for ~15 minutes working through it with a free workspace open. The in-app **Learn hub** at [dalea.app/learn](https://dalea.app/learn) walks you through the UI step by step — schema, records, inventory — with overlays pointing at the actual buttons. The tutorial below is the longer reference companion: a narrative end-to-end study that goes deeper than a click-along can. The study we'll model: **DLA-7**, a hypothetical small-molecule kinase inhibitor. Single oral dose at 3, 10 and 30 mg/kg in C57BL/6 females, plus a vehicle control. Plasma collected at 15 min, 1 h, 4 h and 24 h.
Analyte is parent compound by LC-MS/MS; secondary readout is plasma IFN-γ by ELISA. A PK/PD study touches every part of Dalea: an authored protocol, a multi-table data schema, two recording modalities (LC-MS and ELISA), inventory check-in/out, and a final summary document. If this fits your lab in 15 minutes, anything will. ## Phase 1 — Schema design Build the schema described in [Designing an environment](/data/designing-an-environment).
Sidebar → Data → New environment. Name: In-vivo PK. Audit reason: "Initial schema for the kinase-inhibitor PK programme." Create the tables in order: Test articles, Study groups, Animals, Plasma samples. Use the column lists from Designing an environment. Dimensions: animal, timepoint_h. Measurements: concentration_ug_ml, auc_0_24, cmax, tmax. ## Phase 2 — Pre-study setup Data → Test articles → +. Name: DLA-7, modality small-molecule, lot DLA-7-2025-04. Dalea generates article_id TA-1. Data → Study groups: create Vehicle, 3 mg/kg, 10 mg/kg, 30 mg/kg. Route is PO. Each references the test article (vehicle references a placeholder "vehicle only" article). Data → Animals → Bulk import. Paste 24 rows of sex/strain/baseline-weight; assign 6 to each group. Dalea generates ANM-001 … ANM-024. The validation rule on weight (15–35 g) catches typos. Inventory → freezer L-204 → cryobox B-12. Right-click the antibody aliquot for your IFN-γ ELISA and Check out. The action records who took it, when, and decrements the quantity. ## Phase 3 — Author the protocol Create a document in your workspace called DLA-7 — Protocol. Add a **Protocol group** block titled "Plasma collection". Inside it, add four **Protocol step** blocks.
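The bulk import step also lends itself to scripting through the REST API's bulk pattern (arrays under `objects`, plus an `audit_reason` on every mutation). A sketch of building the payload; the column names and the exact endpoint path are assumptions, so check the OpenAPI spec at /api-docs before relying on them:

```python
# Build a bulk-create payload for the Animals table: 24 C57BL/6 females,
# 6 per dose group, with weights that pass the 15-35 g validation rule.
GROUPS = ["Vehicle", "3 mg/kg", "10 mg/kg", "30 mg/kg"]

def build_bulk_payload() -> dict:
    animals = [
        {
            "sex": "F",
            "strain": "C57BL/6",
            # hypothetical column name; spread weights within the 15-35 g rule
            "baseline_weight_g": round(19.5 + (i % 6) * 0.5, 1),
            "study_group": GROUPS[i // 6],  # 6 animals per group
        }
        for i in range(24)
    ]
    # Mutating calls take an audit_reason that lands in the audit log forever.
    return {"objects": animals, "audit_reason": "Enrolment for PK study DLA-7."}
```

POST the result to the Animals table's bulk-objects endpoint (look for `/bulk` paths in the spec) with your usual Bearer header.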
The protocol document serves three purposes: - a runbook the operator follows during the in-vivo phase - a search target ("when did we last anaesthetise with isoflurane at 4%?") - a regulatory artefact — version-pinned and signed at study close ## Phase 4 — Run the in-vivo phase This is the part Dalea can't do for you. With the protocol open: - Tick steps as you complete them. Dalea records timestamps. - Pop a `Plasma sample` row for each tube as you collect it (or batch-create at the end of each timepoint). By the end of day 1 you have 24 animals × 4 timepoints = 96 sample rows in the plasma samples table. ## Phase 5 — Bioanalysis Run the IFN-γ ELISA following [Recording results](/data/recording-results). Use the plate map below; standards in cols 1–2, blanks in col 3, QCs in col 4, samples in duplicate in cols 5–12 (4 timepoints × 2 mice per row pair):
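The plate map described above can be captured as a tiny helper for sanity-checking pasted values; this is illustrative only, not a Dalea API:

```python
def plate_role(col: int) -> str:
    """Role of a well on the 96-well ELISA plate by column (1-12), per the
    layout above: standards in cols 1-2, blanks in 3, QCs in 4,
    duplicate samples in 5-12."""
    if col in (1, 2):
        return "standard"
    if col == 3:
        return "blank"
    if col == 4:
        return "qc"
    if 5 <= col <= 12:
        return "sample"
    raise ValueError("column must be 1-12")
```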
Read the plate, paste the OD₄₅₀ values into a result batch. Dalea fits the standard curve and back-calculates concentrations:
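Conceptually, the back-calculation inverts the fitted standard curve. A sketch using a four-parameter logistic (4PL) model with made-up fit parameters; Dalea's actual fitting is internal, this only illustrates the maths:

```python
def backcalc_4pl(od: float, a: float, b: float, c: float, d: float) -> float:
    """Invert a 4PL standard curve y = d + (a - d) / (1 + (x / c) ** b)
    to recover concentration x from a blank-corrected OD reading y."""
    return c * ((a - d) / (od - d) - 1) ** (1 / b)
```

For example, with (made-up) fit parameters a=0.05, b=1.0, c=100.0, d=2.0, an OD of 0.7 back-calculates to a concentration of 50 in the standard's units.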
Repeat for the LC-MS run for parent compound. Each plate / instrument run becomes one result batch. Close the batches when the run is done. ## Phase 6 — Reporting Now the payoff. Create a document called DLA-7 PK summary. Add a chart block with `data source = Saved query` and the query: > Mean concentration grouped by `timepoint_h` and `study_group`, with SEM error bars. You get a publication-grade time-course in seconds:
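Under the hood that saved query is a grouped mean with SEM. A stdlib sketch of the aggregation, illustrative rather than Dalea's implementation:

```python
from collections import defaultdict
from math import sqrt
from statistics import mean, stdev

def timecourse(rows):
    """rows: dicts with study_group, timepoint_h, concentration_ug_ml.
    Returns {(group, timepoint): (mean, sem)}."""
    buckets = defaultdict(list)
    for r in rows:
        buckets[(r["study_group"], r["timepoint_h"])].append(r["concentration_ug_ml"])
    return {
        key: (mean(vals), stdev(vals) / sqrt(len(vals)) if len(vals) > 1 else 0.0)
        for key, vals in buckets.items()
    }
```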
Add a second chart block for IFN-γ kinetics. Add a lookup table that lists per-animal AUC, Cmax, Tmax (computed by Dalea's PK analysis preset). Finish with a callout summarising the study disposition (n animals, n samples, n unscheduled deaths). Anyone in the workspace can open the document; the embedded charts and lookup tables always reflect the freshest data because they read from saved queries. ## What you've built In ~15 minutes you've gone from an empty workspace to: - a versioned, queryable schema for in-vivo PK - 24 animals, 96 samples, ~96 LC-MS measurements, ~96 ELISA measurements - a runnable protocol with operator + timestamp records - a live study-summary document publishing PK and PD readouts Multiply that across studies and you can see why structured-from-day-one is worth the upfront discipline. ## What's next --- # Account Personal sign-in, security, and access from your devices. --- ## Passkeys and two-factor authentication Path: /account/passkeys-and-2fa Summary: Add passkeys, set up TOTP, store recovery codes. Two ways to make your Dalea account substantially harder to compromise: **passkeys** (the recommended default) and **two-factor authentication via TOTP** (useful when you sign in with email-and-password). ## Why bother Lab accounts are valuable targets — they grant access to compounds, animal welfare data, and IP. The threat model isn't sophisticated nation-state actors, it's commodity phishing kits and password reuse. Both passkeys and TOTP defeat those. ## Passkeys (recommended) A passkey is a public/private key pair stored on your device's secure keychain. Signing in is biometric (Touch ID, Face ID, Windows Hello, your phone's fingerprint sensor) — there's no password to phish or to forget. Modern OS keychains sync passkeys across your devices: a passkey added on a Mac shows up on your iPhone via iCloud Keychain; on Android via Google Password Manager; on Windows via Microsoft account. ### Adding a passkey Click Add a passkey. 
Your OS prompts you to confirm with Touch ID / Face ID / Windows Hello / your device's equivalent. Give the passkey a name: "Work MacBook" or "iPhone 15" makes the list readable later. That's it. Next time you sign in, pick **Sign in with a passkey** and your device handles the rest. ### Managing passkeys The same settings page lists every passkey on your account: name, the device that registered it, last used, and a delete button. Delete passkeys for devices you no longer have. Renaming is fine — it doesn't invalidate the key. ## TOTP (when you can't use passkeys) If your team is on email-and-password, add TOTP as a second factor. TOTP is the 6-digit code that rotates every 30 seconds in apps like 1Password, Google Authenticator, Authy, or your password manager. ### Setting up TOTP Click Enable TOTP and add the secret to your authenticator app of choice. The app stores the secret and starts generating codes. Enter the 6-digit code from your app to prove the setup worked. Dalea then shows ten one-time recovery codes. Save them in your password manager now. They're the only way back in if you lose your authenticator. ### What changes after enabling TOTP Every sign-in that uses your password now also asks for the rotating code. Sign-ins via OAuth (Google, GitHub, Microsoft) and passkey are unaffected — those already prove device possession. ### Recovery codes Recovery codes are single-use. Use one to sign in if you've lost your authenticator, then immediately disable and re-enable TOTP to get a fresh set. Treat them like the keys to the lab door — if someone has them they can sign in as you. ## Mixing both Using both a passkey **and** TOTP is valid and secure but generally unnecessary; passkeys already prove device possession. The mainstream recommendation is: - **Use passkeys** as your primary sign-in. - **Add TOTP** as a second factor only on accounts that still rely on passwords (legacy setups, certain SSO migrations).
## Recovery scenarios ## What's next --- ## Sessions and devices Path: /account/sessions-and-devices Summary: See where you are signed in, and remotely sign out. A **session** is a sign-in on one browser or app. Dalea tracks every active session on your account and lets you revoke any of them remotely. This page is how you stay on top of which devices are signed in as you. ## Where to find your sessions Settings → Security → Sessions. You'll see a list, ordered by most recent activity. Each row shows: | Field | What it means | |---|---| | Browser / OS | "Chrome on macOS 14" — useful for spotting unfamiliar devices. | | IP address | Approximate origin. Mobile carriers can show a different region than where your phone actually is. | | Last active | When this session last made a request. | | Sign-in method | Email-and-password, OAuth (Google, GitHub, Microsoft), or passkey. | | Auth strength | Whether this session was elevated by TOTP or a passkey at some point. | | **This is you** | Marker on your current session. | ## Signing out a specific session Click the X on any row. The session is invalidated immediately — the next request from that browser will be redirected to the sign-in page. There's no "sign out everywhere except here" toggle by design; revoke them individually so you confirm what you're disabling. You can't sign out your current session from the list — to do that, use the regular Sign out option in the user menu. ## When to be vigilant Three flags worth watching: - **A device you don't recognise.** Even if it shares your geography, an unknown browser fingerprint is worth investigating. Revoke and rotate your password (or remove the rogue passkey). - **A geography that doesn't match your travel.** A session active from a city you weren't in is a hard signal. Revoke immediately. - **A session active long after you forgot it.** If you signed in to demo Dalea on a colleague's machine three months ago, that session might still be there. 
Spring-clean the list every quarter. ## Auto-expiry Sessions don't last forever even if you never click revoke: - **Idle expiry** — sessions inactive for 30 days are deleted by Dalea. This is enforced regardless of your settings. - **Hard expiry** — the remember-me cookie behind a session has a maximum lifespan; eventually you'll be asked to re-authenticate. - **Sensitive-action elevation** — some actions (changing password, removing a passkey, viewing audit logs) require re-authentication even within an active session. ## API keys vs sessions Sessions are for browsers (and the desktop app, when that ships). For programmatic access — scripts, Python clients, server-to-server integrations — use **API keys** instead. They live in the same Settings area but on a separate tab; revoking a session doesn't touch your API keys. See the [Developers section](/developers/authentication) for the full picture. ## What's next --- # Admin Organisation administration, members, roles, and audit. --- ## Org members and roles Path: /admin/org-members-and-roles Summary: Invite, change role, remove. Org roles vs. workspace roles. Managing the people in your organisation has two layers: the **organisation role** (controls billing, member directory, and workspace creation) and the **workspace roles** (control day-to-day work inside each workspace). This page is for org admins doing the first; workspace-level membership is covered in [Workspaces](/concepts/workspaces) and [Roles and permissions](/concepts/roles-and-permissions). ## Organisation roles Three built-in org roles. Every member has exactly one. Org-level roles do **not** automatically grant any rights inside specific workspaces. An org Admin who isn't added to a workspace cannot see that workspace's data. Workspace membership is a separate decision. ## Inviting members Click **Invite member** and enter email addresses, one per line; up to 50 in a single invite. The default role is Member. Bump to Admin for IT staff and ops. Reserve Owner for one or two people.
A drop-down lets you select one or more workspaces and the role each invitee should get inside them. Saves a second click after they accept. Each invitee gets an email with an accept link. The invite expires in 14 days; resend or rescind from the same Members page. A common bulk-onboarding flow: invite 60 chemists at once, all as Members, all pre-assigned to a single "Chemistry" workspace as Editors. They each click accept once, set up a passkey, and they're working. ## Changing a member's role Same Members page, click the role pill next to a name, pick a new value. Changes apply immediately. Auditing records the actor (you), the target, the old role and the new role. You cannot change your own role. To rotate ownership, the current Owner demotes themselves while promoting another Owner — both happen in the same flow. ## Removing a member Same page, click the kebab menu → **Remove from org**. This: - Revokes all workspace memberships in this org. - Invalidates all sessions on this org's workspaces. - Soft-deletes their org record (recoverable for 30 days). - Is logged with operator, timestamp and reason. It does **not** delete their Dalea account; they can still sign in and access other orgs they belong to. For sensitive offboarding (terminated employee), pair this with: rotating any shared API keys, removing them from any OAuth client memberships, and exporting their last-90-days audit log. See [Audit logging](/admin/audit-logging). ## Workspace memberships at scale Most teams find this more useful than micromanaging org roles: - Make a few people **org Admins** — typically the IT lead and the ops manager. - Add everyone else as **org Members**. - Then run workspace memberships explicitly per workspace. For organisations on the **Enterprise tier with SSO**, group memberships from your IdP (Okta, Azure AD, Google Workspace) can map to workspace roles automatically. See SSO setup (P1, coming soon) when that doc lands. 
## Tips At least two people should be Owner of any org you depend on. If your sole Owner gets hit by a bus (or just leaves the company), you need someone with the keys. The cost is zero; the cost of getting it wrong is high. Sign up business members with their work email, not their personal Gmail. When someone leaves, IT can reclaim the work email; they can't reclaim a personal one. Sessions tied to personal emails outlive employment. ## What's next --- ## Audit logging Path: /admin/audit-logging Summary: Who did what, when, and how to export it for compliance. Every meaningful action in Dalea is recorded in an audit log. As an org admin or compliance officer, this is the page you'll come back to when you need to answer "who did what, when, why". ## What's logged The audit log captures every state-changing event in your org. A non-exhaustive list: - Sign-ins (success and failure), passkey enrolments, password changes, TOTP setups - Member invitations, role changes, removals (org and workspace level) - Workspace creation, deletion, settings changes, OAuth client management - Document creation, edits, version snapshots, restoration, deletion - Schema changes (table create, column add, naming-scheme change) — including the audit reason the user typed - Result batch open, edit, close, supersede — including e-signatures - Inventory item creation, container moves, check-out, consumption, discard - Bulk imports (file name, row count, source format) - Template publishes to dalea.market - API key creation and revocation - OAuth and MCP token issuance - Failed authorisation attempts (someone tried to do something they couldn't) Each event records: actor, timestamp, IP address, sign-in method, auth strength, target object, action type, before and after state (where applicable), and the optional audit reason. ## Where to find it Settings → Org → Audit log. Two views: | View | Use when | |---|---| | **Stream** | Live tail. Useful while investigating an incident. 
| | **Search** | Historical. Filter by actor, action type, target, date range. | Workspace-scoped admins see only their workspace's events. Org admins see events across all workspaces in the org. Cross-org events (member added to the org) are visible to org admins. ## Filters The search view supports: - **Actor** — pick a member, see everything they did. - **Action type** — sign-in, schema change, result-batch close, API key issue, etc. - **Target** — one document, one inventory item, one environment. - **Date range** — last 24 h, 7 days, 30 days, or custom. - **Outcome** — succeeded, failed, denied. Useful for spotting probing activity. - **Sign-in method** — sessions only, API keys only, MCP tokens only. Filters compose. A typical compliance query: "All schema changes by Editor-role members in the last 90 days." Filter by action type = `schema.*`, member role = Editor, date range = 90 days. ## Exporting Compliance audits, regulatory submissions, and SIEM ingestion all want the audit log in machine-readable form. | Format | Best for | |---|---| | **CSV** | Spreadsheet review, ad-hoc analysis. | | **JSON Lines** | SIEM ingestion (Splunk, Datadog, Elastic). | | **Signed bundle** | A `.zip` with the JSONL plus a signature file proving the export hasn't been tampered with. Required for some regulated audits. | Export is filter-aware — you get exactly the slice the search view is showing. ## Retention The retention policy is org-tier dependent: | Tier | Hot retention (queryable) | Cold retention (export-only) | |---|---|---| | Free | 30 days | – | | Pro | 90 days | – | | Academic / Enterprise | 180 days | 10 years | **Hot** retention means the events are queryable in the in-app log. **Cold** retention means events older than the hot window are still exportable as a signed bundle but no longer surface in the live view. This split is what lets labs answer 21 CFR Part 11 / GxP "show me the audit trail from 2031" without keeping decade-old records hot. 
## Audit reasons Many destructive actions in Dalea (schema changes, result-batch closes, inventory discards, document deletions) prompt for an **audit reason** when performed. The reason is free text but captured permanently in the audit log. Encourage the habit: - "Adding `metabolite_id` column to support PK study DLA-7-Phase-2." - "Closing batch with re-recorded standards after pipette calibration." - "Discarding lot 24-088 — past expiry plus QC failure." Specific reasons make audits much faster and reduce the chance you have to re-derive context two years later. ## E-signatures (Enterprise tier) On Enterprise, you can require an e-signature on closing a result batch or finalising a document. The user is asked to re-authenticate (with passkey or password+TOTP) at that point; the signature is bound to the action and recorded in the audit log alongside the event. This is what unlocks 21 CFR Part 11 compliance. Enterprise customers should enable e-signature requirements on their result-recording workflows during onboarding. Detailed setup is on the Enterprise admin docs (P1, coming soon). ## Privacy The audit log is **not** end-user-visible. Members see their own activity in a limited form (their own sign-ins, their own API keys); the full org log is admin-only. Access to the audit log is itself audited — every export event is logged. ## Tips If you suspect a security issue, start with outcome = denied for the target user or IP. A pattern of denied requests just before a successful one is the signature of a probing attempt. Even if your org is on Free or Pro tier, exporting a monthly CSV to your own storage gives you durable audit history beyond the hot retention window. Three minutes a month, indefinite history. ## What's next --- # Developers REST API, SDK, MCP for tool builders, and integration recipes. --- ## Authentication Path: /developers/authentication Summary: OAuth, API keys, MCP tokens — when to use each.
Dalea offers three authentication paths for programmatic access. Each is optimised for a different scenario: | Method | Best for | Lifetime | Initiator | |---|---|---|---| | **OAuth 2.1** | Apps acting on behalf of an end user (browser apps, desktop apps that already speak OAuth) | Long-lived (refresh-token) | User clicks Authorise | | **Workspace API key** | Server-to-server scripts and integrations (cron jobs, ETL pipelines, internal tools) | Lives until you revoke it | Workspace member with admin permission | | **MCP token** | LLM tool clients (Claude Desktop, Cursor, custom MCP-aware agents) | Short-lived bearer with auto-refresh | User clicks Authorise (OAuth-flavoured) | Pick whichever **matches the actor**. If a person is in front of the screen, use OAuth or MCP. If a script runs without anyone watching, use an API key. ## OAuth 2.1 — apps acting as a user Use when your application authenticates end users and acts on their behalf. Standard OAuth 2.1 flow: ``` ┌──── Your app ────┐ ┌──── Dalea ────┐ │ │ /authorize │ │ │ Browser ──────────────────────► │ consent page │ │ │ │ │ │ │ ?code=... │ │ │ Browser ◄───────────────────── │ │ │ │ │ │ │ Server ──────────────────────► │ /token │ │ │ exchange code │ │ │ │ ◄──────────── │ access+refresh └──────────────────┘ └───────────────┘ ``` Standard scopes: - `openid profile email` — identity - `offline_access` — get a refresh token - `mcp:read` — call read-only MCP tools (only relevant if your app is also an MCP client) - `mcp:write` — call destructive MCP tools (rarely granted to apps; usually in-app chat only) Set up an OAuth client in Settings → Workspaces → OAuth clients. You'll get a `client_id` and `client_secret`; the secret stays on your server only. ## Workspace API keys — for scripts When a script runs without a user, an API key is the right choice. ### Creating one Pick the workspace the key should access (one workspace per key). 
A key gets one of the same five workspace roles as human users — Owner, Data Engineer, Editor, Commenter, Viewer. Pick the most restrictive role that gets your job done. A nightly read-only export only needs Viewer. Beyond role, you can pin the key to a subset of actions (read documents, write to environment X). Useful for least-privilege scripts. Choose an expiry: 30 days, 90 days, 1 year, or never. Default is 90 days; rotate proactively. Dalea displays the key string exactly once. Store it in your secret manager immediately — there's no way to retrieve it later. ### Using a key Send it as a Bearer token on the standard `Authorization` header:

```bash
curl https://dalea.app/api/v1/workspaces/$WORKSPACE/documents \
  -H "Authorization: Bearer $DALEA_API_KEY"
```

```python
import requests

resp = requests.get(
    f"https://dalea.app/api/v1/workspaces/{ws}/documents",
    headers={"Authorization": f"Bearer {api_key}"},
)
resp.raise_for_status()
for doc in resp.json()["items"]:
    print(doc["title"])
```

```ts
const resp = await fetch(
  `https://dalea.app/api/v1/workspaces/${ws}/documents`,
  { headers: { Authorization: `Bearer ${apiKey}` } },
);
const { items } = await resp.json();
```

### Revoking Same Settings page. Revocation is immediate. If a key is compromised, revoke it first, then issue a new one — never the other way round. ## MCP tokens — for LLM tool clients If you're building an MCP-aware client (a Claude Desktop alternative, a custom agent), use the MCP OAuth flow. It's structurally OAuth 2.1 but tokens are short-lived and auto-refresh; access is bound to one workspace and one role. See [MCP for tool builders](/developers/mcp-for-tool-builders) for the full flow and example client. ## Choosing per scenario ## Security notes - **Never commit secrets.** Use environment variables and a secret manager. - **Scope down.** Owner is rarely the right key role. Most scripts are fine with Viewer or Editor. - **Rotate.** Set keys to 90-day expiry.
Have a rotation job in your secret manager rather than rotating by hand. - **Audit.** Every API call is logged with the key's identity, the actor it represents, and the action. See [Audit logging](/admin/audit-logging). ## What's next --- ## REST API quickstart Path: /developers/rest-api-quickstart Summary: Make your first call; link out to the live OpenAPI spec. This page gets you from zero to a successful API call. For the complete operation catalogue, see the **live OpenAPI spec at [`/api-docs`](/api-docs)** — that's the authoritative reference for every endpoint, schema and example. ## Base URL Cloud: `https://dalea.app`. Enterprise dedicated tenants: substitute your tenant URL. All public endpoints live under `/api/v1/`. Stable; backwards-compatible within the major version. ## Authentication Every call needs a Bearer token. The fastest path: Settings → Security → API keys → New key. Pick a workspace and a role. See Authentication for the complete picture, including OAuth and MCP tokens. 
```text
Authorization: Bearer dalea_xxxxxxxxx
```

## Your first call List the documents you can see:

```bash
curl -sS "https://dalea.app/api/v1/workspaces/$WORKSPACE_ID/documents?limit=10" \
  -H "Authorization: Bearer $DALEA_API_KEY"
```

```python
import os, requests

WORKSPACE = os.environ["DALEA_WORKSPACE_ID"]
KEY = os.environ["DALEA_API_KEY"]
resp = requests.get(
    f"https://dalea.app/api/v1/workspaces/{WORKSPACE}/documents",
    headers={"Authorization": f"Bearer {KEY}"},
    params={"limit": 10},
)
resp.raise_for_status()
for d in resp.json()["items"]:
    print(d["id"], d["title"])
```

```ts
const resp = await fetch(
  `https://dalea.app/api/v1/workspaces/${process.env.DALEA_WORKSPACE_ID}/documents?limit=10`,
  { headers: { Authorization: `Bearer ${process.env.DALEA_API_KEY}` } },
);
if (!resp.ok) throw new Error(`Dalea API ${resp.status}`);
const { items } = await resp.json();
items.forEach((d: any) => console.log(d.id, d.title));
```

## Response shape Successful list responses follow a standard envelope:

```json
{
  "items": [ /* array of resources */ ],
  "total": 42,
  "next_cursor": "cmd2..." // present when more pages exist
}
```

Single-resource reads return the resource object directly (not wrapped). ## Pagination List endpoints support cursor pagination. Pass the previous response's `next_cursor` as the `cursor` query param to get the next page.

```python
import requests

def paginate(url, headers, params):
    while True:
        r = requests.get(url, headers=headers, params=params)
        r.raise_for_status()
        body = r.json()
        yield from body["items"]
        if not body.get("next_cursor"):
            return
        params["cursor"] = body["next_cursor"]
```

For very large pulls, set `limit` to its maximum (typically 200) to reduce round-trips. Don't loop without a cursor — there's no fallback offset pagination.
## Filtering and sorting Most list endpoints accept filter and sort query params: ```text ?project_id=proj_123&updated_after=2026-04-01T00:00:00Z&sort=updated_at:desc ``` Allowed filter and sort fields are documented per-endpoint in the OpenAPI spec. Apply them at the API layer rather than fetching everything and filtering client-side — it's faster and respects rate limits. ## Errors Standard HTTP semantics: | Status | What it means | Typical fix | |---|---|---| | `400` | Bad request — malformed input | Read the `error.message`; fix the payload. | | `401` | Missing or invalid token | Check the Bearer header; rotate the key if expired. | | `403` | Authenticated but not allowed | Your role doesn't grant this action. | | `404` | Not found, or you don't have permission to know it exists | Check the ID and workspace scope. | | `409` | Conflict — usually a uniqueness violation | Read the `error.code`; either retry with different input or merge state. | | `429` | Rate limit exceeded | Back off (exponential, with jitter). | | `5xx` | Dalea side error | Retry with backoff; persistent 5xx is worth filing a support ticket. | Error responses always have this shape: ```json { "error": { "code": "validation.required_field", "message": "Field 'title' is required.", "field": "title" } } ``` ## Reacting to changes Native event webhooks are on the roadmap but not yet available. Until they ship, the supported pattern is **polling** the REST API on a schedule. Most list endpoints accept an `updated_after` filter: ```python # Every 5 minutes, fetch result batches closed since the last poll since = load_last_seen_timestamp() resp = requests.get( f"https://dalea.app/api/v1/result-batches", headers=auth_headers, params={"workspace_id": ws, "status": "closed", "updated_after": since}, ) for batch in resp.json()["items"]: handle_closed_batch(batch) save_last_seen_timestamp(now()) ``` Five minutes is a reasonable default for most workflows; tighten if you have truly time-sensitive needs. 
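The 429 and 5xx rows above call for exponential backoff with jitter. A minimal retry wrapper; a sketch to adapt to your rate-limit budget, not a prescribed client:

```python
import random
import time

def with_backoff(call, max_attempts: int = 5, base: float = 0.5):
    """Retry `call` with exponential backoff plus full jitter.
    `call` should raise on retryable failures (429/5xx) and
    return the response otherwise."""
    for attempt in range(max_attempts):
        try:
            return call()
        except Exception:
            if attempt == max_attempts - 1:
                raise
            # Full jitter: sleep a random amount in [0, base * 2**attempt]
            time.sleep(random.uniform(0, base * 2 ** attempt))
```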
Always persist the cursor (`updated_after` timestamp) durably so a restart doesn't replay history.

## What you can do

The [OpenAPI spec at `/api-docs`](/api-docs) is the canonical list of operations with stable, published paths. Headline domains:

- **Auth** — sign in, manage sessions, organisations, workspaces, members, invitations, OAuth clients and API keys, audit trail review.
- **Documents** — documents and folders, projects, blocks (read/append/insert/update/delete/outline), version history, comments and notifications, in-app marketplace gateway.
- **Data** — environments, tables, columns, objects, naming schemes, saved queries, result batches, schema imports, Benchling integration, archive lifecycle.
- **Inventory** — item types, containers, items, lots, placements, sessions (receiving, staging, checkout), labels, GS1, inventory import.
- **AI** — agentic chat (SSE), conversations, MCP tool bridge, approvals, usage.
- **Storage** — file uploads (multipart, base64, from URL), presigned downloads.
- **Search** — workspace full-text search and activity feed.
- **Marketplace (dalea.market only)** — packages, profiles, reviews, discussions, stars, reports.

## Idiomatic patterns

The exact paths, request shapes, and response schemas live in the [OpenAPI spec at `/api-docs`](/api-docs) — always check there before writing code. The patterns below are conceptual; substitute the operation IDs you find in the spec.

**Listing entities** is always an HTTP `GET` with optional filter and pagination params:

```python
resp = requests.get(
    f"https://dalea.app/api/v1/",
    headers=auth_headers,
    params={"workspace_id": ws, "limit": 200},
)
```

**Creating entities** is `POST` with a JSON body. Anything that mutates state takes an `audit_reason` field — supply a short string explaining the change, because it lands in the audit log forever.

**Bulk operations** exist for high-volume surfaces (registering many objects, recording many results).
They follow the same pattern but accept arrays under keys like `objects` or `records`. Look for endpoints with `/bulk` in the path in the spec.

**Reactive workflows** — until native webhooks ship, poll the relevant list endpoint with `updated_after` set to the timestamp you last saw.

Every operation in the public REST API is documented at /api-docs with full request and response schemas. If something looks ambiguous, check there before guessing — the spec is generated from the same handlers your calls hit.

## What's next

---

## MCP for tool builders

Path: /developers/mcp-for-tool-builders
Summary: Build your own MCP-aware client against Dalea.

[Connect Claude Desktop](/ai/connect-claude-desktop) is for end users who want their existing LLM to talk to Dalea. **This page is different**: it's for developers who want to build their own MCP-aware client — a custom agent, an internal tool, a CLI, a desktop assistant — that uses Dalea's tools.

## What you get

A bearer-authenticated HTTP MCP endpoint at `https://dalea.app/mcp` that exposes:

- **Documents** — list, search, read, create, update, version
- **Data** — list environments, run saved queries, query result tables, bulk-fetch objects
- **Inventory** — list containers, items, lots; query by location
- **Files** — list, get info, fetch via presigned URL
- **Search** — unified workspace search
- **Marketplace** — read-only package search

By design, **destructive actions are not exposed via MCP** — `delete`, `archive`, `supersede`, and other state-collapsing operations stay in the in-app chat where a human approves them through a confirmation card. This is a security property, not an oversight.

## Set up

### Register an OAuth client

Pick the workspace the client should access. Give it a name ("internal lab agent v1") and a redirect URI for the OAuth callback.

Most agents only need read access. Pick Viewer or Commenter for read-only patterns.
Choose Editor or Data Engineer if your agent needs to create documents or records (those write actions go through the in-app chat or your own UI, not via MCP). For an MCP client, you'll typically request:
- `openid profile email` — identity
- `offline_access` — refresh tokens
- `mcp:read` — call read-only MCP tools
Treat the secret like a server-side credential.
### Implement the OAuth flow

Standard OAuth 2.1 PKCE for native and CLI clients, server-side flow for web. The exchange:

```
your client ──/authorize?... ─────────────────────► dalea.app
            ◄── 302 with ?code=... ───────────────
            ──/token { code, code_verifier, ... }─►
            ◄── { access_token, refresh_token } ──
```

Save the refresh token; use the access token until it returns 401, then refresh.

## Make a tool call

Once you have an access token, the MCP endpoint is just JSON-over-HTTP:

```python
import requests

MCP_URL = "https://dalea.app/mcp"

def mcp_call(token, tool, arguments):
    resp = requests.post(
        MCP_URL,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        json={
            "jsonrpc": "2.0",
            "id": 1,
            "method": "tools/call",
            "params": {"name": tool, "arguments": arguments},
        },
    )
    resp.raise_for_status()
    return resp.json()["result"]

# Example: search for protocols
result = mcp_call(token, "search.unified", {"query": "DLA-7", "types": ["document"]})
for hit in result["results"]:
    print(hit["title"], hit["url"])
```

```ts
async function mcpCall(token: string, tool: string, args: Record<string, unknown>) {
  const resp = await fetch("https://dalea.app/mcp", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${token}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      jsonrpc: "2.0",
      id: 1,
      method: "tools/call",
      params: { name: tool, arguments: args },
    }),
  });
  if (!resp.ok) throw new Error(`MCP ${resp.status}`);
  return (await resp.json()).result;
}
```

For most clients you'll use an MCP SDK (the official `@modelcontextprotocol/sdk` for Node, or its Python sibling), which handles framing and reconnection for you.

## Discovering available tools

Send `tools/list`:

```json
{ "jsonrpc": "2.0", "id": 1, "method": "tools/list" }
```

The response enumerates every tool the calling token has access to, with name, description, input schema, and output shape. Cache this for 5 minutes; the tool catalogue is stable but can change between platform releases.
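The worked example in the next section assumes an `mcp_list` helper for discovery. A minimal sketch, mirroring the `mcp_call` shape above and assuming the standard `result.tools` envelope:

```python
import requests

MCP_URL = "https://dalea.app/mcp"

def tools_list_payload(request_id=1):
    # JSON-RPC 2.0 envelope for tool discovery; the method takes no params.
    return {"jsonrpc": "2.0", "id": request_id, "method": "tools/list"}

def mcp_list(token):
    resp = requests.post(
        MCP_URL,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        json=tools_list_payload(),
    )
    resp.raise_for_status()
    # Each catalogue entry carries name, description, and input schema.
    return resp.json()["result"]["tools"]
```

If you're on an MCP SDK instead of raw HTTP, its `listTools` equivalent replaces all of this.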
## Pagination, filtering, and rate limits

MCP tools follow the same conventions as the REST API:

- List tools accept `limit` and `cursor`.
- Filters are explicit arguments, not query strings.
- Rate-limit errors return a standard error response — back off and retry.

## A small worked example: a daily PK summary

Goal: every morning, post a Slack message summarising overnight PK results. The exact tool names below depend on the catalogue your token sees — discover them with `tools/list` first. The shape is illustrative:

```python
import os, requests

TOKEN = refresh_or_use_cached_token()  # OAuth helper

# 1. Discover available tools
tools = mcp_list(TOKEN)
# Look for tools related to saved queries; pick the one that lists them.
list_tool = next(t for t in tools if "list" in t["name"] and "quer" in t["name"])
run_tool = next(t for t in tools if "run" in t["name"] and "quer" in t["name"])

# 2. Find the saved query for "DLA-7 PK timecourse by dose group"
queries = mcp_call(TOKEN, list_tool["name"], {"name_contains": "DLA-7 PK timecourse"})
qid = queries["items"][0]["id"]

# 3. Run it for the latest data
data = mcp_call(TOKEN, run_tool["name"], {"query_id": qid})

# 4. Format and post to Slack
summary = format_pk_table(data["rows"])  # your own helper
requests.post(os.environ["SLACK_WEBHOOK"], json={"text": summary})
```

Schedule this with cron, GitHub Actions, or your scheduler of choice. No human in the loop for the MCP call; the data tools are read-only, so MCP is the right surface.

## What MCP isn't

- **It isn't a webhook.** MCP is request/response. Event-driven push notifications (e.g. "tell me when a result batch closes") aren't yet available — for now, integrators poll the REST API on a schedule. Native webhooks are on the roadmap.
- **It isn't for destructive actions.** No deletes, no closes, no signatures. Those go through the in-app chat.
- **It isn't optimised for bulk transfer.** For exporting 100 000 rows, use the REST API with cursor pagination.
## Tips

The official MCP SDKs handle the JSON-RPC envelope, retries and reconnection. Save yourself the boilerplate and use them for any non-trivial client.

An MCP client doesn't need the same role as the user who registered it. A read-only Slack notifier should be Viewer; an analysis agent that ingests data into a customer-built reporting tool should be Commenter. Follow the principle of least privilege.

## What's next

---

# For LLMs

Make this wiki and your platform legible to language models.

---

## For LLMs

Path: /llms/overview
Summary: llms.txt, raw markdown endpoints, and what to feed your agent.

This page is addressed to language models and the engineers integrating them. The short version: every page on this wiki is also available as raw markdown, plus there's a single concatenated corpus and an index file at well-known URLs.

## Endpoints

- `/llms.txt` — Index of all wiki pages, with one-line descriptions and per-page links to the markdown variant. Follows the llms.txt convention.
- `/llms-full.txt` — Concatenated markdown of every wiki page. Suitable for stuffing into a system prompt or vector-indexing wholesale.
- `/<category>/<slug>.md` — Per-page raw markdown, e.g. `/tutorials/your-first-pk-study.md`.

All three are statically generated at build time. They re-build whenever the wiki is rebuilt; for cloud, that's on every release.

## How to use them

If you're integrating an LLM into your own product to answer questions about Dalea:

- **Static prompt** — fetch `/llms.txt` once and include it as a system-prompt attachment. This lets the LLM enumerate available pages and quote the right URL.
- **Retrieval** — chunk `/llms-full.txt` (or fetch individual `.md` pages) and embed with your favourite vector store. The page boundaries are clean: each page starts with a `## ` heading and a `Path:` line in `llms-full.txt`.
- **Live tool** — give your agent a `fetch_dalea_docs(slug)` tool that hits the per-page endpoint. Cheap, fresh, accurate.
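Given that boundary convention (each page opening with a `## ` heading followed by a `Path:` line), splitting the corpus into per-page chunks for retrieval is mechanical. A sketch, with the exact boundary rule as a stated assumption:

```python
import re

def split_corpus(corpus: str) -> list[dict]:
    """Split an llms-full.txt corpus into per-page chunks.

    Assumes each page opens with a '## Title' line immediately followed
    by a 'Path: ...' line, per the convention described above.
    """
    boundary = re.compile(r"^## .+\nPath: ", re.MULTILINE)
    starts = [m.start() for m in boundary.finditer(corpus)]
    pages = []
    for i, start in enumerate(starts):
        end = starts[i + 1] if i + 1 < len(starts) else len(corpus)
        chunk = corpus[start:end].strip()
        lines = chunk.splitlines()
        pages.append({
            "title": lines[0].removeprefix("## ").strip(),
            "path": lines[1].removeprefix("Path:").strip(),
            "text": chunk,
        })
    return pages
```

Each resulting chunk is self-describing (title plus canonical path), which makes it a good unit for embedding: the `path` doubles as the citation URL your agent should quote.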
## What the corpus does *not* contain

- Customer data — the wiki is platform documentation only. To query a real workspace, use the [MCP server](/ai/connect-claude-desktop).
- API authentication details — the OpenAPI spec is at `/api-docs/openapi.json` on the platform itself, not in the wiki corpus.
- Proprietary protocols or templates — those live in the [marketplace](https://dalea.market), not here.

## Suggested system prompt

If you're orchestrating an external LLM that has access to Dalea via MCP, the following system-prompt fragment grounds it well:

```
You are answering questions about Dalea, a research orchestration platform
for life science. Dalea has four primitives at the workspace level:
documents, environments (structured data schemas), inventory, and templates.
The hierarchy is User → Organisation → Workspace → Project → (artefacts).
Refer to the public wiki at https://dalea.wiki for any term you are unsure
about. The wiki publishes a structured index at https://dalea.wiki/llms.txt
and per-page markdown at https://dalea.wiki/<category>/<slug>.md. When you
call MCP tools, remember that destructive actions (delete, archive,
supersede) are not exposed to MCP. Ask the user to perform those in the
in-app chat where they can approve via a confirmation card.
```

## Versioning

The wiki ships in lockstep with the platform release. v1 is unversioned. Once v2 cuts, this page will list a `/v1/llms.txt` snapshot for clients pinning to a specific platform version.

## What's next

<CardGrid>
  <Card href="/ai/overview" title="AI overview" />
  <Card href="/ai/connect-claude-desktop" title="Connect Claude Desktop" />
  <Card href="/welcome/what-is-dalea" title="What is Dalea?" />
</CardGrid>

---