
Embedding DuckViz when you already have APIs — Explorer to Deck in one flow

If your product already has APIs returning rows, you don't need an upload flow or a viz layer. Pass your data to DuckViz as datasets, proxy the AI endpoints through your backend, and ship a full analytics experience — Explorer, Dashboard, Report, Deck — in a few dozen lines.

Vikas Awaghade · 9 min read

Most teams I talk to about embedding DuckViz already have the hard part done. They have authenticated APIs returning JSON. They have a product with users who want to slice that data, build a dashboard, and walk into a meeting with a written report. What they don't want to build is the rest — the chart library, the dashboard grid, the AI flow that picks visualizations, the report editor, the deck export.

That gap is what the embed flow is for. And if you already have APIs, the integration is shorter than you'd think — because you don't need the upload step at all.

This is the path I'd recommend in that situation: datasets mode end-to-end. You pass your API data to DuckViz as named in-memory datasets. Users explore them, generate dashboards, save the result, and reuse the same data to produce reports and decks. No file pickers, no ingest pipeline.

The mental model

DuckViz components read from a single in-browser DuckDB. In datasets mode, you skip the upload — your APIs are the source. You hand each component an array of named datasets and the package registers them as DuckDB tables on mount.

That's the whole story. Four components share that mental model:

  • <Explorer /> — the three-panel browse + AI dashboard builder
  • <Dashboard /> — the embeddable grid for a saved dashboard
  • <ReportBuilder /> — the report editor
  • <DeckBuilder /> — the slide deck presenter

The same datasets prop feeds all four.
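For orientation, the dataset shape the rest of this post relies on looks roughly like this: a name plus either rows up front or a loader that resolves to rows. Treat it as a sketch, not the package's published types.

// Sketch of a dataset entry as used throughout this post (not the exact
// exported type): a name plus either rows or a lazy loader.
type Row = Record<string, unknown>;

type Dataset = {
  name: string;                          // registered as a DuckDB table on mount
  data: Row[] | (() => Promise<Row[]>);  // arrays for Dashboard/Report/Deck, loaders are fine for Explorer
};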

1. Wrap your app once

app/providers.tsx
"use client";import { DuckvizDBProvider } from "@duckviz/db";export function Providers({ children }: { children: React.ReactNode }) {  return (    <DuckvizDBProvider persistence arrowIngest batchSize={5000}>      {children}    </DuckvizDBProvider>  );}

That's it for the provider layer. persistence keeps DuckDB tables warm in IndexedDB so a refresh doesn't re-ingest. arrowIngest and batchSize are sensible performance defaults.

2. Mount Explorer with your APIs as datasets

If you have ten API endpoints, you don't want to fetch all of them upfront. Pass a list of datasets where each one's data is a function — DuckViz only calls it when the user actually opens that table:

app/explorer/page.tsx
"use client";import { Explorer } from "@duckviz/explorer";import { useRouter } from "next/navigation";const datasets = [  {    name: "Orders (last 90 days)",    data: () => fetch("/api/data/orders?range=90d").then((r) => r.json()),  },  { name: "Customers", data: () => fetch("/api/data/customers").then((r) => r.json()) },  { name: "Refunds",   data: () => fetch("/api/data/refunds").then((r) => r.json()) },];export default function ExplorerPage() {  const router = useRouter();  return (    <Explorer      authenticated      datasets={datasets}      dashboards={[]}      onCreateDashboard={(name) => createDashboardId(name)}      onAddWidgetToDashboard={() => ({ ok: true })}      onDashboardSaved={(id, payload) => {        saveWidgetsToBackend(id, payload.widgets);        router.push(`/dashboards/${id}`);      }}    />  );}

A few things are happening here, all with no work on your side:

  • The user sees your three datasets in the Explorer sidebar — no upload UI.
  • They can write SQL, run queries, and ask the AI to build a dashboard.
  • The lazy data: () => fetch(...) only runs when the user opens that dataset; endpoints nobody opens are never fetched.
  • When the AI finishes generating widgets, the user clicks save. onDashboardSaved fires once with the full widget array.

Each saved widget carries a chart type, a title, a description, the SQL DuckViz generated against your datasets, an optional chart config, and a grid layout. The query is the source of truth — the chart re-runs on every render against fresh data, so you're not saving a snapshot of rows.
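If it helps to see that as a type, here's roughly the shape. The field names come from the mapping in step 6; the layout internals are an assumption.

// Approximate shape of a saved widget. duckdbQuery, type, title, description,
// config, and layout match the mapping in step 6; the layout internals are assumed.
type SavedWidget = {
  id: string;
  type: string;                        // chart type the AI picked
  title: string;
  description: string;
  duckdbQuery: string;                 // the SQL, re-run against fresh data on every render
  config?: Record<string, unknown>;    // optional chart config
  layout: { x: number; y: number; w: number; h: number };  // grid placement (shape assumed)
};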

3. Forward your auth with customFetch

Your app already has authenticated users. The DuckViz components don't know about your session cookies, your bearer tokens, or your CSRF guards — and they shouldn't have to. Pass a customFetch and intercept everything they call:

lib/duckviz-fetch.ts
export const customFetch: typeof fetch = (input, init) => {
  const url =
    typeof input === "string" ? input : input instanceof URL ? input.href : input.url;

  // Rewrite DuckViz's calls (`/api/widget-flow/...`) onto your proxy route
  // (`/api/duckviz/widget-flow/...`).
  const proxied = url.replace(/^\/api\//, "/api/duckviz/");

  return fetch(proxied, {
    ...init,
    credentials: "include",                // your session cookie rides along
    headers: {
      ...init?.headers,
      "X-CSRF-Token": getCsrfToken(),      // or whatever your auth needs
    },
  });
};

Pass it to every DuckViz component that talks to AI endpoints — Explorer, ReportBuilder, DeckBuilder all accept it:

<Explorer customFetch={customFetch} {...props} />
<ReportBuilder customFetch={customFetch} {...props} />
<DeckBuilder customFetch={customFetch} {...props} />

That's the whole client side of auth. Your backend is the one that holds the DuckViz token — the browser never sees it.

4. Wire the backend with the SDK

customFetch is half the picture. The other half is the route on your backend that forwards those calls to DuckViz Cloud with the server-only token attached. @duckviz/sdk ships first-party adapters for Next.js, Hono, Fastify, and Express — pick the one that matches your stack. All four share the same proxy core, so token injection, SSE pass-through, and abort propagation behave identically.

Mint a token at app.duckviz.com/settings/tokens and store it as a server-side environment variable. Never inline it in client code.

Next.js (App Router)

app/api/duckviz/[...route]/route.ts
import { createDuckvizHandlers } from "@duckviz/sdk/next";

export const { POST, GET } = createDuckvizHandlers({
  token: process.env.DUCKVIZ_TOKEN!,
});

Hono

server.ts
import { Hono } from "hono";
import { createDuckvizHonoHandler } from "@duckviz/sdk/hono";

const app = new Hono();

app.all(
  "/api/duckviz/*",
  createDuckvizHonoHandler({ token: process.env.DUCKVIZ_TOKEN! }),
);

export default app;

Fastify

server.ts
import Fastify from "fastify";
import { duckvizFastifyPlugin } from "@duckviz/sdk/fastify";

const fastify = Fastify();

await fastify.register(duckvizFastifyPlugin, {
  prefix: "/api/duckviz",
  token: process.env.DUCKVIZ_TOKEN!,
});

await fastify.listen({ port: 3000 });

Express

server.ts
import express from "express";
import { duckvizExpressMiddleware } from "@duckviz/sdk/express";

const app = express();
app.use(express.json());                 // required for body forwarding

app.use(
  "/api/duckviz",
  duckvizExpressMiddleware({ token: process.env.DUCKVIZ_TOKEN! }),
);

app.listen(3000);

Each adapter handles the same things: forwards the request 1:1 to DuckViz Cloud, attaches Authorization: Bearer <token> server-side, streams SSE without buffering, and forwards AbortSignal so a browser disconnect tears down the upstream request.
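If you're curious what that forwarding amounts to, here's a hand-rolled sketch of the same idea in plain fetch terms. It is illustrative only, not the SDK's code, and the upstream URL is a placeholder.

// Illustrative only: the adapters above do this for you. DUCKVIZ_UPSTREAM is a
// placeholder; check the SDK docs for the real upstream base URL.
const DUCKVIZ_UPSTREAM = "https://api.duckviz.example";

export async function forwardToDuckviz(req: Request, path: string): Promise<Response> {
  const body =
    req.method === "GET" || req.method === "HEAD" ? undefined : await req.arrayBuffer();

  const upstream = await fetch(`${DUCKVIZ_UPSTREAM}/${path}`, {
    method: req.method,
    headers: {
      "Content-Type": req.headers.get("Content-Type") ?? "application/json",
      Authorization: `Bearer ${process.env.DUCKVIZ_TOKEN}`,  // attached server-side only
    },
    body,
    signal: req.signal,  // browser disconnect aborts the upstream request
  });

  // Hand the upstream body back as a stream so SSE passes through without buffering.
  return new Response(upstream.body, {
    status: upstream.status,
    headers: { "Content-Type": upstream.headers.get("Content-Type") ?? "application/json" },
  });
}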

Not on Node?

The proxy is just authenticated request forwarding. You can write it in Go, Python, Rust, or whatever your backend already speaks — three rules to follow: attach Authorization: Bearer ${token} server-side, don't buffer SSE responses, and never log the token. First-party SDKs for non-Node stacks are on the roadmap; reach out if you'd like one prioritized.

5. Save the dashboard

DuckViz doesn't ship a backend, and in datasets mode you barely need one. Persist the widget array against the user, plus a name and an id. That's the whole schema:

type SavedDashboard = {
  id: string;
  name: string;
  widgets: Array<{ id: string; /* widget fields from DuckViz */ }>;
  createdAt: string;
};

Postgres row, Supabase table, KV blob — pick whatever your stack already uses. The widgets are JSON-shaped and small.
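For concreteness, a minimal Next.js save route might look like the sketch below; db and its import path are stand-ins for whatever persistence you already have.

// app/api/dashboards/route.ts (a sketch; `db` is a stand-in for your own store,
// not something DuckViz ships)
import { NextResponse } from "next/server";
import { db } from "@/lib/db";  // hypothetical

export async function POST(req: Request) {
  const { name, widgets } = await req.json();
  const dashboard = {
    id: crypto.randomUUID(),
    name,
    widgets,
    createdAt: new Date().toISOString(),
  };
  await db.dashboards.insert(dashboard);  // swap for your own write
  return NextResponse.json(dashboard);
}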

6. Render the saved dashboard

When the user comes back, you re-fetch the saved widgets and re-feed the same datasets — but now you need them as arrays, not functions. Dashboard, Report, and Deck don't lazy-load: they ingest everything you give them on mount.

That's the one rule worth memorizing in this whole flow.
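One way to honor it is to reuse the dataset definitions you gave Explorer and resolve every loader up front, as in this sketch (it assumes the Dataset shape from earlier):

// Resolve lazy loaders into plain arrays before handing datasets to
// Dashboard / Report / Deck. Assumes the Dataset type sketched above.
async function materializeDatasets(datasets: Dataset[]) {
  return Promise.all(
    datasets.map(async (d) => ({
      name: d.name,
      data: typeof d.data === "function" ? await d.data() : d.data,
    })),
  );
}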

app/dashboards/[id]/page.tsx
"use client";import { Dashboard } from "@duckviz/dashboard";import { useDashboardData } from "@/lib/use-dashboard-data";export default function DashboardPage({ params }: { params: { id: string } }) {  const { config, datasets, isLoading } = useDashboardData(params.id);  if (isLoading) return <Loading />;  return (    <Dashboard      config={config}      datasets={datasets}      customFetch={customFetch}      interactive    />  );}

The hook wraps a loader that fetches the saved widgets and pulls the rows for whichever datasets they reference:

lib/use-dashboard-data.ts
export async function loadDashboard(id: string) {
  const dashboard = await fetch(`/api/dashboards/${id}`).then((r) => r.json());

  // Pre-fetch only the datasets the saved widgets actually touch.
  // Dashboard / Report / Deck need arrays here, not loader functions.
  const slugs = referencedSlugs(dashboard.widgets);  // your own widget-to-endpoint mapping
  const datasets = await Promise.all(
    slugs.map(async (slug) => ({
      name: slug,
      data: await fetch(`/api/data/${slug}`).then((r) => r.json()),
    })),
  );

  return {
    config: {
      name: dashboard.name,
      widgets: dashboard.widgets.map((w) => ({
        id: w.id,
        type: w.type,
        title: w.title,
        description: w.description,
        dataKey: w.duckdbQuery,    // ← Explorer says duckdbQuery, Dashboard says dataKey
        config: w.config,
        layout: w.layout,
      })),
    },
    datasets,
  };
}

The duckdbQuery → dataKey rename catches everyone exactly once. Same string, different field name. Map it on the way out and move on.

7. Reuse the same config for Report and Deck

This is the moment the architecture pays off. The same widgets and the same datasets feed the report editor and the deck presenter — no extra mapping:

import { ReportBuilder, DeckBuilder } from "@duckviz/report";

<ReportBuilder
  config={{ name: dashboard.name, widgets: dashboard.widgets }}
  datasets={datasets}
  customFetch={customFetch}
  onBack={() => router.push(`/dashboards/${id}`)}
/>

<DeckBuilder
  config={{ name: dashboard.name, widgets: dashboard.widgets }}
  datasets={datasets}
  customFetch={customFetch}
  onBack={() => router.push(`/dashboards/${id}`)}
/>

Both builders accept widgets shaped with either duckdbQuery or dataKey. The report exports to PDF and DOCX; the deck exports to PPTX. Both happen in the browser — no server-side rendering, no headless Chrome, no DOM screenshots.

What you didn't have to write

Let's tally what you skipped:

  • An ingestion pipeline. Your APIs already return rows.
  • A query engine. DuckDB-WASM runs the user's SQL in the browser.
  • A chart library. 80+ D3 chart types ship with the dashboard package.
  • An AI flow. The widget builder, report generator, and deck composer already work.
  • An export layer. PDF, DOCX, PPTX are part of the report package.
  • An auth bridge. customFetch + one of the four SDK adapters is the full integration — Next.js, Hono, Fastify, or Express.

Your job was to wrap the provider, hand off your data, mount one route handler, and save a small JSON blob when the user clicks save. The rest is the components.

The constraint that holds it together

The privacy posture doesn't change when you embed. The AI sees schemas and aggregates. The rows — your customers' data — stay on the device. What you save is a SQL query, a chart type, and a layout. Re-rendering it tomorrow re-derives everything from the data the user has access to right now.

That's why you can drop these components into a SaaS without a security review per customer. The components don't store data. They render it.

The hosted app at app.duckviz.com runs the same packages with the same constraint. What's there is what you can put in your own product.

— Vikas