Extending Alchemify

Alchemify is designed to be extended. The “ownable code” philosophy means you can modify anything — there’s no black box.

Admin can create business tables at runtime — no DDL files to edit:

SELECT sys.create_table('orders', '[
  {"name": "customer_name", "type": "text", "required": true},
  {"name": "total", "type": "currency"}
]'::jsonb);

This creates the table, _v view, RLS policies, grants, audit trigger, and metadata row in one transaction. The table appears in the sidebar with full CRUD immediately.

No API routes to write. No ORM models to update. No frontend code to change. The schema drives everything.

If you need custom RLS policies, foreign keys, or other features:

  1. Define the table in apps/server/schema.sql.
  2. Add a _v view, RLS policies, grants, and an audit trigger.
  3. Re-apply: psql -U postgres -d alchemify -f apps/server/schema.sql.

System tables go in sys with a view in public:

  1. Define the table in sys schema (e.g., CREATE TABLE IF NOT EXISTS sys.my_table ...).
  2. Create a security_barrier view in public with role-based WHERE filtering.
  3. Grant appropriate permissions on the view (not the underlying table).
  4. System views don’t appear in the CRUD sidebar (no write_target). Business _v views do appear — the frontend uses write_target to route writes to the base table.

CrudForm validates field values on submit using validateField(col, raw, displayType?) in apps/web/src/lib/validate.ts. If a displayType is provided (from sys.column_metadata), semantic validation runs first:

display_type   Check
email          Regex: user@host.tld format
url            new URL() try/catch
phone          Regex: digits, spaces, parens, hyphens, optional + prefix
color          Regex: #rrggbb hex format
percent        Float, 0–100
rating         Integer, 1–5
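
The semantic checks above can be sketched as a lookup table of per-display-type validators. This is a simplified illustration; the regexes and error messages are approximations, not the exact ones in validate.ts:

```typescript
// Per-display-type validators: return an error message, or null if valid.
type CheckResult = string | null;

const semanticChecks: Record<string, (raw: string) => CheckResult> = {
  email: (raw) =>
    /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(raw) ? null : "Invalid email",
  url: (raw) => {
    try { new URL(raw); return null; } catch { return "Invalid URL"; }
  },
  phone: (raw) =>
    /^\+?[\d\s()-]+$/.test(raw) ? null : "Invalid phone number",
  color: (raw) =>
    /^#[0-9a-fA-F]{6}$/.test(raw) ? null : "Expected #rrggbb",
  percent: (raw) => {
    const n = Number.parseFloat(raw);
    return Number.isFinite(n) && n >= 0 && n <= 100 ? null : "Expected 0-100";
  },
  rating: (raw) => {
    const n = Number(raw);
    return Number.isInteger(n) && n >= 1 && n <= 5 ? null : "Expected 1-5";
  },
};
```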

Other display types (currency, multiline, etc.) fall through to data_type validation:

data_type                          Check
integer, bigint, smallint          Round-trip parseInt — rejects decimals, scientific notation, trailing text
numeric, real, double precision    parseFloat — rejects non-numeric strings
uuid                               Regex for standard UUID format
json, jsonb                        JSON.parse in try/catch
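
The round-trip integer check can be sketched as follows: parse, then compare the string form back to the trimmed input, which rejects decimals, scientific notation, and trailing text in one comparison. This is illustrative; the real logic lives in validate.ts:

```typescript
// Round-trip integer check: "1.5", "1e3", and "12abc" all parse to a
// number whose string form no longer matches the input, so they fail.
function isStrictInteger(raw: string): boolean {
  const trimmed = raw.trim();
  const n = Number.parseInt(trimmed, 10);
  return Number.isSafeInteger(n) && String(n) === trimmed;
}
```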

Empty values are skipped (handled separately by the required-field check).

To add validation for a new type, add a branch in validateField(). Tests are in apps/web/src/lib/validate.test.ts.

The CRUD UI consumes display_type from sys.column_metadata for widget selection, validation, and display formatting. The known display types are defined in KNOWN_DISPLAY_TYPES in schema.ts:

text, multiline, email, url, phone, color, integer, currency, percent, rating, boolean, choice, date, datetime, uuid, file, image, jsonb

The getDisplayType(table, col, schema, dataType?) helper returns the display type only if it’s in this whitelist — unknown values are ignored. When no explicit metadata exists and the column’s Postgres data_type is boolean, it automatically returns "boolean" (no column_metadata entry needed). The FormattedValue component (apps/web/src/components/FormattedValue.tsx) renders values with display-type-aware formatting in detail and list views — booleans show a green check icon for true and a gray X icon for false (null renders as blank).
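
The whitelist-plus-boolean-fallback behavior can be sketched like this. The metadata lookup is stubbed as a plain map for illustration (the real helper reads the schema object), and the function name is marked as a sketch:

```typescript
// Mirrors KNOWN_DISPLAY_TYPES from schema.ts.
const KNOWN_DISPLAY_TYPES = new Set([
  "text", "multiline", "email", "url", "phone", "color", "integer",
  "currency", "percent", "rating", "boolean", "choice", "date",
  "datetime", "uuid", "file", "image", "jsonb",
]);

function getDisplayTypeSketch(
  metadata: Record<string, string>, // "table.col" -> display_type
  table: string,
  col: string,
  dataType?: string,
): string | undefined {
  const dt = metadata[`${table}.${col}`];
  if (dt && KNOWN_DISPLAY_TYPES.has(dt)) return dt; // unknown values ignored
  if (dataType === "boolean") return "boolean";     // implicit boolean fallback
  return undefined;
}
```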

Column headers and field labels use fieldLabel() from apps/web/src/lib/schema.ts. System columns get human-friendly names (id → “ID”, created_at → “Created on”, updated_at → “Last updated”); all other columns fall through to prettyLabel() which converts snake_case to Title Case.
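
A minimal sketch of that fallback chain, with the system-column map inlined (the real implementation is in apps/web/src/lib/schema.ts):

```typescript
// Fixed labels for system columns; everything else is Title-Cased.
const SYSTEM_LABELS: Record<string, string> = {
  id: "ID",
  created_at: "Created on",
  updated_at: "Last updated",
};

function prettyLabelSketch(col: string): string {
  return col
    .split("_")
    .map((w) => w.charAt(0).toUpperCase() + w.slice(1))
    .join(" ");
}

function fieldLabelSketch(col: string): string {
  return SYSTEM_LABELS[col] ?? prettyLabelSketch(col);
}
```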

The Builder AI can generate sample data for any table via the propose_seed_data tool. When the user asks for test/sample data:

  1. The AI calls propose_seed_data with just table_name and an optional count (1-20, default 3)
  2. The server generates realistic rows via a focused AI call with response_format: json_object
  3. A preview widget shows the data in a read-only table
  4. The user clicks “Insert” to confirm (or “Cancel” to reject)
  5. The server validates the table and columns against get_schema(), then runs a parameterized multi-row INSERT
  6. If a constraint error occurs (unique violation, FK violation, NOT NULL, check constraint), the server asks the AI to fix the data and retries (up to 3 attempts total)
  7. Non-constraint errors fail immediately without retry

The seed data handler runs each INSERT attempt in its own short transaction (separate from the withAuth transaction) so that retries don’t hold connections across OpenAI round-trips. All row values are passed as $N parameters — never interpolated into SQL.
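
The constraint-retry loop can be sketched as below. The real handler is async and talks to Postgres and OpenAI; here tryInsert and askAiToFix are injected stand-ins, and the error-code check uses the standard PostgreSQL constraint-violation SQLSTATEs:

```typescript
const CONSTRAINT_CODES = new Set([
  "23505", // unique_violation
  "23503", // foreign_key_violation
  "23502", // not_null_violation
  "23514", // check_violation
]);

function insertWithRetries(
  rows: unknown[],
  tryInsert: (rows: unknown[]) => void,
  askAiToFix: (rows: unknown[], error: string) => unknown[],
  maxAttempts = 3,
): void {
  for (let attempt = 1; ; attempt++) {
    try {
      tryInsert(rows); // each attempt runs in its own short transaction
      return;
    } catch (err) {
      const code = (err as { code?: string }).code;
      const retriable = code !== undefined && CONSTRAINT_CODES.has(code);
      // Non-constraint errors (or exhausted attempts) fail immediately.
      if (!retriable || attempt >= maxAttempts) throw err;
      rows = askAiToFix(rows, String(err));
    }
  }
}
```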

When a table is created, sys.create_table() auto-detects which columns to show in FK autocomplete dropdowns (priority: name > title > label > email > first text column). This is stored in sys.table_metadata as display_columns.

To change display columns after table creation, users can ask the Builder AI (e.g., “show email instead of name for customers”). The configure_fk_display tool validates that the requested columns exist on the table and are short-text-shaped — either by semantic display_type (text, email, url, phone, color) on managed tables, or by raw Postgres type (text, varchar, citext, character) on legacy/imported tables that have no display_type set. Choice/boolean/numeric/date columns are rejected because they make poor FK labels. The tool then updates sys.table_metadata. You can also update it directly:

UPDATE sys.table_metadata
SET meta = jsonb_set(meta, '{display_columns}', '["email", "phone"]'::jsonb)
WHERE table_name = 'customers' AND schema_name = 'public';

If you need logic beyond what SQL can express, add a route in apps/server/src/app.ts (or in the relevant router module: auth.ts, admin.ts, files.ts):

app.post('/my-endpoint', async (req, res) => {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(`SET LOCAL ROLE 'staff'`);
    // your logic here; produce a result to return
    const result = {};
    await client.query('COMMIT');
    res.json({ result });
  } catch (error) {
    await client.query('ROLLBACK');
    res.status(500).json({ error: 'Something went wrong' });
  } finally {
    client.release();
  }
});

Follow the same transaction pattern: BEGIN → SET LOCAL ROLE → work → COMMIT.

For reusable logic, put it in a PostgreSQL function:

CREATE OR REPLACE FUNCTION my_function(arg text)
RETURNS jsonb
LANGUAGE plpgsql
SECURITY INVOKER -- runs as the caller's role, respects RLS
AS $$
BEGIN
  -- your logic
  RETURN '{}'::jsonb;
END;
$$;

Use SECURITY INVOKER (the default) for functions that should respect RLS. Use SECURITY DEFINER only for privileged operations like auth — and put those in the sys schema with SET search_path = sys, public.

Pages are TSX stored in the sys.pages table (exposed via a view in public). Each row has:

Column        Default   Purpose
slug          -         URL slug (unique, lowercase alphanumeric + hyphens), e.g. dashboard. Pages served at /page/<slug>.
title         -         Display name (shown in sidebar)
source        -         TSX source code
role          'staff'   Minimum role to view: owner, admin, staff, member, or anon
show_in_nav   true      Whether the page appears in the sidebar
status        'draft'   draft or published — non-admin users only see published pages
published_at  NULL      Timestamp of last publish
published_by  NULL      User ID that published the page

Platform pages shipped with Alchemify are stored as standalone .tsx files with comment frontmatter:

// @slug settings
// @title Settings
// @role admin
// @nav true
import { Page, PageHeader } from "@alchemify/layout";
// ... component code

These live in apps/server/schema/pages/. Running seed-pages.sh generates the SQL INSERT statements that schema.sql includes via \ir. See Database & Schema for details.

To add a new platform page, create a .tsx file in the appropriate directory with the four required frontmatter keys (slug, title, role, nav), then re-run seed-pages.sh.

Pages can also be inserted at runtime by an admin:

INSERT INTO pages (slug, title, source, role) VALUES (
  'dashboard',
  'Dashboard',
  'import { Page, PageHeader } from ''@alchemify/layout'';
export default function Dashboard() {
  return <Page><PageHeader title="Dashboard" /></Page>;
}',
  'staff'
);

To create a page that’s accessible by URL but hidden from the sidebar:

INSERT INTO pages (slug, title, source, show_in_nav) VALUES (
  'reports-detail', 'Report Detail', '...', false
);

The view controls who sees what — owner/admin sees all pages (draft and published), staff sees published staff + member + anon pages, member sees published member + anon pages, anon sees only published anon pages. Pages with show_in_nav = true appear in the sidebar between Builder and the entity links. Page changes made via the builder/chat/admin mutation path propagate to open browser tabs through app-layer invalidation (React Query + BroadcastChannel); external SQL writes require a browser refresh to be picked up.

Pages are created with status = 'draft' by default. Non-admin users (staff, member, anon) only see published pages. Owner and admin see all pages regardless of status.

When the Builder AI updates a page’s source code, the page is automatically reset to draft status. This ensures that AI-generated changes are reviewed before going live.

To publish a page:

SELECT * FROM sys.publish_page('dashboard');

To unpublish (revert to draft):

SELECT * FROM sys.unpublish_page('dashboard');

Both functions are restricted to owner and admin roles. Platform seed pages (from seed-pages.sh) are inserted as published.

Export a default function component. It receives no props — use hooks for data and context:

import { Page, PageHeader, Loading } from '@alchemify/layout';
import { useSql } from '@alchemify/data';
import { sql } from '@alchemify/sql';
import { DataTable } from '@alchemify/ui';

export default function Orders() {
  const { data, isLoading, error } = useSql(
    sql`SELECT * FROM orders`
  );
  if (isLoading) return <Loading />;
  if (error) return <div>Error: {error.message}</div>;
  return (
    <Page>
      <PageHeader title="Orders" description="All orders" />
      <DataTable columns={['id', 'customer', 'total', 'status']} data={data} />
    </Page>
  );
}

For a single-table read (base table or _v view), pass { tableName } as the second arg so choice columns get __label / __color auto-enriched from sys.column_metadata:

const { data } = useSql(
  sql`SELECT * FROM orders_v WHERE id = ${id}`,
  { tableName: 'orders' }
);

This is a single-table hint — skip it for joined or aggregated queries.

When seeding a form that uses <input type="date"> or <input type="datetime-local">, also pass { rawDates: true } so date / timestamp columns arrive as raw ISO values the inputs can parse and round-trip. Without it, default display formatting produces strings like "Jun 15, 2025" that the inputs reject.

const { data } = useSql(
  sql`SELECT * FROM orders_v WHERE id = ${id}`,
  { tableName: 'orders', rawDates: true }
);

For custom query keys or caching behavior, fall back to useQuery + exec:

import { useQuery, exec } from '@alchemify/data';
import { sql } from '@alchemify/sql';
const { data } = useQuery({
  queryKey: ['my-custom-key'],
  queryFn: () => exec(sql`SELECT * FROM orders`),
});

Package            Exports
@alchemify/ui      Button, Card (+ sub-components), Input, Badge, Table (+ sub-components), Tabs (+ sub-components), Tooltip (+ sub-components), DataTable, Label, Checkbox, Textarea, AlertDialog (+ sub-components), FileField
@alchemify/data    useSql, useQuery, useMutation, useQueryClient, exec, rawSql
@alchemify/sql     sql, joinSQL, raw
@alchemify/auth    useAuth
@alchemify/nav     useNavigate, useParams, useSearchParams, Link
@alchemify/layout  Page, PageHeader, Stack, Grid, Loading, ErrorBoundary
@alchemify/util    cn, formatDate, toast, groupBy, nest

The canonical list of @alchemify/ui exports lives in packages/types/src/ui-components.ts. The chat service reads this registry at runtime to build the AI system prompt, so the AI only suggests components that actually exist. When adding or removing UI components, update this registry alongside apps/web/src/modules/ui.ts — a test in packages/types automatically verifies they stay in sync.

For details on the compilation pipeline and module registry, see Frontend Architecture.

CSV import (import_csv) and the client-executed tool pattern


The import_csv builder tool follows a different flow from every other tool — the side effect (chunked INSERTs) runs in the browser, not in the chat service. This pattern is general (it could apply to other large-data tools later) and worth understanding before adding similar tools.

CSV files can be up to 50,000 rows / 20 MB. Sending the row payload through the chat service inflates body sizes and forces a chunk protocol on /execute-tool. The simpler approach: parse the CSV client-side, send only headers + a 10-row sample to the AI as context, and run the inserts directly through the SQL proxy (POST /sql) under the user’s role + RLS — same path regular CRUD inserts use.

  1. User drops a .csv in ChatInput. parseCsv (Papa Parse, in apps/web/src/lib/csv-import.ts) validates headers (no duplicates, no blanks) and stores the parsed rows in a module-level Map keyed by a random attachment id (apps/web/src/lib/csv-attachment.ts). The attachment is not persisted — page reload loses it.

  2. On send, ChatInput prepends a delimited block to the user’s text:

    <csv_attachment_<nonce> id="..." filename="..." total_rows="N">
    {"headers": [...], "sample_rows": [...10]}
    </csv_attachment_<nonce>>
    <user text>

    The boundary uses a random per-attachment nonce so a future format change (or attempts at injection inside cell values) can’t accidentally close the block. The system prompt (apps/chat/prompts/reference-tables.md, CSV import section) instructs the AI to treat the contents as untrusted data, never instructions.

  3. The AI emits import_csv with {table_name, attachment_id, csv_filename, total_rows, column_mapping}. No row data flows through the AI.

  4. CsvImportWidget (apps/web/src/components/builder/widgets/) looks up the parsed rows by attachment_id, runs full-rows type coercion against the live schema, validates required-column coverage, and shows a mapping editor + preview.

  5. On confirm: build chunked parameterized INSERT statements (buildInsertSql, dynamic chunk size from computeChunkSize — bounded by the PG bind-parameter limit and the SQL proxy’s body size), call rawSql per chunk, track progress.

  6. Once done (success or first chunk failure), the widget calls executeTool(convId, msgId, true, undefined, {client_result: {result | error}}). The chat service skips its TOOL_HANDLERS[import_csv] entry entirely and records the supplied client_result verbatim — see CLIENT_EXECUTED_TOOLS in apps/chat/src/execute-tools.ts and the corresponding branch in apps/chat/src/app.ts:/execute-tool.
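
The chunk sizing in step 5 can be sketched as follows. The 65535 figure is the wire-protocol cap on bind parameters in a single Bind message; the row cap here is an assumed illustrative stand-in for the body-size bound the real computeChunkSize derives from the SQL proxy's limit:

```typescript
// A multi-row INSERT consumes (rows x columns) bind parameters, so the
// chunk size is bounded by the PG parameter limit and a per-request cap.
const PG_MAX_BIND_PARAMS = 65535;

function computeChunkSizeSketch(numColumns: number, maxRowsPerChunk = 1000): number {
  const byParams = Math.floor(PG_MAX_BIND_PARAMS / Math.max(1, numColumns));
  return Math.max(1, Math.min(byParams, maxRowsPerChunk));
}
```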

client_result is unverified. A buggy or malicious client could claim a successful import without actually inserting rows. This is acceptable because:

  • The user can only affect their own workspace under their RLS — there’s no privilege escalation.
  • PostgreSQL is the source of truth for what’s actually in the table. The AI’s narrative is informed by the client’s report but the next user turn happens against fresh schema (SCHEMA_CHANGING_TOOLS includes import_csv, so stats are refreshed).
  • Server-side mapping validation would have to run after the insert (rows already committed via the SQL proxy), so it can’t actually prevent any bad outcome — only describe it. PG errors at INSERT time are authoritative and flow back to the AI through failed_chunk.error.

To add another client-executed tool, follow the same pattern:
  1. Define the tool in apps/chat/src/tools.ts with args that carry only metadata (no bulk payload).
  2. Add a stub handler in apps/chat/src/execute-tools/<name>.ts that throws (defensive — should never run if a client_result is supplied).
  3. Register in TOOL_HANDLERS and add the tool name to CLIENT_EXECUTED_TOOLS.
  4. Build the widget; it should call executeTool(..., {client_result}) for both success and failure paths.
  5. Add to SCHEMA_CHANGING_TOOLS in apps/chat/src/pipelines/streaming-loop.ts if the side effect changes data the prompt summarizes.
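
The defensive stub in step 2 might look like this (the function name and message are illustrative; the point is that a client-executed tool's server handler throws loudly if it is ever reached):

```typescript
// Should never run: the /execute-tool branch for CLIENT_EXECUTED_TOOLS
// records the supplied client_result instead of invoking the handler.
function importCsvStub(): never {
  throw new Error(
    "import_csv is client-executed; a client_result must be supplied",
  );
}
```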

Layer                 Technology
Runtime               Node.js 22+
Language              TypeScript
Server                Express
Database              PostgreSQL 14+
Auth                  Custom (JWT + bcrypt)
Frontend              React 19, React Router 7, TanStack Query 5
Styling               Tailwind CSS 4, shadcn/ui, Radix UI
In-browser compiler   Sucrase
Build                 Vite 6
Testing               Vitest + Supertest
Package manager       pnpm