# Extending Alchemify
Alchemify is designed to be extended. The “ownable code” philosophy means you can modify anything — there’s no black box.
## Adding a new table

### Using `sys.create_table()` (recommended)

An admin can create business tables at runtime — no DDL files to edit:
```sql
SELECT sys.create_table('orders', '[
  {"name": "customer_name", "type": "text", "required": true},
  {"name": "total", "type": "currency"}
]'::jsonb);
```

This creates the table, its `_v` view, RLS policies, grants, audit trigger, and metadata row in one transaction. The table appears in the sidebar with full CRUD immediately.

No API routes to write. No ORM models to update. No frontend code to change. The schema drives everything.
### Manual DDL (for custom needs)

If you need custom RLS policies, foreign keys, or other features:

- Define the table in `apps/server/schema.sql`.
- Add a `_v` view, RLS policies, grants, and an audit trigger.
- Re-apply: `psql -U postgres -d alchemify -f apps/server/schema.sql`.
## System tables

System tables go in `sys` with a view in `public`:

- Define the table in the `sys` schema (e.g., `CREATE TABLE IF NOT EXISTS sys.my_table ...`).
- Create a `security_barrier` view in `public` with role-based `WHERE` filtering.
- Grant appropriate permissions on the view (not the underlying table).
- System views don't appear in the CRUD sidebar (no `write_target`). Business `_v` views do appear — the frontend uses `write_target` to route writes to the base table.
## Form validation

`CrudForm` validates field values on submit using `validateField(col, raw, displayType?)` in `apps/web/src/lib/validate.ts`. If a `displayType` is provided (from `sys.column_metadata`), semantic validation runs first:

| `displayType` | Check |
|---|---|
| `email` | Regex: `user@host.tld` format |
| `url` | `new URL()` try/catch |
| `phone` | Regex: digits, spaces, parens, hyphens, optional `+` prefix |
| `color` | Regex: `#rrggbb` hex format |
| `percent` | Float, 0–100 |
| `rating` | Integer, 1–5 |
Other display types (`currency`, `multiline`, etc.) fall through to `data_type` validation:

| `data_type` | Check |
|---|---|
| `integer`, `bigint`, `smallint` | Round-trip `parseInt` — rejects decimals, scientific notation, trailing text |
| `numeric`, `real`, `double precision` | `parseFloat` — rejects non-numeric strings |
| `uuid` | Regex for standard UUID format |
| `json`, `jsonb` | `JSON.parse` in try/catch |
Empty values are skipped (handled separately by the required-field check).
To add validation for a new type, add a branch in `validateField()`. Tests are in `apps/web/src/lib/validate.test.ts`.
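To illustrate, here is a standalone sketch of the branching shape. The real `validateField(col, raw, displayType?)` also takes the column descriptor; the `ipaddress` display type and its check below are hypothetical, purely to show where a new branch would slot in:

```typescript
// Hypothetical sketch — NOT the real validateField from
// apps/web/src/lib/validate.ts. It only illustrates the shape of
// adding a new display-type branch ("ipaddress" is invented here).
type ValidationResult = string | null; // null = valid, string = error message

function validateField(raw: string, displayType?: string): ValidationResult {
  // Empty values are skipped — the required-field check handles them separately
  if (raw.trim() === '') return null;

  switch (displayType) {
    case 'ipaddress': {
      // Four dot-separated octets, each 0–255
      const parts = raw.split('.');
      if (parts.length !== 4) return 'Invalid IP address';
      const ok = parts.every((p) => /^\d{1,3}$/.test(p) && Number(p) <= 255);
      return ok ? null : 'Invalid IP address';
    }
    default:
      return null; // unknown display types fall through to data_type validation
  }
}
```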
## Display types and formatting

The CRUD UI consumes `display_type` from `sys.column_metadata` for widget selection, validation, and display formatting. The known display types are defined in `KNOWN_DISPLAY_TYPES` in `schema.ts`:

`text`, `multiline`, `email`, `url`, `phone`, `color`, `integer`, `currency`, `percent`, `rating`, `boolean`, `choice`, `date`, `datetime`, `uuid`, `file`, `image`, `jsonb`
The getDisplayType(table, col, schema, dataType?) helper returns the display type only if it’s in this whitelist — unknown values are ignored. When no explicit metadata exists and the column’s Postgres data_type is boolean, it automatically returns "boolean" (no column_metadata entry needed). The FormattedValue component (apps/web/src/components/FormattedValue.tsx) renders values with display-type-aware formatting in detail and list views — booleans show a green check icon for true and a gray X icon for false (null renders as blank).
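The whitelist-plus-boolean-fallback logic can be sketched as follows. This is a simplified shape — the real `getDisplayType` takes `(table, col, schema, dataType?)` and reads metadata from the schema object; here the metadata value is passed in directly:

```typescript
// Simplified sketch of the whitelist + boolean fallback described above.
// The real helper lives in schema.ts and resolves the metadata value
// from sys.column_metadata itself; here it's passed in directly.
const KNOWN_DISPLAY_TYPES = new Set([
  'text', 'multiline', 'email', 'url', 'phone', 'color', 'integer',
  'currency', 'percent', 'rating', 'boolean', 'choice', 'date',
  'datetime', 'uuid', 'file', 'image', 'jsonb',
]);

function getDisplayType(
  metaDisplayType: string | undefined, // value from sys.column_metadata
  dataType?: string                    // the column's Postgres data_type
): string | undefined {
  // Explicit metadata wins, but only if it's a known display type
  if (metaDisplayType && KNOWN_DISPLAY_TYPES.has(metaDisplayType)) {
    return metaDisplayType;
  }
  // No (valid) metadata: Postgres booleans render as boolean automatically
  if (dataType === 'boolean') return 'boolean';
  return undefined; // unknown values are ignored
}
```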
## Column labels

Column headers and field labels use `fieldLabel()` from `apps/web/src/lib/schema.ts`. System columns get human-friendly names (`id` → "ID", `created_at` → "Created on", `updated_at` → "Last updated"); all other columns fall through to `prettyLabel()`, which converts snake_case to Title Case.
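A minimal sketch of this two-tier fallback (the real `fieldLabel` / `prettyLabel` live in `apps/web/src/lib/schema.ts` and may differ in detail):

```typescript
// Sketch of the labeling fallback described above — system columns get
// fixed human-friendly names, everything else gets snake_case → Title Case.
const SYSTEM_LABELS: Record<string, string> = {
  id: 'ID',
  created_at: 'Created on',
  updated_at: 'Last updated',
};

function prettyLabel(column: string): string {
  return column
    .split('_')
    .map((w) => w.charAt(0).toUpperCase() + w.slice(1))
    .join(' ');
}

function fieldLabel(column: string): string {
  return SYSTEM_LABELS[column] ?? prettyLabel(column);
}
```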
## Seeding test data

The Builder AI can generate sample data for any table via the `propose_seed_data` tool. When the user asks for test/sample data:

- The AI calls `propose_seed_data` with just `table_name` and an optional `count` (1–20, default 3).
- The server generates realistic rows via a focused AI call with `response_format: json_object`.
- A preview widget shows the data in a read-only table.
- The user clicks "Insert" to confirm (or "Cancel" to reject).
- The server validates the table and columns against `get_schema()`, then runs a parameterized multi-row `INSERT`.
- If a constraint error occurs (unique violation, FK violation, NOT NULL, check constraint), the server asks the AI to fix the data and retries (up to 3 attempts total).
- Non-constraint errors fail immediately without retry.
The seed data handler runs each `INSERT` attempt in its own short transaction (separate from the `withAuth` transaction) so that retries don't hold connections across OpenAI round-trips. All row values are passed as `$N` parameters — never interpolated into SQL.
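The parameterized multi-row `INSERT` can be sketched as follows. `buildSeedInsert` is a hypothetical name — the real handler's internals aren't shown here — and table/column identifiers are assumed to have been validated against `get_schema()` before this point. The key property: every value becomes a `$N` placeholder, never a string spliced into the SQL:

```typescript
// Hypothetical sketch of building a parameterized multi-row INSERT.
// Identifiers are assumed pre-validated against get_schema(); values
// are always $N bind parameters, never interpolated into the SQL text.
function buildSeedInsert(
  table: string,
  columns: string[],
  rows: unknown[][]
): { text: string; values: unknown[] } {
  const values: unknown[] = [];
  const tuples = rows.map((row) => {
    const placeholders = row.map((v) => {
      values.push(v);
      return `$${values.length}`; // $1, $2, ... across all rows
    });
    return `(${placeholders.join(', ')})`;
  });
  const text =
    `INSERT INTO "${table}" (${columns.map((c) => `"${c}"`).join(', ')}) ` +
    `VALUES ${tuples.join(', ')}`;
  return { text, values };
}
```

The `{ text, values }` pair is the shape `pg`'s `client.query(text, values)` expects.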
## Configuring FK display columns

When a table is created, `sys.create_table()` auto-detects which columns to show in FK autocomplete dropdowns (priority: `name` > `title` > `label` > `email` > first text column). This is stored in `sys.table_metadata` as `display_columns`.

To change display columns after table creation, users can ask the Builder AI (e.g., "show email instead of name for customers"). The `configure_fk_display` tool validates that the requested columns exist on the table and are short-text-shaped — either by semantic `display_type` (`text`, `email`, `url`, `phone`, `color`) on managed tables, or by raw Postgres type (`text`, `varchar`, `citext`, `character`) on legacy/imported tables that have no `display_type` set. Choice/boolean/numeric/date columns are rejected because they make poor FK labels. The tool then updates `sys.table_metadata`. You can also update it directly:
```sql
UPDATE sys.table_metadata
SET meta = jsonb_set(meta, '{display_columns}', '["email", "phone"]'::jsonb)
WHERE table_name = 'customers' AND schema_name = 'public';
```

## Adding a custom endpoint

If you need logic beyond what SQL can express, add a route in `apps/server/src/app.ts` (or in the relevant router module: `auth.ts`, `admin.ts`, `files.ts`):
```ts
app.post('/my-endpoint', async (req, res) => {
  const client = await pool.connect();
  try {
    await client.query('BEGIN');
    await client.query(`SET LOCAL ROLE 'staff'`);
    // your logic here — populate `result`
    await client.query('COMMIT');
    res.json({ result });
  } catch (error) {
    await client.query('ROLLBACK');
    res.status(500).json({ error: 'Something went wrong' });
  } finally {
    client.release();
  }
});
```

Follow the same transaction pattern: `BEGIN` → `SET LOCAL ROLE` → work → `COMMIT`.
## Adding a SQL function

For reusable logic, put it in a PostgreSQL function:

```sql
CREATE OR REPLACE FUNCTION my_function(arg text)
RETURNS jsonb
LANGUAGE plpgsql
SECURITY INVOKER  -- runs as the caller's role, respects RLS
AS $$
BEGIN
  -- your logic
END;
$$;
```

Use `SECURITY INVOKER` (the default) for functions that should respect RLS. Use `SECURITY DEFINER` only for privileged operations like auth — and put those in the `sys` schema with `SET search_path = sys, public`.
## Writing a page

Pages are TSX stored in the `sys.pages` table (exposed via a view in `public`). Each row has:

| Column | Default | Purpose |
|---|---|---|
| `slug` | — | URL slug (unique, lowercase alphanumeric + hyphens), e.g. `dashboard`. Pages are served at `/page/<slug>`. |
| `title` | — | Display name (shown in sidebar) |
| `source` | — | TSX source code |
| `role` | `'staff'` | Minimum role to view: `owner`, `admin`, `staff`, `member`, or `anon` |
| `show_in_nav` | `true` | Whether the page appears in the sidebar |
| `status` | `'draft'` | `draft` or `published` — non-admin users only see published pages |
| `published_at` | `NULL` | Timestamp of last publish |
| `published_by` | `NULL` | User ID that published the page |
### Platform pages (`.tsx` files)

Platform pages shipped with Alchemify are stored as standalone `.tsx` files with comment frontmatter:

```tsx
// @slug settings
// @title Settings
// @role admin
// @nav true

import { Page, PageHeader } from "@alchemify/layout";
// ... component code
```

These live in `apps/server/schema/pages/`. Running `seed-pages.sh` generates the SQL `INSERT` statements that `schema.sql` includes via `\ir`. See Database & Schema for details.

To add a new platform page, create a `.tsx` file in the appropriate directory with the four required frontmatter keys (`slug`, `title`, `role`, `nav`), then re-run `seed-pages.sh`.
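The comment frontmatter is simple enough that a few lines can extract it. A hypothetical sketch — the actual extraction is done by `seed-pages.sh` and may work differently:

```typescript
// Hypothetical sketch of reading the four frontmatter keys from a
// platform page's leading comments. seed-pages.sh does the real
// extraction; this only illustrates the format described above.
function parseFrontmatter(source: string): Record<string, string> {
  const meta: Record<string, string> = {};
  for (const line of source.split('\n')) {
    const m = line.match(/^\/\/\s*@(slug|title|role|nav)\s+(.+)$/);
    if (!m) break; // frontmatter is a contiguous leading comment block
    meta[m[1]] = m[2].trim();
  }
  return meta;
}
```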
### Runtime pages (SQL insert)

Pages can also be inserted at runtime by an admin:

```sql
INSERT INTO pages (slug, title, source, role) VALUES (
  'dashboard',
  'Dashboard',
  'import { Page, PageHeader } from ''@alchemify/layout'';

export default function Dashboard() {
  return <Page><PageHeader title="Dashboard" /></Page>;
}',
  'staff'
);
```

To create a page that's accessible by URL but hidden from the sidebar:

```sql
INSERT INTO pages (slug, title, source, show_in_nav) VALUES (
  'reports-detail', 'Report Detail', '...', false
);
```

The view controls who sees what — owner/admin sees all pages (draft and published), staff sees published staff + member + anon pages, member sees published member + anon pages, anon sees only published anon pages. Pages with `show_in_nav = true` appear in the sidebar between Builder and the entity links. Page changes made via the builder/chat/admin mutation path propagate to open browser tabs through app-layer invalidation (React Query + BroadcastChannel); external SQL writes require a browser refresh to be picked up.
### Draft/published workflow

Pages are created with `status = 'draft'` by default. Non-admin users (staff, member, anon) only see published pages. Owner and admin see all pages regardless of status.

When the Builder AI updates a page's source code, the page is automatically reset to draft status. This ensures that AI-generated changes are reviewed before going live.

To publish a page:

```sql
SELECT * FROM sys.publish_page('dashboard');
```

To unpublish (revert to draft):

```sql
SELECT * FROM sys.unpublish_page('dashboard');
```

Both functions are restricted to the owner and admin roles. Platform seed pages (from `seed-pages.sh`) are inserted as published.
### Convention

Export a default function component. It receives no props — use hooks for data and context:

```tsx
import { Page, PageHeader, Loading } from '@alchemify/layout';
import { useSql } from '@alchemify/data';
import { sql } from '@alchemify/sql';
import { DataTable } from '@alchemify/ui';

export default function Orders() {
  const { data, isLoading, error } = useSql(sql`SELECT * FROM orders`);

  if (isLoading) return <Loading />;
  if (error) return <div>Error: {error.message}</div>;

  return (
    <Page>
      <PageHeader title="Orders" description="All orders" />
      <DataTable columns={['id', 'customer', 'total', 'status']} data={data} />
    </Page>
  );
}
```

For a single-table read (base table or `_v` view), pass `{ tableName }` as the second arg so choice columns get `__label` / `__color` auto-enriched from `sys.column_metadata`:
```tsx
const { data } = useSql(
  sql`SELECT * FROM orders_v WHERE id = ${id}`,
  { tableName: 'orders' }
);
```

This is a single-table hint — skip it for joined or aggregated queries.

When populating a form that uses `<input type="date">` or `<input type="datetime-local">`, also pass `{ rawDates: true }` so `date` / `timestamp` columns arrive as raw ISO values the inputs can parse and round-trip. Without it, default display formatting produces strings like `"Jun 15, 2025"` that the inputs reject.

```tsx
const { data } = useSql(
  sql`SELECT * FROM orders_v WHERE id = ${id}`,
  { tableName: 'orders', rawDates: true }
);
```

For custom query keys or caching behavior, fall back to `useQuery` + `exec`:
```tsx
import { useQuery, exec } from '@alchemify/data';
import { sql } from '@alchemify/sql';

const { data } = useQuery({
  queryKey: ['my-custom-key'],
  queryFn: () => exec(sql`SELECT * FROM orders`),
});
```

### Available modules

| Package | Exports |
|---|---|
| `@alchemify/ui` | `Button`, `Card` (+ sub-components), `Input`, `Badge`, `Table` (+ sub-components), `Tabs` (+ sub-components), `Tooltip` (+ sub-components), `DataTable`, `Label`, `Checkbox`, `Textarea`, `AlertDialog` (+ sub-components), `FileField` |
| `@alchemify/data` | `useSql`, `useQuery`, `useMutation`, `useQueryClient`, `exec`, `rawSql` |
| `@alchemify/sql` | `sql`, `joinSQL`, `raw` |
| `@alchemify/auth` | `useAuth` |
| `@alchemify/nav` | `useNavigate`, `useParams`, `useSearchParams`, `Link` |
| `@alchemify/layout` | `Page`, `PageHeader`, `Stack`, `Grid`, `Loading`, `ErrorBoundary` |
| `@alchemify/util` | `cn`, `formatDate`, `toast`, `groupBy`, `nest` |
The canonical list of `@alchemify/ui` exports lives in `packages/types/src/ui-components.ts`. The chat service reads this registry at runtime to build the AI system prompt, so the AI only suggests components that actually exist. When adding or removing UI components, update this registry alongside `apps/web/src/modules/ui.ts` — a test in `packages/types` automatically verifies they stay in sync.
For details on the compilation pipeline and module registry, see Frontend Architecture.
## CSV import (`import_csv`) and the client-executed tool pattern

The `import_csv` builder tool follows a different flow from every other tool — the side effect (chunked `INSERT`s) runs in the browser, not in the chat service. This pattern is general (it could apply to other large-data tools later) and is worth understanding before adding similar tools.

### Why client-executed

CSV files can be up to 50,000 rows / 20 MB. Sending the row payload through the chat service inflates body sizes and forces a chunk protocol on `/execute-tool`. The simpler approach: parse the CSV client-side, send only headers + a 10-row sample to the AI as context, and run the inserts directly through the SQL proxy (`POST /sql`) under the user's role + RLS — the same path regular CRUD inserts use.
### The flow

1. User drops a `.csv` in `ChatInput`. `parseCsv` (Papa Parse, in `apps/web/src/lib/csv-import.ts`) validates headers (no duplicates, no blanks) and stores the parsed rows in a module-level `Map` keyed by a random attachment id (`apps/web/src/lib/csv-attachment.ts`). The attachment is not persisted — a page reload loses it.
2. On send, `ChatInput` prepends a delimited block to the user's text:

   ```
   <csv_attachment_<nonce> id="..." filename="..." total_rows="N">
   {"headers": [...], "sample_rows": [...10]}
   </csv_attachment_<nonce>>
   <user text>
   ```

   The boundary uses a random per-attachment nonce so a future format change (or attempts at injection inside cell values) can't accidentally close the block. The system prompt (`apps/chat/prompts/reference-tables.md`, CSV import section) instructs the AI to treat the contents as untrusted data, never instructions.
3. The AI emits `import_csv` with `{table_name, attachment_id, csv_filename, total_rows, column_mapping}`. No row data flows through the AI.
4. `CsvImportWidget` (`apps/web/src/components/builder/widgets/`) looks up the parsed rows by `attachment_id`, runs full-rows type coercion against the live schema, validates required-column coverage, and shows a mapping editor + preview.
5. On confirm: build chunked parameterized `INSERT` statements (`buildInsertSql`, dynamic chunk size from `computeChunkSize` — bounded by the PG bind-parameter limit and the SQL proxy's body size), call `rawSql` per chunk, and track progress.
6. Once done (success or first chunk failure), the widget calls `executeTool(convId, msgId, true, undefined, {client_result: {result | error}})`. The chat service skips its `TOOL_HANDLERS[import_csv]` entry entirely and records the supplied `client_result` verbatim — see `CLIENT_EXECUTED_TOOLS` in `apps/chat/src/execute-tools.ts` and the corresponding branch in `apps/chat/src/app.ts:/execute-tool`.
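The chunk-size bound in step 5 can be sketched as follows. The real `computeChunkSize` lives in `apps/web/src/lib/csv-import.ts`; the body-size cap and average-row-size heuristic below are illustrative assumptions, not the actual constants:

```typescript
// Illustrative sketch of the chunk-size computation: a chunk must satisfy
// both PostgreSQL's bind-parameter limit (65535 $N parameters per
// statement) and the SQL proxy's request body cap. The 1 MB cap and the
// avg-row-bytes heuristic are assumptions, not the real implementation.
const PG_MAX_BIND_PARAMS = 65535;
const MAX_BODY_BYTES = 1_000_000; // assumed proxy body limit

function computeChunkSize(columnCount: number, avgRowBytes: number): number {
  const byParams = Math.floor(PG_MAX_BIND_PARAMS / columnCount);
  const byBody = Math.floor(MAX_BODY_BYTES / Math.max(avgRowBytes, 1));
  return Math.max(1, Math.min(byParams, byBody)); // never below 1 row
}
```

Taking the minimum of both bounds means wide tables are limited by parameter count while tables with large text cells are limited by body size.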
### What the chat service trusts

`client_result` is unverified. A buggy or malicious client could claim a successful import without actually inserting rows. This is acceptable because:

- The user can only affect their own workspace under their RLS — there's no privilege escalation.
- PostgreSQL is the source of truth for what's actually in the table. The AI's narrative is informed by the client's report, but the next user turn happens against fresh schema (`SCHEMA_CHANGING_TOOLS` includes `import_csv`, so stats are refreshed).
- Server-side mapping validation would have to run after the insert (the rows are already committed via the SQL proxy), so it can't actually prevent any bad outcome — only describe it. PG errors at `INSERT` time are authoritative and flow back to the AI through `failed_chunk.error`.
### Adding another client-executed tool

1. Define the tool in `apps/chat/src/tools.ts` with args that carry only metadata (no bulk payload).
2. Add a stub handler in `apps/chat/src/execute-tools/<name>.ts` that throws (defensive — it should never run if a `client_result` is supplied).
3. Register it in `TOOL_HANDLERS` and add the tool name to `CLIENT_EXECUTED_TOOLS`.
4. Build the widget; it should call `executeTool(..., {client_result})` for both success and failure paths.
5. Add the tool to `SCHEMA_CHANGING_TOOLS` in `apps/chat/src/pipelines/streaming-loop.ts` if the side effect changes data the prompt summarizes.
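The dispatch branch that makes this work can be sketched as follows. The real logic lives in `apps/chat/src/app.ts` (`/execute-tool`) and `apps/chat/src/execute-tools.ts`; the function shape here is an assumption (real handlers are async, and the result is recorded to the conversation rather than returned):

```typescript
// Simplified sketch of dispatching client-executed vs. server-executed
// tools. Real handlers are async and results are persisted, not returned;
// this only illustrates the skip-the-handler branch described above.
const CLIENT_EXECUTED_TOOLS = new Set(['import_csv']);

type ToolHandler = (args: unknown) => unknown;

function runTool(
  name: string,
  args: unknown,
  handlers: Record<string, ToolHandler>,
  clientResult?: unknown
): unknown {
  if (CLIENT_EXECUTED_TOOLS.has(name)) {
    // The server-side handler is skipped entirely; the client's report
    // is recorded verbatim (and is therefore unverified).
    if (clientResult === undefined) {
      throw new Error(`${name} is client-executed; client_result is required`);
    }
    return clientResult;
  }
  return handlers[name](args); // normal server-executed path
}
```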
## Tech stack reference

| Layer | Technology |
|---|---|
| Runtime | Node.js 22+ |
| Language | TypeScript |
| Server | Express |
| Database | PostgreSQL 14+ |
| Auth | Custom (JWT + bcrypt) |
| Frontend | React 19, React Router 7, TanStack Query 5 |
| Styling | Tailwind CSS 4, shadcn/ui, Radix UI |
| In-browser compiler | Sucrase |
| Build | Vite 6 |
| Testing | Vitest + Supertest |
| Package manager | pnpm |