# Quickstart

Get from zero to your first embedding migration in under 5 minutes.
## 1. Install

```bash
pip install schift
```

## 2. Get an API key
Create a free account at [app.schift.io](/app) and generate an API key from the Dashboard. Keys use the format `sch_xxx`.
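Avoid hard-coding the key in source files. A common convention is to export it as an environment variable; the variable name `SCHIFT_API_KEY` below is our suggestion, not an official Schift requirement:

```shell
# Keep the key out of source control; the variable name is a convention we chose.
export SCHIFT_API_KEY="sch_your_key_here"
```

In Python you can then read it back with `os.environ["SCHIFT_API_KEY"]` and pass it to the client constructor instead of a literal string.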
## 3. First embed call

```python
from schift import Schift

client = Schift(api_key="sch_your_key_here")

# Embed a single text
vec = client.embed(
    "quarterly revenue report",
    model="openai/text-embedding-3-large",
)
print(vec.shape)  # (3072,)
```

## 4. First migration (5-minute tutorial)
The key insight: Schift learns a projection matrix between two embedding spaces. You only need ~1,000 paired samples; the rest of your vectors can be migrated without re-embedding any raw text.
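To build intuition for what fitting a projection means, here is a toy sketch using ordinary least squares in pure NumPy. The dimensions are shrunk and the solver is illustrative only; it is not Schift's actual training procedure:

```python
import numpy as np

# Toy stand-ins for the two embedding spaces (real dims would be 1536 -> 3072).
rng = np.random.default_rng(0)
n_samples, d_src, d_tgt = 1000, 16, 32

# Simulate paired embeddings: a ground-truth linear map plus a little noise.
W_true = rng.normal(size=(d_src, d_tgt))
source_emb = source = rng.normal(size=(n_samples, d_src))
target_emb = source_emb @ W_true + 0.01 * rng.normal(size=(n_samples, d_tgt))

# Fit the projection matrix W by least squares: minimize ||S @ W - T||_F.
W, *_ = np.linalg.lstsq(source_emb, target_emb, rcond=None)

# Project source-space vectors into the target space.
projected = source_emb @ W
print(projected.shape)  # (1000, 32)
```

Because the map is linear, applying it to a million stored vectors is a single matrix multiply per batch, which is why no raw text is needed for the bulk migration.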
```python
import numpy as np
from schift import Schift
from schift.adapters import PgVectorAdapter

client = Schift(api_key="sch_your_key_here")

# Step 1: Embed a small sample with both models
# (you need ~1,000 representative documents)
sample_texts = load_sample_docs()             # your 1,000 docs
source_emb = embed_with_ada002(sample_texts)  # np.ndarray (1000, 1536)
target_emb = embed_with_3_large(sample_texts) # np.ndarray (1000, 3072)

# Step 2: Fit the projection matrix on the paired sample
proj = client.migrate.fit(
    source="openai/text-embedding-ada-002",
    target="openai/text-embedding-3-large",
    source_embeddings=source_emb,
    target_embeddings=target_emb,
    sample_ratio=0.1,
)
print(proj["id"])  # proj_a1b2c3d4e5f6

# Step 3: Benchmark quality before migrating
report = client.bench.run(
    source="openai/text-embedding-ada-002",
    target="openai/text-embedding-3-large",
    data="./eval_queries.jsonl",
)
print(report.verdict)   # "SAFE"
print(report.recovery)  # 0.962

# Step 4: Migrate all vectors between the two tables
source = PgVectorAdapter(dsn="postgresql://...", table="embeddings_old")
sink = PgVectorAdapter(dsn="postgresql://...", table="embeddings_new")

result = client.migrate.run(
    projection=proj["id"],
    source=source,
    sink=sink,
    on_progress=lambda r: print(f"{r['progress']:.0%}"),
)

print(f"Migrated {result['total_vectors']:,} vectors")
# Migrated 1,234,567 vectors ✓
```
Typical migration: 1M vectors in ~5 minutes. No downtime. No re-embedding cost.
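The `report.recovery` score can be read as an aggregate similarity between projected vectors and ground-truth target embeddings. A plausible stand-in for such a metric (an assumption for illustration; Schift's exact benchmark formula is not documented here) is mean cosine similarity:

```python
import numpy as np

def recovery_score(projected: np.ndarray, target: np.ndarray) -> float:
    """Mean row-wise cosine similarity between projected and true target embeddings."""
    p = projected / np.linalg.norm(projected, axis=1, keepdims=True)
    t = target / np.linalg.norm(target, axis=1, keepdims=True)
    return float(np.mean(np.sum(p * t, axis=1)))

# Identical inputs score 1.0; a real benchmark gates on a threshold like 0.962.
emb = np.random.default_rng(1).normal(size=(100, 8))
print(round(recovery_score(emb, emb), 6))  # 1.0
```

A held-out evaluation set (like `eval_queries.jsonl` above) matters here: scoring on the same samples used for fitting would overstate recovery.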