artelus
artelus3mo ago

now i'm hitting this error constantly:
waiting for local backend to start...
✖ Local backend did not start on port 3210 within 10 seconds.
✖ Hit an error while running local deployment.
Your error has been reported to our team, and we'll be working on it.
To opt out, run `npx convex disable-local-deployments`. Then re-run your original command.
Vexoa
Vexoa3mo ago
Hey, let's move it here
import { query } from "./_generated/server";

export const listFiles = query({
  args: {},
  handler: async (ctx) => {
    // Fetch all file metadata from the _storage system table
    return await ctx.db.system.query("_storage").collect();
  },
});
Try this
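If you're calling it from React, usage would look something like this (a sketch assuming the query above lives in convex/files.ts):

import { useQuery } from "convex/react";
import { api } from "../convex/_generated/api";

function FileList() {
  // Each _storage document includes _id, _creationTime, size, sha256, and contentType
  const files = useQuery(api.files.listFiles);
  if (files === undefined) return <p>Loading...</p>;
  return (
    <ul>
      {files.map((file) => (
        <li key={file._id}>
          {file._id}: {file.size} bytes ({file.contentType})
        </li>
      ))}
    </ul>
  );
}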
artelus
artelusOP3mo ago
ah thanks! i see this is in the docs under file-storage/file-metadata, which i missed. managed to get my local deployment working again by deleting the entry in .env.local and reauthenticating. main frustration is that i have 900 zip files, each with ~10k files representing countless entities, and i was using workflows to process them all. i was getting tons of failed workflow runs due to the limitations of workflows, so i switched to workpools. i needed to use node because some of the files were too big to fit in the javascript runtime's memory, but then hit another issue: if a node action enqueues another node action in a workpool, the local server crashes. that's how i got into that weird state where the local server wouldn't run until i wiped the ~/.convex directory
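for reference, the shape that crashed the local backend looked roughly like this (a minimal sketch; the file, function, and component names here are hypothetical):

// convex/outer.ts (hypothetical) - a node action that enqueues another node action
"use node";
import { v } from "convex/values";
import { internalAction } from "./_generated/server";
import { internal, components } from "./_generated/api";
import { Workpool } from "@convex-dev/workpool";

const pool = new Workpool(components.nodePool, { maxParallelism: 5 });

export const outer = internalAction({
  args: { zipFileId: v.string() },
  handler: async (ctx, args) => {
    // enqueueing a second "use node" action from inside a node action is
    // the step that crashed the local backend in this setup
    await pool.enqueueAction(ctx, internal.inner.inner, { zipFileId: args.zipFileId });
  },
});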
Vexoa
Vexoa3mo ago
Ahh, that is a unique use case. Will have a look into it and see what I can find out. Have you tried batch processing using a workpool and node actions? I managed to get the Convex docs to generate some mockup code. Not tried it, but give it a go:
// convex/processZipFiles.ts
import { v } from "convex/values";
import { mutation } from "./_generated/server";
import { components, internal } from "./_generated/api";
import { Workpool } from "@convex-dev/workpool";

const zipPool = new Workpool(components.zipWorkpool, { maxParallelism: 5 });

export const enqueueZipProcessing = mutation({
  args: { zipFileIds: v.array(v.string()) },
  handler: async (ctx, args) => {
    // Fan the zip files out to the workpool so only a few run at once
    for (const zipFileId of args.zipFileIds) {
      await zipPool.enqueueAction(ctx, internal.zip.processZipFile.processZipFile, {
        zipFileId,
      });
    }
  },
});

// convex/zip/processZipFile.ts
"use node";
import { v } from "convex/values";
import { internalAction } from "../_generated/server";
import { internal } from "../_generated/api";
import { unzipAndProcessInBatches } from "./utils"; // your unzip logic

export const processZipFile = internalAction({
  args: { zipFileId: v.string() },
  handler: async (ctx, args) => {
    // Stream and process the zip file in batches to avoid memory/time limits
    for await (const batch of unzipAndProcessInBatches(args.zipFileId, 1000)) {
      await ctx.runMutation(internal.zip.saveBatch.saveBatch, { batch });
    }
  },
});

// convex/zip/saveBatch.ts
import { v } from "convex/values";
import { internalMutation } from "../_generated/server";

export const saveBatch = internalMutation({
  args: { batch: v.array(v.any()) },
  handler: async (ctx, args) => {
    // Save batch to DB
    for (const entity of args.batch) {
      await ctx.db.insert("entities", entity);
    }
  },
});
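Note you'd also need to register the workpool component in convex/convex.config.ts for components.zipWorkpool to exist; per the workpool component's setup docs that's roughly (the "zipWorkpool" name must match the reference above):

// convex/convex.config.ts
import { defineApp } from "convex/server";
import workpool from "@convex-dev/workpool/convex.config";

const app = defineApp();
app.use(workpool, { name: "zipWorkpool" });
export default app;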
You are limited on functionality with Convex here, but the team might be able to raise those limits for you
