ufoaz · 8mo ago

Locking mechanism in custom Action (getOrCreate)?

When implementing an Action to getOrCreate a particular doc in a table, we'd like to make sure that the doc, which has a unique location_id column, doesn't get created twice. However, we don't currently have a schema defined. Is there a way to prevent duplicate docs when this function is called in quick succession, without defining a schema for all our tables and specifying the location_id column as unique? We are using an action here because the create involves a 3rd-party API request.
import { v } from "convex/values";
import { action, internalMutation, internalQuery } from "./_generated/server";
import { internal } from "./_generated/api";

export const getOrCreate = action({
  args: {
    location_id: v.string(),
  },
  handler: async (ctx, { location_id }) => {
    // Return the existing doc if we already have one for this location_id.
    const location: null | any = await ctx.runQuery(internal.locations.get, {
      location_id,
    });

    if (location) {
      return location;
    }

    // Otherwise fetch from the 3rd-party API (helper defined elsewhere) and store it.
    const combinedObj = await fetchLocationWithPhotos(location_id);
    await ctx.runMutation(internal.locations.create, {
      combinedObj,
      location_id,
    });

    return {
      ...combinedObj,
      location_id,
    };
  },
});

export const get = internalQuery({
  args: {
    location_id: v.string(),
  },
  handler: async (ctx, { location_id }) => {
    return await ctx.db
      .query("locations")
      .filter((q) => q.eq(q.field("location_id"), location_id))
      .unique();
  },
});

export const create = internalMutation({
  args: {
    location_id: v.string(),
    combinedObj: v.any(),
  },
  handler: async (ctx, { location_id, combinedObj }) => {
    return await ctx.db.insert("locations", {
      location_id,
      ...combinedObj,
    });
  },
});
6 Replies
jamwt · 8mo ago
hey there. Mutations are ACID transactions, so there is no possibility of a data race as long as you double-check that the document doesn't exist in the final mutation where you write it. You don't need a schema (and index) for that, because you can just do await ctx.db.query("locations").filter(...).unique() to see if the table already contains a document with that location_id right before you insert it. If the filter(...).unique() call returns non-null, then the document is already there and you shouldn't insert. Having said that, if there are a lot of these documents, you're going to want an index to look things up by location_id eventually, because scanning through the whole table (which is what filter does) gets to be pretty slow once you have more than a few hundred documents, and an index definition does require a schema.
ufoaz (OP) · 8mo ago
I did try this:
export const create = internalMutation({
  args: {
    location_id: v.string(),
    combinedObj: v.any(),
  },
  handler: async (ctx, { location_id, combinedObj }) => {
    // Check-then-insert inside the mutation: mutations are transactional,
    // so this read and the insert below happen atomically.
    const location = await ctx.db
      .query("locations")
      .filter((q) => q.eq(q.field("location_id"), location_id))
      .unique();
    if (location) {
      return location;
    }
    return await ctx.db.insert("locations", {
      location_id,
      ...combinedObj,
    });
  },
});
However, then I get this error:
Error: [CONVEX A(locations:getOrCreate)] [Request ID: 0b58f445eec01b4f] Server Error
Uncaught Error: Documents read from or written to the "locations" table changed while this mutation was being run and on every subsequent retry. Another call to this mutation changed the document with ID "jn7583h56b0az3g8szkn49mjw96t8d22". See https://docs.convex.dev/error#1
at async handler (../../convex/locations.ts:25:53)

Called by client
Fetching like this client side:
const [location, setLocation] = useState<any | null>(null);

const getOrCreate = useAction(api.locations.getOrCreate);
useEffect(() => {
  const fetchLocation = async () => {
    const result = await getOrCreate({ location_id: locationId });
    setLocation(result);
  };

  fetchLocation();
}, [getOrCreate, locationId]);
Hmm, yeah, seems we will need a schema sooner rather than later. If we had a schema, would the best bet be a try/catch, because the location_id column would be unique? Then it would error out if the doc already exists, and we wouldn't need to check whether the location exists?
jamwt · 8mo ago
Yeah, basically, the system gave up trying to get a conflict-free mutation to happen because of the amount of contention this is causing. I'm guessing you now have several things in this table, at least. It's time to add an index. That will make all those errors go away and speed up your app, all sorts of good stuff. When you're reading the whole table, you're maximizing the chance that you cannot cleanly commit your transaction, because Convex thinks your transaction may depend on any value in that table changing. Using an index (instead of just filter) makes it so Convex knows you only care about that one location_id, and then you won't have those issues anymore.
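
For reference, a minimal sketch of the index-based lookup described above, assuming an index named by_location_id has been defined on the locations table (a schema sketch appears after ian's reply below):

export const get = internalQuery({
  args: {
    location_id: v.string(),
  },
  handler: async (ctx, { location_id }) => {
    // withIndex reads only the rows matching this location_id, so the
    // transaction no longer depends on the rest of the table.
    return await ctx.db
      .query("locations")
      .withIndex("by_location_id", (q) => q.eq("location_id", location_id))
      .unique();
  },
});

The same withIndex call would also replace the filter in the create mutation's existence check.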
ufoaz (OP) · 8mo ago
ah ok that makes sense. will add a schema and index! appreciate it
ian · 8mo ago
FYI, you can add a schema and index without turning on schema validation or specifying all fields or tables. Check the docs for how.
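
A minimal sketch of such a schema, assuming the table should keep accepting its current ad-hoc fields: only location_id is declared (it's needed for the index), and schema validation is left off so the remaining fields don't have to be listed.

// convex/schema.ts
import { defineSchema, defineTable } from "convex/server";
import { v } from "convex/values";

export default defineSchema(
  {
    locations: defineTable({
      location_id: v.string(),
    }).index("by_location_id", ["location_id"]),
  },
  // With validation off, the schema still provides types and the index,
  // but documents with extra, undeclared fields are still accepted.
  { schemaValidation: false }
);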
ufoaz (OP) · 8mo ago
Cool, thanks! Yeah, just added a schema and index for the location_id column and it works!
