Sara
Sara • 3mo ago

I'm kinda curious, are you trying to migrate to Convex, or use Convex and Postgres together? If you're doing the first, I can probably help you with it (because I'm bored and curious). For the second, there are a few things you can do, like installing a webhook so that whenever something happens across your API you can make calls on it; maybe something like Supabase can help you with that
17 Replies
ulysses
ulysses • 3mo ago
hey @Sara, apologies, just saw your message in the channel. I am doing the first, from Postgres to Convex. Currently I've migrated most routes/internal db calls to Convex but have trouble with: 1. The leaderboard component: it seems like I have to restructure my data, as Convex doesn't do my (overloaded, to be fair) subqueries. 2. My Postgres tables have an associated id per row, and there are some FK constraints that I need to shift to Convex's _id column. Let me know what further information/context you need!
Sara
SaraOP • 3mo ago
do you have a public repository you'd be willing to share?
ulysses
ulysses • 3mo ago
only have a private one, but I can share the schemas/internal functions as text/files
Sara
SaraOP • 3mo ago
mmm, if you have time, paste the leaderboard tables (and their connections to other tables). If you're trying to migrate items in tables, I suggest using the migration component; it does backfilling for you and is quite easy. But I can help with writing the query for you 😅 (curiosity killed the cat)
ulysses
ulysses • 3mo ago
yeah, let me paste the leaderboard route/query for you, it's very scrappy 😅
ulysses
ulysses • 3mo ago
we used Drizzle for the schema definitions:
export const keySubscriptionEvent = pgTable('key_subscription_event', {
id: text('id').primaryKey(),
msgSender: text('msg_sender'),
creator: text('creator').notNull(),
subscriber: text('subscriber').notNull(),
keyAmount: numeric('key_amount', { precision: 78 }).notNull(),
isSubscribe: boolean('is_subscribe').notNull(),
hypeAmount: numeric('hype_amount', { precision: 78 }).notNull(),
protocolHypeAmount: numeric('protocol_hype_amount', {
precision: 78
}).notNull(),
subscriberAmount: numeric('subscriber_amount', { precision: 78 }).notNull(),
supply: numeric('supply', { precision: 78 }).notNull(),
timestamp: integer('timestamp').notNull(),
transactionHash: text('transaction_hash').notNull(),
blockNumber: integer('block_number').notNull(),
createdAt: timestamp('created_at', { withTimezone: true })
.notNull()
.defaultNow(),
updatedAt: timestamp('updated_at', { withTimezone: true })
.notNull()
.defaultNow()
})
export const fameTable = pgTable(
'fame',
{
id: serial('id').primaryKey(),
fromUserId: integer('from_user_id')
.references(() => usersTable.id, { onDelete: 'cascade' })
.notNull(),
toUserId: integer('to_user_id')
.references(() => usersTable.id, { onDelete: 'cascade' })
.notNull(),
isPositive: boolean('is_positive').notNull(),
createdAt: timestamp('created_at', { withTimezone: true }).notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).notNull()
},
(table) => ({
uniqueDailyFame: uniqueIndex('unique_daily_fame_idx').on(
table.fromUserId,
table.toUserId,
sql`date(${table.createdAt} AT TIME ZONE 'UTC')`
)
})
)
export const userActivityDatesTable = pgTable(
'user_activity_dates',
{
id: serial('id').primaryKey(),
userId: integer('user_id')
.references(() => usersTable.id, { onDelete: 'cascade' })
.notNull(),
date: date('date').notNull()
},
(table) => ({
uniqueUserDate: uniqueIndex('unique_user_date_idx').on(
table.userId,
sql`date(${table.date})`
)
})
)
https://discord.com/channels/1019350475847499849/1385157075851673681 I've migrated the fame component now (thread here). I used aggregates with sumValue over the is_positive boolean column. Would the best way to migrate the relational id links over to _id be to run the migration bottom-up and replace on matching ids?
export const chatGroupsTable = pgTable(
'chat_groups',
{
id: serial('id').primaryKey(),
createdAt: timestamp('created_at', { withTimezone: true }).notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).notNull(),
name: text('name').notNull(),
description: text('description'),
avatarUrl: text('avatar_url'),
isGroup: boolean('is_group'),
ownerId: integer('owner_id')
.references(() => usersTable.id, { onDelete: 'cascade' })
.notNull()
},
(table) => ({
// unique constraint where each user only has 1 non-group chat
singleNonGroupChat: uniqueIndex('single_non_group_chat_idx')
.on(table.ownerId)
.where(sql`is_group = false`)
})
)

export const chatGroupsRelations = relations(
chatGroupsTable,
({ many, one }) => ({
usersToChatGroups: many(usersToChatGroups),
threads: many(threadsTable),
owner: one(usersTable, {
fields: [chatGroupsTable.ownerId],
references: [usersTable.id]
})
})
)
export const threadsTable = pgTable(
'threads',
{
id: serial('id').primaryKey(),
createdAt: timestamp('created_at', { withTimezone: true }).notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).notNull(),
authorId: integer('author_id')
.references(() => usersTable.id, { onDelete: 'cascade' })
.notNull(),
text: text('text'),
media: jsonb('media'),
chatGroupId: integer('chat_group_id').references(() => chatGroupsTable.id, {
onDelete: 'cascade'
}),
deletedAt: timestamp('deleted_at', { withTimezone: true })
},
(table) => ({
textOrMediaCheck: check(
'threads_text_or_media_check',
sql`text IS NOT NULL OR media IS NOT NULL`
)
})
)

export const threadsRelations = relations(threadsTable, ({ one, many }) => ({
messages: many(messagesTable),
// this relation follows chatGroupId, so chatGroup is the accurate name;
// an author relation would map authorId to usersTable instead
chatGroup: one(chatGroupsTable, {
fields: [threadsTable.chatGroupId],
references: [chatGroupsTable.id]
})
}))
export const messagesTable = pgTable(
'messages',
{
id: serial('id').primaryKey(),
createdAt: timestamp('created_at', { withTimezone: true }).notNull(),
updatedAt: timestamp('updated_at', { withTimezone: true }).notNull(),
deletedAt: timestamp('deleted_at', { withTimezone: true }),
authorId: integer('author_id')
.references(() => usersTable.id, {
onDelete: 'cascade'
})
.notNull(),
text: text('text'),
media: jsonb('media'),
replyToMessageId: integer('reply_to_message_id').references(
(): AnyPgColumn => messagesTable.id,
{ onDelete: 'cascade' }
),
threadId: integer('thread_id').references(() => threadsTable.id, {
onDelete: 'cascade'
})
},
(table) => ({
textOrMediaCheck: check(
'messages_text_or_media_check',
sql`text IS NOT NULL OR media IS NOT NULL`
)
})
)
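One wrinkle worth flagging in the schemas above: Postgres enforces the once-per-UTC-day fame rule with a partial unique index, but Convex indexes are not unique, so that invariant has to move into the mutation. A minimal sketch in plain TypeScript (a Set stands in for an indexed table lookup; all names are illustrative):

```typescript
// Convex has no unique indexes, so the "one fame per (from, to) per UTC day"
// rule must be checked in the mutation before inserting.
function utcDayKey(ms: number): string {
  return new Date(ms).toISOString().slice(0, 10); // e.g. "2024-06-01"
}

function tryGiveFame(
  seen: Set<string>, // stands in for an indexed lookup on (fromUserId, toUserId, day)
  fromUserId: string,
  toUserId: string,
  nowMs: number
): boolean {
  const key = `${fromUserId}|${toUserId}|${utcDayKey(nowMs)}`;
  if (seen.has(key)) return false; // already gave fame today
  seen.add(key);
  return true;
}
```

In a real mutation you'd store a precomputed day field on each fame doc and query an index over ["fromUserId", "toUserId", "day"] instead of the Set, doing the check and insert in the same mutation so it stays atomic.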
Sara
SaraOP • 3mo ago
I've just given bad advice: no, don't use the migration component to go from Postgres to Convex. You could use a combination of both with an ORM and pass the results to Convex (.findAll())
ulysses
ulysses • 3mo ago
ah, is there a Stack article/docs link for this? Also, doing this online would be quite difficult, I presume?
Sara
SaraOP • 3mo ago
+ if you don't have real user data (i.e., data you can just delete), start fresh from Convex. I'm not sure; I'm reading the code you sent me, so I'll be 10-20 min. Thank you for using Drizzle, btw
ulysses
ulysses • 3mo ago
unfortunately we do have live users' data, so we can't do that. I think at most we can pause the existing systems and have a short downtime on the live app, if that makes the migration much, much easier 😅 No worries! I'm stepping out for dinner right now, will try to respond asap. HAHA, on this, I was thinking another solution would be to maintain the links using the ids from Postgres, and then eventually run a migration while supporting the relational link via the id column. The first step would be to swap out the values in id for those in _id, and then swap the column out for _id
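That bottom-up swap can be sketched in plain TypeScript (a hypothetical helper, not a Convex API): as each referenced table is migrated, record oldId → new _id pairs, then rewrite each dependent row's foreign keys from that map.

```typescript
// Hypothetical helper for the bottom-up id swap: migrate referenced tables
// first, recording old Postgres id -> new Convex _id, then rewrite the
// foreign keys of dependent rows from that map.
function remapForeignKey(
  idMap: Map<number, string>, // old Postgres id -> new Convex _id
  oldId: number | null
): string | null {
  if (oldId === null) return null; // nullable FK passes through
  const newId = idMap.get(oldId);
  if (newId === undefined) {
    // parent row not migrated yet: migrate referenced tables first
    throw new Error(`no Convex _id recorded for old id ${oldId}`);
  }
  return newId;
}
```

Running bottom-up (users before fame/threads/messages) means every referenced id is already in the map when its dependents are rewritten; self-references like replyToMessageId need a second pass over the same table.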
Sara
SaraOP • 3mo ago
I got lost way too many times; the query is way too long, with too many SQL strings
export const keySubscriptionEvent = defineTable({
id: v.string(), // keep this because the default _id can't be edited, so you will index your table by this field (too)
msgSender: v.string(),
creator: v.string(),
subscriber: v.string(),
keyAmount: v.number(),
isSubscribe: v.boolean(),
hypeAmount: v.number(),
protocolHypeAmount: v.number(),
subscriberAmount: v.number(),
supply: v.number(),
timestamp: v.float64(), // in convex dates are represented by numbers
transactionHash: v.string(),
blockNumber: v.number(),
updatedAt: v.float64() // you don't need the createdAt, you can migrate data from the createdAt to this column
}).index('by_id', ["id"]) // you can add other indexes here that help you find items faster
.index("by_timestamp", ["timestamp"])
.index("by_creator", ["creator", "timestamp"]) // per-creator lookups (e.g. the leaderboard)

export const fame = defineTable({
fromUserId: v.id("users"), // assuming you have a users table
toUserId: v.id("users"),
isPositive: v.boolean(),
updatedAt: v.float64()
})
.index(
"unique_daily_fame",
["fromUserId", "toUserId", "_creationTime"]
);

export const userActivityDates = defineTable({
userId: v.id("users"),
date: v.float64(),
})
.index(
"unique_user_date",
["userId", "date"]
)

export const chatGroups = defineTable({
updatedAt: v.float64(),
name: v.string(),
description: v.optional(v.string()),
avatarUrl: v.optional(v.string()),
isGroup: v.optional(v.boolean()),
ownerId: v.id("users"),
}).index("by_owner", ["ownerId"]);

export const messages = defineTable({
updatedAt: v.float64(),
deletedAt: v.optional(v.number()),
authorId: v.id("users"),
text: v.optional(v.string()),
media: v.optional(v.any()),
replyToMessageId: v.optional(v.id("messages")),
threadId: v.optional(v.id("threads"))
})
.index("by_author", ["authorId"])
.index("by_thread", ["threadId"])
those are the tables you've shared, and part of the query:
export const topCreatorViewer = query({
args: { limit: v.optional(v.number()), sortBy: v.optional(literals('volume', 'fame', 'price', 'streak')), sortOrder: v.optional(literals("asc", "desc")) },
handler: async (ctx, args_0) => {

const limit = Math.min(args_0.limit ?? 50, 50); // cap at 50; Math.max here would set a floor, not a cap
const sortOrder = args_0.sortOrder ?? "desc";

if (args_0.sortBy && !['volume', 'streak', 'fame', 'price'].includes(args_0.sortBy)) {
throw new ConvexError('Invalid sortBy parameter. Allowed values: volume, fame, price, streak',)
}

if (sortOrder !== 'asc' && sortOrder !== 'desc') {
throw new ConvexError('Invalid sortOrder parameter. Allowed values: asc, desc')
}
const oneDayAgo = Date.now() - 24 * 60 * 60 * 1000;
// you could also paginate this
const keySubscriptionEventsWithinTheLastDay = await ctx.db.query("keySubscriptionEvent")
.withIndex("by_timestamp", (q) => q.gte("timestamp", oneDayAgo))
.collect()

// 24h volume sum; a plain reduce replaces the SQL-string aggregate
const volumeColSub = keySubscriptionEventsWithinTheLastDay?.reduce(
(sum, ev) => sum + (ev.hypeAmount ?? 0),
0
)
const users = await ctx.db.query("users").collect();
const volumeData = await Promise.all(
users.map(async (user) => {

const events = await filter(
ctx.db
.query("keySubscriptionEvent")
// assumes an index .index("by_creator", ["creator", "timestamp"]);
// by_timestamp is on ["timestamp"], so it can't be queried by creator
.withIndex("by_creator", (q) =>
q.eq("creator", user.address.toLowerCase())
),
(doc) => doc.timestamp >= oneDayAgo
).collect()

const volume = events.reduce(
(sum, ev) => sum + (ev.hypeAmount ?? 0),
0
);

return {
userId: user._id,
volume,
};
})
);
}
})
for relations, you usually index tables using .index(), and then use those indexes to find the rows you need
Sara
SaraOP • 3mo ago
this could help you with it: https://stack.convex.dev/functional-relationships-helpers and I like using filter and map from convex-helpers for Promise.all and filters: https://www.npmjs.com/package/convex-helpers
ulysses
ulysses • 3mo ago
thanks 🙏🏻 will take me a while to dive into your solution. I noticed quite a number of collect calls; I think that's unfeasible, especially on the users table? we would hit the limits quite easily
Sara
SaraOP • 3mo ago
you don't have to do that, you can use .take(10) instead; that's me being over-caffeinated
ulysses
ulysses • 3mo ago
oh hmm, but prior to the take(10) we need to compute the sorted list of either fame/streaks/volume. Apart from that, I need the list to be paginated as well (it's like a scrollable component). The SQL strings are there because the values are bigints and could run into problems if converted to JS numbers too early
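The sort-then-take step being described is cheap once per-user totals exist; a plain TypeScript sketch of it (Entry and the field names are illustrative, not the app's actual types):

```typescript
// Post-aggregation step for the leaderboard: sort per-user totals by the
// requested key, then slice the top N. In Convex you'd ideally precompute
// and index the sort key so .take()/pagination can use it directly instead
// of sorting inside the query.
type Entry = { userId: string; volume: number; fame: number };

function topN(
  entries: Entry[],
  sortBy: "volume" | "fame",
  order: "asc" | "desc",
  n: number
): Entry[] {
  const sorted = [...entries].sort((a, b) =>
    order === "desc" ? b[sortBy] - a[sortBy] : a[sortBy] - b[sortBy]
  );
  return sorted.slice(0, n);
}
```

For a scrollable, paginated list, the same idea applies: store the ranking key on the document and paginate over an index on it, rather than collecting everything and sorting per request.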
Sara
SaraOP • 3mo ago
there's a BigInt type in JS, the ecosystem is fast but not stupid 😜. I genuinely would consider moving away completely from using SQL like that. Pagination would make things simple; you don't have to worry about "limits" within the query itself, as that's already handled (afaik). You could take a look at different Stack posts; I'd play around with query streams from convex-helpers too https://www.npmjs.com/package/convex-helpers#example-1-paginate-all-messages-by-a-fixed-set-of-authors https://stack.convex.dev/complex-filters-in-convex
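To make the precision point concrete: JS Number is exact only up to 2^53, so summing numeric(78) values as Number can silently drop units, while BigInt stays exact. A small illustration:

```typescript
// numeric(78) columns arrive from Postgres as strings. Summing them as
// Number silently loses precision past 2^53; BigInt stays exact.
const amounts = ["9007199254740993", "1"]; // 2^53 + 1, then 1

const asNumber = amounts.reduce((sum, a) => sum + Number(a), 0);
const asBigInt = amounts.reduce((sum, a) => sum + BigInt(a), 0n);

// asNumber has already rounded 2^53 + 1 down to 2^53 before adding,
// and adding 1 changes nothing; asBigInt is the exact total.
```

Convex can store BigInt values via v.int64(), but that's 64 bits; genuinely 78-digit amounts would still need string storage, converted with BigInt() at the edges.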
ulysses
ulysses • 3mo ago
alright, thank you!
